Formative Assessment Strategies for Healthcare Educators

Formative assessments are lower-stakes assessments delivered during instruction, or 'along the way' so to speak. As an educator, I always found it a challenge to identify what my students actually understood, which skills they had acquired, and whether or how I should adjust my teaching strategy to improve their learning. I’m guessing I am not alone in this. In medical education, the pace is so fast that many instructors feel they cannot spare the time for assessments ‘along the way’ and would rather focus on teaching everything students need for the higher-stakes exams. Given how intense and fast-moving medical education is, this is completely understandable. Still, there is a reason so much research supports the effectiveness of administering formative assessments along the way.

One reason formative assessments prove so useful is that they provide meaningful, actionable feedback; feedback that can be used by both the instructor and the students.

Results from formative assessments should relate directly to the learning objectives established by the instructor, so the results provide trusted feedback for both the instructor and the student. This is incredibly important. For instructors, it allows immediate adjustments to their teaching strategy; for students, it helps them develop a more reliable self-awareness of their own learning. Each of these is useful on its own, but combined they can lead to improved student outcomes.

Here are 5 teaching strategies for delivering formative assessments that provide useful feedback opportunities.  

1. Pre-Assessment:

Provides an assessment of students' prior knowledge, helps identify prior misconceptions, and allows instructors to adjust their approach or target certain areas.

  • When instructors have feedback from student assessments prior to class, it is easier to tailor the lesson to student needs.
  • Posing questions prior to class can help students focus on what the instructor thinks is important.
  • By assessing students before class, it helps ensure students are more prepared for what learning will take place in class.
  • Pre-assessments can provide more in-class flexibility: knowing ahead of time which knowledge gaps students have allows the instructor to use class time more flexibly, with fewer ‘surprises’.

2. Frequent class assessments:

Provides students with feedback for learning during class and focuses their attention on important topics, which helps increase learning gains.

  • Adding more formative assessments during class increases students' retention of the material.
  • Frequent formative assessments help students stay focused by giving them natural ‘breaks’ from either a lecture or the activity.
  • Multiple formative assessments can provide students with a “road-map” to what the instructor feels is important (i.e. what will appear on summative assessments).
  • By using frequent assessments, the instructor can naturally help students with topic or content transitions during a lecture or activity.
  • The data and feedback from these assessments help instructors understand which instructional methods are most effective: in other words, what works and what doesn't (a minimal example of this kind of item analysis follows this list).
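
To make that last point concrete, here is a minimal, platform-agnostic sketch of how in-class quiz responses could be summarized to flag questions that much of the class missed. The response data, field names, and the 60% threshold are illustrative assumptions, not part of any particular tool.

```python
# Minimal sketch (illustrative data): flag quiz questions that many students
# missed, so the instructor knows which topics to revisit in the next class.
from collections import defaultdict

# Each response: (student_id, question_id, answered_correctly)
responses = [
    ("s1", "q1", True), ("s1", "q2", False), ("s1", "q3", True),
    ("s2", "q1", True), ("s2", "q2", False), ("s2", "q3", False),
    ("s3", "q1", False), ("s3", "q2", False), ("s3", "q3", True),
]

def correct_rates(responses):
    """Return the fraction of correct answers for each question (rounded)."""
    totals, correct = defaultdict(int), defaultdict(int)
    for _student, question, is_correct in responses:
        totals[question] += 1
        if is_correct:
            correct[question] += 1
    return {q: round(correct[q] / totals[q], 2) for q in totals}

def flag_for_review(rates, threshold=0.6):
    """Questions answered correctly by fewer than `threshold` of students."""
    return sorted(q for q, rate in rates.items() if rate < threshold)

rates = correct_rates(responses)
print(rates)                  # {'q1': 0.67, 'q2': 0.0, 'q3': 0.67}
print(flag_for_review(rates)) # ['q2'] -> revisit the topic behind q2
```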

3. Guided Study assessments (group or tutorial):

Provides students with opportunities to acquire the information needed to complete the assessment, for example through research or group work, and increases students' self-awareness of their own knowledge gaps.

  • Assessments in which students are expected to engage in research allow them to develop and use higher-level thinking skills.
  • Guided assessments engage students in active learning, either independently or through collaboration with a group.
  • Small group assessments encourage students to articulate their thinking and reasoning, and help them develop self-awareness about what they do and do not yet understand.
  • Tutorial assessments can provide the instructor with real-time feedback on student misconceptions and overall understanding, allowing them to make important decisions about how to teach particular topics.

4. Take-Home assessments:

Allows students to preview the instructor's assessment style, gives them a low-stakes, self-paced way to engage with the material, and provides the instructor with formative feedback.

  • Assessments that students can engage in outside of class give them a ‘preview’ of the information they will likely need to retrieve again on a summative exam.
  • When students take an assessment at home, the instructor can receive feedback with enough time to adjust classroom instruction to address knowledge gaps or misconceptions.
  • Take-home assessments can help students develop self-awareness of their own misunderstandings or knowledge gaps.

5. “Bedside” observation:

Informs students in clinical settings of their level of competence and learning, and may improve motivation and participation in clinical activities.

  • Real-time formative assessments can provide students with critical feedback related to the skills that are necessary for practicing medicine.
  • On-the-fly assessments can help clinical instructors learn more about student understanding and identify changes they might make in their instruction.
  • Formative assessments in a clinical setting can equip clinical instructors with a valuable tool to help them make informed decisions around their teaching and student learning.
  • Bedside assessments provide a standardized way of formatively assessing students in a very unpredictable learning environment.

The challenge for many instructors is often the “how” of delivering formative assessments. Thankfully, educational technology can make delivering formative assessments (and the resulting feedback) much easier. DaVinci Education’s Leo platform provides multiple ways in which you can deliver formative assessments. With Leo’s exam feature you can:

  • Assign pre-class, in-class or take-home quizzes
  • Deliver IRATs used during TBL exercises to assess individual student readiness
  • Deliver GRATs used during TBL exercises, using Leo’s digital scratch-off tool to encourage collaboration and assess group readiness (a minimal scoring sketch follows this list)
  • Monitor student performance in real-time using Leo’s Monitor Exam feature
  • Customize student feedback options during or following an assessment
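
For readers unfamiliar with how a scratch-off (answer-until-correct) readiness test is typically scored, here is a minimal sketch of the idea. The decreasing point values follow a common IF-AT-style convention and are an illustrative assumption, not a description of Leo's actual scoring scheme.

```python
# Minimal sketch of answer-until-correct ("scratch-off") scoring, as commonly
# used for GRATs in team-based learning. The 4/2/1/0 point values follow a
# common IF-AT-style convention; they are an illustrative assumption.
CREDIT_BY_ATTEMPT = {1: 4, 2: 2, 3: 1}  # a fourth or later attempt earns 0

def grat_question_score(attempts_until_correct):
    """Points earned on one question, given how many scratches the team needed."""
    return CREDIT_BY_ATTEMPT.get(attempts_until_correct, 0)

def grat_total(attempts_per_question):
    """Total team score across all questions."""
    return sum(grat_question_score(a) for a in attempts_per_question)

# Example: a team needs 1, 3, and 2 scratches on three questions.
print(grat_total([1, 3, 2]))  # 4 + 1 + 2 = 7
```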

Formative and Summative Assessments

Assessment allows both instructor and student to monitor progress towards achieving learning objectives, and can be approached in a variety of ways. Formative assessment refers to tools that identify misconceptions, struggles, and learning gaps along the way and assess how to close those gaps. It includes effective tools for helping to shape learning, and can even bolster students’ abilities to take ownership of their learning when they understand that the goal is to improve learning, not apply final marks (Trumbull and Lash, 2013). It can include students assessing themselves, peers, or even the instructor, through writing, quizzes, conversation, and more. In short, formative assessment occurs throughout a class or course, and seeks to improve student achievement of learning objectives through approaches that can support specific student needs (Theall and Franklin, 2010, p. 151).

In contrast, summative assessments evaluate student learning, knowledge, proficiency, or success at the conclusion of an instructional period, like a unit, course, or program. Summative assessments are almost always formally graded and often heavily weighted (though they do not need to be). Summative assessment can be used to great effect in conjunction and alignment with formative assessment, and instructors can consider a variety of ways to combine these approaches. 

Examples of Formative and Summative Assessments

Both forms of assessment can vary across several dimensions (Trumbull and Lash, 2013): 

  • Informal / formal
  • Immediate / delayed feedback
  • Embedded in lesson plan / stand-alone
  • Spontaneous / planned
  • Individual / group
  • Verbal / nonverbal
  • Oral / written
  • Graded / ungraded
  • Open-ended response / closed/constrained response
  • Teacher initiated/controlled / student initiated/controlled
  • Teacher and student(s) / peers
  • Process-oriented / product-oriented
  • Brief / extended
  • Scaffolded (teacher supported) / independently performed 

Recommendations

Formative Assessment

Ideally, formative assessment strategies improve teaching and learning simultaneously. Instructors can help students grow as learners by actively encouraging them to self-assess their own skills and knowledge retention, and by giving clear instructions and feedback. Seven principles (adapted from Nicol and Macfarlane-Dick, 2006, with additions) can guide instructor strategies:

  • Keep clear criteria for what defines good performance - Instructors can explain criteria for A-F graded papers, and encourage student discussion and reflection about these criteria (this can be accomplished through office hours, rubrics, post-grade peer review, or exam/assignment wrappers). Instructors may also hold class-wide conversations on performance criteria at strategic moments throughout a term.
  • Encourage students’ self-reflection - Instructors can ask students to utilize course criteria to evaluate their own or a peer’s work, and to share what kinds of feedback they find most valuable. In addition, instructors can ask students to describe the qualities of their best work, either through writing or group discussion.
  • Give students detailed, actionable feedback - Instructors can consistently provide specific feedback tied to predefined criteria, with opportunities to revise or apply feedback before final submission. Feedback may be corrective and forward-looking, rather than just evaluative. Examples include comments on multiple paper drafts, criterion discussions during 1-on-1 conferences, and regular online quizzes.
  • Encourage teacher and peer dialogue around learning - Instructors can invite students to discuss the formative learning process together. This practice primarily revolves around mid-semester feedback and small group feedback sessions, where students reflect on the course and instructors respond to student concerns. Students can also identify examples of feedback comments they found useful and explain how they helped. As a particularly useful strategy, instructors can invite students to discuss learning goals and assignment criteria, and weave student hopes into the syllabus.
  • Promote positive motivational beliefs and self-esteem - Students will be more motivated and engaged when they are assured that an instructor cares for their development. Instructors can allow for rewrites/resubmissions to signal that an assignment is designed to promote development of learning. These rewrites might utilize low-stakes assessments, or even automated online testing that is anonymous, and (if appropriate) allows for unlimited resubmissions.
  • Provide opportunities to close the gap between current and desired performance - Related to the above, instructors can improve student motivation and engagement by making visible any opportunities to close gaps between current and desired performance. Examples include opportunities for resubmission, specific action points for writing or task-based assignments, and sharing study or process strategies that an instructor would use in order to succeed.  
  • Collect information which can be used to help shape teaching - Instructors can collect useful information from students in order to provide targeted feedback and instruction. Students can identify where they are having difficulties, either on an assignment or test, or in written submissions. This approach also promotes metacognition, as students are asked to think about their own learning. Poorvu Center staff can also perform a classroom observation or conduct a small group feedback session that can provide instructors with insight into potential student struggles.

Instructors can find a variety of other formative assessment techniques in Angelo and Cross (1993), Classroom Assessment Techniques.

Summative Assessment

Because summative assessments are usually higher-stakes than formative assessments, it is especially important to ensure that the assessment aligns with the goals and expected outcomes of the instruction.

  • Use a Rubric or Table of Specifications - Instructors can use a rubric to lay out expected performance criteria for a range of grades. Rubrics will describe what an ideal assignment looks like, and “summarize” expected performance at the beginning of term, providing students with a trajectory and sense of completion. 
  • Design Clear, Effective Questions - If designing essay questions, instructors can ensure that questions meet criteria while allowing students freedom to express their knowledge creatively and in ways that honor how they digested, constructed, or mastered meaning. Instructors can also read about ways to design effective multiple-choice questions.
  • Assess Comprehensiveness - Effective summative assessments provide an opportunity for students to consider the totality of a course’s content, making broad connections, demonstrating synthesized skills, and exploring the deeper concepts that drive or underpin a course’s ideas and content.
  • Make Parameters Clear - When approaching a final assessment, instructors can ensure that parameters are well defined (length of assessment, depth of response, time and date, grading standards); knowledge assessed relates clearly to content covered in course; and students with disabilities are provided required space and support.
  • Consider Blind Grading - Instructors may wish to know whose work they grade, in order to provide feedback that speaks to a student’s term-long trajectory. If instructors wish to provide truly unbiased summative assessment, they can also consider a variety of blind grading techniques.

Considerations for Online Assessments

Effectively implementing assessments in an online teaching environment can be particularly challenging. The Poorvu Center shares these recommendations.

Nicol, D.J. and Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.

Theall, M. and Franklin, J.L. (2010). Assessing Teaching Practices and Effectiveness for Formative Purposes. In: A Guide to Faculty Development. K.J. Gillespie and D.L. Robertson (Eds). Jossey-Bass: San Francisco, CA.

Trumbull, E. and Lash, A. (2013). Understanding formative assessment: Insights from learning theory and measurement theory. San Francisco: WestEd.

Exploring the formal assessment discussions in clinical nursing education: An observational study

Ingunn Aase

1 SHARE- Centre for Resilience in Healthcare, Faculty of Health Sciences, University of Stavanger, Kjell Arholms gate 41, N-4036 Stavanger, Norway

Kristin Akerjordet

2 School of Psychology, Faculty of the Arts, Social Sciences & Humanities, University of Wollongong, Wollongong, NSW Australia

Patrick Crookes

3 School of Nursing, Midwifery and Public Health, University of Canberra, Canberra, Australia

Christina T. Frøiland

Kristin A. Laugaland

Associated Data

Original de-identified data of the study will be stored at the Norwegian Centre for Research Data subsequent to completion of the project. Original de-identified data can be requested from the corresponding author upon reasonable request.

Introduction

According to EU standards, 50% of the bachelor education program in nursing should take place in clinical learning environments. Consequently, this calls for high quality supervision, where appropriate assessment strategies are vital to optimize students’ learning, growth, and professional development. Despite this, little is known about the formal assessment discussions taking place in clinical nursing education.

The aim of this study was to explore the characteristics of the formal assessment discussions taking place during first-year students’ clinical education in nursing homes.

An exploratory qualitative study was performed. The data consist of passive participant observations of 24 assessment discussions (12 mid-term and 12 final assessments) with first-year nursing students ( n =12), their assigned registered nurse mentors ( n =12) and nurse educators ( n =5). The study was conducted in three public nursing homes in a single Norwegian municipality. Data were subjected to thematic analysis. The findings were reported using the Standards for Reporting of Qualitative Research.

Three themes were identified regarding the characteristics of the formal assessment discussions: (1) adverse variability in structuring, weighting of theoretical content and pedagogical approach; (2) limited three-part dialogue constrains feedback and reflection; and (3) restricted grounds for assessment leave the nurse educators with a dominant role.

These characteristics signal key areas of attention to improve formal assessment discussions and capitalize on unexploited learning opportunities.

This study focuses on formal assessment practice of nursing students in clinical education in nursing homes. Enabling nursing students to acquire professional competence through clinical education in a variety of healthcare settings is a cornerstone of contemporary nurse education programs [ 1 ]. According to EU standards, 50% of the bachelor education program should take place in clinical learning environments. Consequently, this calls for high-quality clinical supervision that includes appropriate assessment strategies, which are critical to optimize students’ learning, professional development, and personal growth [ 2 ].

Formal assessment of nursing students in clinical education serves two purposes: 1) to facilitate learning by enabling students to judge their own achievements more accurately and encourage their continuous learning process; and 2) to provide certification of achievements [ 3 ]. Accordingly, there are two approaches to assessment: formative assessment and summative assessment. Formative assessment is focused on the learning needs of each student, identifying areas in need of development, and providing feedback. Feedback is the central component of effective formative assessment. A summative assessment is a summary of a student’s achievements and judgement as to whether he/she has met the required learning outcomes [ 4 , 5 ]. Student clinical placements are often assessed on a pass-fail basis, not by a letter grade [ 6 ].

The predominant clinical education model applied in nursing homes involves students being mentored and assessed by an RN and followed up by a nurse teacher [ 7 ]. The formal assessment during clinical education involves a partnership model in which nursing students, their assigned Registered Nurse (RN) mentors and nurse educators cooperate and share responsibility for facilitating and confirming the students’ achievement of expected learning outcomes [ 8 ]. However, substantial variations in assessment practices internationally and nationally have been reported, suggesting that the assessment practices of nursing students in clinical education lack consistency [ 2 , 3 , 9 , 10 ]. Consequently, a variety of tools for assessing students’ clinical competence exist, and these tools depend on different definitions of clinical competence and the components to be assessed, such as knowledge, technical care skills, attitudes, behaviours, clinical judgment, and critical thinking [ 11 – 13 ]. Several international researchers have argued that reliable and comparable assessment of nursing students’ knowledge and skills would benefit greatly from the development and consistent use of national competency assessment tools [ 2 , 8 , 14 , 15 ]. In their discussion paper, Gjevjon et al. [ 16 ] highlighted the importance of assessing the students’ progression in clinical skills and ensuring that their nursing competence is in line with official requirements and professional expectations. This implies that student assessments cannot be limited to a single clinical placement period.

Stakeholders have reported challenges with the assessment of students’ competence in clinical nursing education [ 2 , 17 – 19 ]. RN mentors report that they have trouble providing feedback and assessing student competence because of the absence of clear guidelines and assessment criteria [ 17 , 20 ]. RN mentors also experience having a passive and peripheral role during formal assessment discussions [ 18 , 21 ]. Conversely, nursing students report feeling insecure due to power disparities in the assessment discussions; students perceive their RN mentors as much more powerful than they are [ 22 ]. Moreover, students report that the personal chemistry between them and their assigned RN mentor could lead to differential treatment [ 22 ]. In comparison, nurse educators report that it is challenging to make the students’ competence and learning processes visible, and to ensure fair and equitable assessment of students in clinical placement/education (e.g., [ 23 ]). Difficulties in understanding the concepts used to describe the student learning outcomes, the language used in the assessment tools, limited mentor competence in assessment and restricted academic-clinical collaboration have also been reported in the literature (e.g., [ 2 , 24 – 26 ]). A systematic review of assessment of student competence found that the use of a valid and reliable assessment tool, with clear criteria and continued education and support for mentors is critical to quality in learning opportunities [ 3 ].

Formative assessment, with feedback as a key aspect, is arguably one of the most important factors in terms of students’ learning, personal growth, and professional development in discussions of clinical assessment [ 24 , 27 ]. A multilevel descriptive study [ 20 ] concluded that students often do not receive sufficient constructive feedback during formal assessment discussions. This is of major concern since clinical learning is considered a signature pedagogy in the preparation of nursing students for real-world practice (e.g., [ 2 , 28 ]). For workplace learning to be optimized, nursing students need to explore the complexity of the patient experience and be able to discuss and evaluate patient care with others inside the clinical environment and in an academic setting (e.g., [ 29 , 30 ]). The focus of assessment is to aid nursing students’ continuous learning process, which requires constructive feedback and opportunities for reflection between nursing student, RN mentor, and nurse educator [ 3 ]. Formal assessment offers a potential opportunity to optimize nursing students’ learning outcomes by extending and transforming their professional knowledge dialectically. The explicit focus of assessment is therefore very important, as students tend to concentrate on achieving the required competencies which they are aware will be assessed [ 2 ].

Internationally, emerging evidence shows that summative assessment of nursing students’ competence is a matter of concern across countries and educational institutions, as previously stressed, due to a) lack of consistency in the use of methods and tools, b) its openness to subjective bias, and c) the fact that the quality of assessment varies greatly [ 2 , 3 ]. According to Helminen et al. [ 2 ], there are few studies of summative assessment, and as far as we know, no studies have explored the characteristics of these formal assessment discussions by observing what actually goes on in the three-part dialogue between nursing students, RN mentors and nurse educators. There is therefore a need to further increase our knowledge and understanding of the characteristics of assessment discussions and how these discussions can enhance students’ learning (e.g., [ 2 , 20 , 21 ]). To fill this knowledge gap, the aim of this study was to explore the characteristics of the formal assessment discussions that take place during first-year students’ clinical education in a nursing home using observation. This is considered a novel methodological approach to exploring this field in nursing education.

The study applied a qualitative, exploratory design using passive observation [ 31 ] to explore the characteristics of the mid-term and final assessment discussions. Such observations allow the researcher to be present and identifiable, but the researcher does not participate or interact with the people being observed [ 31 ]. Observational research is, as previously stressed, a novel way to study assessment discussions, as most studies have applied interview methods retrospectively as a source for collecting data about assessment discussions [ 3 , 20 ]. Observational research offers a rich, in-depth approach which, in contrast to interviews, allows the researcher to identify context-specific issues of importance, to learn what is taken for granted in a situation and to discover what is happening by watching and listening to arrive at new knowledge [ 31 ]. The observations of the formal assessment discussions were distributed among three of the researchers (IA, CF, KL), who individually carried out passive observations using a structured guide to ensure rigor [ 32 ]. The Standards for Reporting Qualitative Research (SRQR) were used.

The context of the observed assessment practices

In this study, nursing home placements are part of eight weeks of mandatory clinical education during nursing students’ first academic year. In the observational study, a preceptorship model was applied in which students are mentored by an RN and followed up by a nurse educator [ 7 ]. In Norway, mentorship is an integral part of an RN’s work. This implies that the mentor does not receive financial compensation or require formal training in mentorship (e.g., at master level). The RN mentors are employed by the nursing homes and the nurse educators included are employed by the university. The nurse educators are responsible for coordinating the students’ learning, organizing the assessment discussions, and assessing the nursing students in clinical placement.

The clinical education system for the students in this study comprises two formal assessment discussions: the mid-term discussion, with a summative assessment and a formative assessment, and the final assessment discussion, where the whole period is encapsulated in a summative assessment [ 20 ]. The mid-term and final summative assessments take the form of a three-part dialogue among the nursing student, the RN mentor, and the nurse educator at the placement site. Prior to the assessment discussions the nursing student must write an evaluation of his or her learning. This written self-assessment must be sent to the nurse educator and RN mentor two days before the assessment discussion. There is no requirement for written preparation or documentation from the RN mentors. The university assigns the student a pass or fail grade based on six competence areas (i.e., professional ethics and legal, cooperation, patient-centered nursing, pedagogical, management, and learning competence) with the accompanying learning outcomes. All six competence areas and accompanying learning outcomes, with a particular focus on fundamentals of care, were well known by the researchers (IA, CF, KL), serving as important pre-understanding for data collection and analysis. Beyond this, all the researchers had experience as nurse educators in the nursing home context and one of the researchers (CF) holds a Master of Science degree in gerontology.

Setting and sample

The study was conducted in three public nursing homes within the same municipality in Western Norway as part of a larger research project: “Aiming for quality in nursing home care: Rethinking clinical supervision and assessment of nursing students in clinical studies” [ 19 ]. The nursing homes varied in patient numbers and staffing but were highly motivated to participate in the research project, which was anchored in the top management team. Recruitment was based on a purposive, criterion-based sampling strategy [ 33 ] targeting the nursing students, RN mentors and nurse educators involved in assessment discussions. To make sure that the sample had the relevant knowledge and expertise, RNs with mentorship experience from nursing homes were included, ensuring diversity related to gender, age, and ethnicity. The nursing students and the nurse educators were recruited from the same university, representing a distribution in age, gender, healthcare experience, and academic experience (see Table 1).

Table 1. Characteristics of participants

Prior to data collection, approval was obtained from the university and from the nurse managers at the nursing homes enrolled in the study. In addition, an information meeting was held by the first and last author of this study, with the eligible nursing students during their pre-placement orientation week on campus, the RN mentors at the selected nursing home sites, and with the nurse educators responsible for overseeing nursing students on placement.

Invitations to participate in the study were then sent to eligible participants at the three public nursing homes. Nursing students were recruited first, before their assigned RN mentors and nurse educators were emailed an invitation to participate, to ensure the quality of the study sample. Two co-researchers working in two of the three enrolled nursing homes assisted in the recruitment of the RN mentors. A total of 45 allocated nursing students had their clinical placement at the three included public nursing homes. Of the 45 potential nursing students invited to participate, 12 consented. Their assigned RN mentors ( n =12) and nurse educators ( n =5) also agreed to participate. A summary of participant group characteristics is displayed in Table 1.

Four nursing students and four RN mentors were enrolled from each nursing home. As nurse educators are responsible for overseeing several students during placements, fewer nurse educators than students and RN mentors agreed to participate. All but one of the participants, an RN mentor, were women. Out of 29 participants, six (one nursing student, one nurse educator and four RN mentors) did not have Norwegian as their first language. None of the RNs had prior formal supervision competence, and their experience with mentoring students ranged from 1 to 7 years. Seven of the 12 nursing students had healthcare experience prior to their placement period. Three of the five nurse educators held a PhD and the other two were lecturers with a master’s degree. Two of the nurse educators were overseeing students on nursing home placement for the first or second time, while the other three had several years of experience with nursing student placements within nursing homes. None of the nurse educators had expertise in gerontological nursing.

Data collection

To allow first-hand experience of the formal assessment discussions, passive observation was used as the main source of data collection. The observations were carried out separately by three researchers (IA, CF, KL), all of whom are experienced qualitative researchers with a background in nursing, nursing education and nursing research. The researchers were all familiar faces from lectures at the university and, as previously emphasized, from work as nurse educators in the public nursing home settings. At the beginning of each assessment observation, time was taken to create a bond of trust between the participants to reduce contextual stress. The observations were based on a structured observation guide (see Attachment 1). The observation guide contained relatively broad predefined categories: structure, content, duration, interaction, dialogue, and feedback. These predefined categories were used to guide and support the recording and notetaking process, allowing space for spontaneous aspects of the formal assessment discussions [ 31 ]. The guide was based on the aim of the study and informed by the literature.

During the observations, each of the three researchers sat in a corner of the room (two to three meters away) to observe the interaction and listen unobtrusively to the assessment discussions, in order to reduce the students’ experience of contextual stress. Observational notes were taken discreetly, according to the structured guide, and combined descriptions and personal impressions [ 34 ]. Summaries, including reflective notes, were written in electronic format directly after the observations. The observations were conducted alongside the clinical placements in February and March 2019, and the discussions lasted about 60 minutes on average. The choice of passive observation using three researchers was made for both pragmatic and scientific reasons: a) to ensure that data were collected in a timely manner when assessment meetings coincided, and b) to verify the observed notes, supported by triangulation during analysis and the interpretation process [ 33 ].

Data Analysis

Braun and Clarke’s [ 35 ] approach to thematic analysis was used to analyze the observational notes and summary transcripts. The thematic analysis was guided by the aim of the study and followed the six steps described by Braun and Clarke [ 35 ]: (1) becoming familiar with the data; (2) generating initial codes; (3) searching for themes; (4) reviewing themes; (5) defining and naming themes; and (6) producing the report. The 93 pages of observational notes were read independently by three of the researchers (IA, CF, KL) to obtain an overall impression of the dataset (see Fig. 1: Analysis process).

Fig. 1. Analysis process

All the observational notes, from both mid-term and final assessment discussions, were then compared and analyzed as a single dataset with attention to similarities and differences by generating initial codes and searching for themes. The reason for merging the dataset was that the mid-term and final assessment discussions overlapped in terms of initial codes and emerging themes, contributing to a deeper understanding of the results. The researchers IA, CF and KL met several times to discuss the coding process and finalize the preliminary themes. All authors revised, defined, and identified the three key themes, reaching consensus. This means that all authors contributed to analytic integrity through rich discussions throughout the process of analysis, clarification, validation, and dissemination of results [ 36 ]. The study also provides a description of the context of the formal assessment discussions, the participants enrolled, the data collection and the analysis process, allowing readers to evaluate the authenticity and transferability of the research evidence [ 36 ].

Ethical considerations

The study was submitted to the Regional Committees for Research Ethics in Norway, which found that the study was not regulated by the Health Research Act, since no health or patient data were registered. The study adhered to general ethical principles laid down by the National Committee for Medical & Health Research Ethics in Norway. In addition, the principles of confidentiality, voluntary participation and informed consent were applied by following the World Medical Association’s Declaration of Helsinki. All participants gave their written informed consent and were informed about the right to withdraw from the study at any point. The nursing students were made aware that participation or non-participation would not affect other aspects of their clinical placement/education period or give them any advantages or disadvantages on their educational path. The study was approved by the Norwegian Centre for Research Data in two phases (Phase 1: NSD, ID 61309; Phase 2: NSD, ID 489776).

All methods were performed in accordance with relevant guidelines and regulations.

The analysis identified three themes that describe the characteristics of the formal assessment discussions taking place during first-year nursing students’ clinical education in nursing homes: (1) Adverse variability in structuring, weighting of theoretical content, and pedagogical approach, (2) Limited three-part dialogue constrains feedback and reflection, and (3) Restricted grounds for assessment leave the nurse educators with a dominant role. These themes are now presented.

Theme 1: Adverse variability in structuring, weighting of theoretical content and pedagogical approach

This theme illuminates adverse variability in the nurse educators’ structuring of the discussion, weighting of theoretical content (e.g., bridging of theory and practice), and the pedagogical approach applied across the assessment discussions. Some nurse educators went through each competence area and the accompanying learning outcomes, strictly following the sequence adopted in the assessment form. Others adopted a more flexible sequence of progression guided by the discussion. The latter approach led to frequent shifts in focus between competence areas; the observation notes described that students and RN mentors found it difficult to follow the discussion and the assessment process. For example, the observational notes illuminated that a student commented that she felt insecure because the nurse educator appeared to write notes pertaining to one competence area while talking about another one.

The data exposed variations in the nurse educators’ emphases and weighting of bridging theory and practice. While some nurse educators asked theoretical questions to gauge the students’ knowledge and to help students link theory and practice, other nurse educators did not. An absence of theoretical questions was observed in several assessment discussions. Additionally, the nurse educators’ knowledge of and reference to the course curriculum varied. Some nurse educators seemed to be familiar with the course curriculum and mentioned the theoretical subjects taught to the students pre-placement, while other nurse educators refrained from discussing theoretical issues. The weighting of geriatric nursing was limited in the assessment discussions.

The nurse educators varied in their pedagogical approach across the assessment discussions. Some nurse educators asked open questions inviting the students to self-evaluate and reflect on their own performance and development before approaching and inviting the RN mentor to provide input. Other nurse educators adopted a more confirmative approach, asking closed-ended questions, and reading aloud from the student’s written self-assessment before asking the RN mentors to just confirm the nurse educators’ impressions and evaluations with “yes” or “no” answers.

Theme 2: Limited three-part dialogue constrains feedback and reflection

The second theme illuminates limited participation of all three parties in the formal assessment discussions. This was observed as constraining feedback and reflection. Possible impediments to the dialogue were language barriers, interruptions, the preparedness of students and RN mentors for the discussions, the justification for assessing the students, and the students’ reported level of stress. There were variations in the way both nurse educators and RN mentors conveyed their feedback to the students. Several of the RN mentors were observed to have assumed a passive, peripheral role, sitting on the sidelines and saying very little. When addressed by the nurse educator, the RN mentors tended to respond with yes/no answers.

Language barriers related to understanding of the learning outcomes to be assessed appeared to hamper the dialogue. On several occasions the RN mentors who did not have Norwegian as their first language expressed that it was difficult to fully understand the language used in the assessment form, so they did not know what was required during the assessment. We therefore observed that the nurse educators often took time to explain and “translate” the concepts used to describe the student learning outcomes to ensure mutual understanding.

Interruptions during the assessment discussions interfered with the dialogue. In several of the assessment discussions the RN mentors took phone calls that required them to leave the assessment discussions, sometimes for extended periods of time. This meant that the nurse educator later had to update the RN mentor on what had been covered in her/his absence.

Preparedness of students and RN mentors for the assessment discussion varied. Some students brought their self-assessment document with them to the meeting, others came empty-handed. Some but not all RN mentors brought a hard copy of the student’s self-assessment document. Some RN mentors admitted that they had been too busy to read the self-assessment before the meeting.

Based on body language, several students appeared nervous during the assessment discussions. The observational notes showed that some students confirmed this self-perceived stress later in the discussion by mentioning that they had had dreams or nightmares about these assessment discussions. The observational notes also illuminated that some students expressed during the assessment discussion that they did not know what would be brought up in these discussions and that they were afraid of failing their clinical placement. The three-part dialogue focused to a great extent on whether the student’s competence achievements merited a pass or fail for the clinical placement, with less attention paid to giving the student opportunities to reflect in order to enhance learning.

Theme 3: Restricted grounds for assessment leave the nurse educators with a dominant role

Limited dialogue and engagement from students and RN mentors often left the nurse educators with a restricted basis for assessment and thus gave them a dominant role in the assessment discussions. RN mentors seemed to have insufficient information to assess their students’ performance, learning and development, stressing that they had not spent significant amounts of time observing them during the placement period. On several occasions RN mentors expressed that due to sick leave, administrative tasks, and alternating shifts, they had spent only a handful of workdays with their students. The observational notes revealed that some RN mentors expressed that they had to base their formal assessments of student performance mainly on these assessment discussions. Several RN mentors expressed frustration with limited nurse coverage, which could impede their mentorship and assessment practices and the amount of student follow-up during placement. Because of limited input from the RN mentors, the researchers observed that the nurse educators had to base their evaluations on students’ written self-assessments. Some nurse educators gave weight to the students’ capacity for self-evaluation. The quality, amount and content of the written self-assessment was observed to be influential in determining the students’ strengths and weaknesses and whether the students passed or failed their competence areas. Our observational notes showed that some students struggled to pass because their written self-assessments were not comprehensive enough. In the formal assessment document, there are fields for ‘strengths and areas of growth/improvement’. There were variations in how much was written beyond the mark for approved or not approved and pass or fail. This implies that some assessment discussions were marked by ticking a box rather than engaging in a reflective dialogue.

The findings of this exploratory observational study suggest that the formal assessment discussions for first-year nursing students are characterized by a lack of conformity, referred to as adverse variability, regarding structure, theoretical content and the pedagogical approach. The limited three-part dialogue appeared to constrain the feedback and critical reflection needed to enhance students’ clinical learning opportunities, leaving the nurse educators with a dominant role and a restricted ground for assessment. Increased awareness is therefore required to improve formal assessment discussions and capitalize on unexploited learning opportunities.

Unexploited learning opportunities

Formal assessment discussions are expected to optimize nursing students’ clinical learning outcomes by extending and transforming their professional knowledge. Our findings illuminate adverse variability in nurse educators’ pedagogical approach and the weighting of theoretical content during formal assessment, which may reduce the students’ ability to learn. This is of major concern since the assessment discussion should be clear and systematic, encouraging students’ continuous reflection and learning process. Both nursing students and nurse mentors seem to need more knowledge of what a formal assessment discussion consists of, to reduce unpredictability (e.g., by familiarizing themselves with the expected learning outcomes, the assessment criteria, and the context) and to optimize clinical learning. Regarding knowledge of and reference to theoretical teaching prior to clinical placements, nurse educators may fail to provide consistency for first-year students during formal assessment and in supporting them to bridge theory and practice, which is essential to their learning and professional development. These findings resonate with an integrative review which indicated that orientation programs, mentor support, clear role expectations, and ongoing feedback on performance are essential for academic organizations to retain excellent nursing faculty [ 37 ]. This highlights the importance of nurse educators' awareness and active involvement in assessment discussions [ 38 , 39 ] to provide academic support and guidance of students' theory-based assignments [ 40 ]. A critical question is whether the nurse mentors’ competence and active role have been sufficiently acknowledged in the formal assessment discussions, particularly since our research revealed that gerontological questions were hardly reflected upon to support students’ clinical learning in nursing homes.

Critical reflections in the assessment discussions were also limited, since some nurse educators in this study mostly did not ask for reflections from the students. This is of major concern since critical reflection is a pedagogical way to bridge theory with clinical experience and to tap into unexploited learning opportunities by strengthening the students’ reflection skills and knowledge in the assessment discussions [ 41 ]. Critical reflection in clinical settings and education is known to assist students in the acquisition of necessary skills and competencies [ 42 ].

Encouraging students to reflect on clinical learning experiences in context and having clearer guidelines for formal assessment discussions in nursing education programs may be both necessary and important for exploring unexploited learning opportunities. Our findings imply that education programs may increase students’ learning opportunities by decreasing the variability in structure, the weighting of theoretical content and the pedagogical approach applied in clinical education. Increased awareness of assessment, through a common understanding of how the assessment should be managed and what the assessment criteria are, is therefore considered important (e.g., [ 17 ]). Research is, however, needed to explore the relationship between nurse educators’ clinical expertise, competence and pedagogical skill set and students’ learning outcomes [ 26 ]. Our findings suggest that measures for preparing nurse educators with better theoretical knowledge and pedagogical approaches require further development, and that this could be done with, for example, online educational support. Further research should therefore explore and extend our understanding of the need for improvement in structure, theoretical content, and pedagogical approach in formal assessment discussions.

Hampered three-part dialogue

The study findings illuminate that a hampered three-part dialogue makes it hard to offer feedback and engage in reflection. This interfered with the learning potential of formal assessment in clinical education. Many factors affect the degree of interaction in the three-part dialogue. For example, our findings imply that RN mentors gave little feedback to the students in the assessment discussions because they had not spent a significant amount of time with students. A consequence of inadequate feedback in the assessment discussions may be limited clinical learning potential for the nursing students. Formative assessment is arguably one of the most important and influential factors for students’ learning, personal growth, and professional development in their clinical assessment discussions [ 27 ]. According to our findings, formative process assessment is not always used properly, even though it is highly recommended in the literature (e.g., [ 3 , 4 ]). Our study shows that an enhanced focus on formative assessment in the nursing education program, and further research using different methodologies, is needed to extend our knowledge of formative process assessment in relation to clinical learning. Overall, our findings indicate that there is room for improvement in the way that RN mentors participate in the assessment discussions. RN mentors and nurse educators need to increase their knowledge of how to give constructive and substantive feedback and how to encourage students to reflect critically on their learning through the formal assessment discussions. The findings also suggest that linguistic challenges associated with an internationally diverse workforce among RN mentors may constrain assessments of student competence. These findings are consistent with other studies, which concluded that understanding the language and meaning of the concepts used in the assessment document was difficult and might have resulted in a peripheral and passive role for the RN mentors [ 18 , 21 , 26 ]. These linguistic challenges and the marginalization of RN mentors may be further causes of the limited dialogue, requiring better pedagogical preparation for nursing students’ clinical placement in nursing homes.

The results from our study also illuminated that the students appeared nervous before and during the assessment discussions. This anxiety could make the discussions difficult and stressful for them. Other studies have mentioned students’ need for more predictability to reduce their stress, so they feel more secure [ 22 , 43 ]. Similar findings are described in the study of nursing students’ experiences with clinical placement in nursing homes, where Laugaland et al. [ 19 ] highlighted the vulnerability of being a first-year student. The students need to be informed about the purpose of the formal assessment discussions, so that they can prepare for them and consequently feel less anxious. This may give them a better basis for, and openness to, learning in the three-part dialogue. An enhanced focus on students’ stress in the assessment discussions and further research on the consequences of stress for learning in the assessment discussions are therefore required.

The hampered three-part dialogue indicated in our results often left the nurse educators with a restricted basis for assessing the nursing students, as well as a dominant role in the discussions. The cooperation between the nurse educators and the RN mentors was also limited, owing both to interruptions in the assessment discussions and to little input from the RN mentors, which is neither ideal nor desirable if the formal assessment discussions are to support students’ clinical learning; this, too, represents an unexploited learning opportunity. Wu et al. [ 10 ] describe clinical assessment as a robust activity which requires collaboration between clinical partners and academia to enhance the clinical experiences of students. Our research indicates a need for increased collaboration between educational programs and clinical placements in nursing homes. One way to increase collaboration and give the RN mentors a more active role may be to give them dedicated time and compensation for mentoring students, including access to academic courses in clinical mentoring. Another way to increase this collaboration may be for the leaders in the nursing homes to let the RN mentors prioritize the supervision of students during the clinical placement period. Future research should extend and explore measures to strengthen this collaboration without giving the nurse educators too much of a say in assessment discussions.

Methodology considerations

A strength of this study is the use of passive observation as a method for exploring and describing the formal assessment discussions, a novel approach that expands and deepens knowledge in this research field.

A methodological consideration is that, although the researchers acted as passive observers of an interaction, an observer’s presence is itself a significant part of that interaction. Participants might be uncomfortable and stressed by seeing a stranger silently sitting in the corner and taking notes [ 31 ]. On the other hand, the researchers were familiar faces to the nursing students, and some students acknowledged being comforted by the researchers’ presence and feeling more secure in the situation. A critical question, however, is whether the use of video recordings would have provided richer data. Video recordings capture interactions in the assessment discussions as they occur naturally, with few disturbances, and they allow for repeated viewing and detailed analysis [ 44 ]. The frequencies of questions and answers in the three-part dialogue might also have been counted, for example to confirm the observed dominant role of the nurse educator; this was discussed but judged not to be practical or feasible in this context.

Video recording with no researcher present might also have reduced the participants’ stress but could, on the other hand, have been perceived as more threatening.

The approach of collecting and analyzing the data with three researchers also warrants consideration. The data were collected separately by three researchers to ensure timely collection without interruption. A limitation may be that all three researchers had similar educational backgrounds and experience of formal assessment discussions in nursing homes; their preconceptions might have influenced the results. However, the researchers were not involved in the nursing students’ clinical placement period. To control for researcher bias, the data analysis applied triangulation: two of the authors, who were not actively involved in the observations, reflected upon the results, providing a basis for checking interpretations and strengthening their trustworthiness [ 45 ].

Conclusions and implications

Adverse variability in the structuring and weighting of theoretical content and in the pedagogical approach, a hampered three-part dialogue, and a limited basis for assessment all lead to unexploited learning opportunities. These characteristics signal key areas of attention for optimizing the learning potential of formal assessment discussions.

Higher education is in a unique position to lay the groundwork for improving formal assessment discussions. Nursing education programs may increase students’ learning opportunities by using a structured guide, decreasing the variability in the weighting of theoretical content and in the pedagogical approach applied in clinical placements in nursing homes. Nursing education programs should therefore find ways to increase collaboration between nurse educators, nursing students and RN mentors to improve feedback, critical reflection, and clinical learning, thereby limiting the nurse educators’ dominant role in the formal assessment discussions in nursing homes.

Acknowledgements

We express our sincere appreciation to all participants who made this study possible. We wish to thank them for their willingness to be observed.

Abbreviation

RN: Registered Nurse

Authors’ contributions

All authors were responsible for the design, analyses, and interpretation of data. IA prepared the drafts, revised them, and completed the submitted version of the manuscript. IA and KL conducted the data collection, and IA, KL and CF did the first analysis. KL, KA, CF and PC contributed comments and ideas throughout the analyses and interpretation of the data and helped revise the manuscript. All authors gave final approval of the submitted version.

This work is supported by The Research Council of Norway (RCN) grant number 273558. The funder had no role in the design of the project, data collection, analysis, interpretation of data or in writing and publication of the manuscript.

This study is part of a larger project: Aiming for quality in nursing home: rethinking clinical supervision and assessment of nursing students in clinical practice (QUALinCLINstud) [ 19 ].

Availability of data and materials

Declarations.

Not applicable.

The authors declare that they have no competing interests.


Formative peer assessment in healthcare education programmes: protocol for a scoping review

  • Marie Stenberg ,
  • Elisabeth Mangrio ,
  • Mariette Bengtsson ,
  • Elisabeth Carlson
  • Department of Care Science, Faculty of Health and Society , Malmö University , Malmo , Sweden
  • Correspondence to Marie Stenberg; marie.stenberg@mau.se

Introduction In formative peer assessment, students give feedback to and receive feedback from each other, expanding their knowledge in a social context of interaction and collaboration. The ability to collaborate and communicate is an essential part of healthcare professionals’ competence and of the delivery of safe patient care. It is therefore of utmost importance to support students during their healthcare education with activities that foster these competences. The aim of the scoping review is to compile research on peer assessment in healthcare education programmes, focusing on formative assessment. The result of the scoping review will form the basis for developing and conducting an intervention focusing on collaborative learning and peer assessment in a healthcare education programme.

Methods and analysis The scoping review will be conducted using the framework presented by Arksey & O’Malley and Levac et al. The primary research question is: How are formative peer assessment interventions delivered in healthcare education? The literature search will be conducted in the peer-reviewed databases PubMed, Cumulative Index to Nursing and Allied Health Literature, Education Research Complete and Education Research Centre between September and December 2018. Additional searches will be performed in Google Scholar, by hand-searching the reference lists of included studies, and in Libsearch for identification of grey literature. Two researchers will independently screen titles and abstracts. Full-text articles will be screened by three researchers using a charting form. Studies meeting the inclusion criteria will be critically evaluated using the Critical Appraisal Skills Programme. A flow diagram will present the included and excluded studies. A narrative synthesis will be conducted using thematic analysis as presented by Braun and Clarke. The findings will be presented under thematic headings using a summary table. To enhance validity, stakeholders from healthcare education programmes and healthcare institutions will be provided with an overview of the preliminary results.

Ethics and dissemination Research ethics approval is not required for the scoping review.



https://doi.org/10.1136/bmjopen-2018-025055


Strengths and limitations of this study

The result of the scoping review will establish a baseline for understanding the concept of formative peer assessment in healthcare education programmes prior to developing an intervention focusing on peer assessment in a healthcare education programme.

A systematic search strategy will be conducted in four electronic databases of peer-reviewed literature, including searches in library databases for the inclusion of books, e-books and grey literature.

Search strategies will be developed in collaboration with a research librarian well versed in research databases.

No formal quality assessment will be conducted as the scoping review aims to provide a map of the landscape of formative peer assessment in healthcare education.

Only articles and documents published in English will be included.

Introduction 

Peer assessment is described as an essential part of collaborative learning, since students exercise their ability to give and receive feedback. 1 This supports students in gaining insights into and understanding of assessment criteria and their personal approach to an assessment task mirrored in a peer. 1 Furthermore, peer assessment helps students to develop judgement skills, critiquing abilities and self-awareness. 1 It can be defined as ‘an arrangement in which individuals consider the amount, level, quality, or success of the products or outcomes of learning of peers of similar status’ (Topping and Ehly, p118). 2 Peer assessment has been described in a variety of contexts and with various aims, including measuring the professional competence of medical students, 3 as a strategy to enhance students’ engagement in their own learning, 4 5 and the development of employability skills for students in higher education. 6

In a peer-assessment activity, students take responsibility for assessing the work of their peers against set assessment criteria, 1 and the assessment can be conducted summatively or formatively. The purpose of summative assessment is the grading and evaluation of students’ learning. 7 Formative assessment, on the other hand, focuses on the development of students’ learning processes. 8 In formative peer assessment, the intention is to help students help each other when planning their learning. 9 The students expand their knowledge in a social context of interaction and collaboration according to social constructivist principles. 10 11 In this social context, they identify their strengths and weaknesses and develop metacognitive, personal and professional skills. 9 Formative peer assessment is conversational in nature, 12 and the use of feedback is fundamental to it. Feedback is an integral aspect of peer assessment 7 with the intention of enhancing student learning. 13

A recently published review of assessment in higher education 14 raised the issue that studies on peer assessment are deficient in stating exactly what peer assessment aims to achieve and that empirical investigations are missing. Boud et al 1 highlighted the importance of a shift in assessment, from individualistic assessment approaches to peer assessment, if collaboration, as manifested in collaborative learning models, is to be fostered. The abilities to collaborate, communicate, assess, and give and receive feedback are essential parts of healthcare professionals’ competence and of the delivery of safe patient care. It is therefore of utmost importance to support students during their healthcare education with activities that foster these competences. These competences relate to professional teamwork as well as to broader goals for lifelong learning and, as argued by Boud et al, 1 address course-specific goals not readily developed otherwise. Therefore, the scoping review of peer assessment in higher education will act as an important guide prior to developing an empirical investigation focusing on peer assessment interventions in a healthcare education programme.

A scoping review aims to map the concepts, main sources and evidence available in a particular research area to gain a broader understanding of a specific subject 15 and has increased in popularity in recent years in the health and social sciences. 16 Scoping reviews are often conducted as a preliminary investigative process that helps researchers formulate a research question and develop research proposals, 17 and as an essential basis for curriculum development and programme implementation. 18

This scoping review will be conducted using the York methodology of Arksey and O’Malley 15 and taking into consideration the recommendations presented by Levac et al. 19 A scoping review follows a six-stage process: (1) identifying a research question; (2) identifying relevant studies; (3) study selection; (4) charting the data; (5) collating, summarising and reporting the results; and (6) consultation. 15 19 This six-stage process is similar to the process of conducting a systematic review; both use rigorous and transparent methods to identify and analyse all the relevant literature pertaining to a research question. 20 This scoping review does not aim to assess the quality and validity of the studies in order to synthesise best-practice guidelines, as a systematic review would. Rather, it aims to get a broad picture and to highlight recent efforts and key concepts of peer assessment as an integral component for students in higher education. Therefore, this scoping review needs to include a greater range of methodologies and study designs than would be possible in a systematic review, which often focuses on randomised controlled trials. 15

Furthermore, a scoping review can be of use when a topic is of a complex or heterogeneous nature 21 and can serve as an essential basis for curriculum development and programme implementation. 18 Since the literature on peer assessment is extensive, somewhat ambiguous in its precise definitions, 14 and conducted in varying contexts in higher education, this method seemed appropriate for answering the research questions. In other words, peer assessment is multifaceted, and a scoping review may provide the researchers with a broad and in-depth knowledge of this particular subject. The reported results will be essential for the further development of an intervention aiming to implement and evaluate peer assessment as part of a collaborative learning approach in a healthcare education programme.

Stage 1: identifying the research question

The aim of this scoping review is to compile research about peer assessment in higher education, focusing on formative assessment. The primary research question is:

How are formative peer assessment interventions delivered in healthcare education?

Further questions to be answered are:

What are the rationales for using formative peer assessment in healthcare education?

What experiences of formative peer assessment are presented from the perspective of students and teachers in healthcare education and in what context (eg, clinical practice, preclinical and theoretical courses)?

What outcomes are presented from formative peer assessment interventions?

Levac et al 19 recommend a clear articulation of the research question. In a systematic review, the question to guide the search is often based on the ‘Population Intervention Context Outcome’ elements. Since a scoping review has less restrictive inclusion criteria than a systematic review, the ‘Population Concept and Context’ elements ( table 1 ) can be used to establish effective search criteria. 22


Table 1 The Population Concept and Context mnemonic as recommended by the Joanna Briggs Institute 22

Stage 2: identifying relevant studies

The literature search will be conducted in the peer-reviewed databases PubMed, Cumulative Index to Nursing and Allied Health Literature, Education Research Complete and Education Research Centre. Search tools such as Medical Subject Headings, database-specific subject headings and thesauri, and Boolean operators (AND/OR) will be used to expand and narrow the search. Additional searches will be performed in Google Scholar, by hand-searching the reference lists of included studies, and in Libsearch for identification of grey literature. The search will be conducted between September and December 2018. No limitations will be set on the year of publication. Finally, search strategies will be developed in collaboration with a research librarian well versed in research databases.
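To make the Boolean logic concrete, the short Python sketch below assembles a hypothetical search string: OR joins synonyms within a concept block to expand the search, and AND combines blocks to narrow it. The concept groups and terms are illustrative placeholders drawn loosely from the terminology of this protocol, not the authors’ actual search strategy.

```python
# Hypothetical sketch of combining Boolean blocks for a database search.
# The concept groups and terms are illustrative placeholders, not the
# protocol's actual search strategy.

concept_blocks = {
    "assessment": ["peer assessment", "peer feedback", "peer evaluation"],
    "approach": ["formative assessment", "assessment for learning"],
    "population": ["healthcare education", "nursing education", "medical education"],
}

def or_block(terms):
    # OR within a block expands the search; multi-word phrases are quoted.
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# AND between blocks narrows the search to records matching every concept.
search_string = " AND ".join(or_block(terms) for terms in concept_blocks.values())
print(search_string)
```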

Inclusion and exclusion criteria

The following inclusion criteria will be applied in the search: (a) the articles have to address peer assessment in higher education; (b) focus on formative peer assessment; (c) involve students in healthcare education programmes; (d) be peer-reviewed articles, grey literature, books or similar sources; and (e) be evaluated as having moderate or high methodological quality according to the Critical Appraisal Skills Programme (CASP). 23 Initially, the search terms will be purposefully broad (eg, peer assessment, higher education) in order to capture the range of published literature. However, the extent of the material will determine whether narrower inclusion criteria are necessary for managing the material.

Since definitions of peer assessment and the distinctions between related assessment terms vary between authors, 14 similar concepts such as peer feedback and peer evaluation will be incorporated in the search to ensure that no study is missed because of ambiguity in the definition of the subject.

Articles including summative peer assessment will be excluded unless the study also involves formative assessment; however, the distinction between the two must be transparent if the study is to be included. If there is any uncertainty, the study will be excluded. Furthermore, full-text articles, abstracts, conference posters or PowerPoint presentations that are unavailable for review will be excluded.
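Read as a decision rule, the screening logic above can be sketched as follows; the function and parameter names are hypothetical and serve only to illustrate the rule that uncertainty or a missing formative/summative distinction leads to exclusion.

```python
# Sketch of the screening rule described above: a study is retained only if it
# involves formative peer assessment and the formative/summative distinction is
# transparent; any uncertainty leads to exclusion. Names are illustrative only.

def include_for_review(involves_formative: bool,
                       distinction_transparent: bool,
                       uncertain: bool) -> bool:
    if uncertain:
        return False  # "If there is any uncertainty, the study will be excluded."
    return involves_formative and distinction_transparent

print(include_for_review(True, True, False))   # included
print(include_for_review(True, False, False))  # excluded: distinction not transparent
```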

Stage 3: study selection

Initially, the titles and abstracts will be screened by two members of the research team. The team may at this stage need to discuss the inclusion and exclusion criteria and refine the search. 19 If the title is in line with the review purpose, the abstract will be read. This procedure will be conducted by two researchers separately, guided by the inclusion criteria and research questions. If any disagreement appears, a third member of the research team will be consulted. This initial step will determine whether the criteria capture relevant studies. Further, the full-text articles will be imported into the web-based bibliographic manager RefWorks 2.0 to enable removal of duplicates and for organisational feasibility. Each paper will be given a unique number for identification and to keep track of included and excluded articles. 24

Stage 4: charting the data

The full-text articles will be screened by three researchers independently. A charting form will be used to manage the documentation of data extracted from the included studies. The charting form will include the inclusion criteria and an explanation of why a study is included or excluded at this stage in the process. If there are any reservations or discordant opinions, a fourth researcher will be consulted until consensus is reached. Studies meeting the inclusion criteria will be critically evaluated using CASP. 23 Methodological quality will be graded as moderate when a study meets 6–8 criteria of the CASP checklist and high when it meets 9–10 criteria. 25 To enable replication by others, increase the reliability of the findings and ensure methodological accuracy, 15 the process will be documented using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) presented by Moher et al. 26 The PRISMA flow diagram visualises the selection of included and excluded articles during each stage of the search process, and the PRISMA checklist, with its 24 items, will support rigorous reporting of the review. 26
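As a minimal illustration of the CASP cut-offs described above (6–8 criteria met graded as moderate, 9–10 as high), the sketch below encodes the grading rule; the function name and the "below threshold" label are assumptions made for illustration, not part of the protocol.

```python
# Minimal sketch of the CASP-based grading rule: 6-8 criteria met -> "moderate",
# 9-10 -> "high"; fewer than 6 falls below the inclusion threshold.
# Function name and labels are illustrative assumptions.

def casp_grade(criteria_met: int) -> str:
    if 9 <= criteria_met <= 10:
        return "high"
    if 6 <= criteria_met <= 8:
        return "moderate"
    return "below threshold"

print(casp_grade(7))   # -> moderate
print(casp_grade(10))  # -> high
```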

Stage 5: collating, summarising and reporting results

Collating and managing the results from the included articles will be conducted using the data analysis software program NVivo V.11. NVivo is a code-based system developed to support the structuring of qualitative data. 27 Although the analysis of the data material must be performed by the researcher, the software can support an overview of codes, themes and their relationships and connections. 27

We will perform a narrative synthesis using an inductive methodology. The qualitative data will be analysed using the principles for thematic analysis as presented by Braun and Clarke. 28 Thematic analysis is a method for identifying, analysing and reporting patterns within data 28 and can be used with both qualitative and quantitative methodologies. 29 It accommodates a large amount of data and can highlight differences and similarities across a data set. The themes will be identified at a semantic level from the written text. 28 To maintain quality and trustworthiness, each stage of the data analysis will be presented in a scheme. 28 The findings will be presented under thematic headings using a summary table, which can inform a description of key points. Further, detailed tables will present: (a) author(s), (b) the geographical distribution of the studies, (c) year of publication, (d) the educational interventions presented, (e) the professional healthcare programme that the studies refer to, (f) reported experiences, outcomes and main findings of peer assessment initiatives and (g) research methodology.
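As a sketch of what one row of the detailed tables covering fields (a)–(g) might look like if captured programmatically, the structure below is hypothetical; the field names and the example values are invented for illustration and are not taken from the protocol or from any included study.

```python
# Hypothetical charting record mirroring the detailed-table fields (a)-(g)
# listed above. Field names and the example values are illustrative only.
from dataclasses import dataclass

@dataclass
class ChartingRecord:
    authors: str        # (a) author(s)
    country: str        # (b) geographical distribution
    year: int           # (c) year of publication
    intervention: str   # (d) educational intervention presented
    programme: str      # (e) professional healthcare programme
    findings: str       # (f) reported experiences, outcomes, main findings
    methodology: str    # (g) research methodology

# Entirely fictitious example, shown only to illustrate the structure
example = ChartingRecord(
    authors="Author A, Author B",
    country="Sweden",
    year=2017,
    intervention="Formative peer assessment of clinical skills",
    programme="Nursing",
    findings="Students reported greater self-awareness of learning needs",
    methodology="Qualitative interview study",
)
print(example.programme)
```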

Stage 6: consultation

Consultation is an optional stage 15 ; however, since it adds methodological rigour 19 it will be incorporated in the scoping review. The consultation will be conducted when preliminary results are organised in charts and tables (stage 5). Stakeholders from healthcare education programmes (students and teachers) and healthcare institutions (preceptors) will be provided with an overview of the preliminary results. The purpose of the consultation is to enhance the validity of the study outcome.

Ethics and dissemination

Information will only be extracted from public databases. The result of this scoping protocol will form the basis for conducting a scoping review of formative peer assessment in a healthcare education programme. The results will be presented at national and international conferences and published in peer-reviewed journals.

  • 22. Joanna Briggs Institute. The Joanna Briggs Institute Reviewers’ Manual 2015: Methodology for JBI Scoping Reviews. South Australia: The University of Adelaide, 2015.
  • 23. Critical Appraisal Skills Programme. CASP checklist. 2018. https://casp-uk.net/casp-tools-checklists/ (Accessed 5 Aug 2018).

Contributors MS led the design, search strategy and conceptualisation of this work and drafted the protocol. EM, MB and EC were involved in the conceptualisation of the review design, inclusion and exclusion criteria and provided feedback on the methodology and the manuscript. All authors give their approval to the publishing of this protocol manuscript.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Patient consent Not required.

Ethics approval Research ethics approval is not required for a scoping review.

Provenance and peer review Not commissioned; externally peer reviewed.


Nursing Course and Curriculum Development – University of Brighton



Formative assessment

Formative assessment i.e. assessment for learning

Formative assessment enables students to try out new assessment types, to express their learning, to get feedback on strengths and areas to develop and to understand the quality of their work whilst studying.

The module specification must state (in the ‘Teaching and learning activities’ section):

  • The form of the formative assessment task e.g. online test, essay plan, seminar presentation, peer review of work
  • When it will take place i.e. during a taught session or independent study
  • How students will get feedback e.g. automated online test marking, oral feedback, written feedback; from a peer or an academic

Examples of formative assessments:

  • Set up a discussion board to develop learning networks
  • Develop a reading log
  • Review an article
  • Group presentation
  • Mock examination
  • Self-assessment  – students generate criteria appropriate for assessing own work
  • Annotated bibliography

Annotate exemplars of summative assessments to identify where the student was meeting the learning outcomes or could meet them better. This can apply to all forms of assessment: essays, posters, OSCEs and presentations (if videoed).


Academic staff perspectives of formative assessment in nurse education

Affiliation.

  • 1 Thames Valley University, Faculty of Health and Human Sciences, Paragon House, Boston Manor Road, Brentford, Middx TW8 9GA, UK. [email protected]
  • PMID: 19818688
  • DOI: 10.1016/j.nepr.2009.08.007

High-quality formative assessment has been linked to positive benefits for learning, while good feedback can make a considerable difference to the quality of learning. It is proposed that formative assessment and feedback are intricately linked to the enhancement of learning and have to be interactive. Underlying this proposition is the recognition of the importance of staff perspectives on formative assessment and their influence on assessment practice. However, there appears to be a paucity of literature exploring this area in relation to nurse education. The aim of the research was to explore the perspectives of twenty teachers in nurse education on formative assessment and feedback in theoretical assessment. A qualitative approach using semi-structured interviews was adopted. The interview data were analysed and the following themes identified: purposes of formative assessment, involvement of peers in the assessment process, ambivalence about the timing of assessment, types of formative assessment and the quality of good feedback. The findings offer suggestions that may be of value to teachers facilitating formative assessment. The conclusion is that changes to the practice of formative assessment and feedback are required, with teachers recognising that learning is central to the purposes of formative assessment and regarding students as partners in this process.

Copyright 2009 Elsevier Ltd. All rights reserved.

  • Education, Nursing / standards*
  • Educational Measurement / methods*
  • Faculty, Nursing / standards*
  • Feedback, Psychological
  • Qualitative Research
  • United Kingdom
  • Open access
  • Published: 16 June 2022

Exploring the formal assessment discussions in clinical nursing education: An observational study

  • Ingunn Aase 1 ,
  • Kristin Akerjordet   ORCID: orcid.org/0000-0002-4300-4496 1 , 2 ,
  • Patrick Crookes   ORCID: orcid.org/0000-0001-8669-7176 3 ,
  • Christina T. Frøiland   ORCID: orcid.org/0000-0002-3902-8022 1 &
  • Kristin A. Laugaland   ORCID: orcid.org/0000-0003-3451-2584 1  

BMC Nursing volume 21, Article number: 155 (2022)


Introduction

According to EU standards, 50% of the bachelor education program in nursing should take place in clinical learning environments. Consequently, this calls for high quality supervision, where appropriate assessment strategies are vital to optimize students’ learning, growth, and professional development. Despite this, little is known about the formal assessment discussions taking place in clinical nursing education.

The aim of this study was to explore the characteristics of the formal assessment discussions taking place during first-year students’ clinical education in nursing homes.

An exploratory qualitative study was performed. The data consist of passive participant observations of 24 assessment discussions (12 mid-term and 12 final assessments) with first-year nursing students ( n =12), their assigned registered nurse mentors ( n =12) and nurse educators ( n =5). The study was conducted in three public nursing homes in a single Norwegian municipality. Data were subjected to thematic analysis. The findings were reported using the Standards for Reporting of Qualitative Research.

Three themes were identified regarding the characteristics of the formal assessment discussions: (1) adverse variability in structuring, weighting of theoretical content and pedagogical approach; (2) limited three-part dialogue constrains feedback and reflection; and (3) restricted grounds for assessment leave the nurse educators with a dominant role.

These characteristics signal key areas of attention for improving formal assessment discussions to capitalize on unexploited learning opportunities.


This study focuses on formal assessment practice of nursing students in clinical education in nursing homes. Enabling nursing students to acquire professional competence through clinical education in a variety of healthcare settings is a cornerstone of contemporary nurse education programs [ 1 ]. According to EU standards, 50% of the bachelor education program should take place in clinical learning environments. Consequently, this calls for high-quality clinical supervision that includes appropriate assessment strategies, which are critical to optimize students’ learning, professional development, and personal growth [ 2 ].

Formal assessment of nursing students in clinical education serves two purposes: 1) to facilitate learning by enabling students to judge their own achievements more accurately and encourage their continuous learning process; and 2) to provide certification of achievements [ 3 ]. Accordingly, there are two approaches to assessment: formative assessment and summative assessment. Formative assessment is focused on the learning needs of each student, identifying areas in need of development, and providing feedback. Feedback is the central component of effective formative assessment. A summative assessment is a summary of a student’s achievements and judgement as to whether he/she has met the required learning outcomes [ 4 , 5 ]. Student clinical placements are often assessed on a pass-fail basis, not by a letter grade [ 6 ].

The predominant clinical education model applied in nursing homes involves students being mentored and assessed by a registered nurse (RN) and followed up by a nurse teacher [ 7 ]. The formal assessment during clinical education involves a partnership model in which nursing students, their assigned RN mentors and nurse educators cooperate and share responsibility for facilitating and confirming the students’ achievement of expected learning outcomes [ 8 ]. However, substantial variations in assessment practices internationally and nationally have been reported, suggesting that the assessment practices of nursing students in clinical education lack consistency [ 2 , 3 , 9 , 10 ]. Consequently, a variety of tools for assessing students’ clinical competence exist, and these tools depend on different definitions of clinical competence and of the components to be assessed, such as knowledge, technical care skills, attitudes, behaviours, clinical judgment, and critical thinking [ 11 , 12 , 13 ]. Several international researchers have argued that reliable and comparable assessment of nursing students’ knowledge and skills would benefit greatly from the development and consistent use of national competency assessment tools [ 2 , 8 , 14 , 15 ]. In their discussion paper, Gjevjon et al. [ 16 ] highlighted the importance of assessing the students’ progression in clinical skills and ensuring that their nursing competence is in line with official requirements and professional expectations. This implies that student assessments cannot be limited to a single clinical placement period.

Stakeholders have reported challenges with the assessment of students’ competence in clinical nursing education [ 2 , 17 , 18 , 19 ]. RN mentors report that they have trouble providing feedback and assessing student competence because of the absence of clear guidelines and assessment criteria [ 17 , 20 ]. RN mentors also report having a passive and peripheral role during formal assessment discussions [ 18 , 21 ]. Conversely, nursing students report feeling insecure due to power disparities in the assessment discussions; students perceive their RN mentors as much more powerful than they are [ 22 ]. Moreover, students report that the personal chemistry between them and their assigned RN mentor can lead to differential treatment [ 22 ]. In comparison, nurse educators report that it is challenging to make the students’ competence and learning processes visible and to ensure fair and equitable assessment of students in clinical placement/education (e.g., [ 23 ]). Difficulties in understanding the concepts used to describe student learning outcomes and the language used in the assessment tools, limited mentor competence in assessment, and restricted academic-clinical collaboration have also been reported in the literature (e.g., [ 2 , 24 , 25 , 26 ]). A systematic review of the assessment of student competence found that the use of a valid and reliable assessment tool with clear criteria, together with continued education and support for mentors, is critical to the quality of learning opportunities [ 3 ].

Formative assessment, with feedback as a key aspect, is arguably one of the most important factors for students’ learning, personal growth, and professional development in clinical assessment discussions [ 24 , 27 ]. A multilevel descriptive study [ 20 ] concluded that students often do not receive sufficient constructive feedback during formal assessment discussions. This is of major concern, since clinical learning is considered a signature pedagogy in the preparation of nursing students for real-world practice (e.g., [ 2 , 28 ]). For workplace learning to be optimized, nursing students need to explore the complexity of patients’ experiences and to be able to discuss and evaluate patient care with others, both inside the clinical environment and in an academic setting (e.g., [ 29 , 30 ]). The focus of assessment is to aid nursing students’ continuous learning process, which requires constructive feedback and opportunities for reflection between the nursing student, RN mentor, and nurse educator [ 3 ]. Formal assessment offers a potential opportunity to optimize nursing students’ learning outcomes by extending and transforming their professional knowledge dialectically. The explicit focus of assessment is therefore very important, as students tend to concentrate on achieving the required competencies that they know will be assessed [ 2 ].

Internationally, emerging evidence shows that summative assessment of nursing students’ competence is a matter of concern across countries and educational institutions, as previously stressed, due to a) a lack of consistency in the methods and tools used, b) its openness to subjective bias, and c) the fact that the quality of assessment varies greatly [ 2 , 3 ]. According to Helminen et al. [ 2 ], there are few studies of summative assessment and, as far as we know, no studies have explored the characteristics of these formal assessment discussions by observing what actually goes on in the three-part dialogue between nursing students, RN mentors and nurse educators. There is therefore a need to further increase our knowledge and understanding of the characteristics of assessment discussions and of how these discussions can enhance students’ learning (e.g., [ 2 , 20 , 21 ]). To fill this knowledge gap, the aim of this study was to explore, using observation, the characteristics of the formal assessment discussions that take place during first-year students’ clinical education in a nursing home. This is considered a novel methodological approach in this field of nursing education.

The study applied a qualitative, exploratory design using passive observation [ 31 ] to explore the characteristics of the mid-term and final assessment discussions. Such observations allow the researcher to be present and identifiable, but the researcher does not participate or interact with the people being observed [ 31 ]. Observational research is, as previously stressed, a novel way to study assessment discussions, as most studies have applied interview methods retrospectively as a source for collecting data about assessment discussions [ 3 , 20 ]. Observational research offers a rich, in-depth approach which, in contrast to interviews, allows the researcher to identify context-specific issues of importance, to learn what is taken for granted in a situation, and to discover what is happening by watching and listening in order to arrive at new knowledge [ 31 ]. The observations of the formal assessment discussions were distributed among three of the researchers (IA, CF, KL), who individually carried out passive observations using a structured guide to ensure rigor [ 32 ]. The Standards for Reporting Qualitative Research (SRQR) were used.

The context of the observed assessment practices

In this study, nursing home placements are part of eight weeks of mandatory clinical education during nursing students’ first academic year. In the observational study, a preceptorship model was applied in which students are mentored by an RN and followed up by a nurse educator [ 7 ]. In Norway, mentorship is an integral part of an RN’s work; the mentor does not receive financial compensation and is not required to have formal training in mentorship (e.g., at master level). The RN mentors are employed by the nursing homes, and the nurse educators included are employed by the university. The nurse educators are responsible for coordinating the students’ learning, organizing the assessment discussions, and assessing the nursing students in clinical placement.

The clinical education system for the students in this study comprises two formal assessment discussions: the mid-term discussion, with both a summative and a formative assessment, and the final assessment discussion, in which the whole period is encapsulated in a summative assessment [ 20 ]. The mid-term and final summative assessments take the form of a three-part dialogue among the nursing student, the RN mentor, and the nurse educator at the placement site. Prior to the assessment discussions, the nursing student must write an evaluation of his or her learning. This written self-assessment must be sent to the nurse educator and RN mentor two days before the assessment discussion. There is no requirement for written preparation or documentation from the RN mentors. The university assigns the student a pass or fail grade based on six competence areas (i.e., professional ethics and legal competence, cooperation, patient-centered nursing, pedagogical competence, management competence, and learning competence) with the accompanying learning outcomes. All six competence areas and the accompanying learning outcomes, with a particular focus on fundamentals of care, were well known to the researchers (IA, CF, KL), serving as an important pre-understanding for data collection and analysis. Beyond this, all the researchers had experience as nurse educators in the nursing home context, and one of the researchers (CF) holds a Master of Science degree in gerontology.

Setting and sample

The study was conducted in three public nursing homes within the same municipality in Western Norway as part of a larger research project: “Aiming for quality in nursing home care: Rethinking clinical supervision and assessment of nursing students in clinical studies” [ 19 ]. The nursing homes varied in patient numbers and staffing but were highly motivated to participate in the research project, which was anchored in their top management teams. Recruitment was based on a purposive, criterion-based sampling strategy [ 33 ] targeting the nursing students, RN mentors and nurse educators involved in assessment discussions. To ensure that the sample had the relevant knowledge and expertise, RNs with mentorship experience from nursing homes were included, ensuring diversity in gender, age, and ethnicity. The nursing students and the nurse educators were recruited from the same university, representing a distribution of age, gender, healthcare experience, and academic experience (see Table 1).

Prior to data collection, approval was obtained from the university and from the nurse managers at the nursing homes enrolled in the study. In addition, an information meeting was held by the first and last authors of this study with the eligible nursing students during their pre-placement orientation week on campus, with the RN mentors at the selected nursing home sites, and with the nurse educators responsible for overseeing nursing students on placement.

Invitations to participate in the study were then sent to eligible participants at the three public nursing homes. Nursing students were recruited first, before their assigned RN mentors and nurse educators were emailed an invitation to participate, to ensure the quality of the study sample. Two co-researchers working in two of the three enrolled nursing homes assisted in the recruitment of the RN mentors. A total of 45 nursing students were allocated to clinical placement at the three included public nursing homes. Of the 45 potential nursing students invited to participate, 12 consented. Their assigned RN mentors (n=12) and nurse educators (n=5) also agreed to participate. A summary of participant group characteristics is displayed in Table 1.

Four nursing students and four RN mentors were enrolled from each nursing home. As nurse educators are responsible for overseeing several students during placements, fewer nurse educators than students and RN mentors participated. All but one of the participants, an RN mentor, were women. Of the 29 participants, six (one nursing student, one nurse educator and four RN mentors) did not have Norwegian as their mother tongue. None of the RNs had formal competence in supervision, and their experience of mentoring students ranged from 1 to 7 years. Seven of the 12 nursing students had healthcare experience prior to their placement period. Three of the five nurse educators held a PhD, and the other two were lecturers with a master’s degree. Two of the nurse educators were overseeing students on nursing home placement for the first or second time, while the other three had several years of experience with nursing student placements in nursing homes. None of the nurse educators had expertise in gerontological nursing.

Data collection

To allow first-hand experience of the formal assessment discussions, passive observation was used as the main source of data collection. The observations were carried out separately by three researchers (IA, CF, KL), all of whom are experienced qualitative researchers with a background in nursing, nursing education and nursing research. The researchers were all familiar faces from lectures at the university and, as previously emphasized, as nurse educators in the public nursing home settings. At the beginning of each observation, time was taken to create a bond of trust with the participants to reduce contextual stress. The observations were based on a structured observation guide (see Attachment 1). The observation guide contained relatively broad predefined categories: structure, content, duration, interaction, dialogue, and feedback. These predefined categories were used to guide and support the recording and note-taking process, while allowing space for spontaneous aspects of the formal assessment discussions [ 31 ]. The guide was based on the aim of the study and informed by the literature.

During the observations, each researcher sat on a chair in a corner of the room (two to three meters away) to observe the interaction and listen unobtrusively to the assessment discussions, thereby reducing the students’ experience of stress. Observational notes were taken discreetly according to the structured guide and combined descriptions with personal impressions [ 34 ]. Summaries, including reflective notes, were written in electronic format directly after the observations. The observations were conducted alongside the clinical placements in February and March 2019 and lasted about 60 minutes on average. The choice of passive observation by three researchers was made for both pragmatic and scientific reasons: a) to ensure that data were collected in a timely manner when assessment meetings coincided, and b) to verify the observational notes through triangulation during the analysis and interpretation process [ 33 ].

Data Analysis

Braun and Clarke’s [ 35 ] approach to thematic analysis was used to analyze the observational notes and summary transcripts. The thematic analysis was guided by the aim of the study and followed the six steps described by Braun and Clarke [ 35 ]: (1) becoming familiar with the data; (2) generating initial codes; (3) searching for themes; (4) reviewing themes; (5) defining and naming themes; and (6) producing the report. The 93 pages of observational notes were read independently by three of the researchers (IA, CF, KL) to obtain an overall impression of the dataset (See Fig. 1 : Analysis process).

Fig. 1 Analysis process

All the observational notes, from both mid-term and final assessment discussions, were then compared and analyzed as a single dataset, with attention to similarities and differences, by generating initial codes and searching for themes. The reason for merging the datasets was that the mid-term and final assessment discussions overlapped in terms of initial codes and emerging themes, contributing to a deeper understanding of the results. The researchers IA, CF and KL met several times to discuss the coding process and to finalize the preliminary themes. All authors revised, defined, and named three key themes, reaching consensus. This means that all authors contributed to analytic integrity through rich discussions throughout the process of analysis, clarification, validation, and dissemination of results [ 36 ]. The study also provides a description of the context of the formal assessment discussions, the participants enrolled, the data collection and the analysis process, allowing readers to evaluate the authenticity and transferability of the research evidence [ 36 ].

Ethical considerations

The study was submitted to the Regional Committees for Medical and Health Research Ethics in Norway, which found that the study was not regulated by the Health Research Act, since no health or patient data were registered. The study adhered to the general ethical principles laid down by the National Committee for Medical & Health Research Ethics in Norway. In addition, the principles of confidentiality, voluntary participation and informed consent were applied, following the World Medical Association’s Declaration of Helsinki. All participants gave their written informed consent and were informed about the right to withdraw from the study at any point. The nursing students were made aware that participation or non-participation would not affect other aspects of their clinical placement/education period or give them any advantages or disadvantages on their educational path. The study was approved by the Norwegian Centre for Research Data in two phases (Phase 1: NSD, ID 61309; Phase 2: NSD, ID 489776).

All methods were performed in accordance with relevant guidelines and regulations.

The analysis identified three themes that describe the characteristics of the formal assessment discussions taking place during first-year nursing students’ clinical education in nursing homes: (1) Adverse variability in structuring, weighting of theoretical content, and pedagogical approach, (2) Limited three-part dialogue constrains feedback and reflection, and (3) Restricted grounds for assessment leave the nurse educators with a dominant role. These themes are now presented.

Theme 1: Adverse variability in structuring, weighting of theoretical content and pedagogical approach

This theme illuminates adverse variability in the nurse educators’ structuring of the discussion, weighting of theoretical content (e.g., bridging of theory and practice), and pedagogical approach across the assessment discussions. Some nurse educators went through each competence area and the accompanying learning outcomes, strictly following the sequence adopted in the assessment form. Others adopted a more flexible sequence of progression guided by the discussion. The latter approach led to frequent shifts in focus between competence areas; the observational notes described that students and RN mentors found it difficult to follow the discussion and the assessment process. For example, the observational notes recorded a student commenting that she felt insecure because the nurse educator appeared to write notes pertaining to one competence area while talking about another.

The data exposed variations in the nurse educators’ emphasis on and weighting of bridging theory and practice. While some nurse educators asked theoretical questions to gauge the students’ knowledge and to help them link theory and practice, other nurse educators did not. An absence of theoretical questions was observed in several assessment discussions. Additionally, the nurse educators’ knowledge of, and reference to, the course curriculum varied. Some nurse educators seemed familiar with the course curriculum and mentioned the theoretical subjects taught to the students pre-placement, while other nurse educators refrained from discussing theoretical issues. The weighting of geriatric nursing was limited in the assessment discussions.

The nurse educators varied in their pedagogical approach across the assessment discussions. Some nurse educators asked open questions, inviting the students to self-evaluate and reflect on their own performance and development before inviting the RN mentor to provide input. Other nurse educators adopted a more confirmative approach, asking closed-ended questions and reading aloud from the student’s written self-assessment before asking the RN mentors simply to confirm the nurse educators’ impressions and evaluations with “yes” or “no” answers.

Theme 2: Limited three-part dialogue constrains feedback and reflection

The second theme illuminates the limited participation of all three parties in the formal assessment discussions, which was observed to constrain feedback and reflection. Several possible impediments to the dialogue were observed: language barriers, interruptions, the preparedness of students and RN mentors for the discussions, the basis for assessing the students, and the students’ reported level of stress. There were variations in the way both nurse educators and RN mentors conveyed their feedback to the students. Several of the RN mentors were observed to assume a passive, peripheral role, sitting on the sidelines and saying very little. When addressed by the nurse educator, the RN mentors tended to respond with yes/no answers.

Language barriers related to understanding the learning outcomes to be assessed appeared to hamper the dialogue. On several occasions, RN mentors who did not have Norwegian as their mother tongue expressed that it was difficult to fully understand the language used in the assessment form, so they did not know what was required during the assessment. We therefore observed that the nurse educators often took time to explain and “translate” the concepts used to describe the student learning outcomes to ensure mutual understanding.

Interruptions during the assessment discussions interfered with the dialogue. In several of the assessment discussions the RN mentors took phone calls that required them to leave the assessment discussions, sometimes for extended periods of time. This meant that the nurse educator later had to update the RN mentor on what had been covered in her/his absence.

Preparedness of students and RN mentors for the assessment discussion varied. Some students brought their self-assessment document with them to the meeting, others came empty-handed. Some but not all RN mentors brought a hard copy of the student’s self-assessment document. Some RN mentors admitted that they had been too busy to read the self-assessment before the meeting.

Based on their body language, several students appeared nervous during the assessment discussions. The observational notes showed that some students later confirmed this self-perceived stress by expressing during the discussions that they had had dreams or nightmares about them. The observational notes also showed that some students said during the discussions that they did not know what would be brought up and that they were afraid of failing their clinical placement. The three-part dialogue focused to a great extent on whether the student’s competence achievements merited a pass or a fail for the clinical placement, with less attention paid to providing the student with opportunities to reflect and thereby enhance learning.

Theme 3: Restricted grounds for assessment leave the nurse educators with a dominant role

Limited dialogue and engagement from students and RN mentors often left the nurse educators with a restricted basis for assessment and thus gave them a dominant role in the assessment discussions. RN mentors seemed to have insufficient information to assess their students’ performance, learning and development, stressing that they had not spent a significant amount of time observing them during the placement period. On several occasions RN mentors expressed that, due to sick leave, administrative tasks, and alternating shifts, they had spent only a handful of workdays with their students. The observational notes revealed that some RN mentors said they had to base their formal assessments of student performance mainly on these assessment discussions. Several RN mentors expressed frustration with limited nurse coverage, which could impede their mentorship and assessment practices and reduce the amount of student follow-up during placement. Because of the limited input from the RN mentors, the researchers observed that the nurse educators had to base their evaluations on the students’ written self-assessments. Some nurse educators gave weight to the students’ capacity for self-evaluation. The quality, amount and content of the written self-assessment were observed to be influential in determining the students’ strengths and weaknesses and whether the students passed or failed their competence areas. Our observational notes showed that some students struggled to pass because their written self-assessments were not comprehensive enough. In the formal assessment document, there are fields for ‘strengths and areas of growth/improvement’. There were variations in how much was written beyond the mark for approved or not approved and pass or fail. This implies that some assessment discussions were marked by ticking a box rather than by engaging in a reflective dialogue.

The findings of this exploratory observational study suggest that the formal assessment discussions for first-year nursing students are characterized by a lack of conformity, referred to here as adverse variability, regarding structure, theoretical content and pedagogical approach. The limited three-part dialogue appeared to constrain the feedback and critical reflection needed to enhance students’ clinical learning opportunities, leaving the nurse educators with a dominant role and a restricted basis for assessment. Increased awareness is therefore required to improve formal assessment discussions and capitalize on unexploited learning opportunities.

Unexploited learning opportunities

Formal assessment discussions are expected to optimize nursing students’ clinical learning outcomes by extending and transforming their professional knowledge. Our findings illuminate adverse variability in nurse educators’ pedagogical approach and in the weighting of theoretical content during formal assessment, which may reduce the students’ ability to learn. This is of major concern, since the assessment discussion should be clear and systematic, encouraging students’ continuous reflection and learning. Both nursing students and nurse mentors seem to need more knowledge of what a formal assessment discussion consists of, to reduce unpredictability (e.g., by familiarizing themselves with the expected learning outcomes, the assessment criteria, and the context) and to optimize clinical learning. Regarding knowledge of, or reference to, theoretical teaching prior to clinical placements, nurse educators may fail to provide first-year students with consistency during formal assessment and thus with support in bridging theory and practice, which is essential to their learning and professional development. These findings resonate with an integrative review which indicated that orientation programs, mentor support, clear role expectations, and ongoing feedback on performance are essential for academic organizations to retain excellent nursing faculty [ 37 ]. This highlights the importance of nurse educators' awareness and active involvement in assessment discussions [ 38 , 39 ] to provide academic support and guidance of students' theory-based assignments [ 40 ]. A critical question is whether the nurse mentors’ competence and active role have been sufficiently acknowledged in the formal assessment discussions, particularly since our research revealed that gerontological questions were hardly reflected upon to support students’ clinical learning in nursing homes.

Critical reflections in the assessment discussions were also limited, since some nurse educators in this study rarely asked the students for reflections. This is of major concern since critical reflection is a pedagogical way to bridge theory with clinical experience and to tap into unexploited learning opportunities by strengthening the students’ reflection skills and knowledge in the assessment discussions [ 41 ]. Critical reflection in clinical settings and education is known to assist students in acquiring necessary skills and competencies [ 42 ].

Encouraging students to reflect on clinical learning experiences contextually, and providing clearer guidelines for formal assessment discussions in nursing education programs, may be both necessary and important for exploring unexploited learning opportunities. Our findings imply that education programs may increase students’ learning opportunities by decreasing the variability in structure, in the weighting of theoretical content and in the pedagogical approach applied in clinical education. Increased awareness of assessment, grounded in a common understanding of how the assessment should be managed and what the assessment criteria are, is therefore considered important (e.g., [ 17 ]). Research is, however, needed to explore the relationship between nurse educators’ clinical expertise, competence and pedagogical skill set and students’ learning outcomes [ 26 ]. Our findings suggest that measures for preparing nurse educators with better theoretical knowledge and pedagogical approaches require further development, for example through online educational support. Further research should therefore explore and extend our understanding of the need for improvement in structure, theoretical content, and pedagogical approach in formal assessment discussions.

Hampered three-part dialogue

The study findings illuminate that a hampered three-part dialogue makes it hard to offer feedback and engage in reflection. This limited the learning potential of the formal assessment in clinical education. Many factors affect the degree of interaction in the three-part dialogue. For example, our findings imply that RN mentors gave little feedback to the students in the assessment discussions because they had not spent a significant amount of time with them. A consequence of inadequate feedback in the assessment discussions may be a limited clinical learning potential for the nursing students. Formative assessment is arguably one of the most important and influential factors for students’ learning, personal growth, and professional development in their clinical assessment discussions [ 27 ]. According to our findings, formative process assessment is not always used properly, even though it is highly recommended in the literature (e.g., [ 3 , 4 ]). Our study shows that an enhanced focus on formative assessment in the nursing education program, and further research using different methodologies, are needed to extend our knowledge of formative process assessment in relation to clinical learning. Overall, our findings indicate that there is room for improvement in the way RN mentors participate in the assessment discussions. RN mentors and nurse educators need to increase their knowledge of how to give constructive and substantive feedback and how to encourage students to reflect critically on their learning through the formal assessment discussions. The findings also suggest that linguistic challenges associated with an internationally diverse workforce among RN mentors may constrain assessments of student competence. These findings are consistent with other studies which concluded that understanding the language and meaning of the concepts used in the assessment document was difficult and might have resulted in a peripheral and passive role for the RN mentors [ 18 , 21 , 26 ]. These linguistic challenges and the marginalization of RN mentors may be further causes of the limited dialogue, requiring better pedagogical preparation for nursing students’ clinical placement in nursing homes.

The results from our study also illuminated that the students appeared nervous before and during the assessment discussions. This anxiety could make the discussions difficult and stressful for them. Other studies have noted students’ need for more predictability to reduce stress and feel more secure [ 22 , 43 ]. Similar findings are described in the study of nursing students’ experiences with clinical placement in nursing homes, where Laugaland et al. [ 19 ] highlighted the vulnerability of being a first-year student. Students need to be informed about the purpose of the formal assessment discussions so that they can prepare for them and, consequently, feel less anxious. This may give them a better basis for, and openness to, learning in the three-part dialogue. An enhanced focus on students’ stress in the assessment discussions, and further research on its consequences for learning, are therefore required.

The hampered three-part dialogue indicated in our results often left the nurse educators with a restricted basis for assessing the nursing students as well as a dominant role in the discussions. Cooperation between the nurse educators and the RN mentors was also limited, owing both to interruptions in the assessment discussions and to little input from the RN mentors; this is neither ideal nor desirable if the formal assessment discussions are to support students’ clinical learning, and it represents another unexploited learning opportunity. Wu et al. [ 10 ] describe clinical assessment as a robust activity which requires collaboration between clinical partners and academia to enhance the clinical experiences of students. Our research indicates a need for increased collaboration between educational programs and clinical placements in nursing homes. One way to increase collaboration and give the RN mentors a more active role may be to provide them with dedicated time and compensation for mentoring students, including access to academic courses in clinical mentoring. Another may be for nursing home leaders to let RN mentors prioritize the supervision of students during the clinical placement period. Future research should extend and explore measures to strengthen the collaboration without giving the nurse educators too much of a say in assessment discussions.

Methodology considerations

A strength of this study is its use of passive observation to explore and describe the formal assessment discussions, a novel approach that expands and deepens knowledge in this research field.

A methodological consideration is that the researchers claimed to be passive observers of an interaction, while an observer’s presence is itself a significant part of the interaction. Participants might be uncomfortable and stressed seeing a stranger silently sitting in the corner and taking notes [ 31 ]. On the other hand, the researchers were familiar faces to the nursing students, and some students acknowledged that the researchers’ presence was comforting and made them feel more secure in the situation. A critical question, however, is whether the use of video recordings would have provided richer data. Video recordings allow interactions in the assessment discussions to be captured as they occur naturally, with few disturbances, and they allow for repeated viewing and detailed analysis [ 44 ]. Frequencies of questions and answers in the three-part dialogue might also have been counted, for example to confirm the observed dominant role of the nurse educators. This was discussed, but it was judged neither practical nor feasible in this context.

Video recording with no researcher present might also reduce the participants’ stress but could, on the other hand, be perceived as more threatening.

The way the data were collected and analyzed by three researchers should also be considered. The data were collected separately by the three researchers to ensure timely collection without interruption. A limitation may be that all three researchers had similar educational backgrounds and experience of formal assessment discussions in nursing homes. Their preconceptions might have influenced the results. Yet, the researchers were not involved in the nursing students’ clinical placement period. To control for researcher bias, the data analysis applied triangulation: two of the authors, who were not actively involved in the observations, reflected upon the results, providing a basis for checking interpretations and strengthening their trustworthiness [ 45 ].

Conclusions and implications

Adverse variability in the structure, the weighting of theoretical content and the pedagogical approach, a hampered three-part dialogue, and a limited basis for assessment all lead to unexploited learning opportunities. These characteristics signal key areas of attention for optimizing the learning potential of these formal assessment discussions.

Higher education is in a unique position to lay the groundwork for improving formal assessment discussions. Nursing education programs may increase students’ learning opportunities by using a structured guide and decreasing the variability in the weighting of theoretical content and in the pedagogical approach applied in clinical placements in nursing homes. Nursing education programs should therefore find ways to increase the collaboration between nurse educators, nursing students and RN mentors to improve feedback, critical reflection, and clinical learning, thereby limiting the nurse educators’ dominant role in the formal assessment discussions in nursing homes.

Availability of data and materials

Original de-identified data from the study will be stored at the Norwegian Centre of Research Data after completion of the project and can be requested from the corresponding author upon reasonable request.

Abbreviations

RN: Registered nurse

Rush S, Firth T, Burke L, Marks-Maran D. Implementation and evaluation of peer assessment of clinical skills for first year student nurses. Nurse Educ Prac. 2012;12:219–26. https://doi.org/10.1016/j.nepr.2012.01.014 .


Helminen K, Coco K, Johnson M, Turunen H, Tossavainen K. Summative assessment of clinical practice of student nurses: A review of the literature. Intern J Nurs Stud. 2016;53:308–16.

Immonen K, Oikarainen A, Tomietto M, et al. Assessment of nursing students` competence in clinical practice: A systematic review of reviews. Intern J Nurs Stud. 2019;100:103414. https://doi.org/10.1016/j.ijnurstu.2019.103414 .

Lauvås P, Handal G. Veiledning og praktisk yrkesteori [Supervision and practical professional theory]. 3rd ed. Oslo: Cappelen Damm Akademisk; 2014.


Schuwirth LWT, van der Vleuten CPM. Programmatic assessment: From assessment of learning to assessment for learning. Med Teach. 2011;33(6):478–85. https://doi.org/10.3109/0142159X.2011.565828 .

Heaslip V, Scammel J. Failing underperforming students: The role of grading in practice assessment. Nurse Educ Pract. 2012;12:95–100.

Saarikoski M, Kaila P, Lambrinou E, Canaveras RMP, Warne T. Students’ experiences of cooperation with nurse educators during their clinical educations: An empirical study in a western European context. Nurse Educ Prac. 2013;13(2):78–82. https://doi.org/10.1016/j.nepr.2012.07.013 .

Kristofferzon ML, Mårtensson G, Mamhidir AG, Løfmark A. Nursing students’ perceptions of clinical supervision: The contributions of preceptors, head preceptors and clinical lecturers. Nurs Educ Today. 2013;33(10):1252–7. https://doi.org/10.1016/j.nedt.2012.08.017 .

Kydland AG, Nordstrøm G. New form is suitable for assessing clinical nursing education. Sykepleien Forskning. 2018;13:e-74323. https://doi.org/10.4220/Sykepleienf.2018.74323 .

Wu XV, Enskar K, Lee CCS, Wang W. A systematic review of clinical assessment for undergraduate nursing students. Nurse Educ Today. 2015;35:347–59. https://doi.org/10.1016/j.nedt.2014.11.016 .


Löfmark A, Thorell-Ekstrand I. Nursing students' and preceptors' perceptions of using a revised assessment form in clinical nursing education. Nurse Educ Prac. 2014;14(3):275–80. https://doi.org/10.1016/j.nepr.2013.08.015 .

Levett- Jones T, Gersbach J, Arthur C, Roche J, et al. Implementing a clinical competency assessment model that promotes critical reflection and ensures nursing graduates’ readiness for professional practice. Nurse Educ Prac. 2011;11:64–9.

Yanhua C, Watson R. A review of clinical competence assessment in nursing. Nurse Educ Today. 2011;31(8):832–6. https://doi.org/10.1016/j.nedt.2011.05.003 .


Brown RA, Crookes PA. How do expert clinicians assess student nurses’ competency during workplace experience? A modified nominal group approach to devising a guidance package. Collegian. 2017;24(3):219–25. https://doi.org/10.1016/j.colegn.2016.01.004 .

Crookes P, Brown R, Della P, Dignam D, Edwards H, McCutcheon H. The development of a pre-registration nursing competencies assessment tool for use across Australian universities. NSW: ALTC; 2010.  https://ro.uow.edu.au/hbspapers/684/

Gjevjon ER, Rolland EG, Olsson C. Are we prepared to educate the next generation of bachelor nursing students? A discussion paper. Nordic J Nurs Res. 2021;1–3. https://doi.org/10.1177/20571585211040444 .

Helminen K, Johnson M, Isoaho H, Turunen H. Final assessment of nursing students in clinical practice: Perspectives of nursing teachers, students and mentors. J Clin Nurs. 2017;26:4795–803. https://doi.org/10.1111/jocn.13835 .

Frøiland CT, Husebø AML, Akerjordet K, Kihlgren A, Laugaland K. Exploring mentorship practices in clinical education in nursing homes: A qualitative mixed-methods study. J Clin Nurs. 2021;00:1–14. https://doi.org/10.1111/jocn.15943 .

Laugaland KA, Gonzalez MT, McCormack B, et al. Improving quality in clinical placement studies in nursing homes (QUALinCLINstud): The study protocol of a participatory mixed-methods multiple case study design. BMJ Open. 2020;10:e040491. https://doi.org/10.1136/bmjopen-2020-040491 .


Vae KJU, Engstrøm M, Mårtensson G, Løfmark A. Nursing students’ and preceptors’ experiences of assessment during clinical practice: A multilevel repeated – interview study of student-preceptor dyads. Nurse Educ Prac. 2018;30:13–9. https://doi.org/10.1016/j.nepr.2017.11.014 .

Christiansen B, et al. Challenges in the assessment of nursing students in clinical placements: Exploring perceptions among nurse mentors. Nursing Open. 2021;8(3):1069–76.

Bogsti WB, Sønsteby Nordhagen S, Struksnes S. Kan SVIP-modellen bidra til å styrke vurderingskompetanse hos praksisveiledere? [Can the SVIP model help strengthen assessment competence among clinical supervisors?]. In: Christiansen B, Jensen KT, Larsen K, editors. Vurdering av kompetanse i praksisstudier: en vitenskapelig antologi. Oslo: Gyldendal; 2019.

Løfmark A, Mårtensson G, Ugland Vae KJ, Engstrøm M. Lecturers' reflection on the three-part assessment discussions with students and preceptors during clinical practice education: A repeated group discussion study. Nurse Educ Prac. 2019;46:1–6.

Cassidy I, Butler MP, Quillinan B, et al. Preceptors’ views of assessing nursing students using a competency based approach. Nurse Educ Pract. 2012;12:346–51.

Wu XV, Enskar K, Pua LH, Heng DGN, Wang W. Clinical nurse leaders’ and academics’ perspectives in clinical assessment of final-year nursing students: A qualitative study. Nurs Health Sci. 2017;19(3):287–93. https://doi.org/10.1111/nhs.12342 .

Laugaland KA, et al. Nursing students’ experience with clinical placement in nursing homes. A focus group study. BMC Nursing. 2021;20:159.

Adamson E, King L, Foy L, McLeod M, Traynor J, Watson W, et al. Feedback in clinical practice: Enhancing the students’ experience through action research. Nurse Educ Pract. 2018;31:48–53, ISSN 1471-5953. https://doi.org/10.1016/j.nepr.2018.04.012 .

Cant R, Ryan C, Cooper S. Nursing students' evaluation of clinical practice placements using the Clinical Learning Environment, Supervision and Nurse Teacher scale – A systematic review. Nurs Educ Today. 2021;104:104983. https://doi.org/10.1016/j.nedt.2021.104983 .

Billett S, Noble C, Sweet L. Pedagogically-rich activities in hospital work: Handovers, ward rounds and team meetings. In: Delany C, Molloy E, editors. Learning and Teaching in Clinical Contexts: A Practical Guide. Australia: Elsevier; 2018.

Moroney T, Gerdtz M, Brockenshire N, et al. Exploring the contribution of clinical placement to student learning: A sequential mixed methods study. Nurs Educ Today. 2022;113. https://doi.org/10.1016/j.nedt.2022.105379 .

DeWalt KM, DeWalt BR. Participant Observation: A Guide for Fieldworkers. Plymouth: AltaMira Press; 2011.

Dyar KL. Qualitative inquiry in nursing: Creating rigor. Nurs Forum. 2022;57(1):187–200. https://doi.org/10.1111/nuf.12661 .

Patton MQ. Qualitative research & evaluation methods. 3rd ed. London: SAGE; 2002.

Polit DF, Beck CT. Nursing research: Generating and assessing evidence for nursing practice. Philadelphia: Lippincott Williams & Wilkins; 2018.

Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.

Lincoln YS, Guba EG. Naturalistic inquiry. CA: Sage Publication; 1985.


Summers JA. Developing Competencies in the Novice Nurse Educator: An Integrative Review, Teaching and Learning in Nursing. 2017; 12(4): 263-276, ISSN 1557-3087, https://doi.org/10.1016/j.teln.2017.05.001 .

Helminen K, Tossavainen K, Turunen H. Assessing clinical practice of student nurses: views of teachers, mentors and students. Nurse Educ. Today. 2014;34(8):1161–6.

Passmore H, Chenery-Morris S. Exploring the value of the tripartite assessment of students in pre-registration midwifery education: a review of the evidence. Nurse Educ Prac. 2014;14:92–7.

Price L, Hastie L, Duffy K, Ness V, McCallum J. Supporting students in clinical practice: pre-registration nursing students' views on the role of the lecturer. Nurse Educ Today. 2011;31:780–4.

Papathanasiou JV, Tsaras K, Sarafis P. Views and perceptions of nursing students on their clinical learning environment: Teaching and learning. Nurse Educ Today. 2014;34(1):57–60.

Scheel LS, Peters MDJ, Møbjerg ACM. Reflection in the training of nurses in clinical practice setting: a scoping review protocol. JBI Evid Synth. 2017;15(12):2871–80.

Keeping-Burke L, McCloskey R, Donovan C, et al. Nursing students’ experiences with clinical education in residential aged care facilities: a systematic review of qualitative evidence. JBI Evid Synth. 2020;18(5):986–1018. https://doi.org/10.11124/JBISRIR-D-19-00122 .

Heath C, Hindmarsh J, Luff P. Video in Qualitative Research: Analyzing Social Interaction in Everyday Life. Los Angeles: Sage; 2010.

Hammersley M, Atkinson P. Ethnography: Principles in practice. 3rd ed. Milton Park, Oxon: Routledge; 2007.


Acknowledgements

We express our sincere appreciation to all participants who made this study possible and thank them for their willingness to be observed.

This work is supported by The Research Council of Norway (RCN) grant number 273558. The funder had no role in the design of the project, data collection, analysis, interpretation of data or in writing and publication of the manuscript.

This study is part of a larger project: Aiming for quality in nursing home: rethinking clinical supervision and assessment of nursing students in clinical practice (QUALinCLINstud) [ 19 ].

Author information

Authors and Affiliations

SHARE- Centre for Resilience in Healthcare, Faculty of Health Sciences, University of Stavanger, Kjell Arholms gate 41, N-4036, Stavanger, Norway

Ingunn Aase, Kristin Akerjordet, Christina T. Frøiland & Kristin A. Laugaland

School of Psychology, Faculty of the Arts, Social Sciences & Humanities, University of Wollongong, Wollongong, NSW, Australia

Kristin Akerjordet

School of Nursing, Midwifery and Public health, University of Canberra, Canberra, Australia

Patrick Crookes


Contributions

All authors were responsible for the design, analyses, and interpretation of data. IA worked out the drafts, revised them and completed the submitted version of the manuscript. IA and KL conducted the data collection, and IA, KL and CF did the first analysis. KL, KA, CF and PC contributed comments and ideas throughout the analyses and interpretation of the data, and helped revise the manuscript. All authors gave final approval of the submitted version.

Corresponding author

Correspondence to Ingunn Aase .

Ethics declarations

Ethical approval and consent to participate, consent for publication.

Not applicable.

Competing interests

The authors declare that they have no competing interests.



Aase, I., Akerjordet, K., Crookes, P. et al. Exploring the formal assessment discussions in clinical nursing education: An observational study. BMC Nurs 21 , 155 (2022). https://doi.org/10.1186/s12912-022-00934-x


Received : 21 January 2022

Accepted : 10 June 2022

Published : 16 June 2022

DOI : https://doi.org/10.1186/s12912-022-00934-x


  • Clinical assessment discussions
  • Clinical education
  • Clinical placements in nursing homes
  • First-year nursing student
  • Observational study


formative assessment examples in nursing education

  • Open access
  • Published: 16 May 2024

Competency gap among graduating nursing students: what they have achieved and what is expected of them

  • Majid Purabdollah 1 , 2 ,
  • Vahid Zamanzadeh 2 , 3 ,
  • Akram Ghahramanian 2 , 4 ,
  • Leila Valizadeh 2 , 5 ,
  • Saeid Mousavi 2 , 6 &
  • Mostafa Ghasempour 2 , 4  

BMC Medical Education volume  24 , Article number:  546 ( 2024 )


Nurses’ professional competencies play a significant role in providing safe care to patients. Identifying the acquired and expected competencies in nursing education and the gaps between them can be a good guide for nursing education institutions to improve their educational practices.

In a descriptive-comparative study, students’ perceptions of their acquired competencies and the competencies expected of them from the perspective of Iranian nursing faculties were collected with two equivalent questionnaires consisting of 85 items covering 17 competencies across 5 domains. A cluster sampling technique was used to recruit 721 final-year nursing students and 365 Iranian nursing faculty members. The data were analyzed using descriptive statistics and independent t-tests.

The results of the study showed that the highest scores for students’ acquired competencies and for the competencies expected by nursing faculties were in work readiness and professional development, with means of 3.54 (SD = 0.39) and 4.30 (SD = 0.45), respectively. The lowest score for both groups was in evidence-based nursing care, with means of 2.74 (SD = 0.55) and 3.74 (SD = 0.57), respectively. The comparison of competencies, as viewed by the students and the faculties, showed that the difference between the two groups’ mean scores was significant in all 5 core competencies and 17 sub-core competencies ( P  < .001). Evidence-based nursing care showed the highest mean difference (mean diff = 1.00) and the professional nursing process the lowest (mean diff = 0.70).

The results of the study highlight concerns about the gap between expected and achieved competencies in Iran. Further research is recommended to identify the reasons for the gap between the two and to plan how to reduce it. This will require greater collaboration between healthcare institutions and nursing schools.


Introduction

Nursing competence refers to a set of knowledge, skills, and behaviors that are necessary to successfully perform roles or responsibilities [ 1 ]. It is crucial for ensuring the safe and high-quality care of patients [ 2 , 3 , 4 , 5 ]. However, evaluating nursing competence is challenging due to the complex, dynamic, and multifactorial nature of the clinical environment [ 3 ]. The introduction of nursing competencies and their assessment as a standard measure of clinical performance at the professional level has been highlighted by the American Association of Colleges of Nursing (AACN) [ 6 , 7 ]. As a result, the AACN (2020) introduced competence assessment as an emerging concept in nursing education [ 7 ].

On the other hand, the main responsibility of nursing education is to prepare graduates who have the necessary competencies to provide safe and quality care [ 3 ]. Although it is acknowledged that it is impossible to teach students everything, and that acquiring some competencies requires entering a real clinical setting and gaining work experience [ 8 ], nursing students are nevertheless expected to be competent to ensure patient safety and quality of care after graduation [ 9 ]. Indeed, the World Health Organization (WHO), while expressing concern about the low quality of nursing education worldwide, has recommended investing in nursing education and considers that the future will require nurses who are theoretically and clinically competent [ 5 ]. Despite these efforts, the inadequate preparation of newly graduated nursing students, and doubts about whether the competencies they acquire match what is expected for providing safe care on entering practice, have become a global concern [ 10 , 11 , 12 , 13 ]. The results of studies in this field vary. Amsalu et al. showed that the competence of newly graduated nursing students to provide quality and safe care was not satisfactory [ 14 ]. Some studies have highlighted shortcomings in students’ “soft” skills, such as technical competency, critical thinking, communication, teamwork, helping roles, and professionalism [ 15 ]. Additionally, prior research has indicated that several nursing students have an unrealistic perception of their acquired competencies before entering the clinical setting and report a high level of competence [ 2 ]. In another study, Hickerson et al. showed that the lack of preparation of nursing students is associated with an increase in patient errors and poor patient outcomes [ 16 ]. Some studies have also examined specific nursing competencies separately, such as patient safety [ 17 ], clinical reasoning [ 18 ], interpersonal communication [ 19 ], and evidence-based care competence [ 20 ].

Moreover, the growing need for safe nursing care, the advent of new educational technologies, and the emergence of infectious diseases have increased the necessity of nursing competence. As a result, the nursing profession must be educated to excellence more than ever before [ 5 , 21 , 22 ]. Therefore, students’ self-assessment of their competence levels, as well as nursing managers’ evaluation of the competencies expected of them, is an essential criterion for all healthcare stakeholders, educators, and nursing policymakers to ensure the delivery of safe and effective nursing care [ 9 , 23 , 24 ].

However, studies of nurse managers’ perceptions of the competence of newly graduated nursing students are limited and mostly conducted at the national level; hence, further investigation is needed in this field [ 25 , 26 ]. Some other studies have been carried out according to the context and needs of particular societies [ 3 , 26 , 27 , 28 ]. Studies of students’ self-assessed competencies and of managers’ and academic staff’s assessments of expected competency levels yield differing and sometimes contradictory results, and point to an “academic-clinical gap” between expected and achieved competencies [ 25 , 29 , 30 ]. A review of the literature showed that this gap has existed for four decades, and the current literature shows that it has not changed much over time. Academic and practice settings have also been criticized for training nurses who are not sufficiently prepared to fully engage in patient care [ 1 ]. Hence, nursing managers must understand the competencies expected of newly graduated students, because they have a more complete insight into the healthcare system and the challenges facing the nursing profession. Exploring these gaps can reveal what is needed for the work readiness of nursing graduates and help them develop their competencies for entering the clinical setting [ 1 , 25 ].

Although research has been carried out on this topic in other countries, the educational systems in those countries differ from Iran’s nursing education [ 31 , 32 ]. Iran’s nursing curriculum has tried to prepare nurses who have the necessary competencies to meet the care needs of society. Despite the importance of competence in nursing education, many nursing graduates report feeling unprepared to fulfill the expected competencies and have deficiencies in applying their knowledge and experience in practice [ 33 ]. Firstly, the failure to define and identify the expected competencies in the nursing curriculum of Iran has led to the absence of precise and efficient educational objectives; it is therefore acknowledged that the traditional nursing curriculum of Iran focuses more on the organization of lessons than on competencies [ 34 ]. Secondly, insufficient attention has been given to the scheduling, location, and level of competencies in the nursing curriculum across different semesters [ 35 ]. Thirdly, the focus on a large volume of content rather than on expected competencies leaves nursing graduates struggling to manage complex situations [ 36 ]. Therefore, we should not expect competencies such as critical thinking, clinical judgment, problem-solving, decision-making, management, and leadership from nursing students and graduates in Iran [ 37 ]. Limited research has been conducted in this field in Iran. Studies have explored the cultural competence of nursing students [ 38 ] and psychiatric nurses [ 39 ]. Additionally, the competence priorities of nurses in acute care have been investigated [ 40 ], as well as the competency dimensions of nurses [ 41 ].

In Iran, after receiving their diploma, students take a national exam called the Konkur. Based on the results of this exam, they enter the field of nursing without any aptitude interview or evaluation of individual and social characteristics. The 4-year nursing curriculum in Iran has 130 units, including 22 general, 54 specific, 15 basic sciences, and 39 internship units. In each semester, several workshops are held according to the syllabus [ 42 ]. Instead of expected competencies, a list of general competencies is specified as learning outcomes in the program. Accepted students, based on their rank in the exam and their preferences, enter public or Islamic Azad (non-profit) universities and are trained with a common curriculum. Islamic Azad universities are not supported by government funding and are managed autonomously; this limits their access to specialized human resources and adequate educational settings, and, together with the lower salaries of faculty members compared with the government system, confronts students with serious challenges. Islamic Azad universities must also pay exorbitant fees to medical universities for training students in clinical departments and medical training centers, compounding these universities’ financial problems. In some smaller cities, these financial constraints mean that students train in more limited clinical settings and do not get to practice much of what they have learned in the classroom in the real world of nursing. The evaluation of learners in the courses, according to the curriculum, is based on formative and summative evaluation with teacher-made tests, checklists, clinical assignments, conferences, and logbooks. The accreditation process for nursing schools includes two stages: internal evaluation, conducted by surveying students, professors and heads of educational departments, and external accreditation, conducted by the nursing board. After completing all their courses, students must pass an exam called the “Final”, which each faculty administers without the supervision of an accreditation institution, the national assessment organization or the Ministry of Health, obtaining a score of at least 10 out of 20 in order to graduate.

Therefore, we conducted this comprehensive study as the first study in Iran to investigate the difference between the expected and perceived competence levels of final year nursing students. The study’s theoretical framework is based on Patricia Benner’s “From Novice to Expert” model [ 43 ].

Materials and methods

The present study had the following three objectives:

1. To determine the self-perceived competency levels of final-year nursing students in Iran.

2. To determine the competency levels expected by nursing faculties in Iran.

3. To determine the difference between the competencies expected by nursing faculties and the competencies that final-year nursing students perceive they have achieved.

This study is a descriptive-comparative study.

First, we obtained a list of all nursing schools in the provinces of Iran ( n  = 31) from the Ministry of Health. From 208 universities, 72 nursing schools were randomly selected using two-stage cluster sampling. Within the selected schools, 721 final-year nursing students and 365 nursing faculty members who met the eligibility criteria participated in the study. Final-year nursing students who consented to participate in the study were included. Full-time faculty members with at least 2 years of clinical experience and nurse managers with at least 5 years of clinical education experience were also included. In this study, the nursing managers, in addition to their educational roles in colleges, also hold managerial roles in the field of nursing, including nursing faculty management, membership of the nursing board, curriculum development and review, planning and supervision of nursing education, and evaluation and continuous improvement of nursing education. These selection criteria reflect the significant role that managers play in nursing education and curriculum development [ 44 ]. Non-full-time faculty members and managers without clinical education experience were excluded from the study.
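To make the two-stage cluster sampling concrete, here is a minimal sketch, assuming a simulated sampling frame with hypothetical school sizes; it is not the authors’ actual procedure, and the variable names and per-school counts are illustrative only.

```python
# Illustrative two-stage cluster sampling (not the authors' procedure):
# stage 1 samples nursing schools (clusters), stage 2 samples students within them.
import random

random.seed(42)

# Hypothetical sampling frame: 208 universities/schools, each with a simulated
# number of eligible final-year students (assumption for illustration).
schools = {f"school_{i:03d}": list(range(random.randint(20, 120))) for i in range(208)}

# Stage 1: randomly select 72 nursing schools.
selected_schools = random.sample(sorted(schools), k=72)

# Stage 2: randomly select a fixed number of students within each selected school
# (about 10 per school, echoing the questionnaire distribution described later).
sample = {
    school: random.sample(schools[school], k=min(10, len(schools[school])))
    for school in selected_schools
}

print(sum(len(v) for v in sample.values()), "students sampled from", len(sample), "schools")
```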

The instrument used in this study is a questionnaire developed and psychometrically tested in a doctoral nursing dissertation [ 45 ]. To design the tool, the competencies expected of undergraduate nursing students in Iran and worldwide were first identified through a scoping review using the methodology recommended by the Joanna Briggs Institute (JBI) and supported by the PAGER framework. Summative content analysis, as described by Hsieh and Shannon (2005), was used for the analysis; this included counting and comparing keywords and content, followed by interpretation of textual meaning. In the second step, the results of the first step were used to create the tool statements. The validity of the instrument was then checked through face validity, content validity (determination of the content validity ratio and index), and known-groups validity. Its reliability was checked through internal consistency using Cronbach’s alpha and stability using the test-retest method. The competency questionnaire comprises 85 items covering 17 competencies across 5 domains: “individualized care” (4 competencies with 21 items), “evidence-based nursing care” (2 competencies with 10 items), “professional nursing process” (3 competencies with 13 items), “nursing management” (2 competencies with 16 items), and “work readiness and professional development” (6 competencies with 25 items) [ 45 ]. The Bondy Rating Scale was used to assess the competency items, with ratings ranging from 1 (Dependent) to 5 (Independent) on a 5-point Likert scale [ 46 ]. The first group (nursing students) was asked to indicate the extent to which they had acquired each competency. The second group (nursing faculties) was asked to specify the level to which they expected nursing students to achieve each competency.
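As a minimal sketch of how the 85 item ratings might be aggregated into the five domain scores reported later, the snippet below averages Bondy-scale ratings per domain; the item ordering, function name, and sample responses are assumptions for illustration, not the authors’ scoring code.

```python
# Illustrative sketch (not the authors' code): aggregating 85 Bondy-scale item
# ratings (1 = Dependent ... 5 = Independent) into the five competency domains.
import numpy as np

# Item counts per domain, taken from the questionnaire description above.
DOMAIN_ITEMS = {
    "individualized care": 21,
    "evidence-based nursing care": 10,
    "professional nursing process": 13,
    "nursing management": 16,
    "work readiness and professional development": 25,
}

def domain_scores(responses):
    """Return the mean rating per domain for one respondent.

    `responses` is a list of 85 integers in [1, 5], assumed (for illustration
    only) to be ordered by domain as in DOMAIN_ITEMS.
    """
    assert len(responses) == sum(DOMAIN_ITEMS.values()) == 85
    scores, start = {}, 0
    for domain, n_items in DOMAIN_ITEMS.items():
        scores[domain] = float(np.mean(responses[start:start + n_items]))
        start += n_items
    return scores

# Example: a simulated respondent who rates every item 3 ("needs support").
print(domain_scores([3] * 85))
```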

Data collection

First, the researcher contacted the deans and managers of the selected nursing schools by email to obtain permission. After explaining the aims of the study and the sampling method, we obtained the telephone number of the representative of each group of final-year nursing students as well as the email addresses of the faculty members. The student group representative was then asked to forward the link to the questionnaire to 10 students who were willing to participate in the research. Informed consent for students to participate in the online research was obtained through the questionnaires, while nursing faculty members who met the eligibility criteria received an informed consent form attached to the emailed questionnaire. The informed consent process clarified the study objectives and ensured anonymity, voluntary participation, and the right to revoke consent at any time. An electronic questionnaire was then sent to 900 final-year nursing students and 664 nursing faculty members (from 4 March 2023 to 11 July 2023). Reminder emails were sent to nursing faculty members three times at two-week intervals. The attrition rate in the student group was zero (no incomplete questionnaires); however, four questionnaires from nursing faculty members were discarded because of incomplete responses. Of the 900 questionnaires sent to students and 664 sent to nursing faculties, 721 students and 365 nursing faculty members completed the questionnaire. The response rates were 79% and 66%, respectively.

Data were analyzed using SPSS version 22. Frequencies and percentages were used to report categorical variables, and means and standard deviations were used for quantitative variables. The normality of the quantitative data was confirmed using the Shapiro-Wilk test and skewness statistics. An independent t-test was used to test for differences between the two groups.
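As a rough illustration of this analysis pipeline, the sketch below runs a Shapiro-Wilk normality check and an independent t-test on simulated domain scores for the two groups; the data, the exact sample generation, and the use of SciPy (rather than SPSS, which the authors used) are assumptions for illustration only.

```python
# Illustrative sketch of the reported analysis (Shapiro-Wilk + independent
# t-test), using simulated data and SciPy in place of SPSS.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated "evidence-based nursing care" domain scores on the 1-5 scale,
# loosely matching the reported means/SDs (students 2.74 +/- 0.55,
# faculty 3.74 +/- 0.57); these are not the study data.
students = np.clip(rng.normal(2.74, 0.55, 721), 1, 5)
faculty = np.clip(rng.normal(3.74, 0.57, 365), 1, 5)

# Normality checks (the paper reports Shapiro-Wilk and skewness).
for name, group in [("students", students), ("faculty", faculty)]:
    w, p = stats.shapiro(group)
    print(f"{name}: Shapiro-Wilk W={w:.3f}, p={p:.3f}, skew={stats.skew(group):.2f}")

# Independent-samples t-test for the mean difference between the two groups.
t, p = stats.ttest_ind(faculty, students)
print(f"mean diff = {faculty.mean() - students.mean():.2f}, t = {t:.2f}, p = {p:.4g}")
```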

Data analysis revealed that, of the 721 students, 441 (61.20%) were female. The students’ mean age was 22.50 years (SD = 1.21). Most of the students, 577 (80%), were in their final semester. Of the 365 faculty members, the majority, 253 (69.31%), were female, with a mean age of 44.06 years (SD = 7.46) and an age range of 22–65. The most common academic rank among the nursing faculty members was assistant professor, 156 (21.60%) (Table  1 ).

The results of the study showed that, in both groups, the highest scores achieved by the students and expected by the nursing faculty members were for work readiness and professional development, with means and standard deviations of 3.54 (0.39) and 4.30 (0.45), respectively. The lowest score for both groups was for evidence-based nursing care, with means and standard deviations of 2.74 (0.55) for students and 3.74 (0.57) for nursing faculty members (Table  2 ).

The results also showed that the highest expected competency score from the nursing faculty members’ point of view was on the safety subscale. In other words, faculty members expected nursing students to acquire safety competencies at the highest level and to be able to provide safe care independently according to the rating scale (mean = 4.51, SD = 0.45). The mean score of the competencies achieved by the students was not above 3.77 on any of the subscales, and the highest levels of competency achievement according to the students’ self-reports were for safety competencies (mean = 3.77, SD = 0.51), preventive health services (mean = 3.69, SD = 0.79), values and ethical codes (mean = 3.67, SD = 0.77), and procedural/clinical skills (mean = 3.67, SD = 0.71). The other competency subscales, from the perspective of the two groups, are presented in Table  3 , ordered from highest to lowest score.

The analysis of the core competencies achieved and expected, from both the students’ and the nursing faculty members’ perspectives, revealed a significant difference between the mean scores of the two groups in all five core competencies ( P  < .001); the highest mean difference was for evidence-based care (mean diff = 1.00) and the lowest for the professional nursing process (mean diff = 0.70) (Table  4 ).

Table  5 indicates that there was a significant difference between the mean scores of the students and the nursing faculty members in all 5 core competencies and 17 sub-core competencies ( p  < .001).

The study aimed to determine the difference between nursing students’ self-perceived level of competence and the level of competence expected of them by their nursing faculty members. The results indicate that students scored highest in work readiness and professional development; however, they were not independent in this competency and required support. The National League for Nursing (NLN) recognizes nursing professional development as the goal of nursing education programs [ 47 ]. However, Aguayo-Gonzalez [ 48 ] believes that the appropriate time for professional development is after entering a clinical setting. This theme includes personal characteristics, legality, clinical/procedural skills, patient safety, preventive health services, and mentoring competence. The personality traits of nursing students are strong predictors of coping with nursing stress, as suggested by Imus [ 49 ]. These outcomes reflect changes in students’ individual characteristics during their nursing education. Personality changes, such as developing the patience and persistence required in nursing care and understanding the nurse identity, prepare students for the nursing profession, which is consistent with the studies of Neishabouri et al. [ 50 ]. Although the students demonstrated a higher level of competence in this theme, an examination of the items indicates that they still cannot adapt to the challenges of bedside nursing or use coping techniques. This is a concerning issue that requires attention and resolution. Previous studies have shown that nursing education can be a very stressful experience [ 51 , 52 , 53 ].

Of course, there is no consensus on the definition of professionalism, and the results of studies in this field differ. For example, Akhtar et al. (2013) identified common viewpoints about professionalism held by nursing faculty and students, and four viewpoints emerged: humanists, portrayers, facilitators, and regulators [ 54 ]. The findings of another study showed that nursing students perceive vulnerability, symbolic representation, role modeling, discontent, and professional development as elements of their professionalism [ 55 ]. These differences indicate that there may be numerous contextual variables that affect individuals’ perceptions of professionalism.

The legal aspects of nursing were the next item in this theme that students needed help with. The findings of studies regarding the legal competence of newly graduated nursing students are contradictory: some reported that only one-third of nurse managers were satisfied with the legal competence of newly graduated nursing students [ 56 , 57 ], whereas other studies showed that legality was the highest acquired competence for newly graduated nursing students [ 58 , 59 ]. The results of this study indicate that legality may be a challenge for newly graduated nursing students. Benner [ 43 ] highlighted the significant change for new graduates in that they now have full legal and professional responsibility for the patient. Tong and Epeneter [ 60 ] also reported that facing an ethical dilemma is one of the most stressful factors for new graduates. The inexperience of new graduates cannot, however, reduce the standard of care that patients expect from them [ 60 ]. Legal disputes regarding the duties and responsibilities of nurses have increased with the expansion of their roles, and this is also the case in Iran. Nurses are now held accountable by law for their actions and must be aware of their legal obligations. To provide safe healthcare services, it is essential to be aware of the professional, ethical, and criminal laws related to nursing practice. The nursing profession is accountable for the quality of services delivered to patients from both professional and legal perspectives. It is therefore a valuable finding that nurse managers should support new graduates in dealing better with ethical dilemmas. Strengthening ethical education in nursing schools necessitates integrating real cases and ethical dilemmas into the curriculum; notably, nursing law is missing from Iran’s undergraduate nursing curriculum. By incorporating authentic case studies drawn from clinical practice, nursing schools provide students with opportunities to engage in critical reflection, ethical analysis, and moral deliberation. Such real cases challenge students to apply ethical principles to complex and ambiguous situations, fostering the development of ethical competence and moral sensitivity. Furthermore, ethical reflection and debriefing sessions during clinical experiences enable students to discuss and process ethical challenges encountered in practice, promoting self-awareness, empathy, and professional growth. Overall, by combining theoretical instruction with practical application and the use of real cases, nursing schools can effectively prepare future nurses to navigate ethical dilemmas with integrity and compassion.

However, the theme of evidence-based nursing care was the lowest scoring, indicating that students need help in this area. The findings from studies conducted in this field are varied. A limited number of studies reported that nursing students were competent to implement evidence-based care [ 61 ], while other researchers reported that nursing students’ attitudes toward using evidence-based care to guide clinical decisions were largely negative [ 20 , 62 ]. The principal barriers to implementing evidence-based care are a lack of authority to change patient care policy, slow dissemination of evidence, a lack of time at the bedside to implement evidence [ 10 ], and a lack of knowledge and awareness of the process of searching databases and evaluating research [ 63 ]. While the European Higher Education Area (EHEA) framework and the International Council of Nurses Code of Ethics introduce the ability to identify, critically appraise, and apply scientific information as expected learning outcomes for nursing students [ 64 , 65 ], the variation in findings highlights the complexity of the concept of competence and its assessment [ 23 ]. Evidence-based nursing (EBN) education for nursing students is most beneficial when it incorporates a multifaceted approach. Interactive workshops play a crucial role, providing students with opportunities to critically appraise research articles, identify evidence-based practices, and apply them to clinical scenarios. Simulation-based learning further enhances students’ skills by offering realistic clinical experiences in a safe environment. Additionally, clinical rotations offer invaluable opportunities for students to observe and participate in evidence-based practices under the guidance of experienced preceptors. Journal clubs foster a culture of critical thinking and ongoing learning, in which students regularly review and discuss current research articles. Access to online resources such as databases and evidence-based practice guidelines allows students to stay updated on the latest evidence and best practices. To bridge the gap between clinical practice and academic theory, collaboration between nursing schools and healthcare institutions is essential. This collaboration can involve partnerships to create clinical learning environments that prioritize evidence-based practice, interprofessional education activities to promote collaboration across disciplines, training and support for clinical preceptors, and continuing education opportunities for practicing nurses to strengthen their understanding and application of EBN [ 66 ]. By implementing these strategies, nursing education programs can effectively prepare students to become competent practitioners who integrate evidence-based principles into their clinical practice, ultimately improving patient outcomes.

The study’s findings regarding the second objective showed that nursing faculty members expected students to achieve the highest level of competence in work readiness and professional development, and the lowest in evidence-based nursing care. Studies in this area reveal a lack of clarity about the level of competence of newly graduated nursing students, and confusion about the competencies expected of them has become a major challenge [ 13 , 67 ]. Evidence on nurse managers’ perceptions of newly graduated nursing students’ competence is limited and rather fragmented. There is a clear need for rigorous empirical studies capturing comprehensive views of managers, highlighting the key role of managers in the evaluation of nurse competence [ 1 , 9 ]. Some findings also reported that nursing students lacked competence in primary and specialized care after entering a real clinical setting [ 68 ] and that nursing managers were dissatisfied with the competence of students [ 30 ].

The results of the present study on the third objective confirmed the gap between expected and achieved competencies. The highest mean difference was for evidence-based nursing care, and the lowest for the professional nursing process. The findings from studies in this field vary. For instance, Brown and Crookes [ 13 ] reported that newly graduated nursing students were not independent in at least 26 out of 30 competency domains. Similar studies have also indicated that nursing students need a structured program after graduation to be ready to enter clinical work [ 30 ]. It can be stated that the nursing profession does not have clear expectations of the competencies of newly graduated nursing students, and preparing them for entry into clinical practice is a major challenge for administrators [ 13 ]. These findings can be explained by Duchscher’s concept of transition shock [ 69 ]. It is necessary to support newly graduated nursing students to develop their competence and increase their self-confidence.

An interesting but worrying finding was the low expectations of faculty members and the low scores of students in the theme of evidence-based care, given that nursing students need to keep their competencies up to date to provide safe and high-quality care. The WHO also considers a core competency of nurse educators to be the preparation of effective, efficient, and skilled nurses who can teach the evidence-based learning process and help students apply it clinically [ 44 ]. The teaching of evidence-based nursing care appears to vary across universities, and some clinical faculty members do not have sufficient knowledge to support students. In general, the results of the present study are in line with the Iranian context. Some of the problems identified include a lack of attention to students’ academic aptitude, the lack of a competency-based curriculum, a gap between theory and clinical practice, and challenges in teaching and evaluating the achieved competencies [ 42 ].

Strengths and limitations

The study was conducted at a national level with a sizable sample, and it is one of the first studies in Iran to address the gap between students’ self-perceived competence levels and nursing faculty members’ expected competency levels. Nevertheless, one limitation of the study is the self-report nature of the questionnaire, which may lead to social desirability bias. In addition, the COVID-19 pandemic coincided with the students’ first and second years, which could have affected their educational quality and competencies; the restrictions imposed during the outbreak negatively affected nursing education worldwide.

Acquiring nursing competencies is the final product of nursing education. The current study’s findings suggest the existence of an academic-practice gap, highlighting the need for educators, faculty members, and nursing managers to collaborate in bridging the gap between theory and practice. While nursing students were able to meet some expectations, such as those concerning values and ethical codes, there is still a distance between expectations and reality. In particular, evidence-based care was identified as one of the weaknesses of nursing students. It is recommended that future research investigates the best teaching strategies and more objective assessments of competencies. The findings of this study can be used as a guide for revising undergraduate nursing curricula and for developing curricula based on the competencies expected of nursing students. Nursing managers can identify existing gaps, plan to fill them, and use the findings to support the professionalization of students. This requires the design of educational content and objective assessment tools that address these competencies at different levels throughout the academic semesters. This significant issue necessitates enhanced cooperation between healthcare institutions and nursing schools. Enhancing nursing education requires the implementation of concrete pedagogical strategies to bridge the gap between theoretical knowledge and practical skills. Simulation-based learning emerges as a pivotal approach, offering students immersive experiences in realistic clinical scenarios using high-fidelity simulators [ 70 ]. Interprofessional education (IPE) is also instrumental in fostering collaboration among healthcare professionals and promoting holistic patient care. Strengthening clinical preceptorship programs is essential, with a focus on providing preceptors with formal training and ongoing support to facilitate students’ clinical experiences and transition to professional practice [ 71 ]. Integrating evidence-based practice (EBP) principles throughout the curriculum cultivates critical thinking and inquiry skills among students, while technology-enhanced learning platforms offer innovative ways to engage students and support self-directed learning [ 72 ]. Diverse and comprehensive clinical experiences across various healthcare settings ensure students are prepared for the complexities of modern healthcare delivery. By implementing these practical suggestions, nursing education programs can effectively prepare students to become competent and compassionate healthcare professionals.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Huston C, et al. The academic-practice gap: strategies for an enduring problem. Nurs Forum. 2018.

Meretoja R, Isoaho H, Leino-Kilpi H. Nurse competence scale: development and psychometric testing. J Adv Nurs. 2004;47(2):124–33.


Järvinen T, et al. Nurse educators’ perceptions of factors related to the competence of graduating nursing students. Nurse Educ Today. 2021;101:104884.

Satu KU, et al. Competence areas of nursing students in Europe. Nurse Educ Today. 2013;33(6):625–32.

World Health Organization. State of the world’s nursing 2020: investing in education, jobs and leadership. 2020 [cited 12 June 2023]. https://www.who.int/publications-detail-redirect/9789240003279

Lee W-H, Kim S, An J. Development and evaluation of Korean nurses’ core competency scale (KNCCS). 2017.

American Association of Colleges of Nursing. The essentials: core competencies for professional nursing education… 2020 [cited 2023]. https://www.aacnnursing.org/Portals/0/PDFs/Publications/Essentials-2021.pdf

Gardulf A, et al. The Nurse Professional competence (NPC) scale: self-reported competence among nursing students on the point of graduation. Nurse Educ Today. 2016;36:165–71.

Kajander-Unkuri S, et al. The level of competence of graduating nursing students in 10 European countries-comparison between countries. Nurs Open. 2021;8(3):1048–62.

Labrague LJ, et al. A multicountry study on nursing students’ self-perceived competence and barriers to evidence-based practice. 2019;16(3):236–46.

Visiers-Jiménez L, et al. Clinical learning environment and graduating nursing students’ competence: a multi-country cross-sectional study. Nurs Health Sci. 2021;23(2):398–410.

Herron EK. New graduate nurses’ preparation for recognition and prevention of failure to rescue: a qualitative study. J Clin Nurs. 2018;27(1–2):e390–401.


Brown RA, Crookes PA. What are the ‘necessary’ skills for a newly graduating RN? Results of an Australian survey. BMC Nurs. 2016;15:23.

Amsalu B, et al. Clinical practice competence of Mettu University nursing students: a cross-sectional study. Adv Med Educ Pract. 2020;11:791–8.

Song Y, McCreary LL. New graduate nurses’ self-assessed competencies: an integrative review. Nurse Educ Pract. 2020;45:102801.

Hickerson KA, Taylor LA, Terhaar MF. The Preparation-Practice gap: an Integrative Literature Review. J Contin Educ Nurs. 2016;47(1):17–23.

Huang FF, et al. Self-reported confidence in patient safety competencies among Chinese nursing students: a multi-site cross-sectional survey. BMC Med Educ. 2020;20(1):32.

Hong S, et al. A cross-sectional study: what contributes to nursing students’ clinical reasoning competence? Int J Environ Res Public Health. 2021;18(13).

Mcquitty D. Teaching interpersonal communication concepts to increase awareness and reduce health disparities (presenter: DaKysha Moore, NCAT, United States). Patient Educ Couns. 2023;109:25–6.

Lam CK, Schubert CF, Herron EK. Evidence-based practice competence in nursing students preparing to transition to practice. Worldviews Evid Based Nurs. 2020;17(6):418–26.

Lake MA. What we know so far: COVID-19 current clinical knowledge and research. Clin Med (Lond). 2020;20(2):124–7.

Shustack L. Integrating Google Earth in Community Health nursing courses: preparing globally aware nurses. Nurse Educ. 2020;45(2):E11–2.

Zeleníková R, et al. Self-assessed competence of final-year nursing students. Nurs Open. 2023;10(7):4607–18.

Hyun A, Tower M, Turner C. Exploration of the expected and achieved competency levels of new graduate nurses. J Nurs Manag. 2020;28(6):1418–31.

Kukkonen P, et al. Nurse managers’ perceptions of the competence of newly graduated nurses: a scoping review. J Nurs Manag. 2020;28(1):4–16.

Nilsson J, et al. Nurse professional competence (NPC) assessed among newly graduated nurses in higher educational institutions in Europe. 2019;39(3):159–67.

Kajander-Unkuri S, et al. Students’ self-assessed competence levels during nursing education continuum: a cross-sectional survey. Int J Nurs Educ Scholarsh. 2020;17(1).

Salminen L, et al. The competence of nurse educators and graduating nurse students. Nurse Educ Today. 2021;98:104769.

Numminen O, et al. Newly graduated nurses’ competence and individual and organizational factors: a multivariate analysis. J Nurs Scholarsh. 2015;47(5):446–57.

Södersved Källestedt ML, et al. Perceptions of managers regarding prerequisites for the development of professional competence of newly graduated nurses: a qualitative study. J Clin Nurs. 2020;29(23–24):4784–94.

Ezzati E, et al. Exploring the social accountability challenges of nursing education system in Iran. BMC Nurs. 2023;22(1):7.

Farsi Z, et al. Comparison of Iran’s nursing education with developed and developing countries: a review on descriptive-comparative studies. BMC Nurs. 2022;21(1):105.

Salem OA, et al. Competency based nursing curriculum: establishing the standards for nursing competencies in higher education. 2018;5(11):1–8.

Noohi E, Ghorbani-Gharani L, Abbaszadeh A. A comparative study of the curriculum of undergraduate nursing education in Iran and selected renowned universities in the world. Strides Dev Med Educ. 2015;12(3):450–71.

Rassouli M, Zagheri Tafreshi M, Esmaeil. Challenges of clinical nursing education in Iran and strategies. J Clin Excellence. 2014;2(1):11–22.

Riazi S, et al. Understanding gaps and needs in the undergraduate nursing curriculum in Iran: a prelude to design a competency-based curriculum model. Payesh (Health Monitor). 2020;19(2):145–58.

Adib HM, Mazhariazad F. Nursing bachelor’s education program in Iran and UCLA: a comparative study. 2019.

Farokhzadian J, et al. Using a model to design, implement, and evaluate a training program for improving cultural competence among undergraduate nursing students: a mixed methods study. BMC Nurs. 2022;21(1):85.

Sargazi O, et al. Improving the professional competency of psychiatric nurses: results of a stress inoculation training program. Psychiatry Res. 2018;270:682–7.

Faraji A, et al. Evaluation of clinical competence and its related factors among ICU nurses in Kermanshah-Iran: a cross-sectional study. Int J Nurs Sci. 2019;6(4):421–5.

Atashzadeh-Shoorideh F, et al. Essential dimensions of professional competency examination in Iran from academic and clinical nurses’ perspective: a mixed-method study. J Educ Health Promot. 2021;10:414.

Purabdollah M, et al. Comparison of the Iranian and Scandinavian bachelor of nursing curriculum (Sweden): a scoping review. J Educ Health Promot. 2023;12(1):389.

Benner P. Using the Dreyfus model of skill acquisition to describe and interpret skill acquisition and clinical judgment in nursing practice and education. Bull Sci Technol Soc. 2004;24(3):188–99.

World Health Organization. Nurse educator core competencies. Geneva, Switzerland: World Health Organization; 2016 [cited 23 June 2023]. https://www.who.int/hrh/nursing_midwifery/nurse_educator050416.pdf

Purabdollah M, et al. Competencies expected of undergraduate nursing students: a scoping review. Nurs Open. 2023;10(12):7487–508.

Bondy KN. Clinical evaluation of student performance: the effects of criteria on accuracy and reliability. Res Nurs Health. 1984;7(1):25–33.

National League for Nursing. Outcomes for graduates of practical/vocational, diploma, associate degree, baccalaureate, master’s, practice doctorate, and research doctorate programs in nursing. New York; 2010. https://api.semanticscholar.org/CorpusID:68620057

Aguayo-González M, Castelló-Badía M, Monereo-Font C. Critical incidents in nursing academics: discovering a new identity. Rev Bras Enferm. 2015;68(2):195–202, 219–27.

Scott Imus F. Nurse anesthesia students’ personality characteristics and academic performance: a big five personality model perspective. J Nurs Educ Pract. 2019;9:47–55.

Neishabouri M, Ahmadi F, Kazemnejad A. Iranian nursing students’ perspectives on transition to professional identity: a qualitative study. Int Nurs Rev. 2017;64(3):428–36.

Zheng S, et al. New nurses’ experience during a two year transition period to clinical practice: a phenomenological study. Nurse Educ Today. 2023;121:105682.

Aryuwat P, et al. An integrative review of resilience among nursing students in the context of nursing education. Nurs Open. 2023;10(5):2793–818.

Ayaz-Alkaya S, Simones J. Nursing education stress and coping behaviors in Turkish and the United States nursing students: a descriptive study. Nurse Educ Pract. 2022;59:103292.

Akhtar-Danesh N, et al. Perceptions of professionalism among nursing faculty and nursing students. West J Nurs Res. 2013;35(2):248–71.

Keeling J, Templeman J. An exploratory study: student nurses’ perceptions of professionalism. Nurse Educ Pract. 2013;13(1):18–22.

Nilsson J, et al. Development and validation of a new tool measuring nurses self-reported professional competence–the nurse professional competence (NPC) scale. Nurse Educ Today. 2014;34(4):574–80.

Berkow S, et al. Assessing new graduate nurse performance. J Nurs Adm. 2008;38(11):468–74.

Lofmark A, Smide B, Wikblad K. Competence of newly-graduated nurses–a comparison of the perceptions of qualified nurses and students. J Adv Nurs. 2006;53(6):721–8.

Park E, Choi J. Attributes associated with person-centered care competence among undergraduate nursing students. Res Nurs Health. 2020;43(5):511–9.

Tong V, Epeneter BJ. A comparative study of newly licensed registered nurses’ stressors: 2003 and 2015. J Contin Educ Nurs. 2018;49(3):132–40.

Florin J, et al. Educational support for research utilization and capability beliefs regarding evidence-based practice skills: a national survey of senior nursing students. J Adv Nurs. 2012;68(4):888–97.

Reid J, et al. Enhancing utility and understanding of evidence based practice through undergraduate nurse education. BMC Nurs. 2017;16:58.

Phillips JM, Cullen D. Improving the adoption of evidence-based practice through RN-to-BSN education. J Contin Educ Nurs. 2014;45(10):467–72.

Stievano A, Tschudin V. The ICN code of ethics for nurses: a time for revision. Int Nurs Rev. 2019;66(2):154–6.

Ministry of Education and Research. Norwegian qualifications framework: levels and learning outcome descriptors… 2011 [cited 23 June 2023; https://www.regjeringen.no/contentassets/e579f913fa1d45c2bf2219afc726670b/nkr.pdf .

Chen Q, et al. Differences in evidence-based nursing practice competencies of clinical and academic nurses in China and opportunities for complementary collaborations: a cross‐sectional study. J Clin Nurs. 2023;32(13–14):3695–706.

Missen K, McKenna L, Beauchamp A. Registered nurses’ perceptions of new nursing graduates’ clinical competence: a systematic integrative review. Nurs Health Sci. 2016;18(2):143–53.

Leonardsen AL, et al. Nurses’ perspectives on technical skill requirements in primary and tertiary healthcare services. Nurs Open. 2020;7(5):1424–30.

Duchscher JEB. Transition shock: the initial stage of role adaptation for newly graduated registered nurses. J Adv Nurs. 2009;65(5):1103–13.

Ajemba MN, Ikwe C, Iroanya JC. Effectiveness of simulation-based training in medical education: assessing the impact of simulation-based training on clinical skills acquisition and retention: a systematic review. World J Adv Res Reviews. 2024;21(1):1833–43.

Krystallidou D, et al. Interprofessional education for healthcare professionals. A BEME realist review of what works, why, for whom and in what circumstances in undergraduate health sciences education: BEME Guide No. 83. Med Teach. 2024:1–18.

Sun Y, et al. Critical thinking abilities among newly graduated nurses: a cross-sectional survey study in China. Nurs Open. 2023;10(3):1383–92.


Acknowledgements

The authors extend their gratitude to all the nursing students and faculties who took part in this study.

This article is part of a research project approved and financially supported by the Deputy of Research and Technology of Tabriz University of Medical Sciences.

Author information

Authors and Affiliations

Department of Nursing, Khoy University of Medical Sciences, Khoy, Iran

Majid Purabdollah

Medical Education Research Center, Health Management and Safety Promotion Research Institute, Tabriz University of Medical Sciences, Tabriz, Iran

Majid Purabdollah, Vahid Zamanzadeh, Akram Ghahramanian, Leila Valizadeh, Saeid Mousavi & Mostafa Ghasempour

Department of Medical-Surgical Nursing, School of Nursing and Midwifery, Shahid Beheshti University of Medical Sciences, Tehran, Iran

Vahid Zamanzadeh

Department of Medical Surgical Nursing, Faculty of Nursing and Midwifery, Tabriz University of Medical Sciences, Tabriz, Iran

Akram Ghahramanian & Mostafa Ghasempour

Department of Pediatric Nursing, School of Nursing and Midwifery, Shahid Beheshti University of Medical Sciences, Tehran, Iran

Leila Valizadeh

Department of Epidemiology and Biostatistics, Assistant Professor of Biostatistics, School of Health, Tabriz University of Medical Sciences, Tabriz, Iran

Saeid Mousavi


Contributions

M P: conceptualized the study, data collection, analysis and interpretation, drafting of manuscript; V Z: conceptualized the study, analysis and interpretation, drafting of manuscript; LV: conceptualized the study, data collection and analysis, manuscript revision; A Gh: conceptualized the study, data collection, analysis, and drafting of manuscript; S M: conceptualized the study, analysis, and drafting of manuscript; M Gh: data collection, analysis, and interpretation, drafting of manuscript; All authors read and approved the final manuscript.

Ethics declarations

Ethics approval and consent to participate.

All the participants voluntarily participated in this study and provided written informed consent. The study was approved by the ethics committee of the Tabriz University of Medical Sciences (Ethical Code: IR.TBZMED.REC.1400.791) and all methods were performed in accordance with relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Purabdollah, M., Zamanzadeh, V., Ghahramanian, A. et al. Competency gap among graduating nursing students: what they have achieved and what is expected of them. BMC Med Educ 24 , 546 (2024). https://doi.org/10.1186/s12909-024-05532-w


Received : 23 November 2023

Accepted : 07 May 2024

Published : 16 May 2024

DOI : https://doi.org/10.1186/s12909-024-05532-w


Keywords

  • Nursing education
  • Self-assessment
  • Nursing students
  • Professional competency

BMC Medical Education

ISSN: 1472-6920
