• Research article
  • Open access
  • Published: 08 June 2021

Comparing formative and summative simulation-based assessment in undergraduate nursing students: nursing competency acquisition and clinical simulation satisfaction

  • Oscar Arrogante 1 ,
  • Gracia María González-Romero 1 ,
  • Eva María López-Torre 1 ,
  • Laura Carrión-García 1 &
  • Alberto Polo 1  

BMC Nursing, volume 20, Article number: 92 (2021)


Abstract

Formative and summative evaluation are widely employed in simulation-based assessment. The aims of our study were to evaluate the acquisition of nursing competencies through clinical simulation in undergraduate nursing students and to compare their satisfaction with this methodology under these two evaluation strategies.

Two hundred and eighteen undergraduate nursing students participated in a cross-sectional study using a mixed methods design. MAES© (self-learning methodology in simulated environments) sessions were developed to assess students by formative evaluation, and Objective Structured Clinical Examination (OSCE) sessions were conducted to assess students by summative evaluation. Simulated scenarios recreated clinical cases of critical patients. Students’ performance in all simulated scenarios was assessed using checklists. A validated questionnaire was used to evaluate satisfaction with clinical simulation. Quantitative data were analysed using IBM SPSS Statistics version 24.0, whereas qualitative data were analysed using ATLAS.ti version 8.0.

Most nursing students showed adequate clinical competence. Satisfaction with clinical simulation was higher when students were assessed using formative evaluation. The main complaints of students assessed by summative evaluation were the reduced time for performing the simulated scenarios and increased anxiety during their clinical performance.

The best way to address these complaints about summative evaluation is to orient students to the simulated environment. We recommend combining both evaluation strategies in simulation-based assessment: providing students with feedback in summative evaluation, and evaluating the achievement of learning outcomes in formative evaluation.


Background

The use of clinical simulation has increased exponentially over the last few years and has gained acceptance in nursing education. Simulation-based education (SBE) is considered an effective educational methodology for nursing students to achieve the competencies needed for their professional future [ 1 – 5 ]. In addition, simulation-based educational programmes have been shown to be more useful than traditional teaching methodologies [ 4 , 6 ]. As a result, most nursing faculties are integrating this methodology into their study plans [ 7 ]. SBE has the potential to shorten the learning curve for students, strengthen the integration of theoretical knowledge with clinical practice, identify students’ deficient areas, foster the acquisition of communication and technical skills, improve patient safety, standardise the curriculum and teaching contents, and offer observation of real-time clinical decision-making [ 5 , 6 , 8 , 9 ].

SBE offers an excellent opportunity to perform not only observed competency-based teaching but also the assessment of these competencies. Simulation-based assessment (SBA) is aimed at evaluating various professional skills, including knowledge, technical and clinical skills, communication, and decision-making, as well as higher-order competencies such as patient safety and teamwork skills [ 1 – 4 , 10 ]. Compared with traditional assessment methods (e.g. written or oral tests), SBA offers the opportunity to evaluate actual performance in an environment similar to ‘real’ clinical practice, to assess multidimensional professional competencies, and to present standard clinical scenarios to all students [ 1 – 4 , 10 ].

The main SBA strategies are formative and summative evaluation. Formative evaluation is conducted to establish students’ progression during the course [ 11 ]. This evaluation strategy helps educators improve students’ deficient areas and test their knowledge [ 12 ]. Employing this strategy, educators give students feedback about their performance; subsequently, students self-reflect to evaluate their learning and determine their deficient areas. In this sense, formative evaluation includes an ideal phase for achieving the purposes of this strategy: the debriefing [ 13 ]. The International Nursing Association for Clinical Simulation and Learning (INACSL) defines debriefing as a reflective process immediately following the simulation-based experience in which ‘participants explore their emotions and question, reflect, and provide feedback to one another’. Its aim is ‘to move toward assimilation and accommodation to transfer learning to future situations’ [ 14 ]. Debriefing is therefore a basic component for learning to be effective after the simulation [ 15 , 16 ]. Furthermore, MAES© (the Spanish initials of self-learning methodology in simulated environments) is a clinical simulation methodology created to perform formative evaluations [ 17 ]. MAES© makes it possible to evaluate the nursing competencies acquired by several nursing students at the same time. It combines other active learning methodologies, such as self-directed learning, problem-based learning, peer education and simulation-based learning. Specifically, students acquire and develop competencies through self-directed learning, as they voluntarily choose the competencies to learn. Furthermore, this methodology encourages students to be the protagonists of their learning process, since they can choose the case they want to study, design the clinical simulation scenario and, finally, participate actively during the debriefing phase [ 17 ].
This methodology meets all the requirements defined by the INACSL Standards of Best Practice [ 18 ]. Compared to traditional simulation-based learning (where simulated clinical scenarios are designed by the teaching team and led by facilitators), the MAES© methodology (where simulated clinical scenarios are designed and led by students) provides nursing students with a better learning process and clinical performance [ 19 ]. Currently, the MAES© methodology is used in clinical simulation sessions with nursing students in some universities, not only in Spain but also in Norway, Portugal and Brazil [ 20 ].

In contrast, summative evaluation is used to establish the learning outcomes achieved by students at the end of the course [ 11 ]. This evaluation strategy helps educators evaluate students’ learning, the competencies they have acquired and their academic achievement [ 12 ]. This assessment is essential in the education process to determine readiness and competence for certification and accreditation [ 10 , 21 ]. Accordingly, the Objective Structured Clinical Examination (OSCE) is commonly conducted in SBA as a summative evaluation of students’ clinical competence [ 22 ], and it has been used by educational institutions as a valid and reliable method of assessment. An OSCE most commonly consists of a ‘round-robin’ of multiple short testing stations, in each of which students must demonstrate defined clinical competencies while educators evaluate their performance against predetermined criteria using a standardised marking scheme, such as a checklist. Students rotate through these stations, where educators assess their performance in clinical examination, technical skills, clinical judgement and decision-making skills during the nursing process [ 22 , 23 ]. This strategy of summative evaluation incorporates actors performing as simulated patients; the OSCE therefore allows students’ clinical competence to be assessed in a realistic simulated clinical environment. After the simulated scenarios, this evaluation strategy provides educators with an opportunity to give students constructive feedback according to the results achieved on the checklist [ 10 , 21 – 23 ].

Although both evaluation strategies are widely employed in SBA, there is scarce evidence about possible differences in satisfaction with clinical simulation when nursing students are assessed using formative versus summative evaluation. Considering the high satisfaction with formative evaluation perceived by our students during the implementation of the MAES© methodology, we wondered whether this satisfaction would be similar if the same simulated clinical scenarios were used in a summative evaluation, and, if not, why it would differ between the two strategies of SBA. Therefore, the aims of our study were to evaluate the acquisition of nursing competencies through clinical simulation in undergraduate nursing students and to compare their satisfaction with this methodology under the two strategies of SBA, formative and summative evaluation. Our research hypothesis was that both strategies of SBA are effective for acquiring nursing competencies, but that student satisfaction with formative evaluation is higher than with summative evaluation.

Methods

Study design and setting

We conducted a descriptive cross-sectional study using a mixed methods approach, analysing both quantitative and qualitative data. The study was conducted from September 2018 to May 2019 in a University Centre of Health Sciences in Madrid (Spain). This centre offers Physiotherapy and Nursing Degrees.

Participants

The study included 3rd-year undergraduate students (106 students who participated in MAES© sessions within the subject ‘Nursing care for critical patients’) and 4th-year undergraduate students (112 students who participated in OSCE sessions within the subject ‘Supervised clinical placements – Advanced level’) of the Nursing Degree. It should be noted that the 4th-year undergraduate students had completed all their clinical placements and had to pass the OSCE sessions to obtain their certification.

Clinical simulation sessions

To assess the clinical performance of 3rd-year undergraduate students using formative evaluation, MAES© sessions were conducted. This methodology consists of 6 elements delivered in a minimum of two sessions [ 17 ]:

  • Team selection and creation of group identity: students are grouped into teams and create their own identity.
  • Voluntary choice of subject of study: each team freely chooses a topic that will serve as inspiration for the design of a simulation scenario.
  • Establishment of a baseline and programming of the skills to be acquired through brainstorming: each team decides what they know about the subject, what they want to learn from it, and the clinical and non-technical skills they would like to acquire with the case they have chosen.
  • Design of a clinical simulation scenario in which the students practise the skills to be acquired: each team commits to designing a scenario in the simulation room.
  • Execution of the simulated clinical experience: a team different from the one that designed the case enters the high-fidelity simulation room and has the simulation experience.
  • Debriefing and presentation of the acquired skills: in addition to analysing the performance of the participants in the scenario, the students explain what they learned during the design of the case and look for evidence of the learning objectives.

Alternatively, OSCE sessions were developed to assess the clinical performance of 4th-year undergraduate students using summative evaluation. Both MAES© and OSCE sessions recreated critically ill patients with diagnoses of exacerbation of Chronic Obstructive Pulmonary Disease (COPD), acute coronary syndrome, haemorrhage in a postsurgical patient, and severe traumatic brain injury.

The implementation of all MAES© and OSCE sessions followed the Standards of Best Practice recommended by the INACSL [ 14 , 24 – 26 ]. In this way, all the stages included in a high-fidelity session were accomplished: pre-briefing, briefing, simulated scenario, and debriefing. Specifically, a session with all nursing students was carried out 1 week before the OSCE stations to establish a psychologically safe learning environment and familiarise students with this summative evaluation. In this pre-briefing phase, we implemented several activities based on the practices recommended by the INACSL Standards Committee [ 24 , 25 ] and by Rudolph, Raemer, and Simon [ 27 ] for establishing a psychologically safe context. Although traditional OSCEs do not usually include a debriefing phase, we decided to include this phase in all OSCEs carried out in our university centre, since we consider it highly relevant to nursing students’ learning process and their imminent professional career.

The critically ill patient’s role was played by an advanced simulator mannequin (Nursing Anne® by Laerdal Medical AS) in all simulated scenarios. A confederate (a health professional who acts in a simulated scenario) performed the role of a registered nurse or a physician who could help students as required; occasionally, this confederate performed the role of a relative of the critically ill patient. Nursing students formed work teams of 2–3 students in all MAES© and OSCE sessions. Each work team formed in the MAES© sessions received a brief description of the simulated scenario 2 months in advance, and the students had to propose 3 NIC (Nursing Interventions Classification) interventions [ 28 ], with 5 related nursing activities for each of them, to resolve the critical situation. In contrast, the critical situation was presented to each work team formed in the OSCE sessions only 2 min before entering the simulated scenario. During all simulated experiences, professors monitored and controlled the simulation with a dedicated computer program in a control room. All simulated scenarios lasted 10 min.

After each simulated clinical scenario concluded, a debriefing was carried out to give students feedback about their performance. Debriefings in MAES© sessions were conducted according to the Gather, Analyse, and Summarise (GAS) method, a structured debriefing model developed by Phrampus and O’Donnell [ 29 ]. Following this method, the debriefing questions used were: What went well during your performance? What did not go so well during your performance? How can you do better next time? Additionally, MAES© includes an expository phase in its debriefings, in which the students who performed the simulated scenario present the contributions of scientific evidence to its resolution [ 17 ]. Each debriefing lasted 20 min in MAES© sessions. In contrast, debriefings in OSCE sessions lasted 10 min and were carried out according to the Plus-Delta debriefing tool [ 30 ], a technique recommended when time is limited. Consequently, the debriefing questions were reduced to two: What went well during your performance? What did not go so well during your performance? Within these debriefings, professors communicated to students the total score obtained on the corresponding checklist. After all debriefings, students completed the questionnaires to evaluate their satisfaction with clinical simulation. In OSCE sessions, students reported their satisfaction only with the scenario performed, which formed part of a series of clinical stations.

In summary, Table  1 shows the required elements for formative and summative evaluation according to the Standards of Best Practice for participant evaluation recommended by the INACSL [ 18 ]. Our MAES© and OSCE sessions accomplished these required elements.

Instruments

Clinical performance

Professors assessed students’ clinical performance using dichotomous (‘Yes’/‘No’) checklists. In MAES© sessions, the checklists were based on the 5 most important nursing activities of the NIC interventions [ 28 ] selected by the nursing students. Table  2 shows the checklist of the most important NIC interventions and their related nursing activities selected by nursing students for the exacerbation of Chronic Obstructive Pulmonary Disease (COPD) simulated scenario. In contrast, the checklists for evaluating OSCE sessions were based on nursing activities selected by consensus among professors, registered nurses, and clinical placement mentors. These nursing activities were divided into 5 categories: nursing assessment, clinical judgement/decision-making, clinical management/nursing care, communication/interpersonal relationships, and teamwork. Table  3 shows the checklist of nursing activities that nursing students had to perform in the COPD simulated scenario. During the execution of all simulated scenarios, professors checked whether the participants performed the selected nursing activities.

Clinical simulation satisfaction

To determine the satisfaction with clinical simulation perceived by nursing students, the Satisfaction Scale Questionnaire with High-Fidelity Clinical Simulation [ 31 ] was administered after each clinical simulation session. This questionnaire consists of 33 items rated on a 5-point Likert scale ranging from ‘strongly disagree’ to ‘totally agree’. The items are divided into 8 scales: simulation utility, characteristics of cases and applications, communication, self-reflection on performance, increased self-confidence, relation between theory and practice, facilities and equipment, and negative aspects of simulation. Cronbach’s α values for the scales ranged from .914 to .918, and the total scale presents satisfactory internal consistency (Cronbach’s α = .920). The questionnaire includes a final open question for any opinion or suggestion that participating students wish to share after the simulation experience.
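As a point of reference for the internal-consistency figures above, Cronbach’s α can be computed directly from an items-by-respondents response matrix. The sketch below uses only the Python standard library; the four-item, five-respondent matrix is invented for illustration (the actual questionnaire has 33 items).

```python
from statistics import pvariance


def cronbach_alpha(items):
    """Cronbach's alpha for a list of per-item response lists, aligned by
    respondent. alpha = (k/(k-1)) * (1 - sum(item variances)/variance(totals)),
    using population variances, as is conventional."""
    k = len(items)
    respondents = list(zip(*items))              # one tuple per respondent
    total_scores = [sum(r) for r in respondents]
    item_var_sum = sum(pvariance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(total_scores))


# Hypothetical responses: 4 Likert items (1-5) answered by 5 respondents
responses = [
    [4, 5, 4, 5, 3],
    [4, 4, 5, 5, 3],
    [5, 5, 4, 4, 3],
    [4, 5, 5, 5, 2],
]
alpha = cronbach_alpha(responses)  # ≈ .88, 'satisfactory' by the usual .7 rule
```

Values above roughly .7 are usually read as acceptable internal consistency, which is why the reported α = .920 is described as satisfactory.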

Data analysis

Quantitative data were analysed using IBM SPSS Statistics version 24.0 for Windows (IBM Corp., Armonk, NY, USA). Descriptive statistics were calculated for demographic data, clinical performance, and satisfaction with clinical simulation. Differences between the two groups in the dependent variables were analysed using independent t-tests, and Cohen’s d was calculated to estimate the effect size of each comparison. All tests were two-sided, with statistical significance set at α = 0.05. Subsequently, all students’ opinions and comments were analysed using ATLAS.ti version 8.0 (Scientific Software Development GmbH, Berlin, Germany). All the information contained in these qualitative data was stored, managed, classified and organised with this software, and reiterated words, sentences or ideas were grouped into themes using thematic analysis [ 32 ]. Students’ opinions and comments are preceded by the letter ‘S’ (student) and numbered.
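The two statistics used for the group comparison (the independent-samples t statistic and Cohen’s d with a pooled standard deviation) can be reimplemented without SPSS. The sketch below uses only the standard library; the satisfaction scores are invented for illustration, not taken from the study data.

```python
from math import sqrt
from statistics import mean, stdev


def pooled_variance(a, b):
    """Pooled sample variance of two independent samples."""
    na, nb = len(a), len(b)
    return ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)


def cohens_d(a, b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    return (mean(a) - mean(b)) / sqrt(pooled_variance(a, b))


def t_statistic(a, b):
    """Student's t for two independent samples (equal variances assumed)."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / sqrt(pooled_variance(a, b) * (1 / na + 1 / nb))


# Hypothetical per-student satisfaction scores (1-5 Likert means)
formative = [4.6, 4.8, 4.5, 4.9, 4.7, 4.4]
summative = [4.1, 4.0, 4.3, 3.9, 4.2, 4.0]

d = cohens_d(formative, summative)    # positive: formative group scored higher
t = t_statistic(formative, summative)
```

With real data one would also compute the p-value for `t` (e.g. via `scipy.stats.ttest_ind`); the pure-Python version above shows only the two quantities reported in the tables.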

Results

A total of 218 nursing students participated in the study (106 students were trained through MAES© sessions, whereas 112 students were assessed through OSCE sessions). The age of the students ranged from 20 to 43 years (mean = 23.28; SD = 4.376). Most students were women ( n  = 184; 84.4%).

In the formative evaluation, professors verified that 93.2% of students adequately selected both the NIC interventions and their related nursing activities for the resolution of the simulated clinical scenario. These professors then verified that 85.6% of the students who participated in each simulated scenario performed the nursing activities they had previously selected. In the summative evaluation, students obtained total scores ranging from 65 to 95 points (mean = 7.43; SD = .408).

Descriptive data for each scale of the satisfaction with clinical simulation questionnaire, t-tests, and effect sizes (d) of the differences between the two evaluation strategies are shown in Table  4 . Statistically significant differences were found between the two evaluation strategies for all scales of the questionnaire. Students’ satisfaction with clinical simulation was higher on all scales when they were assessed using formative evaluation, including the ‘negative aspects of simulation’ scale, where the students perceived fewer negative aspects. The effect size of these differences was large (including for the total score of the questionnaire) (Cohen’s d > .8), except for the ‘facilities and equipment’ scale, whose effect size was medium (Cohen’s d > .5) [ 33 ].

Table  5 shows the descriptive data, t-tests, and effect sizes (d) of the differences between the two evaluation strategies for each item of the clinical simulation satisfaction questionnaire. Statistically significant differences were found between the two evaluation strategies for all items of the questionnaire, except for the items ‘I have improved communication with the family’, ‘I have improved communication with the patient’, and ‘I lost calm during any of the cases’. Students’ satisfaction with clinical simulation was higher in formative evaluation sessions for most items, except for the item ‘simulation has made me more aware/worried about clinical practice’, for which students reported being more aware and worried in summative evaluation sessions. Most effect sizes of these differences were small or medium (Cohen’s d values ranged from .238 to .709) [ 33 ]. The largest effect sizes were obtained for the items ‘timing for each simulation case has been adequate’ (d = 1.107), ‘overall satisfaction of sessions’ (d = .953), and ‘simulation has made me more aware/worried about clinical practice’ (d = -.947). In contrast, the smallest effect sizes were obtained for the items ‘simulation allows us to plan the patient care effectively’ (d = .238) and ‘the degree of cases difficulty was appropriate to my knowledge’ (d = .257).
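The small/medium/large labels used above follow Cohen’s conventional cut-offs (d ≈ .2, .5, .8) [ 33 ]. A minimal helper makes this mapping explicit, applied here to some of the d values reported in the text (the item names are shortened paraphrases, not the exact questionnaire wording).

```python
def effect_size_label(d):
    """Label an effect size by Cohen's conventional thresholds.
    The sign only indicates direction, so the magnitude is used."""
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"


# d values reported for individual questionnaire items (Table 5)
labels = {
    "timing adequate": effect_size_label(1.107),
    "overall satisfaction": effect_size_label(0.953),
    "more aware/worried": effect_size_label(-0.947),
    "plan care effectively": effect_size_label(0.238),
}
```

Note that d = -.947 is still a large effect: the negative sign simply means the summative group scored higher on that item.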

In addition, participating students provided 74 opinions or suggestions expressed through short comments. After the thematic analysis, most students’ comments related to 3 main themes: the utility of the clinical simulation methodology (S45: ‘it has been a useful activity and it helped us to recognize our mistakes and fixing knowledge’; S94: ‘to link theory to practice is essential’), spending more time on this methodology (S113: ‘I would ask for more practices of this type’; S178: ‘I feel very happy, but it should be done more frequently’), and its integration into other subjects (S21: ‘I consider this activity should be implemented in more subjects’; S64: ‘I wish there were more simulations in more subjects’). Finally, students’ comments about the summative evaluation sessions included 2 further main themes: the limited time of the simulation experience (S134: ‘time is short’; S197: ‘there is no time to perform activities and assess properly’) and students’ anxiety (S123: ‘I was very nervous because people were evaluating me around’; S187: ‘I was more nervous than in a real situation’).

Discussion

The most significant results of our study are the acquisition of nursing competencies through clinical simulation by nursing students and the difference in their satisfaction with this methodology depending on the evaluation strategy employed.

Firstly, professors in this study verified that most students acquired the nursing competencies needed to resolve each clinical situation, performing the majority of the nursing activities required for the resolution of each MAES© session and OSCE station. This result confirms the findings of other studies that have demonstrated nursing competency acquisition by nursing students through clinical simulation [ 34 , 35 ], and specifically of nursing competencies related to critical patient management [ 9 , 36 ].

Secondly, students’ satisfaction under both evaluation strategies can be considered high for most items of the questionnaire, given their mean scores (quite close to the maximum of the response scale). The high level of satisfaction with clinical simulation expressed by nursing students in this study is also congruent with the empirical evidence, which confirms that this methodology is a useful tool for their learning process [ 6 , 31 , 37 – 40 ].

However, satisfaction with clinical simulation was higher when students were assessed using formative evaluation. The main complaints of students assessed by summative evaluation were the reduced time for performing the simulated scenarios and increased anxiety during their clinical performance. Reduced time is a frequent complaint of students in OSCEs [ 23 , 41 ] and in clinical simulation generally [ 5 , 6 , 10 ]. In this study, professors, registered nurses, and clinical placement mentors had tested all simulated scenarios and their checklists, and confirmed that the time allotted was sufficient for their resolution. Another criticism of summative evaluation is increased anxiety. Several studies have demonstrated that students’ anxiety increases during clinical simulation [ 42 , 43 ], and anxiety is considered the main disadvantage of clinical simulation [ 1 – 10 ]; it may negatively influence students’ learning process [ 42 , 43 ]. Although current simulation methodology can mimic the real clinical environment to a great degree, it remains questionable whether students’ performance in a testing environment truly represents their ability. Test anxiety might increase in an unfamiliar testing environment; difficulty handling unfamiliar technology (i.e. a monitor, defibrillator, or other devices that differ from those used in the examinee’s specific clinical environment) or even the need to ‘act as if’ in an artificial scenario (i.e. talking to a simulator, or examining a ‘patient’ knowing he/she is an actor or a mannequin) might all compromise examinees’ performance. The best way to reduce these complaints is to orient students to the simulated environment [ 10 , 21 – 23 ].

Nevertheless, it should be noted that the diversity in the satisfaction scores obtained in our study could be explained not by the choice of assessment strategy but by the different purposes of formative and summative assessment. In this sense, there is a component of anxiety that is intrinsic to summative assessment, which must certify the acquisition of competencies [ 10 – 12 , 21 ]. This aspect is not present in formative assessment, which is intended to help students understand the distance to the expected level of competence, without penalising them [ 10 – 12 ].

Both SBA strategies allow educators to evaluate students’ knowledge and its application in a clinical setting. However, formative evaluation is identified as ‘assessment for learning’ and summative evaluation as ‘assessment of learning’ [ 44 ]. Using formative evaluation, educators’ responsibility is to ensure not only what students are learning in the classroom but also the outcomes of their learning process [ 45 ]. In this sense, formative assessment by itself is not enough to determine educational outcomes [ 46 ]; consequently, a checklist for evaluating students’ clinical performance was included in the MAES© sessions. Conversely, educators cannot correct students’ performance using summative evaluation alone [ 45 ]; Gavriel [ 44 ] suggests providing students with feedback in this SBA strategy, and a debriefing phase was therefore included after each OSCE session in our study. The significance of debriefing recognised by the nursing students in our study is also congruent with most of the evidence found [ 13 , 15 , 16 , 47 ]. Nursing students appreciate feedback about their performance during the simulation experience and consequently consider debriefing the most rewarding phase of clinical simulation [ 5 , 6 , 48 ]. In addition, nursing students in our study expressed that they could learn from their mistakes during debriefing. Learning from error is one of the main advantages of clinical simulation shown in several studies [ 5 , 6 , 49 ], and mistakes should be considered learning opportunities rather than sources of embarrassment or punitive consequences [ 50 ].

Furthermore, the nursing students who participated in our study considered the practical utility of clinical simulation another advantage of this teaching methodology, a result congruent with previous studies [ 5 , 6 ]. Specifically, our students indicated that this methodology is useful for bridging the gap between theory and practice [ 51 , 52 ]. In this sense, clinical simulation has been proven to reduce this gap, shortening the distance between the classroom and clinical practice [ 5 , 6 , 51 , 52 ]. Therefore, as this teaching methodology relates theory to practice, it helps nursing students prepare for their clinical placements and future careers. According to Benner’s model of skill acquisition in nursing [ 53 ], nursing students become competent nurses through this learning process, acquiring a degree of safety and clinical experience before starting their professional careers [ 54 ]. Although our research indicates that clinical simulation is a useful methodology for acquiring and learning competencies mainly related to the adequate management and nursing care of critically ill patients, this learning process could be extended to most nursing care settings and their required nursing competencies.

Limitations and future research

Although the checklists employed in OSCEs have been criticised for their subjective construction [ 10 , 21 – 23 ], ours were constructed by expert consensus among nursing professors, registered nurses and clinical placement mentors. In addition, the self-report questionnaire used to evaluate clinical simulation satisfaction has strong validity. All simulated scenarios were similar in the MAES© and OSCE sessions (same clinical situations, patients, actors and number of participating students), although the debriefing method employed after them differed because of the reduced time available in OSCE sessions. Furthermore, it should be pointed out that the two groups of students involved in our study were from different course years and were exposed to different strategies of SBA. Future studies should therefore compare nursing students’ satisfaction with both strategies of SBA in the same group of students, using the same debriefing method. Finally, future research should combine formative and summative evaluation for assessing the clinical performance of undergraduate nursing students in simulated scenarios.

Conclusions

Students need to be given feedback about their clinical performance when they are assessed using summative evaluation, and their achievement of learning outcomes needs to be evaluated when they are assessed using formative evaluation. Consequently, we recommend combining both evaluation strategies in SBA. Although students expressed high satisfaction with the clinical simulation methodology, they perceived reduced time and increased anxiety when assessed by summative evaluation; the best solution is to orient students to the simulated environment.

Availability of data and materials

The datasets analysed during the current study are available from the corresponding author on reasonable request.

Martins J, Baptista R, Coutinho V, Fernandes M, Fernandes A. Simulation in nursing and midwifery education. Copenhagen: World Health Organization Regional Office for Europe; 2018.

Google Scholar  

Cant RP, Cooper SJ. Simulation-based learning in nurse education: systematic review. J Adv Nurs. 2010;66:3–15.

Article   PubMed   Google Scholar  

Chernikova O, Heitzmann N, Stadler M, Holzberger D, Seidel T, Fischer F. Simulation-based learning in higher education: a meta-analysis. Rev Educ Res. 2020;90:499–541.


Kim J, Park JH, Shin S. Effectiveness of simulation-based nursing education depending on fidelity: a meta-analysis. BMC Med Educ. 2016;16:152.


Ricketts B. The role of simulation for learning within pre-registration nursing education—a literature review. Nurse Educ Today. 2011;31:650–4.


Shin S, Park JH, Kim JH. Effectiveness of patient simulation in nursing education: meta-analysis. Nurse Educ Today. 2015;35:176–82.

Bagnasco A, Pagnucci N, Tolotti A, Rosa F, Torre G, Sasso L. The role of simulation in developing communication and gestural skills in medical students. BMC Med Educ. 2014;14:106.

Oh PJ, Jeon KD, Koh MS. The effects of simulation-based learning using standardized patients in nursing students: a meta-analysis. Nurse Educ Today. 2015;35:e6–e15.

Stayt LC, Merriman C, Ricketts B, Morton S, Simpson T. Recognizing and managing a deteriorating patient: a randomized controlled trial investigating the effectiveness of clinical simulation in improving clinical performance in undergraduate nursing students. J Adv Nurs. 2015;71:2563–74.

Ryall T, Judd BK, Gordon CJ. Simulation-based assessments in health professional education: a systematic review. J Multidiscip Healthc. 2016;9:69–82.


Billings DM, Halstead JA. Teaching in nursing: a guide for faculty. 4th ed. St. Louis: Elsevier; 2012.

Nichols PD, Meyers JL, Burling KS. A framework for evaluating and planning assessments intended to improve student achievement. Educ Meas Issues Pract. 2009;28:14–23.

Cant RP, Cooper SJ. The benefits of debriefing as formative feedback in nurse education. Aust J Adv Nurs. 2011;29:37–47.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation℠ Simulation Glossary. Clin Simul Nurs. 2016;12:S39–47.

Dufrene C, Young A. Successful debriefing-best methods to achieve positive learning outcomes: a literature review. Nurse Educ Today. 2014;34:372–6.

Levett-Jones T, Lapkin S. A systematic review of the effectiveness of simulation debriefing in health professional education. Nurse Educ Today. 2014;34:e58–63.

Díaz JL, Leal C, García JA, Hernández E, Adánez MG, Sáez A. Self-learning methodology in simulated environments (MAES©): elements and characteristics. Clin Simul Nurs. 2016;12:268–74.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation℠: Participant Evaluation. Clin Simul Nurs. 2016;12:S26–9.

Díaz Agea JL, Megías Nicolás A, García Méndez JA, Adánez Martínez MG, Leal CC. Improving simulation performance through self-learning methodology in simulated environments (MAES©). Nurse Educ Today. 2019;76:62–7.

Díaz Agea JL, Ramos-Morcillo AJ, Amo Setien FJ, Ruzafa-Martínez M, Hueso-Montoro C, Leal-Costa C. Perceptions about the self-learning methodology in simulated environments in nursing students: a mixed study. Int J Environ Res Public Health. 2019;16:4646.


Oermann MH, Kardong-Edgren S, Rizzolo MA. Summative simulated-based assessment in nursing programs. J Nurs Educ. 2016;55:323–8.

Harden RM, Gleeson FA. Assessment of clinical competence using an objective structured clinical examination (OSCE). Med Educ. 1979;13:41–54.


Mitchell ML, Henderson A, Groves M, Dalton M, Nulty D. The objective structured clinical examination (OSCE): optimising its value in the undergraduate nursing curriculum. Nurse Educ Today. 2009;29:394–404.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation℠ Simulation Design. Clin Simul Nurs. 2016;12:S5–S12.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation℠ Facilitation. Clin Simul Nurs. 2016;12:S16–20.

INACSL Standards Committee. INACSL Standards of Best Practice: Simulation℠ Debriefing. Clin Simul Nurs. 2016;12:S21–5.

Rudolph JW, Raemer D, Simon R. Establishing a safe container for learning in simulation: the role of the presimulation briefing. Simul Healthc. 2014;9:339–49.

Butcher HK, Bulechek GM, Dochterman JMM, Wagner C. Nursing Interventions Classification (NIC). 7th ed. St. Louis: Elsevier; 2018.

Phrampus PE, O’Donnell JM. Debriefing using a structured and supported approach. In: Levine AI, DeMaria S, Schwartz AD, Sim AJ, editors. The comprehensive textbook of healthcare simulation. New York: Springer; 2013. p. 73–84.


Decker S, Fey M, Sideras S, Caballero S, Rockstraw L, Boese T, et al. Standards of best practice: simulation standard VI: the debriefing process. Clin Simul Nurs. 2013;9:S26–9.

Alconero-Camarero AR, Gualdrón-Romero A, Sarabia-Cobo CM, Martínez-Arce A. Clinical simulation as a learning tool in undergraduate nursing: validation of a questionnaire. Nurse Educ Today. 2016;39:128–34.

Mayan M. Essentials of qualitative inquiry. Walnut Creek: Left Coast Press, Inc.; 2009.

Cohen L, Manion L, Morrison K. Research methods in education. 7th ed. London: Routledge; 2011.

Lapkin S, Levett-Jones T, Bellchambers H, Fernandez R. Effectiveness of patient simulation manikins in teaching clinical reasoning skills to undergraduate nursing students: a systematic review. Clin Simul Nurs. 2010;6:207–22.

McGaghie WC, Issenberg SB, Petrusa ER, Scalese RJ. Revisiting “a critical review of simulation-based medical education research: 2003-2009”. Med Educ. 2016;50:986–91.

Abelsson A, Bisholt B. Nurse students learning acute care by simulation - focus on observation and debriefing. Nurse Educ Pract. 2017;24:6–13.

Bland AJ, Topping A, Wood BA. Concept analysis of simulation as a learning strategy in the education of undergraduate nursing students. Nurse Educ Today. 2011;31:664–70.

Franklin AE, Burns P, Lee CS. Psychometric testing on the NLN student satisfaction and self-confidence in learning, design scale simulation, and educational practices questionnaire using a sample of pre-licensure novice nurses. Nurse Educ Today. 2014;34:1298–304.

Levett-Jones T, McCoy M, Lapkin S, Noble D, Hoffman K, Dempsey J, et al. The development and psychometric testing of the satisfaction with simulation experience scale. Nurse Educ Today. 2011;31:705–10.

Zapko KA, Ferranto MLG, Blasiman R, Shelestak D. Evaluating best educational practices, student satisfaction, and self-confidence in simulation: a descriptive study. Nurse Educ Today. 2018;60:28–34.

Kelly MA, Mitchell ML, Henderson A, Jeffrey CA, Groves M, Nulty DD, et al. OSCE best practice guidelines-applicability for nursing simulations. Adv Simul. 2016;1:10.

Cantrell ML, Meyer SL, Mosack V. Effects of simulation on nursing student stress: an integrative review. J Nurs Educ. 2017;56:139–44.

Nielsen B, Harder N. Causes of student anxiety during simulation: what the literature says. Clin Simul Nurs. 2013;9:e507–12.

Gavriel J. Assessment for learning: a wider (classroom-researched) perspective is important for formative assessment and self-directed learning in general practice. Educ Prim Care. 2013;24:93–6.

Taras M. Summative and formative assessment. Act Learn High Educ. 2008;9:172–82.

Wunder LL, Glymph DC, Newman J, Gonzalez V, Gonzalez JE, Groom JA. Objective structured clinical examination as an educational initiative for summative simulation competency evaluation of first-year student registered nurse anesthetists’ clinical skills. AANA J. 2014;82:419–25.

Neill MA, Wotton K. High-fidelity simulation debriefing in nursing education: a literature review. Clin Simul Nurs. 2011;7:e161–8.

Norman J. Systematic review of the literature on simulation in nursing education. ABNF J. 2012;23:24–8.

King A, Holder MG Jr, Ahmed RA. Errors as allies: error management training in health professions education. BMJ Qual Saf. 2013;22:516–9.

Higgins M, Ishimaru A, Holcombe R, Fowler A. Examining organizational learning in schools: the role of psychological safety, experimentation, and leadership that reinforces learning. J Educ Change. 2012;13:67–94.

Hope A, Garside J, Prescott S. Rethinking theory and practice: Pre-registration student nurses experiences of simulation teaching and learning in the acquisition of clinical skills in preparation for practice. Nurse Educ Today. 2011;31:711–7.

Lisko SA, O’Dell V. Integration of theory and practice: experiential learning theory and nursing education. Nurs Educ Perspect. 2010;31:106–8.

Benner P. From novice to expert: excellence and power in clinical nursing practice. Menlo Park: Addison-Wesley Publishing; 1984.


Nickless LJ. The use of simulation to address the acute care skills deficit in pre-registration nursing students: a clinical skill perspective. Nurse Educ Pract. 2011;11:199–205.


Acknowledgements

The authors appreciate the collaboration of nursing students who participated in the study.

STROBE statement

All methods were carried out in accordance with the 22-item STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) checklist for reporting cross-sectional studies.

Funding

The authors have no sources of funding to declare.

Author information

Authors and affiliations.

Fundación San Juan de Dios, Centro de Ciencias de la Salud San Rafael, Universidad de Nebrija, Paseo de La Habana, 70, 28036, Madrid, Spain

Oscar Arrogante, Gracia María González-Romero, Eva María López-Torre, Laura Carrión-García & Alberto Polo


Contributions

OA: Conceptualization, Data Collection, Formal Analysis, Writing – Original Draft, Writing - Review & Editing, Supervision; GMGR: Conceptualization, Data Collection, Writing - Review & Editing; EMLT: Conceptualization, Writing - Review & Editing; LCG: Conceptualization, Data Collection, Writing - Review & Editing; AP: Conceptualization, Data Collection, Formal Analysis, Writing - Review & Editing, Supervision. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Oscar Arrogante .

Ethics declarations

Ethics approval and consent to participate.

The research committee of the Centro Universitario de Ciencias de la Salud San Rafael-Nebrija approved the study (P_2018_012). In accordance with ethical standards, all participants provided written informed consent and received written information about the study and its goals. Additionally, written informed consent for audio-video recording was obtained from all participants.

Consent for publication

Not applicable.

Competing interests

The authors declare they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Arrogante, O., González-Romero, G.M., López-Torre, E.M. et al. Comparing formative and summative simulation-based assessment in undergraduate nursing students: nursing competency acquisition and clinical simulation satisfaction. BMC Nurs 20 , 92 (2021). https://doi.org/10.1186/s12912-021-00614-2


Received : 09 February 2021

Accepted : 17 May 2021

Published : 08 June 2021

DOI : https://doi.org/10.1186/s12912-021-00614-2


Keywords

  • Clinical competence
  • High-fidelity simulation training
  • Nursing students

BMC Nursing

ISSN: 1472-6955


Assessment and Evaluation in Nursing Education: A Simulation Perspective

  • First Online: 29 February 2024


  • Loretta Garvey 7 &
  • Debra Kiegaldie 8  

Part of the book series: Comprehensive Healthcare Simulation ((CHS))


Assessment and evaluation are used extensively in nursing education. In many instances, these terms are often used interchangeably, which can create confusion, yet key differences are associated with each.

Assessment in undergraduate nursing education is designed to ascertain whether students have achieved their potential and have acquired the knowledge, skills, and abilities set out within their course. Assessment aims to understand and improve student learning and must be at the forefront of curriculum planning to ensure assessments are well aligned with learning outcomes. In the past, the focus of assessment has often been on a single assessment. However, it is now understood that we must examine the whole system or program of assessment within a course of study to ensure integration and recognition of all assessment elements to holistically achieve overall course aims and objectives. Simulation is emerging as a safe and effective assessment tool that is increasingly used in undergraduate nursing.

Evaluation, however, is more summative in that it evaluates student attainment of course outcomes and their views on the learning process to achieve those outcomes. Program evaluation takes assessment of learning a step further in that it is a systematic method to assess the design, implementation, improvement, or outcomes of a program. According to Frye and Hemmer, student assessments (measurements) can be important to the evaluation process, but evaluation measurements come from various sources (Frye and Hemmer. Med Teach 34:e288–e99, 2012). Essentially, program evaluation is concerned with the utility of its process and results (Alkin and King. Am J Eval 37:568–79, 2016). The evaluation of simulation as a distinct program of learning is an important consideration when designing and implementing simulation into undergraduate nursing. This chapter will examine assessment and program evaluation from the simulation perspective in undergraduate nursing to explain the important principles, components, best practice approaches, and practical applications that must be considered.


Masters GN. Reforming Education Assessment: Imperatives, principles, and challenges. Camberwell: ACER Press; 2013.


MacLellan E. Assessment for Learning: the differing perceptions of tutors and students. Assess Eval High Educ. 2001;26(4):307–18.


Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9):S63–7.


Alinier G. Nursing students’ and lecturers’ perspectives of objective structured clinical examination incorporating simulation. Nurse Educ Today. 2003;23(6):419–26.


Norcini J, Anderson MB, Bollela V, Burch V, Costa MJ, Duvivier R, et al. 2018 Consensus framework for good assessment. Med Teach. 2018;40(11):1102–9.

Biggs J. Constructive alignment in university teaching. HERDSA Rev High Educ. 2014;1:5–22.

Hamdy H. Blueprinting for the assessment of health care professionals. Clin Teach. 2006;3(3):175–9.

Welch S. Program evaluation: a concept analysis. Teach Learn Nurs. 2021;16(1):81–4.

Frye AW, Hemmer PA. Program evaluation models and related theories: AMEE Guide No. 67. Med Teach. 2012;34(5):e288–e99.

Johnston S, Coyer FM, Nash R. Kirkpatrick's evaluation of simulation and debriefing in health care education: a systematic review. J Nurs Educ. 2018;57(7):393–8.

ACGME. Glossary of Terms: Accreditation Council for Graduate Medical Education; 2020. https://www.acgme.org/globalassets/pdfs/ab_acgmeglossary.pdf .

Shadish WR, Luellen JK. History of evaluation. In: Mathison S, editor. Encyclopedia of evaluation. Sage; 2005. p. 183–6.

Lewallen LP. Practical strategies for nursing education program evaluation. J Prof Nurs. 2015;31(2):133–40.

Kirkpatrick DL. Evaluation of training. In: Craig RL, Bittel LR, editors. Training and development handbook. New York: McGraw-Hill; 1967.

Cahapay M. Kirkpatrick model: its limitations as used in higher education evaluation. Int J Assess Tools Educ. 2021;8(1):135–44.

Yardley S, Dornan T. Kirkpatrick's levels and education 'evidence'. Med Educ. 2012;46(1):97–106.

Kirkpatrick J, Kirkpatrick W. An introduction to the new world Kirkpatrick model. Kirkpatrick Partners; 2021.

Bhatia M, Stewart AE, Wallace A, Kumar A, Malhotra A. Evaluation of an in-situ neonatal resuscitation simulation program using the new world Kirkpatrick model. Clin Simul Nurs. 2021;50:27–37.

Lippe M, Carter P. Using the CIPP model to assess nursing education program quality and merit. Teach Learn Nurs. 2018;13(1):9–13.

Kardong-Edgren S, Adamson KA, Fitzgerald C. A review of currently published evaluation instruments for human patient simulation. Clin Simul Nurs. 2010;6(1):e25–35.

Solutions S. Reliability and Validity; 2022

Rauta S, Salanterä S, Vahlberg T, Junttila K. The criterion validity, reliability, and feasibility of an instrument for assessing the nursing intensity in perioperative settings. Nurs Res Pract. 2017;2017:1048052.


Jeffries PR, Rizzolo MA. Designing and implementing models for the innovative use of simulation to teach nursing care of ill adults and children: a national, multi-site, multi-method study (summary report). Sci Res. 2006;

Unver V, Basak T, Watts P, Gaioso V, Moss J, Tastan S, et al. The reliability and validity of three questionnaires: The Student Satisfaction and Self-Confidence in Learning Scale, Simulation Design Scale, and Educational Practices Questionnaire. Contemp Nurse. 2017;53(1):60–74.

Franklin AE, Burns P, Lee CS. Psychometric testing on the NLN Student Satisfaction and Self-Confidence in Learning, Simulation Design Scale, and Educational Practices Questionnaire using a sample of pre-licensure novice nurses. Nurse Educ Today. 2014;34(10):1298–304.

Guise J-M, Deering SH, Kanki BG, Osterweil P, Li H, Mori M, et al. Validation of a tool to measure and promote clinical teamwork. Simul Healthc. 2008;3(4)

Millward LJ, Jeffries N. The team survey: a tool for health care team development. J Adv Nurs. 2001;35(2):276–87.


Author information

Authors and affiliations.

Federation University Australia, University Dr, Mount Helen, VIC, Australia

Loretta Garvey

Holmesglen Institute, Healthscope Hospitals, Monash University, Mount Helen, VIC, Australia

Debra Kiegaldie


Corresponding author

Correspondence to Loretta Garvey .

Editor information

Editors and affiliations.

Emergency Medicine, Icahn School of Medicine at Mount Sinai, Director of Emergency Medicine Simulation, Mount Sinai Hospital, New York, NY, USA

Jared M. Kutzin

School of Nursing, University of California San Francisco, San Francisco, CA, USA

Perinatal Patient Safety, Kaiser Permanente, Pleasanton, CA, USA

Connie M. Lopez

Eastern Health Clinical School, Faculty of Medicine, Nursing & Health Sciences, Monash University, Melbourne, VIC, Australia


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Garvey, L., Kiegaldie, D. (2023). Assessment and Evaluation in Nursing Education: A Simulation Perspective. In: Kutzin, J.M., Waxman, K., Lopez, C.M., Kiegaldie, D. (eds) Comprehensive Healthcare Simulation: Nursing. Comprehensive Healthcare Simulation. Springer, Cham. https://doi.org/10.1007/978-3-031-31090-4_14


DOI : https://doi.org/10.1007/978-3-031-31090-4_14

Published : 29 February 2024

Publisher Name : Springer, Cham

Print ISBN : 978-3-031-31089-8

Online ISBN : 978-3-031-31090-4




Formative Assessment Strategies for Healthcare Educators


Formative assessments are lower-stakes assessments delivered during instruction, 'along the way' so to speak. As an educator, it was always a challenge to identify what my students were understanding, what skills they had acquired, and whether or how I should adjust my teaching strategy to help improve their learning. I'm guessing I am not alone in this. In medical education, the pace is so fast that many instructors feel they cannot spare the time to give assessments along the way, preferring to focus on teaching everything students need for the higher-stakes exams. With medical education being so intense and fast-paced, this is completely understandable. However, there must be a reason so much research supports the effectiveness of administering formative assessments along the way.

One reason formative assessments prove so useful is that they provide meaningful feedback that can be used by both the instructor and the students.

Results from formative assessments should relate directly to the learning objectives established by the instructor, so the results provide trusted feedback for both instructor and student. This is incredibly important. It allows instructors to make immediate adjustments to their teaching strategy, and it helps students develop a more reliable self-awareness of their own learning. Each of these alone is very useful, but combined they can improve student outcomes.

Here are 5 teaching strategies for delivering formative assessments that provide useful feedback opportunities.  

1. Pre-Assessment:

Provides an assessment of students' prior knowledge, helps identify prior misconceptions, and allows instructors to adjust their approach or target certain areas

  • When instructors have feedback from student assessments prior to class, it is easier to tailor the lesson to student needs.
  • Posing questions prior to class can help students focus on what the instructor thinks is important.
  • By assessing students before class, it helps ensure students are more prepared for what learning will take place in class.
  • Pre-assessments can provide more 'in-class' time flexibility: knowing ahead of time which knowledge gaps students have allows the instructor to use class time more flexibly, with fewer 'surprises'.


2. Frequent class assessments:

Provides students with feedback for learning during class, and focuses students on important topics, which helps increase learning gains


  • Adding more formative assessments during class increases student retention.
  • Frequent formative assessments help students stay focused by giving them natural ‘breaks’ from either a lecture or the activity.
  • Multiple formative assessments can provide students with a “road-map” to what the instructor feels is important (i.e. what will appear on summative assessments).
  • By using frequent assessments, the instructor can naturally help students with topic or content transitions during a lecture or activity.
  • The data/feedback from the assessments can help instructors better understand which instructional methods are most effective- in other words, what works and what doesn’t.

3. Guided Study assessments (group or tutorial):

Provides students with opportunities to acquire the information needed to complete the assessment, for example through research or group work, and increases students' self-awareness of their own knowledge gaps


  • Assessments where students are expected to engage in research allow them to develop and use higher-level thinking skills.
  • Guided assessments engage students in active learning either independently or through collaboration with a group.
  • Small group assessments encourage students to articulate their thinking and reasoning, and help them develop self-awareness about what they do and do not yet understand.
  • Tutorial assessments can provide the instructor with real-time feedback on student misconceptions and overall understanding, allowing important decisions about how to teach particular topics.

4. Take-Home assessments:

Allows students to preview the instructor's assessment style, is low-stakes and self-paced so students can engage with the material, and provides the instructor with formative feedback

  • Assessments that students can engage in outside of class give them a 'preview' of the information they will likely need to retrieve again on a summative exam.
  • When students take an assessment at home, the instructor can receive feedback with enough time to adjust the classroom instruction to address knowledge gaps or misconceptions.
  • Take-home assessments can help students develop self-awareness of their own misunderstandings or knowledge gaps.


5. “Bedside” observation:

Informs students in clinical settings of their level of competence and learning, and may improve motivation and participation in clinical activities

  • Real-time formative assessments can provide students with critical feedback related to the skills that are necessary for practicing medicine.
  • On-the-fly assessments can help clinical instructors learn more about student understanding, as well as what changes they might make in their instruction.
  • Formative assessments in a clinical setting can equip clinical instructors with a valuable tool to help them make informed decisions around their teaching and student learning.
  • Bedside assessments provide a standardized way of formatively assessing students in a very unpredictable learning environment.

The challenge for many instructors is often the "how" of delivering formative assessments. Thankfully, formative assessment (and feedback) can be greatly enhanced with educational technology. DaVinci Education's Leo platform provides multiple ways to deliver formative assessments. With Leo's exam feature you can:

  • Assign pre-class, in-class or take-home quizzes
  • Deliver IRATs used during TBL exercises to assess student individual readiness
  • Deliver GRATs used during TBL exercises by using Leo’s digital scratch-off tool to encourage collaboration and assess group readiness
  • Monitor student performance in real-time using Leo’s Monitor Exam feature
  • Customize student feedback options during or following an assessment

References:

  • Burch VC, Seggie JL, Gary NE. Formative assessment promotes learning in undergraduate clinical clerkships. 2006 May. https://www.ncbi.nlm.nih.gov/pubmed/16751919
  • Feedback and formative assessment tools. (n.d.). http://www.queensu.ca/teachingandlearning/modules/assessments/11_s2_03_feedback_and_formative.html
  • Hattie J, Timperley H. The power of feedback. Review of Educational Research. 2007;77:81–112.
  • Heritage M. Formative assessment: an enabler of learning. 2014. http://www.amplify.com/assets/regional/heritage_fa.pdf
  • Magna Publications, Inc. Designing better quizzes: ideas for rethinking your quiz practices. Madison, WI; 2018.
  • Schlegel C. Objective Structured Clinical Examination (OSCE). In: OSCE – Kompetenzorientiert prüfen in der Pflegeausbildung. 2018. p. 1–7. doi:10.1007/978-3-662-55800-3_1



Front Public Health

Applying formative evaluation in the mentoring of student intern nurses in an emergency department

Yan-ru Zhang

1 Department of Nursing, The First Affiliated Hospital of Fujian Medical University, Fuzhou, China

2 Department of Nursing, The First Clinical Medical College of Fujian Medical University, Fuzhou, China

Rong-fang Hu

3 Department of Nursing, The School of Nursing, Fujian Medical University, Fuzhou, China

Tian-yu Liang

Jian-bang Chen

4 Department of Social Work, The School of Health, Fujian Medical University, Fuzhou, China

Yan-hong Xing

Associated data.

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.

To explore the effectiveness of formative evaluation in the mentoring of student nursing interns in an emergency department.

A total of 144 intern nursing students in the emergency department of a tertiary care hospital in Fuzhou were selected as study subjects from July 2020 to February 2021. Adopting a quasi-experimental design, the students were divided into an experiment group (n = 74) and a control group (n = 70) based on their internship rotation periods. Formative evaluation methods, such as in-person interviews, clinical scenario simulations, and clinical operation skills exams, were used in the experiment group, while traditional summative evaluation methods were adopted for the control group. At the end of the internship, a unified examination paper on professional knowledge concerning the emergency department, a cardiopulmonary resuscitation skill assessment, and a self-rating scale of self-directed learning were employed to evaluate the nursing students' professional theoretical performance, clinical practice ability, self-directed learning ability, and academic satisfaction.

The professional theoretical performance, clinical practice ability assessment scores, academic satisfaction, and self-directed learning abilities of the nursing students were significantly higher in the experiment group compared with the control group ( P < 0.05).

The application of formative evaluation during the mentoring of student intern nurses in an emergency department improved their professional theoretical performance, clinical practice skills, academic satisfaction, and self-directed learning abilities.

Introduction

As an important department for treating acute and critically ill patients, the emergency department is characterized by a fast pace and a constant stream of emergency cases, an environment in which both theoretical knowledge and professional technical abilities are required of the nursing staff who work there ( 1 , 2 ). Nursing internship is an important stage in the process of transforming nursing students into professional nursing staff. Accordingly, a good internship evaluation system must be adopted that can assess all the relevant competencies of intern nursing students and ensure that they understand and can master the skills required in an emergency department.

At present, the evaluation method adopted in nursing education in China is primarily summative evaluation; this approach is convenient to implement ( 3 ) but has disadvantages: evaluation is subjective, relies on a single method, and focuses on memory and test-based questions ( 4 ). The purpose of summative evaluation is to rank students and judge outcomes, and it is outward-facing, produced for others to see. It usually involves a single evaluator, generally the examiner, and its feedback is generalized and closed, which may cause nursing students to ignore the details of internship practice. Nursing education should avoid this simplistic evaluation approach as a model of ongoing professional education, where standardization, combined with cognitive immaturity, may lead to professional practice being perceived as “a cookbook practice” ( 5 ).

Formative evaluation is the process of teaching knowledge and understanding the complexity of the integrated learning among students, a process that typically includes self-evaluation by the students themselves, student peer evaluation, and students being evaluated by their teachers; these approaches can help to highlight problems within teaching methods and, accordingly, improve teaching methods over time ( 6 ). Formative evaluation mobilizes the initiative and motivation of both the students and their teachers, provides more timely feedback, promotes student learning, assists in the rational planning of learning schedules, fosters a cooperative spirit, and improves learning efficiency ( 6 – 9 ). When applied to clinical practice, a formative evaluation also provides nurse educators with the opportunity to assess the competencies of students; it enables them to provide timely feedback that is more useful to students as it relates to a broad understanding of the learned knowledge and the transition to professional practice in emergency care ( 10 ).

Research on formative evaluation in the field of nursing in China has mainly focused on the curriculum teaching of undergraduate nursing students and nurses in the field of cardiovascular disease ( 11 – 13 ) while this approach is relatively uncommon in clinical intern teaching. Additionally, incorporating skills-based techniques, evidence-based applications, and improved learning techniques in educational programs, continuously profiling nursing curricula, and integrating multiple formative evaluation methods in nursing education to support and assist students in engaging with constructivist learning methods ( 14 ) also represents new learning directions.

This study explored the use of a formative evaluation system to comprehensively evaluate the knowledge, skills, attitudes, and emotions of nursing students in emergency clinical practice, so as to improve their overall competence and make them truly ready for clinical work, thereby providing a reference for the construction of a formative evaluation system for the clinical internship education of nursing students.

Objects and methods

Study participants

Nursing students who underwent a 4-week internship in the emergency department of a tertiary care hospital in Fuzhou from July 2020 to February 2021 were selected as the study population. Based on the adoption of quasi-experimental study methods, the intern nursing students were grouped according to their internship rotation time. The students who rotated to the emergency department in odd months were allocated to the experiment group and those who rotated to the emergency department in even months were allocated to the control group. There were 74 subjects in the experiment group (71 females and 3 males), including 35 with a bachelor's degree and 39 with a junior college background. There were 70 subjects in the control group (67 females and 3 males), including 34 with a bachelor's degree and 36 with a junior college background. The differences in gender composition ratio and educational background composition ratio between the experiment and the control groups were not statistically significant ( P > 0.05), and the data were comparable.

Study methods

Program preparation stage

The general teaching responsibility system under the leadership of the head nurse was adopted, and the “one-to-one” teaching mode for mentoring nursing practice in the emergency department was conducted by the mentors. The specific requirements for mentors were as follows: Nurses with a junior college degree or above; nurses with a job title of “nurse” or higher; physically fit and dedicated nurses; nurses with 5 years or more clinical experience in emergency nursing, with a solid knowledge base of emergency department nursing, professional clinical operation skills and rich experience in emergency practice, and who had passed the training and assessment qualifications related to hospital mentoring.

Both groups of internship nurses were taught in a one-to-one mode, with clear responsibilities provided to the teaching staff. The nursing students practiced according to the requirements of the internship program in the emergency department, as shown in Table 1 .

The general characteristics of the mentors.

Mentoring and the evaluation methods for the control group

Nursing students in the control group were taught according to the routine mentoring process for intern nursing students in the emergency department ( Figure 1 ) and the details were as follows. When the nursing students rotated to the department, the chief instructor introduced the emergency department environment, a weekly mentoring plan, and clarified the final evaluation objectives and evaluation methods. Bedside demonstrations, on-site lectures, centralized theoretical lectures, case analyses, and discussions were conducted. A mid-term feedback and scenario simulation performance were conducted midway through the internship. In the final week of the internship, a concluding evaluation in the form of a theoretical examination of professional knowledge in emergency medicine and a final assessment of cardiopulmonary resuscitation (CPR) skills were conducted. Before leaving the department, the nursing students completed a self-directed learning evaluation form and a questionnaire on academic satisfaction regarding the internship process.

An external file that holds a picture, illustration, etc.
Object name is fpubh-10-974281-g0001.jpg

Flow chart for mentoring evaluation during the nursing practice in the emergency department (the experiment group/the control group).

Mentoring and evaluation methods in the experiment group

For those in the experiment group, in addition to the routine mentoring process for intern nursing students in the emergency department, a formative evaluation was conducted during mentoring, which comprised four parts and accounted for 60% of their total performance in the emergency department internship ( 15 ). The details were as follows. An online question bank quiz was conducted at the end of the first week of the students' admission to the department (all questions were multiple-choice and were not repeated in the final theoretical examination paper). A focus discussion was conducted by the chief instructor in the second week of the internship, who held in-person interviews with each intern nurse; the intern nurse was instructed to complete the online self-assessment questions before attending the interview. Using the online platform, the chief instructor could gain an understanding of the degree of knowledge mastery of each intern nurse. During the interview, the intern nurse reported on their daily practice to the chief instructor, including aspects such as attendance, operation, and maintaining a learning diary. The chief instructor verbally enquired about emergency department nursing, and the nursing students provided answers immediately. A question-and-answer session was conducted between the nursing student and the chief instructor concerning the problems encountered during the early stage of their internship. At the end of the interview, the chief instructor summarized the performance of the nursing student during the early stage of the internship, praised and reinforced the relevant merits, and pointed out weaknesses that required improvement. Finally, the chief instructor scored each intern nurse (on a percentage basis), based on the assessment score from the online question bank, the interview, and the information they provided about their daily practice; this score was included in the overall evaluation (accounting for 20%).
If the student's first performance was poor and they failed (below 80 points), they could be interviewed again 1 week later and the best grade would be counted in the overall evaluation. Group scenario simulations were conducted during the third week of admission to the department, in which the nursing students performed a scenario based on the specialties of emergency department nursing, which required the analysis of clinical cases encountered during the internship, based on the clinical problems. The nursing students were encouraged to review the relevant literature and learn about new advances in emergency department care. The department provided the teaching props (e.g., dummies and manipulation tools) that were needed for the scenario simulations. The audience comprised the head nurse, the chief instructor, and the nursing student mentors. After completing a scenario, the head nurse and the chief instructor provided comments and encouraged the students to comment on the performances of their peers. Details of the performance were included in the final evaluation (accounting for 20%). An operational skills testing stage was implemented in the third week, during which each mentor assessed the students' mastery of intravenous infusion skills, and the teacher looked for problems in their operational skills and made comments while doing so. The test results were included in the final evaluation and accounted for 20% of the overall evaluation.

Finally, both groups of nursing students underwent theoretical and clinical operation skills assessments before leaving the emergency department. The theoretical knowledge examination was a professional knowledge paper on emergency medicine. The test paper included 45 questions, comprising 35 single-choice questions and 10 multiple-choice questions, and the examination duration was 45 min. The operation examination comprised a CPR skills test based on the basic life support technology standards for CPR, which assessed students on condition observation, clinical operation skills, and humanistic care skills. The operation test duration was 6 min. The exams were conducted by the chief instructor, and grades were calculated on a percentage basis. In the control group, the theoretical performance and the clinical operation performance each accounted for 50% of the total evaluation score. In the experiment group, the theoretical performance and the clinical operation performance each accounted for 20% of the total evaluation grade, with the formative evaluation accounting for the remaining 60%.
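The two weighting schemes described above can be sketched in a few lines of code. This is a minimal illustration under the stated weights; the function names and example scores are assumptions, not data from the study.

```python
# Illustrative sketch of the grading schemes described above; function names
# and the example scores are assumptions, not taken from the study.
def control_total(theory: float, operation: float) -> float:
    """Control group: final theory and CPR operation exams each weigh 50%."""
    return 0.5 * theory + 0.5 * operation

def experiment_total(quiz_interview: float, scenario: float, skills: float,
                     theory: float, operation: float) -> float:
    """Experiment group: three formative parts (20% each, 60% in total)
    plus the final theory exam (20%) and the final operation exam (20%)."""
    return (0.2 * quiz_interview + 0.2 * scenario + 0.2 * skills
            + 0.2 * theory + 0.2 * operation)

print(control_total(85, 85))                           # 85.0
print(round(experiment_total(90, 80, 70, 85, 75), 1))  # 80.0
```

The experiment-group scheme spreads the same total weight across five components, so a single weak exam day counts for at most 20% of the final grade rather than 50%.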

Evaluation indicators

Exam results

The final exam results were compared between the two groups of nursing students, including their theoretical performance and clinical operation skills performance (calculated on a percentage basis). The theoretical performance and the clinical operation performance each accounted for 50% of the total score.

The self-rating scale for self-directed learning

The self-rating scale for self-directed learning (SRSSDL) was developed by Williamson ( 16 ) and translated, cross-culturally adapted, and tested for reliability by Shen and Hu ( 17 ) from Fudan University in 2012. The SRSSDL is a self-assessment scale with 5 dimensions: learning awareness, learning strategies, learning activities, learning evaluation, and interpersonal skills. Each dimension includes 12 items, for a total of 60 items. Items are scored on a 5-point Likert scale, with total scores ranging from 60 to 300 points; higher scores indicate a stronger self-directed learning ability. When the Chinese version of the SRSSDL was tested, the Cronbach's alpha (α) internal consistency coefficients were 0.97 for the scale as a whole and 0.87–0.90 for each of the five dimensions, and the content validity was 0.96. Thus, the SRSSDL had good reliability and validity.
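The SRSSDL scoring rule described above (5 dimensions × 12 Likert items, total range 60–300) can be sketched as follows. This is an illustrative implementation only; the item ordering and variable names are assumptions for the sketch, not the scale's official layout.

```python
# Illustrative scoring of the SRSSDL: 5 dimensions x 12 Likert items (1-5 each),
# giving a total range of 60-300; the item ordering here is an assumption.
DIMENSIONS = ["learning awareness", "learning strategies", "learning activities",
              "learning evaluation", "interpersonal skills"]

def score_srssdl(responses):
    """responses: 60 integers in 1..5, ordered by dimension (12 per dimension).
    Returns the total score and per-dimension subscores."""
    assert len(responses) == 60 and all(1 <= r <= 5 for r in responses)
    subscores = {dim: sum(responses[i * 12:(i + 1) * 12])
                 for i, dim in enumerate(DIMENSIONS)}
    return sum(subscores.values()), subscores

total, subs = score_srssdl([3] * 60)
print(total)  # 180, the midpoint of the 60-300 range
```

Answering every item at the scale minimum (1) or maximum (5) yields the stated bounds of 60 and 300.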

Nursing student internship satisfaction

The undergraduate nursing student academic satisfaction scale (UNSASS), developed by Dennision et al. ( 18 ), was adopted for the nursing student internship satisfaction (NSIS) evaluation; it was revised, translated, and modified by He and Lei ( 19 ) and includes 39 entries in four dimensions: in-class teaching, clinical teaching, the internship program, and support and resources within the program. The clinical teaching dimension comprises 15 entries (entries 10–24), the internship program dimension comprises 12 entries (entries 25–36), and the support and resources dimension comprises 3 entries (entries 37–39). A score of 1 reflected “strongly disagree”, 2 “disagree”, 3 “somewhat agree”, 4 “agree”, and 5 “strongly agree”; higher scores indicated greater satisfaction. The KMO value in the present study was 0.971, and the Cronbach's α coefficients were 0.92 for the scale as a whole and ranged from 0.668 to 0.789 across dimensions, which indicated good reliability.

Data collection methods

Before leaving the department in the fourth week of the internship program, the chief teaching assistant conducted a questionnaire survey using the SRSSDL and UNSASS, with uniform instructions, to assess the self-directed learning ability and academic satisfaction of the nursing students in the experiment and control groups. In both groups, the scales were delivered by the chief teaching assistant on-site and were completed and collected on-site.

Statistical methods

The SPSS Statistics 20.0 software was used to conduct the data analysis. Count data were expressed as numbers and percentages ( n , %) and tested using the chi-square (χ 2 ) or rank-sum tests. Measurement data were expressed as means ± standard deviations ( x ¯ ± s) and tested using the t -test; P < 0.05 was considered statistically significant.
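The analysis was run in SPSS, but the same two tests can be sketched in Python for readers who want to reproduce the approach. The contingency table uses the gender composition reported above; the exam scores below are simulated for illustration only and are not the study's data.

```python
# A minimal sketch of the tests described: a chi-square test on the reported
# gender composition and an independent-samples t-test on hypothetical scores.
import numpy as np
from scipy import stats

# Gender composition (female, male) in the experiment and control groups.
table = np.array([[71, 3],
                  [67, 3]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# Hypothetical percentage scores for the two groups (simulated, not real data).
rng = np.random.default_rng(0)
experiment = rng.normal(88, 5, 74)
control = rng.normal(84, 5, 70)
t_stat, p_t = stats.ttest_ind(experiment, control)

print(f"chi-square p = {p_chi:.3f}, t-test p = {p_t:.4f}")
```

As in the study, a chi-square p above 0.05 indicates comparable group composition at baseline, while the t-test compares the mean scores of the two groups.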

A comparison of the exam results between the two groups of intern nursing students

The exam results were significantly higher among the nursing students in the experiment group compared with those in the control group ( P < 0.05); the theoretical and clinical operation performances were also significantly higher among the nursing students in the experiment group compared with those in the control group, as shown in Table 2 .

The comparison of the rotation examination performance between the two groups of intern nursing students.

A comparison of the academic satisfaction between the two groups of intern nursing students

After completing the internship, the results of the NSIS related to the aspects of in-class teaching, clinical teaching, and support and resources within the program were significantly higher in the experiment group compared with the control group ( P < 0.05), as demonstrated in Table 3 .

The comparison of the academic satisfaction with the internship between the two groups of intern nursing students after the internship.

A comparison of the self-directed learning ability between the two groups of intern nursing students

Prior to the internship, the results of the SRSSDL revealed that the differences in the dimensions of learning awareness, learning strategies, learning activities, and learning evaluation were not statistically significant between the nursing students in the experiment group and those in the control group ( P > 0.05), as illustrated in Table 4 .

The comparison of the self-directed learning ability between the two groups of intern nursing students before the internship.

After completing the internship, the results of the SRSSDL suggested that scores in the dimensions of learning awareness, learning strategies, learning activities, and learning evaluation were significantly higher in the experiment group compared with the control group ( P < 0.05), as shown in Table 5 .

The comparison of the self-directed learning ability between the two groups of intern nursing students after the internship.

Formative evaluation helps to improve the theoretical knowledge and clinical practice skills of intern nursing students

Formative evaluation is a dynamic, complete-process assessment. Education providers should consider formative evaluation as an integral part of effective teaching and learning because of its potential to improve student performance. During the process of transformation from theoretical knowledge to practice among the nursing students, the support of formative evaluation should be implemented, which can be multifaceted, provide positive feedback, and inspire students to achieve higher levels of learning. The practice of formative evaluation includes suggestions for clinical practice, group and pair work within the classroom, peer assessment, and verbal feedback from mentors ( 20 ).

The professional training of nursing students requires direct contact with patients in clinical practice. To ensure the mastery of basic knowledge and skills, students must adhere to clinical practice in the health services field, be able to implement theoretical knowledge, and spend significant time engaged in simulated training ( 21 ). Multiple, dynamic, and timely student-targeted evaluations, guiding students during feedback sessions, as well as the identification, analysis, and solving of problems in a timely manner, based on the results of feedback and evaluation, can effectively prevent the teaching process from being detached from the actual needs, enable the prompt adjustment of teaching methods, and improve the quality of teaching ( 6 ).

In the present study, the baseline characteristics of the nursing students in the experiment and control groups were compared, and the differences were not statistically significant ( P > 0.05). After completing the internship, the theoretical knowledge performance and clinical practice ability assessment scores of the nursing students in the experiment group were significantly higher than those in the control group ( P < 0.05). These results indicated that the teaching effect of formative evaluation for internship nursing students was better than that of summative evaluation, which is consistent with the findings of Wu et al. ( 22 ) and Gao et al. ( 23 ).

The location in the present study was the emergency department in a tertiary care hospital in Fuzhou, Fujian Province, which undertakes the clinical practice teaching of nursing students from several medical schools in Fujian Province. For those in the experiment group, in addition to the routine mentoring process for intern nursing students in the emergency department, the evaluation processes, e.g., an in-person interview with the chief instructor, clinical scenario simulations, and stage examinations, were included to provide timely evaluation and feedback on the learning attitude, emotions, theoretical performance, practice, comprehensive analysis and judgment, and learning abilities of the nursing students. This enabled mentors to better understand the mastery level of relevant knowledge among the nursing students and enabled them to adjust the teaching focus and strategy in a timely manner if needed, based on the characteristics of the students. The nursing students could also better recognize any gaps in their learning and the learning objectives and address these accordingly.

Formative evaluation helps to improve academic satisfaction during clinical practice among intern nursing students

The results of the present study showed that the nursing students in the experimental group were significantly more satisfied with the in-class teaching, clinical teaching, as well as the support and resources available within the program compared with those in the control group at the end of the internship ( P < 0.05). This indicated that the level of acceptance concerning formative evaluation by the intern nursing students was higher than that in the control group. For those in the experiment group, many evaluation processes were added, based on the routine mentoring process. During the experiment, contact and communication between the nursing students and the head nurse, as well as other instructors, were increased for the experiment group, which increased their sense of belonging and to some degree improved their academic satisfaction with clinical practice. During the experiment process involving formative evaluation, the process of identifying problems in stages and encouraging the nursing students to provide feedback on these problems, as well as guiding them in combining two learning styles (independent and group learning), enabled improving their learning abilities and enhanced their self-confidence concerning knowledge mastery; to some degree, their academic satisfaction with clinical practice was also increased. Formative evaluation, as a feedback tool for enhancing learning and practice, as well as understanding, can play a role in self-development and peer review ( 24 ). Introducing students to simulated environments can help to reduce the anxiety of nursing students concerning clinical practice. Assessing the achievement of learning outcomes among the students via formative assessment ( 25 ) can also drive the process of internship as a whole more virtuously.

The difference in the satisfaction scores among the internship program students in the present study was not statistically significant between the two groups. This was likely due to the teaching programs all including clinical teaching tasks (issued by the medical schools) to ensure the quality of clinical teaching and regulate the management of clinical teaching. The hospital strictly followed the requirements of the internship syllabus and created a thorough and detailed weekly plan. The use of reasonable, systematic, and effective teaching programs may explain why no differences were detected between the two nursing student groups concerning their academic satisfaction with the teaching programs.

Formative evaluation helps to improve self-directed learning among intern nursing students

The results of the present study showed that the total score of self-directed learning ability among the nursing students in the experiment group was significantly higher compared with the control group at the end of the internship ( P < 0.05). The scores related to learning awareness, learning behavior, learning strategies, and learning evaluation were also higher than those in the control group, and the differences were statistically significant ( P < 0.05). The above results indicated that formative evaluation may help to improve the self-directed learning of nursing students when compared with a summative assessment. Qian et al. ( 13 ) explored the effect of formative evaluation in the training of cardiovascular specialist nurses, and the results showed that it could improve the independent learning abilities of students. The improved self-directed learning skills imparted by the process of formative evaluation can help to transform theoretical understanding into daily learning assessment activities. Therefore, formative evaluation can also be regarded as an evaluation system that is inseparable from the learning process ( 26 ).

The nursing specialty is a lifelong learning discipline. Improving nursing students' enthusiasm for learning and their ability to learn independently represents an important part of clinical teaching management. The formative evaluation implemented in the present study took as its starting point the teaching needs of intern nursing students, namely improving their specialized nursing knowledge and clinical practice skills in the emergency department. It focused on the timely feedback of teaching effects, as well as the guidance, motivation, and improvement of nursing students, thereby helping them to dynamically regulate their learning processes, transforming them from passive recipients of clinical teaching into active participants in independent learning, and continuously improving their independent learning ability.

The current study showed no statistically significant differences between the two groups of nursing students in the interpersonal skills dimension of self-directed learning ability. This may be because nursing interns focus primarily on patients, and because both undergraduate and junior college nursing education already emphasize caring for patients and developing interpersonal skills. Furthermore, the nursing students had already gained solid interpersonal skills through their completed school studies, which may also explain the lack of difference in this dimension between the two groups.

The innovations of this study are as follows: 1. Formative assessment has gradually been popularized and applied as an evaluation method in recent years and helps to improve students' theoretical knowledge and clinical practice abilities; this study can therefore serve as a reference for improving the evaluation systems used at clinical nursing practice bases in China. 2. This study breaks with the traditional evaluation model for emergency clinical nursing students, which is based on terminal evaluation, and truly implements a training and evaluation model that takes “post competency as the training target”. 3. Following the application model of formative evaluation in standardized resident training in China, this study further optimized and applied the approach to nursing students in emergency clinical practice, with a standardized and operable formulation process.

The results of the present study suggested that the application of formative evaluation for internship nursing students could improve their theoretical knowledge, clinical practice abilities, academic satisfaction with clinical teaching, and self-directed learning abilities, which could play a positive role in cultivating highly qualified and innovative clinical skills personnel with job competency.

The study has some limitations. Because the nursing students generally arrive in the department in groups of 4–6, a single bimonthly enrollment method was adopted to reduce the impact of confounding factors, such as group interference, and to limit subject selection bias. Additionally, the present research was an intervention study; accordingly, the intervention subjects and interventionists could not be blinded, and only the data analysts were blinded. Despite these shortcomings, formative evaluation may nonetheless be an effective evaluation system for mentoring nursing students during clinical practice, and future studies should further improve formative evaluation systems to achieve better results.

Data availability statement

Ethics statement.

The studies involving human participants were reviewed and approved by The First Affiliated Hospital of Fujian Medical University. The patients/participants provided their written informed consent to participate in this study.

Author contributions

Y-rZ and YF: conception and design of the research and critical revision of the manuscript for intellectual content. T-yL, J-bC, and Y-hX: acquisition of data. Y-rZ, R-fH, and YW: analysis and interpretation of the data. YW and Y-rZ: statistical analysis. Y-rZ, T-yL, J-bC, and Y-hX: obtaining financing. Y-rZ, YF, and R-fH: writing of the manuscript. All authors read and approved the final draft.

This study was supported by the Education and Teaching Reform Research Project of Fujian Medical University (No. J19039).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

formative assessment in nursing education


RESEARCH ARTICLE

Perceptions, practices, and challenges of formative assessment in initial nursing education




Background:

Formative assessment is a pedagogical practice that improves teaching as well as students' learning, and a multitude of research demonstrates interest in this practice in the field of education. However, despite its great pedagogical potential, this assessment practice is poorly integrated by teachers, partly because of the tensions between formative and summative assessment, the latter being implemented more formally by institutions.

The purpose of this research is to explore, as a first step, how nursing teachers conceptualize formative assessment and how they judge its usefulness in the teaching/learning process. Secondly, the study seeks to identify the main challenges that could influence the practice of formative assessment in the context of nursing education.

The study used a descriptive quantitative research design. The target population of the study was composed of nursing teachers (N = 50) from the Higher Institute of Nursing and Health Techniques of Casablanca (ISPITS).

This target population includes all permanent nursing teachers working at the ISPITS of Casablanca, divided into the various existing fields. They are responsible for the initial training and practical supervision of nursing students and health technicians enrolled in the cycles of the professional license.

To meet our research objective, we conducted a survey using a questionnaire with 37 items divided into five dimensions based on William and Thompson's (2007) model of formative assessment.

The results revealed that, in teachers’ practice, the informal approach to formative assessment takes precedence over formal approaches based on planned assessment tools. In addition, their perception of the usefulness of formative assessment is oriented towards a diagnostic function of students' learning difficulties rather than a function of teaching guidance.

Furthermore, the study showed that the time commitment of formative assessment and the diversity of activities required of teachers might be obstacles to a broader practice of formative assessment.

Conclusion:

This study offers suggestions that may help teachers facilitate and innovate the implementation of formative assessment in the field of nursing. Our research perspective is to demonstrate the effect of formative assessment on student learning outcomes through the implementation of a field experiment in collaboration with nursing teachers.

1. INTRODUCTION

In terms of training, the effects of teaching on students' achievements are often uncertain [ 1 , 2 ]. However, formative assessment is intended as a means by which practitioners can make judgments about the learning attained during the teaching sequences [ 3 , 4 ]. More specifically, this pervasive approach in pedagogical practices provides the teacher and the students with information on the progress of learning [ 5 , 6 ]. The purpose of formative assessment is to improve students' learning, not to judge their performance [ 7 , 8]. Bloom's initial conception, as part of mastery pedagogy, defines formative assessment as an approach that allows students to remediate their learning difficulties. Moreover, the expanded concept states that this is indeed an approach that allows the regulation of both learning and teaching [ 9 ].

In the field of education, formative assessment is carried out in both formal and informal manners, based on class interactions [ 10 , 11 ]. The formal version of formative assessment allows for retroactive regulation of learning difficulties that could not be corrected by interactive regulations resulting from informal approaches [ 9 ]. Moreover, through self-assessment and peer evaluation activities, formal approaches allow for the self-regulation of students' learning and the development of their autonomy [ 3 , 6 ].

Although formative approaches to evaluation interest practitioners and managers, there are barriers to a more developed practice. Implementing this type of assessment may be difficult because of growing student numbers and the diversity of activities required of teachers, which leaves less time for this evaluative practice [ 12 , 13 ]. Furthermore, practitioners may hesitate to adopt it because of the tensions between formative and summative evaluation and the fear that formative evaluation consumes too many resources [ 14 ]. Unlike formative assessment, summative assessment is mandatory, as it is formally integrated into the planning of teaching [ 15 ].

2. LITERATURE REVIEW

2.1. The Role and Importance of Formative Assessment

Formative assessment is the subject of several publications that examine this pedagogical practice and its effects on student learning and the quality of teaching [ 16 - 19 ]. This form of assessment informs students about the quality of their work and how to self-regulate [ 20 ]. More specifically, formative assessment helps students develop their learning through feedback provided by teachers [ 21 ]. Black and William (1998) [ 22 ] recommend that teachers integrate formative assessment into their classroom practice, given the benefits it offers. Moreover, formative approaches to assessment give rise to three main types of teaching regulation: interactive, retroactive, and proactive. Interactive regulation is based on classroom questioning and group interactions and allows continuous adaptation of teaching. Retroactive regulation is carried out at the end of a teaching phase, builds on formal steps of formative assessment, and aims to verify that all learners have achieved the learning objectives. Proactive regulation is grounded in a concern for pedagogical differentiation, taking the needs of learners into account [ 9 , 10 ].

2.2. Practice of Formative Assessment

The concept of formative assessment was first introduced by Scriven as part of the assessment of training programs to enable adjustments. Bloom subsequently applied this concept to student learning in the mastery pedagogy model [ 9 ]. French-language research further broadened Bloom's original view by focusing on several aspects of formative assessment [ 9 ]. The main stipulations of this enlargement were: a) integrating formative assessment into all learning situations; b) using various means of data collection; c) regulating teaching; d) involving students actively in the formative assessment; e) differentiating teaching; and f) improving teaching continuously. Regarding implementation, the authors [ 9 - 11 , 23 , 24 ] state that this practice can be formal or informal. Formative assessment is formal when teachers use planned instruments such as exercises, online tests, questionnaires, and self-assessment forms. In the absence of tools, formative assessment is informal, relying on group exchanges, classroom questioning, and observation during teaching sequences. The formal version of formative assessment allows teachers to propose retroactive regulations for students' learning difficulties [ 9 ]. Informal formative assessment allows interactive regulation to be conducted throughout the teaching/learning process [ 10 , 25 ].

2.3. Reference Framework for the Practice of Formative Assessment

Formative assessment is the subject of several theoretical guidelines and developments. Given the large number of existing models that address this concept, we chose to conduct this study using the William and Thompson model [ 26 ], derived from the original model of Leahy [ 27 ]. This framework develops the main elements of formative assessment identified in the literature. William and Thompson's model conceptualizes formative assessment through five key strategies based on three teaching/learning processes. Leahy [ 27 ] concluded that these strategies are beneficial in all classes and across different fields.

Table 1 outlines the five key strategies by linking them to the three teaching/learning processes.

The first strategy is to clarify and share learning intentions and success criteria: the objectives and assessment criteria are communicated to students clearly, while taking into account the requirements of particular disciplines. The second strategy is to organize effective classroom discussions and other learning tasks that reveal learners' level of understanding; these pedagogical actions can yield clues for the regulation of teaching. The third strategy is to provide feedback that moves learners forward, including feedback that targets the self-regulation process in which the learner is engaged. The fourth strategy is to encourage learners to take responsibility for their own learning, a responsibility shared between teacher and learner: learners participate in their learning through self-assessment processes, taking into account the required evaluation criteria. The final strategy is to encourage students to be resources for their peers; the teacher's job is to offer self-assessment and peer-evaluation activities.

2.4. Barriers to Formative Assessment

Although formative assessment promotes learning and improves teaching, there are nevertheless obstacles to a wider practice of this form of assessment. A number of studies have investigated factors that may influence its implementation in the classroom [ 28 - 30 ]. Quyen's review [ 31 ] analysed several studies and concluded that the key factors impeding the implementation of formative assessment were teachers' belief in the practice, students' learning and commitment to assessment tasks, the time required for formative assessment activities, and teacher workload. Other factors identified were teacher training and lack of knowledge about effective formative assessment. The same review showed that, with a large number of students in the classroom, it is difficult for teachers to set up formative assessment activities. Kazman's model, as presented by Fulmer [ 32 ], classifies the factors influencing the practice of formative assessment into three levels: micro, meso, and macro. The micro level refers to the context of the class and the individual factors of the teacher and the student, such as the number of students per class, commitment to formative assessment tasks, and teachers' evaluative skills; it could also include access to educational materials usable for this practice in its innovative form. The meso level is linked to institutional factors, including the support provided by the administration and institutional policies on formative assessment. The macro level mainly includes national education policies, which can influence classroom formative assessment practices.

Based on the above, we used these theoretical models to build our conceptual research framework (Fig. 1).

2.5. Context

In the nursing profession, several studies have explored formative assessment and the interest of this practice for improving nursing learning. One study [ 33 ] focused on formative assessment in the paramedical field and demonstrated that students participating in formal formative assessment held positive perceptions of the approach at each assessment event. A study conducted by Elliott [ 34 ] concluded that self-assessment and peer-assessment strategies increase nursing students' motivation to participate in class project groups. Furthermore, another study [ 35 ] demonstrated that formative assessment with well-planned, quality feedback leads to effective learning and is an essential component of nursing education. In Morocco, there is a lack of literature on the assessment of learning in nursing and health techniques. This aspect of nurse training deserves exploration, and there is important work to be done in light of the discipline's rapid development. With this in mind, the present contribution explores one of the crucial aspects of teaching activities, the formative assessment of learning, at the Higher Institute of Nursing Professions and Health Techniques in Casablanca. This institute has the status of a higher education establishment outside the universities, following the recent introduction of the Master-Doctorate system in these institutes in 2015. The mission of ISPITS is the initial and continuing training of nurses and health technicians, guaranteeing a quality of training that meets the recommended educational and professional requirements.

3. METHODOLOGY

Our questioning focused on formative assessment practices as developed by the allied health teachers at the Institute. Formative assessment plays an important role in the second cycle of the paramedical studies training program (Paramedical Education stream), which covers the design of formative assessment and largely illustrates the contributions of this practice to student learning. In addition, the official texts governing the training of nurses in Morocco recognize formative assessment as a method of assessing learning.

Specifically, our research objective was to answer the following questions:

• How do nursing teachers design formative assessments?

• How do they view the usefulness of this practice?

• What are the obstacles associated with the practice of formative assessment?

3.1. Study Design

This research is quantitative and descriptive, and the study was conducted during 2019 at the ISPITS of Casablanca. The target population comprised the permanent nursing teachers of ISPITS Casablanca (N = 50); the sample is therefore exhaustive.

The permanent teachers are assigned, according to their basic training (polyvalent nurse, midwife, neonatology nurse, etc.), to the different options within the institute (Table 2).

3.2. Materials and/or Subjects

To meet our research objective, we conducted a survey using a questionnaire, a data collection tool well suited to quantitative studies. The questionnaire was developed according to the theoretical framework of William and Thompson [ 26 ], taking into account the purpose and context of our study. In addition to a section reserved for demographic data, the questionnaire covers five dimensions: functions of formative assessment and perception of its usefulness; sharing and discussing learning intentions and success criteria; how the formative assessment is implemented; the temporality of formative assessment and regulation; and teacher training and barriers to the practice of formative assessment. The questionnaire consists of 37 statements, each with a single answer on a measurement scale ranging from 1 to 5. Once written, the questionnaire was validated by the supervisors of this work and resource persons. Before administering it to participants, we conducted a pre-test with ten nursing teachers to verify the relevance, clarity, and comprehensibility of the items. The internal consistency of the survey was measured to confirm its reliability; Cronbach's α for the 37-item questionnaire was 0.854.
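For readers unfamiliar with the reliability statistic reported above, the following sketch shows how Cronbach's α is computed from a respondents-by-items matrix of Likert scores. This is an illustration only: the item responses below are hypothetical, not the study's data, and the authors used SPSS rather than this code.

```python
# Cronbach's alpha: alpha = (k/(k-1)) * (1 - sum(item variances) / variance(total scores))
# Hypothetical data for illustration; not the study's actual responses.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores (1-5)."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 8 teachers answering 4 items on a 1-5 scale
scores = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 3, 3, 2],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
])
print(round(cronbach_alpha(scores), 3))  # → 0.928
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency, which is why the questionnaire's α of 0.854 supports its reliability.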

3.3. Statistical Analysis

We analysed the data using statistical software (Microsoft Office Excel and SPSS version 20). The results are presented as tables and figures. Data analysis was based on descriptions of frequencies, percentages, means, and standard deviations.
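The descriptive analysis described above can be sketched for a single Likert item as follows. The response vector and the five scale labels are hypothetical assumptions for illustration; the study itself performed these computations in Excel and SPSS.

```python
# Frequencies, percentages, and mean ± SD for one 1-5 Likert item,
# mirroring the summaries reported in the Results tables.
# Hypothetical responses; not the study's data.
from collections import Counter
from statistics import mean, stdev

LABELS = {1: "never", 2: "rarely", 3: "occasionally", 4: "quite often", 5: "very often"}

def summarize(responses):
    """Return (label, count, percentage) rows plus the mean and sample SD."""
    n = len(responses)
    freq = Counter(responses)
    rows = [(LABELS[s], freq.get(s, 0), 100 * freq.get(s, 0) / n)
            for s in sorted(LABELS)]
    return rows, mean(responses), stdev(responses)

responses = [5, 4, 4, 5, 3, 5, 4, 2, 5, 4]   # 10 hypothetical teachers
rows, m, sd = summarize(responses)
for label, count, pct in rows:
    print(f"{label:>12}: {count:2d} ({pct:4.1f}%)")
print(f"Mean ± SD: {m:.2f} ± {sd:.2f}")      # → Mean ± SD: 4.10 ± 0.99
```

The "mean ± SD" values quoted throughout Section 4 (e.g. mean 4.53 ± 0.816 for classroom questioning) are exactly this kind of per-item summary.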

3.4. Ethical Statement

Before distributing the questionnaire, respect for ethical aspects was taken into consideration. For this purpose, we obtained an authorization issued jointly by the Regional Directorate of the Ministry of Health of Casablanca and the Directorate of ISPITS of Casablanca. The application for authorization included clarifications around the objectives of the research and its conduct. We also received consent from the study participants after explaining the commitment to respect anonymity and data confidentiality.

4.1. Demographic Characteristics of the Participants

Forty teachers participated in the study, a response rate of 80% (40 of 50) (Table 3).

4.2. Key Results

Table 4 compares the scores obtained for each response. The dimensions with the highest proportions of positive responses were 'identifying the students' strengths and weaknesses' (87.5%), 'guiding student progress' (85%), and 'increasing the students' autonomy' (75%). The 'directing teaching planning' dimension received only 45% positive responses (mean 3.08 ± 1.559).

Table 5 shows that 85% of teachers reported sharing learning goals with students 'quite often'. Similarly, 62.5% reported discussing learning goals 'very often'. Regarding the criteria for success, 45% of teachers reported discussing them 'quite often'. The results of the questionnaire also indicate that 45% of teachers said they discuss the modalities of the summative assessment with the students 'very often'. On the other hand, 47.5% 'rarely' discussed the terms of the formative assessment (mean 2.88 ± 1.436).

Regarding the implementation of the formative assessment, Table 6 provides a ranking of the modalities practiced by the teachers interviewed.

- Class questioning ranked first among the modalities proposed: 65% of teachers 'very often' used questioning in class to verify students' understanding (mean 4.53 ± 0.816).

- Group discussions ranked second: 25% of teachers proposed this formative approach 'quite often', 25% 'occasionally', and 17.5% 'very often' (mean 3.2 ± 1.224).

- Exercises and tests for formative assessment were offered only 'rarely' by a plurality of those surveyed (40%) (mean 3.05 ± 1.176). Few teachers reported proposing the other modalities: self-assessment, peer evaluation, and digital assessment.

Table 7 shows that 52.5% of teachers declared that they 'never' carry out formative assessment after each teaching activity (mean 2.13 ± 1.436). Forty percent reported performing formative assessment 'very often' towards the end of a course session, and 40% also used it 'very often' at the end of a course. More than 30% reported using formative assessment either 'often' or 'quite often' before the summative evaluation of a course. The majority of teachers surveyed (over 80%) reported giving feedback to their students, and 47.5% reported giving individual feedback 'quite often' after formative assessment activities. A review of teachers' responses on the types of regulation proposed revealed that 'giving more explanations' was used most often, with 40% reporting doing it 'quite often' (mean 3.9 ± 1.297) (Table 8).

With regard to teacher training, the results show that more than 60% of teachers felt that their initial training was not sufficient to carry out formative assessment. Furthermore, more than 70% believed that ongoing training on this practice would be useful to them (mean 4.50 ± 0.934), and 62.5% of participants expected ongoing training on digital formative assessment (mean 4.10 ± 1.464).

This study explored the barriers that influence the practice of formative assessment. Fig. (2) shows that, according to teachers, the main barriers are the time required for formative assessment activities (82.5% positive responses), the diversity of activities required of teachers (72.5% positive responses), and the commitment of students (75% positive responses). The other two barriers identified, lack of administrative support for formative assessment and the number of students, did not score significantly among the positive responses.

5. DISCUSSION

In this section, we discuss the most striking results of the study.

5.1. Interpretation and Suggestions

As part of our study, we analysed teachers' reported practices on integrating formative assessment, while searching for possible obstacles to the implementation of this form of assessment. The study showed that teachers integrate formative assessment into their practice, but their knowledge does not fully correspond to the broad theoretical guidelines for formative assessment. Teachers perceive formative assessment as serving a useful diagnostic function, identifying students' learning difficulties and guiding their learning, but do not recognize that it has two inseparable functions: a diagnostic function used to identify learning difficulties and a regulatory function aimed at regulating teaching [ 22 ].

Regarding the strategy of sharing and discussing learning intentions and success criteria, this behaviour seems to be shared by more than half of the teachers. However, unlike for summative assessment, discussion of the modalities of formative assessment is not universal. This may be related to the formal place that summative assessment occupies in the modalities of learning assessment at ISPITS Casablanca.

Regarding the modalities of formative assessment, the practices of the teachers interviewed also varied. A comparative analysis of the responses clearly demonstrated a lack of formal approaches to formative assessment, that is, approaches calling for tools such as exercises and tests. Teachers seem content with a formative assessment based on classroom questioning and group interactions. Furthermore, modalities for involving students in the regulation of their learning through self-assessment and peer evaluation are rarely mobilized. In their review, Black and William [ 22 ] encourage teachers to use classroom questioning and discussion as opportunities to improve students' understanding, while also stressing the value of formal approaches with exercises and tests. This finding is inconsistent with the broader conception of formative assessment advanced by Allal [ 9 ], in which innovative approaches combine interactive regulatory modalities based on informal evaluation with instrumented formal modalities designed for retroactive regulation.

This study revealed that the majority of teachers believed that their training remains insufficient for the practice of formative assessment. Their expectations of continuing education on formative assessment relate, in particular, to its practical modalities, the modalities of regulation, and how to conduct digital assessment. These results suggest avenues for continuing education: setting up training programs on the assessment of learning at the ISPITS of Casablanca, encouraging professional reading in the field of formative assessment, and providing teachers with access to educational and scientific databases specializing in assessment. It would also be interesting to let teachers incorporate innovative approaches to educational evaluation, such as the use of new information and communication technologies.

This initial study, conducted to identify teachers' knowledge and perceptions of the practice of formative assessment, noted potential difficulties in implementation. According to the results, three obstacles are the most significant: the time required for formative assessment activities, the activities required of teachers, and the commitment of students. Paramedical teachers in Morocco face many curricular requirements: they are trainers in the academic environment, responsible for supervising clinical internships, and also involved in various tasks concerning the organization of internships, teaching-planning activities, and exam supervision.

5.2. Comparison with Previous Studies

Previous research has sought to understand teachers' perceptions and knowledge of formative assessment. One study [ 36 ] found that teachers lack expertise and skills in formative assessment strategies, with negative implications for integrating this form of assessment into their teaching. A study conducted by Fahez [ 37 ] demonstrated that teachers use summative evaluation more frequently than formative assessment in the classroom; it also revealed flawed practice of this form of evaluation, with low mobilization of formal formative assessment procedures such as classroom testing, self-assessment, and peer review. In addition, a recent study [ 38 ] indicated that teachers view formative assessment in a traditional manner and lack knowledge about its usefulness and how to use it, demonstrating the need to develop teachers' knowledge and skills in formative assessment. On the other hand, another study [ 39 ] showed that the teachers interviewed share positive perceptions of formative assessment and its use for improving learning and training; they also believe that classroom training is essential for planning teaching and for obtaining effective evidence of student progress.

6. LIMITATIONS

It is also important to mention the limitations of our study. The data presented are the results of an initial diagnosis conducted as part of a doctoral research project on the use of digital technology for formative assessment in the field of nursing. This diagnosis provided a general picture of the orientations of teachers' formative assessment practices in relation to expert theories. This first study can serve as a starting point for further research based on observation of classroom practices, as it will be necessary to consider how this assessment is actually put into practice.

7. CONCLUSION

The study shows that teachers incorporate formative assessment into their practice. However, their expertise does not fully match the orientations of the William and Thompson (2007) model. Furthermore, teachers content themselves with an informal practice of formative assessment, under-mobilizing approaches that support self-regulation of learning, such as self-assessment and peer evaluation. The study also revealed a need for continuing education in this area, as well as challenges to the practice.

Thus, as a part of the continuity of our research project, we will try to:

• Offer nursing students, via a theoretical course, an online formative assessment to stimulate their interest in the assessment process.

• Measure the effect of the implementation of formal training assessments on students' learning and motivation for learning.

ETHICS APPROVAL AND CONSENT TO PARTICIPATE

This study was approved by the local ethics committee of the Higher Institute of Nursing and Health Techniques of Casablanca (ISPITS), Morocco.

HUMAN AND ANIMAL RIGHTS

Not applicable

CONSENT FOR PUBLICATION

Informed consent has been obtained from all the participants.

AVAILABILITY OF DATA AND MATERIALS

The data supporting the findings of the article are available from the corresponding author [H.L] upon request.

CONFLICT OF INTEREST

The authors declare no conflict of interest, financial or otherwise.

ACKNOWLEDGEMENTS

The authors would like to express their gratitude to all the teaching staff and the management of the ISPITS of Casablanca for providing administrative and technical support.



Press Release

The open nursing journal, published by bentham open, has been accepted for inclusion in the directory of nursing journals.

The Directory of Nursing Journals is a joint service of Nurse, Author & Editor and the International Academy of Nursing Editors (INANE) . Their primary goal in maintaining this list is to help nurse authors find suitable and reputable journals in which to publish their work. The directory informs readers and consumers of nursing literature about the credibility of literature sources used to guide practice, research, policy and education. The two committees follow the COPE Principles of Transparency and Best Practice in Scholarly Publishing to vet journals.

The Open Nursing Journal is an Open Access online journal, which publishes research articles, reviews, letters and guest edited thematic issues in all areas of nursing. The Journal is an important and reliable source of current information on developments in nursing research and practice. The journal publishes quality papers that are freely accessible to readers worldwide.

For more information about the journal, please visit: https://opennursingjournal.com/index.php

Bentham Open Welcomes Sultan Idris University of Education (UPSI) as Institutional Member

Bentham Open is pleased to welcome Sultan Idris University of Education (UPSI), Malaysia as Institutional Member. The partnership allows the researchers from the university to publish their research under an Open Access license with specified fee discounts. Bentham Open welcomes institutions and organizations from world over to join as Institutional Member and avail a host of benefits for their researchers.

Sultan Idris University of Education (UPSI) was established in 1922 and was known as the first Teacher Training College of Malaya. It is known as one of the oldest universities in Malaysia. UPSI was later upgraded to a full university institution on 1 May, 1997, an upgrade from their previous college status. Their aim to provide exceptional leadership in the field of education continues until today and has produced quality graduates to act as future educators to students in the primary and secondary level.

Bentham Open publishes a number of peer-reviewed, open access journals. These free-to-view online journals cover all major disciplines of science, medicine, technology and social sciences. Bentham Open provides researchers a platform to rapidly publish their research in a good-quality peer-reviewed journal. All peer-reviewed accepted submissions meeting high research and ethical standards are published with free access to all.

Ministry Of Health, Jordan joins Bentham Open as Institutional Member

Bentham Open is pleased to announce an Institutional Member partnership with the Ministry of Health, Jordan . The partnership provides the opportunity to the researchers, from the university, to publish their research under an Open Access license with specified fee concessions. Bentham Open welcomes institutions and organizations from the world over to join as Institutional Member and avail a host of benefits for their researchers.

The first Ministry of Health in Jordan was established in 1950. The Ministry began its duties in 1951, the beginning of the health development boom in Jordan. The first accomplishment was the establishment of six departments in the districts headed by a physician and under the central administration of the Ministry. The Ministry of Health undertakes all health affairs in the Kingdom and its accredited hospitals include AL-Basheer Hospital, Zarqa Governmental Hospital, University of Jordan Hospital, Prince Hashem Military Hospital and Karak Governmental Hospital.

Bentham Open publishes a number of peer-reviewed, open access journals. These free-to-view online journals cover all major disciplines of science, medicine, technology and social sciences. Bentham Open provides researchers a platform to rapidly publish their research in a good-quality peer-reviewed journal. All peer-reviewed, accepted submissions meeting high research and ethical standards are published with free access to all.

Porto University joins Bentham Open as Institutional Member

Bentham Open is pleased to announce an Institutional Member partnership with the Porto University, Faculty of Dental Medicine (FMDUP) . The partnership provides the opportunity to the researchers, from the university, to publish their research under an Open Access license with specified fee concessions. Bentham Open welcomes institutions and organizations from world over to join as Institutional Member and avail a host of benefits for their researchers.

The Porto University was founded in 1911. Porto University create scientific, cultural and artistic knowledge, higher education training strongly anchored in research, the social and economic valorization of knowledge and active participation in the progress of the communities in which it operates.

Academic staff perspectives of formative assessment in nurse education

Affiliation.

  • 1 Thames Valley University, Faculty of Health and Human Sciences, Paragon House, Boston Manor Road, Brentford, Middx TW8 9GA, UK. [email protected]
  • PMID: 19818688
  • DOI: 10.1016/j.nepr.2009.08.007

High quality formative assessment has been linked to positive benefits on learning, while good feedback can make a considerable difference to the quality of learning. It is proposed that formative assessment and feedback are intricately linked to the enhancement of learning and have to be interactive. Underlying this proposition is the recognition of the importance of staff perspectives of formative assessment and their influence on assessment practice. However, there appears to be a paucity of literature exploring this area relevant to nurse education. The aim of the research was to explore the perspectives of twenty teachers of nurse education on formative assessment and feedback of theoretical assessment. A qualitative approach using semi-structured interviews was adopted. The interview data were analysed and the following themes were identified: purposes of formative assessment, involvement of peers in the assessment process, ambivalence about the timing of assessment, types of formative assessment and quality of good feedback. The findings offer suggestions which may be of value to teachers facilitating formative assessment. The conclusion is that teachers require changes to the practice of formative assessment and feedback by believing that learning is central to the purposes of formative assessment and by regarding students as partners in this process.

Copyright 2009 Elsevier Ltd. All rights reserved.

MeSH terms

  • Education, Nursing / standards*
  • Educational Measurement / methods*
  • Faculty, Nursing / standards*
  • Feedback, Psychological
  • Qualitative Research
  • United Kingdom

