
Oral assessment

Oral assessment is a common practice across education and comes in many forms. Here is basic guidance on how to approach it.


1 August 2019

In oral assessment, students speak to provide evidence of their learning. Internationally, oral examinations are commonplace. 

We use a wide variety of oral assessment techniques at UCL.

Students can be asked to: 

  • present posters
  • use presentation software such as PowerPoint or Prezi
  • perform in a debate
  • present a case
  • answer questions from teachers or their peers.

Students’ knowledge and skills are explored through dialogue with examiners.

Teachers at UCL recommend oral examinations, because they provide students with the scope to demonstrate their detailed understanding of course knowledge.

Educational benefits for your students

Good assessment practice gives students the opportunity to demonstrate learning in different ways. 

Some students find it difficult to write, so they do better in oral assessments. Others may find it challenging to present their ideas to a group of people.

Oral assessment takes account of diversity and enables students to develop verbal communication skills that will be valuable in their future careers.  

Marking criteria and guides can be carefully developed so that assessment processes can be quick, simple and transparent. 

How to organise oral assessment

Oral assessment can take many forms.

Audio and/or video recordings can be uploaded to Moodle if live assessment is not practicable.

Tasks can range from individual or group talks and presentations to dialogic oral examinations.

Oral assessment works well as a basis for feedback to students and/or to generate marks towards final results.

1. Consider the learning you're aiming to assess 

How can you best offer students the opportunity to demonstrate that learning?

The planning process needs to start early because students must know about and practise the assessment tasks you design.

2. Inform the students of the criteria

Discuss the assessment criteria with students, ensuring that you include (but don’t overemphasise) presentation or speaking skills.

Identify activities which encourage the application or analysis of knowledge. You could choose from the options below or devise a task with a practical element adapted to learning in your discipline.

3. Decide what kind of oral assessment to use

Options for oral assessment can include:

Assessment task

  • Presentation
  • Question and answer session.

Individual or group

If group, how will you distribute the tasks and the marks?

Combination with other modes of assessment

  • Oral presentation of a project report or dissertation.
  • Oral presentation of posters, diagrams, or museum objects.
  • Commentary on a practical exercise.
  • Questions to follow up written tests, examinations, or essays.

Decide on the weighting of the different assessment tasks and clarify how the assessment criteria will be applied to each.
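As a purely illustrative example (the weights and marks below are invented, not UCL guidance), a weighted overall mark is simply the weighted sum of the component marks:

```latex
% Hypothetical weighting: presentation 70%, question-and-answer session 30%
\text{overall mark} = 0.7 \times 68 + 0.3 \times 55 = 47.6 + 16.5 = 64.1
```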

Peer or staff assessment or a combination: groups of students can assess other groups or individuals.

4. Brief your students

When you’ve decided which options to use, provide students with detailed information.

Integrate opportunities to develop the skills needed for oral assessment progressively as students learn.

If you can involve students in formulating assessment criteria, they will be motivated and engaged and they will gain insight into what is required, especially if examples are used.

5. Planning, planning, planning!

Plan the oral assessment event meticulously.

Stick rigidly to planned timing. Ensure that students practise presentations with time limitations in mind. Allow time between presentations or interviews and keep presentations brief.  

6. Decide how you will evaluate

Decide how you will evaluate presentations or students’ responses.

It is useful to create an assessment sheet with a grid or table using the relevant assessment criteria.

Focus on core learning outcomes, avoiding detail.

Two assessors must be present to:

  • evaluate against a range of specific core criteria
  • focus on forming a holistic judgment.

Leave time to make a final decision on marks, perhaps after every four presentations. Refer to audio recordings later for borderline cases.

7. Use peers to assess presentations

Students will learn from presentations especially if you can use ‘audio/video recall’ for feedback.

Let speakers talk through aspects of the presentation, pointing out areas they might develop. Then discuss your evaluation with them. This can also be done in peer groups.

If you have large groups of students, they can support each other, each providing feedback to several peers. They can use the same assessment sheets as teachers. Marks can also be awarded for feedback.

8. Use peer review

A great advantage of oral assessment is that learning can be shared and peer reviewed, in line with academic practice.

There are many variants on the theme so why not let your students benefit from this underused form of assessment?

This guide has been produced by the UCL Arena Centre for Research-based Education. You are welcome to use this guide if you are from another educational institution, but you must credit the UCL Arena Centre.

Further information

More teaching toolkits  - back to the toolkits menu

[email protected] : contact the UCL Arena Centre

UCL Education Strategy 2016–21

Assessment and feedback: resources and useful links 

Six tips on how to develop good feedback practices  toolkit

Download a printable copy of this guide

Case studies : browse related stories from UCL staff and students.

Sign up to the monthly UCL education e-newsletter  to get the latest teaching news, events & resources.

Assessment and feedback events


Assessment and feedback case studies

Academic Development Centre

Oral presentations

Using oral presentations to assess learning

Introduction

Oral presentations are a form of assessment that calls on students to use the spoken word to express their knowledge and understanding of a topic. They allow assessors to capture not only the research that students have done but also a range of cognitive and transferable skills.

Different types of oral presentations

A common format is in-class presentations on a prepared topic, often supported by visual aids in the form of PowerPoint slides or a Prezi, with a standard length that varies between 10 and 20 minutes. In-class presentations can be performed individually or in a small group and are generally followed by a brief question and answer session.

Oral presentations are often combined with other modes of assessment; for example oral presentation of a project report, oral presentation of a poster, commentary on a practical exercise, etc.

Also common is the use of PechaKucha, a fast-paced presentation format consisting of a fixed number of slides that are set to move on every twenty seconds (Hirst, 2016). The original version used 20 slides, resulting in a presentation of 6 minutes and 40 seconds; however, you can reduce this to 10 or 15 slides to suit group size or topic complexity and coverage. One of the advantages of this format is that you can fit a large number of presentations into a short period of time and everyone follows the same rules. It is also a format that enables students to express their creativity through the appropriate use of images on their slides to support their narrative.
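The timing follows directly from the fixed 20-second slide interval, for example:

```latex
20 \times 20\,\text{s} = 400\,\text{s} = 6\,\text{min } 40\,\text{s},
\qquad
15 \times 20\,\text{s} = 300\,\text{s} = 5\,\text{min}
```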

When deciding which format of oral presentation best allows your students to demonstrate the learning outcomes, it is also useful to consider which format closely relates to real world practice in your subject area.

What can oral presentations assess?

The key questions to consider include:

  • what will be assessed?
  • who will be assessing?

This form of assessment places the emphasis on students’ capacity to arrange and present information in a clear, coherent and effective way rather than on their capacity to find relevant information and sources. However, as noted above, it could be used to assess both.

Oral presentations, depending on the task set, can be particularly useful in assessing:

  • knowledge, skills and critical analysis
  • applied problem-solving abilities
  • ability to research and prepare persuasive arguments
  • ability to generate and synthesise ideas
  • ability to communicate effectively
  • ability to present information clearly and concisely
  • ability to present information to an audience with appropriate use of visual and technical aids
  • time management
  • interpersonal and group skills.

When using this method you are likely to aim to assess a combination of the above to the extent specified by the learning outcomes. It is also important that all aspects being assessed are reflected in the marking criteria.

In the case of group presentation you might also assess:

  • level of contribution to the group
  • ability to contribute without dominating
  • ability to maintain a clear role within the group.

See also the ‘Assessing group work’ section for further guidance.

As with all of the methods described in this resource, it is important to ensure that students are clear about what they are expected to do and understand the criteria that will be used to assess them. (See Ginkel et al., 2017 for a useful case study.)

Although the use of oral presentations is increasingly common in higher education some students might not be familiar with this form of assessment. It is important therefore to provide opportunities to discuss expectations and practice in a safe environment, for example by building short presentation activities with discussion and feedback into class time.

Individual or group

It is not uncommon to assess group presentations. If you are opting for this format:

  • will you assess outcome or process, or both?
  • how will you distribute tasks and allocate marks?
  • will group members contribute to the assessment by reporting group process?

Assessed oral presentations are often performed before a peer audience - either in-person or online. It is important to consider what role the peers will play and to ensure they are fully aware of expectations, ground rules and etiquette whether presentations take place online or on campus:

  • will the presentation be peer assessed? If so how will you ensure everyone has a deep understanding of the criteria?
  • will peers be required to interact during the presentation?
  • will peers be required to ask questions after the presentation?
  • what preparation will peers need to be able to perform their role?
  • how will the presence and behaviour of peers impact on the assessment?
  • how will you ensure equality of opportunities for students who are asked fewer/more/easier/harder questions by peers?

Hounsell and McCune (2001) note the importance of the physical setting and layout as one of the conditions which can impact on students’ performance; it is therefore advisable to offer students the opportunity to familiarise themselves with the space in which the presentations will take place and to agree layout of the space in advance.

Good practice

As a summary to the ideas above, Pickford and Brown (2006, p.65) list good practice, based on a number of case studies integrated in their text, which includes:

  • make explicit the purpose and assessment criteria
  • use the audience to contribute to the assessment process
  • record [audio / video] presentations for self-assessment and reflection (you may have to do this for QA purposes anyway)
  • keep presentations short
  • consider bringing in externals from commerce / industry (to add authenticity)
  • consider banning notes / audio visual aids (this may help if AI-generated/enhanced scripts run counter to intended learning outcomes)
  • encourage students to engage in formative practice with peers (including formative practice of giving feedback)
  • use a single presentation to assess synoptically; linking several parts / modules of the course
  • give immediate oral feedback
  • link back to the learning outcomes that the presentation is assessing; process or product.

Neumann in Havemann and Sherman (eds., 2017) provides a useful case study in chapter 19: Student Presentations at a Distance, and Grange & Enriquez in chapter 22: Moving from an Assessed Presentation during Class Time to a Video-based Assessment in a Spanish Culture Module.

Diversity & inclusion

Some students might feel more comfortable or be better able to express themselves orally than in writing, and vice versa. Others might have particular difficulties expressing themselves verbally, due, for example, to hearing or speech impediments, anxiety, personality, or language abilities. As with any other form of assessment, it is important to be aware of elements that potentially put some students at a disadvantage and consider solutions that benefit all students.

Academic integrity

Oral presentations present a relatively low risk of academic misconduct if they are presented synchronously and in class. Avoiding the use of a script can ensure that students are not simply reading out someone else’s text or an AI-generated script, whilst the questions posed at the end can allow assessors to gauge the depth of understanding of the topic and structure presented. (See the further guidance on academic integrity.)

Recorded (asynchronous) presentations may be produced with help, so additional mechanisms to ensure that the work presented is the student’s own may be beneficial, such as a reflective account or a live Q&A session. AI can create scripts, slides and presentations, copy real voices relatively convincingly, and create video avatars; these tools can enable students to create professional video content and may make this sort of assessment more accessible. The desirability of such tools will depend upon what you are aiming to assess and how you will evaluate student performance.

Student and staff experience

Oral presentations provide a useful opportunity for students to practise skills which are required in the world of work. Through the process of preparing for an oral presentation, students can develop their ability to synthesise information and present it to an audience. To improve authenticity, the assessment might involve an actual audience, realistic timeframes for preparation, and collaboration between students, and might be situated in realistic contexts, which might include the use of AI tools.

As mentioned above, it is important to remember that the stress of presenting information to a public audience might put some students at a disadvantage. Similarly, non-native speakers might perceive language as an additional barrier. AI may reduce some of these challenges, but it will be important to ensure equal access to these tools to avoid disadvantaging students. Discussing criteria and expectations with your students, providing a clear structure, and ensuring opportunities to practise and receive feedback will benefit all students.

Some disadvantages of oral presentations include:

  • anxiety - students might feel anxious about this type of assessment and this might impact on their performance
  • time - oral assessment can be time consuming both in terms of student preparation and performance
  • time - to develop skill in designing slides if they are required; we cannot assume knowledge of PowerPoint etc.
  • lack of anonymity and potential bias on the part of markers.

From a student perspective preparing for an oral presentation can be time consuming, especially if the presentation is supported by slides or a poster which also require careful design.

From a teacher’s point of view, presentations are generally assessed on the spot and feedback is immediate, which reduces marking time. It is therefore essential to have clearly defined marking criteria which help assessors to focus on the intended learning outcomes rather than simply on presentation style.

Useful resources

Joughin, G. (2010). A short guide to oral assessment . Leeds Metropolitan University/University of Wollongong http://eprints.leedsbeckett.ac.uk/2804/

Race, P. and Brown, S. (2007). The Lecturer’s Toolkit: a practical guide to teaching, learning and assessment. 2nd edition. London: Routledge.



Development and validation of the oral presentation evaluation scale (OPES) for nursing students

Yi-Chien Chiang, Hsiang-Chun Lee, Tsung-Lan Chu, Chia-Ling Wu & Ya-Chu Hsiao

BMC Medical Education, volume 22, Article number: 318 (2022)


Background

Oral presentations are an important educational component for nursing students, and nursing educators need to provide students with an assessment of presentations as feedback for improving this skill. However, there are no reliable validated tools available for objective evaluations of presentations. We aimed to develop and validate an oral presentation evaluation scale (OPES) that nursing students could use to self-rate their own performance when learning effective oral presentation skills, and that educators could potentially use in the future to assess student presentations.

Methods

The self-report OPES was developed using 28 items generated from a review of the literature about oral presentations and from qualitative face-to-face interviews with university oral presentation tutors and nursing students. Evidence for the internal structure of the 28-item scale was gathered with exploratory and confirmatory factor analysis (EFA and CFA, respectively) and internal consistency. Relationships with the Personal Report of Communication Apprehension and the Self-Perceived Communication Competence scales provided evidence of relationships with other variables.

Results

Nursing students’ (n = 325) responses to the scale provided the data for the EFA, which resulted in three factors: accuracy of content, effective communication, and clarity of speech. These factors explained 64.75% of the total variance. Eight items were dropped from the original item pool. The Cronbach’s α value was .94 for the total scale and ranged from .84 to .93 for the three factors. The internal structure evidence was examined with CFA using data from a second group of 325 students, and an additional five items were deleted. Fit indices of the model were acceptable except for the adjusted goodness of fit, which was below the minimum criterion. The final 15-item OPES was significantly correlated with the students’ scores on the Personal Report of Communication Apprehension scale (r = −.51, p < .001) and the Self-Perceived Communication Competence Scale (r = .45, p < .001), providing strong evidence of relationships with other self-report assessments of communication.

Conclusions

The OPES could be adopted as a self-assessment instrument for nursing students when learning oral presentation skills. Further studies are needed to determine if the OPES is a valid instrument for nursing educators’ objective evaluations of student presentations across nursing programs.


Competence in oral presentations is important for medical professionals to communicate an idea to others, including those in the nursing professions. Delivering concise oral presentations is a useful and necessary skill for nurses [ 1 , 2 ]. Strong oral presentation skills not only impact the quality of nurse-client communications and the effectiveness of teamwork among groups of healthcare professionals, but also promotion, leadership, and professional development [ 2 ]. Nurses are also responsible for delivering health-related knowledge to patients and the community. Therefore, one important part of the curriculum for nursing students is the delivery of oral presentations related to healthcare issues. A self-assessment instrument for oral presentations could provide students with insight into what skills need improvement.

Three components have been identified as important for improving communication. First, a presenter’s self-esteem can influence the physio-psychological reaction towards the presentation; presenters with low self-esteem experience greater levels of anxiety during presentations [ 3 ]. Therefore, increasing a student’s self-efficacy can increase confidence in their ability to communicate effectively, which can reduce anxiety [ 3 , 4 ]. Second, Liao (2014) reported that improving speaking efficacy can also improve oral communication, and that collaborative learning among students could improve speech efficacy and decrease speech anxiety [ 5 ]. A study by De Grez et al. provided students with a list of skills to practice, which allowed them to feel more comfortable when a formal presentation was required, increased presentation skills, and improved communication by improving self-regulation [ 6 ]. Third, Carlson and Smith-Howell (1995) determined that the quality and accuracy of the information presented was also an important aspect of public speaking performances [ 7 ]. Therefore, all three above-mentioned components are important skills for effective communication during an oral presentation.

Instruments that provide an assessment of a public speaking performance are critical for helping students improve oral presentation skills [ 7 ]. One study found peer evaluations were higher than those of university tutors for student presentations, using a student-developed assessment form [ 8 ]. The assessment criteria included content (40%), presentation (40%), and structure (20%); the maximum percentage in each domain was given for “excellence”, which was relative to a minimum “threshold”. Multiple “excellence” and “threshold” benchmarks were described for each domain. For example, benchmarks included the use of clear and appropriate language, enthusiasm, and keeping the audience interested. However, the percentage score did not provide any information about what specific benchmarks were met. Thus, these quantitative scores did not include feedback on specific criteria that could enhance future presentations.

At the other extreme is an assessment that is limited to one aspect of the presentation and is too detailed to evaluate the performance efficiently. An example of this is the 40-item tool developed by Tsang (2018) [ 6 ] to evaluate oral presentation skills, which measured several domains: voice (volume and speed), facial expressions, passion, and control of time. An assessment tool developed by De Grez et al. (2009) includes several domains: three subcategories for content (quality of introduction, structure, and conclusion), five subcategories of expression (eye-contact, vocal delivery, enthusiasm, interaction with audience, and body-language), and a general quality [ 9 ]. Many items overlap, making it hard to distinguish specific qualities. Other evaluation tools include criteria that are difficult to objectively measure, such as body language, eye-contact, and interactions with the audience [ 10 ]. Finally, most of the previous tools were developed without testing the reliability and validity of the instrument.

Nurses have the responsibility of providing not only medical care, but also medical information to other healthcare professionals, patients, and members of the community. Therefore, improving nursing students’ speaking skills is an important part of the curriculum. A self-report instrument for measuring nursing students’ subjective assessment of their presentation skills could help increase competence in oral communication. However, to date, there is no reliable and valid instrument for evaluating oral presentation performance in nursing education. Therefore, the aim of this study was to develop a self-assessment instrument for nursing students that could guide them in understanding their strengths and development areas in aspects of oral presentations. Development of a scale that is a valid and reliable instrument for nursing students could then be examined for use as a scale for objective evaluations of oral presentations by peers and nurse educators.

Study design

This study developed and validated an oral presentation evaluation scale (OPES) that could be employed as a self-assessment instrument for students when learning skills for effective oral presentations. The instrument was developed in two phases: Phase I (item generation and revision) and Phase II (scale development) [ 11 ]. Phase I aimed to generate items using a qualitative method and to collect content evidence for the OPES. Phase II focused on scale development, establishing internal structure evidence for the OPES through EFA, CFA, and the internal consistency of the scale. In addition, Phase II collected evidence of the OPES’s relationships with other variables. Because we hope to also use the instrument as an aid for nurse educators in objective evaluations of nursing students’ oral presentations, both students and educators were involved in item generation and revision. Only nursing students participated in Phase II.

Approval was obtained from Chang Gung Medical Foundation institutional review board (ID: 201702148B0) prior to initiation of the study. Informed consent was obtained from all participants prior to data collection. All participants being interviewed for item generation in phase I provided signed informed consent indicating willingness to be audiotaped during the interview. All the study methods were carried out in accordance with relevant guidelines and regulations.

Phase I: item generation and item revision

Participants.

A sample of nurse educators (n = 8) and nursing students (n = 11) participated in the interviews for item generation. Nursing students give oral presentations to meet the curriculum requirement; therefore, the educators were university tutors experienced in coaching nursing students preparing to give an oral presentation. Nurse educators specializing in various areas of nursing, such as acute care, psychology, and community care, were recruited if they had at least 10 years’ experience coaching university students. The mean age of the educators was 52.1 years (SD = 4.26), 75% were female, and the mean amount of teaching experience was 22.6 years (SD = 4.07). Students were included if they had given at least one oral presentation and were willing to share their experiences of oral presentation. The mean age of the students was 20.7 years (SD = 1.90) and 81.8% were female; four were second-year students, three were third-year students, and four were in their fourth year.

An additional eight educators participated in the evaluation of content evidence of the OPES. All had over 10 years’ experience in coaching students in giving an oral presentation that would be evaluated for a grade.

Item generation

Development of item domains involved deductive evaluation of the literature about oral presentations [ 2 , 3 , 6 , 7 , 8 , 12 , 13 , 14 ]. Three domains were determined to be important components of an oral presentation: accuracy of content, effective communication, and clarity of speech. Inductive qualitative data from face-to-face semi-structured interviews with nurse educators and nursing student participants were used to identify domain items [ 11 ]. Details of interview participants are described in the section above. The interviews with nurse educators and students followed an interview guide (Table 1) and lasted approximately 30–50 min for educators and 20–30 min for students. Deduction from the literature and induction from the interview data were used to determine categories considered important for the objective evaluation of oral presentations.

Analysis of interview data. Audio recordings of the interviews were transcribed verbatim at the conclusion of each interview. Interview data were analyzed by the first, second, and corresponding author, all experts in qualitative studies. The first and second authors coded the interview data to identify items educators and student described as being important to the experience of an oral presentation [ 11 ]. The corresponding author grouped the coded items into constructs important for oral presentations. Meetings with the three researchers were conducted to discuss the findings; if there were differences in interpretation, an outside expert in qualitative studies was included in the discussions until consensus was reached among the three researchers.

Analysis of the interview data indicated that items involved in preparation, presentation, and post-presentation were important to the three domains of accuracy of content, effective communication, and clarity of speech. Items for accuracy of content involved preparation (being well-prepared before the presentation; preparing materials suitable for the target audience; practicing the presentation in advance) and post-presentation reflection, including discussing the content of the presentation with classmates and teachers. Items for effective communication involved the presentation itself: obtain the attention of the audience; provide materials that are reliable and valuable; express confidence and enthusiasm; interact with the audience; and respond to questions from the audience. Items for the third domain, clarity of speech, involved, post-presentation, a student’s ability to reflect on the content and performance of their presentation and willingness to obtain feedback from peers and teachers.

Item revision: content evidence

Based on themes that emerged during the interviews, 28 items were generated. Content evidence for the 28 items of the OPES was established with a panel of eight experts who were educators that had not participated in the face-to-face interviews. The experts were provided with a description of the research purpose and a list of the proposed items, and were asked to rate each item on a 4-point Likert scale (1 = not representative, 2 = item needs major revision, 3 = representative but needs minor revision, 4 = representative). The item-level content validity index (I-CVI) was determined by the number of experts rating an item 3 or 4 divided by the total number of experts; the scale-level content validity index (S-CVI) was determined by the number of items rated 3 or 4 by all experts divided by the total number of items.
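As a purely illustrative calculation, consistent with the values reported below but not taken from the authors’ raw ratings: with eight experts, an item rated 3 or 4 by seven of them has an I-CVI of 7/8, and universal agreement on 21 of the 28 items gives an S-CVI/UA of .75.

```latex
\text{I-CVI} = \frac{7}{8} = .875 \approx .88,
\qquad
\text{S-CVI/UA} = \frac{\#\{\text{items with I-CVI} = 1\}}{\#\{\text{items}\}} = \frac{21}{28} = .75
```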

Based on the suggestions of the experts, six items of the OPES were reworded for clarity: for example, item 12 was revised from “The presentation is riveting” to “The presenter’s performance is brilliant; it resonates with the audience and arouses their interests”. Two items were deleted because they duplicated other items: “demonstrates confidence” and “presents enthusiasm” were combined and item 22 became “demonstrates confidence and enthusiasm properly”. The items “the presentation allows for proper timing and sequencing” and “the length of time of the presentation is well controlled” were also combined into item 9, “The content of presentation follows the rules, allowing for the proper timing and sequence”. Thus, a total of 26 items were included in the OPES at this phase. The I-CVI values were .88–1 and the scale-level CVI/universal agreement was .75, indicating that the OPES was an acceptable instrument for measuring an oral presentation [ 11 ].

Phase II: scale development

Phase II, scale development, aimed to establish the internal structure evidence for the OPES. The evidence of relationships to other variables was also evaluated in this phase. More specifically, the internal structure evidence for the OPES was evaluated by exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). The evidence of relationships to other variables was determined by examining the relationships between the OPES and the PRCA and SPCC [ 15 ].

A sample of nursing students was recruited purposively from a university in Taiwan. Students were included if they were: (a) full-time students; (b) had declared nursing as their major; and (c) were in their sophomore, junior, or senior year. First-year university students (freshmen) were excluded. A bulletin about the survey study was posted outside of classrooms; 707 students attend these classes. The bulletin included a description of the inclusion criteria and instructions to appear at the classroom on a given day and time if students were interested in participating in the study. Students who appeared at the classroom on the scheduled day (N = 650) were given a packet containing a demographic questionnaire (age, gender, year in school), a consent form, the OPES instrument, and two scales for measuring aspects of communication, the Personal Report of Communication Apprehension (PRCA) and the Self-Perceived Communication Competence (SPCC); the documents were labeled with an identification number to anonymize the data. The 650 students were divided into two groups, based on the demographic data, using the SPSS random case selection procedure (Version 23.0; SPSS Inc., Chicago, IL, USA). The selection procedure was performed repeatedly until the homogeneity of the baseline characteristics was established between the two groups (p > .05). The mean age of the participants was 20.5 years (SD = 0.98) and 87.1% were female (n = 566). Participants comprised third-year students (40.6%, n = 274), fourth-year students (37.9%, n = 246) and second-year students (21.5%, n = 93). The survey data for half the group (the calibration sample, n = 325) were used for EFA; the survey data from the other half (the validation sample, n = 325) were used for CFA. Scores from the PRCA and SPCC instruments were used for evaluating the evidence of relationships to other variables.

The aims of Phase II were to collect internal structure evidence for the scale, identifying the items that nursing students perceived as important during an oral presentation, and to determine the domains that fit the set of items. The 325 nursing students designated for EFA (described above) completed the data collection. We used EFA to evaluate the internal structure of the scale. The items were presented in random order and were not nested according to constructs. Internal consistency of the scale was determined by calculating Cronbach’s alpha.

The next step involved determining whether the newly developed OPES was a reliable and valid self-report scale for subjective assessments of nursing students’ previous oral presentations. Participants (the second group of 325 students) were asked, “How often do you incorporate each item into your oral presentations?”. Responses were scored on a 5-point Likert scale with 1 = never to 5 = always; higher scores indicated a better performance. The latent structure of the scale was examined with CFA.

Finally, the evidence of relationships with other variables of the OPES was determined by examining the relationships between the OPES and the PRCA and SPCC, described below.

The 24-item PRCA scale

The PRCA scale is a self-report instrument for measuring communication apprehension, which is an individual’s level of fear or anxiety associated with either real or anticipated communication with a person or persons [ 12 ]. The 24 scale items are comprised of statements concerning feelings about communicating with others. Four subscales are used for different situations: group discussions, interpersonal communications, meetings, and public speaking. Each item is scored on a 5-point Likert scale from 1 (strongly disagree) to 5 (strongly agree); scores range from 24 to 120, with higher scores indicating greater communication anxiety. The PRCA has been demonstrated to be a reliable and valid scale across a wide range of related studies [ 5 , 13 , 14 , 16 , 17 ]. The Cronbach’s alpha for the scale is .90 [ 18 ]. We received permission from the owner of the copyright to translate the scale into Chinese. Translation of the scale into Chinese by a member of the research team who was fluent in English was followed by back-translation by a different bilingual member of the team to ensure semantic validity of the translated PRCA scale. The Cronbach’s alpha value in the present study was .93.

The 12-item SPCC scale

The SPCC scale evaluates a person’s self-perceived competence in a variety of communication contexts and with a variety of types of receivers. Each item is a situation which requires communication, such as “Present a talk to a group of strangers” or “Talk with a friend”. Participants respond to each situation by ranking their level of competence from 0 (completely incompetent) to 100 (completely competent). The Cronbach’s alpha for reliability of the scale is .85. The SPCC has been used in similar studies [ 13 , 19 ]. We received permission from the owner of the copyright to translate the scale into Chinese. Translation of the SPCC scale into Chinese by a member of the research team who was fluent in English was followed by back-translation by a different bilingual member of the team to ensure semantic validity of the translated scale. The Cronbach’s alpha value in the present study was .941.

Statistical analysis

Data were analyzed using SPSS for Windows 23 (SPSS Inc., Chicago, IL, USA). Data from the 325 students designated for EFA was used to determine the internal structure evidence of the OPES. The Kaiser-Meyer-Olkin measure for sampling adequacy and Bartlett’s test of sphericity demonstrated factor analysis was appropriate [ 20 ]. Principal component analysis (PCA) was performed on the 26 items to extract the major contributing factors; varimax rotation determined relationships between the items and contributing factors. Factors with an eigenvalue > 1 were further inspected. A factor loading greater than .50 was regarded as significantly relevant [ 21 ].
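The paper reports these analyses in SPSS; purely as a rough illustration of the same EFA workflow in Python (assuming the open-source factor_analyzer package behaves as documented, and a hypothetical CSV file holding the 325 × 26 calibration responses), the steps might look like this:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical file holding the calibration sample (325 respondents x 26 items)
responses = pd.read_csv("opes_calibration_sample.csv")

# Check that factor analysis is appropriate: Bartlett's test of sphericity and KMO
chi_square, p_value = calculate_bartlett_sphericity(responses)
kmo_per_item, kmo_overall = calculate_kmo(responses)

# Principal-component extraction with varimax rotation, mirroring the description above
efa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
efa.fit(responses)

eigenvalues, _ = efa.get_eigenvalues()   # inspect factors with eigenvalue > 1
loadings = pd.DataFrame(efa.loadings_, index=responses.columns)

# Items whose largest absolute loading falls below .50 are candidates for deletion
weak_items = loadings[loadings.abs().max(axis=1) < 0.50].index.tolist()
print(weak_items)
```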

All item deletions were incorporated one by one, and the EFA model was respecified after each deletion, which reduced the number of items in accordance with a priori criteria. In the EFA phase, the internal consistency of each construct was examined using Cronbach’s alpha, with a value of .70 or higher considered acceptable [ 22 ].
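Cronbach’s alpha itself is simple to compute from an item-response matrix; the following minimal sketch uses simulated placeholder responses rather than the study data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items in the construct
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Placeholder: 325 respondents answering a 5-item construct on a 1-5 Likert scale
rng = np.random.default_rng(42)
demo_scores = rng.integers(1, 6, size=(325, 5))
print(f"alpha = {cronbach_alpha(demo_scores):.2f}")  # values of .70 or higher are considered acceptable
```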

Data from the 325 students designated for CFA was used to validate the factor structure of the OPES. In this phase, items with a factor loading less than .50 were deleted [ 21 ]. The goodness of the model fit was assessed using the following: absolute fit indices, including the goodness of fit index (GFI), adjusted goodness of fit index (AGFI), standardized root mean squared residual (SRMR), and the root mean square error of approximation (RMSEA); relative fit indices, the normed and non-normed fit index (NFI and NNFI, respectively), and the comparative fit index (CFI); and the parsimony NFI, CFI, and the likelihood ratio (χ²/df) [ 23 ].
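For reference, two of these indices have standard closed forms in the SEM literature (the paper does not restate them); with χ² and df for the fitted and null (baseline) models and N the sample size:

```latex
\chi^2/df = \frac{\chi^2_{\text{model}}}{df_{\text{model}}},
\qquad
\text{RMSEA} = \sqrt{\frac{\max\!\left(\chi^2_{\text{model}} - df_{\text{model}},\, 0\right)}{df_{\text{model}}\,(N - 1)}},
\qquad
\text{CFI} = 1 - \frac{\max\!\left(\chi^2_{\text{model}} - df_{\text{model}},\, 0\right)}{\max\!\left(\chi^2_{\text{null}} - df_{\text{null}},\ \chi^2_{\text{model}} - df_{\text{model}},\, 0\right)}
```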

In addition to the validity testing, a research team, which included a statistician, determined the appropriateness of either deleting or retaining each item. Convergent validity (the internal quality of the items and factor structures) was further verified using the standardized factor loadings, with values of .50 or higher considered acceptable, and the average variance extracted (AVE), with values of .5 or higher considered acceptable [ 21 ]. Convergent reliability (CR) was assessed using the construct reliability from the CFA, with values of .7 or higher considered acceptable [ 24 ]. The AVE and correlation matrices among the latent constructs were used to establish discriminant validity of the instrument. The square root of the AVE of each construct was required to reach a value larger than the correlation coefficient between itself and the other constructs [ 24 ].
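The AVE, CR, and discriminant-validity checks referred to here follow the usual Fornell and Larcker [ 24 ] formulas, where λ_i are the standardized loadings of the k items on a construct and r_jk is the correlation between constructs j and k:

```latex
\text{AVE} = \frac{\sum_{i=1}^{k} \lambda_i^2}{k},
\qquad
\text{CR} = \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^2}{\left(\sum_{i=1}^{k} \lambda_i\right)^2 + \sum_{i=1}^{k}\left(1 - \lambda_i^2\right)},
\qquad
\sqrt{\text{AVE}_j} > r_{jk} \ \text{for all } k \neq j
```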

The evidence of relationships with other variables was determined by examining the relationship of nursing students’ scores (N = 650) on the newly developed OPES with scores for constructs of communication on the translated PRCA and SPCC scales. We hypothesized that strong self-reported presentation competence would be associated with lower communication apprehension (PRCA) and greater self-perceived communication competence (SPCC).
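A minimal sketch of this correlation check (using simulated placeholder scores, since the raw survey data are not reproduced here):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 650
# Placeholder total scores; real values would come from the OPES, PRCA and SPCC surveys
opes = rng.normal(55, 8, n)
prca = 120 - 0.6 * opes + rng.normal(0, 6, n)  # hypothesis: higher OPES, lower apprehension
spcc = 40 + 0.5 * opes + rng.normal(0, 6, n)   # hypothesis: higher OPES, higher competence

r_prca, p_prca = pearsonr(opes, prca)  # expected negative (the paper reports r = -.51)
r_spcc, p_spcc = pearsonr(opes, spcc)  # expected positive (the paper reports r = .45)
print(f"OPES vs PRCA: r = {r_prca:.2f}; OPES vs SPCC: r = {r_spcc:.2f}")
```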

Development of the OPES: internal structure evidence

EFA was performed sequentially six times, until there were no items with a factor loading < .50 or that were cross-loaded, and six items were deleted (Table 2). The EFA resulted in 20 items with a three-factor solution, which represented 64.75% of the variance of the OPES. The Cronbach’s alpha estimate for the total scale was .94, indicating the scale had sound internal reliability (Table 2). The three factors were labeled in accordance with the item content via a panel discussion and had Cronbach’s alpha values of .93, .89, and .84 for factors 1, 2 and 3, respectively.

Factor 1, Accuracy of Content, was comprised of 11 items and explained 30.03% of the variance. Items in Accuracy of Content evaluated agreement between the topic (theme) and content of the presentation, use of presentation aids to highlight the key points of the presentation, and adherence to time limitations. These items included statements such as: “The content of the presentation matches the theme” (item 7), “Presentation aids, such as PowerPoint and posters, highlight key points of the report” (item 14), and “The organization of the presentation is structured to provide the necessary information, while also adhering to time limitations” (item 9). Factor 2, “Effective Communication”, was comprised of five items, which explained 21.72% of the total variance. Effective Communication evaluated the attitude and expression of the presenter. Statements included “Demonstrates confidence and an appropriate level of enthusiasm” (item 22), “Uses body language in a manner that increases the audience’s interest in learning” (item 21), and “Interacts with the audience using eye contact and a question and answer session” (item 24). Factor 3, “Clarity of Speech” was comprised of four items, which explained 13.00% of the total variance. Factor 3 evaluated the presenter’s pronunciation with statements such as “The words and phrases of the presenter are smooth and fluent” (item 19).

The factor structure of the 20 items from the EFA was examined with CFA. We sequentially removed items 1, 4, 20, 15, and 16, based on modification indices. The resultant 15-item scale had acceptable fit indices for the 3-factor model of the OPES: chi-square (χ²/df = 2.851), RMSEA (.076), NNFI (.933), and CFI (.945). However, the AGFI, which was .876, was below the acceptable criterion of .9. A panel discussion with the researchers determined that items 4, 15, and 16 were similar in meaning to item 14; item 1 was similar in meaning to item 7. Therefore, the panel accepted the results of the modified CFA model of the OPES with 15 items and 3 factors.

As illustrated in Table  3 and Fig.  1 , all standardized factor loadings exceeded the threshold of .50, and the AVE for each construct ranged from .517 to .676, indicating acceptable convergent validity. In addition, the CR was greater than .70 for the three constructs (range = .862 to .901), providing further evidence for the reliability of the instrument [ 25 ]. As shown in Table  4 , all square roots of the AVE for each construct (values in the diagonal elements) were greater than the corresponding inter-construct correlations (values below the diagonal) [ 24 , 25 ]. These findings provide further support for the validity of the OPES.

Figure 1. The standardized estimates of the CFA model for the validation sample.

Development of the OPES: relationships with other variables

Evidence of relationships with other variables was examined using correlation coefficients for the total score and subscale scores of the OPES with the total score and subscale scores of the PRCA and SPCC (Table 5), from all nursing students who participated in the study and completed all three scales (N = 650). Correlation coefficients for the total score of the OPES with total scores for the PRCA and SPCC were −.51 and .45, respectively (both p < .001). Correlation coefficients for subscale scores of the OPES with the subscale scores of the PRCA and SPCC were all significant (p < .001), indicating strong evidence for the validity of the scale as a self-assessment of effective communication.

The 15-item OPES was found to be a reliable and valid instrument for nursing students’ self-assessments of their performance during previous oral presentations. The strength of this study is that the initial items were developed using both literature review and interviews with nurse educators, who were university tutors in oral presentation skills, as well as nursing students at different stages of the educational process. Another strength of this study is the multiple methods used to establish the validity and reliability of the OPES, including internal structure evidence (both EFA and CFA) and relationships with other variables [ 15 , 26 ].

Similar to other oral presentation instruments, content analysis of the OPES items generated from the interviews with educators and students indicated that accuracy of the content of a presentation and effective communication were important factors for a good performance [ 3 , 4 , 5 , 6 , 8 ]. Other studies have also included self-esteem as a factor that can influence the impact of an oral presentation [ 3 ]; in our scale, the subscale of effective communication included the item “Demonstrates confidence and an appropriate level of enthusiasm”, which is a quality related to self-esteem. The third domain was identified as clarity of speech, which is unique to our study.

Constructs that focus on a person’s ability to deliver accurate content are important components for evaluations of classroom speaking because they have been shown to be fundamental elements of public speaking [ 7 ]. Accuracy of content as it applies to oral presentations for nurses is important not only for communicating information involving healthcare education for patients, but also for communicating with team members providing medical care in a clinical setting.

The two other factors identified in the OPES, effective communication and clarity of speech, are similar to constructs for delivery of a presentation, which include interacting with the audience through body-language, eye-contact, and question and answer sessions. These behaviors indicate the presenter is confident and enthusiastic, which engages and captures the attention of an audience. It seems logical that the voice, pronunciation, and fluency of speech were not independent factors because the presenter’s voice qualities all are keys to effectively delivering a presentation. A clear and correct pronunciation, appropriate tone and volume of a presentation assists audiences in more easily receiving and understanding the content.

Our 15-item OPES evaluated the performance based on outcome. The original scale was composed of 26 items that were derived from qualitative interviews with nursing students and university tutors in oral presentations. These items were the result of asking about important qualities at three timepoints of a presentation: before, during, and after. However, most of the items that were deleted were those about the period before the presentation (items 1 to 6); two items (25 and 26) were about the period after the presentation. The analysis therefore did not reflect the qualitative interview data expressed by educators and students regarding the importance of preparing with practice and rehearsal, and the importance of peer and teacher evaluations. Other studies have suggested that preparation and self-reflection are important for a good presentation, which includes awareness of the audience receiving the presentation, meeting the needs of the audience, defining the purpose of the presentation, use of appropriate technology to augment information, and repeated practice to reduce anxiety [ 2 , 5 , 27 ]. However, these items were deleted in the scale validation stage, possibly because it is not possible to objectively evaluate how much time and effort the presenter has devoted to the oral presentation.

The deletion of item 20, “The clothing worn by the presenter is appropriate”, was also not surprising. During the interviews, educators and students expressed different opinions about the importance of clothing for a presentation. Many of the educators believed the presenter should be dressed formally; students believed the presenter should be neatly dressed. These two perspectives might reflect generational differences. However, these results are reminders that assessments should be based on a structured and objective scale, rather than on one’s personal attitudes and stereotypes about what should be important in an oral presentation.

The application of the OPES may be useful not only for educators but also for students. The OPES could be used as a checklist to help students determine how well their presentation matches the 15 items, which could draw attention to deficiencies in their speech before the presentation is given. Once the presentation has been given, the OPES could be used as a self-evaluation form, which could help them make modifications to improve the next presentation. Educators could use the OPES to evaluate a performance during tutoring sessions with students, which could help identify specific areas needing improvement prior to the oral presentation. Although analysis of the scale was based on data from nursing students, additional assessments with other populations of healthcare students should be conducted to determine if the OPES is applicable for evaluating oral presentations for students in general.

Limitations

This study had several limitations. Participants were selected by non-random sampling; therefore, additional studies with nursing students from other nursing schools would strengthen the validity and reliability of the scale. In addition, the OPES was developed using empirical data, rather than being based on a theoretical framework, such as anxiety and public speaking. Therefore, the validity of the OPES for use in other types of student populations or cultures that differ significantly from our sample population should be established in future studies. Finally, the OPES was examined in this study as a self-assessment instrument for nursing students who rated themselves based on their perceived abilities in previous oral presentations, rather than through peer or nurse educator evaluations. Therefore, the applicability of the scale as an assessment instrument for educators providing an objective score of nursing students’ real-life oral presentations needs to be validated in future studies.

This newly developed 15-item OPES is the first report of a valid self-assessment instrument for providing nursing students with feedback about whether the necessary targets for a successful oral presentation are reached. Therefore, it could be adopted as a self-assessment instrument for nursing students when learning which oral presentation skills require strengthening. However, further studies are needed to determine if the OPES is a valid instrument for use by student peers or nursing educators evaluating student presentations across nursing programs.

Availability of data and materials

The datasets and materials of this study are available from the corresponding author on request.

References

Hadfield-Law L. Presentation skills. Presentation skills for nurses: how to prepare more effectively. Br J Nurs. 2001;10(18):1208–11.


Longo A, Tierney C. Presentation skills for the nurse educator. J Nurses Staff Dev. 2012;28(1):16–23.

Elfering A, Grebner S. Getting used to academic public speaking: global self-esteem predicts habituation in blood pressure response to repeated thesis presentations. Appl Psychophysiol Biofeedback. 2012;37(2):109–20.

Turner K, Roberts L, Heal C, Wright L. Oral presentation as a form of summative assessment in a master’s level PGCE module: the student perspective. Assess Eval High Educ. 2013;38(6):662–73.

Liao H-A. Examining the role of collaborative learning in a public speaking course. Coll Teach. 2014;62(2):47–54.

Tsang A. Positive effects of a programme on oral presentation skills: high- and low-proficient learners' self-evaluations and perspectives. Assess Eval High Educ. 2018;43(5):760–71.

Carlson RE, Smith-Howell D. Classroom public speaking assessment: reliability and validity of selected evaluation instruments. Commun Educ. 1995;44:87–97.

Langan AM, Wheater CP, Shaw EM, Haines BJ, Cullen WR, Boyle JC, et al. Peer assessment of oral presentations: effects of student gender, university affiliation and participation in the development of assessment criteria. Assess Eval High Educ. 2005;30(1):21–34.

De Grez L, Valcke M, Roozen I. The impact of an innovative instructional intervention on the acquisition of oral presentation skills in higher education. Comput Educ. 2009;53(1):112–20.

Murillo-Zamorano LR, Montanero M. Oral presentations in higher education: a comparison of the impact of peer and teacher feedback. Assess Eval High Educ. 2018;43(1):138–50.

Polit DF, Beck CT. The content validity index: are you sure you know what’s being reported? Critique and recommendations. Res Nurs Health. 2006;29(5):489–97.

McCroskey CJ. Oral communication apprehension: a summary of recent theory and research. Hum Commun Res. 1977;4(1):78–96.

Dupagne M, Stacks DW, Giroux VM. Effects of video streaming technology on public speaking Students' communication apprehension and competence. J Educ Technol Syst. 2007;35(4):479–90.

Kim JY. The effect of personality, situational factors, and communication apprehension on a blended communication course. Indian J Sci Technol. 2015;8(S1):528–34.

Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: theory and application. Am J Med. 2006;119(2):166 e7–16.

Pearson JC, Child JT, DeGreeff BL, Semlak JL, Burnett A. The influence of biological sex, self-esteem, and communication apprehension on unwillingness to communicate. Atl J Commun. 2011;19(4):216–27.

Degner RK. Prevalence of communication apprehension at a community college. Int J Interdiscip Soc Sci. 2010;5(6):183–91.


McCroskey JC. An introduction to rhetorical communication. 4th ed. Englewood Cliffs, NJ: Prentice-Hall; 1982.

Hancock AB, Stone MD, Brundage SB, Zeigler MT. Public speaking attitudes: does curriculum make a difference? J Voice. 2010;24(3):302–7.

Nunnally JC, Bernstein IH. Psychometric theory. New York: McGraw-Hill; 1994.

Hair JF, Black B, Babin B, Anderson RE, Tatham RL. Multivariate data analysis. 6th ed. Upper Saddle River, NJ: Prentice-Hall; 2006.

DeVellis RF. Scale development: theory and applications. 2nd ed. Thousand Oaks, CA: SAGE; 2003.

Bentler PM. On the fit of models to covariances and methodology to the bulletin. Psychol Bull. 1992;112(3):400–4.

Fornell C, Larcker D. Evaluating structural equation models with unobservable variables and measurement error. J Mark Res. 1981;18:39–50.

Hair JF, Black WC, Babin BJ, Anderson RE. Multivariate data analysis: a global perspective. 7th ed. Upper Saddle River, NJ: Pearson Prentice Hall; 2009.

Downing SM. Validity: on meaningful interpretation of assessment data. Med Educ. 2003;37(9):830–7.

Foulkes M. Presentation skills for nurses. Nurs Stand. 2015;29(25):52–8.


Acknowledgements

The authors thank all the participants for their kind cooperation and contribution to the study.

This study was supported by grants from the Ministry of Science and Technology Taiwan (MOST 107–2511-H-255-007), Ministry of Education (PSR1090283), and the Chang Gung Medical Research Fund (CMRPF3K0021, BMRP704, BMRPA63).

Author information

Authors and affiliations

Department of Nursing, Chang Gung University of Science and Technology, Division of Pediatric Hematology and Oncology, Linkou Chang Gung Memorial Hospital, Taoyuan City, Taiwan, Republic of China

Yi-Chien Chiang

Department of Nursing, Chang Gung University of Science and Technology, Taoyuan City, Taiwan, Republic of China

Hsiang-Chun Lee & Chia-Ling Wu

Administration Center of Quality Management Department, Chang Gung Medical Foundation, Taoyuan City, Taiwan, Republic of China

Tsung-Lan Chu

Department of Nursing, Chang Gung University of Science and Technology; Administration Center of Quality Management Department, Linkou Chang Gung Memorial Hospital, No.261, Wenhua 1st Rd., Guishan Dist, Taoyuan City, 333 03, Taiwan, Republic of China

Ya-Chu Hsiao


Contributions

All authors conceptualized and designed the study. Data were collected by Y-CH and H-CL. Data analysis was conducted by Y-CH and Y-CC. The first draft of the manuscript was written by Y-CH, Y-CC, and all authors contributed to subsequent revisions. All authors read and approved the final submission.

Corresponding author

Correspondence to Ya-Chu Hsiao.

Ethics declarations

Ethics approval and consent to participate.

All study methods and materials were performed in accordance with the Declaration of Helsinki. The study protocol and procedures were approved by the Chang Gung Medical Foundation Institutional Review Board (number: 201702148B0), and participants’ confidentiality was protected. All participants received oral and written explanations of the study and its procedures, and informed consent was obtained from all subjects.

Consent for publication

Not applicable.

Competing interests

No conflict of interest has been declared by the authors.



About this article

Cite this article.

Chiang, YC., Lee, HC., Chu, TL. et al. Development and validation of the oral presentation evaluation scale (OPES) for nursing students. BMC Med Educ 22, 318 (2022). https://doi.org/10.1186/s12909-022-03376-w


Received : 25 February 2021

Accepted : 14 April 2022

Published : 26 April 2022

DOI : https://doi.org/10.1186/s12909-022-03376-w


  • Nurse educators
  • Nursing students
  • Oral presentation
  • Scale development

BMC Medical Education

ISSN: 1472-6920

Assessing oral presentation performance: designing a rubric and testing its validity with an expert group

Journal of Applied Research in Higher Education

ISSN : 2050-7003

Article publication date: 3 July 2017

The purpose of this paper is to design a rubric instrument for assessing oral presentation performance in higher education and to test its validity with an expert group.

Design/methodology/approach

This study, using mixed methods, focusses on: designing a rubric by identifying assessment instruments in previous presentation research and implementing essential design characteristics in a preliminary developed rubric; and testing the validity of the constructed instrument with an expert group of higher educational professionals (n = 38).

Findings

The result of this study is a validated rubric instrument consisting of 11 presentation criteria, their related levels in performance, and a five-point scoring scale. These adopted criteria correspond to the widely accepted main criteria for presentations, in both literature and educational practice, regarding aspects such as content of the presentation, structure of the presentation, interaction with the audience and presentation delivery.

Practical implications

Implications for the use of the rubric instrument in educational practice refer to the extent to which the identified criteria should be adapted to the requirements of presenting in a certain domain and whether the amount and complexity of the information in the rubric, as criteria, levels and scales, can be used in an adequate manner within formative assessment processes.

Originality/value

This instrument offers the opportunity to formatively assess students’ oral presentation performance, since rubrics explicate criteria and expectations. Furthermore, such an instrument also facilitates feedback and self-assessment processes. Finally, the rubric, resulting from this study, could be used in future quasi-experimental studies to measure students’ development in presentation performance in a pre-and post-test situation.

  • Higher education
  • Oral presentation competence

Van Ginkel, S. , Laurentzen, R. , Mulder, M. , Mononen, A. , Kyttä, J. and Kortelainen, M.J. (2017), "Assessing oral presentation performance: Designing a rubric and testing its validity with an expert group", Journal of Applied Research in Higher Education , Vol. 9 No. 3, pp. 474-486. https://doi.org/10.1108/JARHE-02-2016-0012




Teach the Earth: the portal for Earth Education

From NAGT's On the Cutting Edge Collection




Understanding What Our Geoscience Students Are Learning: Observing and Assessing Topical Resources


Related Links

  • A guide to Professional Communications Projects, with examples and grading rubrics
  • Resources about Speaking Effectively from the State Your Case project

Assessment By Oral Presentation

What is assessment by oral presentation?

Oral presentations are often used to assess student learning from individual and group student research projects.

Oral Presentation Assessment Tips for Instructors:

  • Oral Presentation Tips and Peer Evaluation Questions: Laura Goering, Carleton College, developed these tips and a student evaluation template for the Carleton College Perlman Center for Learning and Teaching.
  • Oral Report Evaluation Rubric (Microsoft Word 56kB Jul6 07) from Mark France's Gallery Walk page.
  • Information on developing scoring rubrics.
  • Information on developing instructional rubrics.
  • If students are giving group presentations, the Student Peer Assessment Rubric for Group Work (Microsoft Word 37kB May20 05) can be useful for having students assess the individuals in their groups.
  • The Assessing Project Based Learning page on the Starting Point website uses rubrics to assess oral presentations.
  • For an example of how to incorporate a rubric into a class, see the Environmental Assessment course.
  • Oral Presentation Assessment Examples: a browse listing of example courses that have incorporated oral presentations.
  • Effective Speaking Resources from the State Your Case project: a handful of useful resources about speaking effectively and giving successful oral presentations.
  • Professional Communications Projects: learn more about this teaching method, which asks students to communicate scientific information effectively in a genre that professional scientists are expected to master, such as scientific posters, conference proposals or oral presentations.


Faculty Learning Hub


Oral assessments: benefits, drawbacks, and considerations.

What is the best way to measure achievement of course learning outcomes? As modalities for teaching and learning continue to evolve, it is important to refresh assessment practices with student success and academic integrity in mind.

Oral exams and assessments are not a new practice, although they tend to be more common in European countries than in North America (Sayre, 2014). A well-constructed oral exam as one element of an overall assessment strategy can provide many benefits for the learner and the evaluator. This teaching tip will explore those benefits, weigh potential drawbacks, and finish with considerations for implementing oral assessment in your course.

Note: An oral assessment must be listed as such on the course outline, as it is a distinct type of evaluation and students must be aware of its inclusion from the start of the course.

What is an oral exam? Typically, an oral exam is one in which students are provided in advance with topics or questions that cover a set of course outcomes. During the oral exam, which can involve sitting down with the professor in person or on Zoom, or recording a timed video, the student is randomly given 1–2 prompts and must explain or reply in the set time. Here is a Danish student explaining her oral exam experience.

Potential Benefits

  • Oral exams are versatile. Orals have been used successfully in a variety of disciplines, including mathematics, religious studies, business, physics, medicine, and modern languages (Hazen, 2020).
  • Oral exams provide evidence of and support for higher-order thinking and problem-solving skills. A large part of STEM-related learning consists of learning how to problem-solve. Many written tests make exclusive use of shorter problems so that a student who hits a roadblock is not penalized too heavily. In an oral exam, on the other hand, the examiner can provide supportive prompts so that students overcome any stumbling blocks and both practice and demonstrate more fully their problem-solving skills. This makes an oral exam both kinder and more thorough than a written exam (Sayre, 2014; Iannone and Simpson, 2015).
  • Oral exams can encourage better learning outcomes. The act of explaining an answer to the examiner adds to the student's learning, making the test an opportunity for further learning (Sayre, 2014; Zhao, 2018).
  • Oral exams potentially alter the way students study. Students in Iannone and Simpson's study focused more on understanding and less on memorization when they knew that they would be examined orally. Students may also study harder for an in-person test than for a written test (Iannone and Simpson, 2015; Zhao, 2018; Hazen, 2020).
  • Oral exams help students develop authentic communication skills in their discipline. Sayre suggests that oral tests help her students learn how to talk like scientists. Oral tests allow students to develop the ability to communicate in skill areas they will need later in the workplace (Sayre, 2014; Hazen, 2020).
  • Oral exams, as one examination tool among others, can increase academic integrity. While oral exams should not be used solely for the purpose of curbing cheating, it is difficult to cheat in an oral exam, depending on how it is set up (Iannone and Simpson, 2015; Hazen, 2020).
  • Oral exams provide an alternative for demonstrating achievement of course learning outcomes. Students who are more comfortable with speaking than with writing will benefit from a portion of their assessment grade being allocated to an oral assessment (Iannone and Simpson, 2015, p. 973). An oral exam component of an overall assessment strategy is therefore in keeping with the principles of Universal Design for Learning.
  • Oral exams provide an alternative means of expression. Oral exams may suit some students better than written demonstrations, depending on their strengths and abilities.

Potential Drawbacks

  • Time commitment. Oral tests are less work to administer and mark than essay exams but take more time than self-grading multiple-choice exams (Hazen, 2020). It is important to ask fewer questions in order to retain the benefit of the more in-depth responses which oral exams offer (Sayre, 2014). A well-constructed marking rubric will be important to help guide marking.
  • Potential bias. Of course, anonymity is not possible for an oral test; however, all assessments contain a potential for bias, and it is unclear whether bias is a greater factor for orals than for other assessment methods. Math students in Iannone and Simpson's study discussed a perceived lack of fairness when hearing that their fellow students were asked more questions, but felt that videos of their oral test performance would guarantee a degree of fairness (Iannone and Simpson, 2015; Hazen, 2020).
  • Test anxiety. Iannone and Simpson cite studies showing that test anxiety decreases with increased familiarity and understanding of the benefits of oral testing. Their own study demonstrated that math students taking an oral test for the first time were anxious at feeling exposed if they could not answer the questions (Iannone and Simpson, 2015).

Considerations

Have you thought it through? Would you like to add an oral exam to the evaluation scheme in your course outline? Here are some considerations for getting the best out of an oral exam.

  • Keep it short. Oral tests allow the examiner to interject with conceptual questions and discussion of a given problem. This means that fewer questions should be asked. A good oral test for smaller classes will likely take no longer than 20 minutes and will be structured more like an interview than a list of questions (Sayre, 2014). Oral tests for larger classes can be shorter, but should then be worth less of the overall grade.
  • Consider open-book. To deal with pre-exam anxiety, oral tests can be open-book or open-note. This allows students to check formulas or charts and focus more on their ability to apply relevant information to problem-solving than on memory retrieval (Sayre, 2014, p. 31).
  • Weight appropriately. The time and complexity of an assessment should correspond to its weight in the overall course marking scheme. For example, an oral test worth 5% may have only one question marked according to one criterion, while a test worth 20% of the final grade should provide the student with a complex opportunity to demonstrate knowledge and ability and should be marked based on multiple criteria.
  • Prepare for grading the exam. Grading does not have to be onerous; the examiner can take notes during the exam, with a rubric that grades for problem-solving, not just the right answer (Sayre, 2014). With a good rubric, the marks should be consistent from student to student.
  • Ensure that the marking scheme is valid. To provide students with meaningful feedback and grades, consider a rubric with sufficient levels to capture the range of achievement. The example below is set up for one criterion per question.
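
As a minimal sketch of what such a one-criterion-per-question marking scheme could look like in practice (the level names, descriptors, questions, and student ID below are invented for illustration, not drawn from any particular course):

```python
# Sketch of a one-criterion-per-question oral exam rubric.
# Level names, descriptors, questions, and marks are illustrative only.

LEVELS = {
    4: "Excellent: complete, well-reasoned answer; handles follow-up probes with ease",
    3: "Competent: mostly sound reasoning; minor gaps closed after one prompt",
    2: "Developing: partial answer; substantial prompting needed to reach a conclusion",
    1: "Beginning: little relevant reasoning, even with prompting",
    0: "No attempt",
}

def record_score(student: str, question: str, level: int, notes: str = "") -> dict:
    """Record one examiner judgement for one question against the scale above."""
    if level not in LEVELS:
        raise ValueError(f"level must be one of {sorted(LEVELS)}")
    return {"student": student, "question": question,
            "level": level, "descriptor": LEVELS[level], "notes": notes}

# Example: a short oral test with two questions, one criterion each.
marks = [
    record_score("A123", "Q1: interpret the data set on your slide", 3, "needed one prompt"),
    record_score("A123", "Q2: justify your choice of method", 4),
]
total = sum(m["level"] for m in marks)
print(f"{total} / {len(marks) * max(LEVELS)}")   # 7 / 8
```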

Setting up Simple Oral Assessments

While oral assessments are meant to be short, they can be more or less formal. For informal formative assessment, one teacher describes 60-second interviews asking students to explain concepts that they may have had difficulty with on a test. You can arrange for 5-to-10-minute interviews with each student in the class, or you can arrange for students to create a video of themselves talking through a question or a problem using the video feature in the eConestoga assignments tool.  Here are suggested steps for setting up an oral assessment.

  • Align the test with the course learning outcomes and include it in the course outline.
  • Decide how long the test will be.
  • Create a question pool with a corresponding rubric.
  • Prepare students by explaining the test and by providing opportunities to practice.
  • If the test will be asynchronous, provide a practice opportunity so students know how to use the technology.
  • Consider recording the test in case of questions later.
  • Consider how to assist students in signing up for a time.
  • Consider where students will wait for their turn and how much class time the test will take.
  • Panopto course folders provide a platform for students to record.
  • If you wish to randomize questions, you can set up an eConestoga quiz with random questions and a link to Panopto; a simple tool-agnostic alternative is sketched after this list.
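
The eConestoga and Panopto workflow above is specific to one LMS; the sketch below shows a tool-agnostic way to draw random prompts per student with an ordinary script (the question pool, student IDs, seed, and output file name are all placeholders):

```python
import csv
import random

# Placeholder question pool; in practice these would come from your question bank.
POOL = [
    "Explain the main trade-off discussed in week 3 and give an example.",
    "Walk through how you would approach problem type A.",
    "Interpret the result of experiment B and state one limitation.",
    "Compare methods C and D for the scenario on your slide.",
]

STUDENTS = ["A101", "A102", "A103"]   # placeholder student IDs
PROMPTS_PER_STUDENT = 2

rng = random.Random(2024)             # fixed seed so the draw can be reproduced or audited
allocation = {s: rng.sample(POOL, PROMPTS_PER_STUDENT) for s in STUDENTS}

# Keep the allocation on file alongside any recordings, in case of questions later.
with open("oral_exam_prompts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["student", "prompt"])
    for student, prompts in allocation.items():
        for prompt in prompts:
            writer.writerow([student, prompt])
```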

Further Supports

Guidelines on Online Oral Exams for Lecturers

Hazen, H. (2020). Use of oral examinations to assess student learning in the social sciences. Journal of Geography in Higher Education , 44 (4), 592–607. https://doi.org/10.1080/03098265.2020.1773418

Iannone, P. & Simpson, A. (2015). Students’ views of oral performance assessment in mathematics: straddling the “assessment of” and “assessment for” learning divide.  Assessment and Evaluation in Higher Education ,  40 (7), 971–987. https://doi.org/10.1080/02602938.2014.961124

Sayre, E. (2014). Oral exams as a tool for teaching and assessment.  Teaching Science (Deakin West, A.C.T.) ,  60 (2), 29–33. https://doi.org/10.3316/aeipt.203840

Zhao, Y. (2018). Impact of oral exams on a thermodynamics course performance . Paper presented at 2018 ASEE Zone IV Conference, Boulder, Colorado. https://peer.asee.org/29617


Laura Stoutenburg

A college professor and accredited TESL trainer for more than 20 years, Laura Stoutenburg, holding an M.A., has taught and developed curricula for a variety of topics, with her work including language assessment in China and Canada. Before joining Teaching and Learning as a consultant, Laura coordinated Conestoga’s TESL Certificate and English Language Studies programs. She specializes in matters related to Intercultural Teaching and language acquisition, and is available at the Kitchener Downtown Campus.


Rubrics for Oral Presentations

Introduction.

Many instructors require students to give oral presentations, which they evaluate and count in students’ grades. It is important that instructors clarify their goals for these presentations as well as the student learning objectives to which they are related. Embedding the assignment in course goals and learning objectives allows instructors to be clear with students about their expectations and to develop a rubric for evaluating the presentations.

A rubric is a scoring guide that articulates and assesses specific components and expectations for an assignment. Rubrics identify the various criteria relevant to an assignment and then explicitly state the possible levels of achievement along a continuum, so that an effective rubric accurately reflects the expectations of an assignment. Using a rubric to evaluate student performance has advantages for both instructors and students (see Creating Rubrics below).

Rubrics can be either analytic or holistic. An analytic rubric comprises a set of specific criteria, with each one evaluated separately and receiving a separate score. The template resembles a grid with the criteria listed in the left column and levels of performance listed across the top row, using numbers and/or descriptors. The cells within the center of the rubric contain descriptions of what expected performance looks like for each level of performance.

A holistic rubric consists of a set of descriptors that generate a single, global score for the entire work. The single score is based on raters’ overall perception of the quality of the performance. Often, sentence- or paragraph-length descriptions of different levels of competencies are provided.

When applied to an oral presentation, rubrics should reflect the elements of the presentation that will be evaluated as well as their relative importance. Thus, the instructor must decide whether to include dimensions relevant to both form and content and, if so, which ones. The instructor must also decide how to weight each of the dimensions: are they all equally important, or are some more important than others? Finally, if the presentation represents a group project, the instructor must decide how to balance the grading of individual and group contributions.
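
To make the arithmetic of these decisions concrete, here is a minimal sketch (the criteria, weights, scale, and ratings are invented for illustration) of how an analytic rubric with weighted dimensions combines per-criterion ratings, in contrast to the single global judgement of a holistic rubric:

```python
# Analytic rubric: each criterion is rated separately and the ratings are combined,
# optionally with unequal weights. Criteria, weights, and ratings are illustrative only.

ANALYTIC_RUBRIC = {                       # criterion -> weight (weights sum to 1.0)
    "Knowledge of content": 0.4,
    "Organization of content": 0.3,
    "Delivery (pace, volume, eye contact)": 0.2,
    "Visual aids": 0.1,
}

def analytic_score(ratings: dict, rubric: dict, scale_max: int = 5) -> float:
    """Weighted percentage from per-criterion ratings on a 1..scale_max scale."""
    assert abs(sum(rubric.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return 100 * sum(rubric[criterion] * ratings[criterion] / scale_max
                     for criterion in rubric)

ratings = {                               # one student's presentation, rated criterion by criterion
    "Knowledge of content": 4,
    "Organization of content": 3,
    "Delivery (pace, volume, eye contact)": 5,
    "Visual aids": 4,
}
print(round(analytic_score(ratings, ANALYTIC_RUBRIC), 1))   # 78.0

# A holistic rubric, by contrast, records a single global judgement:
holistic_score = 4                        # e.g. "Competent" on a 1..5 global scale
```

The same structure extends to group presentations by keeping separate weighted criteria for group-level and individual-level contributions.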

Creating Rubrics

The steps for creating an analytic rubric include the following (a small structural sketch follows the list):

1. Clarify the purpose of the assignment. What learning objectives are associated with the assignment?

2. Look for existing rubrics that can be adopted or adapted for the specific assignment

3. Define the criteria to be evaluated

4. Choose the rating scale to measure levels of performance

5. Write descriptions for each criterion for each performance level of the rating scale

6. Test and revise the rubric
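
Steps 3 to 5 amount to filling in a grid of criteria, levels, and descriptors; the sketch below shows one way to hold that grid (criteria, level names, and descriptor wording are invented), with a simple completeness check that can form part of step 6 before the rubric is piloted:

```python
# An analytic rubric as a grid: criterion x performance level -> descriptor.
# All criteria, level names, and descriptors are illustrative only.

LEVELS = ["Exemplary", "Competent", "Developing"]

RUBRIC = {
    "Organization of content": {
        "Exemplary":  "Clear structure; transitions guide the audience throughout.",
        "Competent":  "Recognisable structure; occasional abrupt transitions.",
        "Developing": "Ideas presented in no clear order.",
    },
    "Eye contact/audience engagement": {
        "Exemplary":  "Sustained eye contact; responds to audience cues.",
        "Competent":  "Intermittent eye contact; limited response to the audience.",
        "Developing": "Reads from notes or slides with little audience contact.",
    },
}

# A first "test" of the rubric: every criterion needs a descriptor for every level.
for criterion, cells in RUBRIC.items():
    missing = [level for level in LEVELS if level not in cells]
    assert not missing, f"{criterion} lacks descriptors for: {missing}"
print(f"Rubric grid complete: {len(RUBRIC)} criteria x {len(LEVELS)} levels")
```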

Examples of criteria that have been included in rubrics for evaluating oral presentations include:

  • Knowledge of content
  • Organization of content
  • Presentation of ideas
  • Research/sources
  • Visual aids/handouts
  • Language clarity
  • Grammatical correctness
  • Time management
  • Volume of speech
  • Rate/pacing of speech
  • Mannerisms/gestures
  • Eye contact/audience engagement

Examples of scales/ratings that have been used to rate student performance include:

  • Strong, Satisfactory, Weak
  • Beginning, Intermediate, High
  • Exemplary, Competent, Developing
  • Excellent, Competent, Needs Work
  • Exceeds Standard, Meets Standard, Approaching Standard, Below Standard
  • Exemplary, Proficient, Developing, Novice
  • Excellent, Good, Marginal, Unacceptable
  • Advanced, Intermediate High, Intermediate, Developing
  • Exceptional, Above Average, Sufficient, Minimal, Poor
  • Master, Distinguished, Proficient, Intermediate, Novice
  • Excellent, Good, Satisfactory, Poor, Unacceptable
  • Always, Often, Sometimes, Rarely, Never
  • Exemplary, Accomplished, Acceptable, Minimally Acceptable, Emerging, Unacceptable

Grading and Performance Rubrics Carnegie Mellon University Eberly Center for Teaching Excellence & Educational Innovation

Creating and Using Rubrics Carnegie Mellon University Eberly Center for Teaching Excellence & Educational Innovation

Using Rubrics Cornell University Center for Teaching Innovation

Rubrics DePaul University Teaching Commons

Building a Rubric University of Texas/Austin Faculty Innovation Center

Building a Rubric Columbia University Center for Teaching and Learning

Rubric Development University of West Florida Center for University Teaching, Learning, and Assessment

Creating and Using Rubrics Yale University Poorvu Center for Teaching and Learning

Designing Grading Rubrics Brown University Sheridan Center for Teaching and Learning

Examples of Oral Presentation Rubrics

Oral Presentation Rubric Pomona College Teaching and Learning Center

Oral Presentation Evaluation Rubric University of Michigan

Oral Presentation Rubric Roanoke College

Oral Presentation: Scoring Guide Fresno State University Office of Institutional Effectiveness

Presentation Skills Rubric State University of New York/New Paltz School of Business

Oral Presentation Rubric Oregon State University Center for Teaching and Learning

Oral Presentation Rubric Purdue University College of Science

Group Class Presentation Sample Rubric Pepperdine University Graziadio Business School

Self- and Peer Assessment of Oral Presentation in Advanced Chinese Classrooms: An Exploratory Study

  • First Online: 13 April 2017

Cite this chapter


  • Dan Wang

Part of the book series: Chinese Language Learning Sciences (CLLS)


Self- and peer assessment allows students to play a greater role in the assessment process. Twenty-one non-heritage undergraduate students who took the Advanced Chinese course at Duke University were involved in a project on self- and peer assessment of oral presentations. The project included rubric designing, training, practice, observation, evaluation, discussion, survey, and feedback. The assessment components were designed by the instructor and the students collaboratively during the first week of the course. They included content and organization of presentation, vocabulary and grammar, fluency and voice, accuracy of pronunciation, posture, and support. In addition to scoring for each of these components, the students were also asked to provide written comments on their own presentation and those of their peers. Self-, peer, and instructor assessments were analyzed and compared quantitatively and qualitatively. The results showed that the practice and discussion in the training session had a positive effect on the accuracy of students’ self- and peer assessment. Over 90% of the students liked participating in the assessment process and thought the self- and peer assessment conducive to their Chinese language learning. This study highlights the potential pedagogical benefits of involving students in assessment at both the cognitive and affective levels.




Acknowledgements

The research described in this paper was conducted when I worked at the Department of Asian and Middle East Studies, Duke University.

Author information

Authors and affiliations.

University of Tennessee, Knoxville, USA


Corresponding author

Correspondence to Dan Wang.

Editor information

Editors and affiliations.

Department of Teacher Education, Michigan State University, East Lansing, Michigan, USA

Dongbo Zhang

Department of Counseling Educational Psychology, and Special Education, Michigan State University, East Lansing, Michigan, USA

Chin-Hsi Lin


Copyright information

© 2017 Springer Nature Singapore Pte Ltd.

About this chapter

Wang, D. (2017). Self- and Peer Assessment of Oral Presentation in Advanced Chinese Classrooms: An Exploratory Study. In: Zhang, D., Lin, CH. (eds) Chinese as a Second Language Assessment. Chinese Language Learning Sciences. Springer, Singapore. https://doi.org/10.1007/978-981-10-4089-4_13


DOI : https://doi.org/10.1007/978-981-10-4089-4_13

Published : 13 April 2017

Publisher Name : Springer, Singapore

Print ISBN : 978-981-10-4087-0

Online ISBN : 978-981-10-4089-4





Scholars180: An effective oral presentation assessment for optometry students

Khyber Alam

1 Discipline of Optometry, School of Allied Health, The University of Western Australia, Perth, Western Australia, Australia

Xinyi Zheng

Kenneth Lee

2 Discipline of Pharmacy, School of Allied Health, The University of Western Australia, Perth, Western Australia, Australia

Associated Data

All relevant data are within the manuscript and its Supporting information files.

Abstract

Oral presentation assessments are multifunctional tools that can potentially test all six cognitive domains of Bloom’s taxonomy. Yet, they are not used as frequently as other forms of assessment in curricula due to time limitations. Hence, effective oral presentation assessments that can overcome this limitation need to be designed. The purpose of this study was to investigate whether Scholars180, an oral presentation assessment developed for optometry students, would effectively help students improve their knowledge of and confidence in the identification and management of ocular diseases. This study utilized a non-randomized pre-questionnaire and post-questionnaire design in which the participants (n = 31) were asked to assess their knowledge of ocular diseases before and after the oral presentation. The questionnaire was developed according to the unit outcomes. The responses to each of the 12 Likert-type scale questions on the post-questionnaire were compared with the respective responses on the pre-questionnaire. Students (n = 31) experienced improvements in their knowledge of eye diseases and even more so in their confidence and application of their knowledge. This was indicated by the statistically significant increases in median scores and low interquartile ranges (IQR) of ≤1.0. The peer evaluation also illustrated that students felt the assessment contributed positively to their learning experience. Teachers require a variety of assessment methods to accurately test students’ authentic depth of knowledge and achievement of learning outcomes. Scholars180 is an effective assessment that follows constructive alignment and overcomes time limitations, providing teachers with an assessment to consider implementing in the future.

Introduction

Traditional assessments involve written and oral formats. Written assessments can be advantageous as they are a familiar tool for testing students’ comprehension of content, and they are more objective than oral assessments, which increases their reliability [1]. However, oral assessments should be used to supplement written ones, as some learning outcomes are infeasible to test in written form [2]. Unlike the static responses given in written assessments, the oral format allows the assessor not only to test verbal communication but also to probe with questions to gain a more well-rounded understanding of the student’s knowledge and way of thinking [2]. The oral assessment format allows the potential evaluation of all six cognitive domains of Bloom’s taxonomy, which are knowledge, comprehension, application, analysis, evaluation, and creation [2]. This makes it a valuable multifunctional assessment type. Additionally, assessments can also be classified into summative and formative assessments [3]. Examples of summative assessments are final exams, standardized tests, and final reports, whilst formative assessments could include homework assignments, weekly quizzes, and in-class discussions. For effective evaluation, formative assessments should be designed to help strengthen summative assessments [3].

The university has the responsibility to ensure graduate optometrists have been equipped with relevant skills before entering the workforce. Optometric competencies include efficiently performing all optometric procedures, recognizing the importance of effective communication, demonstrating the ability to assess and interpret information, contributing to the creation of new knowledge through research, and being a reflective practitioner to ensure growth and development [4]. As accrediting bodies, such as the Optometry Council of Australia and New Zealand (OCANZ), become stricter about accreditation standards, universities are required to demonstrate a clear and methodical approach to curriculum development, learning outcomes, and relevant assessments [5]. Thus, the multifunctional nature of oral presentation assessments should make them an attractive assessment type, in addition to other forms of assessment, for demonstrating the achievement of learning objectives, but this is not always the case. Despite the need for oral presentation assessments, they are not as commonly used as written assessments. One of the major factors that prevent teachers from utilizing them is the long assessment time associated with conducting oral presentations [6]. Yet, when assessment time is reduced, the challenge is to maintain the effectiveness of the assessment, that is, to leave students sufficient scope to achieve the pre-defined learning outcomes and showcase their knowledge.

Although there is a range of existing oral assessments available, there were no specific oral presentation assessments that aligned with the purpose of testing the optometry students on their knowledge and management of ocular diseases. To address this issue, The University of Western Australia (UWA) designed Scholars180, which is an oral presentation assessment for first-year optometry students. UWA’s optometry course is based on the competency standards for optometrists in Australia and New Zealand. The competency standards include communication, patient examination, diagnosis and management, and health information management. Constructive alignment was used when designing Scholars180 as it was based on the learning outcomes of a clinical unit and the objectives of the learning events in the semester. Thus, Scholars180 can help display student achievement in these competencies.

For Scholars180, students received 12 weeks to prepare a short 3-minute individual oral presentation about an ocular disease to their peers and teacher. This presentation style is based on The University of Queensland’s (UQ) Three Minute Thesis (3MT) format [ 7 ] with the addition of a Q&A section. The 3MT format not only challenges students to condense a large amount of information into 180 seconds alongside a single static slide, but it also needs to be done in a manner that is easily comprehensible to an audience who are not experts in the field [ 7 ]. Through this assessment, students experienced active and peer learning. Therefore, it was hypothesized that students will experience an improvement in their knowledge of ocular diseases and confidence in the identification and management of ocular diseases.

Methodology

The Scholars180 assessment is an oral presentation that encompasses 4 components:

  • Baseline assessment of knowledge
  • Oral presentation
  • Q&A session
  • Post-presentation ‘experiences’ questionnaire

The participating students in this study were first-year optometry students, and thus recruitment for their participation in the study was conducted at the beginning of the semester. The study was approved by The University of Western Australia (approval no. 2021/ET000652). Written informed consent was obtained from all participants prior to enrolment in the study. Informed consent was required as participants were asked to complete tests and the researchers used the data collected. All participants were informed of possible publication and written consent was obtained. All first-year optometry students were given the opportunity to participate in this study unless they did not consent to participate. The authors declare they have no conflicts of interest, and this research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

In this study, the students’ baseline knowledge of all the eye conditions listed in Table 1 was assessed prior to the oral presentation. After 12 weeks, a post-presentation ‘experiences’ questionnaire was used to compare the changes that students experienced after undergoing the assessment. Additionally, the questionnaires developed were based on the unit’s learning outcomes to see if they had been achieved. For the oral presentation component, the students were first allocated an eye disease as per Table 1 in week 1 of the semester as part of their introductory lecture. The diseases or topics listed in Table 1 were based on some of the most prevalent eye conditions around the world. In addition, these topics were linked to specific cases which were scheduled to be taught in the second and third years of the Optometry course. The students then had 12 weeks to develop an oral presentation (only one static slide allowed) to encompass the background knowledge related to the eye disease, its pathophysiology, signs and symptoms, prevalence, and treatment options. The problem-based learning (PBL) cases throughout the semester were designed in such a way that they fed the students sufficient information to structure their presentations. For the presentation itself, the students had 180 seconds to individually present the eye disease to their peers. The assessment was graded using the rubric found in S1 Appendix and the presentation was worth 7% of the total mark for the unit.

For their presentations, the student was required to follow the 3MT presentation format. The 3MT format restricts students to a 180-second presentation, where they can only use one static slide. The lack of time in the 3MT format forces students to remove jargon to concisely explain the disease. Like how a physician would require in-depth clinical knowledge to identify key clinical issues and to be able to break information down [ 8 ], a high level of understanding of the disease is needed in the Scholars180 assessment to make judgments about core pieces of information. In addition, the short presentation style cuts down the length of time required to complete the assessment, which is often an issue with oral assessments [ 6 ]. Moreover, the public presentation style, rather than an online or self-record version, has been chosen as it has been shown that students tend to take presenting in front of others more seriously and thus are more likely to study the topic in depth in comparison to if they weren’t presenting publicly [ 9 ].

After the presentation, the students were required to answer three questions in an oral format (worth 3% of the total mark for the unit) in a separate room with an academic about the same eye condition. The questions were based on existing clinical behavior patterns in eye clinics (clinical workup) and the required approaches to ensure patients receive evidence-based care. The questions were as follows:

  • How would you test for this disease and what clinical workup would you use?
  • Pretend you are in the clinic, and you suspect a patient has this disease, how would you manage the patient?
  • What did you learn from this assignment?

The Q&A component was included to improve the authenticity of the assessment. Authenticity refers to how the assessment mimics encounters in the workplace or real-life situations [ 10 ]. This allowed the teacher to probe questions from the student to test whether their understanding was superficial or in-depth [ 2 , 11 , 12 ]. The questions asked required clinical reasoning and students need to critically think about how to approach the question to answer it correctly. This is beneficial as in clinical practice, optometrists are faced with many different cases in which the application of knowledge is needed and not direct regurgitation of information.

Before any tests were conducted, the students were assigned a uniquely identifiable number that they would have to use for both the pre- and post-questionnaires. The spreadsheet with the unique identification numbers was stored on a password-protected computer and the extracted information from the spreadsheet did not include any details that could be used to identify the students (to ensure anonymity).

To test the hypothesis, this study involved an anonymous pre-questionnaire data collection to measure the students’ knowledge of eye diseases and their understanding of the scope of practice, followed by the development and sharing of the presentations and a subsequent post-questionnaire to evaluate their change in knowledge and behavior. The pre- and post-questionnaires were useful as the baseline assessment of knowledge (pre-questionnaire) primed students for learning as it made them aware of what they did not know. The questions for the pre-questionnaire of existing knowledge can be found in S2 Appendix and the students answered the questions on a 5-point Likert scale. Depending on the question, the 5-point Likert scale score included: (1) Very low/strongly disagree/very unlikely; (2) Low/disagree/unlikely; (3) Medium/neutral; (4) High/agree/likely: (5) Very high/strongly agree/very likely. The questions were divided into three domains including knowledge of eye diseases, attitude and intention towards learning, and application of knowledge to placement (practice). For each domain, specific questions were aimed to test an aspect of that domain.

After the delivery of the presentations, the students were then asked to complete a post-questionnaire found in S3 Appendix to measure a difference in knowledge and or behavior. The post-questionnaire also supported student reflection, which consolidated learning. As part of the evaluation test, besides the questions on knowledge, attitude, intentions, and practice, the students were asked about the performance of their peers too, which is shown in S4 Appendix .

Data from participants who completed both pre- and post-questionnaires were included for data analysis. All statistics were performed in R v4.1.0. Descriptive statistics were used to report demographic data. Non-parametric Wilcoxon Signed-Ranks Tests (with continuity correction) were used to compare the responses to each of the 12 Likert-type scale questions on the post-questionnaire with the respective responses on the pre-questionnaire. To account for multiple comparisons, a Bonferroni correction was applied to the initial alpha of 0.05. Given that 12 statistical tests were applied, the adjusted alpha (via Bonferroni correction) was calculated to be 0.004; this means statistical significance was deemed to be achieved if the resultant P -values were less than 0.004.
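
The analysis above was run in R; as a rough sketch of the same logic for readers working in Python, the snippet below applies a paired Wilcoxon signed-rank test with continuity correction and a Bonferroni-adjusted alpha to one Likert item (the pre/post responses here are fabricated placeholders, not the study data):

```python
import numpy as np
from scipy.stats import wilcoxon

# Fabricated paired pre/post responses (1-5 Likert) for ONE questionnaire item;
# the study ran the same comparison for each of its 12 items.
pre  = np.array([2, 2, 3, 1, 2, 2, 3, 2, 1, 2, 3, 2, 2, 1, 2])
post = np.array([3, 4, 3, 3, 4, 3, 4, 3, 3, 4, 4, 3, 3, 2, 4])

# Medians and interquartile ranges, as reported per item.
for label, x in (("pre", pre), ("post", post)):
    q1, q3 = np.percentile(x, [25, 75])
    print(label, "median:", np.median(x), "IQR:", q3 - q1)

# Paired, non-parametric comparison with continuity correction.
stat, p = wilcoxon(pre, post, zero_method="wilcox", correction=True)

# Bonferroni correction for 12 tests: adjusted alpha = 0.05 / 12 ≈ 0.004.
alpha_adjusted = 0.05 / 12
print(f"p = {p:.4f}, significant at Bonferroni-adjusted alpha: {p < alpha_adjusted}")
```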

Results

Overall, as shown in Table 2, 42 students completed the pre-oral presentation questionnaire, and 41 completed the post-questionnaire. However, only 31 completed both pre- and post-questionnaires, as incomplete surveys were removed from the data.

The results shown in Table 3 indicated how the Scholars180 assessment impacted student knowledge of ocular diseases and how confident they were in the identification and management of these diseases. Generally, Table 3 showed that the post-questionnaire median scores for the statements increased from the pre-questionnaire scores. For the ‘knowledge of eye diseases’ domain, all values increased. The median values for overall knowledge, knowledge of common treatment methods, and diagnostic knowledge all increased from 2.0 to 3.0. Concerning the attitudes and intention domain, all but one of the median scores increased. Student confidence regarding the signs and symptoms of ocular diseases and when to manage or refer a patient improved as the median scores increased from 2.0 to 4.0. Student confidence about the treatment options available also improved, from a score of 2.0 to 3.0. However, for the statement regarding “assessment methods need improvement”, the score remained at 3.0 for the pre- and post-questionnaire. With regards to the ‘application of knowledge to placement’ domain, all median scores increased. There was a significant improvement in the appropriate management of chronic eye conditions, acute eye conditions, and severe eye conditions, as the median scores increased from 1.0 to 3.0. Appropriate management of minor eye conditions also increased from 2.0 to 3.0. Furthermore, these increases in median scores were statistically significant, with P-values below 0.001. Additionally, the interquartile range (IQR) for all the statements in Table 3 was 1.0 or lower. This indicated that the spread of data was minimal, which suggested that the students had similar experiences with the assessment.

a Scores for each item ranged from 1 to 5 with higher scores indicating a higher level of self-reported knowledge, positive attitude, and/or knowledge application.

b Indicates statistically significant change from pre- to post-oral presentation, given Bonferroni corrected P -value cut-off of 0.004

Moreover, the largest changes in scores were mainly regarding the student’s confidence in the identification and management of ocular diseases. For example, 3 out of 5 statements under the attitude and intention domain had a median change of an increase of 2.0 points. The increase of 2.0 points on the median score also applied to 3 out of 4 of the statements on the application of knowledge to the placement domain. In comparison, for all three statements in the knowledge of eye diseases domain, the median score increased by one, which was a lesser amount than the other domains. Overall, the results indicated that the median student attitude towards ocular diseases improved in a positive direction as their answers to the statements changed from ‘disagree’ to ‘neutral’ or ‘agree’.

Table 4 illustrated the students’ evaluation of their peers’ presentations. For all the statements in Table 4, the median score was 4.0 out of a maximum score of 5.0. The 4.0 scores for statements such as ‘the shared knowledge was relevant to my growth as a student’, ‘I would recommend this assessment to examine future students’, and ‘overall, I was satisfied with the processes of the assignment’ showed that students benefited not only from presenting themselves but also from their peers’ presentations. Additionally, the IQR for all the statements was 1.0 or lower, indicating that most students had a high level of agreement regarding the statements on peer evaluation. It is important to note that the three questions asked after the oral presentation were relevant in helping the students reflect on their learning. This allowed them to answer the questions more accurately under the domains of ‘knowledge of eye diseases’ and ‘application of knowledge to placement (practice)’.

c Scores for each item ranged from 1 to 5 with higher scores indicating a higher level of agreement with the statements.

Discussion

This study’s goal was to examine the impact of the oral presentation assessment, Scholars180, on optometry students’ knowledge of ocular diseases and their confidence regarding how to identify and manage these diseases. Oral assessments are defined as assessments of student learning that incorporate spoken words either entirely or partially [11]. One of the important competencies that health professionals need to learn is the ability to effectively articulate their medical knowledge to patients of varying backgrounds [13]. To do this, health professionals must have sufficient knowledge of the disease and be able to identify it, but they must also be able to communicate all relevant aspects of the disease to the patient within short consultation times, so patients can play an active role in their health decisions. Oral assessments are multifunctional assessments that can both evaluate and assist students in achieving several learning outcomes. Hence, it was hypothesized that, after completion of Scholars180, students’ knowledge of and confidence in the identification and management of ocular diseases would improve. In addition, the peer learning aspect of Scholars180 may have also contributed to an improved learning experience.

From the results indicated in Table 3, our hypothesis was supported, as the median scores increased by a statistically significant amount with a low IQR. This illustrated that students felt that their knowledge of ocular diseases and their confidence to identify and manage them had improved. The data collected are encouraging as they suggest that the design of Scholars180 is heading in the right direction. To complete the assessment, students needed to compress all the relevant information about an ocular disease into a very limited time, which required them to gain an in-depth understanding of the topic, to differentiate between essential and non-essential pieces of information, and to string it together so it could be easily understood. Furthermore, due to the time constraint, the 3MT format encouraged students to be succinct yet engaging to make an impression on the audience [14]. Therefore, due to the high demands for successful completion of Scholars180, it was likely that students were pushed to gain in-depth knowledge of ocular diseases, which transferred to an increase in confidence in identification and management as well.

Moreover, the results shown in Table 4 highlighted how students felt about Scholars180 after completing the assessment. From Table 4, it is seen that the median scores were all 4.0, indicating that the students mostly agreed with the statements and thus felt positively towards the assessment. This is most likely due to the benefits discussed above, but it may also be attributed to the peer learning aspect of Scholars180. Peer learning can be defined simply as the interaction between students to actively learn educational material [15]. In Scholars180, peer learning was present as students were actively teaching other students about their assigned ocular disease, and students also learned from their peers’ presentations. The benefits of the teaching aspect of oral presentations are consolidated by a study that found that teaching increases reflection, breaks down resistance to change, and can help one to recognize one’s ignorance, leading to greater openness to learning [16]. When expected to teach content, students are more likely to be engaged with the information. This is likely because the process of preparing for oral presentations required students to first practice with self-explanation, which could expose gaps in their knowledge, leading them to study further to gain a deeper level of understanding [17]. Moreover, by listening to other presentations, students could reinforce their knowledge regarding each ocular disease [18].

Assessments can be summative or formative [ 19 ]. Summative assessments are usually carried out at the end of a learning process to test whether a student has sufficiently learned the material, and they tend to be high-stakes [ 3 ]. In contrast, formative assessments are usually conducted during the learning process to provide students with feedback, and the outcome does not have a large impact on the student’s final score [ 3 ]. For effective evaluation, formative assessments should be designed to strengthen summative assessments [ 3 ]. Scholars180 is a summative assessment: it is conducted at the end of the semester and accounts for 7% of the student’s final grade in the unit. The data collected suggest that it is an effective summative assessment. However, in the future it could be helpful to design accompanying formative assessments throughout the semester, as obtaining relevant and early feedback is an important aspect of facilitating improvement [ 20 , 21 ], and a study has shown that this is particularly true for low-performing students [ 22 ]. For example, one study found that one-minute presentations conducted every few weeks, with immediate feedback, helped students develop greater clarity in speaking and confidence in presenting [ 23 ].

There are also some limitations to the design of the study. Participants completed the pre-questionnaire at the beginning of the course, whereas the post-questionnaire was completed 12 weeks later. Because of this time gap, it is unclear whether the increase in student knowledge and confidence is due to the Scholars180 assessment itself or to other teaching activities such as lectures, tutorials, and tests. Moreover, the sample size was relatively small and there was no comparator group given a standard oral presentation assessment task, so it is not known whether Scholars180 produces greater improvements than a standard oral assessment. In the future, a larger sample should be used to permit greater generalisability of the outcomes. The differences in outcomes for first-year students may also reflect their different educational backgrounds and prior exposure to presentation assessments. Hence, this study could be repeated with second- or third-year optometry students once they have all been exposed to this assessment type, which would enable a more standardised comparison of results.

Lastly, students were surveyed using a 5-point Likert scale regarding their confidence and self-reported knowledge. These ratings are subjective, and there is no way of knowing whether a reported increase in confidence translated into an actual increase in knowledge. Another point to note is that, during the development of the project, a gap in knowledge around establishing clear and comprehensive rubrics for oral presentations was identified. To fill this gap, journal articles on rubric development and consultations with experienced academics were used to inform the development of the new assessment tools and associated rubrics. However, further research on rubric design could help refine Scholars180.
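
To make the rubric discussion above more concrete, the sketch below shows one common way to represent an analytic rubric: weighted criteria with level descriptors, combined into a weighted overall score. The criteria, weights, and levels shown are hypothetical illustrations only, not the actual Scholars180 rubric.

```python
# Hypothetical analytic rubric for an oral presentation, expressed as
# weighted criteria; computes a weighted overall score out of 100.
RUBRIC = {
    # criterion: (weight, {level: descriptor})
    "Content accuracy": (0.4, {4: "Comprehensive and correct", 1: "Major errors"}),
    "Organisation":     (0.2, {4: "Clear, logical flow",       1: "Hard to follow"}),
    "Delivery":         (0.2, {4: "Engaging, well paced",      1: "Monotone, rushed"}),
    "Time management":  (0.2, {4: "Within 180 seconds",        1: "Well over time"}),
}
MAX_LEVEL = 4

def overall_score(awarded_levels: dict) -> float:
    """Combine per-criterion levels (1-4) into a weighted score out of 100."""
    total = 0.0
    for criterion, (weight, _descriptors) in RUBRIC.items():
        total += weight * (awarded_levels[criterion] / MAX_LEVEL) * 100
    return total

# Example: a presentation awarded levels 4, 3, 3, 4 on the four criteria
print(overall_score({"Content accuracy": 4, "Organisation": 3,
                     "Delivery": 3, "Time management": 4}))  # 90.0
```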

Given that Scholars180 appears to be an effective assessment, students may benefit directly from participation by gaining better knowledge of optometry, ophthalmology, and vision sciences. Some students may not benefit directly from the assignment; however, their results will help guide future research in this area and inform improvements in how we teach and assess optometry students. Doing so could help improve the health of the wider community by improving the competence of the optometry workforce.

In this study, the impact of Scholars180, an oral presentation assessment tool, has been analyzed and discussed. The results support the hypothesis that Scholars180 helps students improve their knowledge of ocular diseases and their confidence in identifying and managing these diseases. It is also likely that students benefit from the peer learning aspect of the assessment. The design of Scholars180 provides a way to shorten the time required for typical oral presentation assessments while maintaining their effectiveness. Overall, the design of Scholars180 appears to be heading in the right direction, but further research should be conducted to strengthen the assessment tool. This study adds an additional oral presentation assessment, and evidence of its benefits to students, to the existing literature. It may also encourage other academics to consider implementing this assessment, or aspects of it, in their curricula to enhance both the teaching and learning experience.

Supporting information

S1 Appendix. S2 Appendix. S3 Appendix. S4 Appendix.

Funding statement

The authors received no specific funding for this work.

Data Availability


Decision Letter 0

24 May 2023

PONE-D-23-06339

Scholars180: An effective oral presentation assessment for optometry students that overcomes time limitations

PLOS ONE

Dear Dr. Zheng,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jul 08 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols .

We look forward to receiving your revised manuscript.

Kind regards,

Kofi Asiedu

Academic Editor

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability .

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories .  Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions . Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

3. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

2. Has the statistical analysis been performed appropriately and rigorously?

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The research work is very relevant in adding to the available data on summative assessment. The Scholars180 is a very good tool that can rigorously be used as a research tool in checking the effectiveness of oral presentation in the assessment of students. The authors provided pieces of information on the baseline knowledge of the study participants, the pre-presentation questionnaires, 180 seconds presentation, question and answers and the post presentation questionnaires which are very vital in proving the effectiveness of the presentation tool used in this study which is Scholars180.

The research work employed the right methodology. The authors did well in using questionnaires before the presentation and after the presentation. The non-parametric test employed in this research work is well suited for such purpose. The conclusion made by the authors helps in contributing to the available data on the usefulness and effectiveness of oral presentations in the assessment of students.

However, in the future, the work has to be done on a larger sample to permit for more generalizability of the outcomes. Also, in the future the research can also be conducted in those in either the second or third year since they might have had exposure to presentations which will put all participants at the same level. In such cases then the effectiveness of the Scholars180

Reviewer #2: General comment:

The manuscript concept is essential to the academic evaluation of students in all faculties, not only in optometry. Various techniques, such as direct observation abilities, standardized interpretation training, and having an expert double-check, have been suggested to evaluate learners' image interpretation. Oral reporting, essays, feedback, or straightforward multiple-choice questions are frequently used to assess a learner's performance. Still, unless these tools are well-designed, it may not be easy to discern a learner's clinical reasoning process. Scholars180 is a novel assessment tool for evaluating students and will have great potential in its impact on optometry training.

1. The title of the manuscript could be modified since the manuscript did not evaluate time limitations. Suggested title: “Scholars180: An effective oral presentation assessment for Optometry Students.”

2. There was mention of three questions used in assessing the presenters in a separate room. The manuscript needs to cover that in the results and discussion section.

3. The follow-up time for the measurement of the post-questionnaire should have been reported in the methodology.

4. If there was one, the manuscript did not state the sampling process and exclusion.

5. From the manuscript, lines 111-117 and 149-154 seem to repeat the same concept. It is my opinion to omit one.

6. In lines 196-197, you stated the Likert scale used in Appendix 2, but Q5 and Q7 used different wording of the scale. I think using the phrasing of a “5-point Likert scale score” only without itemizing them would be appropriate.

7. The question domain at line 198, “attitude and intention,” is unclear if the attitude concerns eye diseases or the learning process. As shown in the appendix, line 433, it is suggested to use “Attitude and intention towards learning.”

8. Were the pre and post-questionnaire results based on all the topics in the different presentations or on a single presentation based on a single eye condition? Can that be clarified in the manuscript?

9. The Likert scale for the question domain knowledge about eye disease could have been: (1 = very unconfident, 2 = fairly unconfident, 3 = neutral, 4 = fairly confident, and 5 = very confident) instead of high to low.

10. The Likert scale for the question domain practice “likely,” reads unethical to be used for questions on patient management. I believe a different wording could have been used, like “confident.”

11. The limitations presented in the manuscript are the main defining pointers to measure if the Scholar180 assessment was the primary influencer for the increase in knowledge of the peers. These limitations could have been taken care of to objectively measure the benefit of using this assessment method.

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy .

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Author response to Decision Letter 0

27 Jun 2023

Firstly, we would like to thank the reviewers for their time and effort in reviewing our manuscript. The constructive criticism provided in their comments has enabled us to improve the quality of our work. We have revised our manuscript according to most of the comments provided. The changes have been highlighted in red in the ‘revised manuscript with tracked changes’ document, as requested. Our responses to the reviewers are given below, with the reviewers’ comments coloured in blue and italicised. The line numbers referred to correspond to the revised manuscript, not the first submission, unless stated otherwise.

The journal requirements have been addressed. The layout of the manuscript has been changed to fit the templates provided by PLOS ONE. A ‘Supporting Information’ section has been added to include the appendices, and the minimal dataset underlying the results has been included as Supporting Information files (S1 File and S2 File). The ethics statement has also been moved to the methodology section (lines 134-142).

Responses to Reviewer 1:

Thank you to reviewer 1 for the positive feedback provided.

Comment 1: However, in the future, the work has to be done on a larger sample to permit for more generalizability of the outcomes. Also, in the future the research can also be conducted in those in either the second or third year since they might have had exposure to presentations which will put all participants at the same level. In such cases then the effectiveness of the Scholars180

Response 1: The authors acknowledge that this study can be improved by using a larger sample size and with second and third-year optometry students. Hence, we have added these points raised in our limitations paragraph in the discussion section as seen in lines 365-366 and 368-374.

Responses to Reviewer 2:

We appreciate your acknowledgement of the benefits of using Scholars180 as an assessment tool.

Comment 1: The title of the manuscript could be modified since the manuscript did not evaluate time limitations. Suggested title: “Scholars180: An effective oral presentation assessment for Optometry Students.”

Response 1: We agree with the comment and the suggested title has been adopted to better reflect the paper’s contents.

Comment 2: There was mention of three questions used in assessing the presenters in a separate room. The manuscript needs to cover that in the results and discussion section.

Response 2: The three questions and the relevant context have been explained in the methodology, and we have added some information to the results section to emphasise their relevance, as seen in lines 282-284. The questions were part of the overall assessment, and we believe that additional discussion focused purely on the questions would disrupt the overall flow of the discussion section.

Comment 3: The follow-up time for the measurement of the post-questionnaire should have been reported in the methodology.

Response 3: The follow-up time has now been reported in the second paragraph, lines 144-146, in the methodology section as 12 weeks.

Comment 4: If there was one, the manuscript did not state the sampling process and exclusion.

Response 4: There was no sampling or exclusion process. All first-year optometry students at the University of Western Australia were given the opportunity to participate in the study, and students were only excluded if they did not consent to participate. We apologise for not making this clear and have added a sentence to the first paragraph of the methodology section (lines 139-140). Additionally, the results of some students were excluded due to incomplete questionnaires, as stated in the first paragraph of the results section (lines 241-243).

Comment 5: From the manuscript, lines 111-117 and 149-154 seem to repeat the same concept. It is my opinion to omit one.

Response 5: We acknowledge that the concepts are the same. A sentence in lines 157-158 has been removed to prevent repetition; its deletion also removed the reference by Hartigan L (originally the 8th reference), as it is no longer essential to the paper. However, we believe that the whole of lines 149-154 of the original paper cannot be removed, as lines 149-152 explain the 3-minute thesis format required in the methodology, and lines 165-168 of the revised manuscript are kept because they link smoothly to the other ideas in the paragraph. Additionally, although the two passages cover the same concept, they sit in separate sections: the introduction and the methodology. We chose to keep lines 111-117 of the original manuscript in the introduction because we believe it is necessary to introduce the three-minute thesis format, which is not widely known outside Australia.

Comment 6: In lines 196-197, you stated the Likert scale used in Appendix 2, but Q5 and Q7 used different wording of the scale. I think using the phrasing of a “5-point Likert scale score” only without itemizing them would be appropriate.

Response 6: Thank you for pointing out the inconsistencies. We have included the other descriptors for questions 5 and 7 in the methodology as seen in lines 209-212. In line 375, we have also amended the phrasing to not be itemized.

Comment 7: The question domain at line 198, “attitude and intention,” is unclear if the attitude concerns eye diseases or the learning process. As shown in the appendix, line 433, it is suggested to use “Attitude and intention towards learning.”

Response 7: This has been amended to “attitude and intention towards learning” in now line 213.

Comment 8: Were the pre and post-questionnaire results based on all the topics in the different presentations or on a single presentation based on a single eye condition? Can that be clarified in the manuscript?

Response 8: This has been clarified in the methodology section in lines 143-144. The pre and post-questionnaires were based on all the topics in the different presentations.

Comment 9: The Likert scale for the question domain knowledge about eye disease could have been: (1 = very unconfident, 2 = fairly unconfident, 3 = neutral, 4 = fairly confident, and 5 = very confident) instead of high to low.

Response 9: We agree that using ‘confident’ rather than ‘high to low’ would have been a better option, as ‘high to low’ can be somewhat subjective. We will be more careful with the terminology used in our scales in future. However, as the results are based on changes in median scores, we believe they still sufficiently demonstrate the benefits of Scholars180 for students’ development.

Comment 10: The Likert scale for the question domain practice “likely,” reads unethical to be used for questions on patient management. I believe a different wording could have been used, like “confident.”

Response 10: The word ‘confident’ could have been a better choice than ‘likely’, and we will keep that in mind for future studies involving Likert scales and questions regarding patient management. However, we do not believe that ‘likely’ can be considered unethical in this scenario, as the study was conducted on first-year optometry students: there is no expectation that these students are already competent in patient management, since the goal is to develop those skills through participating in the assessment.

Comment 11: The limitations presented in the manuscript are the main defining pointers to measure if the Scholar180 assessment was the primary influencer for the increase in knowledge of the peers. These limitations could have been taken care of to objectively measure the benefit of using this assessment method.

Response 11: We acknowledge that some of the limitations could have been addressed more thoroughly. However, we do not believe they are significant enough to detract from the positive benefits of implementing a novel oral presentation assessment such as Scholars180.

Submitted filename: Response to Reviewers.docx

Decision Letter 1

11 Jul 2023

PONE-D-23-06339R1

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Additional Editor Comments (optional):

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

2. Is the manuscript technically sound, and do the data support the conclusions?

3. Has the statistical analysis been performed appropriately and rigorously?

4. Have the authors made all data underlying the findings in their manuscript fully available?

5. Is the manuscript presented in an intelligible fashion and written in standard English?

6. Review Comments to the Author

Reviewer #1: (No Response)

Reviewer #2: The authors have effectively addressed and revised the manuscript to address the concerns raised adequately and I agree that it can be considered and accepted for publication.

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

Acceptance letter

14 Jul 2023

Dear Dr. Zheng:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

PLOS ONE Editorial Office Staff

on behalf of

Dr. Kofi Asiedu
