Am J Pharm Educ. 2010 Nov 10;74(9).

A Standardized Rubric to Evaluate Student Presentations

Michael J. Peeters

a University of Toledo College of Pharmacy

Eric G. Sahloff

Gregory E. Stone

b University of Toledo College of Education

Objective. To design, implement, and assess a rubric to evaluate student presentations in a capstone doctor of pharmacy (PharmD) course.

Design. A 20-item rubric was designed and used to evaluate student presentations in a capstone fourth-year course in 2007-2008, and then revised and expanded to 25 items and used to evaluate student presentations for the same course in 2008-2009. Two faculty members evaluated each presentation.

Assessment. The Many-Facets Rasch Model (MFRM) was used to determine the rubric's reliability, quantify the contribution of evaluator harshness/leniency in scoring, and assess grading validity by comparing the current grading method with a criterion-referenced grading scheme. In 2007-2008, rubric reliability was 0.98, with a separation of 7.1 and 4 rating scale categories. In 2008-2009, MFRM analysis suggested 2 of 98 grades be adjusted to eliminate evaluator leniency, while a further criterion-referenced MFRM analysis suggested 10 of 98 grades should be adjusted.

Conclusion. The evaluation rubric was reliable and evaluator leniency appeared minimal. However, a criterion-referenced re-analysis suggested a need for further revisions to the rubric and evaluation process.

INTRODUCTION

Evaluations are important in the process of teaching and learning. In health professions education, performance-based evaluations are identified as having “an emphasis on testing complex, ‘higher-order’ knowledge and skills in the real-world context in which they are actually used.”1 Objective structured clinical examinations (OSCEs) are a common, notable example.2 On Miller's pyramid, a framework used in medical education for measuring learner outcomes, “knows” is placed at the base of the pyramid, followed by “knows how,” then “shows how,” and finally, “does” is placed at the top.3 Based on Miller's pyramid, evaluation formats that use multiple-choice testing focus on “knows” while an OSCE focuses on “shows how.” Just as performance evaluations remain highly valued in medical education,4 authentic task evaluations in pharmacy education may be better indicators of future pharmacist performance.5 Much attention in medical education has been focused on reducing the unreliability of high-stakes evaluations.6 Regardless of educational discipline, high-stakes performance-based evaluations should meet educational standards for reliability and validity.7

PharmD students at the University of Toledo College of Pharmacy (UTCP) were required to complete a course on presentations during their final year of pharmacy school and then give a presentation that served as both a capstone experience and a performance-based evaluation for the course. Pharmacists attending the presentations were given Accreditation Council for Pharmacy Education (ACPE)-approved continuing education credits. An evaluation rubric for grading the presentations was designed to allow multiple faculty evaluators to objectively score student performances in the domains of presentation delivery and content. Given the pass/fail grading procedure used in advanced pharmacy practice experiences, passing this presentation-based course and subsequently graduating from pharmacy school were contingent upon this high-stakes evaluation. As a result, the reliability and validity of the rubric used and the evaluation process needed to be closely scrutinized.

Each year, about 100 students completed presentations and at least 40 faculty members served as evaluators. With the use of multiple evaluators, a question of evaluator leniency often arose (ie, whether evaluators used the same criteria for evaluating performances or whether some evaluators graded more leniently or more harshly than others). At UTCP, opinions among some faculty evaluators and many PharmD students implied that evaluator leniency in judging the students' presentations significantly affected specific students' grades and ultimately their graduation from pharmacy school. While it was plausible that evaluator leniency was occurring, the magnitude of the effect was unknown. Thus, this study was initiated partly to address this concern over grading consistency and scoring variability among evaluators.

Because both students' presentation style and content were deemed important, each item of the rubric was weighted the same across delivery and content. However, because there were more categories related to delivery than content, an additional faculty concern was that students feasibly could present poor content but have an effective presentation delivery and pass the course.

The objectives for this investigation were: (1) to describe and optimize the reliability of the evaluation rubric used in this high-stakes evaluation; (2) to identify the contribution and significance of evaluator leniency to evaluation reliability; and (3) to assess the validity of this evaluation rubric within a criterion-referenced grading paradigm focused on both presentation delivery and content.

DESIGN

The University of Toledo's Institutional Review Board approved this investigation. This study investigated performance evaluation data for an oral presentation course for final-year PharmD students from 2 consecutive academic years (2007-2008 and 2008-2009). The course was taken during the fourth year (P4) of the PharmD program and was a high-stakes, performance-based evaluation. The goal of the course was to serve as a capstone experience, enabling students to demonstrate advanced drug literature evaluation and verbal presentation skills through the development and delivery of a 1-hour presentation. These presentations were to be on a current pharmacy practice topic and of sufficient quality for ACPE-approved continuing education. This experience allowed students to demonstrate their competencies in literature searching, literature evaluation, and application of evidence-based medicine, as well as their oral presentation skills. Students worked closely with a faculty advisor to develop their presentation. Each class (2007-2008 and 2008-2009) was randomly divided, with half of the students taking the course and completing their presentation and evaluation in the fall semester and the other half in the spring semester. To accommodate such a large number of students presenting for 1 hour each, it was necessary to use multiple rooms with presentations taking place concurrently over 2.5 days for both the fall and spring sessions of the course. Two faculty members independently evaluated each student presentation using the provided evaluation rubric. The 2007-2008 presentations involved 104 PharmD students and 40 faculty evaluators, while the 2008-2009 presentations involved 98 students and 46 faculty evaluators.

After vetting by the pharmacy practice faculty, the initial rubric used in 2007-2008 focused on describing explicit, specific evaluation criteria such as amount of eye contact, voice pitch/volume, and descriptions of study methods. The evaluation rubric used in 2008-2009 was similar to the initial rubric, but with 5 items added (Figure 1). The evaluators rated each item (eg, eye contact) based on their perception of the student's performance. The 25 rubric items had equal weight (ie, 4 points each), and each item received a rating from the evaluator of 1 to 4 points. Thus, only 4 rating categories were included, as has been recommended in the literature.8 However, some evaluators created an additional 3 rating categories by marking lines in between the 4 ratings to signify half points (ie, 1.5, 2.5, and 3.5). For example, for the “notecards/notes” item in Figure 1, a student looked at her notes sporadically during her presentation, but not distractingly nor enough to warrant a score of 3 in the faculty evaluator's opinion, so a 3.5 was given. Thus, a 7-category rating scale (1, 1.5, 2, 2.5, 3, 3.5, and 4) was analyzed. Each independent evaluator's ratings for the 25 items were summed to form a score (0-100%). The 2 evaluators' scores then were averaged and a letter grade was assigned based on the following scale: ≥90% = A, 80%-89% = B, 70%-79% = C, <70% = F.
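As a concrete illustration of the scoring arithmetic just described, the minimal sketch below sums one evaluator's 25 item ratings, averages the two evaluators' percentage scores, and maps the average to a letter grade. The function names and the example ratings are illustrative assumptions, not taken from the course materials.

```python
def evaluator_score(ratings):
    """Convert one evaluator's 25 item ratings (1-4 each, half points allowed) to a 0-100% score."""
    assert len(ratings) == 25
    return sum(ratings) / (len(ratings) * 4) * 100

def course_grade(ratings_a, ratings_b):
    """Average the two evaluators' percentage scores and assign a letter grade."""
    avg = (evaluator_score(ratings_a) + evaluator_score(ratings_b)) / 2
    if avg >= 90:
        return avg, "A"
    elif avg >= 80:
        return avg, "B"
    elif avg >= 70:
        return avg, "C"
    return avg, "F"

# Example: two evaluators rating the same presentation
print(course_grade([4] * 20 + [3.5] * 5, [4] * 18 + [3] * 7))  # -> (95.25, 'A')
```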

Figure 1. Rubric used to evaluate student presentations given in a 2008-2009 capstone PharmD course.

EVALUATION AND ASSESSMENT

Rubric Reliability

To measure rubric reliability, iterative analyses were performed on the evaluations using the Many-Facets Rasch Model (MFRM) following the 2007-2008 data collection period. While Cronbach's alpha is the most commonly reported coefficient of reliability, reporting it as a single number without supplementary information can give an incomplete picture of reliability.9-11 Because of its formula, Cronbach's alpha can be increased simply by adding more repetitive rubric items or more rating scale categories, even when no further useful information has been added. The MFRM reports separation, which is calculated differently from Cronbach's alpha and is another source of reliability information. Unlike Cronbach's alpha, separation does not appear enhanced by adding further redundant items. From a measurement perspective, a higher separation value is better than a lower one because students are being divided into meaningful groups after measurement error has been accounted for. Separation can be thought of as the number of units on a ruler: the more units the ruler has, the larger the range of performance levels that can be measured among students. For example, a separation of 4.0 suggests 4 gradations, such that a grade of A is distinctly different from a grade of B, which in turn is different from a grade of C or an F. In measuring performances, a separation of 9.0 is better than 5.5, just as a separation of 7.0 is better than 6.5; a higher separation coefficient suggests that student performance potentially could be divided into a larger number of meaningfully separate groups.
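For readers unfamiliar with the statistic, the relationships below are the standard Rasch formulas linking separation to reliability; they are not taken from this article, but they are consistent with the values reported here (a reliability of 0.98 corresponds to a separation of about 7).

```latex
% Standard Rasch person-separation relationships (assumed background, not the authors' derivation):
G = \frac{\mathrm{SD}_{\mathrm{true}}}{\mathrm{RMSE}}, \qquad
\mathrm{SD}_{\mathrm{true}} = \sqrt{\mathrm{SD}_{\mathrm{obs}}^{2} - \mathrm{RMSE}^{2}}, \qquad
R = \frac{G^{2}}{1 + G^{2}} \;\Longleftrightarrow\; G = \sqrt{\frac{R}{1 - R}}
% Check against the reported values: R = 0.98 gives G = sqrt(0.98 / 0.02) = 7.0,
% in line with the separation of about 7 reported for the optimized rubric.
```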

The rating scale can have substantial effects on reliability,8 and a description of how a rating scale functions is a unique aspect of the MFRM. In analysis iterations of the 2007-2008 data, the number of rating scale categories was collapsed consecutively until improvements in reliability and/or separation were no longer found. The last iteration that produced an improvement in reliability or separation was deemed to yield the optimal rating scale for this evaluation rubric.

In the 2007-2008 analysis, iterations of the data were run through the MFRM. While only 4 rating scale categories had been included on the rubric, 7 categories had to be included in the analysis because some faculty members inserted 3 in-between categories. This initial analysis based on a 7-category scale provided a reliability coefficient (similar to Cronbach's alpha) of 0.98, while the separation coefficient was 6.31; the separation coefficient denoted 6 distinctly separate groups of students based on the items. Rating scale categories were then collapsed, with the in-between categories folded into adjacent full-point categories. Table 1 shows the reliability and separation for the iterations as the rating scale was collapsed. As shown, the optimal evaluation rubric maintained a reliability of 0.98, but separation improved to 7.10, or 7 distinctly separate groups of students based on the items. That is, a distinctly separate group was gained by reducing the rating scale, while Cronbach's alpha was unchanged even though the number of rating scale categories was smaller. Table 1 describes the stepwise, sequential pattern across the final 4 rating scale categories analyzed. Informed by the 2007-2008 results, the 2008-2009 evaluation rubric (Figure 1) used 4 rating scale categories, and reliability remained high.
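A minimal sketch of the collapsing step, assuming each half-point rating is simply folded into an adjacent whole-point category before re-running the analysis; the article does not specify which neighbour each half-point was assigned to, so the rounding-up mapping below is an assumption.

```python
# Fold the 7 observed rating categories (1, 1.5, ..., 4) back into 4.
# Assumption: each half-point rating is rounded up to the next whole category.
COLLAPSE_MAP = {1: 1, 1.5: 2, 2: 2, 2.5: 3, 3: 3, 3.5: 4, 4: 4}

def collapse(ratings):
    return [COLLAPSE_MAP[r] for r in ratings]

print(collapse([3.5, 2.5, 4, 1.5]))  # -> [4, 3, 4, 2]
```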

Table 1. Evaluation Rubric Reliability and Separation with Iterations While Collapsing Rating Scale Categories

a Reliability coefficient of variance in rater response that is reproducible (ie, Cronbach's alpha).

b Separation is a coefficient of item standard deviation divided by average measurement error and is an additional reliability coefficient.

c Optimal number of rating scale categories based on the highest reliability (0.98) and separation (7.1) values.

Evaluator Leniency

Harsh raters (ie, hawks) and lenient raters (ie, doves), described by Fleming and colleagues over half a century ago,6 have also been demonstrated to be an issue in more recent studies.12-14 Shortly after the 2008-2009 data were collected, the evaluations by multiple faculty evaluators were collated and analyzed in the MFRM to identify possible inconsistent scoring. While traditional interrater reliability does not deal with this issue, the MFRM had been used previously to illustrate evaluator leniency on licensing examinations for medical students and medical residents in the United Kingdom.13 Thus, accounting for evaluator leniency may prove important to grading consistency (and reliability) in a course using multiple evaluators. Along with identifying evaluator leniency, the MFRM also corrected for this variability. For comparison, course grades were calculated by summing the evaluators' actual ratings (as discussed in the Design section) and compared with the MFRM-adjusted grades to quantify the degree of evaluator leniency occurring in this evaluation.

Measures created from the data analysis in the MFRM were converted to percentages using a common linear test-equating procedure involving the mean and standard deviation of the dataset.15 To these percentages, student letter grades were assigned using the same traditional method used in 2007-2008 (ie, ≥90% = A, 80%-89% = B, 70%-79% = C, <70% = F). Letter grades calculated using the revised rubric and the MFRM were then compared to letter grades calculated using the previous rubric and course grading method.
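A sketch of one common linear equating form, assuming the MFRM person measures (in logits) are rescaled so that their mean and standard deviation match those of the original percentage scores; the exact procedure used in the study is not detailed here, so treat this as illustrative. The measures and percentages below are hypothetical.

```python
import statistics

def equate_to_percent(measures, original_percents):
    """Linearly rescale Rasch person measures to the percent metric
    by matching the mean and standard deviation of the original scores."""
    m_mean, m_sd = statistics.mean(measures), statistics.stdev(measures)
    p_mean, p_sd = statistics.mean(original_percents), statistics.stdev(original_percents)
    return [p_mean + (m - m_mean) * (p_sd / m_sd) for m in measures]

def letter(pct):
    return "A" if pct >= 90 else "B" if pct >= 80 else "C" if pct >= 70 else "F"

# Hypothetical logit measures and original percentage scores for four students
measures = [1.8, 0.9, 0.2, -0.5]
percents = [93.0, 86.5, 81.0, 74.5]
print([letter(p) for p in equate_to_percent(measures, percents)])
```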

In the analysis of the 2008-2009 data, the interrater reliability for the letter grades, comparing the 2 independent faculty evaluations for each presentation, was 0.98 by Cohen's kappa. However, the 3-facet MFRM revealed significant variation in grading: the effect of evaluator leniency, in addition to student ability and item difficulty, was significant (chi-square test, p < 0.01). As well, the MFRM showed a reliability of 0.77, with a separation of 1.85 (ie, almost 2 distinct groups of evaluators). The MFRM student ability measures were scaled to letter grades and compared with course letter grades. As a result, 2 B's became A's, so evaluator leniency accounted for a change in 2% of letter grades (ie, 2 of 98 grades).
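The interrater agreement reported above could in principle be reproduced with a standard Cohen's kappa routine; the sketch below uses scikit-learn on hypothetical paired letter grades and is not the authors' analysis code.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical letter grades from the two independent evaluators of each presentation
evaluator_1 = ["A", "A", "B", "B", "C", "A", "B"]
evaluator_2 = ["A", "A", "B", "B", "C", "A", "A"]

# Cohen's kappa measures agreement beyond chance between the two raters
print(cohen_kappa_score(evaluator_1, evaluator_2))
```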

Validity and Grading

Explicit criterion-referenced standards for grading are recommended for higher evaluation validity.3,16-18 The course coordinator completed 3 additional evaluations of a hypothetical student presentation, rating the minimal criteria expected for an A, a B, and a C letter-grade performance. These evaluations were placed with the other 196 evaluations (2 evaluators × 98 students) from 2008-2009 into the MFRM, and the resulting analysis report gave specific cutoff percentage scores for each letter grade. Unlike the traditional scoring method of assigning all items an equal weight, the MFRM ordered evaluation items from those more difficult for students (given more weight) to those less difficult for students (given less weight). These criterion-referenced letter grades were compared with the grades generated using the traditional grading process.

When the MFRM data were rerun with the criterion-referenced evaluations added into the dataset, a change was seen in 10% of letter grades (ie, 10 of 98 grades). Of the 10 letter grades that were lowered, 1 fell below a C, the minimum standard, suggesting a failing performance. Qualitative feedback from faculty evaluators agreed with this suggested criterion-referenced performance failure.

Measurement Model

Within modern test theory, the Rasch measurement model maps examinee ability against evaluation item difficulty. Items are not arbitrarily given the same value (ie, 1 point) but vary based on how difficult or easy the items were for examinees. The Rasch measurement model has been used frequently in educational research,19 by numerous high-stakes testing professional bodies such as the National Board of Medical Examiners,20 and by various state-level departments of education for standardized secondary education examinations.21 The Rasch measurement model itself has rigorous construct validity and reliability.22 A 3-facet MFRM model allows an evaluator variable to be added to the student ability and item difficulty variables that are routine in other Rasch measurement analyses. Just as multiple regression accounts for additional variables in analysis compared to a simple bivariate regression, the MFRM is a multiple-variable variant of the Rasch measurement model and was applied in this study using the Facets software (Linacre, Chicago, IL). The MFRM is ideal for performance-based evaluations that add independent evaluators/judges.8,23 From both yearly cohorts in this investigation, evaluation rubric data were collated and placed into the MFRM for separate, subsequent analyses. Within the MFRM output report, a chi-square for a difference in evaluator leniency was reported with an alpha of 0.05.

DISCUSSION

The presentation rubric was reliable. Results from the 2007-2008 analysis illustrated that the number of rating scale categories affected the reliability of this rubric and that use of only 4 rating scale categories appeared best for measurement. While a 10-point Likert-like scale may commonly be used in patient care settings, such as in quantifying pain, most people cannot process more than 7 points or categories reliably.24 Presumably, when more than 7 categories are used, the categories beyond 7 either are not used or are collapsed by respondents into fewer than 7 categories. Five-point scales are commonly encountered, but use of an odd number of categories can be problematic for interpretation and is not recommended.25 A response in the middle category could denote a true perceived average or neutral response, responder indecisiveness, or even confusion over the question. Therefore, removing the middle category appears advantageous and is supported by our results.

With the 2008-2009 data, the MFRM identified evaluator leniency, with some evaluators grading more harshly and others more leniently. However, only a couple of grade changes were suggested once the MFRM corrected for this leniency, so evaluator leniency did not appear to play a substantial role in the evaluation of this course at this time.

Performance evaluation instruments are either holistic or analytic rubrics.26 The evaluation instrument used in this investigation exemplified an analytic rubric, which elicits specific observations and often demonstrates high reliability. However, Norman and colleagues point out a conundrum: drastically increasing the number of evaluation rubric items (creating something similar to a checklist) can augment a reliability coefficient while appearing to dissociate the rubric from its validity.27 Validity may be more than the sum of behaviors on evaluation rubric items.28 Having numerous, highly specific evaluation items appears to undermine the rubric's function. With this investigation's evaluation rubric and its numerous items for both presentation style and presentation content, equal numeric weighting of items can in fact allow a student presentation to receive a passing score while falling short of the course objectives, as was shown in the present investigation. As opposed to analytic rubrics, holistic rubrics often demonstrate lower yet acceptable reliability, while offering a more explicit connection to course objectives. A summative, holistic evaluation of presentations may improve validity by allowing expert evaluators to provide their “gut feeling” as experts on whether a performance is “outstanding,” “sufficient,” “borderline,” or “subpar” for dimensions of presentation delivery and content. A holistic rubric that incorporates the criteria of the analytic rubric (Figure 1) for evaluators to reflect on, but maintains a summary, overall evaluation for each dimension (delivery/content) of the performance, may allow the benefits of each type of rubric to be used advantageously. This has been demonstrated with OSCEs in medical education, where checklists of completed items (ie, yes/no) at an OSCE station have been successfully replaced with a few reliable global impression rating scales.29-31

Alternatively, and because the MFRM model was used in the current study, an item-weighting approach could be applied to the analytic rubric. That is, item weighting based on the difficulty of each rubric item could suggest how many points should be given for that item; eg, some items would be worth 0.25 points, while others would be worth 0.5 points or 1 point (Table 2). As could be expected, the more complex the rubric scoring becomes, the less feasible the rubric is to use. This was the main reason why this revision approach was not chosen by the course coordinator following this study. As well, it does not address the conundrum that the performance may be more than the summation of the behavior items in the Figure 1 rubric. This study cannot suggest which approach would be better, as each has its merits and pitfalls.
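A minimal sketch of the item-weighting alternative, assuming each rubric item is assigned a point value in proportion to its MFRM difficulty; the item names and weights below are invented for illustration and do not come from Table 2.

```python
# Hypothetical point weights per rubric item; harder items are worth more
ITEM_WEIGHTS = {"analysis_of_methods": 1.0, "literature_evaluation": 1.0,
                "eye_contact": 0.5, "notecards_notes": 0.25, "voice_volume": 0.25}

def weighted_percent(ratings):
    """ratings: item name -> rating on the 1-4 scale; returns a weighted 0-100% score."""
    earned = sum(ITEM_WEIGHTS[item] * r for item, r in ratings.items())
    maximum = sum(w * 4 for w in ITEM_WEIGHTS.values())
    return 100 * earned / maximum

print(weighted_percent({"analysis_of_methods": 3, "literature_evaluation": 4,
                        "eye_contact": 4, "notecards_notes": 2, "voice_volume": 4}))  # -> 87.5
```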

Table 2. Rubric Item Weightings Suggested in the 2008-2009 Data Many-Facet Rasch Measurement Analysis

Regardless of which approach is used, alignment of the evaluation rubric with the course objectives is imperative. Objectivity has been described as a general striving for value-free measurement (ie, free of the evaluator's interests, opinions, preferences, and sentiments).27 This is a laudable goal pursued through educational research. Strategies to reduce measurement error, termed objectification, may not necessarily lead to increased objectivity.27 The current investigation suggested that a rubric can become too explicit if all the possible areas of an oral presentation that could be assessed are included (ie, objectification); this appeared to dilute the effect of important items and lose validity. A holistic rubric that is more straightforward and easier to score quickly may be less likely to lose validity (ie, “lose the forest for the trees”), though operationalizing a revised rubric would need to be investigated further. Similarly, weighting items in an analytic rubric based on their importance and difficulty for students may alleviate this issue; however, adding up individual items might prove arduous. While the rubric in Figure 1, which has evolved over the years, is the subject of ongoing revisions, it appears a reliable rubric on which to build.

The major limitation of this study involves the observational method that was employed. Although the 2 cohorts were from a single institution, investigators did use a completely separate class of PharmD students to verify initial instrument revisions. Optimizing the rubric's rating scale involved collapsing data from a 4-category rating scale, which a few evaluators had expanded to 7 categories, back into 4 independent categories without middle ratings. As a result of the study findings, no actual grading adjustments were made for students in the 2008-2009 presentation course; however, adjustments using the MFRM have been suggested by Roberts and colleagues.13 Since 2008-2009, the course coordinator has made further small revisions to the rubric based on feedback from evaluators, but these have not yet been re-analyzed with the MFRM.

SUMMARY

The evaluation rubric used in this study for student performance evaluations showed high reliability, and the data analysis supported using 4 rating scale categories to optimize the rubric's reliability. While lenient and harsh faculty evaluators were found, variability in evaluator scoring affected grading in this course only minimally. Aside from reliability, issues of validity were raised using criterion-referenced grading. Future revisions to this evaluation rubric should reflect these criterion-referenced concerns. The rubric analyzed herein appears a suitable starting point for reliable evaluation of PharmD oral presentations, though it has limitations that could be addressed with further attention and revisions.

ACKNOWLEDGEMENT

Author contributions— MJP and EGS conceptualized the study, while MJP and GES designed it. MJP, EGS, and GES gave educational content foci for the rubric. As the study statistician, MJP analyzed and interpreted the study data. MJP reviewed the literature and drafted the manuscript. EGS and GES critically reviewed this manuscript and approved the final version for submission. MJP accepts overall responsibility for the accuracy of the data, its analysis, and this report.

Eberly Center

Teaching Excellence & Educational Innovation. Instructor: Robert Dammon. Course: 45-901 Corporate Restructuring, Tepper School of Business. Assessment: Rating Scale for Assessing Oral Presentations.

A key business communication skill is the ability to give effective oral presentations, and it is important for students to practice this skill. I wanted to create a systematic and consistent assessment of students’ oral presentation skills and to grade these skills consistently within the class and across semesters.

Implementation:

This oral presentation is part of a larger finance analysis assignment that students complete in groups. Each group decides which members will be involved in the oral presentation. I constructed a rating scale that decomposes the oral presentation into four major components: (1) preparation, (2) quality of handouts and overheads, (3) quality of presentation skills, and (4) quality of analysis. I rate preparation as “yes” or “no”; all other components are rated on a five-point scale. Immediately after the class in which the presentation is given, I complete the rating scale, write a few notes, and calculate an overall score, which accounts for 20% of the group's grade on the overall assignment.
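One way this rating scale could be totalled is sketched below, assuming preparation counts as pass/fail and the three 5-point components are summed into an overall presentation score worth 20% of the group's assignment grade; the point arithmetic is an assumption, since the instructor does not spell it out.

```python
def presentation_score(prepared, handouts, skills, analysis):
    """prepared: bool; the other three components are rated 1-5."""
    component_total = handouts + skills + analysis            # 3 to 15
    return (component_total / 15) * 100 if prepared else 0    # assumed: unprepared forfeits the score

def contribution_to_assignment(pct, assignment_points=100, weight=0.20):
    """Presentation counts for 20% of the overall group assignment grade."""
    return pct / 100 * assignment_points * weight

score = presentation_score(prepared=True, handouts=4, skills=5, analysis=4)
print(score, contribution_to_assignment(score))  # -> 86.67 (approx.), 17.33 (approx.)
```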

The presentation gives students important practice in oral communication, and the rating scale has made my grading much more consistent.

I have used this rating scale for several years, and the oral presentation is a standard course component. I have considered adapting the rating scale so that students also rate the oral presentation; however, I would want to give the students an abbreviated rating scale so that they focus primarily on the presentation.


Center for Teaching and Learning

Step 4: Develop Assessment Criteria and Rubrics

Just as we align assessments with the course learning objectives, we also align the grading criteria for each assessment with the goals of that unit of content or practice, especially for assignments that cannot be graded through automation the way that multiple-choice tests can. Grading criteria articulate what is important in each assessment, what knowledge or skills students should be able to demonstrate, and how they can best communicate that to you. When you share grading criteria with students, you help them understand what to focus on and how to demonstrate their learning successfully. From good assessment criteria, you can develop a grading rubric.


Developing Your Assessment Criteria

Good assessment criteria are

  • Clear and easy to understand as a guide for students
  • Attainable rather than beyond students’ grasp at their current point in the course
  • Significant in terms of the learning students should demonstrate
  • Relevant in that they assess student learning toward course objectives related to that one assessment.

To create your grading criteria, consider the following questions:

  • What is the most significant content or knowledge students should be able to demonstrate understanding of at this point in the course?
  • What specific skills, techniques, or applications should students be able to demonstrate at this point in the course?
  • What secondary skills or practices are important for students to demonstrate in this assessment? (for example, critical thinking, public speaking skills, or writing as well as more abstract concepts such as completeness, creativity, precision, or problem-solving abilities)
  • Do the criteria align with the objectives for both the assessment and the course?

Once you have developed some ideas about the assessment’s grading criteria, double-check to make sure the criteria are observable, measurable, significant, and distinct from each other.

Assessment Criteria Example

Using the questions above, the performance criteria in the example below were designed for an assignment in which students had to create an explainer video about a scientific concept for a specified audience. Each element can be observed and measured based on both expert instructor and peer feedback, and each is significant because it relates to the course and assignment learning goals.


Additional Assessment Criteria Resources

  • Developing Grading Criteria (Vanderbilt University)
  • Creating Grading Criteria (Brown University)
  • Sample Criteria (Brown University)
  • Developing Grading Criteria (Temple University)

Decide on a Rating Scale

Deciding what scale you will use for an assessment depends on the type of learning you want students to demonstrate and the type of feedback you want to give students on this particular assignment or test. For example, for an introductory lab report early in the semester, you might be less concerned with advanced levels of precision than with correct displays of data and the tone of the report; therefore, grading heavily on copy editing or advanced analysis would not be appropriate. The criteria would likely be more rigorous by the end of the semester, as you build up to the advanced level you want students to reach in the course.

Rating scales turn the grading criteria you have defined into levels of performance expectations for the students that can then be interpreted as a letter, number, or level. Common rating scales include

  • A, B, C, etc. (with or without + and -)
  • 100-point scale with defined cut-offs for letter grades if desired (e.g., B = 80-89; or B+ = 87-89, B = 83-86, B- = 80-82)
  • Yes or no, present or not present (if the rubric is a checklist of items students must show)
  • Below expectations, meets expectations, exceeds expectations
  • Not demonstrated, poor, average, good, excellent

Once you have decided on a scale for the type of assignment and the learning you want students to demonstrate, you can use the scale to clearly articulate what each level of performance looks like, such as defining what A, B, C, etc. level work would look like for each grading criterion. What would distinguish a student who earns a B from one who earns a C? What would distinguish a student who excelled in demonstrating use of a tool from a student who clearly was not familiar with it? Write these distinctions out in descriptive notes or brief paragraphs.
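As a small illustration of the 100-point cut-off scale mentioned above, the mapping below converts a numeric score to a letter grade with plus/minus gradations. The B boundaries follow the example on this page; the A and C boundaries are assumed to follow the same pattern.

```python
# B boundaries follow the example above (B+ 87-89, B 83-86, B- 80-82);
# the A and C ranges are assumptions extending the same pattern.
CUTOFFS = [(97, "A+"), (93, "A"), (90, "A-"),
           (87, "B+"), (83, "B"), (80, "B-"),
           (77, "C+"), (73, "C"), (70, "C-")]

def to_letter(score):
    for cutoff, letter in CUTOFFS:
        if score >= cutoff:
            return letter
    return "F"

print(to_letter(85))  # -> "B"
```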

Ethical Implications of Rating Scales

There are ethical implications in each of these types of rating scales. On a project worth 100 points, what is the objective difference between earning an 85 and an 87? On an exceeds/meets/does not meet scale, how can those levels be objectively applied? Different understandings of "fairness" can lead to several ways of grading that might disadvantage some students. Learn more about equitable grading practices here.

Create the Rubric

Rubrics Can Make Grading More Effective

  • Provide students with more complete and targeted feedback
  • Make grading more timely by enabling the provision of feedback soon after an assignment is submitted/presented.
  • Standardize assessment criteria among those assigning/assessing the same assignment.
  • Facilitate peer evaluation of early drafts of assignments.

Rubrics Can Help Student Learning

  • Convey your expectations about the assignment through a classroom discussion of the rubric prior to the beginning of the assignment
  • Level the playing field by clarifying academic expectations and assignments so that all students understand regardless of their educational backgrounds (e.g., define what we expect analysis, critical thinking, or even introductions/conclusions to include)
  • Promote student independence and motivation by enabling self-assessment
  • Prepare students to use detailed feedback.

Rubrics Have Other Uses:

  • Track development of student skills over several assignments
  • Facilitate communication with others (e.g. TAs, communication center, tutors, other faculty, etc)
  • Refine your own teaching skills (e.g., by responding to common areas of weakness or to feedback on how well teaching strategies are preparing students for their assignments).

In this video, CTL's Dr. Carol Subino Sullivan discusses the value of the different types of rubrics.

Many non-test-based assessments might seem daunting to grade, but a well-designed rubric can alleviate some of that work. A rubric is a table that usually has these parts:  

  • a clear description of the learning activity being assessed
  • criteria by which the activity will be evaluated
  • a rating scale identifying different levels of performance
  • descriptions of the performance a student must demonstrate to earn each level.

When you define the criteria and pre-define what acceptable performance for each of those criteria looks like ahead of time, you can use the rubric to compare with student work and assign grades or points for each criterion accordingly. Rubrics work very well for projects, papers/reports, and presentations, as well as in peer review, and good rubrics can save instructors and TAs time when grading.
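One way to represent the parts of a rubric described above is as a simple table keyed by criterion and performance level; the sketch below is a generic illustration with invented criteria and descriptors, not the explainer-video rubric referenced on this page.

```python
# A rubric as {criterion: {level: description}}, plus points per level
RUBRIC = {
    "Accuracy of content": {
        "exceeds": "All scientific claims are correct and well sourced",
        "meets": "Minor inaccuracies that do not affect the main point",
        "below": "Significant factual errors",
    },
    "Audience awareness": {
        "exceeds": "Explanations consistently pitched to the specified audience",
        "meets": "Mostly appropriate, with occasional jargon",
        "below": "Language and examples ignore the intended audience",
    },
}
LEVEL_POINTS = {"exceeds": 3, "meets": 2, "below": 1}

def score(ratings):
    """ratings: criterion -> chosen level; returns (points earned, maximum points)."""
    earned = sum(LEVEL_POINTS[ratings[c]] for c in RUBRIC)
    return earned, len(RUBRIC) * max(LEVEL_POINTS.values())

print(score({"Accuracy of content": "meets", "Audience awareness": "exceeds"}))  # -> (5, 6)
```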

Sample Rubrics

This final rubric for the scientific concept explainer video combines the assessment criteria and the holistic rating scale:


When using this rubric, which can be easily adapted to use a present/not present rating scale or a letter grade scale, you can use a combination of checking items off and adding written (or audio/video) comments in the different boxes to provide the student more detailed feedback. 

As a second example, this descriptive rubric was used to ask students to peer assess and self-assess their contributions to a collaborative project. The rating scale is 1 through 4, and each description of performance builds on the previous. (See the full rubric with scales for both product and process here. This rubric was designed for students working in teams to assess their own contributions to the project as well as their peers'.)


Building a Rubric in Canvas Assignments

You can create rubrics for assignments and discussion boards in Canvas. Review these Canvas guides for tips and tricks:

  • Rubrics Overview for Instructors
  • What are rubrics?
  • How do I align a rubric with a learning outcome?
  • How do I add a rubric to an assignment?
  • How do I add a rubric to a quiz?
  • How do I add a rubric to a graded discussion?
  • How do I use a rubric to grade submissions in SpeedGrader?
  • How do I manage rubrics in a course?

Additional Resources for Developing Rubrics Designing Grading Rubrics  (Brown University) Step-by-step process for creating an effective, fair, and efficient grading rubric. 

Creating and Using Rubrics  (Carnegie Mellon University) Explores the basics of rubric design along with multiple examples for grading different types of assignments.

Using Rubrics  (Cornell University) Argument for the value of rubrics to support student learning.

Rubrics  (University of California Berkeley) Shares "fun facts" about rubrics, and links the rubric guidelines from many higher ed organizations such as the AAC&U.

Creating and Using Rubrics  (Yale University) Introduces different styles of rubrics and ways to decide what style to use given your course's learning goals.

Best Practices for Designing Effective Resources (Arizona State University) Comprehensive overview of rubric design principles.



Mayo's Clinics


Use Clear Criteria and Methodologies When Evaluating PowerPoint Presentations


Dr. Fred Mayo explains the three major methods for presentation evaluation: self, peer and professional. An added bonus: a ready-made student evaluation form.

By Dr. Fred Mayo, CHE, CHT

In the last issue, we discussed making interactive presentations and this month we will focus on evaluating presentations. For many of us, encouraging and supporting students in making presentations is already a challenge; assessing their merit is often just another unwelcome teaching chore.

There are three major methods for evaluating presentations: self evaluations, peer evaluations, and professional evaluations. Of course, the most important issue is establishing evaluation criteria.

Criteria for Evaluating Presentations

One of the best ways to help students create and deliver good presentations involves providing them with information about how their presentations will be evaluated. Some of the criteria that you can use to assess presentations include:

  • Focus of the presentation
  • Clarity and coherence of the content
  • Thoroughness of the ideas presented and the analysis
  • Clarity of the presentation
  • Effective use of facts, statistics and details
  • Lack of grammatical and spelling errors
  • Design of the slides
  • Effective use of images
  • Clarity of voice projection and appropriate volume
  • Completion of the presentation within the allotted time frame

Feel free to use these criteria or to develop your own that more specifically match your teaching situation.

Self Evaluations

When teaching public speaking and making presentations, I often encouraged students to rate their own presentations after they delivered them. Many times, they were very insightful about what could have been improved. Others just could not complete this part of the assignment. Sometimes, I used their evaluations to make comments on what they recognized in their presentations. However, their evaluations did not overly influence the grade, except that a more thorough evaluation improved their grade and a weak evaluation could hurt their presentation grade.

Questions I asked them to consider included:

  • How do you think it went?
  • What could you have done differently to make it better?
  • What did you do that you are particularly proud of accomplishing?
  • What did you learn from preparing for and delivering this presentation?
  • What would you change next time?

Peer Evaluations

One way to provide the most feedback for students involves encouraging – or requiring – each student to evaluate the other students' presentations. It forces them to watch each presentation both for content and delivery and helps them learn to discriminate between an excellent and an ordinary presentation. The more presentations they observe or watch, the more they learn.

In classes where students are required to deliver presentations, I have students evaluate the presentations they observe using a form I designed. The students in the audience give the evaluation or feedback forms to the presenter as soon as the presentation is over. I do not collect or review them, to encourage honest comments and more direct feedback. Also, students do not use their names when completing the form. That way the presenter gets a picture from all the students in the audience – including me – and cannot discount the comments by recognizing the author.

A version of the form that I use is reproduced below – feel free to adopt or adapt it to your own use and classroom situation.

evaluation form

Professional Evaluations

When conducting your professional evaluation of a presentation, remember to consider when and how to deliver oral comments as opposed to a completed form. I complete a written evaluation (shown above) along with all the students so they get some immediate feedback. I also take notes on the presentation and decide on a grade as well. After the conclusion of the presentation, whether it was an individual or team presentation, I lead a class discussion on the presentation material. That way, students get to hear some immediate comments as well as reading the written peer evaluations.

I usually ask for a copy of the presentation prior to the delivery date. (Getting the PowerPoint slides ahead also helps me ensure I have all the presentations loaded on the projector or computer so we do not waste class time.) Students either email it to me or place it on our classroom management system. I will provide their letter grade and make comments on the design of the presentation on the copy they gave me. However, I don’t explain the final grade right after the presentation since it is often hard for students who have just made a presentation to hear comments.

Summary

Each of these suggestions may prompt you to try your own ideas. Remember that students improve when they receive thoughtful and useful feedback from their peers and from you as their teacher. I encourage you to use this form or develop your own so that the criteria used to evaluate the presentations are clear and explained ahead of time. Now, you can enjoy evaluating their presentations.

Dr. Fred Mayo, CHE, CHT, is retired as a clinical professor of hotel and tourism management at New York University. As principal of Mayo Consulting Services, he continues to teach around the globe and is a regular presenter at CAFÉ events nationwide.

Department of History

Mark Scheme for Presentations

Different students may legitimately approach their presentations in different ways, and sometimes particular strength in one area can offset weakness in another. But the following criteria give you an idea of the areas to think about when preparing and presenting, and what makes for a good presentation; a simple mapping of marks to these classes is sketched after the bands below.

First Class (marks of 74+)

  • Information: detailed, accurate, relevant; key points highlighted;
  • Structure: rigorously argued, logical, easy to follow;
  • Analysis and Interpretation: extensive evidence of independent thought and critical analysis;
  • Use of relevant and accurate Evidence: key points supported with highly relevant and accurate evidence, critically evaluated;
  • Presentation Skills: clear, lively, imaginative; good use of visual aids (where appropriate);
  • Time Management: perfectly timed, well organised;
  • Group Skills: engages well with group; encourages discussion and responds well to questions.

2.1 Upper Second (62-68)

  • Information: detailed, accurate, relevant;
  • Structure: generally clearly argued and logical;
  • Analysis and Interpretation: attempts to go beyond the ideas presented in secondary literature;
  • Use of relevant and accurate Evidence: most points illustrated with relevant and accurate evidence;
  • Presentation Skills: generally clear, lively; use of appropriate visual aids;
  • Time Management : well organised, more or less to time;
  • Group Skills: attempts to engage with group and responds reasonably well to questions.

2.2 Lower Second (52-58)

  • Information: generally accurate and relevant, but perhaps some gaps and/or irrelevant material;
  • Structure: not always clear or logical; may be overly influenced by secondary literature rather than the requirements of the topic;
  • Analysis and Interpretation: little attempt to go beyond or criticise secondary literature;
  • Use of relevant and accurate Evidence: some illustrative material, but not critically evaluated and/or some inaccuracies and irrelevancies;
  • Presentation Skills: conveys meaning, but sometimes unclear or clumsy;
  • Time Management: more or less right length, but some material not covered properly as a result, OR, significantly over-runs;
  • Group Skills: responds reasonably well to questions, but makes no real attempt to engage with group or promote discussion

Third (42-48)

  • Information: limited knowledge, with some significant gaps and/or errors;
  • Structure: argument underdeveloped and not entirely clear;
  • Analysis and Interpretation: fairly superficial and generally derivative and uncritical;
  • Use of relevant and accurate Evidence: some mentioned, but not integrated into presentation or evaluated; the evidence used may not be relevant or accurate;
  • Presentation Skills: not always clear or easy to follow; unimaginative and unengaging;
  • Time Management: significantly over time; material fairly disorganised and rushed;
  • Group Skills: uncomfortable responding to questions; no attempt at engaging with group.

Fail (0-40)

  • Information: very limited, with many errors and gaps;
  • Structure: muddled, incoherent;
  • Analysis and Interpretation: entirely derivative, generally superficial;
  • Use of relevant and accurate Evidence: little or no evidence discussed; or irrelevant and inaccurate.
  • Presentation Skills: clumsy, disjointed, difficult to follow, dull;
  • Time Management: significantly under or over time; has clearly not tried out material beforehand; disorganised;
  • Group Skills: poor.

The Teaching Knowledge Base

  • Digital Teaching and Learning Tools
  • Assessment and Feedback Tools

Assessment Criteria and Rubrics

An Introduction

This guide is an introduction to:

  • Writing an assessment brief with clear assessment criteria and rubrics
  • Grading tools available in Turnitin enabling the use of criteria and rubrics in marking.

Clear and explicit assessment criteria and rubrics are meant to increase the transparency of assessment, aiming to develop students into ‘novice assessors’ (Gipps, 1994) and to facilitate deep learning.  Providing well-designed criteria and rubrics contributes to communicating assessment requirements in a way that is more inclusive to all (including markers), regardless of previous learning experiences or individual differences in language, culture, and educational background.  It also facilitates the development of self-judgment skills (Boud & Falchikov, 2007).

  • Assessment brief
  • Assessment criteria
  • Assessment rubric
  • Guidance in how to create rubrics and grading forms
  • Guidance on how to create a rubric in Handin

Terminology Explored

The terms ‘assessment brief’, ‘assessment criteria’ and ‘assessment rubric’ are often used interchangeably, which may lead to misunderstandings and reduce the effectiveness of the design and interpretation of the assessment brief.  Therefore, it is important to first clarify these terms:

Assessment Brief

An assessment (assignment) brief refers to the instructions provided to communicate the requirements and expectations of assessment tasks, including the assessment criteria and rubrics, to students.  The brief should clearly outline which module learning outcomes will be assessed in the assignment.

NOTE: If you are new to writing learning outcomes, or need a refresher, have a look at Baume’s guide “Writing and using good learning outcomes” (2009); see the list of references.

When writing an assessment brief, it may be useful to consider the following questions with regards to your assessment brief:

  • Have you outlined clearly what type of assessment you require students to complete?  For example, instead of “written assessment”, outline clearly what type of written assessment you require from your students; is it a report, a reflective journal, a blog, a presentation, etc.?  It is also recommended to give a breakdown of the individual tasks that make up the full assessment within the brief, to ensure transparency.
  • Is the purpose of the assessment immediately clear to your students, i.e. why the student is being asked to do the task?  It might seem obvious to you as an academic, but for students new to academia and the subject discipline, it might not be clear.  For example, explain why they have to write a reflective report or a journal and indicate which module learning outcomes are to be assessed in this specific assessment task.
  • Is all the important task information clearly outlined, such as assessment deadlines, word count, criteria and further support and guidance?

Assessment Criteria

Assessment criteria communicate to students the knowledge, skills and understanding (in line with the expected module learning outcomes) that the assessors expect students to evidence in any given assessment task.  To write a good set of criteria, the focus should be on the characteristics of the learning outcomes that the assignment will evidence, not only on the characteristics of the assignment (task) itself, i.e., presentation, written task, etc.

Thus, the criteria outline what we expect from our students (based on learning outcomes); however, they do not in themselves make assumptions about the actual quality or level of achievement (Sadler, 1987: 194) and need to be refined in the assessment rubric.

When writing an assessment brief, it may be useful to consider the following questions with regards to the criteria that will be applied to assess the assignment:

  • Are your criteria related and aligned with the module and (or) the course learning outcomes?
  • How many criteria will you assess in any particular task?  Consider how realistic and achievable this may be.
  • Are the criteria clear and have you avoided using any terms not clear to students (academic jargon)?
  • Are the criteria and standards (your quality definitions) aligned with the level of the course?  For guidance, consider revisiting the Credit Level Descriptors (SEEC, 2016) and the QAA Subject Benchmarks in the Framework for Higher Education Qualifications, which are useful starting points.

Assessment Rubric

The assessment rubric forms part of a set of criteria and refers specifically to the “levels of performance quality on the criteria” (Brookhart & Chen, 2015, p. 343).

Generally, rubrics fall into two categories: holistic and analytic. A holistic rubric assesses an assignment as a whole and is not broken down into individual assessment criteria.  For the purposes of this guidance, the focus will be on the analytic rubric, which provides separate performance descriptions for each criterion.

An assessment rubric is therefore a tool used in the process of assessing student work that usually includes the following essential features:

  • Scoring strategy – Can be numerical or qualitative, associated with the levels of mastery (quality definitions). (Shown as SCALE in Turnitin)
  • Quality definitions (levels of mastery) – Specify the levels of achievement / performance in each criterion.

 (Dawson, 2017).

The figure below is an example of the features of a complete rubric, including the assessment criteria.

When writing an assessment brief, it may be useful to consider the following questions with regard to, firstly, the assessment brief, and secondly, the criteria and associated rubrics.

  • Does your scoring strategy clearly define and cover the whole grading range?  For example, do you distinguish between distinctions at 70-79% and those at 80% and above?
  • Are the words and terms used to indicate level of mastery clearly outlined, enabling students to distinguish between the different judgements?  For example, how do you differentiate between work that is outstanding, excellent and good?
  • Is the chosen wording in your rubric too explicit?  It should be explicit but not overly specific, to avoid students adopting a mechanistic approach to your assignment.  For example, instead of stating a minimum number of references, consider stating the effectiveness or quality of the use of literature, and/or awareness or critical analysis of supporting literature.

NOTE: For guidance across Coventry University Group on writing criteria and rubrics, follow the links to guidance.

POSTGRADUATE Assessment criteria and rubrics (mode R)

UNDERGRADUATE Assessment criteria and rubrics (mode E)

Developing Criteria and Rubrics within Turnitin

Within Turnitin, depending on the type of assessment, you have a choice between four grading tools:

  • Qualitative Rubric – A rubric that provides feedback but has no numeric scoring; more descriptive than measurable.  This rubric is selected by choosing the ‘0’ symbol at the base of the Rubric window.
  • Standard Rubric – Used for numeric scoring.  Enter scale values for each column (rubric score) and percentages for each criterion row, combined to equal 100%.  This rubric can calculate and input the overall grade (a scoring sketch follows this list).  This rubric is selected by choosing the % symbol at the base of the Rubric window.
  • Custom Rubric – Add criteria (rows) and descriptive scales (columns); when marking, enter (type) any value directly into each rubric cell.  This rubric will calculate and input the overall grade.  This rubric is selected by choosing the ‘Pencil’ symbol at the base of the Rubric window.
  • Grading form – Can be used with or without numerical scoring.  If used without numerical scoring, it provides more descriptive feedback.  If used with numerical scoring, the scores can be added together to create an overall grade.  Note that grading forms can be used without a ‘paper assignment’ being submitted; for example, they can be used to assess work such as a video submission, work of art, computer programme or musical performance.
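To make the Standard Rubric arithmetic concrete, the sketch below combines per-criterion percentage weights (summing to 100%) with the scale value chosen in each row. It mirrors the description above but is not Turnitin's own code; the criterion names, weights and maximum scale value are assumptions for illustration.

```python
# Criterion weights must total 100%; scale values are the column scores chosen while marking.
CRITERIA_WEIGHTS = {"Content": 40, "Structure": 30, "Delivery": 30}
MAX_SCALE = 4  # assumed highest column value in the rubric

def standard_rubric_grade(selected_scale_values):
    """selected_scale_values: criterion -> scale value chosen for that row."""
    assert sum(CRITERIA_WEIGHTS.values()) == 100
    return sum(CRITERIA_WEIGHTS[c] * (v / MAX_SCALE)
               for c, v in selected_scale_values.items())

print(standard_rubric_grade({"Content": 3, "Structure": 4, "Delivery": 2}))  # -> 75.0
```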

Guidance on how to Create Rubric and Grading Forms

Guidance by Turnitin:

https://help.turnitin.com/feedback-studio/turnitin-website/instructor/rubric-scorecards-and-grading-forms/creating-a-rubric-or-grading-form-during-assignment-creation.htm

University of Kent – Creating and using rubrics and grading form (written guidance):

https://www.kent.ac.uk/elearning/files/turnitin/turnitin-rubrics.pdf

Some Examples to Explore

It is useful to explore some examples in Higher Education, and the resource developed by UCL on designing generic assessment criteria and rubrics from level 4 to 7 is a good starting point.

Guidance on how to Create Rubric in Handin

Within Handin, depending on the type of assessment, you have a choice of three grading tools (see the list below), as well as the choice to use “free-form” grading, which allows you to enter anything in the grade field when grading submissions.

  • None = qualitative
  • Range = quantitative – can choose score from range
  • Fixed = quantitative – one score per level

Guide to Handin: Creating ungraded (“free-form”) assignments

https://aula.zendesk.com/hc/en-us/articles/360053926834

Guide to Handin: Creating rubrics https://aula.zendesk.com/hc/en-us/articles/360017154820-How-can-I-use-Rubrics-for-Assignments-in-Aula-

References and Further Reading

Baume, D (2009) Writing and using good learning outcomes. Leeds Metropolitan University. ISBN 978-0-9560099-5-1 Link to Leeds Beckett Repository record: http://eprints.leedsbeckett.ac.uk/id/eprint/2837/1/Learning_Outcomes.pdf

Boud, D & Falchikov, N. (2007) Rethinking Assessment in Higher Education. London: Routledge.

Brookhart, S.M. & Chen, F. (2015) The quality and effectiveness of descriptive rubrics, Educational Review, 67:3, pp.343-368.  http://dx.doi.org/10.1080/00131911.2014.929565

Dawson, P. (2017) Assessment rubrics: Towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education, 42(3), pp.347-360. https://doi.org/10.1080/02602938.2015.1111294

Gipps, C.V. (1994) Beyond testing: Towards a theory of educational assessment. Psychology Press.

Sadler, D.R. (1987) Specifying and promulgating achievement standards. Oxford Review of Education, 13(2), pp.191-209.

SEEC (2016) Credit Level Descriptors. Available: http://www.seec.org.uk/wp-content/uploads/2016/07/SEEC-descriptors-2016.pdf

UK QAA Quality Code (2014) Part A – Setting and Maintaining Academic Standards. Available: https://www.qaa.ac.uk/docs/qaa/quality-code/qualifications-frameworks.pdf


CHAPTER 18 Guidelines to qualitative academic seminar presentation

Udeme Usanga

The primary objective of seminar presentation is to enhance presentation skills when persuading, educating, or informing an audience. Specifically, it provides a focus on the fundamental aspects of quality academic, professional and business communication, including structure, preparation and strategy for delivery, using visual aids, and handling question and answer sessions. The presenter/student practises by preparing and delivering an ideal real-life academic/business presentation. Strict adherence to the instructions outlined allows the presenter to evaluate his/her progress and alter any distracting behaviours before and during the presentation. It also enables the participant to learn by doing. The aim of this paper is to introduce students to simple principles on how to plan, write and present their findings as technical conference papers, and then act as mini-conference programme committee members in reviewing each other's submissions. Finally, in addition to the model itself, a description of some variations in instantiation, an assessment of the benefits of this general approach, and a recommendation for adoption by faculties and educators are proffered.

Introduction: Rarely are the three pillars of academia (research, teaching and service) addressed together, within one intellectually cohesive context, in the graduate curriculum. Such a context is important for exposing students to the interrelationships among these facets. Oftentimes, people are confused about what a seminar, workshop or conference means; the terms are sometimes considered to mean the same thing. However, a workshop is a brief, intensive educational programme for a relatively small group of people that focuses on techniques and skills in a particular field. A seminar, on the other hand, is a meeting of a group of advanced students studying under a professor/officer, with each doing original research and all exchanging the results of their findings through reports and discussions. A conference is a meeting of two or more persons/bodies organized for the benefit of discussing matters of common concern, which usually involves a formal interchange of views.

Related Papers

JSCI Systemics, Cybernetics, and Informatics

The purposes of this article are to show the differences between traditional and conversational conferences and to suggest that synergic effects might be produced when both models are adequately related in simultaneous events, so that cybernetic loops might be produced. The effectiveness of both approaches could increase if they are adequately related or oriented to the generation of processes that might integrate both models in the context of the same event, or chain of events. The content of this article is based on a combination of experience, reflection, and action, using the methodologies of Action-Research/reflection, Action-Learning, and Action-Design. After ten years of trying to relate these two approaches, we learned that they are opposite, but not contradictory with each other. They are, or can be designed as, polar opposites which would complement (and even require) each other in a synergic whole, with potential emergent properties such as effective learning, interdisciplinary communicat...


Constructivist Foundations

Christoph Brunner

This article is an open peer commentary to Johan Verbeke's target article "Designing Academic Conferences as a Learning Environment: How to Stimulate Active Learning at Academic Conferences?" (http://www.univie.ac.at/constructivism/journal/11/1/098.verbeke). It targets the field of artistic and design research and its attempt to invent new experimental conference formats. After critically examining the conception of "knowledge production" in these discourses, the commentary fosters the need to take into account sensuous and more-than-human elements shaping such emergent conference formats. The comment closes with a constructivist and speculative proposition for the future planning of creative practice events.

Computer Human Interaction

Nico Macdonald

Conferences are still valuable for established attendees and potential new audiences, and the overall audience for events can be increased, helping alleviate competition between professional organisations. In addition, professional organisations need to avoid conferences being run-of-the-mill, and taking their audience for granted. They need to widen their primary and secondary audiences by helping potential attendees and presenters find out about events,


Ben Sweeting, Michael Hohl

Context: The design of academic conferences, in which settings ideas are shared and created, is, we suggest, of more than passing interest in constructivism, where epistemology is considered in terms of knowing rather than knowledge. Problem: The passivity and predominantly one-way structure of the typical paper presentation format of academic conferences has a number of serious limitations from a constructivist perspective. These limits are both practical and epistemological. While alternative formats abound, there is nevertheless increasing pressure reinforcing this format due to delegates’ funding typically being linked to reading a paper. Method: In this special issue, authors reflect on conferences that they have organised and participated in that have used alternative formats, such as conversational structures or other constructivist-inspired approaches, in whole or in part. We review and contextualize their contributions, understanding them in terms of their connections to constructivism and to each other. Results: While this issue is of relevance across disciplinary boundaries, contributions focus on two fields: that of cybernetics/systems, and that of design. We identify the way that conference organization is of particular importance to these fields, being in self-reflexive relationship to them: the environment of a design conference is something that we design; while a conference regarding systems or cybernetics is itself an instance of the sorts of process with which these fields are concerned. Implications: Building on this self-reflexivity and, also, the close connection of design and cybernetics/systems to constructivism, we suggest that conference organization is an area in which constructivism may itself be understood in terms of practice (and so knowing) rather than theory (and so knowledge). This in turn helps connect ideas in constructivism with pragmatic fields, such as knowledge management, and recent discussions in this journal regarding second-order science. Constructivist content: As a setting for the creation of new ideas, the design of conferences is of importance where we understand epistemology in constructivist terms as a process of knowing. Moreover, the particular fields drawn on - design and cybernetics/systems - have close connections to constructivism, as can be seen, for instance, in the work of Ranulph Glanville, on which we draw here.

Chiara Belluzzi

To date, little research has been conducted on conference presentation (CP) introductions with the aim of analysing their moves, especially as far as non-academic CPs are concerned. In fact, to the best of my knowledge, no study has ever focused on the non-academic context - apart from numerous public speaking handbooks, which, however, do not apply any scientific method of analysis. Therefore this study sets out to investigate non-academic CP introductions in order to determine whether they coincide with the genre of the academic CP introduction. Such a study will hopefully prove valuable not only in the field of genre analysis, but also in the field of interpreting studies. Since it is possible to determine a move model from the structure of every genre, I will set out to do this for the non-academic CP introduction as well, thus providing the interpreter with a series of speech acts a speaker can reasonably be expected to carry out. Chapter 2 of this dissertation begins with an overview of the literature on the academic CP as a genre from many different perspectives. Then the focus shifts to the introductory section of different academic written and oral genres, in particular to those studies which lead to the definition of their moves and which, therefore, will be useful in the analysis of the structure of non-academic CP introductions. Chapter 3 focuses on the non-academic CP. First, the concept of discourse community is explored in terms of both academic and non-academic discourse, with the aim of achieving a better understanding of the differences between the two as well as of the latter alone. Then a definition of the non-academic CP introduction as a genre will be developed on the basis of Swales’ (1990) criteria and Hasan’s (Halliday and Hasan 1989) notion of ‘context’, in order to determine whether academic and non-academic CP introductions belong to the same genre or not. After the theoretical framework set in the first part of the dissertation, five case studies will be analysed in Chapter 4. Five non-academic presentations were selected and their introductions were transcribed. To these I have applied Rowley-Jolivet and Carter-Thomas’ (2005) move model of academic CP introductions in order to determine whether their model can be applied to non-academic CP introductions as well. The data retrieved is analysed to let new moves emerge, too, so that a move model for non-academic CP introductions can be identified. The usefulness of this model for further and more in-depth studies is mentioned at the end of the chapter. In the last chapter a suggestion is made about the application of move models to interpreting studies, in particular as far as simultaneous interpretation is concerned. To be brief, since move models describe the structure shared by the texts of a given genre, they could be used by interpreters to predict the structure of the text they are going to interpret.


Olga Vetrova

Abstract. Students’ conferences make up the environment where specific competences are combined with generic competences. Our goal is to estimate the potential of professionally-oriented academic communication in a foreign language in the students’ conference environment and to find out the ways students’ conferences could contribute to the formation of professional competencies. The setting investigated is a polytechnical tertiary school. Integrative in its essence, the project is aimed at fostering the efficiency of university education, creativity development, the generation of ideas by specialists-to-be and the dissemination of innovations, all of which is supposed to upgrade the standards of tertiary education and raise the quality of vocational training. This goes in accord with the Bologna process and the European and world-wide effort to enhance flexibility and professionalism on the labour market.



COMMENTS

  1. PDF Presentation Evaluation Criteria

    The speaker presents ideas in a clear manner. The speaker states one point at a time. The speaker fully develops each point. The presentation is cohesive. The presentation is properly focused. A clear train of thought is followed and involves the audience. The speaker makes main points clear. The speaker sequences main points effectively.

  2. PDF Oral Presentation Evaluation Criteria and Checklist

    ORAL PRESENTATION EVALUATION CRITERIA AND CHECKLIST. talk was well-prepared. topic clearly stated. structure & scope of talk clearly stated in introduction. topic was developed in order stated in introduction. speaker summed up main points in conclusion. speaker formulated conclusions and discussed implications. was in control of subject matter.

  3. PDF SAMPLE ORAL PRESENTATION MARKING CRITERIA

    3. PEER ASSESSMENT OF GROUP PRESENTATIONS BY MEMBERS OF TEAM. Use the criteria below to assess your contribution to the group presentation as well as the contribution of each of your teammates: 0 = no contribution; 1 = minor contribution; 2 = some contribution, but not always effective/successful; 3 = some contribution, usually effective/successful

  4. PDF Presentation Assessment Rubric

    Presentation Assessment Rubric (updated Spring 2005). The following rubric is used in my undergraduate and graduate classes that utilize oral presentations as part of a seminar, mid-term assignment or final assignment. The students use PowerPoint or other types of digital presentation

  5. A Standardized Rubric to Evaluate Student Presentations

    Design. A 20-item rubric was designed and used to evaluate student presentations in a capstone fourth-year course in 2007-2008, and then revised and expanded to 25 items and used to evaluate student presentations for the same course in 2008-2009. Two faculty members evaluated each presentation.

  6. PDF Guidelines on Seminar Presentations

    seminar should tell a scientific story in a way that everyone present can understand and go home with some lesson learned. Purpose of Seminar: A presentation concentrates on teaching something to the audience. A good presentation means that the audience understood the message. The first rule is to place yourself in the mind of your audience.

  7. PDF Preparing your Student Seminar presentation

    Here are the review criteria by which your presentation will be graded. (A) Correct assessment and logical presentation of the science (40%): (1) Comprehension of the underlying problem and its solution; (2) Technical acuity; (3) Proper selection of emphasis; (4) Connection with topics presented in lecture (where applicable)

  8. PDF Seminar presentation

    Higher Education Language & Presentation Support (HELPS), University of Technology Sydney, Building 1, Level 5, Room 25, 15 Broadway, Ultimo NSW 2007, Australia. +61 2 9514 9733, [email protected], www.helps.uts.edu.au

  9. PDF Oral Presentations

    graded - are described in either the Assessment Criteria for Oral Presentations and Commentaries or the Assessment Criteria for Oral Presentations for the level of your module. If the Assessment Criteria for Oral Presentations are used, the criteria against which your work will be marked fall into three categories: Knowledge and understanding:

  10. Full article: Using seminars to assess computing students

    Assessment criteria on understanding, analysis, synthesis and evaluation are common to both the seminar paper and presentation. ... We end this section with a short list of recommendations, derived from the literature, on how to make effective use of seminar presentations for assessment purposes (Figure 16). Figure 16 Best practices for ...

  11. PDF Criteria for Evaluating an Individual Oral Presentation

    you to achieve sustained eye contact throughout the presentation. Volume: Adjust the volume for the venue. Work to ensure that remote audience members can clearly hear even the inflectional elements in your speech. Inflection: Adjust voice modulation and stress points to assist the audience in identifying key concepts in the presentation.

  12. (PDF) RUBRIC SIX: STUDENTS SEMINAR EVALUATION RUBRIC

    PDF | Rubric to evaluate student's seminar presentation and to suggest improvisation. | Find, read and cite all the research you need on ResearchGate

  13. Rating Scale for Assessing Oral Presentations

    Each group decides which members will be involved in the oral presentation. I constructed a rating scale that decomposes the oral presentation into four major components: (1) preparation, (2) quality of handouts and overheads, (3) quality of presentation skills, and (4) quality of analysis. I rate preparation as "yes" or "no"; all other ...

  14. PDF Section 10 Seminar Presentations

    This section focuses on seminar presentations, but most of the information given here is transferable and can be applied to all forms of public speaking. This section is going to address a number of issues which are all relevant to preparing and giving presentations, and these include: preparing and reading background.

  15. Step 4: Develop Assessment Criteria and Rubrics

    Rubrics work very well for projects, papers/reports, and presentations, as well as in peer review, and good rubrics can save instructors and TAs time when grading. Sample Rubrics: This final rubric for the scientific concept explainer video combines the assessment criteria and the holistic rating scale:

  16. Use Clear Criteria and Methodologies When Evaluating PowerPoint

    Some of the criteria that you can use to assess presentations include: Focus of the presentation. Clarity and coherence of the content. Thoroughness of the ideas presented and the analysis. Clarity of the presentation. Effective use of facts, statistics and details. Lack of grammatical and spelling errors. Design of the slides.

  17. Mark Scheme for presentations

    Mark Scheme for presentations. Different students may legitimately approach their presentations in different ways, and sometimes particular strength in one area can offset weakness in another. But the following criteria give you an idea of the areas to think about when preparing and presenting, and what makes for a good presentation.

  18. Fostering oral presentation performance: does the quality of feedback

    Quality criteria for feedback. A recently published review study on assessment and evaluation in higher education (Pereira, Flores, and Niklasson 2015) revealed that many recent articles have addressed formative assessment, modes of assessment (i.e. peer- and self-assessment) and their (assumed) effectiveness. While empirical evidence on the effectiveness of formative assessment in ...

  19. Presentation Skills Assessment Tools

    Abstract. This resource is a collection of interactive assessment tools designed to measure presentation effectiveness by self-evaluation or by peer evaluation. The resource contains three evaluation forms, each of which takes less than 5 minutes to complete. The first is for standard lectures, presentations, or seminars, where the presenter is ...

  20. Assessment Criteria and Rubrics

    Assessment Rubric. The assessment rubric forms part of a set of criteria and refers specifically to the "levels of performance quality on the criteria" (Brookhart & Chen, 2015, p. 343). Generally, rubrics fall into two categories, holistic and analytic. A holistic rubric assesses an assignment as a whole and is not broken ...

  21. CHAPTER 18 Guidelines to qualitative academic seminar presentation

    The primary objective of seminar presentation is to enhance presentation skills when persuading, educating, or informing an audience. Specifically, it provides a focus on the fundamental aspects of a quality academic, professional and business communications including structure, preparation and strategy for delivery, using visual aids, and handling question and answer sessions.

  22. PDF Rubric for Seminar/ project work evaluation

    Rubric for Seminar/Project Work Evaluation. Performance on each assessment criterion is rated at four levels: Exceeds Expectations (100%, rating 4), Meets Expectations (90%, rating 3), Below Expectations (60%, rating 2) and Not Acceptable (30%, rating 1). For example, under Organization & Style (15%), the top level requires that information is presented in a logical, interesting way which is easy to follow and that the purpose is clearly stated.

  23. Assessment Criteria For The Seminar Presentation Fall 2015

    Assessment Criteria for the Seminar Presentation, Fall 2015 – available as a Word document (.doc), PDF file (.pdf) or text file (.txt), or to view online as presentation slides.