Rubric Best Practices, Examples, and Templates

A rubric is a scoring tool that identifies the different criteria relevant to an assignment, assessment, or learning outcome and states the possible levels of achievement in a specific, clear, and objective way. Use rubrics to assess project-based student work including essays, group projects, creative endeavors, and oral presentations.

Rubrics can help instructors communicate expectations to students and assess student work fairly, consistently and efficiently. Rubrics can provide students with informative feedback on their strengths and weaknesses so that they can reflect on their performance and work on areas that need improvement.

How to Get Started

Best practices and Moodle how-to guides:

  • Workshop Recording (Fall 2022)
  • Workshop Registration

Step 1: Analyze the assignment

The first step in the rubric creation process is to analyze the assignment or assessment for which you are creating a rubric. To do this, consider the following questions:

  • What is the purpose of the assignment and your feedback? What do you want students to demonstrate through the completion of this assignment (i.e. what are the learning objectives measured by it)? Is it a summative assessment, or will students use the feedback to create an improved product?
  • Does the assignment break down into smaller tasks? Are those tasks equally important to the assignment as a whole?
  • What would an “excellent” assignment look like? An “acceptable” assignment? One that still needs major work?
  • How detailed do you want the feedback you give students to be? Do you want/need to give them a grade?

Step 2: Decide what kind of rubric you will use

Types of rubrics: holistic, analytic/descriptive, single-point

Holistic Rubric. A holistic rubric considers all the criteria (such as clarity, organization, and mechanics) together in a single evaluation. With a holistic rubric, the rater or grader assigns a single score based on an overall judgment of the student’s work, using descriptions of each performance level to assign the score.

Advantages of holistic rubrics:

  • Can place an emphasis on what learners can demonstrate rather than what they cannot
  • Save grader time by minimizing the number of evaluations to be made for each student
  • Can be used consistently across raters, provided they have all been trained

Disadvantages of holistic rubrics:

  • Provide less specific feedback than analytic/descriptive rubrics
  • Can be difficult to choose a score when a student’s work is at varying levels across the criteria
  • Any weighting of criteria cannot be indicated in the rubric

Analytic/Descriptive Rubric. An analytic or descriptive rubric often takes the form of a table with the criteria listed in the left column and with levels of performance listed across the top row. Each cell contains a description of what the specified criterion looks like at a given level of performance. Each of the criteria is scored individually.

Advantages of analytic rubrics:

  • Provide detailed feedback on areas of strength or weakness
  • Each criterion can be weighted to reflect its relative importance

Disadvantages of analytic rubrics:

  • More time-consuming to create and use than a holistic rubric
  • May not be used consistently across raters unless the cells are well defined
  • May result in giving less personalized feedback

Single-Point Rubric. A single-point rubric breaks down the components of an assignment into different criteria, but instead of describing different levels of performance, only the “proficient” level is described. Feedback space is provided for instructors to give individualized comments to help students improve and/or show where they excelled beyond the proficiency descriptors.

Advantages of single-point rubrics:

  • Easier to create than an analytic/descriptive rubric
  • Students may be more likely to read the descriptors
  • Areas of concern and excellence are open-ended
  • May remove the focus on the grade/points
  • May increase student creativity in project-based assignments

Disadvantage of single-point rubrics: Requires more work for instructors writing feedback

Step 3 (Optional): Look for templates and examples.

You might Google “rubric for persuasive essay at the college level” and see if there are any publicly available examples to start from. Ask your colleagues if they have used a rubric for a similar assignment. Some examples are also available at the end of this article. These rubrics can be a great starting point for you, but work through steps 4, 5, and 6 below to ensure that the rubric matches your assignment description, learning objectives, and expectations.

Step 4: Define the assignment criteria

Make a list of the knowledge and skills you are measuring with the assignment/assessment. Refer to your stated learning objectives, the assignment instructions, past examples of student work, etc., for help.

  Helpful strategies for defining grading criteria:

  • Collaborate with co-instructors, teaching assistants, and other colleagues
  • Brainstorm and discuss with students
  • Check each draft criterion: Can it be observed and measured? Is it important and essential? Is it distinct from the other criteria? Is it phrased in precise, unambiguous language?
  • Revise the criteria as needed
  • Consider whether some criteria are more important than others, and how you will weight them (a short scoring sketch after Step 5 shows how weights combine with ratings)

Step 5: Design the rating scale

Most rating scales include between 3 and 5 levels. Consider the following questions when designing your rating scale:

  • Given what students are able to demonstrate in this assignment/assessment, what are the possible levels of achievement?
  • How many levels would you like to include? (More levels mean more detailed descriptions.)
  • Will you use numbers and/or descriptive labels for each level of performance? (for example 5, 4, 3, 2, 1 and/or Exceeds expectations, Accomplished, Proficient, Developing, Beginning, etc.)
  • Don’t use too many columns, and recognize that some criteria can have more columns than others. The rubric needs to be comprehensible and organized. Pick the right number of columns so that the criteria flow logically and naturally across levels.
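
If you weight some criteria more heavily than others, the final grade is simply a weighted sum of the per-criterion ratings. The short sketch below illustrates the arithmetic; the criteria, weights, and ratings are hypothetical, and a spreadsheet or gradebook can do the same calculation.

```python
# Hypothetical example: combine weighted criterion ratings into one grade.
# Ratings use a 1-5 scale (1 = Beginning ... 5 = Exceeds expectations).

weights = {  # relative importance of each criterion (sums to 1.0)
    "thesis": 0.30,
    "evidence": 0.30,
    "organization": 0.25,
    "mechanics": 0.15,
}

ratings = {  # scores a grader assigned to one student's essay
    "thesis": 4,
    "evidence": 3,
    "organization": 5,
    "mechanics": 4,
}

max_rating = 5
weighted = sum(weights[c] * ratings[c] for c in weights)  # 3.95 on the 1-5 scale
percent = 100 * weighted / max_rating                     # 79% of the possible points

print(f"Weighted score: {weighted:.2f}/5 ({percent:.0f}%)")
```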

Step 6: Write descriptions for each level of the rating scale

Artificial intelligence tools like ChatGPT can be useful for creating a rubric. You will want to craft the prompt you provide to the AI assistant to ensure you get what you want. For example, you might include the assignment description, the criteria you feel are important, and the number of levels of performance you want in your prompt. Use the results as a starting point, and adjust the descriptions as needed.

Building a rubric from scratch

For a single-point rubric, describe what would be considered “proficient” (i.e., B-level work) and provide that description. Outside of the rubric itself, you might also include suggestions about how students can surpass proficient-level work.

For analytic and holistic rubrics, create statements of expected performance at each level of the rubric.

  • Consider what descriptor is appropriate for each criterion, e.g., presence vs. absence, complete vs. incomplete, many vs. none, major vs. minor, consistent vs. inconsistent, always vs. never. If you have an indicator described in one level, it will need to be described in each level.
  • You might start with the top/exemplary level. What does it look like when a student has achieved excellence for each/every criterion? Then, look at the “bottom” level. What does it look like when a student has not achieved the learning goals in any way? Then, complete the in-between levels.
  • For an analytic rubric , do this for each particular criterion of the rubric so that every cell in the table is filled. These descriptions help students understand your expectations and their performance in regard to those expectations.

Well-written descriptions:

  • Describe observable and measurable behavior
  • Use parallel language across the scale
  • Indicate the degree to which the standards are met

Step 7: Create your rubric

Create your rubric in a table or spreadsheet in Word, Google Docs, Sheets, etc., and then transfer it by typing it into Moodle. You can also use online tools to create the rubric, but you will still have to type the criteria, indicators, levels, etc., into Moodle. Rubric creators: Rubistar, iRubric

Step 8: Pilot-test your rubric

Prior to implementing your rubric on a live course, obtain feedback from:

  • Teaching assistants

Try out your new rubric on a sample of student work. After you pilot-test your rubric, analyze the results to consider its effectiveness and revise accordingly.

Additional tips:

  • Limit the rubric to a single page for reading and grading ease
  • Use parallel language. Use similar language and syntax/wording from column to column. Make sure that the rubric can be easily read from left to right or vice versa.
  • Use student-friendly language. Make sure the language is learning-level appropriate. If you use academic language or concepts, you will need to teach those concepts.
  • Share and discuss the rubric with your students. Students should understand that the rubric is there to help them learn, reflect, and self-assess. If students use a rubric, they will understand the expectations and their relevance to learning.
  • Consider scalability and reusability of rubrics. Create rubric templates that you can alter as needed for multiple assignments.
  • Maximize the descriptiveness of your language. Avoid words like “good” and “excellent.” For example, instead of saying “uses excellent sources,” you might describe what makes a source excellent so that students will know. You might also consider reducing the reliance on quantity, such as a number of allowable misspelled words. Focus instead, for example, on how distracting any spelling errors are.

Example of an analytic rubric for a final paper

Example of a holistic rubric for a final paper

Example of a single-point rubric

More examples:

  • Single Point Rubric Template (variation)
  • Analytic Rubric Template (make a copy to edit)
  • A Rubric for Rubrics
  • Bank of Online Discussion Rubrics in different formats
  • Mathematical Presentations Descriptive Rubric
  • Math Proof Assessment Rubric
  • Kansas State Sample Rubrics
  • Design Single Point Rubric

Technology Tools: Rubrics in Moodle

  • Moodle Docs: Rubrics
  • Moodle Docs: Grading Guide (use for single-point rubrics)

Tools with rubrics (other than Moodle)

  • Google Assignments
  • Turnitin Assignments: Rubric or Grading Form

Other resources

  • DePaul University (n.d.). Rubrics.
  • Gonzalez, J. (2014). Know your terms: Holistic, analytic, and single-point rubrics. Cult of Pedagogy.
  • Goodrich, H. (1996). Understanding rubrics. Educational Leadership, 54(4), 14-17.
  • Miller, A. (2012). Tame the beast: Tips for designing and using rubrics.
  • Ragupathi, K., & Lee, A. (2020). Beyond fairness and consistency in grading: The role of rubrics in higher education. In C. Sanger & N. Gleason (Eds.), Diversity and Inclusion in Global Higher Education. Palgrave Macmillan, Singapore.

Rubric Design

Articulating your assessment values

Reading, commenting on, and then assigning a grade to a piece of student writing requires intense attention and difficult judgment calls. Some faculty dread “the stack.” Students may share the faculty’s dim view of writing assessment, perceiving it as highly subjective. They wonder why one faculty member values evidence and correctness before all else, while another seeks a vaguely defined originality.

Writing rubrics can help address the concerns of both faculty and students by making writing assessment more efficient, consistent, and public. Whether it is called a grading rubric, a grading sheet, or a scoring guide, a writing assignment rubric lists criteria by which the writing is graded.

Why create a writing rubric?

  • It makes your tacit rhetorical knowledge explicit
  • It articulates community- and discipline-specific standards of excellence
  • It links the grade you give the assignment to the criteria
  • It can make your grading more efficient, consistent, and fair as you can read and comment with your criteria in mind
  • It can help you reverse engineer your course: once you have the rubrics created, you can align your readings, activities, and lectures with the rubrics to set your students up for success
  • It can help your students produce writing that you look forward to reading

How to create a writing rubric

Create a rubric at the same time you create the assignment. It will help you explain to the students what your goals are for the assignment.

  • Consider your purpose: do you need a rubric that addresses the standards for all the writing in the course? Or do you need to address the writing requirements and standards for just one assignment?  Task-specific rubrics are written to help teachers assess individual assignments or genres, whereas generic rubrics are written to help teachers assess multiple assignments.
  • Begin by listing the important qualities of the writing that will be produced in response to a particular assignment. It may be helpful to have several examples of excellent versions of the assignment in front of you: what writing elements do they all have in common? Among other things, these may include features of the argument, such as a main claim or thesis; use and presentation of sources, including visuals; and formatting guidelines such as the requirement of a works cited.
  • Then consider how the criteria will be weighted in grading. Perhaps all criteria are equally important, or perhaps there are two or three that all students must achieve to earn a passing grade. Decide what best fits the class and requirements of the assignment.

Consider involving students in the second and third steps above. A class session devoted to developing a rubric can provoke many important discussions about the ways the features of the language serve the purpose of the writing. And when students themselves work to describe the writing they are expected to produce, they are more likely to achieve it.

At this point, you will need to decide if you want to create a holistic or an analytic rubric. There is much debate about these two approaches to assessment.

Comparing Holistic and Analytic Rubrics

Holistic Scoring

Holistic scoring aims to rate overall proficiency in a given student writing sample. It is often used in large-scale writing program assessment and impromptu classroom writing for diagnostic purposes.

General tenets of holistic scoring:

  • Responding to drafts is part of evaluation
  • Responses do not focus on grammar and mechanics during drafting and there is little correction
  • Marginal comments are kept to 2-3 per page, with summative comments at the end
  • End commentary attends to students’ overall performance across learning objectives as articulated in the assignment
  • Response language aims to foster students’ self-assessment

Holistic rubrics emphasize what students do well and generally increase efficiency; they may also be more valid because scoring includes the authentic, personal reaction of the reader. But holistic scores won’t tell a student how they’ve progressed relative to previous assignments and may be rater-dependent, reducing reliability. (For a summary of advantages and disadvantages of holistic scoring, see Becker, 2011, p. 116.)

Here is an example of a partial holistic rubric:

Summary meets all the criteria. The writer understands the article thoroughly. The main points in the article appear in the summary with all main points proportionately developed. The summary should be as comprehensive as possible and should read smoothly, with appropriate transitions between ideas. Sentences should be clear, without vagueness or ambiguity and without grammatical or mechanical errors.

A complete holistic rubric for a research paper (authored by Jonah Willihnganz) can be downloaded here.

Analytic Scoring

Analytic scoring makes explicit the contribution of each element of writing to the final grade. For example, an instructor may choose to give 30 points for an essay whose ideas are sufficiently complex, that marshals good reasons in support of a thesis, and whose argument is logical, and 20 points for well-constructed sentences and careful copy editing.
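
To make the arithmetic concrete, here is a minimal sketch of that kind of point allocation. The point values echo the example above; the earned points for the student are hypothetical.

```python
# Hypothetical analytic scoring: each element of the writing carries its own
# point value, and the earned points sum to the assignment grade.

point_values = {                       # maximum points per element
    "ideas, reasons, and logic": 30,   # complex ideas, good reasons, logical argument
    "sentences and copy editing": 20,  # well-constructed sentences, careful editing
}

earned = {                             # points awarded to one essay
    "ideas, reasons, and logic": 26,
    "sentences and copy editing": 15,
}

total = sum(earned.values())
possible = sum(point_values.values())
print(f"{total}/{possible} points ({100 * total / possible:.0f}%)")  # 41/50 points (82%)
```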

General tenets of analytic scoring:

  • Reflect emphases in your teaching and communicate the learning goals for the course
  • Emphasize student performance across criteria, which are established as central to the assignment in advance, usually on an assignment sheet
  • Typically take a quantitative approach, providing a scaled set of points for each criterion
  • Make the analytic framework available to students before they write  

Advantages of an analytic rubric include ease of training raters and improved reliability. Meanwhile, writers often can more easily diagnose the strengths and weaknesses of their work. But analytic rubrics can be time-consuming to produce, and raters may judge the writing holistically anyway. Moreover, many readers believe that writing traits cannot be separated. (For a summary of the advantages and disadvantages of analytic scoring, see Becker, 2011, p. 115.)

For example, a partial analytic rubric for a single trait, “addresses a significant issue”:

  • Excellent: Elegantly establishes the current problem, why it matters, to whom
  • Above Average: Identifies the problem; explains why it matters and to whom
  • Competent: Describes topic but relevance unclear or cursory
  • Developing: Unclear issue and relevance

A complete analytic rubric for a research paper can be downloaded here. In WIM courses, this language should be revised to name specific disciplinary conventions.

Whichever type of rubric you write, your goal is to avoid pushing students into prescriptive formulas and limiting thinking (e.g., “each paragraph has five sentences”). By carefully describing the writing you want to read, you give students a clear target, and, as Ed White puts it, “describe the ongoing work of the class” (75).

Writing rubrics contribute meaningfully to the teaching of writing. Think of them as a coaching aide. In class and in conferences, you can use the language of the rubric to help you move past generic statements about what makes good writing good to statements about what constitutes success on the assignment and in the genre or discourse community. The rubric articulates what you are asking students to produce on the page; once that work is accomplished, you can turn your attention to explaining how students can achieve it.

Works Cited

Becker, Anthony. “Examining Rubrics Used to Measure Writing Performance in U.S. Intensive English Programs.” The CATESOL Journal 22.1 (2010/2011): 113-30. Web.

White, Edward M.  Teaching and Assessing Writing . Proquest Info and Learning, 1985. Print.

Further Resources

CCCC Committee on Assessment. “Writing Assessment: A Position Statement.” November 2006 (Revised March 2009). Conference on College Composition and Communication. Web.

Gallagher, Chris W. “Assess Locally, Validate Globally: Heuristics for Validating Local Writing Assessments.” Writing Program Administration 34.1 (2010): 10-32. Web.

Huot, Brian.  (Re)Articulating Writing Assessment for Teaching and Learning.  Logan: Utah State UP, 2002. Print.

Kelly-Reilly, Diane, and Peggy O’Neil, eds. Journal of Writing Assessment. Web.

McKee, Heidi A., and Dànielle Nicole DeVoss, eds. Digital Writing Assessment & Evaluation. Logan, UT: Computers and Composition Digital Press/Utah State University Press, 2013. Web.

O’Neill, Peggy, Cindy Moore, and Brian Huot.  A Guide to College Writing Assessment . Logan: Utah State UP, 2009. Print.

Sommers, Nancy.  Responding to Student Writers . Macmillan Higher Education, 2013.

Straub, Richard. “Responding, Really Responding to Other Students’ Writing.” The Subject is Writing: Essays by Teachers and Students. Ed. Wendy Bishop. Boynton/Cook, 1999. Web.

White, Edward M., and Cassie A. Wright.  Assigning, Responding, Evaluating: A Writing Teacher’s Guide . 5th ed. Bedford/St. Martin’s, 2015. Print.

Essay Rubric


About this printout

This rubric delineates specific expectations about an essay assignment to students and provides a means of assessing completed student essays.

Teaching with this printout

Grading rubrics can be of great benefit to both you and your students. For you, a rubric saves time and decreases subjectivity. Specific criteria are explicitly stated, facilitating the grading process and increasing your objectivity. For students, the use of grading rubrics helps them to meet or exceed expectations, to view the grading process as being “fair,” and to set goals for future learning. In order to help your students meet or exceed expectations of the assignment, be sure to discuss the rubric with your students when you assign an essay. It is helpful to show them examples of written pieces that meet and do not meet the expectations. As an added benefit, because the criteria are explicitly stated, the use of the rubric decreases the likelihood that students will argue about the grade they receive. The explicitness of the expectations helps students know exactly why they lost points on the assignment and aids them in setting goals for future improvement.

More ideas to try

  • Routinely have students score peers’ essays using the rubric as the assessment tool. This increases their level of awareness of the traits that distinguish successful essays from those that fail to meet the criteria. Have peer editors use the Reviewer’s Comments section to add any praise, constructive criticism, or questions.
  • Alter some expectations or add additional traits on the rubric as needed. Students’ needs may necessitate making more rigorous criteria for advanced learners or less stringent guidelines for younger or special needs students. Furthermore, the content area for which the essay is written may require some alterations to the rubric. In social studies, for example, an essay about geographical landforms and their effect on the culture of a region might necessitate additional criteria about the use of specific terminology.
  • After you and your students have used the rubric, have them work in groups to make suggested alterations to the rubric to more precisely match their needs or the parameters of a particular writing assignment.


Eberly Center for Teaching Excellence & Educational Innovation

Creating and Using Rubrics

A rubric is a scoring tool that explicitly describes the instructor’s performance expectations for an assignment or piece of work. A rubric identifies:

  • criteria: the aspects of performance (e.g., argument, evidence, clarity) that will be assessed
  • descriptors: the characteristics associated with each dimension (e.g., argument is demonstrable and original, evidence is diverse and compelling)
  • performance levels: a rating scale that identifies students’ level of mastery within each criterion  

Rubrics can be used to provide feedback to students on diverse types of assignments, from papers, projects, and oral presentations to artistic performances and group projects.

Benefitting from Rubrics

Rubrics help instructors:

  • reduce the time spent grading by allowing them to refer to a substantive description without writing long comments
  • more clearly identify strengths and weaknesses across an entire class and adjust their instruction appropriately
  • ensure consistency across time and across graders
  • reduce the uncertainty which can accompany grading
  • discourage complaints about grades

Rubrics help students:

  • understand instructors’ expectations and standards
  • use instructor feedback to improve their performance
  • monitor and assess their progress as they work towards clearly indicated goals
  • recognize their strengths and weaknesses and direct their efforts accordingly

Examples of Rubrics

Here we provide a sample set of rubrics designed by faculty at Carnegie Mellon and other institutions. Although your particular field of study or type of assessment may not be represented, viewing a rubric designed for a similar assessment may give you ideas for the kinds of criteria, descriptions, and performance levels to use in your own rubric.

Papers

  • Example 1: Philosophy Paper This rubric was designed for student papers in a range of courses in philosophy (Carnegie Mellon).
  • Example 2: Psychology Assignment Short, concept application homework assignment in cognitive psychology (Carnegie Mellon).
  • Example 3: Anthropology Writing Assignments This rubric was designed for a series of short writing assignments in anthropology (Carnegie Mellon).
  • Example 4: History Research Paper This rubric was designed for essays and research papers in history (Carnegie Mellon).

Projects

  • Example 1: Capstone Project in Design This rubric describes the components and standards of performance from the research phase to the final presentation for a senior capstone project in design (Carnegie Mellon).
  • Example 2: Engineering Design Project This rubric describes performance standards for three aspects of a team project: research and design, communication, and team work.

Oral Presentations

  • Example 1: Oral Exam This rubric describes a set of components and standards for assessing performance on an oral exam in an upper-division course in history (Carnegie Mellon).
  • Example 2: Oral Communication This rubric is adapted from Huba and Freed, 2000.
  • Example 3: Group Presentations This rubric describes a set of components and standards for assessing group presentations in history (Carnegie Mellon).

Class Participation/Contributions

  • Example 1: Discussion Class This rubric assesses the quality of student contributions to class discussions. This is appropriate for an undergraduate-level course (Carnegie Mellon).
  • Example 2: Advanced Seminar This rubric is designed for assessing discussion performance in an advanced undergraduate or graduate seminar.

See also " Examples and Tools " section of this site for more rubrics.



Center for Excellence in Teaching


Academic essay rubric

This is a grading rubric an instructor uses to assess students’ work on this type of assignment. It is a sample rubric that needs to be edited to reflect the specifics of a particular assignment. 

Download this file


Assessment Rubrics

A rubric is commonly defined as a tool that articulates the expectations for an assignment by listing criteria and, for each criterion, describing levels of quality (Andrade, 2000; Arter & Chappuis, 2007; Stiggins, 2001). Criteria are used in determining the level at which student work meets expectations. Markers of quality give students a clear idea about what must be done to demonstrate a certain level of mastery, understanding, or proficiency (e.g., "Exceeds Expectations" does xyz, "Meets Expectations" does only xy or yz, "Developing" does only x or y or z). Rubrics can be used for any assignment in a course, or for any way in which students are asked to demonstrate what they've learned. They can also be used to facilitate self- and peer-review of student work.

Rubrics aren't just for summative evaluation. They can be used as a teaching tool as well. When used as part of a formative assessment, they can help students understand both the holistic nature and/or specific analytics of learning expected, the level of learning expected, and then make decisions about their current level of learning to inform revision and improvement (Reddy & Andrade, 2010). 

Why use rubrics?

Rubrics help instructors:

Provide students with feedback that is clear, directed and focused on ways to improve learning.

Demystify assignment expectations so students can focus on the work instead of guessing "what the instructor wants."

Reduce time spent on grading and develop consistency in how you evaluate student learning across students and throughout a class.

Rubrics help students:

Focus their efforts on completing assignments in line with clearly set expectations.

Self- and peer-reflect on their learning, making informed changes to achieve the desired learning level.

Developing a Rubric

During the process of developing a rubric, instructors might:

Select an assignment for your course, ideally one you identify as time-intensive to grade or that students report as having unclear expectations.

Decide what you want students to demonstrate about their learning through that assignment. These are your criteria.

Identify the markers of quality on which you feel comfortable evaluating students’ level of learning, often along with a numerical scale (e.g., "Accomplished," "Emerging," "Beginning" for a developmental approach).

Give students the rubric ahead of time. Advise them to use it in guiding their completion of the assignment.

It can be overwhelming to create a rubric for every assignment in a class at once, so start by creating one rubric for one assignment. See how it goes and develop more from there! Also, do not reinvent the wheel. Rubric templates and examples exist all over the Internet, or consider asking colleagues if they have developed rubrics for similar assignments. 

Sample Rubrics

Examples of holistic and analytic rubrics: see Tables 2 & 3 in “Rubrics: Tools for Making Learning Goals and Evaluation Criteria Explicit for Both Teachers and Learners” (Allen & Tanner, 2006)

Examples across assessment types: see “Creating and Using Rubrics,” Carnegie Mellon Eberly Center for Teaching Excellence & Educational Innovation

“VALUE Rubrics”: see the Association of American Colleges and Universities’ set of free, downloadable rubrics, with foci including creative thinking, problem solving, and information literacy.

Andrade, H. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13-18.

Arter, J., & Chappuis, J. (2007). Creating and recognizing quality rubrics. Upper Saddle River, NJ: Pearson/Merrill Prentice Hall.

Stiggins, R. J. (2001). Student-involved classroom assessment (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.

Reddy, Y., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435-448.

Sample Essay Rubric for Elementary Teachers


An essay rubric is a way teachers assess students' essay writing by using specific criteria to grade assignments. Essay rubrics save teachers time because all of the criteria are listed and organized into one convenient paper. If used effectively, rubrics can help improve students' writing.

How to Use an Essay Rubric

  • The best way to use an essay rubric is to give the rubric to the students before they begin their writing assignment. Review each criterion with the students and give them specific examples of what you want so they will know what is expected of them.
  • Next, assign students to write the essay, reminding them of the criteria and your expectations for the assignment.
  • Once students complete the essay, have them first score their own essay using the rubric, and then switch with a partner. (This peer-editing process is a quick and reliable way to see how well the student did on the assignment. It's also good practice in giving and receiving criticism and becoming a more effective writer.)
  • Once peer-editing is complete, have students hand in their essays. Now it is your turn to evaluate the assignment according to the criteria on the rubric. Make sure to offer students examples if they did not meet the criteria listed.

Informal Essay Rubric

Formal Essay Rubric


Vibrant Teaching

3 Types of Writing Rubrics for Effective Assessments


There are so many different writing rubrics out there and it may be difficult to find the right one. Below you will find a guide with 3 types of effective writing rubrics. Choosing the right one depends on the writing genre and your needs for the assessment.

Student-Friendly Rubrics

There are two ways to think about student-friendly rubrics. The first way is to use a rubric that students can complete as a self-assessment. The second way is to use a rubric that is completed by the teacher but is easy for students to understand. These rubrics are often based on the standards but shown in a different way. Instead of writing the actual standard on the rubric, include a one or two-word category.

See the example below of a third grade informative writing rubric. The first rubric uses the words introduction, content, linking words, closing, and mechanics for the categories. The second rubric lists each standard that goes with those categories. As you can see, the first option covers the same information but uses fewer words and is much easier for students to use and understand.


When do you use student-friendly rubrics? These are great for students to assess themselves. The student and teacher can fill out the rubric separately and then meet for a conference to discuss any differences. This same strategy can also be done with two students, but instead of a conference, they will meet to edit and revise their work. Another option is for teachers to fill out the writing rubric and hand it back to students with feedback. The student-friendly rubrics are easy for kids to understand and are still aligned with the standards.


Teacher-Friendly Rubrics

Teacher-friendly rubrics list each standard and use more details in the descriptions. This helps teachers know what to look for when assessing a writing piece. It will be very clear from the rubric whether or not the student is meeting or exceeding the standard. These writing rubrics are also quick and easy for teachers to use but may be more difficult for students to understand.

The examples below are standards based teacher-friendly rubrics. On the left side, you will see each Common Core Standard. The descriptions on the right side match each standard accordingly. These rubrics are used for assessing narrative writing in 1st grade, 2nd grade, 3rd grade, 4th grade, and 5th grade.


When do you use teacher-friendly rubrics? These are ideal when a teacher needs to get an accurate assessment for recording grades or writing report cards. They can choose whether or not to hand these rubrics back to students or use them for their records. Teacher-friendly rubrics are also helpful to show parents, especially at conferences.

Time-Saving Rubrics

Time-saving rubrics are a combination of student-friendly and teacher-friendly rubrics. These are standards based and list each standard on the left side. The difference is that instead of a description there is a number in each box such as 1, 2, 3, or 4. These numbers tell whether or not the student is meeting the standard.

1 = needs support, 2 = approaching standard, 3 = meets standard, 4 = exceeds standard

The benefit of these rubrics is that they save time and energy: the teacher simply circles the number for each standard. They are quick for teachers to use but also easy for students to understand. Some teachers may also choose to include a section for the total score and comments, depending on their needs. Check out some of the examples below for opinion writing.


When do you use time-saving rubrics? Well, of course, these are helpful when teachers want to save time. The best part is that the rubrics are still standards based but also very easy for students and parents to understand. Use these anytime!

Writing Rubrics Conclusion

I hope you have found these 3 types of writing rubrics helpful and will utilize them with your class. Think about what your goal is with the assessment and choose the best rubric for both you and your students. You may find that a mix of all three is beneficial throughout the school year.

Writing Rubrics by Grade Level

Grab these standards based writing rubrics. Each grade level includes 9 rubrics in 3 different options. Choose from student-friendly, teacher-friendly, and time-saving rubrics. These are ideal for assessing narrative, opinion, and informative pieces. Click each grade level below to learn more. Also, check out this Monthly Writing Prompts blog post for more resources and ideas.


Middle School Writing Rubrics



Assessment and Curriculum Support Center

Creating and Using Rubrics

Last Updated: 4 March 2024.

On this page:

  • What is a rubric?
  • Why use a rubric?
  • What are the parts of a rubric?
  • Developing a rubric
  • Sample rubrics
  • Scoring rubric group orientation and calibration
  • Suggestions for using rubrics in courses
  • Equity-minded considerations for rubric development
  • Tips for developing a rubric
  • Additional resources & sources consulted

Note: The information and resources contained here serve only as a primer to the exciting and diverse perspectives in the field today. This page will be continually updated to reflect shared understandings of equity-minded theory and practice in learning assessment.

1. What is a rubric?

A rubric is an assessment tool often shaped like a matrix, which describes levels of achievement in a specific area of performance, understanding, or behavior.

There are two main types of rubrics:

Analytic Rubric: An analytic rubric specifies at least two characteristics to be assessed at each performance level and provides a separate score for each characteristic (e.g., a score on “formatting” and a score on “content development”).

  • Advantages: provides more detailed feedback on student performance; promotes consistent scoring across students and between raters
  • Disadvantages: more time consuming than applying a holistic rubric
  • Use an analytic rubric when: you want to see strengths and weaknesses, and you want detailed feedback about student performance

Holistic Rubric: A holistic rubric provides a single score based on an overall impression of a student’s performance on a task.

  • Advantages: quick scoring; provides an overview of student achievement; efficient for large group scoring
  • Disadvantages: does not provide detailed information; not diagnostic; may be difficult for scorers to decide on one overall score
  • Use a holistic rubric when: you want a quick snapshot of achievement, and a single dimension is adequate to define quality

2. Why use a rubric?

  • A rubric creates a common framework and language for assessment.
  • Complex products or behaviors can be examined efficiently.
  • Well-trained reviewers apply the same criteria and standards.
  • Rubrics are criterion-referenced, rather than norm-referenced. Raters ask, “Did the student meet the criteria for level 5 of the rubric?” rather than “How well did this student do compared to other students?”
  • Using rubrics can lead to substantive conversations among faculty.
  • When faculty members collaborate to develop a rubric, it promotes shared expectations and grading practices.

Faculty members can use rubrics for program assessment. Examples:

The English Department collected essays from students in all sections of English 100. A random sample of essays was selected. A team of faculty members evaluated the essays by applying an analytic scoring rubric. Before applying the rubric, they “normed”–that is, they agreed on how to apply the rubric by scoring the same set of essays and discussing them until consensus was reached (see below: “6. Scoring rubric group orientation and calibration”).

Biology laboratory instructors agreed to use a “Biology Lab Report Rubric” to grade students’ lab reports in all Biology lab sections, from 100- to 400-level. At the beginning of each semester, instructors met and discussed sample lab reports. They agreed on how to apply the rubric and their expectations for an “A,” “B,” “C,” etc., report in 100-level, 200-level, and 300- and 400-level lab sections. Every other year, a random sample of students’ lab reports is selected from 300- and 400-level sections. Each of those reports is then scored by a Biology professor. The score given by the course instructor is compared to the score given by the Biology professor. In addition, the scores are reported as part of the program’s assessment report. In this way, the program determines how well it is meeting its outcome, “Students will be able to write biology laboratory reports.”

3. What are the parts of a rubric?

Rubrics are composed of four basic parts. In its simplest form, the rubric includes:

  • A task description. The outcome being assessed or instructions students received for an assignment.
  • The characteristics to be rated (rows). The skills, knowledge, and/or behavior to be demonstrated.
  • The levels of mastery/scale (columns). Labels for the rating scale, for example:
      • Beginning, approaching, meeting, exceeding
      • Emerging, developing, proficient, exemplary
      • Novice, intermediate, intermediate high, advanced
      • Beginning, striving, succeeding, soaring
  • A description of each characteristic at each level of mastery/scale (cells). Also called a “performance description.” Explains what a student will have done to demonstrate they are at a given level of mastery for a given characteristic.

4. Developing a rubric

Step 1: Identify what you want to assess

Step 2: Identify the characteristics to be rated (rows). These are also called “dimensions.”

  • Specify the skills, knowledge, and/or behaviors that you will be looking for.
  • Limit the characteristics to those that are most important to the assessment.

Step 3: Identify the levels of mastery/scale (columns).

Tip: Aim for an even number (4 or 6) because when an odd number is used, the middle tends to become the “catch-all” category.

Step 4: Describe each level of mastery for each characteristic/dimension (cells).

  • Describe the best work you could expect using these characteristics. This describes the top category.
  • Describe an unacceptable product. This describes the lowest category.
  • Develop descriptions of intermediate-level products for intermediate categories.
Important: Each description and each characteristic should be mutually exclusive.

Step 5: Test rubric.

  • Apply the rubric to an assignment.
  • Share with colleagues.
Tip: Faculty members often find it useful to establish the minimum score needed for the student work to be deemed passable. For example, faculty members may decide that a “1” or “2” on a 4-point scale (4 = exemplary, 3 = proficient, 2 = marginal, 1 = unacceptable) does not meet the minimum quality expectations. We encourage a standard-setting session to set the score needed to meet expectations (also called a “cutscore”). Monica has posted materials from standard-setting workshops, one offered on campus and the other at a national conference (includes speaker notes with the presentation slides). Faculty may then set their criterion for success, for example: 90% of the students must score 3 or higher. If assessment study results fall short, action will need to be taken.
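
As a minimal illustration of checking such a criterion for success, the sketch below tallies how many sampled papers reach the cutscore. The scores, cutscore, and 90% target are hypothetical placeholders for whatever your program decides.

```python
# Hypothetical check of a program's criterion for success:
# "90% of students must score 3 or higher" on a 4-point rubric scale.

scores = [4, 3, 3, 2, 4, 3, 4, 3, 3, 4, 2, 3, 4, 3, 3]  # one rubric score per sampled paper
cutscore = 3                                             # minimum score that meets expectations
target = 0.90                                            # criterion for success

met = sum(score >= cutscore for score in scores)
proportion = met / len(scores)

print(f"{met}/{len(scores)} papers meet expectations ({proportion:.0%})")
print("Criterion met" if proportion >= target else "Criterion not met: plan follow-up action")
```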

Step 6: Discuss with colleagues. Review feedback and revise.

Important: When developing a rubric for program assessment, enlist the help of colleagues. Rubrics promote shared expectations and consistent grading practices which benefit faculty members and students in the program.

5. Sample rubrics

Rubrics are on our Rubric Bank page and in our Rubric Repository (Graduate Degree Programs). More are available at the Assessment and Curriculum Support Center in Crawford Hall (hard copy).

These open as Word documents and are examples from outside UH.

  • Group Participation (analytic rubric)
  • Participation (holistic rubric)
  • Design Project (analytic rubric)
  • Critical Thinking (analytic rubric)
  • Media and Design Elements (analytic rubric; portfolio)
  • Writing (holistic rubric; portfolio)

6. Scoring rubric group orientation and calibration

When using a rubric for program assessment purposes, faculty members apply the rubric to pieces of student work (e.g., reports, oral presentations, design projects). To produce dependable scores, each faculty member needs to interpret the rubric in the same way. The process of training faculty members to apply the rubric is called “norming.” It’s a way to calibrate the faculty members so that scores are accurate and consistent across the faculty. Below are directions for an assessment coordinator carrying out this process.

Suggested materials for a scoring session:

  • Copies of the rubric
  • Copies of the “anchors”: pieces of student work that illustrate each level of mastery. Suggestion: have 6 anchor pieces (2 low, 2 middle, 2 high)
  • Score sheets
  • Extra pens, tape, post-its, paper clips, stapler, rubber bands, etc.

Hold the scoring session in a room that:

  • Allows the scorers to spread out as they rate the student pieces
  • Has a chalk or white board, smart board, or flip chart

Suggested steps for running the session:

  • Describe the purpose of the activity, stressing how it fits into program assessment plans. Explain that the purpose is to assess the program, not individual students or faculty, and describe ethical guidelines, including respect for confidentiality and privacy.
  • Describe the nature of the products that will be reviewed, briefly summarizing how they were obtained.
  • Describe the scoring rubric and its categories. Explain how it was developed.
  • Analytic: Explain that readers should rate each dimension of an analytic rubric separately, and they should apply the criteria without concern for how often each score (level of mastery) is used.
  • Holistic: Explain that readers should assign the score or level of mastery that best describes the whole piece; some aspects of the piece may not appear in that score, and that is okay. They should apply the criteria without concern for how often each score is used.
  • Give each scorer a copy of several student products that are exemplars of different levels of performance. Ask each scorer to independently apply the rubric to each of these products, writing their ratings on a scrap sheet of paper.
  • Once everyone is done, collect everyone’s ratings and display them so everyone can see the degree of agreement. This is often done on a blackboard, with each person in turn announcing his/her ratings as they are entered on the board. Alternatively, the facilitator could ask raters to raise their hands when their rating category is announced, making the extent of agreement very clear to everyone and making it very easy to identify raters who routinely give unusually high or low ratings.
  • Guide the group in a discussion of their ratings. There will be differences. This discussion is important to establish standards. Attempt to reach consensus on the most appropriate rating for each of the products being examined by inviting people who gave different ratings to explain their judgments. Raters should be encouraged to explain by making explicit references to the rubric. Usually consensus is possible, but sometimes a split decision is developed, e.g., the group may agree that a product is a “3-4” split because it has elements of both categories. This is usually not a problem. You might allow the group to revise the rubric to clarify its use but avoid allowing the group to drift away from the rubric and learning outcome(s) being assessed.
  • Once the group is comfortable with how the rubric is applied, the rating begins. Explain how to record ratings using the score sheet and describe the procedures; reviewers then begin scoring.

After the scoring session, debrief with questions such as the following (a simple reliability check is sketched after this list):

  • Are results sufficiently reliable?
  • What do the results mean? Are we satisfied with the extent of students’ learning?
  • Who needs to know the results?
  • What are the implications of the results for curriculum, pedagogy, or student support services?
  • How might the assessment process, itself, be improved?
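
One simple way to get at the reliability question is to report percent agreement between pairs of raters who scored the same pieces. The sketch below uses hypothetical scores; more formal indices exist (e.g., Cohen's kappa), but exact and adjacent agreement are a common starting point.

```python
# Hypothetical reliability check after a scoring session: agreement between two
# raters who independently applied the same 4-point rubric to the same papers.

rater_a = [4, 3, 3, 2, 4, 3, 1, 3, 4, 2]
rater_b = [4, 3, 2, 2, 4, 3, 2, 3, 4, 2]

exact = sum(a == b for a, b in zip(rater_a, rater_b))
adjacent = sum(abs(a - b) <= 1 for a, b in zip(rater_a, rater_b))

print(f"Exact agreement:  {exact}/{len(rater_a)} ({100 * exact / len(rater_a):.0f}%)")
print(f"Within one level: {adjacent}/{len(rater_a)} ({100 * adjacent / len(rater_a):.0f}%)")
```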

7. Suggestions for using rubrics in courses

  • Use the rubric to grade student work. Hand out the rubric with the assignment so students will know your expectations and how they’ll be graded. This should help students master your learning outcomes by guiding their work in appropriate directions.
  • Use a rubric for grading student work and return the rubric with the grading on it. Faculty save time writing extensive comments; they just circle or highlight relevant segments of the rubric. Some faculty members include room for additional comments on the rubric page, either within each section or at the end.
  • Develop a rubric with your students for an assignment or group project. Students can then monitor themselves and their peers using agreed-upon criteria that they helped develop. Many faculty members find that students will create higher standards for themselves than faculty members would impose on them.
  • Have students apply your rubric to sample products before they create their own. Faculty members report that students are quite accurate when doing this, and this process should help them evaluate their own projects as they are being developed. The ability to evaluate, edit, and improve draft documents is an important skill.
  • Have students exchange paper drafts and give peer feedback using the rubric. Then, give students a few days to revise before submitting the final draft to you. You might also require that they turn in the draft and peer-scored rubric with their final paper.
  • Have students self-assess their products using the rubric and hand in their self-assessment with the product; then, faculty members and students can compare self- and faculty-generated evaluations.

8. Equity-minded considerations for rubric development

Ensure transparency by making rubric criteria public, explicit, and accessible

Transparency is a core tenet of equity-minded assessment practice. Students should know and understand how they are being evaluated as early as possible.

  • Ensure the rubric is publicly available & easily accessible. We recommend publishing on your program or department website.
  • Have course instructors introduce and use the program rubric in their own courses. Instructors should explain to students connections between the rubric criteria and the course and program SLOs.
  • Write rubric criteria using student-focused and culturally-relevant language to ensure students understand the rubric’s purpose, the expectations it sets, and how criteria will be applied in assessing their work.
  • For example, instructors can provide annotated examples of student work using the rubric language as a resource for students.

Meaningfully involve students and engage multiple perspectives

Rubrics created by faculty alone risk perpetuating unseen biases as the evaluation criteria used will inherently reflect faculty perspectives, values, and assumptions. Including students and other stakeholders in developing criteria helps to ensure performance expectations are aligned between faculty, students, and community members. Additional perspectives to be engaged might include community members, alumni, co-curricular faculty/staff, field supervisors, potential employers, or current professionals. Consider the following strategies to meaningfully involve students and engage multiple perspectives:

  • Have students read each evaluation criteria and talk out loud about what they think it means. This will allow you to identify what language is clear and where there is still confusion.
  • Ask students to use their language to interpret the rubric and provide a student version of the rubric.
  • If you use this strategy, it is essential to create an inclusive environment where students and faculty have equal opportunity to provide input.
  • Be sure to incorporate feedback from faculty and instructors who teach diverse courses, levels, and in different sub-disciplinary topics. Faculty and instructors who teach introductory courses have valuable experiences and perspectives that may differ from those who teach higher-level courses.
  • Engage multiple perspectives including co-curricular faculty/staff, alumni, potential employers, and community members for feedback on evaluation criteria and rubric language. This will ensure evaluation criteria reflect what is important for all stakeholders.
  • Elevate historically silenced voices in discussions on rubric development. Ensure stakeholders from historically underrepresented communities have their voices heard and valued.

Honor students’ strengths in performance descriptions

When describing students’ performance at different levels of mastery, use language that describes what students can do rather than what they cannot do. For example:

  • Instead of: Students cannot make coherent arguments consistently.
  • Use: Students can make coherent arguments occasionally.

9. Tips for developing a rubric

  • Find and adapt an existing rubric! It is rare to find a rubric that is exactly right for your situation, but you can adapt an already existing rubric that has worked well for others and save a great deal of time. A faculty member in your program may already have a good one.
  • Evaluate the rubric. Ask yourself: A) Does the rubric relate to the outcome(s) being assessed? (If yes, success!) B) Does it address anything extraneous? (If yes, delete.) C) Is the rubric useful, feasible, manageable, and practical? (If yes, find multiple ways to use the rubric: program assessment, assignment grading, peer review, student self assessment.)
  • Collect samples of student work that exemplify each point on the scale or level. A rubric will not be meaningful to students or colleagues until the anchors/benchmarks/exemplars are available.
  • Expect to revise.
  • When you have a good rubric, SHARE IT!

10. Additional resources & sources consulted:

Rubric examples:

  • Rubrics primarily for undergraduate outcomes and programs
  • Rubric repository for graduate degree programs

Workshop presentation slides and handouts:

  • Workshop handout (Word document)
  • How to Use a Rubric for Program Assessment (2010)
  • Techniques for Using Rubrics in Program Assessment by guest speaker Dannelle Stevens (2010)
  • Rubrics: Save Grading Time & Engage Students in Learning by guest speaker Dannelle Stevens (2009)
  • Rubric Library, Institutional Research, Assessment & Planning, California State University-Fresno
  • The Basics of Rubrics [PDF], Schreyer Institute, Penn State
  • Creating Rubrics, Teaching Methods and Management, TeacherVision
  • Allen, Mary – University of Hawai’i at Manoa Spring 2008 Assessment Workshops, May 13-14, 2008 [available at the Assessment and Curriculum Support Center]
  • Mertler, Craig A. (2001). Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation, 7(25).
  • NPEC Sourcebook on Assessment: Definitions and Assessment Methods for Communication, Leadership, Information Literacy, Quantitative Reasoning, and Quantitative Skills [PDF] (June 2005)

15 Helpful Scoring Rubric Examples for All Grades and Subjects

In the end, they actually make grading easier.


When it comes to student assessment and evaluation, there are a lot of methods to consider. In some cases, testing is the best way to assess a student’s knowledge, and the answers are either right or wrong. But often, assessing a student’s performance is much less clear-cut. In these situations, a scoring rubric is often the way to go, especially if you’re using standards-based grading . Here’s what you need to know about this useful tool, along with lots of rubric examples to get you started.

What is a scoring rubric?

In the United States, a rubric is a guide that lays out the performance expectations for an assignment. It helps students understand what’s required of them, and guides teachers through the evaluation process. (Note that in other countries, the term “rubric” may instead refer to the set of instructions at the beginning of an exam. To avoid confusion, some people use the term “scoring rubric” instead.)

A rubric generally has three parts:

  • Performance criteria: These are the various aspects on which the assignment will be evaluated. They should align with the desired learning outcomes for the assignment.
  • Rating scale: This could be a number system (often 1 to 4) or words like “exceeds expectations, meets expectations, below expectations,” etc.
  • Indicators: These describe the qualities needed to earn a specific rating for each of the performance criteria. The level of detail may vary depending on the assignment and the purpose of the rubric itself.
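
If it helps to see how the three parts fit together, here’s a quick sketch of a rubric laid out as a simple data structure in Python. The criteria, rating labels, and indicator wording below are placeholders for illustration, not taken from any real rubric:

    # Illustrative only: criteria names, scale labels, and indicator wording are
    # invented; swap in the ones that fit your own assignment.
    rubric = {
        "rating_scale": ["Exceeds expectations", "Meets expectations", "Below expectations"],
        "criteria": {
            "Organization": [
                "Ideas follow a logical order with smooth transitions",
                "Ideas are mostly ordered; some transitions are abrupt",
                "Ideas are hard to follow; transitions are missing",
            ],
            "Evidence": [
                "Claims are supported with specific, relevant examples",
                "Claims are supported, but examples are general or thin",
                "Claims are unsupported or examples are off-topic",
            ],
        },
    }

    # Every criterion gets one indicator per point on the rating scale.
    for criterion, indicators in rubric["criteria"].items():
        assert len(indicators) == len(rubric["rating_scale"])

    # Look up the indicator for "Evidence" at the "Meets expectations" level.
    level = rubric["rating_scale"].index("Meets expectations")
    print(rubric["criteria"]["Evidence"][level])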

Rubrics take more time to develop up front, but they help ensure more consistent assessment, especially when the skills being assessed are more subjective. A well-developed rubric can actually save teachers a lot of time when it comes to grading. What’s more, sharing your scoring rubric with students in advance often helps improve performance. This way, students have a clear picture of what’s expected of them and what they need to do to achieve a specific grade or performance rating.

Learn more about why and how to use a rubric here.

Types of Rubric

There are three basic rubric categories, each with its own purpose.

Holistic Rubric

A holistic scoring rubric laying out the criteria for a rating of 1 to 4 when creating an infographic

Source: Cambrian College

This type of rubric combines all the scoring criteria in a single scale. They’re quick to create and use, but they have drawbacks. If a student’s work spans different levels, it can be difficult to decide which score to assign. They also make it harder to provide feedback on specific aspects.

Traditional letter grades are a type of holistic rubric. So are the popular “hamburger rubric” and “cupcake rubric” examples. Learn more about holistic rubrics here.

Analytic Rubric

Layout of an analytic scoring rubric, describing the different sections like criteria, rating, and indicators

Source: University of Nebraska

Analytic rubrics are much more complex and generally take a great deal more time up front to design. They include specific details of the expected learning outcomes, and descriptions of what criteria are required to meet various performance ratings in each. Each rating is assigned a point value, and the total number of points earned determines the overall grade for the assignment.

Though they’re more time-intensive to create, analytic rubrics actually save time while grading. Teachers can simply circle or highlight any relevant phrases in each rating, and add a comment or two if needed. They also help ensure consistency in grading, and make it much easier for students to understand what’s expected of them.

Learn more about analytic rubrics here.
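
If you’re curious about the math behind an analytic rubric, here’s a rough sketch of how circled point values can be totaled and turned into a grade. The criteria, point scale, and grade cutoffs are made up for illustration, not pulled from any particular rubric:

    # Illustrative only: the criteria, the 1-4 point scale, and the grade cutoffs
    # below are assumptions, not taken from any specific rubric.
    ratings = {"Content": 4, "Organization": 3, "Mechanics": 3}  # points circled by the grader
    points_possible = 4 * len(ratings)

    total = sum(ratings.values())              # 10
    percent = 100 * total / points_possible    # about 83%

    # A hypothetical percent-to-letter-grade conversion.
    cutoffs = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]
    grade = next(letter for cutoff, letter in cutoffs if percent >= cutoff)

    print(f"{total}/{points_possible} points, {percent:.1f}%, grade {grade}")  # 10/12 points, 83.3%, grade B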

Developmental Rubric

A developmental rubric for kindergarten skills, with illustrations to describe the indicators of criteria

Source: Deb’s Data Digest

A developmental rubric is a type of analytic rubric, but it’s used to assess progress along the way rather than determining a final score on an assignment. The details in these rubrics help students understand their achievements, as well as highlight the specific skills they still need to improve.

Developmental rubrics are essentially a subset of analytic rubrics. They leave off the point values, though, and focus instead on giving feedback using the criteria and indicators of performance.

Learn how to use developmental rubrics here.

Ready to create your own rubrics? Find general tips on designing rubrics here. Then, check out these examples across all grades and subjects to inspire you.

Elementary School Rubric Examples

These elementary school rubric examples come from real teachers who use them with their students. Adapt them to fit your needs and grade level.

Reading Fluency Rubric

A developmental rubric example for reading fluency

You can use this one as an analytic rubric by counting up points to earn a final score, or just to provide developmental feedback. There’s a second rubric page available specifically to assess prosody (reading with expression).

Learn more: Teacher Thrive

Reading Comprehension Rubric

Reading comprehension rubric, with criteria and indicators for different comprehension skills

The nice thing about this rubric is that you can use it at any grade level, for any text. If you like this style, you can get a reading fluency rubric here too.

Learn more: Pawprints Resource Center

Written Response Rubric


Rubrics aren’t just for huge projects. They can also help kids work on very specific skills, like this one for improving written responses on assessments.

Learn more: Dianna Radcliffe: Teaching Upper Elementary and More

Interactive Notebook Rubric

Interactive Notebook rubric example, with criteria and indicators for assessment

If you use interactive notebooks as a learning tool, this rubric can help kids stay on track and meet your expectations.

Learn more: Classroom Nook

Project Rubric

Rubric that can be used for assessing any elementary school project

Use this simple rubric as it is, or tweak it to include more specific indicators for the project you have in mind.

Learn more: Tales of a Title One Teacher

Behavior Rubric

Rubric for assessing student behavior in school and classroom

Developmental rubrics are perfect for assessing behavior and helping students identify opportunities for improvement. Send these home regularly to keep parents in the loop.

Learn more: Teachers.net Gazette

Middle School Rubric Examples

In middle school, use rubrics to offer detailed feedback on projects, presentations, and more. Be sure to share them with students in advance, and encourage them to use them as they work so they’ll know if they’re meeting expectations.

Argumentative Writing Rubric

An argumentative rubric example to use with middle school students

Argumentative writing is a part of language arts, social studies, science, and more. That makes this rubric especially useful.

Learn more: Dr. Caitlyn Tucker

Role-Play Rubric

A rubric example for assessing student role play in the classroom

Role-plays can be really useful when teaching social and critical thinking skills, but it’s hard to assess them. Try a rubric like this one to evaluate and provide useful feedback.

Learn more: A Question of Influence

Art Project Rubric

A rubric used to grade middle school art projects

Art is one of those subjects where grading can feel very subjective. Bring some objectivity to the process with a rubric like this.

Source: Art Ed Guru

Diorama Project Rubric

A rubric for grading middle school diorama projects

You can use diorama projects in almost any subject, and they’re a great chance to encourage creativity. Simplify the grading process and help kids know how to make their projects shine with this scoring rubric.

Learn more: Historyourstory.com

Oral Presentation Rubric

Rubric example for grading oral presentations given by middle school students

Rubrics are terrific for grading presentations, since you can include a variety of skills and other criteria. Consider letting students use a rubric like this to offer peer feedback too.

Learn more: Bright Hub Education

High School Rubric Examples

In high school, it’s important to include your grading rubrics when you give assignments like presentations, research projects, or essays. Kids who go on to college will definitely encounter rubrics, so helping them become familiar with them now will help in the future.

Presentation Rubric

Example of a rubric used to grade a high school project presentation

Analyze a student’s presentation both for content and communication skills with a rubric like this one. If needed, create a separate one for content knowledge with even more criteria and indicators.

Learn more: Michael A. Pena Jr.

Debate Rubric

A rubric for assessing a student's performance in a high school debate

Debate is a valuable learning tool that encourages critical thinking and oral communication skills. This rubric can help you assess those skills objectively.

Learn more: Education World

Project-Based Learning Rubric

A rubric for assessing high school project based learning assignments

Implementing project-based learning can be time-intensive, but the payoffs are worth it. Try this rubric to make student expectations clear and end-of-project assessment easier.

Learn more: Free Technology for Teachers

100-Point Essay Rubric

Rubric for scoring an essay with a final score out of 100 points

Need an easy way to convert a scoring rubric to a letter grade? This example for essay writing earns students a final score out of 100 points.

Learn more: Learn for Your Life

Drama Performance Rubric

A rubric teachers can use to evaluate a student's participation and performance in a theater production

If you’re unsure how to grade a student’s participation and performance in drama class, consider this example. It offers lots of objective criteria and indicators to evaluate.

Learn more: Chase March


Scoring rubrics help establish expectations and ensure assessment consistency. Use these rubric examples to help you design your own.


Examining consistency among different rubrics for assessing writing

Enayat A. Shabani (ORCID: 0000-0002-7341-1519) and Jaleh Panahi

Language Testing in Asia, volume 10, article number 12 (2020). Open access; published 26 September 2020.

Abstract

The literature on using scoring rubrics in writing assessment denotes the significance of rubrics as practical and useful means to assess the quality of writing tasks. This study investigates the agreement among rubrics endorsed and used for assessing the essay writing tasks of internationally recognized tests of English language proficiency. To carry out this study, two hundred essays (Task 2) from the academic IELTS test were randomly selected from about 800 essays from an official IELTS center, a representative of IDP Australia, written for tests taken between 2015 and 2016. The test takers were 19 to 42 years of age; 120 of them were female and 80 were male. Four raters were provided with four sets of rubrics used for scoring the essay writing tasks of tests developed by Educational Testing Service (ETS) and Cambridge English Language Assessment (i.e., Independent TOEFL iBT, GRE, CPE, and CAE) to score the essays, which had been previously scored officially by a certified IELTS examiner. The data analysis through correlation and factor analysis showed a general agreement among raters and scores; however, some deviant scorings by two of the raters were spotted. Follow-up interviews and a questionnaire survey revealed that the source of score deviations could be related to the raters’ interests and (un)familiarity with certain exams and their corresponding rubrics. Specifically, the results indicated that despite the significance which can be attached to rubrics in writing assessment, raters themselves can exceed them in terms of impact on scores.

Introduction

Writing effectively is a very crucial part of advancement in academic contexts (Rosenfeld et al. 2004 ; Rosenfeld et al. 2001 ), and generally, it is a leading contributor to anyone’s progress in the professional environment (Tardy and Matsuda 2009 ). It is an essential skill enabling individuals to have a remarkable role in today’s communities (Cumming 2001 ; Dunsmuir and Clifford 2003 ). Capable and competent L2 writers demonstrate their idea in the written form, present and discuss their contentions, and defend their stances in different circumstances (Archibald 2004 ; Bridgeman and Carlson 1983 ; Brown and Abeywickrama 2010 ; Cumming 2001 ; Hinkel 2009 ; Hyland 2004 ). Writing correctly and impressively is vital as it ensures that ideas and beliefs are expressed and transferred effectively. Being capable of writing well in the academic environment leads to better scores (Faigley et al. 1981 ; Graham et al. 2005 ; Harman 2013 ). It also helps those who require admission to different organizations of higher education (Lanteigne 2017 ) and provides them with better opportunities to get better job positions. Business communications, proceedings, legal agreements, and military agreements all have to be well written to transmit information in the most influential way (Canseco and Byrd 1989 ; Grabe and Kaplan 1996 ; Hyland 2004 ; Kroll and Kruchten 2003 ; Matsuda 2002 ). What should be taken into consideration is that even well until the mid-1980s, L2 writing in general, and academic L2 writing in particular, was hardly regarded as a major part of standard language tests desirable of being tested on its own right. Later, principally owing to the announced requirements of some universities, it meandered through its path to first being recognized as an option in these tests and then recently turning into an indispensable and integral part of them.

L2 writing is not merely the adequate use of grammar and vocabulary in composing a text; rather, it is more about the content, organization, and accurate use of language, and the proper use of linguistic and textual features of the language (Chenoweth and Hayes 2001; Cumming 2001; Holmes 2006; Hughes 2003; Sasaki 2000; Weissberg 2000; Wiseman 2012). The essay, as one of the official practices of writing, has become a major part of formal education in different countries. It is used by different universities and institutes in selecting qualified applicants, and applicants’ mastery and comprehension of L2 writing are evaluated by their performance in essay writing.

Essay, as one of the most formal types of writing, constitutes a setting in which clear explanations and arguments on a given topic are anticipated (Kane 2000; Muncie 2002; Richards and Schmidt 2002; Spurr 2005). The first steps in writing an essay are to gain a good grasp of the topic, apprehend the question raised, produce the response in an organized way, select the proper lexicon, and use the best structures (Brown and Abeywickrama 2010; Wyldeck 2008). To many, writing an essay is daunting, yet it is a key to success. It makes students think critically about a topic, gather information, organize and develop an idea, and finally produce a fulfilling written text (Levin 2009; Mackenzie 2007; McLaren 2006; Wyldeck 2008).

L2 writing has had a great impact on the field of teaching and learning and is now viewed not only as an independent skill in the classroom but also as an integral aspect of the process of instruction, learning, and, most recently, assessment (Archibald 2001; Grabe and Kaplan 1996; MacDonald 1994; Nystrand et al. 1993; Raimes 1991). Now, it is not possible to think of a dependable test of English language proficiency without a section on essay writing, especially when academic and educational purposes are of concern. Educational Testing Service (ETS) and Cambridge English Language Assessment offer a particular section on essay writing in their tests of English language proficiency. The independent TOEFL iBT writing section, the objective of which is to gauge and assess learners’ ability to logically and precisely express their opinions in their L2, requires learners to write well at the sentence, paragraph, and essay level. It is written on a computer using a word processing program with only rudimentary features, without a grammar or spelling checker. Generally, the essay should have an introduction, a body, and a conclusion. A standard essay usually has four paragraphs; five is possibly better, and six is too many (Biber et al. 2004; Cumming et al. 2000). TOEFL iBT is scored based on the candidates’ performance on two tasks in the writing section. Candidates should complete at least one of the writing tasks. Scoring can be done either by a human rater or automatically (by the e-rater). Using human judgment for assessing content and meaning along with automated scoring for evaluating linguistic features ensures the consistency and reliability of scores (Jamieson and Poonpon 2013; Kong et al. 2009; Weigle 2013).

The Graduate Record Examination (GRE) analytical writing section consists of two different essay tasks, an “issue task” and an “argument task”, the latter being the focus of the present study. Akin to TOEFL iBT, the GRE is also written on a computer employing very basic features of a word processing program. Each essay has an introduction including some contextual and background information about what is going to be analyzed, and a body in which complex ideas should be articulated clearly and effectively using enough examples and relevant reasons to support the thesis statement. Finally, the claims and opinions have to be summed up coherently in the concluding part (Broer et al. 2005). The GRE is scored twice on a holistic scale, and usually the average score is reported if the two scores are within one point; otherwise, a third reader steps in and examines the essay (Staff 2017; Zahler 2011).

IELTS essay writing (in both Academic and General Modules) involves developing a formal five-paragraph essay in 40 min. Similar to essays in other exams, it should include an introductory paragraph, two to three body paragraphs, and a concluding paragraph (Aish and Tomlinson 2012 ; Dixon 2015 ; Jakeman 2006 ; Loughead 2010 ; Stewart 2009 ). To score IELTS essay writing, the received scores for the (four) components of the rubric are averaged (Fleming et al. 2011 ).

The writing sections of the Cambridge Advanced Certificate in English (CAE) and the Cambridge English: Proficiency (CPE) exams have two parts. The first part is compulsory and candidates are asked to write in response to an input text including articles, leaflets, notices, and formal and/or informal letters. In the second part, the candidates must select one of the writing tasks that might be a letter, proposal, report, or a review (Brookhart and Haines 2009 ; Corry 1999 ; Duckworth et al. 2012 ; Evans 2005 ; Moore 2009 ). The essays should include an introduction, a body, and a conclusion (Spratt and Taylor 2000 ). Similar to IELTS essay writing, these exams are scored analytically. The scores are added up and then converted to a scale of 1 to 20 (Brookhart 1999 ; Harrison 2010 ).
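
As a simple illustration of the two scoring schemes described above, the following sketch averages four IELTS criterion scores and sums four Cambridge subscale marks. The individual marks are invented, and the official rounding and conversion tables are not reproduced here:

    # Illustrative only: the marks below are made up, and official rounding and
    # conversion rules are simplified away.

    # IELTS Task 2: the four criterion scores are averaged.
    ielts = {"Task achievement": 7, "Coherence and cohesion": 6,
             "Lexical resources": 7, "Grammatical range and accuracy": 6}
    ielts_band = sum(ielts.values()) / len(ielts)          # 6.5

    # CAE/CPE writing: the subscale marks (assumed 0-5 each) are added,
    # giving a result out of 20 (a simplification of the official conversion).
    cambridge = {"Content": 4, "Communicative achievement": 3,
                 "Organization": 4, "Language": 3}
    cambridge_total = sum(cambridge.values())              # 14 out of 20

    print(ielts_band, cambridge_total)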

Assessing L2 writing proficiency is a flourishing area, and the precise assessment of writing is a critical matter. Practically, learners are generally expected to produce a piece of text so that raters can evaluate the overall quality of their performance using a variety of different scoring systems including holistic and analytic scoring, which are the most common and acceptable ways of assessing essays (Anderson 2005 ; Brossell 1986 ; Brown and Abeywickrama 2010 ; Hamp-Lyons 1990 , 1991 ; Kroll 1990 ). Today, the significance of L2 writing assessment is on an increase not only in language-related fields of studies but also arguably in all disciplines, and it is a very pressing concern in various educational and also vocational settings.

L2 writing assessment is the focal point of an effective teaching process of this complicated skill (Jones 2001 ). A diligent assessment of writing completes the way it is taught (White 1985 ). The challenging and thorny natures of assessment and writing skills impede the reliable assessment of an essay (Muenz et al. 1999 ) such that, to date, a plethora of research studies have been conducted to discern the validity and reliability of writing assessment. Huot ( 1990 ) argues that writing assessment encounters difficulty because usually, there are more than two or three raters assessing essays, which may lead to uncertainty in writing assessment.

L2 writing assessment is generally prone to subjectivity and bias, and “the assessment of writing has always been threatened due to raters’ biasedness” (Fahim and Bijani 2011 , p. 1). Ample studies document that raters’ assessment and judgments are biased (Kondo-Brown 2002 ; Schaefer 2008 ). They also suggested that in order to reduce the bias and subjectivity in assessing L2 writing, standard and well-described rating scales, viz rubrics, should be determined (Brown and Jaquith 2007 ; Diederich et al. 1961 ; Hamp-Lyons 2007 ; Jonsson and Svingby 2007 ; Aryadoust and Riazi 2016 ). Furthermore, there are some studies suggesting the tendency of many raters toward subjectivity in writing assessment (Eckes 2005 ; Lumley 2005 ; O’Neil and Lunz 1996 ; Saeidi et al. 2013 ; Schaefer 2008 ). In light of these considerations, it becomes of prominence to improve consistency among raters’ evaluations of writing proficiency and to increase the reliability and validity of their judgments to avoid bias and subjectivity to produce a greater agreement between raters and ratings. The most notable move toward attaining this objective is using rubrics (Cumming 2001 ; Hamp-Lyons 1990 ; Hyland 2004 ; Raimes 1991 ; Weigle 2002 ). In layman’s terms, rubrics ensure that all the raters evaluate a writing task by the same standards (Biggs and Tang 2007 ; Dunsmuir and Clifford 2003 ; Spurr 2005 ). To curtail the probable subjectivity and personal bias in assessing one’s writing, there should be some determined and standard criteria for assessing different types of writing tasks (Condon 2013 ; Coombe et al. 2012 ; Shermis 2014 ; Weigle 2013 ).

Assessment rubrics (alternatively called instruments) should be reliable, valid, practical, fair, and constructive to learning and teaching (Anderson et al. 2011 ). Moskal and Leydens ( 2000 ) considered validity and reliability as the two significant factors when rubrics are used for assessing an individual’s work. Although researchers may define validity and reliability in various ways (for instance, Archibald 2001 ; Brookhart 1999 ; Bachman and Palmer 1996 ; Coombe et al. 2012 ; Cumming 2001 ; Messick 1994 ; Moskal and Leydens 2000 ; Moss 1994 ; Rezaei and Lovorn 2010 ; Weigle 2002 ; White 1994 ; Wiggan 1994 ), they generally agree that validity in this area of investigation is the degree to which the criteria support the interpretations of what is going to be measured. Reliability, they generally settle, is the consistency of assessment scores regardless of time and place. Rubrics and any rating scales should be so developed to corroborate these two important factors and equip raters and scorers with an authoritative tool to assess writing tasks fairly. Arguably, “the purpose of the essay task, whether for diagnosis, development, or promotion, is significant in deciding which scale is chosen” (Brossell 1986 , p. 2). As rubrics should be conceived and designed with the purpose of assessment of any given type of written task (Crusan 2015 ; Fulcher 2010 ; Knoch 2009 ; Malone and Montee 2014 ; Weigle 2002 ), the development and validation of rating scales are very challenging issues.

Writing rubrics can also help teachers gauge their own teaching (Coombe et al. 2012 ). Rubrics are generally perceived as very significant resources attainable for teachers enabling them to provide insightful feedback on L2 writing performance and assess learners’ writing ability (Brown and Abeywickrama 2010 ; Knoch 2011 ; Shaw and Weir 2007 ; Weigle 2002 ). Similarly, but from another perspective, rubrics help learners to follow a clear route of progress and contribute to their own learning (Brown and Abeywickrama 2010 ; Eckes 2012 ). Well-defined rubrics are constructive criteria, which help learners to understand what the desired performance is (Bachman and Palmer 1996 ; Fulcher and Davidson 2007 ; Weigle 2002 ). Employing rubrics in the realm of writing assessment helps learners understand raters’ and teachers’ expectations better, judge and revise their own work more successfully, promote self-assessment of their learning, and improve the quality of their writing task. Rubrics can be used as an effective tool enabling learners to focus on their efforts, produce works of higher quality, get better grades, find better jobs, and feel more concerned and confident about doing their assignment (Bachman and Palmer 2010 ; Cumming 2013 ; Kane 2006 ).

Rubrics are set to help scorers evaluate writers’ performances and provide them with very clear descriptions about organization and coherence, structure and vocabulary, fluent expressions, ideas and opinions, among other things. They are also practical for the purpose of describing one’s competence in logical sequencing of ideas in producing a paragraph, use of sufficient and proper grammar and vocabulary related to the topic (Kim 2011 ; Pollitt and Hutchinson 1987 ; Weigle 2002 ). Employing rubrics reduces the time required to assess a writing performance and, most importantly, well-defined rubrics clarify criteria in particular terms enabling scorers and raters to judge a work based on standard and unified yardsticks (Gustilo and Magno 2015 ; Kellogg et al. 2016 ; Klein and Boscolo 2016 ).

Selecting and designing an effective rating scale hinges upon the purpose of the test (Alderson et al. 1995 ; Attali et al. 2012 ; Becker 2011 ; East 2009 ). Although rubrics are crucial in essay evaluation, choosing the appropriate rating scale and forming criteria based on the purpose of assessment are as important (Bacha 2001 ; Coombe et al. 2012 ). It seems that a considerable part of scale developers prefers to adapt their scoring scales from a well-established existing one (Cumming 2001 ; Huot et al. 2009 ; Wiseman 2012 ). The relevant literature supports the idea of adapting rating scales used in large-scale tests for academic purposes (Bacha 2001 ; Leki et al. 2008 ). Yet, East ( 2009 ) warned about the adaptation of rating scales from similar tests, especially when they are to be used across languages.

Holistic and analytic scoring systems are now widely used to identify learners’ writing proficiency levels for different purposes (Brown and Abeywickrama 2010; Charney 1984; Cohen 1994; Coombe et al. 2012; Cumming 2001; Hamp-Lyons 1990; Reid 1993; Weir 1990). Unlike the analytic scoring system, the holistic one takes the whole written text into consideration. This scoring system generally emphasizes what is done well and what is deficient (Brown and Hudson 2002; White 1985). The analytic scoring system (multi-trait rubrics), however, includes discrete components (Bacha 2001; Becker 2011; Brown and Abeywickrama 2010; Coombe et al. 2012; Hamp-Lyons 2007; Knoch 2009; Kuo 2007; Shaw and Weir 2007). To Weigle (2002), accuracy, cohesion, content, organization, register, and appropriacy of language conventions are the key components or traits of an analytic scoring system. One of the early analytic scoring rubrics for writing was the one employed in the ESL Composition Profile by Jacobs et al. (1981), which included five components (namely, language development, organization, vocabulary, language use, and mechanics).

Each scoring system has its own merits and limitations. One of the advantages of analytic scoring is its distinctive reliability in scoring (Brown et al. 2004; Zhang et al. 2008). Some researchers (e.g., Johnson et al. 2000; McMillan 2001; Ward and McCotter 2004) contend that analytic scoring provides the maximum opportunity for reliability between raters and ratings since raters can use the same scoring criteria for different writing tasks at a time. Yet, Myford and Wolfe (2003) considered the halo effect one of the major disadvantages of analytic rubrics. The most commonly recognized merit of holistic scoring is its feasibility, as it requires less time. However, it does not encompass different criteria, affecting its validity in comparison to analytic scoring, as it entails the personal reflection of raters (Elder et al. 2007; Elder et al. 2005; Noonan and Sulsky 2001; Roch and O’Sullivan 2003). Cohen (1994) stated that the major demerit of the holistic scoring system is its relative weakness in providing enough diagnostic information about learners’ writing.

Many research studies have been conducted to examine the effect of analytic and holistic scoring systems on writing performance. For instance, more than half a century ago, Diederich et al. (1961) carried out a study on the holistic scoring system in a large-scale testing context. Three hundred essays were rated by 53 raters, and the results showed variation in ratings based on three criteria, namely ideas, organization, and language. Nearly two decades later, Borman (1979) conducted a similar study on 800 written tasks and found that the variations could be attributed to ideas, organization, and supporting details. Charney (1984) did a comparison study between analytic and holistic rubrics in assessing writing performance in terms of validity and found the holistic scoring system to be more valid. Bauer (1981) compared the cost-effectiveness of analytic and holistic rubrics in assessing essay tasks and found the time needed to train raters to be able to employ analytic rubrics was about two times more than the required time to train raters to use the holistic one. Moreover, the time needed to grade the essays using analytic rubrics was four times the time needed to grade essays using holistic rubrics. Some studies reported findings that corroborated that holistic scoring can be the preferred scoring system in a large-scale testing context (Bell et al. 2009). Chi (2001) compared analytic and holistic rubrics in terms of their appropriacy, the agreement of the learners’ scores, and the consistency of raters. The findings revealed that raters who used the holistic scoring system outperformed those employing analytic scoring in terms of inter-rater and intra-rater reliability. On the other hand, there is also research to suggest the superiority of analytic rubrics in assessing writing performance in terms of reliability and accuracy in scoring (Birky 2012; Brown and Hudson 2002; Diab and Balaa 2011; Kondo-Brown 2002). It is, generally speaking, difficult to decide which one is the best, and the research findings so far can best be described as inconclusive.

Rubrics of internationally recognized tests used in assessing essays have many similar components, including organization and coherence, task achievement, range of vocabulary used, grammatical accuracy, and types of errors. The wording used, however, is usually different in different rubrics, for instance, “task achievement” that is used in the IELTS rubrics is represented as the “realization of tasks” in CPE and CAE, “content coverage” in GRE, and “task accomplishment” in TOEFL iBT. Similarly, it can be argued that the point of focus of the rubrics for different tests may not be the same. Punctuation, spelling, and target readers’ satisfaction, for example, are explicitly emphasized in CAE and CPE while none of them are mentioned in GRE and TOEFL iBT. Instead, idiomaticity and exemplifications are listed in the TOEFL iBT rubrics, and using enough supporting ideas to address the topic and task is the focus of GRE rating scales (Brindley 1998 ; Hamp-Lyons and Kroll 1997 ; White 1984 ).

Broadly speaking, the rubrics employed in assessing L2 writing include the above-mentioned components but as mentioned previously, they are commonly expressed in different wordings. For example, the criteria used in IELTS Task 2 rating scale are task achievement, coherence and cohesion, lexical resources, and grammatical range and accuracy. These criteria are the ones based on which candidates’ work is assessed and scored. Each of these criteria has its own descriptors, which determine the performance expected to secure a certain score on that criterion. The summative outcome, along with the standards, determines if the candidate has attained the required qualification which is established based on the criteria. The summative outcome of IELTS Task 2 rating scale will be between 0 and 9. Similar components are used in other standard exams like CAE and CPE, their summative outcomes being determined from 1 to 5. Their criteria are used to assess content (relevance and completeness), language (vocabulary, grammar, punctuation, and spelling), organization (logic, coherence, variety of expressions and sentences, and proper use of linking words and phrases), and finally communicative achievement (register, tone, clarity, and interest). CAE and CPE have their particular descriptors which demonstrate the achievement of each learners’ standard for each criterion (Betsis et al. 2012 ; Capel and Sharp 2013 ; Dass 2014 ; Obee 2005 ). Similar to the other rubrics, the GRE scoring scale has the main components like the other essay writing scales but in different wordings. In the GRE, the standards and summative outcomes are reported from 0–6, denoting fundamentally deficient, seriously flawed, limited, adequate, strong, and outstanding, respectively. Like the GRE, the TOEFL iBT is scored from 0–5. Akin to the GRE, Independent Writing Rubrics for the TOEFL iBT delineates the descriptors clearly and precisely (Erdosy 2004 ; Gass et al. 2011 ).

Abundant research studies have been carried out to show that idea and content, organization, cohesion and coherence, vocabulary and grammar, and language and mechanics are the main components of essay rubrics (Jacobs et al. 1981 ; Schoonen 2005 ). What has been considered a missing element in the analytic rating scale is the raters’ knowledge of, and familiarity with, rubrics and their corresponding elements as one of the key yardsticks in measuring L2 writing ability (Arter et al. 1994 ; Sasaki and Hirose 1999 ; Weir 1990 ). Raters play a crucial role in assessing writing. There is research to allude to the impact of raters’ judgments on L2 writing assessment (Connor-Linton 1995 ; Sasaki 2000 ; Schoonen 2005 ; Shi 2001 ).

The past few decades have witnessed an increasing growth in research on different scoring systems and raters’ critical role in assessment. There are some recent studies discussing the importance of rubrics in L2 writing assessment (e.g. Deygers et al. 2018 ; Fleckenstein et al. 2018 ; Rupp et al. 2019 ; Trace et al. 2016 ; Wesolowski et al. 2017 ; Wind et al. 2018 ). They commonly consider rubrics as significant tools for measuring L2 learners’ performances and suggest that rubrics enhance the reliability and validity of writing assessment. More importantly, they argue that employing rubrics can increase the consistency among raters.

Shi ( 2001 ) made comparisons between native and non-native, as well as between experienced and novice raters, and found that raters have their own criteria to assess an essay, virtually regardless of whether they are native or non-native and experienced or novice. Lumley ( 2002 ) and Schoonen ( 2005 ) conducted comparison studies between two groups of raters, one group trained expert raters provided with no standard rubrics, the other group novice raters with no training who had standard rubrics. The trained raters with no rubrics outperformed the other group in terms of accuracy in assessing the essays, implying the importance of raters. Rezaei and Lovorn ( 2010 ) compared the use of rubrics between summative and formative assessment. They argued that using rubrics in summative assessment is predominant and that it overshadows the formative aspects of rubrics. Their results showed that rubrics can be more beneficial when used for formative assessment purposes.

Izadpanah et al. (2014) conducted a study drawing on Jacobs et al. (1981) to see if the rubrics of one exam can be the predictor of another one. Practically, they wanted to examine whether the same score would be obtained if a rubric for an IELTS exam was used for assessing CPE or any other standard test. Their findings revealed that the rubrics were comparable with each other in terms of the different components by which different standard essays are assessed. Bachman (2000) compared TOEFL PBT and CPE and found a very meaningful relationship between the scores gained from the essay writing tests. He also concluded that scoring CPE was usually more difficult than scoring PBT, and that under similar conditions, exams from UCLES/Cambridge Assessment (like CPE) received lower scores in comparison to the ones from ETS (like PBT). In Fleckenstein et al. (2019), experts from different countries linked upper secondary students’ writing profiles elicited in a constructed response test (integrated and independent essays from the TOEFL iBT) to CEFR levels. The Delphi technique was used to examine the intra- and inter-panelist consistency while scoring students’ writing profiles. The findings showed that panelists were able to provide ratings consistent with the empirical item difficulties, supporting the validity of the estimated cut scores.

Schoonen ( 2005 ) and Attali and Burstein ( 2005 ) compared the generalizability of writing scores to different essays using only one set of the rubric. They checked and analyzed three components of writing rubric, including content, language use, and organization and found that the obtained scores from different essays are similar. Wind ( 2020 ) conducted a study to illustrate and explore methods for evaluating the degree to which raters apply a common rating scale consistently in analytic writing assessments. The results indicated a lack of invariance in rating scale category functioning across domains for several raters. Becker ( 2011 ) also examined different rubrics used to measure writing performance. He investigated the three different types of rubrics, namely holistic, analytic, and primary-trait scoring systems, to find which one is more appropriate for assessing L2 writing. He studied the merits and demerits of the three rubrics and concluded that none of them had superiority over the others, making each legitimate for assessing a piece of writing depending on the purpose of writing, the time allocated for assessment, and the raters’ expertise.

In a recent study, Ghaffar et al. ( 2020 ) examined the impact of rubrics and co-constructed rubrics on middle school students’ writing skill performance. The findings of their study indicated that co-constructed rubrics as assessment tools help students to outperform in their writing due to their familiarity with these types of rubrics. In addition, there are researchers who are of the contention that the use of rubrics is inconclusive and can be controversial especially when they are just used for summative assessment purposes and that when rubrics are used for both summative and formative assessment, they are more advantageous (Andrade 2000 ; Broad 2003 ; Ene and Kosobucki 2016 ; Inoue 2004 ; Panadero and Jonsson 2013 ; Schirmer and Bailey 2000 ; Wilson 2006 , 2017 ).

What all of these studies indicate is that employing well-developed rubrics increases equality and fairness in writing assessment. It is also suggested that various factors could affect writing assessment, especially raters’ expertise and the time allocated to rating (Bacha 2001; Ghalib and Hattami 2015; Knoch 2009, 2011; Lu and Zhang 2013; Melendy 2008; Nunn 2000; Nunn and Adamson 2007). The purpose of the present study is twofold. First, it attempts to investigate the consistency among different standard rubrics in writing assessment. Second, it tries to examine whether any of these rubrics could be used as a predictor of others and if they all tap the same underlying construct.

To meet the objectives of the study, 200 samples of Academic IELTS Task 2 (i.e., essay writing) were used. The samples were randomly selected from more than 800 essays written as part of academic IELTS tests taken between 2015 and 2016 at an official IELTS test center, a representative of IDP Australia. The essays had been written in response to different prompts. The instructions for IELTS writing Task 2 require that test takers write at least 250 words, a condition that 21 samples did not meet. Test takers were 19 to 42 years of age; 120 of them were female and 80 were male.

One of the raters in this study was an (anonymous) official IELTS examiner who had scored the essays officially; the other raters were four experienced IELTS instructors from an English department of a nationally prominent language institute, three males and one female, between 26 and 39 years of age, with 5 to 12 years of English language teaching experience. These four raters were selected based on their qualifications, teaching credentials and certifications, and years of teaching experience, particularly in IELTS classes. All the four raters were M.A. holders in TEFL and had been teaching different writing courses at universities and language institutes and were familiar with different scoring systems and their relevant components. Each rater was invited to an individual briefing session with one of the researchers to ensure their familiarity with the rubrics of interest and discuss some practical considerations pertaining to this study. They were asked to read and score each essay four times, each time based on one of the four rubrics (TOELF iBT, GRE, CPE, and CAE). The raters completed the scorings in 12 weeks during which time they were instructed not to share ideas about the task (the costs of scorings were modestly met).

Instrumentation

Four sets of rubrics for different writing tests (i.e., Independent TOEFL iBT, GRE, CPE, & CAE) were taken from ETS and Cambridge English Language Assessment. The official IELTS scores of the 200 essays were collected from the IELTS center. The rubrics employed for assessing and evaluating the writing tasks of these five standard exams were analytic rubrics with different scales, namely a nine-point scale for assessing IELTS Task 2, five-point scales for GRE and TOEFL iBT, and six-point scales for CAE and CPE writing tasks. They assess the main components of essay writing construct, including the range of vocabulary and grammar used in addressing the task, cohesion and organization, and range of using cohesive devices, which were presented in different wordings in these rubrics.

Another instrument was a questionnaire designed by the researchers, which included both open-ended and closed-ended questions (see Appendix). The aim was to determine the raters’ attitudes toward their rating experience and their familiarity with each exam and its corresponding rubrics. The themes of questionnaire items were determined based on a review of the literature on the important issues and factors affecting raters’ performances and attitudes (Brown and Abeywickrama 2010 ; Coombe et al. 2012 ; Fulcher and Davidson 2007 ; Weigle 2002 ). In addition, an interview was carried out with the four raters to find out about their interest in rating and also to investigate their familiarity of the exams and their conforming rating scales.

To carry out the study, 200 essay samples were scored once by a certified IELTS examiner. The assigned scores together with the IELTS examiner’s relevant comments were written next to each essay sample. Afterward, all essays were rated by the four other raters, who were kept uninformed of the official IELTS scores. They were provided with the rubrics of the four essay writing tests and were instructed to assess each essay with the four given rubrics. By so doing, in addition to the official IELTS scores, four other scores were given to each essay from each rater; that is to say, each essay received 16 scores plus the official IELTS score. Therefore, all in all, the researchers collected 17 scores for each essay. The researcher-made questionnaire was carried out, and then an interview was conducted whereby the 4 raters were asked about their interest in rating and also their awareness and concerns about each exam and their relevant rubrics.

To analyze the data, SPSS version 22 was employed. Initially, the descriptive statistics of the data were computed, and intercorrelations among the 17 scores were calculated to see if any statistically significant association could be found among the rubrics. To get a better picture of the existing association among the scoring rubrics of the different exams, PCA, as a variant of factor analysis, was run to examine the extent to which the rubrics tap the same underlying construct.
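
Although the analyses in this study were run in SPSS, the same steps can be sketched in Python for illustration. The snippet below assembles a table of 17 ratings per essay (the official IELTS score plus 4 raters × 4 rubrics), then computes descriptive statistics, the intercorrelation matrix, and a PCA; the scores themselves are randomly generated, not the study’s data:

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)

    # 17 ratings per essay: the official IELTS score plus 4 raters x 4 rubrics.
    columns = ["IELTS"] + [f"{test}{rater}" for rater in range(1, 5)
                           for test in ("CAE", "CPE", "iBT", "GRE")]
    scores = pd.DataFrame(rng.normal(6, 1, size=(200, len(columns))), columns=columns)

    print(scores.describe())     # descriptive statistics
    print(scores.corr())         # intercorrelations among the 17 ratings

    # PCA on standardized ratings; the eigenvalues feed a scree plot, and the
    # cumulative ratio shows how much variance the first components explain.
    standardized = (scores - scores.mean()) / scores.std()
    pca = PCA().fit(standardized)
    print(pca.explained_variance_)
    print(pca.explained_variance_ratio_.cumsum())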

To address the first research question, intercorrelations were computed among the IELTS, CAE, CPE, TOEFL iBT, and GRE scores. To answer the second research question, factor analysis was run to examine the extent the standard essay writings in these five tests of English language proficiency tap the same underlying construct. In this section, the results of the intercorrelations and factor analyses computations are reported in detail.

Intercorrelations among ratings

To estimate the intercorrelations among test ratings and raters, Cronbach’s alpha was calculated for the five sets of scores (i.e., IELTS, CAE, CPE, TOEFL iBT, and GRE). First, alpha was calculated for each rater separately to check the internal consistency of that rater’s scores. Then, alpha was computed across all the raters together to estimate inter-rater reliability. The intercorrelations were afterward computed between each exam score and the IELTS scores to see which scores were most strongly correlated with the IELTS.

Table 1 presents the alphas as the average of intercorrelations among the five sets of scores, including the IELTS scores and the four scores given by each rater. Evidently, Rater 1 has an alpha of about .67, which is lower than the other alphas. However, because there were only five sets of scores correlated in each alpha, this low value of alpha could still be considered acceptable. Nevertheless, this lower value of alpha in comparison to the other alphas could be meaningful since, after all, this rater showed less internal consistency among his ratings.
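
For readers who want the computation spelled out, Cronbach’s alpha for a set of k score columns can be obtained directly from the item variances and the variance of their sum. The sketch below mirrors the per-rater calculation described above, using made-up scores rather than the study’s data:

    import numpy as np

    def cronbach_alpha(ratings):
        """ratings: 2-D array, rows = essays, columns = the k score sets."""
        ratings = np.asarray(ratings, dtype=float)
        k = ratings.shape[1]
        item_variances = ratings.var(axis=0, ddof=1)
        total_variance = ratings.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Made-up example: one rater's five score sets (IELTS, CAE, CPE, iBT, GRE)
    # for 200 essays, built as noisy copies of a shared "true" quality score.
    rng = np.random.default_rng(1)
    true_quality = rng.normal(6, 1, size=(200, 1))
    one_rater = true_quality + rng.normal(0, 0.7, size=(200, 5))
    print(round(cronbach_alpha(one_rater), 2))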

To see which test rating given by the four raters agreed the least with the IELTS scores, intercorrelations of each test rating with the IELTS scores were computed, as shown in Table 2. As these intercorrelations demonstrate, Rater 1’s CPE rating and Rater 4’s TOEFL iBT rating show the lowest correlations with the IELTS ratings. Afterward, an alpha was computed for an aggregate of the ratings of all the raters, including the IELTS scores.

Table 3 shows an alpha of around .86, which could be considered acceptable with regard to the small number of ratings.

To see which rating had a negative effect on the total alpha, item-total correlation for each test rating was computed. Item-total correlation showed the extent to which each test rating agrees with the total of the other test ratings including the IELTS scores. As it is shown in Table 4 , CPE1 and iBT4 had the lowest correlations with the total ratings. This table also indicates that the removal of these scores would have increased the total alpha considerably.

These results, as expected, confirmed the results found in each rater’s alpha and inter-test correlations computed in the previous section.
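
The corrected item-total correlation reported in Table 4 can likewise be sketched as the correlation of each rating with the sum of the remaining ratings. Again, the data below are invented and the output is not the SPSS result:

    import numpy as np
    import pandas as pd

    def corrected_item_total(df: pd.DataFrame) -> pd.Series:
        """Correlation of each rating with the sum of all the other ratings."""
        return pd.Series({col: df[col].corr(df.drop(columns=col).sum(axis=1))
                          for col in df.columns})

    # Made-up ratings table (rows = essays, columns = score sets).
    rng = np.random.default_rng(2)
    quality = rng.normal(6, 1, size=(200, 1))
    demo = pd.DataFrame(quality + rng.normal(0, 0.7, size=(200, 5)),
                        columns=["IELTS", "CAE1", "CPE1", "iBT1", "GRE1"])
    print(corrected_item_total(demo).round(2))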

Factor analysis

This study was carried out having hypothesized that the construct of essay writing is similar across different standardized tests (i.e., IELTS, CAE, CPE, TOEFL iBT, and GRE), and that a given essay is expected to be scored similarly by the rubrics and scales of these different exams. To see whether this was the case, the ratings of these exams were examined. The correlation analyses reported above showed that there is an acceptable agreement among all test ratings except two of them, CPE1 and iBT4. That is, Rater 1’s CPE rating and Rater 4’s TOEFL iBT rating showed the lowest correlations with the other test ratings (.15 and .13, respectively). To get a better picture of this issue, it was decided to run a PCA to examine the extent to which these exams tap the same underlying construct. Factor analysis provides factor loadings for each test item (i.e., test rating); if two or more items load on the same factor, this shows that these items (i.e., test ratings) tap the same construct (i.e., the essay writing construct).

Table 5 presents the results of the Kaiser-Meyer-Olkin measure (KMO) and Bartlett’s test of sphericity on the sampling adequacy for the analysis. The reported KMO is .83, which is larger than the acceptable value (KMO > .5) according to Field (2009). Bartlett’s test of sphericity [χ²(136) = 1377.12, p < .001] was also significant, indicating large enough correlations among the items for PCA; therefore, this sample could be considered adequate for running the PCA.

The next step was to investigate the number of factors required to be retained in the PCA. To do so, the scree plot was checked (Fig. 1). The first point that should be identified in the scree plot is the point of inflexion, that is, where the slope of the line in the scree plot changes dramatically. Only those factors which fall to the left of the point of inflexion should be retained. Based on Fig. 1, it seems that the point of inflexion is at the fourth factor; therefore, four factors were retained.

Figure 1. Scree plot

According to Table 6 , the first four retained factors explain around 60 percent of the whole variance, which is quite considerable.

Table 7 presents the four factor loadings after varimax rotation. Obviously, the different test ratings were loaded on 4 factors. In other words, those test ratings that clustered around the same factor seemed to be loading on the same underlying factor or latent variable.

Following the above analysis, it was decided to further examine the factor loadings as follows: It should be noted that the above factor structure was achieved by considering only those loadings above .4 as suggested by Stevens ( 2002 ), which explained around 16 percent of the variance in the variable. This value was strict, though, resulting in the emergence of limited factors. Therefore, employing Kaiser’s criterion, a second factor analysis was run with a more lenient absolute value for each factor, which was .3 as suggested by Field ( 2009 ). By so doing, more factor loadings emerged and more information was achieved. The factor loadings above .3 are presented in Table 8 , which almost revealed the same factor structure as found in the previous factor analysis with absolute values greater than .4; however, one important finding was that the IELTS ratings this time showed loadings on all the factors on which other tests also loaded. It can be construed, therefore, that the other tests had significant potential to tap the same construct.
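
As a small illustration of how such loading cutoffs work in practice, the sketch below blanks out loadings whose absolute value falls below the chosen threshold; the loading matrix is invented and is not the study’s Table 7 or Table 8:

    import pandas as pd

    # A made-up loading matrix (rows = test ratings, columns = rotated factors);
    # it is not taken from the study's tables.
    loadings = pd.DataFrame(
        [[0.72, 0.10], [0.65, 0.28], [0.35, 0.31],
         [0.18, 0.81], [0.05, 0.44], [0.42, 0.39]],
        index=["IELTS", "CAE1", "CPE1", "iBT1", "GRE1", "CAE2"],
        columns=["Factor 1", "Factor 2"],
    )

    def suppress(loadings: pd.DataFrame, cutoff: float) -> pd.DataFrame:
        """Blank out loadings whose absolute value falls below the cutoff."""
        return loadings.where(loadings.abs() >= cutoff)

    print(suppress(loadings, 0.4))   # stricter display: fewer visible loadings
    print(suppress(loadings, 0.3))   # more lenient display: extra loadings appear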

After estimating reliability using Cronbach’s alpha and then running a confirmatory factor analysis, it was decided to omit Rater 1 due to the unfamiliarity with the exam and its corresponding rubrics that he reported in the questionnaire.

Table 9 and Fig. 2 (scree plot) demonstrate the factor structure after removing Rater 1. The scree plot shows that 4 factors should be retained in the analysis, and Table 9 indicates that the first four retained factors explain about 70 percent of the whole variance, which was quite satisfactory.

Figure 2. Scree plot (Rater 1 removed)

Finally, Table 10 shows that after removing Rater 1’s data, all the ratings of Raters 3 and 4 have loaded on the same factors with the IELTS. Of course, like the previous factor analysis, the IELTS ratings again showed loadings on all the factors on which other tests loaded except iBT4. All in all, it could be concluded that the results from the factor analysis confirm the previous findings from alpha computations showing iBT4 ratings had the lowest correlations with the total ratings.

Discussion and conclusions

The purpose of the present study is to examine the consistency of the rubrics endorsed for assessing the writing tasks of internationally recognized tests of English language proficiency. Standard rubrics can be considered a constructive tool helping raters to assess different types of essays (Busching 1998). Using rubrics enhances the reliability of the assessment of essays provided that these rubrics are well described and that they tap the same construct (Jonsson and Svingby 2007). The current study is an attempt to examine the reliability among different rubrics of essay writing with regard to their major components, namely organization, coherence and cohesion, range of lexical and grammatical complexity, and accuracy.

The results of this study show that all in all, there is a high correlation among raters (i.e., the IELTS examiner and the four other raters) and rating scores (i.e., the official IELTS scores and the other 16 test ratings received from the four raters). The intercorrelations among test ratings and the raters as well as the computation of inter-item correlations between each test rating and the IELTS scores revealed that CPE1 and iBT4 had the least agreement with the official IELTS ratings. Therefore, these low correlations were investigated in a follow-up study by giving the four raters a questionnaire including both open-ended and closed-ended questions. The raters’ responses to the questionnaire denoted the extent to which they were familiar with each exam and their corresponding rubrics.

The responses of two of the raters, that is, Rater 1 in CPE and Rater 4 in TOEFL iBT, proved to be illuminating in explaining their performance. Rater 1’s responses to the questionnaire showed that he had no teaching experience with CPE classes. However, his responses to other questions of the questionnaire indicated his familiarity with this exam and its essay scoring rubrics. The responses of Rater 4 revealed that she had no teaching experience for TOEFL iBT and no familiarity with the exam and its corresponding rating scales. The outcome from the interview with Rater 4 suggests that using well-trained raters leads to fewer problems in rating. What Rater 4 stated in her responses to the questionnaire and interview was in line with the findings of Sasaki and Hirose (1999), who concluded that familiarity with different tests and their relevant rubrics leads to better scoring. Additionally, the results of the present study are consistent with what Schoonen (2005), Attali and Burstein (2005), Wind et al. (2018), Deygers et al. (2018), Wesolowski et al. (2017), Trace et al. (2016), Fleckenstein et al. (2018), and Rupp et al. (2019) found in their studies, that is, employing rubrics enhances the reliability of writing assessment as well as the agreement among raters.

To this point, the obtained results from this study provide an affirmative answer to the first question of the study, indicating a very high agreement among test ratings and the raters. Also, in order to ensure that the construct of essay writing is similar across different standardized tests and identical essays are scored similarly by the internationally recognized rubrics of these different exams, inter-item correlation analysis was computed which indicated that CPE1 and iBT4 had the lowest correlations with the total ratings. This could be due to either the raters’ inconsistencies or the hypothesis that essay writing is conceptualized differently based on the scoring rubrics of these exams. The follow-up survey also corroborated that the disagreement among Raters 1 and 4 and the other raters was due to either the rater’s discrepancies or the way every writing task was hypothesized differently according to the rubrics of each exam. It can be supported by Weigle ( 2002 ) who concluded that raters should have a good grasp of scoring and its essential details. She also discussed that raters should have a sharp conceptualization of the construct of essay writing.

The results from the rotated component matrix revealed that all the ratings of Raters 3 and 4 loaded on the same factor, meaning that they tap the same construct. Examining the other factor loadings revealed that CAE1, iBT1, CAE2, and iBT2 also loaded on the same factor as the IELTS scores, suggesting that these raters' conceptualizations of the construct of essay writing in CAE and TOEFL iBT were closer to that of the IELTS raters than to those of the CPE and GRE scorers. However, what remained questionable was why CPE1 and GRE1 did not load on the same factor as CAE1 and iBT1, and why CPE1 and GRE1 instead loaded on the same factor as CPE2 and GRE2. Why CPE1 also loaded with GRE2 and CPE2 on the same factor likewise remained open to discussion.

The results above come from the PCA retaining only factor loadings above .4, following Stevens (2002). As this criterion was strict and the number of factors obtained was limited, it was decided to apply Kaiser's criterion together with the less rigorous loading cut-off of .3 suggested by Field (2009). The findings showed almost the same factor loadings as the previous factor analysis. Again, Raters 3 and 4 loaded on the same factor, but this time the IELTS scores loaded on the same factor as CAE2 and iBT2. CAE1, GRE1, and iBT1 loaded on the same factor, and what was still debatable was why CPE1 loaded with GRE2 and CPE2.
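
For readers who want to reproduce this kind of analysis, the sketch below shows one way to obtain varimax-rotated PCA loadings and apply the two cut-offs mentioned (.4 after Stevens, 2002, and .3 after Field, 2009). It is not the authors' code: the `ratings` DataFrame, the `rotated_loadings` helper, and the number of components are assumptions made for illustration, with the full matrix understood to contain the official IELTS scores plus the 16 rater-by-test scores.

```python
# Minimal sketch (not the authors' code): correlation-based PCA of the rating
# matrix, varimax rotation of the loadings, and the two loading cut-offs
# discussed above (.4 after Stevens, 2002; .3 after Field, 2009).
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler


def varimax(loadings: np.ndarray, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Rotate an unrotated loading matrix with the standard varimax criterion."""
    p, k = loadings.shape
    rotation = np.eye(k)
    objective = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        new_objective = s.sum()
        if new_objective < objective * (1 + tol):
            break
        objective = new_objective
    return loadings @ rotation


def rotated_loadings(ratings: pd.DataFrame, n_components: int) -> pd.DataFrame:
    """Return varimax-rotated loadings (rows = test ratings, columns = components)."""
    z = StandardScaler().fit_transform(ratings)        # standardize -> correlation-based PCA
    pca = PCA(n_components=n_components).fit(z)
    unrotated = pca.components_.T * np.sqrt(pca.explained_variance_)  # unrotated loadings
    return pd.DataFrame(varimax(unrotated), index=ratings.columns)


# Illustrative usage on the full 17-column matrix (the number of components is an assumption):
# loadings = rotated_loadings(ratings, n_components=4)
# print(loadings.where(loadings.abs() >= 0.4))   # stricter criterion (Stevens, 2002)
# print(loadings.where(loadings.abs() >= 0.3))   # more lenient criterion (Field, 2009)
```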

All the results obtained so far from the alpha computation and the factor analysis pointed to something different about Rater 1, on the basis of which it was decided to omit Rater 1 from the PCA. It is interesting to note that, after interviewing all four raters and scrutinizing the questionnaire responses, it was found that Rater 1 had indicated that he had no experience teaching CPE classes, yet he claimed to be familiar with the exam and its rating scales, contrary to the other raters' responses.

After omitting Rater 1 from the PCA, the findings showed that Rater 3's and Rater 4's test ratings loaded on the same factor, and this time the IELTS scores loaded on the factor on which all the other test ratings except iBT4 had loaded, meaning that Rater 4 had no agreement with the IELTS raters in rating the essays. The questionnaire survey indicated that Rater 4 had no experience teaching for this particular exam and no familiarity with the exam and its corresponding rubrics. This rater also believed that scoring exams like TOEFL iBT and the other exams developed by ETS was more difficult, and that such exams generally received lower scores than the Cambridge English Language Assessment exams. What Rater 4 stated was not in line with the findings of Bachman (2000), who compared the TOEFL PBT and CPE essay tasks and concluded that CPE scoring is more difficult than TOEFL PBT scoring. Contrary to the findings of the present study, he also concluded that exams like CPE received lower scores.
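
Computationally, omitting Rater 1 amounts to dropping that rater's four columns from the rating matrix and re-running the same analysis. A hypothetical continuation of the earlier sketch is shown below; the column names are assumed labels for Rater 1's scores, not identifiers from the original study.

```python
# Hypothetical continuation: omit Rater 1 by dropping the columns assumed to
# hold that rater's scores, then recompute the rotated loadings as above.
rater1_columns = ["CPE1", "CAE1", "iBT1", "GRE1"]   # assumed labels for Rater 1's ratings
ratings_without_r1 = ratings.drop(columns=rater1_columns, errors="ignore")
# loadings_without_r1 = rotated_loadings(ratings_without_r1, n_components=4)
```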

The results from the alpha computation and the factor analysis showed the noticeable role of raters in assessing writing. These results are in line with the findings of Lumley (2002) and Schoonen (2005), who argue that raters need to be considered one of the most important concerns in the assessment process. Shi (2001) likewise argued for the significant role of raters, who assess essays using their own criteria in addition to the standard, predetermined rating scales. Similarly, the factor analysis in this study revealed the remarkable role raters play in assessing essays, showing that the items (i.e., test ratings) tended to load on the same factor especially when all the essays were rated by the same rater.

This study aimed to examine the consistency and reliability among the different standard rubrics and rating scales used for assessing writing in internationally recognized tests of English language proficiency. The results from the alpha estimation provide evidence for a strong association among the raters and test ratings, and the PCA indicates that these test ratings tap the same underlying construct. The study encourages employing practical rater trainers and rater training courses that provide raters with authentic opportunities to become familiar with different rubrics. The area requires more investigation into how raters themselves might affect the rating and how employing trained and certified raters can affect the rating process. Test administrators and developers are another group who can benefit from the findings of this study: if all the test ratings tap the same underlying construct and the different essay writing rating scales can serve as predictors of one another, it would be practical for them to set standard essay writing rubrics that can be used for rating and assessing writing. Also, as the findings of the present study suggest, the developers of the writing rubrics for these tests may take stock of the implication that there are critical constructs within writing that weigh more heavily when writing is assessed across standardized measures. Teachers and learners are other groups who can benefit from the results of this study: they might devote less time to working through rubrics whose descriptors state the same criteria in different words, and more time to practicing writing and essay writing tasks.

The study examined the reliability of the analytic rubrics used in assessing the essay component of the following standardized examinations: IELTS, TOEFL iBT, CAE, CPE, and GRE. While the first four are English language proficiency examinations designed to assess the language skills of English as a Second Language (ESL) learners, the last one (i.e., the GRE) is intended for those seeking admission to graduate programs in the U.S., regardless of first language background. GRE candidates are, at a minimum, bachelor's degree holders, most of whom are native speakers of English educated in English, while the minority are international applicants to U.S. master's and Ph.D. programs from various language backgrounds. The GRE writing task, in other words, is not intended for L2 English learners. Therefore, juxtaposing the GRE requirements for the writing task, which center on argumentation and critical thinking, with the English language proficiency standards measured by the other four tests may dilute the generalizability of the results with reference to this particular exam, owing to its divergent assessment purpose and intended candidate profile. Future researchers are encouraged to take heed of this limitation of the present study.

Availability of data and materials

The authors were provided with the data for research purposes. Sharing the data with a third party requires obtaining consent from the organization that provided the data. The materials are available in the article.

References

Aish, F., & Tomlinson, J. (2012). Get ready for IELTS writing . London: HarperCollins.

Alderson, J. C., Clapham, C., & Wall, D. (1995). Language test construction and evaluation . Cambridge: Cambridge University Press.

Anderson, B., Bollela, V., Burch, V., Costa, M. J., Duvivier, R., Galbraith, R., & Roberts, T. (2011). Criteria for assessment: consensus statement and recommendations from the Ottawa 2010 conference. Medical Teacher , 33 (3), 206–214.

Anderson, C. (2005). Assessing writers . Portsmouth: Heinemann.

Andrade, H. G. (2000). Using rubrics to promote thinking and learning. Educational Leadership , 57 (5), 13–18.

Archibald, A. (2001). Targeting L2 writing proficiencies: Instruction and areas of change in students’ writing over time. International Journal of English Studies , 1 (2), 153–174.

Archibald, A. (2004). Writing in a second language. In The higher education academy subject centre for languages, linguistics and area studies Retrieved from http://www.llas.ac.uk/resources/gpg/2175 .

Arter, J. A., Spandel, V., Culham, R., & Pollard, J. (1994). The impact of training students to be self-assessors of writing . New Orleans: Paper presented at the Annual Meeting of the American Educational Research Association.

Aryadoust, V., & Riazi, A. M. (2016). Role of assessment in second language writing research and pedagogy. Educational Psychology , 37 (1), 1–7.

Attali, Y., & Burstein, J. (2005). Automated essay scoring with e-rater.V.2.0. (RR- 04-45) . Princeton: ETS.

Attali, Y., Lewis, W., & Steier, M. (2012). Scoring with the computer: alternative procedures for improving the reliability of holistic essay scoring. Language Testing , 30 (1), 125–141.

Bacha, N. (2001). Writing evaluation: what can analytic versus holistic essay scoring tell? System , 29 (3), 371–383.

Bachman, L., & Palmer, A. S. (2010). Language assessment in practice: developing language assessments and justifying their use in the real world . Oxford: Oxford University Press.

Bachman, L. F. (2000). Modern language testing at turn of the century: assuring that what we count counts. Language Testing , 17 (1), 1–42.

Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: designing and developing useful language tests . Oxford: Oxford University Press.

Bauer, B. A. (1981). A study of the reliabilities and the cost-efficiencies of three methods of assessment for writing ability . Champaign: University of Illinois.

Becker, A. (2011). Examining rubrics used to measure writing performance in U.S. intensive English programs. The CATESOL Journal , 22 (1), 113–117.

Bell, R. M., Comfort, K., Klein, S. P., McCarffey, D., Ormseth, T., Othman, A. R., & Stecher, B. M. (2009). Analytic versus holistic scoring of science performance tasks. Applied Measurement in Education , 11 (2), 121–137.

Betsis, A., Haughton, L., & Mamas, L. (2012). Succeed in the new Cambridge proficiency (CPE)- student’s book with 8 practice tests . Brighton: GlobalELT.

Biber, D., Byrd, M., Clark, V., Conrad, S. M., Cortes, E., Helt, V., & Urzua, A. (2004). Representing language use in the university: analysis of the TOEFL 2000 spoken and written academic language corpus. In ETS research report series (RM-04-3, TOEFL Report MS-25) . Princeton: ETS.

Biggs, J., & Tang, C. (2007). Teaching for quality learning at university . Maidenhead: McGraw Hill.

Birky, B. (2012). A good solution for assessment strategies. A Journal for Physical and Sport Educators , 25 (7), 19–21.

Borman, W. C. (1979). Format and training effects on rating accuracy and rater errors. Journal of Applied Psychology , 64 (4), 410–421.

Bridgeman, B., & Carlson, S. (1983). Survey of academic writing tasks required of graduate and undergraduate foreign students. In ETS Research Report Series (RR- 83-18, TOELF- RR-15) . Princeton: ETS.

Brindley, G. (1998). Describing language development? Rating scales and SLA. In L. F. Bachman, & A. D. Cohen (Eds.), Interfaces between second language acquisition and language testing research , (pp. 112–140). Cambridge: Cambridge University Press.

Broad, B. (2003). What we really value: beyond rubrics in teaching and assessing writing . Logan: Utah State UP.

Broer, M., Lee, Y. W., Powers, D. E., & Rizavi, S. (2005). Ensuring the fairness of GRE writing prompts: Assessing differential difficulty. In ETS research report series (GREB Report No. 02-07R, RR-05-11) .

Brookhart, G., & Haines, S. (2009). Complete CAE student’s book with answers . Cambridge: Cambridge University Press.

Brookhart, S. M. (1999). The art and science of classroom assessment: the missing part of pedagogy. ASHE-ERIC Higher Education Report , 27 (1), 1–128.

Brossell, G. (1986). Current research and unanswered questions in writing assessment. In K. Greenberg, H. Wiener, & R. Donovan (Eds.), Writing assessment: issues and strategies , (pp. 168–182). New York: Longman.

Brown, A., & Jaquith, P. (2007). Online rater training: perceptions and performance . Dubai: Paper presented at Current Trends in English Language Testing Conference (CTELT).

Brown, G. T. L., Glasswell, K., & Harland, D. (2004). Accuracy in the scoring of writing: studies of reliability and validity using a New Zealand writing assessment system. Assessing Writing , 9 (2), 105–121.

Brown, H. D., & Abeywickrama, P. (2010). Language assessment: Principles and classroom practice . Lewiston: Pearson Longman.

Brown, J. (2002). Training needs assessment: a must for developing an effective training program. Sage Journal , 31 (4), 569–578 https://doi.org/10.1177/009102600203100412 .

Brown, J. D., & Hudson, T. (2002). Criterion-referenced language testing. Cambridge applied linguistics series . Cambridge: Cambridge University Press.

Busching, B. (1998). Grading inquiry projects. New Directions for Teaching and Learning , ( 74 ), 89–96.

Canseco, G., & Byrd, P. (1989). Writing required in graduate courses in business administration. TESOL Quarterly , 23 (2), 305–316.

Capel, A., & Sharp, W. (2013). Cambridge english objective proficiency , (2nd ed., ). Cambridge: Cambridge University Press.

Charney, D. (1984). The validity of using holistic scoring to evaluate writing. Research in the Teaching of English , 18 (1), 65–81.

Chenoweth, N. A., & Hayes, J. R. (2001). Fluency in writing: Generating text in L1 and L2. Written Communication , 18 (1), 80–98 https://doi.org/10.1177/0741088301018001004 .

Chi, E. (2001). Comparing holistic and analytic scoring for performance assessment with many facet models. Journal of Applied Measurement , 2 (4), 379–388.

Cohen, A. D. (1994). Assessing language ability in the classroom . Boston: Heinle & Heinle.

Condon, W. (2013). Large-scale assessment, locally-developed measures, and automated scoring of essays: Fishing for red herrings? Assessing Writing , 18 , 100–108.

Connor-Linton, J. (1995). Crosscultural comparison of writing standards: American ESL and Japanese EFL. World English , 14 (1), 99–115.

Coombe, C., Davidson, P., O’Sullivan, B., & Stoynoff, S. (2012). The Cambridge guide to second language assessment . New York: Cambridge University Press.

Corry, H. (1999). Advanced writing with English in use: CAE . Oxford: Oxford University Press.

Crusan, D. (2015). And then a miracle occurs: the use of computers to assess student writing. International Journal of TESOL and Learning , 4 (1), 20–33.

Cumming, A. (2001). Learning to write in a second language: two decades of research. International Journal of English Studies , 1 (2), 1–23.

Cumming, A. (2013). Assessing integrated writing tasks for academic purposes: promises and perils. Language Assessment Quarterly , 10 (1), 1–8.

Cumming, A. H., Kantor, R., Powers, D., Santos, T., & Taylor, C. (2000). TOEFL 2000 writing framework: A working paper , ETS Research Report Series (RM-00-5; TOEFL-MS-18) . Princeton: ETS.

Dass, B. (2014). Adult & continuing professional education practices: CPE among professional providers . Singapore: Partridge Singapore.

Deygers, B., Zeidler, B., Vilcu, D., & Carlsen, C. H. (2018). One framework to unite them all? Use of CEFR in European university entrance policies. Language Assessment Quarterly , 15 (1), 3–15 https://doi.org/10.1080/15434303.2016.1261350 .

Diab, R., & Balaa, L. (2011). Developing detailed rubrics for assessing critique writing: impact on EFL university students’ performance and attitudes. TESOL Journal , 2 (1), 52–72.

Diederich, P. B., French, J. W., & Carlton, S. T. (1961). Factors in judgments of writing ability (Research Bulletin No. RB-61-15) . Princeton: Educational Testing Service https://doi.org/10.1002/j.2333-8504.1961.tb00286.x .

Dixon, N. (2015). Band 9-IELTS writing task 2-real tests . Oxford: Oxford University Press.

Duckworth, M., Gude, K., & Rogers, L. (2012). Cambridge english: proficiency (CPE) masterclass: student’s book . Oxford: Oxford University Press.

Dunsmuir, S., & Clifford, V. (2003). Children’s writing and the use of ICT. Educational Psychology in Practice , 19 (3), 171–187.

East, M. (2009). Evaluating the reliability of a detailed analytic scoring rubric for foreign language writing. Assessing Writing , 14 (2), 88–115.

Eckes, T. (2005). Examining rater effects in TestDaF writing and speaking performance assessments: a many-facet Rasch analysis. Language Assessment Quarterly , 2 (3), 197–221.

Eckes, T. (2012). Operational rater types in writing assessment: linking rater cognition to rater behavior. Language Assessment Quarterly , 9 ( 3 ), 270–292.

Elder, C., Barkhuizen, G., Knoch, U., & von Randow, J. (2007). Evaluating rater responses to an online training program for L2 writing assessment. Language Testing , 24 (1), 37–64.

Elder, C., Knoch, U., Barkhuizen, G., & von Randow, J. (2005). Individual feedback to enhance rater training: does it work? Language Assessment Quarterly , 2 (3), 175–196.

Ene, E., & Kosobucki, V. (2016). Rubrics and corrective feedback in ESL writing: a longitudinal case study of an L2 writer. Assessing Writing , 30 , 3–20 https://doi.org/10.1016/j.asw.2016.06.003 .

Erdosy, M. U. (2004). Exploring variability in judging writing ability in a second language: a study of four experienced raters of ESL composition. In ETS research report series (RR-03-17) . Ontario: ETS.

Evans, V. (2005). Entry tests CPE 2 for the revised Cambridge proficiency examination: Student’s book . New York City: Pearson Education.

Fahim, M., & Bijani, H. (2011). The effect of rater training on raters’ severity and bias in second language writing assessment. Iranian Journal of Language Testing , 1 (1), 1–16.

Faigley, L., Daly, J. A., & Witte, S. P. (1981). The role of writing apprehension in writing performance and competence. Journal of Educational Research , 75 (1), 16–21.

Field, A. P. (2009). Discovering statistics using SPSS (and sex and drugs and rock 'n' roll) , (3rd ed.). London: Sage Publications.

Fleckenstein, J., Keller, S., Kruger, M., Tannenbaum, R. J., & Köller, O. (2019). Linking TOEFL iBT writing rubrics to CEFR levels: Cut scores and validity evidence from a standard setting study. Assessing Writing , 43 https://doi.org/10.1016/j.asw.2019.100420 .

Fleckenstein, J., Leucht, M., & Köller, O. (2018). Teachers’ judgement accuracy concerning CEFR levels of prospective university students. Language Assessment Quarterly , 15 (1), 90–101 https://doi.org/10.1080/15434303.2017.1421956 .

Fleming, S., Golder, K., & Reeder, K. (2011). Determination of appropriate IELTS writing and speaking band scores for admission into two programs at a Canadian post-secondary polytechnic institution. The Canadian Journal of Applied Linguistics , 14 (1), 222 – 250 .

Fulcher, G. (2010). Practical language testing . London: Hodder Education.

Fulcher, G., & Davidson, F. (2007). Language testing and assessment: an advanced resource book . New York: Routledge.

Gass, S., Myford, C., & Winke, P. (2011). Raters’ L2 background as a potential source of bias in rating oral performance. Language Testing , 30 (2), 231–252.

Ghaffar, M. A., Khairallah, M., & Salloum, S. (2020). Co-constructed rubrics and assessment forlearning: The impact on middle school students’ attitudes and writing skills. Assessing Writing , 45 https://doi.org/10.1016/j.asw.2020.100468 .

Ghalib, T. K., & Hattami, A. A. (2015). Holistic versus analytic evaluation of EFL writing: a case study. English Language Teaching , 8 (7), 225–236.

Grabe, W., & Kaplan, R. B. (1996). Theory and practice of writing: an applied linguistic perspective . London: Longman.

Graham, S., Harris, K. R., & Mason, L. (2005). Improving the writing performance, knowledge, and self-efficacy of struggling young writers: the effects of self-regulated strategy development. Contemporary Educational Psychology , 30 (2), 207–241 https://doi.org/10.1016/j.cedpsych.2004.08.001 .

Gustilo, L., & Magno, C. (2015). Explaining L2 Writing performance through a chain of predictors: A SEM approach. 3 L: The Southeast Asian Journal of English Language Studies , 21 (2), 115–130.

Hamp-Lyons, L. (1990). Second language writing assessment. In B. Kroll (Ed.), Second language writing: research insights for the classroom , (pp. 69–87). California: Cambridge University Press.

Hamp-Lyons, L. (1991). Holistic writing assessment of LEP students . Washington, DC: Paper presented at Symposium on limited English proficient student.

Hamp-Lyons, L. (2007). Editorial: worrying about rating. Assessing Writing , 12 , 1–9.

Hamp-Lyons, L., & Kroll, B. (1997). TOEFL 2000 – writing: composition, community and assessment (toefl monograph series no. 5) . Princeton: Educational Testing Service.

Harman, R. (2013). Literary intertextuality in genre-based pedagogies: building lexicon cohesion in fifth-grade L2 writing. Journal of Second Language Writing , 22 (2), 125–140.

Harrison, J. (2010). Certificate of proficiency in English (CPE) test preparation course . Oxford: Oxford University Press.

Hinkel, E. (2009). The effects of essay topics on modal verb uses in L1 and L2 academic writing. Journal of Pragmatics , 41 (4), 667–683.

Holmes, P. (2006). Problematizing intercultural communication competence in the pluricultural classroom: Chinese students in New Zealand University. Journal of Language and Intercultural Communication , 6 (1), 18–34.

Hughes, A. (2003). Testing for language teachers . Cambridge: Cambridge University Press.

Huot, B. (1990). The literature of direct writing assessment: major concerns and prevailing trends. Review of Educational Research , 60 (2), 237–239.

Huot, B., Moore, C., & O’Neill, P. (2009). Creating a culture of assessment in writing programs and beyond. College Composition and Communication , 61 ( 1 ), 107–132.

Hyland, K. (2004). Disciplinary discourses: social interactions in academic writing . Michigan: University of Michigan Press.

Inoue, A. (2004). Community-based assessment pedagogy. Assessing Writing , 9 (3), 208–238 https://doi.org/10.1016/j.asw.2004.12.001 .

Izadpanah, M. A., Rakhshandehroo, F., & Mahmoudikia, M. (2014). On the consensus between holistic rating system and analytical rating system: a comparison between TOEFL iBT and Jacobs’ et al. composition. International Journal of Language Learning and Applied Linguistics World , 6 (1), 170–187.

Jacobs, H. L., Zingraf, S. A., Wormuth, D. R., Hartfiel, V. F., & Hughey, J. B. (1981). Testing ESL composition: a practical approach . Rowley: Newbury House.

Jakeman, V. (2006). Cambridge action plan for IELTS: academic module . Cambridge: Cambridge University Press.

Jamieson, J., & Poonpon, K. (2013). Developing analytic rating guides for TOEFL iBT integrated speaking tasks. In ETS research series (RR-13-13, TOEFLiBT-20) . Princeton: ETS.

Johnson, R. L., Penny, J., & Gordon, B. (2000). The relation between score resolution methods and interrater reliability: An empirical study of an analytic scoring rubric. Applied Measurement in Education , 13 , 121–138 https://doi.org/10.1207/S15324818AME1302_1 .

Jones, C. (2001). The relationship between writing centers and improvement in writing ability: An assessment of the literature. Journal of Education , 122 (1), 3–20.

Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: reliability, validity and educational consequences. Educational Research Review , 2 , 130–144.

Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational measurement , (4th ed., pp. 17–64). Westport: American Council on Education and Praeger Publishers.

Kane, T. S. (2000). Oxford essential guide to writing . New York: Berkey Publishing Group.

Kellogg, R. T., Turner, C. E., Whiteford, A. P., & Mertens, A. (2016). The role of working memory in planning and generating written sentences. Journal of Writing Research , 7 (3), 397–416.

Kim, Y. H. (2011). Diagnosing EAP writing ability using the reduced reparametrized unified model. Language Testing , 28 (4), 509–541.

Klein, P. D., & Boscolo, P. (2016). Trends in research on writing as a learning activity. Journal of Writing Research , 7 (3), 311–350 https://doi.org/10.17239/jowr-2016.07.3.01 .

Knoch, U. (2009). The assessment of academic style in EAP writing: the case of the rating scale. Melbourne Papers in Language Testing , 13 (1), 35.

Knoch, U. (2011). Rating scales for diagnostic assessment of writing: what should they look like and where should the criteria come from? Assessing Writing , 16 (2), 81–96.

Kondo-Brown, K. (2002). A facet analysis of rater bias in Japanese second language writing performance. Language Testing , 19 (1), 3–31.

Kong, N., Liu, O. L., Malloy, J., & Schedl, M. A. (2009). Does content knowledge affect TOEFL iBT reading performance? A confirmatory approach to differential item functioning. In ETS research report series (RR-09-29, TOEFLiBT-09) . Princeton: ETS.

Kroll, B. (1990). Second language writing (Cambridge Applied Linguistics): research insights for the classroom . Cambridge: Cambridge University Press.

Kroll, B., & Kruchten, P. (2003). The rational unified process made easy: a practitioner's guide to the RUP . Boston: Pearson Education.

Kuo, S. (2007). Which rubric is more suitable for NSS liberal studies? Analytic or holistic? Educational Research Journal , 22 (2), 179–199.

Lanteigne, B. (2017). Unscrambling jumbled sentences: an authentic task for English language assessment? Studies in Second Language Learning and Teaching , 7 (2), 251–273 https://doi.org/10.14746/ssllt.2017.7.2.5 .

Leki, L., Cumming, A., & Silva, T. (2008). A synthesis of research on second language writing in English . New York: Routledge.

Levin, P. (2009). Write great essays . London: McGraw-Hill Education.

Loughead, L. (2010). IELTS practice exam: with audio CDs . Hauppauge: Barron’s Education Series.

Lu, J., & Zhang, Z. (2013). Assessing and supporting argumentation with online rubrics. International Education Studies , 6 (7), 66–77.

Lumley, T. (2002). Assessment criteria in a large-scale writing test: what do they really mean to the raters? Language Testing , 19 (3), 246–276.

Lumley, T. (2005). Assessing second language writing: the rater’s perspective . Frankfurt: Lang.

MacDonald, S. (1994). Professional academic writing in the humanities and social sciences . Carbondale: Southern Illinois University Press.

Mackenzie, J. (2007). Essay writing: teaching the basics from the group up . Markham: Pembroke Publishers.

Malone, M. E., & Montee, M. (2014). Stakeholders’ beliefs about the TOEFL iBT test as a measure of academic language ability (TOEFL iBT Report No. 22, ETS Research Report No. RR-14-42) . Princeton: Educational Testing Service https://doi.org/10.1002/ets2.12039 .

Matsuda, P. K. (2002). Basic writing and second language writers: Toward an inclusive definition. Journal of Basic Writing , 22 (2), 67–89.

McLaren, S. (2006). Essay writing made easy . Sydney: Pascal Press.

McMillan, J. H. (2001). Classroom assessment: principles and practice for effective instruction , (2nd ed., ). Boston: Allyn & Bacon.

Melendy, G. A. (2008). Motivating writers: the power of choice. Asian EFL Journal , 20 (3), 187–198.

Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessment. Educational Researcher , 23 (2), 13–23.

Moore, J. (2009). Common mistakes at proficiency and how to avoid them . Cambridge: Cambridge University Press.

Moskal, B. M., & Leydens, J. (2000). Scoring rubric development: validity and reliability. Practical Assessment, Research & Evaluation , 7 , 10.

Moss, P. A. (1994). Can there be validity without reliability? Educational Researcher , 23 (2), 5–12.

Muenz, T. A., Ouchi, B. Y., & Cole, J. C. (1999). Item analysis of written expression scoring systems from the PIAT-R and WIAT. Psychology and Schools , 36 (1), 31–40.

Muncie, J. (2002). Using written teacher feedback in EFL composition classes. ELT Journal , 54 (1), 47–53 https://doi.org/10.1093/elt/54.1.47 .

Myford, C. M., & Wolfe, E. W. (2003). Detecting and measuring rater effects using many-facet rasch measurement: Part I. Journal of Applied Measurement , 4 (4), 386–422.

Noonan, L. E., & Sulsky, L. M. (2001). Impact of frame-of-reference and behavioral observation training on alternative training effectiveness criteria in a Canadian military sample. Human Performance , 14 (1), 3–26.

Nunn, R. C. (2000). Designing rating scales for small-group interaction. ELT Journal , 54 (2), 169–178.

Nunn, R. C., & Adamson, J. (2007). Toward the development of interactional criteria for journal paper evaluation. Asian EFL Journal , 9 (4), 205–228.

Nystrand, M., Greene, S., & Wiemelt, J. (1993). Where did composition studies come from? An intellectual history. Written Communication , 10 (3), 267–333.

O’Neil, T. R., & Lunz, M. E. (1996). Examining the invariance of rater and project calibrations using a multi-facet rasch model . New York: Paper presented at the Annual Meeting of the American Educational Research Associations.

Obee, B. (2005). Practice tests for the revised CPE . Berkshire: Express Publishing.

Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purpose revisited. Educational Research Review , 9 , 129–144.

Pollitt, A., & Hutchinson, C. (1987). Calibrating graded assessments: rasch partial credit analysis of performance in writing. Language Testing , 4 (1), 72–92.

Raimes, A. (1991). Out of the woods: Emerging traditions in the teaching of writing. TESOL Quarterly , 25 (3), 407–430.

Reid, J. (1993). Teaching ESL writing . Englewood Cliffs: Regents Prentice Hall.

Rezaei, A. R., & Lovorn, M. (2010). Reliability and validity of rubrics for assessment through writing. Assessing Writing , 15 (1), 18–39.

Richards, J. C., & Schmidt, R. (2002). Longman dictionary of language teaching and applied linguistics . New York: Pearson Education.

Roch, S. G., & O’Sullivan, B. J. (2003). Frame of reference rater training issues: recall, time and behavior observation training. International Journal of Training and Development , 7 (2), 93–107.

Rosenfeld, M., Courtney, R., & Fowles, M. (2004). Identifying the writing tasks important for academic success at the undergraduate and graduate levels. Research report 42 . Princeton: Educational Testing Service.

Rosenfeld, M., Leung, S., & Oltman, P. K. (2001). Identifying the reading, writing, speaking, and listening tasks important for academic success at the undergraduate and graduate levels (TOEFL Monograph Series MS-21) . Princeton: Educational Testing Service.

Rupp, A. A., Casabianca, J. M., Krüger, M., Keller, S., & Köller, O. (2019). Automated essay scoring at scale: a case study in Switzerland and Germany (RR-86. ETS RR-19-12) . ETS Research Report Series , 2019 https://doi.org/10.1002/ets2.12249 .

Saeidi, M., Yousefi, M., & Baghayi, P. (2013). Rater bias in assessing Iranian EFL learners’ writing performance. Iranian Journal of Applied Linguistics , 16 (1), 145–175.

Sasaki, M. (2000). Toward an empirical model of EFL writing processes: an explanatory study. Journal of Second Language Writing , 9 (3), 259–291.

Sasaki, M., & Hirose, K. (1999). Development of an analytic rating scale for Japanese L1 writing. Language Testing , 16 (4), 457–478.

Schaefer, E. (2008). Rater bias patterns in an EFL writing assessment. Language Testing , 25 (4), 465–493.

Schirmer, B. R., & Bailey, J. (2000). Writing assessment rubric: an instructional approach for struggling writers. Teaching Exceptional Children , 33 (1), 52–58.

Schoonen, R. (2005). Generalizability of writing scores: an application of structural equation modeling. Language Testing , 22 (1), 1–5.

Shaw, S. D., & Weir, C. J. (2007). Examining writing: research and practice in assessing second language writing . Cambridge: Cambridge University Press.

Shermis, M. (2014). State-of-the-art automated essay scoring: competition, results, and future directions from a United States demonstration. Assessing Writing , 20 , 53–76 https://doi.org/10.1016/j.asw.2013.04.001 .

Shi, L. (2001). Native- and nonnative- speaking EFL teachers’ evaluation of Chinese students’ English writing. Language Testing , 18 (3), 303–325.

Spratt, M., & Taylor, L. B. (2000). The Cambridge CAE course: self-study student’s book . Cambridge: Cambridge University Press.

Spurr, B. (2005). Successful essay writing for senior high school . NSW: New Frontier Publishing.

Staff, M. P. (2017). GRE guide to the use of scores. In Graduate record examination . Princeton: ETS.

Stevens, J. P. (2002). Applied multivariate statistics for the social sciences , (4th ed., ). Hillsdale: Erlbaum.

Stewart, A. (2009). IELTS preparation & practice: reading and writing—academic module . New York: Pearson Education.

Tardy, M. C., & Matsuda, P. K. (2009). The construction of author voice by editorial board members. Written Communication , 26 (1), 32–52.

Trace, J., Meier, V., & Janssen, G. (2016). "I can see that": developing shared rubric category interpretations through score negotiation. Assessing Writing , 30 , 32–43 https://doi.org/10.1016/j.asw.2016.08.001 .

Ward, J. R., & McCotter, S. S. (2004). Reflection as a visible outcome for preservice teachers. Teaching and Teacher Education , 20 (3), 243–257.

Weigle, S. C. (2002). Assessing writing . Cambridge: Cambridge University Press.

Weigle, S. C. (2013). English language learners and automated scoring of essays: Critical considerations. Assessing Writing , 18 , 85–99.

Weir, C. J. (1990). Communicative language testing . New Jersey: Prentice Hall, Inc.

Weissberg, B. (2000). Developmental relationship in the acquisition of English syntax: Writing vs. speech. Journal of Learning and Instruction , 10 (1), 37–53 https://doi.org/10.1016/S0959-4752(99)00017-1 .

Wesolowski, B. W., Wind, S. A., & Engelhard, G. (2017). Evaluating differential rater functioning over time in the context of solo music performance assessment. Bulletin of the Council for Research in Music Education , ( 212 ), 75–98 https://doi.org/10.5406/bulcouresmusedu.212.0075 .

White, E. M. (1984). Teaching and assessing writing , (2nd ed., ). San Francisco: Jossey-Bass.

White, E. M. (1985). Teaching and assessing writing . San Francisco: Jossey-Bass.

White, E. M. (1994). Teaching and assessing writing , (2nd ed. ). San Francisco: Jossey-Bass.

Wiggan, G. (1994). The constant danger of sacrificing validity to reliability: making writing assessment serve writers. Assessing Writing , 1 , 129–139 https://doi.org/10.1016/1075-2935(94)90008-6 .

Wilson, M. (2006). Rethinking rubrics in writing assessment . Portsmouth: Heinemann.

Wilson, M. (2017). Reimagining writing assessment: from scales to stories . Portsmouth: Heinemann.

Wind, S. A. (2020). Do raters use rating scale categories consistently across analytic rubric domains in writing assessment? Assessing Writing , 43 https://doi.org/10.1016/j.asw.2019.100416 .

Wind, S. A., Tsai, C. L., Grajeda, S. B., & Bergin, C. (2018). Principals’ use of rating scale categories in classroom observation for teacher evaluation. School Effectiveness and School Improvement , 29 (3), 485–510 https://doi.org/10.1080/09243453.2018.1470989 .

Wiseman, C. S. (2012). A comparison of the performance of analytic vs. holistic scoring rubrics to assess L2 writing. Iranian Journal of Language Testing , 2 (1), 59–61.

Wyldeck, K. (2008). Everyday spelling and grammar . Sydney: Pascal Press.

Zahler, K. A. (2011). McGraw-Hill’s conquering the NEW GRE verbal and writing . New York: McGraw-Hill Education.

Zhang, B., Johnson, L., & Kilic, G. B. (2008). Assessing the reliability of self-and-peer rating in student group work. Assessment & Evaluation in Higher Education , 33 (3), 329–340 https://doi.org/10.1080/02602930701293181 .

Acknowledgements

The authors would like to thank the reviewers for their fruitful comments. We would also like to thank the raters who kindly agreed to contribute to this study.

Author information

Authors and affiliations

Department of Foreign Languages, TUMS International College, Tehran University of Medical Sciences (TUMS), Keshavarz Blvd., Tehran, 1415913311, Iran

Enayat A. Shabani & Jaleh Panahi

Contributions

The authors made almost equal contributions to this manuscript, and both read and approved the final manuscript.

Authors’ information

Enayat A. Shabani ( [email protected] ) holds a Ph.D. in TEFL and is currently the Chair of the Department of Foreign Languages at Tehran University of Medical Sciences (TUMS). His areas of research interest include language testing and assessment, and the internationalization of higher education.

Jaleh Panahi ( [email protected] ) holds an M.A. in TEFL. She has been teaching English for 12 years, with a main focus on IELTS instruction. She is currently a part-time instructor at the Department of Foreign Languages, Tehran University of Medical Sciences. Her fields of research interest are language assessment, and language and cognition.

Corresponding author

Correspondence to Enayat A. Shabani.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Shabani, E. A., & Panahi, J. (2020). Examining consistency among different rubrics for assessing writing. Language Testing in Asia , 10 , 12. https://doi.org/10.1186/s40468-020-00111-4

Received: 16 June 2020

Accepted: 03 September 2020

Published: 26 September 2020

DOI: https://doi.org/10.1186/s40468-020-00111-4

Keywords

  • Scoring rubrics
  • Essay writing
  • Tests of English language proficiency
  • Writing assessment
