6 Types of Assessment (and How to Use Them)

Written by Maria Kampen

Reviewed by Stephanie McEwan, B.Ed.

What’s in this article:

  • What’s the purpose of different types of assessment?
  • 6 types of assessment to use in your classroom
  • How to create effective assessments
  • Final thoughts about different types of assessment

How do you use the different  types of assessment  in your classroom to promote student learning?

School closures and remote or hybrid learning environments have posed some challenges for educators, but motivating students to learn and grow remains a constant goal.

Some students have lost a portion of their academic progress. Assessing students in meaningful ways can help motivate and empower them to grow as they become agents of their own learning. 

But testing can contribute to  math anxiety  for many students. Assessments can be difficult to structure properly and time-consuming to grade. And as a teacher, you know that student progress isn't just a number on a report card. 

There’s so much more to assessments than delivering an end-of-unit exam or prepping for a standardized test. Assessments help shape the learning process at all points, and give you insights into student learning. As John Hattie, a professor of education and the director of the Melbourne Education Research Institute at the University of Melbourne, Australia, puts it:

The major purpose of assessment in schools should be to provide interpretative information to teachers and school leaders about their impact on students, so that these educators have the best information possible about what steps to take with instruction and how they need to change and adapt. So often we use assessment in schools to inform students of their progress and attainment. Of course this is important, but it is more critical to use this information to inform teachers about their impact on students. Using assessments as feedback for teachers is powerful. And this power is truly maximized when the assessments are timely, informative, and related to what teachers are actually teaching.

The six types of assessment are:

  • Diagnostic assessments
  • Formative assessments
  • Summative assessments
  • Ipsative assessments
  • Norm-referenced assessments
  • Criterion-referenced assessments

Let’s find out how assessments can analyze, support and further learning.

What’s the purpose of different types of assessment?

Different types of assessments can help you understand student progress in various ways. This understanding can inform the teaching strategies you use, and may lead to different adaptations.

In your classroom, assessments generally have one of three purposes:

  • Assessment  of  learning
  • Assessment  for  learning
  • Assessment  as  learning

Assessment of learning

You can use assessments to help identify if students are meeting grade-level standards. 

Assessments of learning are usually grade-based, and can include:

  • Final projects
  • Standardized tests

They often have a concrete grade attached to them that communicates student achievement to teachers, parents, students, school-level administrators and district leaders. 

Common types of assessment of learning include summative assessments, norm-referenced assessments and criterion-referenced assessments.

Assessment for learning

Assessments for learning provide you with a clear snapshot of student learning and understanding as you teach -- allowing you to adjust everything from your classroom management strategies to your lesson plans as you go.

Assessments for learning should always be  ongoing and actionable . When you’re creating assessments, keep these key questions in mind:

  • What do students still need to know?
  • What did students take away from the lesson?
  • Did students find this lesson too easy? Too difficult?
  • Did my teaching strategies reach students effectively?
  • What are students most commonly misunderstanding?
  • What did I most want students to learn from this lesson? Did I succeed?

There are lots of ways you can deliver assessments for learning, even in a busy classroom.  We’ll cover some of them soon!

For now, just remember these assessments aren’t only for students -- they’re to provide you with actionable feedback to improve your instruction.

Common types of assessment for learning include formative assessments and diagnostic assessments. 

Assessment as learning

Assessment as learning  actively involves students  in the learning process. It teaches critical thinking and problem-solving skills, and encourages students to set achievable goals for themselves and objectively measure their progress.

These assessments can help engage students in the learning process, too! One study showed that “in most cases the students pointed out the target knowledge as the reason for a task to be interesting and engaging, followed by the way the content was dealt with in the classroom.”

Another found:

“Students develop an interest in mathematical tasks that they understand, see as relevant to their own concerns, and can manage. Recent studies of students’ emotional responses to mathematics suggest that both their positive and their negative responses diminish as tasks become familiar and increase when tasks are novel.”

Douglas B. McLeod

Some examples of assessment as learning include ipsative assessments, self-assessments and peer assessments.

There’s a time and place for every type of assessment. Keep reading to find creative ways of delivering assessments and understanding your students’ learning process!

6 types of assessment to use in your classroom

1. Diagnostic assessment

Let’s say you’re starting a lesson on two-digit  multiplication . To make sure the unit goes smoothly, you want to know if your students have mastered fact families,  place value  and one-digit multiplication before you move on to more complicated questions.

When you structure  diagnostic assessments  around your lesson,  you’ll get the information you need to understand student knowledge and engage your whole classroom .

Some examples to try include:

  • Short quizzes
  • Journal entries
  • Student interviews
  • Student reflections
  • Classroom discussions
  • Graphic organizers (e.g., mind maps, flow charts, KWL charts)

Diagnostic assessments can also help benchmark student progress. Consider giving the same assessment at the end of the unit so students can see how far they’ve come!

Using Prodigy for diagnostic assessments

One unique way of delivering diagnostic assessments is to use a game-based learning platform that engages your students.

Prodigy’s assessments tool  helps you align the math questions your students see in-game with the lessons you want to cover.

Screenshot of assessment pop up in Prodigy's teacher dashboard.

To set up a diagnostic assessment, use your assessments tool to create a  Plan  that guides students through a skill. This adaptive assessment will support students with pre-requisites when they need additional guidance.

Want to give your students a sneak peek at the upcoming lesson?  Learn how Prodigy helps you pre-teach important lessons .

2. Formative assessment

Just because students made it to the end-of-unit test doesn’t mean they’ve mastered the topics in the unit.  Formative assessments  help teachers understand student learning while they teach, and provide them with information to adjust their teaching strategies accordingly.

Meaningful learning involves processing new facts, adjusting assumptions and drawing nuanced conclusions. As researchers  Thomas Romberg and Thomas Carpenter  describe it:

“Current research indicates that acquired knowledge is not simply a collection of concepts and procedural skills filed in long-term memory. Rather, the knowledge is structured by individuals in meaningful ways, which grow and change over time.”

In other words, meaningful learning is like a puzzle — having the pieces is one thing, but knowing how to put them together is an engaging process that helps solidify learning.

Formative assessments help you track how student knowledge is growing and changing in your classroom in real-time.  While it requires a bit of a time investment — especially at first — the gains are more than worth it.

A March 2020 study found that formal formative assessment evidence, such as written feedback and quizzes, gathered within or between instructional units helped enhance the effectiveness of formative assessment practices.

Some examples of formative assessments include:

  • Group projects
  • Progress reports
  • Class discussions
  • Entry and exit tickets
  • Short, regular quizzes
  • Virtual classroom tools like  Socrative  or  Kahoot!

When running formative assessments in your classroom, it’s best to keep them  short, easy to grade and consistent . Introducing students to formative assessments in a low-stakes way can help you benchmark their progress and reduce math anxiety.

Find more engaging formative assessment ideas here!

How Prodigy helps you deliver formative assessments

Prodigy makes it easy to create, deliver and grade formative assessments that help keep your students engaged with the learning process and provide you with actionable data to adjust your lesson plans. 

Use your Prodigy teacher dashboard to create an  Assignment  and make formative assessments easy!

Assignments  assess your students on a particular skill with a set number of questions and can be differentiated for individual students or groups of students.

For more ideas on using Prodigy for formative assessments, read:

  • How to use Prodigy for spiral review
  • How to use Prodigy as an entry or exit ticket
  • How to use Prodigy for formative assessments

3. Summative assessment

Summative assessments  measure student progress as an assessment of learning. Standardized tests are a type of summative assessment and  provide data for you, school leaders and district leaders .

They can assist with communicating student progress, but they don’t always give clear feedback on the learning process and can foster a “teach to the test” mindset if you’re not careful. 

Plus, they’re stressful for teachers. One  Harvard survey  found 60% of teachers said “preparing students to pass mandated standardized tests” “dictates most of” or “substantially affects” their teaching.

Sound familiar?

But just because it’s a summative assessment doesn’t mean it can’t be engaging for students and useful for your teaching. Try creating assessments that deviate from the standard multiple-choice test, like:

  • Recording a podcast
  • Writing a script for a short play
  • Producing an independent study project

No matter what type of summative assessment you give your students, keep some best practices in mind:

  • Keep it real-world relevant where you can
  • Make questions clear and instructions easy to follow
  • Give a rubric so students know what’s expected of them
  • Create your final test after, not before, teaching the lesson
  • Try blind grading: don’t look at the name on the assignment before you mark it

Use these summative assessment examples to make them effective and fun for your students!

Preparing students for summative assessments with Prodigy

Screenshot of Prodigy's test prep tool in the Prodigy teacher dashboard.

Did you know you can use Prodigy to prepare your students for summative assessments — and deliver them in-game?

Use  Assignments  to differentiate math practice for each student or send an end-of-unit test to the whole class.

Or use our  Test Prep  tool to understand student progress and help them prepare for standardized tests in an easy, fun way!

See how you can benchmark student progress and prepare for standardized tests with Prodigy.

4. Ipsative assessments

How many of your students get a bad grade on a test and get so discouraged they stop trying? 

Ipsative assessments  are a type of assessment  as  learning that  compares a student’s previous results with their current attempt, motivating them to set goals and improve their skills .

When a student hands in a piece of creative writing, it’s usually just the first draft. Students practice athletic skills and musical talents to improve, but they don’t always get the same chance to revise and retry in other subjects like math.

A two-stage assessment framework helps students learn from their mistakes and motivates them to do better. Plus, it shifts the focus away from instant results and teaches students that learning is a process.

You can incorporate ipsative assessments into your classroom with:

  • A two-stage testing process
  • Project-based learning  activities

One study on ipsative learning techniques  found that when they were used with higher education distance learners, they helped motivate students and encouraged them to act on feedback to improve their grades.

In Gwyneth Hughes' book, Ipsative Assessment: Motivation Through Marking Progress , she writes: "Not all learners can be top performers, but all learners can potentially make progress and achieve a personal best. Putting the focus onto learning rather than meeting standards and criteria can also be resource efficient."

While educators might use this type of assessment to compare pre- and post-test results, they can also use it in reading instruction. Depending on your school's policy, for example, you can record a student reading a book and discussing its contents. Then, at another point in the year, repeat the process. Finally, listen to the recordings together and discuss the student’s reading improvements.

What could it look like in your classroom?

5. Norm-referenced assessments

Norm-referenced assessments  are tests designed to compare an individual to a group of their peers, usually based on national standards and occasionally adjusted for age, ethnicity or other demographics.

Unlike ipsative assessments, where the student is only competing against themselves, norm-referenced assessments  draw from a wide range of data points to make conclusions about student achievement.

Types of norm-referenced assessments include:

  • Physical assessments
  • Standardized college admissions tests like the SAT and GRE

Proponents of norm-referenced assessments point out that they accentuate differences among test-takers and make it easy to analyze large-scale trends. Critics argue they don’t encourage complex thinking and can inadvertently discriminate against low-income students and minorities. 

Norm-referenced assessments are most useful when measuring student achievement to determine:

  • Language ability
  • Grade readiness
  • Physical development
  • College admission decisions
  • Need for additional learning support

While they’re not usually the type of assessment you deliver in your classroom, chances are you have access to data from past tests that can give you valuable insights into student performance.

6. Criterion-referenced assessments

Criterion-referenced assessments   compare the score of an individual student to a learning standard and performance level,  independent of other students around them. 

In the classroom, this means measuring student performance against grade-level standards and can include end-of-unit or final tests to assess student understanding. 

Outside of the classroom, criterion-referenced assessments appear in professional licensing exams, high school exit exams and citizenship tests, where the student must answer a certain percentage of questions correctly to pass. 

Criterion-referenced assessments are most often compared with norm-referenced assessments. While they’re both considered types of assessments of learning, criterion-referenced assessments don’t measure students against their peers. Instead, each student is graded to provide insight into their strengths and areas for improvement.
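The three reference points described above can be made concrete with a little arithmetic: norm-referenced scoring compares a student to their peers, criterion-referenced scoring compares them to a fixed standard, and ipsative scoring compares them to their own past performance. Here is an illustrative sketch in Python, using entirely hypothetical scores:

```python
# Illustrative only: the same raw score interpreted three ways.
# All numbers below are hypothetical.
scores = [62, 71, 75, 80, 84, 90, 95]  # a class's scores on a unit test
student_score = 80                     # the student we're interested in
previous_score = 72                    # the same student's earlier attempt
passing_cutoff = 70                    # the grade-level standard

# Norm-referenced: where does the student fall relative to peers?
percentile = 100 * sum(s < student_score for s in scores) / len(scores)

# Criterion-referenced: does the student meet the fixed standard?
meets_standard = student_score >= passing_cutoff

# Ipsative: how much has the student improved on their own prior result?
improvement = student_score - previous_score

print(f"Percentile: {percentile:.0f}, meets standard: {meets_standard}, improvement: +{improvement}")
# Prints: Percentile: 43, meets standard: True, improvement: +8
```

Notice that the single score of 80 looks quite different through each lens: middling against peers, passing against the standard, and a clear gain against the student’s own history.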

How to create effective assessments

You don’t want to use a norm-referenced assessment to figure out where learning gaps in your classroom are, and ipsative assessments aren’t the best for giving your principal a high-level overview of student achievement in your classroom.

When it comes to your teaching, here are some best practices to help you identify which type of assessment will work and how to structure it, so you and your students get the information you need.

Make a rubric

Students do their best work when they know what’s expected of them and how they’ll be marked. Whether you’re assigning a  cooperative learning  project or an independent study unit, a rubric  communicates clear success criteria to students and helps teachers maintain consistent grading.

Ideally, your rubric should have a detailed breakdown of all the project’s individual parts, what’s required of each group member and an explanation of what different levels of achievement look like.

A well-crafted rubric lets multiple teachers grade the same assignment and arrive at the same score. It’s an important part of assessments for learning and assessments of learning, and teaches students to take responsibility for the quality of their work. 

There are plenty of  online rubric tools  to help you get started -- try one today!

Ask yourself  why  you're giving the assessment

While student grades provide a useful picture of achievement and help you communicate progress to school leaders and parents, the ultimate goal of assessments is to improve student learning. 

Ask yourself questions like:

  • What’s my plan for the results?
  • Who’s going to use the results, besides me?
  • What do I want to learn from this assessment?
  • What’s the best way to present the assessment to my students, given what I know about their progress and learning styles?

This helps you effectively prepare students and create an assessment that moves learning forward.

Don't stick with the same types of assessment — mix it up!

End-of-unit assessments are a tried and tested (pun intended) staple in any classroom. But why stop there?

Let’s say you’re teaching a unit on  multiplying fractions . To help you plan your lessons, deliver a diagnostic assessment to find out what students remember from last year. Once you’re sure they understand all the prerequisites, you can start teaching your lessons more effectively. 

After each math class, deliver short exit tickets to find out what students understand and where they still have questions. If you see students struggling, you can re-teach or deliver intervention in small groups during  station rotations . 

When you feel students are prepared, give them an assessment of learning. If students don’t meet the success criteria, provide additional support and scaffolding to help them improve their understanding of the topic. You can foster a growth mindset by reminding students that mistakes are an important part of learning!

Now your students are masters at multiplying fractions! And when standardized testing season rolls around, you know which of your students need additional support — and where. 

Build your review based on the data you’ve collected through diagnostic, formative, summative and ipsative assessments so they perform well on their standardized tests.

Final thoughts about different types of assessment

Remember: learning extends well beyond a single score or assessment!

It’s an ongoing process, with plenty of opportunities for students to build a  growth mindset  and develop new skills. 

Prodigy is a fun, digital game-based learning platform used by over 100 million students and 2.5 million teachers. Join today to make delivering assessments and differentiating math learning easy with a free teacher account!

Assessing Student Learning: 6 Types of Assessment and How to Use Them

Assessing student learning is a critical component of effective teaching and plays a significant role in fostering academic success. We will explore six different types of assessment and evaluation strategies that can help K-12 educators, school administrators, and educational organizations enhance both student learning experiences and teacher well-being.

We will provide practical guidance on how to implement and utilize various assessment methods, such as formative and summative assessments, diagnostic assessments, performance-based assessments, self-assessments, and peer assessments.

Additionally, we will also discuss the importance of implementing standard-based assessments and offer tips for choosing the right assessment strategy for your specific needs.

Importance of Assessing Student Learning

Assessment plays a crucial role in education, as it allows educators to measure students’ understanding, track their progress, and identify areas where intervention may be necessary. Assessing student learning not only helps educators make informed decisions about instruction but also contributes to student success and teacher well-being.

Assessments provide insight into student knowledge, skills, and progress while also highlighting necessary adjustments in instruction. Effective assessment practices ultimately contribute to better educational outcomes and promote a culture of continuous improvement within schools and classrooms.

1. Formative assessment

Formative assessment is a type of assessment that focuses on monitoring student learning during the instructional process. Its primary purpose is to provide ongoing feedback to both teachers and students, helping them identify areas of strength and areas in need of improvement. This type of assessment is typically low-stakes and does not contribute to a student’s final grade.

Some common examples of formative assessments include quizzes, class discussions, exit tickets, and think-pair-share activities. This type of assessment allows educators to track student understanding throughout the instructional period and identify gaps in learning and intervention opportunities.

To effectively use formative assessments in the classroom, teachers should implement them regularly and provide timely feedback to students.

This feedback should be specific and actionable, helping students understand what they need to do to improve their performance. Teachers should use the information gathered from formative assessments to refine their instructional strategies and address any misconceptions or gaps in understanding. Formative assessments play a crucial role in supporting student learning and helping educators make informed decisions about their instructional practices.

2. Summative assessment

Summative assessment evaluates student learning at the end of an instructional unit or period, often through high-stakes tasks that contribute to a student’s final grade. Examples of summative assessments include final exams, end-of-unit tests, standardized tests, and research papers. To effectively use summative assessments in the classroom, it’s important to ensure that they are aligned with the learning objectives and content covered during instruction.

This will help to provide an accurate representation of a student’s understanding and mastery of the material. Providing students with clear expectations and guidelines for the assessment can help reduce anxiety and promote optimal performance.

Summative assessments should be used in conjunction with other assessment types, such as formative assessments, to provide a comprehensive evaluation of student learning and growth.

3. Diagnostic assessment

Diagnostic assessment, often used at the beginning of a new unit or term, helps educators identify students’ prior knowledge, skills, and understanding of a particular topic.

This type of assessment enables teachers to tailor their instruction to meet the specific needs and learning gaps of their students. Examples of diagnostic assessments include pre-tests, entry tickets, and concept maps.

To effectively use diagnostic assessments in the classroom, teachers should analyze the results to identify patterns and trends in student understanding.

This information can be used to create differentiated instruction plans and targeted interventions for students struggling with the upcoming material. Sharing the results with students can help them understand their strengths and areas for improvement, fostering a growth mindset and encouraging active engagement in their learning.

4. Performance-based assessment

Performance-based assessment is a type of evaluation that requires students to demonstrate their knowledge, skills, and abilities through the completion of real-world tasks or activities.

The main purpose of this assessment is to assess students’ ability to apply their learning in authentic, meaningful situations that closely resemble real-life challenges. Examples of performance-based assessments include projects, presentations, portfolios, and hands-on experiments.

These assessments allow students to showcase their understanding and application of concepts in a more active and engaging manner compared to traditional paper-and-pencil tests.

To effectively use performance-based assessments in the classroom, educators should clearly define the task requirements and assessment criteria, providing students with guidelines and expectations for their work. Teachers should also offer support and feedback throughout the process, allowing students to revise and improve their performance.

Incorporating opportunities for peer feedback and self-reflection can further enhance the learning process and help students develop essential skills such as collaboration, communication, and critical thinking.

5. Self-assessment

Self-assessment is a valuable tool for encouraging students to engage in reflection and take ownership of their learning. This type of assessment requires students to evaluate their own progress, skills, and understanding of the subject matter. By promoting self-awareness and critical thinking, self-assessment can contribute to the development of lifelong learning habits and foster a growth mindset.

Examples of self-assessment activities include reflective journaling, goal setting, self-rating scales, or checklists. These tools provide students with opportunities to assess their strengths, weaknesses, and areas for improvement. When implementing self-assessment in the classroom, it is important to create a supportive environment where students feel comfortable and encouraged to be honest about their performance.

Teachers can guide students by providing clear criteria and expectations for self-assessment, as well as offering constructive feedback to help them set realistic goals for future learning.

Incorporating self-assessment as part of a broader assessment strategy can reinforce learning objectives and empower students to take an active role in their education.

Reflecting on their performance and understanding the assessment criteria can help them recognize both short-term successes and long-term goals. This ongoing process of self-evaluation can help students develop a deeper understanding of the material, as well as cultivate valuable skills such as self-regulation, goal setting, and critical thinking.

6. Peer assessment

Peer assessment, also known as peer evaluation, is a strategy where students evaluate and provide feedback on their classmates’ work. This type of assessment allows students to gain a better understanding of their own work, as well as that of their peers.

Examples of peer assessment activities include group projects, presentations, written assignments, or online discussion boards.

In these settings, students can provide constructive feedback on their peers’ work, identify strengths and areas for improvement, and suggest specific strategies for enhancing performance.

Constructive peer feedback can help students gain a deeper understanding of the material and develop valuable skills such as working in groups, communicating effectively, and giving constructive criticism.

To successfully integrate peer assessment in the classroom, consider incorporating a variety of activities that allow students to practice evaluating their peers’ work, while also receiving feedback on their own performance.

Encourage students to focus on both strengths and areas for improvement, and emphasize the importance of respectful, constructive feedback. Provide opportunities for students to reflect on the feedback they receive and incorporate it into their learning process. Monitor the peer assessment process to ensure fairness, consistency, and alignment with learning objectives.

Implementing Standard-Based Assessments

Standard-based assessments are designed to measure students’ performance relative to established learning standards, such as those generated by the Common Core State Standards Initiative or individual state education guidelines.

By implementing these types of assessments, educators can ensure that students meet the necessary benchmarks for their grade level and subject area, providing a clearer picture of student progress and learning outcomes.

To successfully implement standard-based assessments, it is essential to align assessment tasks with the relevant learning standards.

This involves creating assessments that directly measure students’ knowledge and skills in relation to the standards rather than relying solely on traditional testing methods.

As a result, educators can obtain a more accurate understanding of student performance and identify areas that may require additional support or instruction. Grading formative and summative assessments within a standard-based framework requires a shift in focus from assigning letter grades or percentages to evaluating students’ mastery of specific learning objectives.

This approach encourages educators to provide targeted feedback that addresses individual student needs and promotes growth and improvement. By utilizing rubrics or other assessment tools, teachers can offer clear, objective criteria for evaluating student work, ensuring consistency and fairness in the grading process.

Tips For Choosing the Right Assessment Strategy

When selecting an assessment strategy, it’s crucial to consider its purpose. Ask yourself what you want to accomplish with the assessment and how it will contribute to student learning. This will help you determine the most appropriate assessment type for your specific situation.

Aligning assessments with learning objectives is another critical factor. Ensure that the assessment methods you choose accurately measure whether students have met the desired learning outcomes. This alignment will provide valuable feedback to both you and your students on their progress. Diversifying assessment methods is essential for a comprehensive evaluation of student learning.

By using a variety of assessment types, you can gain a more accurate understanding of students’ strengths and weaknesses. This approach also helps support different learning styles and reduces the risk of overemphasis on a single assessment method.

Incorporating multiple forms of assessment, such as formative, summative, diagnostic, performance-based, self-assessment, and peer assessment, can provide a well-rounded understanding of student learning. By doing so, educators can make informed decisions about instruction, support, and intervention strategies to enhance student success and overall classroom experience.

Challenges and Solutions in Assessment Implementation

Implementing various assessment strategies can present several challenges for educators. One common challenge is the limited time and resources available for creating and administering assessments. To address this issue, teachers can collaborate with colleagues to share resources, divide the workload, and discuss best practices.

Utilizing technology and online platforms can also streamline the assessment process and save time.

Another challenge is ensuring that assessments are unbiased and inclusive.

To overcome this, educators should carefully review assessment materials for potential biases and design assessments that are accessible to all students, regardless of their cultural backgrounds or learning abilities.

Offering flexible assessment options for the varying needs of learners can create a more equitable and inclusive learning environment.

It is also essential to continually improve assessment practices and seek professional development opportunities.

Seeking support from colleagues, attending workshops and conferences related to assessment practices, or enrolling in online courses can help educators stay up-to-date on best practices while also providing opportunities for networking with other professionals.

Ultimately, these efforts will contribute to an improved understanding of the assessments used as well as their relevance in overall student learning.

Assessing student learning is a crucial component of effective teaching and should not be overlooked. By understanding and implementing the various types of assessments discussed in this article, you can create a more comprehensive and effective approach to evaluating student learning in your classroom.

Remember to consider the purpose of each assessment, align them with your learning objectives, and diversify your methods for a well-rounded evaluation of student progress.

If you’re looking to further enhance your assessment practices and overall professional development, Strobel Education offers workshops, courses, keynotes, and coaching services tailored for K-12 educators. With a focus on fostering a positive school climate and enhancing student learning, Strobel Education can support your journey toward improved assessment implementation and greater teacher well-being.


A Guide to Types of Assessment: Diagnostic, Formative, Interim, and Summative


Assessments come in many shapes and sizes. For those who are new to assessment or just starting out, the terms can be hard to sort out or simply unfamiliar. Knowing one type of assessment from another can be a helpful way to understand how best to use assessment to your advantage. In this guide to types of assessments, we will cover the different types of assessments you may come across: diagnostic, formative, interim, and summative.

Nature of Assessments

The multi-faceted nature of assessments means that educators can leverage them in a number of ways to provide valuable formal or informal structure to the learning process. The main thing to remember is that the assessment is a learning tool. What all assessments have in common is that they provide a snapshot of student understanding at a particular time in the learning process.

Understandably, when you were a K-12 student yourself, you may not have been aware of the variety of assessments that teachers leverage. To the average student, or anyone who has ever been a student, the word ‘test’ has a pretty clear-cut definition, and it usually includes some level of anxiety and expectation about a final outcome. But to educators, tests – or assessments – are actually quite multi-faceted and have both formal and informal places throughout the learning process.

Different Types of Assessments

Assessments can run the gamut from start to finish when it comes to instruction. Think of it like a long-distance race that has a start and finish line and many stations to refuel in between. The race can be any instructional period of time, such as a unit, a quarter, or even the full year. In this metaphor, the student is the runner and the teacher is the coach who is trying to help the student run the race as well as they possibly can. Different assessment types, when utilized by the coach (teacher) in the right way, can help the runner (student) run the race better and more effectively.

Some assessments are helpful before the race even begins to help determine what the best running strategy is ( diagnostic ). Some assessments are beneficial during the race to track progress and see if adjustments to the strategy should be made during the race ( formative ). Some assessments are given to see if students in entire schools or districts, the entire running team, are moving forward and learning the material ( interim ). And some assessments are best at the very end of the race, to review performance, see how you did, and see how to improve for the next race ( summative ).

How to Use Assessments

Assessments help the teacher determine what to teach, how to teach, and, in the end, how effectively they taught it.

If you have ever asked the question, “What is a formative assessment?” or have been confused by formative assessment vs. summative assessment or interim vs. final, that’s OK! The Pear Assessment team is here to help!

What is a Diagnostic Assessment?

Are students ready for the next unit?  What knowledge do they already have about this topic?  Teachers who are curious about how much their class knows about a future topic can give diagnostic assessments before diving in.

Diagnostic assessments are pretests. They usually serve as a barometer for how much pre-loaded information a student has about a topic. The word diagnosis is defined as an analysis of the nature or condition of a situation, which is exactly how teachers tend to use them.

Diagnostic tests help tell the teacher (and the student) how much they know and don’t know about an upcoming topic. This informs the teacher’s lesson planning and learning objectives, and identifies areas that may need more or less instructional time.

Components of a Diagnostic Assessment

  • Happen at the beginning of a unit, lesson, quarter, or period of time
  • Aim to understand a student’s current position in order to inform effective instruction
  • Identify strengths and areas of improvement for the student
  • Are low-stakes assessments (usually do not count as a grade)

Difference Between Diagnostic and Formative Assessments

Though both diagnostic assessments and formative assessments aim to inform teachers to instruct more effectively, they emphasize different aspects.  Formative assessments are taken during a unit to assess how students are learning the material that the teacher has been teaching.  Diagnostic assessments come before this, analyzing what students have learned in the past, many times from different teachers or classes.  Both are very helpful for the teacher, and the results are used to identify areas that need more attention in future instruction.

Diagnostic Assessments Examples

At the beginning of a unit on Ancient Greece, a teacher may give a pre-test to determine if the class knows the basic geography, history or culture.  The class’ responses will determine where the teacher begins and how much time is dedicated to certain topics.  The teacher may learn from this diagnostic assessment that many students already have knowledge on cultural aspects of Greece, but know little about its history. From this, they may adjust the lesson plan to spend a bit more time on the history and origins of Ancient Greece and slightly less on culture.

Keep In Mind  

Another valuable use of a diagnostic pre-test is to give students an idea of what they will have learned by the end of the learning period. When combined with a post-test, their score on a pre-test will show students just how much knowledge they have gained. This can be a powerful practice for building confidence in students. In fact, some teachers even use the same pre-test and post-test to make this difference more evident. This strategy provides great data on how students have progressed and is a reliable way to measure and analyze growth over the year.

The grading scale for a diagnostic assessment is usually not based on the number of correct answers and holds little weight for a student’s final grade. You might consider this type of test to be a low-stakes assessment for students.
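For teachers comfortable with a spreadsheet export, the pre-test/post-test growth idea above can be sketched in a few lines of code. This is a minimal illustration with made-up student names and scores, not a feature of any particular platform:

```python
def growth_report(pre_scores, post_scores):
    """Return each student's gain (post minus pre) and the class average gain."""
    gains = {
        student: post_scores[student] - pre
        for student, pre in pre_scores.items()
        if student in post_scores  # only students who took both tests
    }
    avg = sum(gains.values()) / len(gains) if gains else 0.0
    return gains, avg

# Hypothetical scores out of 100
pre = {"Ava": 40, "Ben": 55, "Cam": 30}
post = {"Ava": 85, "Ben": 90, "Cam": 70}

gains, avg_gain = growth_report(pre, post)
print(gains)     # {'Ava': 45, 'Ben': 35, 'Cam': 40}
print(avg_gain)  # 40.0
```

Because the same instrument is used at both points, the gain is a direct, easy-to-explain measure of growth that can be shared with students and families.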

Diagnostic Assessment Tools

Teachers use Pear Assessment to find or develop diagnostic assessments in a number of creative ways. Some teachers set up diagnostics in the form of introductory activities, classic multiple-choice assessments, or tech-enhanced “quizzes”. The automated grading feature of Pear Assessment  makes it easy to instantly know how much information the class as a whole already knows.

Access Free Diagnostic Assessments

Start off the year strong and know where students are when they begin the school year. Access FREE grade-level SmartStart diagnostic assessments for grades 3-12 ELA and Math. Click here to learn more and explore these diagnostic assessments and more in the Pear Assessment Test Library.


What is a Formative Assessment?

How are students doing? Are they picking up the information they should be learning? Teachers who don’t want to wait until the end of a unit or semester use various tactics, like formative assessment, to “check in” with students and see how they are progressing.

What makes formative assessment stand out?

Formative assessment involves the use of immediate insights to guide instruction. If we break down the term, we see that “Formative” comes from Latin formare ‘to form.’  Assessment simply refers to an evaluation. Together the words “formative” and  “assessment” refer to a guiding evaluation that helps to shape something.  With formative assessment, teachers mold or form instruction to better suit student learning. To glean actionable insights, the best formative assessments are generally easy to implement and offer immediate results that lead to instant intervention or instructional adjustments.

Here’s how education academics Paul Black and Dylan Wiliam explain the difference between formative assessment and the general term “assessment”:

We use the general term assessment to refer to all those activities undertaken by teachers — and by their students in assessing themselves — that provide information to be used as feedback to modify teaching and learning activities. Such assessment becomes formative assessment when the evidence is actually used to adapt the teaching to meet student needs.

Another Way to Check-up on Everyone

One common way to think of a formative assessment is to think about “check-ups” with the doctor. During a check-up, the doctor assesses the status of your health to make sure you are on track and to identify any areas where you might need more attention or support. It can be used to promote healthy habits or catch symptoms of illness. If the doctor notices something amiss, they may ask you to exercise more or eat less sugar and more vegetables! The goal is to make strategic changes based on new insights. Similarly, formative assessment provides feedback to teachers, allowing them to “check-in” on how students are doing, or, to match this analogy, the “health” of learning!

Components that Define Formative Assessment

The main intent of formative assessment is to gather insight about student learning during a unit to track student progress and inform instruction.

Formative assessments usually comprise the following key aspects:

  • Low-stakes assessment
  • Goal of informing instruction
  • Gain insight on learning status
  • Helps identify knowledge retention and understanding
  • Daily, weekly, or otherwise frequent checks
  • Generally short and quick checks
  • Comes in many forms: quiz, exit ticket, artwork, Venn diagram, game, presentation, etc.

Examples of Formative Assessment

Formative assessments could include benchmark tests, a class discussion, an “exit ticket” activity or any check-in the teacher conducts to see how much has been learned.  By taking a quick formative assessment, the teacher can see how much has been retained and then modify the upcoming lessons or activities to fill in the gaps or pick up the pace.  It allows, as the name suggests, a teacher to form or reshape the lessons as they go. Formative assessments can sometimes be called interim assessments.

As you might be able to tell, formative assessments come in many shapes and sizes. They are used by a teacher to assess, or diagnose, how much information has been learned at periodic points in the middle of a unit, subject, or year. Formative assessments are a close cousin of diagnostic assessments.

Formative assessments are used in the middle of a learning process to determine if students are maintaining the right pace.

Another trend driving formative assessments is the common-core style of standardized tests. Many schools are using formative tests to help guide the preparation of their students for the formal spring testing season – a time when results have an important impact on the school, district, and even the state. These kinds of high-stakes assessments, such as PARCC, SBAC, AIR, and ACT Aspire, are driving the need for formative assessments throughout the year.

Like diagnostic assessments, formative assessments are usually given “cold”, without prior access to the information, to get an accurate sample of what has been retained. Similarly, they most often carry little weight towards the student’s final grade.

Online Formative Assessment with Pear Assessment

Many teachers use online digital assessments to gain immediate insights into student progress so they can quickly adjust teaching strategies or intervene where needed. Online assessments are auto-graded, so teachers save time and can focus more on strong, effective instruction.

Log onto Pear Assessment to access a wide number of online digital assessments in the public assessment library. You may notice that a significant portion of digital assessments in the library are dedicated to helping students prepare for spring testing. Many Pear Assessment Certified assessments are modeled after the tech-enhanced style of questions that are found on the spring assessments. Using these throughout the year helps students build a comfort level with tech-enhanced maneuvers that are key to success on spring tests.

Try out some online formative assessments created by teachers across the country. Assign them to your students or log in to Pear Assessment to create a free account and start making your own!

What is a Benchmark/Interim Assessment?

Are students within a whole school or district understanding the material? Where is there room for growth and how can instruction be improved? These are the types of questions that teachers and school leaders ask and hope to answer when giving benchmark exams.

Defining Benchmark Assessments

A benchmark exam is given across many classes, an entire grade level, a whole school, or across a district. The purpose of a benchmark exam is to understand if students have mastered specific standards and are ready to move on. Typically, benchmark exams are given to help students prepare for end of year state testing, like PARCC, AIR, SBAC, FSA, or PSSA.

It’s important to note that the terms “benchmark exam” and “interim assessment” are used interchangeably. They both are used to measure academic progress of large groups of students. Ideally, the results of a benchmark exam help teachers understand what lessons they need to reteach and which students need extra support. Beyond this, benchmark exams act as a “preview” to how a class, school, or district will perform on state tests or summative exams.

Components of a Benchmark Exam:

  • Help drive future instruction
  • Term used interchangeably with “interim assessment”
  • Given to many classes, a whole school, or across an entire district
  • Act as a “predictor” to state test scores

Is There a Difference Between Interim Assessment and Benchmark Exam? What About Formative Assessment?

There can be lots of confusion about the different types of assessments. It’s important to recognize these differences and understand how each type of assessment fits into the overall learning process of each student.

There is little to no difference between an interim assessment and a benchmark exam. They are both formal tests often given using technology, like Pear Assessment, to thoroughly and efficiently monitor student progress.

Benchmark exams are also formative in that they help teachers drive their future instruction. While traditional formative assessments are given in one class, benchmark exams are usually given across many different classes or across an entire school. The best benchmark exams give data quickly, so teachers can act on it. This is why digital assessment is great for benchmark exams.

Online Benchmark Exams With Pear Assessment

Schools and districts across the country have turned to Pear Assessment Enterprise to administer their common benchmark exams. When benchmark exams are given online, the results are instant and the data can immediately be used to help teachers modify their future lessons. School leaders can set up the test quickly and easily; they even can tie every question to a state standard.

For example, at Burton School District in California, district leaders and teachers are able to push out districtwide benchmark exams without a headache. David Shimer, Director of Education Services at Burton Schools, explains, “I think the ‘aha’ moment was when, within a period of one week, we were able to get every student across the district logged in, have teachers get an assessment from their students, and as a district we were able to get the charts and graphs back in ways that allowed us to adjust instruction and training.”

What is a Summative Assessment?

How well did a student do in this class? Did they learn this unit’s material? When people talk about classic tests or finals, a summative assessment is normally the type of assessment they are referring to.

In this category of assessments, you’ll find the “Big Kahuna” of tests, such as the finals that we pull all-nighters for as well as the tests that get you into college or let you drive on the roads.  Summative assessments document how much information was retained at the end of a designated period of learning (e.g. unit, semester, or school year).

Components of Summative Assessments:

  • Evaluate learning/understanding at the end of a checkpoint
  • Normally help to determine students’ grade
  • Used for accountability of schools, students, and teachers
  • Usually higher stakes than other assessment forms
  • Preparation and review are helpful for best performance

Summative Assessment Examples

At the end of a semester or a school year, summative tests are used to see how much the student actually learned. These can include midterms, final exams, or standardized tests. The best summative assessments require a higher level of thinking that synthesizes several important concepts together.

In the traditional sense of the term, summative assessments are what we think of as the big end-of-the-year bubble-sheet or pen-and-paper finals. In the modern-day, tech-enhanced classroom, summative assessments are increasingly delivered online. Summative assessments can even take the shape of multimedia presentations, group projects, creative writing, plays, or other hands-on projects that demonstrate mastery of the material. In summative assessments, the scores tend to have a significant effect on the student’s final grade or whatever is designated as the measurement of success.

Summative Assessment Tools

Teachers use Pear Assessment’s multimedia function to create summative assessments that use video as a prompt. The multimedia can engage students with audio and visual items and then require them to summarize their learning in a classic essay. The result is a traditional, “classic” exam with sophisticated multimedia components.

With Pear Assessment’s standards-tied questions, teachers who give summative assessments can immediately identify if students mastered the concepts they needed to know.


Center for Teaching

Assessing Student Learning


Forms and Purposes of Student Assessment


Student assessment is, arguably, the centerpiece of the teaching and learning process and therefore the subject of much discussion in the scholarship of teaching and learning. Without some method of obtaining and analyzing evidence of student learning, we can never know whether our teaching is making a difference. That is, teaching requires some process through which we can come to know whether students are developing the desired knowledge and skills, and therefore whether our instruction is effective. Learning assessment is like a magnifying glass we hold up to students’ learning to discern whether the teaching and learning process is functioning well or is in need of change.

To provide an overview of learning assessment, this teaching guide has several goals: 1) to define student learning assessment and why it is important, 2) to discuss several approaches that may help to guide and refine student assessment, 3) to address various methods of student assessment, including the test and the essay, and 4) to offer several resources for further research. In addition, you may find helpful this five-part video series on assessment that was part of the Center for Teaching’s Online Course Design Institute.

What is student assessment and why is it important?

In their handbook for course-based review and assessment, Martha L. A. Stassen et al. define assessment as “the systematic collection and analysis of information to improve student learning” (2001, p. 5). An intentional and thorough assessment of student learning is vital because it provides useful feedback to both instructors and students about the extent to which students are successfully meeting learning objectives. In their book Understanding by Design , Grant Wiggins and Jay McTighe offer a framework for classroom instruction — “Backward Design”— that emphasizes the critical role of assessment. For Wiggins and McTighe, assessment enables instructors to determine the metrics of measurement for student understanding of and proficiency in course goals. Assessment provides the evidence needed to document and validate that meaningful learning has occurred (2005, p. 18). Their approach “encourages teachers and curriculum planners to first ‘think like an assessor’ before designing specific units and lessons, and thus to consider up front how they will determine if students have attained the desired understandings” (Wiggins and McTighe, 2005, p. 18). [1]

Not only does effective assessment provide us with valuable information to support student growth, but it also enables critically reflective teaching. Stephen Brookfield, in Becoming a Critically Reflective Teacher, argues that critical reflection on one’s teaching is an essential part of developing as an educator and enhancing the learning experience of students (1995). Critical reflection on one’s teaching has a multitude of benefits for instructors, including the intentional and meaningful development of one’s teaching philosophy and practices. According to Brookfield, referencing higher education faculty, “A critically reflective teacher is much better placed to communicate to colleagues and students (as well as to herself) the rationale behind her practice. She works from a position of informed commitment” (Brookfield, 1995, p. 17). One important lens through which we may reflect on our teaching is our student evaluations and student learning assessments. This reflection allows educators to determine where their teaching has been effective in meeting learning goals and where it has not, allowing for improvements. Student assessment, then, both develops the rationale for pedagogical choices and enables teachers to measure the effectiveness of their teaching.

The scholarship of teaching and learning discusses two general forms of assessment. The first, summative assessment , is one that is implemented at the end of the course of study, for example via comprehensive final exams or papers. Its primary purpose is to produce an evaluation that “sums up” student learning. Summative assessment is comprehensive in nature and is fundamentally concerned with learning outcomes. While summative assessment is often useful for communicating final evaluations of student achievement, it does so without providing opportunities for students to reflect on their progress, alter their learning, and demonstrate growth or improvement; nor does it allow instructors to modify their teaching strategies before student learning in a course has concluded (Maki, 2002).

The second form, formative assessment , involves the evaluation of student learning at intermediate points before any summative form. Its fundamental purpose is to help students during the learning process by enabling them to reflect on their challenges and growth so they may improve. By analyzing students’ performance through formative assessment and sharing the results with them, instructors help students to “understand their strengths and weaknesses and to reflect on how they need to improve over the course of their remaining studies” (Maki, 2002, p. 11). This is what Pat Hutchings refers to as “assessment behind outcomes”: “the promise of assessment—mandated or otherwise—is improved student learning, and improvement requires attention not only to final results but also to how results occur. Assessment behind outcomes means looking more carefully at the process and conditions that lead to the learning we care about…” (Hutchings, 1992, p. 6, original emphasis). Formative assessment includes all manner of coursework with feedback, discussions between instructors and students, and end-of-unit examinations that provide an opportunity for students to identify important areas for necessary growth and development for themselves (Brown and Knight, 1994).

It is important to recognize that both summative and formative assessment indicate the purpose of assessment, not the method . Different methods of assessment (discussed below) can either be summative or formative depending on when and how the instructor implements them. Sally Brown and Peter Knight in Assessing Learners in Higher Education caution against a conflation of the method (e.g., an essay) with the goal (formative or summative): “Often the mistake is made of assuming that it is the method which is summative or formative, and not the purpose. This, we suggest, is a serious mistake because it turns the assessor’s attention away from the crucial issue of feedback” (1994, p. 17). If an instructor believes that a particular method is formative, but he or she does not take the requisite time or effort to provide extensive feedback to students, the assessment effectively functions as a summative assessment despite the instructor’s intentions (Brown and Knight, 1994). Indeed, feedback and discussion are critical factors that distinguish between formative and summative assessment; formative assessment is only as good as the feedback that accompanies it.

It is not uncommon to conflate assessment with grading, but this would be a mistake. Student assessment is more than just grading. Assessment links student performance to specific learning objectives in order to provide useful information to students and instructors about learning and teaching, respectively. Grading, on the other hand, according to Stassen et al. (2001) merely involves affixing a number or letter to an assignment, giving students only the most minimal indication of their performance relative to a set of criteria or to their peers: “Because grades don’t tell you about student performance on individual (or specific) learning goals or outcomes, they provide little information on the overall success of your course in helping students to attain the specific and distinct learning objectives of interest” (Stassen et al., 2001, p. 6). Grades are only the broadest of indicators of achievement or status, and as such do not provide very meaningful information about students’ learning of knowledge or skills, how they have developed, and what may yet improve. Unfortunately, despite the limited information grades provide students about their learning, grades do provide students with significant indicators of their status – their academic rank, their credits towards graduation, their post-graduation opportunities, their eligibility for grants and aid, etc. – which can distract students from the primary goal of assessment: learning. Indeed, shifting the focus of assessment away from grades and towards more meaningful understandings of intellectual growth can encourage students (as well as instructors and institutions) to attend to the primary goal of education.

Barbara Walvoord (2010) argues that assessment is more likely to be successful if there is a clear plan, whether one is assessing learning in a course or in an entire curriculum (see also Gelmon, Holland, and Spring, 2018). Without some intentional and careful plan, assessment can fall prey to unclear goals, vague criteria, limited communication of criteria or feedback, invalid or unreliable assessments, unfairness in student evaluations, or insufficient or even unmeasured learning. There are several steps in this planning process.

  • Defining learning goals. An assessment plan usually begins with a clearly articulated set of learning goals.
  • Defining assessment methods. Once goals are clear, an instructor must decide what evidence – which assignment(s) – will best reveal whether students are meeting the goals. We discuss several common methods below, but these need not be limited by anything other than the learning goals and the teaching context.
  • Developing the assessment. The next step is to formulate clear formats, prompts, and performance criteria that ensure students can prepare effectively and provide valid, reliable evidence of their learning.
  • Integrating assessment with other course elements. Then the remainder of the course design process can be completed. In both integrated (Fink, 2013) and backward course design models (Wiggins & McTighe, 2005), the primary assessment methods, once chosen, become the basis for other smaller reading and skill-building assignments, as well as daily learning experiences such as lectures, discussions, and other activities that prepare students for their best effort in the assessments.
  • Communicating about the assessment. Once the course has begun, it is both possible and necessary to communicate the assignment and its performance criteria to students. This communication may take many, and preferably multiple, forms to ensure student clarity and preparation: assignment overviews in the syllabus, handouts with prompts and assessment criteria, rubrics tied to learning goals, model assignments (e.g., papers), in-class discussions, and collaborative decision-making about prompts or criteria, among others.
  • Administering the assessment. Instructors then implement the assessment at the appropriate time, collecting evidence of student learning – e.g., receiving papers or administering tests.
  • Analyzing the results. Analysis of the results can take various forms – from reading essays to computer-assisted test scoring – but always involves comparing student work to the performance criteria and to relevant scholarly research from the field(s).
  • Communicating the results. Instructors then compose an assessment of areas of strength and improvement, and communicate it to students along with grades (if the assignment is graded), ideally within a reasonable time frame. This is also the time to determine whether the assessment itself was valid and reliable and, if not, how to adjust feedback and grades fairly and communicate this to students. For instance, were the test or essay questions confusing, yielding invalid and unreliable assessments of student knowledge?
  • Reflecting and revising. Once the assessment is complete, instructors and students can develop learning plans for the remainder of the course to ensure improvement, and the assignment may be changed for future courses as necessary.

Let’s see how this might work in practice through an example. An instructor in a Political Science course on American Environmental Policy may have a learning goal (among others) of students understanding the historical precursors of various environmental policies and how these both enabled and constrained the resulting legislation and its impacts on environmental conservation and health. The instructor therefore decides that the course will be organized around a series of short papers that will combine to make a thorough policy report, one that will also be the subject of student presentations and discussions in the last third of the course. Each student will write about an American environmental policy of their choice, with a first paper addressing its historical precursors, a second focused on the process of policy formation, and a third analyzing the extent of its impacts on environmental conservation or health. This will help students to meet the content knowledge goals of the course, in addition to its goals of improving students’ research, writing, and oral presentation skills. The instructor then develops the prompts, guidelines, and performance criteria that will be used to assess student skills, in addition to other course elements to best prepare them for this work – e.g., scaffolded units with quizzes, readings, lectures, debates, and other activities. Once the course has begun, the instructor communicates with the students about the learning goals, the assignments, and the criteria used to assess them, giving them the necessary context (goals, assessment plan) in the syllabus, handouts on the policy papers, rubrics with assessment criteria, model papers (if possible), and discussions with them as they need to prepare. 
The instructor then collects the papers at the appropriate due dates, assesses their conceptual and writing quality against the criteria and field’s scholarship, and then provides written feedback and grades in a manner that is reasonably prompt and sufficiently thorough for students to make improvements. Then the instructor can make determinations about whether the assessment method was effective and what changes might be necessary.

Assessment can vary widely from informal checks on understanding, to quizzes, to blogs, to essays, and to elaborate performance tasks such as written or audiovisual projects (Wiggins & McTighe, 2005). Below are a few common methods of assessment identified by Brown and Knight (1994) that are important to consider.

Essays

According to Euan S. Henderson, essays make two important contributions to learning and assessment: the development of skills and the cultivation of a learning style (Henderson, 1980). The American Association of Colleges & Universities (AAC&U) also has found that intensive writing is a “high impact” teaching practice likely to help students in their engagement, learning, and academic attainment (Kuh, 2008).

Things to Keep in Mind about Essays

  • Essays are a common form of writing assignment in courses and can be either a summative or formative form of assessment depending on how the instructor utilizes them.
  • Essays encompass a wide array of narrative forms and lengths, from short descriptive essays to long analytical or creative ones. Shorter essays are often best suited to assessing students’ understanding of threshold concepts and discrete analytical or writing skills, while longer essays afford assessment of higher-order concepts and more complex learning goals, such as rigorous analysis, synthetic writing, problem solving, or creative tasks.
  • A common challenge of the essay is that students can use it simply to regurgitate information rather than analyze and synthesize it to make arguments. Students need performance criteria and prompts that urge them to go beyond mere memorization and comprehension and encourage the highest levels of learning on Bloom’s Taxonomy. This opens the possibility of essay assignments that go beyond the common summary or descriptive essay on a given topic and demand, for example, narrative or persuasive essays or more creative projects.
  • Instructors commonly assume that students know how to write essays, and they can encounter disappointment or frustration when they discover that this is sometimes not the case. For this reason, it is important for instructors to make their expectations clear and be prepared to assist, or to point students to resources that will enhance their writing skills. Faculty may also encourage students to attend writing workshops at university writing centers, such as Vanderbilt University’s Writing Studio.

Exams and time-constrained, individual assessment

Examinations have traditionally been a gold standard of assessment, particularly in post-secondary education. Many educators prefer them because they can be highly effective, they can be standardized, they are easily integrated into disciplines with certification standards, and they are efficient to implement since they can allow for less labor-intensive feedback and grading. They can involve multiple forms of questions, be of varying lengths, and can be used to assess multiple levels of student learning. Like essays they can be summative or formative forms of assessment.

Things to Keep in Mind about Exams

  • Exams typically focus on the assessment of students’ knowledge of facts, figures, and other discrete information crucial to a course. While they can involve questioning that demands students to engage in higher order demonstrations of comprehension, problem solving, analysis, synthesis, critique, and even creativity, such exams often require more time to prepare and validate.
  • Exam questions can be multiple choice, true/false, or other discrete-answer formats, or they can be essay or problem-solving questions. For more on how to write good multiple-choice questions, see this guide.
  • Exams can make significant demands on students’ factual knowledge and therefore can have the side effect of encouraging cramming and surface learning. Further, when exams are offered infrequently, or when they carry high stakes because of their heavy weighting in course grade schemes or in student goals, they may invite violations of academic integrity.
  • In the process of designing an exam, instructors should consider the following questions. What are the learning objectives that the exam seeks to evaluate? Have students been adequately prepared to meet exam expectations? What are the skills and abilities that students need to do well on the exam? How will this exam be utilized to enhance the student learning process?

Self-Assessment

The goal of implementing self-assessment in a course is to enable students to develop their own judgment and the capacities for critical meta-cognition – to learn how to learn. In self-assessment students are expected to assess both the processes and products of their learning. While the assessment of the product is often the task of the instructor, implementing student self-assessment in the classroom ensures students evaluate their performance and the process of learning that led to it. Self-assessment thus provides a sense of student ownership of their learning and can lead to greater investment and engagement. It also enables students to develop transferable skills in other areas of learning that involve group projects and teamwork, critical thinking and problem-solving, as well as leadership roles in the teaching and learning process with their peers.

Things to Keep in Mind about Self-Assessment

  • Self-assessment is not self-grading. According to Brown and Knight, “Self-assessment involves the use of evaluative processes in which judgement is involved, where self-grading is the marking of one’s own work against a set of criteria and potential outcomes provided by a third person, usually the [instructor]” (1994, p. 52). Self-assessment can involve self-grading, but instructors of record retain the final authority to determine and assign grades.
  • To accurately and thoroughly self-assess, students require clear learning goals for the assignment in question, as well as rubrics that clarify different performance criteria and levels of achievement for each. These rubrics may be instructor-designed, or they may be fashioned through a collaborative dialogue with students. Rubrics need not include any grade assignation, but merely descriptive academic standards for different criteria.
  • Students may not have the expertise to assess themselves thoroughly, so it is helpful to build students’ capacities for self-evaluation, and it is important that self-assessments always be supplemented with faculty assessment.
  • Students may initially resist instructor attempts to involve them in the assessment process. This is usually due to insecurity or lack of confidence in their ability to evaluate their own work objectively, or possibly to habituation to more passive roles in the learning process. Brown and Knight note, however, that when students are asked to evaluate their work, student-determined outcomes are frequently very similar to those of instructors, particularly when the criteria and expectations have been made explicit in advance (1994).
  • Methods of self-assessment vary widely and can be as unique as the instructor or the course. Common forms of self-assessment involve written or oral reflection on a student’s own work, including portfolios, logs, instructor-student interviews, learner diaries and dialog journals, post-test reflections, and the like.

Peer Assessment

Peer assessment is a type of collaborative learning technique where students evaluate the work of their peers and, in return, have their own work evaluated as well. This dimension of assessment is significantly grounded in theoretical approaches to active learning and adult learning. Like self-assessment, peer assessment gives learners ownership of learning and focuses on the process of learning as students are able to “share with one another the experiences that they have undertaken” (Brown and Knight, 1994, p. 52). However, it also provides students with other models of performance (e.g., different styles or narrative forms of writing), as well as the opportunity to teach, which can enable greater preparation, reflection, and meta-cognitive organization.

Things to Keep in Mind about Peer Assessment

  • Similar to self-assessment, students benefit from clear and specific learning goals and rubrics. Again, these may be instructor-defined or determined through collaborative dialogue.
  • Also similar to self-assessment, it is important to not conflate peer assessment and peer grading, since grading authority is retained by the instructor of record.
  • While student peer assessments are most often fair and accurate, they sometimes can be subject to bias. In competitive educational contexts, for example when students are graded normatively (“on a curve”), students can be biased or potentially game their peer assessments, giving their fellow students unmerited low evaluations. Conversely, in more cooperative teaching environments or in cases when they are friends with their peers, students may provide overly favorable evaluations. Also, other biases associated with identity (e.g., race, gender, or class) and personality differences can shape student assessments in unfair ways. Therefore, it is important for instructors to encourage fairness, to establish processes based on clear evidence and identifiable criteria, and to provide instructor assessments as accompaniments or correctives to peer evaluations.
  • Students may not have the disciplinary expertise or assessment experience of the instructor, and therefore can issue unsophisticated judgments of their peers. Therefore, to avoid unfairness, inaccuracy, and limited comments, formative peer assessments may need to be supplemented with instructor feedback.

As Brown and Knight assert, utilizing multiple methods of assessment, including more than one assessor when possible, improves the reliability of the assessment data. It also ensures that students with diverse aptitudes and abilities can be assessed accurately and have equal opportunities to excel. However, a primary challenge of the multiple-methods approach is how to weigh the scores produced by the different methods. When particular methods produce a wider range of marks than others, instructors can potentially misinterpret and mis-evaluate student learning. Ultimately, Brown and Knight caution that, when multiple methods produce different messages about the same student, instructors should be mindful that the methods are likely assessing different forms of achievement (1994).
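
Brown and Knight's caution about combining marks from methods with different ranges can be made concrete with a small calculation. The sketch below is a minimal illustration in Python; the method names, weights, student scores, and class-wide ranges are all hypothetical, not drawn from the source. It rescales each method's marks to a common 0–1 range before applying weights, so that a method producing a wider spread of marks does not silently dominate the combined result.

```python
# Hypothetical marks (out of 100) for one student from three assessment methods.
scores = {"exam": 62, "essay": 78, "peer_assessment": 88}

# Hypothetical weights chosen by the instructor; they sum to 1.
weights = {"exam": 0.5, "essay": 0.3, "peer_assessment": 0.2}

# Hypothetical class-wide (min, max) marks observed for each method.
# Exams here span a wide range; peer assessments cluster near the top.
class_range = {"exam": (30, 95), "essay": (60, 90), "peer_assessment": (80, 95)}

def rescaled(method: str, score: float) -> float:
    """Map a raw mark onto 0-1 using the class-wide range for its method."""
    lo, hi = class_range[method]
    return (score - lo) / (hi - lo)

# Weighted combination of the range-normalized marks.
combined = sum(weights[m] * rescaled(m, s) for m, s in scores.items())
print(round(combined, 3))  # → 0.533
```

Without the rescaling step, the raw weighted average would be 0.5·62 + 0.3·78 + 0.2·88 = 72, a figure whose meaning shifts with each method's typical spread – exactly the misinterpretation Brown and Knight warn against.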

These are only a few of the many forms of assessment that one might use to evaluate and enhance student learning (see also the ideas presented in Brown and Knight, 1994). To this list of assessment forms and methods we may add many more that encourage students to produce anything from research papers to films, theatrical productions to travel logs, op-eds to photo essays, manifestos to short stories. The limits of what may be assigned as a form of assessment are as varied as the subjects and skills we seek to empower in our students. Vanderbilt’s Center for Teaching has an ever-expanding array of guides on creative models of assessment, listed below, so please visit them to learn more about other assessment innovations and subjects.

Whatever plan and method you use, assessment often begins with an intentional clarification of the values that drive it. While many in higher education may argue that values do not have a role in assessment, we contend that values (for example, rigor) always motivate and shape even the most objective of learning assessments. Therefore, as in other aspects of assessment planning, it is helpful to be intentional and critically reflective about what values animate your teaching and the learning assessments it requires. There are many values that may direct learning assessment, but common ones include rigor, generativity, practicability, co-creativity, and full participation (Bandy et al., 2018). What do these characteristics mean in practice?

Rigor. In the context of learning assessment, rigor means aligning our methods with the goals we have for students, principles of validity and reliability, ethics of fairness and doing no harm, critical examinations of the meaning we make from the results, and good faith efforts to improve teaching and learning. In short, rigor suggests understanding learning assessment as we would any other form of intentional, thoroughgoing, critical, and ethical inquiry.

Generativity. Learning assessments may be most effective when they create conditions for the emergence of new knowledge and practice, including student learning and skill development, as well as instructor pedagogy and teaching methods. Generativity opens up rather than closes down possibilities for discovery, reflection, growth, and transformation.

Practicability. Practicability recommends that learning assessment be grounded in the realities of the world as it is, fitting within the boundaries of both instructor’s and students’ time and labor. While this may, at times, advise a method of learning assessment that seems to conflict with the other values, we believe that assessment fails to be rigorous, generative, participatory, or co-creative if it is not feasible and manageable for instructors and students.

Full Participation. Assessments should be equally accessible to, and encouraging of, learning for all students, empowering all to thrive regardless of identity or background. This requires multiple and varied methods of assessment that are inclusive of diverse identities – racial, ethnic, national, linguistic, gendered, sexual, class, etcetera – and their varied perspectives, skills, and cultures of learning.

Co-creation. As alluded to above regarding self- and peer-assessment, co-creative approaches empower students to become subjects of, not just objects of, learning assessment. That is, learning assessments may be more effective and generative when assessment is done with, not just for or to, students. This is consistent with feminist, social, and community engagement pedagogies, in which values of co-creation encourage us to critically interrogate and break down hierarchies between knowledge producers (traditionally, instructors) and consumers (traditionally, students) (e.g., Saltmarsh, Hartley, & Clayton, 2009, p. 10; Weimer, 2013). In co-creative approaches, students’ involvement enhances the meaningfulness, engagement, motivation, and meta-cognitive reflection of assessments, yielding greater learning (Bass & Elmendorf, 2019). The principle of students being co-creators of their own education is what motivates the course design and professional development work Vanderbilt University’s Center for Teaching has organized around the Students as Producers theme.

Below is a list of other CFT teaching guides that supplement this one and may be of assistance as you consider all of the factors that shape your assessment plan.

  • Active Learning
  • An Introduction to Lecturing
  • Beyond the Essay: Making Student Thinking Visible in the Humanities
  • Bloom’s Taxonomy
  • Classroom Assessment Techniques (CATs)
  • Classroom Response Systems
  • How People Learn
  • Service-Learning and Community Engagement
  • Syllabus Construction
  • Teaching with Blogs
  • Test-Enhanced Learning
  • Assessing Student Learning (a five-part video series for the CFT’s Online Course Design Institute)

Angelo, Thomas A., and K. Patricia Cross. Classroom Assessment Techniques: A Handbook for College Teachers. 2nd edition. San Francisco: Jossey-Bass, 1993. Print.

Bandy, Joe, Mary Price, Patti Clayton, Julia Metzker, Georgia Nigro, Sarah Stanlick, Stephani Etheridge Woodson, Anna Bartel, & Sylvia Gale. Democratically engaged assessment: Reimagining the purposes and practices of assessment in community engagement . Davis, CA: Imagining America, 2018. Web.

Bass, Randy and Heidi Elmendorf. 2019. “ Designing for Difficulty: Social Pedagogies as a Framework for Course Design .” Social Pedagogies: Teagle Foundation White Paper. Georgetown University, 2019. Web.

Brookfield, Stephen D. Becoming a Critically Reflective Teacher . San Francisco: Jossey-Bass, 1995. Print

Brown, Sally, and Peter Knight. Assessing Learners in Higher Education. 1st edition. London; Philadelphia: Routledge, 1994. Print.

Cameron, Jeanne et al. “Assessment as Critical Praxis: A Community College Experience.” Teaching Sociology 30.4 (2002): 414–429. JSTOR . Web.

Fink, L. Dee. Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses. Second Edition. San Francisco, CA: Jossey-Bass, 2013. Print.

Gibbs, Graham and Claire Simpson. “Conditions under which Assessment Supports Student Learning.” Learning and Teaching in Higher Education 1 (2004): 3-31. Print.

Henderson, Euan S. “The Essay in Continuous Assessment.” Studies in Higher Education 5.2 (1980): 197–203. Taylor and Francis+NEJM . Web.

Gelmon, Sherril B., Barbara Holland, and Amy Spring. Assessing Service-Learning and Civic Engagement: Principles and Techniques. Second Edition . Stylus, 2018. Print.

Kuh, George. High-Impact Educational Practices: What They Are, Who Has Access to Them, and Why They Matter , American Association of Colleges & Universities, 2008. Web.

Maki, Peggy L. “Developing an Assessment Plan to Learn about Student Learning.” The Journal of Academic Librarianship 28.1 (2002): 8–13. ScienceDirect. Web.

Sharkey, Stephen, and William S. Johnson. Assessing Undergraduate Learning in Sociology . ASA Teaching Resource Center, 1992. Print.

Walvoord, Barbara. Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education. Second Edition . San Francisco, CA: Jossey-Bass, 2010. Print.

Weimer, Maryellen. Learner-Centered Teaching: Five Key Changes to Practice. Second Edition . San Francisco, CA: Jossey-Bass, 2013. Print.

Wiggins, Grant, and Jay McTighe. Understanding By Design. 2nd Expanded edition. Alexandria, VA: Assn. for Supervision & Curriculum Development, 2005. Print.



Planning Assessments

Assessment is a critical component of the instructional planning process and should have a prominent role in the learning process. This means that teachers should plan to integrate multiple forms of assessment and use the data to understand how well their students are learning the content and skills specified by the learning objectives. An assessment used during the learning process is referred to as a formative assessment. In this section, you will learn about the second stage in the Backward Design process of ensuring alignment between your learning objectives and your assessment plan.

Learning Objectives

By the end of this chapter, you will be able to:

  • Determine acceptable evidence of student learning; and
  • Select and/or design formative and summative assessments aligned with learning objectives to support, verify, and document learning.

Stage 2: Determining Acceptable Evidence

Now that we understand the value of having clear learning objectives, we can start to look at the second stage of the Backward Design model (Wiggins & McTighe, 2005) where we determine what types of evidence will be acceptable to demonstrate that our students have met our goals. When considering potential evidence, Popham and Baker (1970) contend that teachers must develop skills to differentiate between different types of practice to ensure that the evidence they collect aligns with their stated learning objectives. The assessment piece you choose, whether it be a quiz, assignment, essay, test, or project, will provide you with evidence of student learning. However, Popham and Baker suggest that you should evaluate what you are asking students to do based on the following practice types:

  • Equivalent: practice of the specific desired objective.
  • Analogous: practice similar to the desired objective but not identical.
  • En-route: practice of a skill needed before performing the desired objective.
  • Irrelevant: any practice or activity that does not align with the desired objective.

Recognizing what type of practice you are requiring students to engage in will help guide your selection, adoption, and creation of assessments in stage 2 of the Backward Design process. The key point to remember is that students should be given the opportunity to practice the specific skill(s) defined by your learning objectives (Popham & Baker, 1970). This second stage requires that you understand the difference between formative and summative assessment, foundational knowledge necessary to ensure you provide practice and feedback for your students during the learning process. In addition, we will investigate a variety of assessment types and their pros and cons in order to select the best format for your assessment.

Formative Assessment

Examples

For an in-depth look at formative assessment beyond what is discussed in this textbook, check out the series of videos by Dr. Heidi Andrade of the University at Albany about designing valid formative assessment tools.

Formative assessment includes all the practices teachers use to check student understanding throughout the teaching and learning process. Often, formative assessment is said to be an assessment for learning.

Definition of Formative Assessment

Formative assessment refers to the ongoing process teachers and students engage in when selecting learning goal(s), determining student performance in relation to those goals, and planning the steps needed to move students closer to them. This ongoing process is implemented through informal assessments – assessments that can easily be incorporated into day-to-day classroom activities. Informal assessments are content- and performance-driven and include questioning students during a discussion, reviewing student work (exit slips, assignments), and directly observing students as they work. Rather than being used for grading, formative assessment is used to inform instructional planning and to provide students with valuable feedback on their progress. Formative assessment data can be collected as a pre-assessment, during a lesson, or as a post-assessment at the close of a lesson.

In the video below, Rick Wormeli, author of Fair Isn’t Always Equal and Differentiation, explains the difference between summative and formative assessment and how formative assessment helps you offer better feedback to your students.

Listen to Joey Feith and Terri Drain discuss what assessment for learning looks like in a PE setting (show notes are available if you want to read instead).

Adjusting Instruction Based on Formative Assessment

Using assessment information to adjust instruction is fundamental to the concept of assessment for learning. Teachers make these adjustments “in the moment” during classroom instruction as well as during reflection and planning periods. They use the information they gain from questioning and observation to adjust their teaching as the class unfolds. If students cannot answer a question, the teacher may need to rephrase it, probe understanding of prior knowledge, or change the way the current idea is being presented. Teachers also need to learn to distinguish when only one or two students need individual help from when a large proportion of the class is struggling and whole-group intervention is needed.

After the class is over, effective teachers spend time analyzing how well the lessons went, what students did and did not seem to understand, and what needs to be done the next day. Evaluation of student work also provides important information for teachers. If many students are confused about a similar concept, the teacher needs to re-teach it and consider new ways of helping students understand the topic. If the majority of students complete the tasks very quickly and well, the teacher might decide that the assessment was not challenging enough.
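
When quiz or exit-ticket results are collected digitally, the item-level review described above can be scripted. The sketch below is a minimal illustration; the question labels, response data, and 60% mastery threshold are assumptions for the example, not drawn from the text. It computes the share of students answering each question correctly and flags items that may warrant whole-group re-teaching.

```python
# Hypothetical results: one dict per student, mapping question -> answered correctly?
results = [
    {"q1": True,  "q2": False, "q3": True},
    {"q1": True,  "q2": False, "q3": True},
    {"q1": False, "q2": False, "q3": True},
    {"q1": True,  "q2": True,  "q3": False},
]

def items_to_reteach(results, threshold=0.6):
    """Return questions answered correctly by less than `threshold` of the class."""
    flagged = []
    for q in results[0]:
        correct_rate = sum(r[q] for r in results) / len(results)
        if correct_rate < threshold:
            flagged.append(q)
    return flagged

print(items_to_reteach(results))  # → ['q2']: only 1 of 4 students answered q2 correctly
```

High rates on most items with one or two low outliers suggest targeted re-teaching of those concepts; uniformly high rates may instead indicate, as noted above, that the assessment was not challenging enough.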

Formative Assessment Strategies

Wondering where to begin? Check out Gretchen Vierstra’s blog post, where she suggests a variety of formative assessment strategies that you can use today, tomorrow, and next week.

Selecting and administering assessment techniques that are appropriate to the goals of instruction and to the developmental level of the students is a crucial component of effective formative assessment. Teachers need to know the characteristics of a wide variety of classroom assessment techniques and how these techniques can be adapted for various content, skills, and student characteristics (Seifert, 2011). There is a vast array of formative assessment strategies that have proven effective. For example, Natalie Regier has compiled a list of 60 formative assessment strategies along with guidance on how to use them successfully in the classroom. Finding different strategies to try has never been easier, as dozens of books have been written on the topic and hundreds of videos demonstrating effective strategies have been posted online. The key is not knowing every possible formative assessment strategy but being able to distinguish which strategy best fits your assessment needs.

Technology & Formative Assessment

Using Tech Tools for Formative Assessment

Technology is a powerful ally for teachers, especially in measuring student learning. Digital formative assessments let teachers assess and provide individual feedback in real time, more quickly than traditional paper-and-pen formative assessments. Timmis, Broadfoot, Sutherland, and Oldfield (2016) encourage teachers to reflect on the “four C’s” when using technology to enhance a lesson. Ask yourself: does the technology allow for increased collaboration or critical-thinking opportunities? Are students able to communicate their ideas in unique ways and to demonstrate creative thinking? Following this format produces lessons that foster student engagement, with technology as an enhancement tool.

Educators now have access to a variety of tools that allow for instant feedback. Google Forms, Socrative, Kahoot, Quizizz, Plickers, Formative, PollEverywhere, Edpuzzle, Nearpod, and Quizlet are all educational technologies that allow teachers and students to obtain instant results on the learning taking place. Students may access these systems through a variety of technological tools, including a learning management system (LMS) or a mobile device.

Looking for a quick and easy way to assess your students without devices in everyone’s hands? Read how Joey Feith uses Plickers in his PE classroom. This strategy could easily be adapted for all content areas.

Teachers can have students work through retrieval practice together (such as when using a polling tool like PollEverywhere or a game-like tool like Kahoot). There are also educational technology tools that are more self-paced and provide opportunities for learners to work at their own pace. Many of these services are starting to allow for either approach to be used. Quizlet flashcards and some of their games such as Scatter, Match, and Gravity can be used in a self-directed way by students. Quizlet also has a game called Quizlet Live that can be used with a group of students at one time for retrieval practice. Beyond assessment, teachers can utilize student devices, typically smartphones, to enhance learning in a variety of ways.

Exit Tickets

Exit Tickets are a great way to practice the backward design model on a small scale. Exit Tickets are brief mini-assessments aligned to your daily objective. Teachers can provide their students a short period at the end of the class session to complete and submit the Exit Ticket. By considering the content of the Exit Ticket before planning, teachers can ensure that they address the desired skills and concepts during their lesson. Teachers can then use the evidence gathered from Exit Tickets to guide future planning sessions for remediation purposes.

See It in Action: Exit Tickets

Check out this resource from the Teacher Toolkit website. They provide a video of a teacher using Exit Tickets and tips on how and when to use Exit Tickets.

Summative Assessment*

Assessment of learning is a formal assessment that involves assessing students to certify their competence and fulfill accountability mandates. Assessment of learning is typically summative, that is, administered after the instruction is completed (e.g. end-of-unit or chapter tests, end-of-term tests, or standardized tests). Summative assessments provide information about how well students mastered the material, whether students are ready for the next unit, and what grades should be given (Airasian, 2005).

Assessment Methods

Learning objectives guide what sort of practice is appropriate. There are four classifications for learning objectives: knowledge, reasoning, skill, or product (Chappuis et al., 2012). The action defined by the objective determines which assessment method is most appropriate for gathering evidence of learning. The table below outlines commonly used words and descriptions for each classification.

Classifications of Learning Objectives

Source: Classroom Assessment of Student Learning (Chappuis et al. 2012)

It is important to understand the focus of your learning objective because it will define what type of assessment tool to use. There are many methods to assess students' learning, but three common types are selected response, constructed response, and performance tasks (Chappuis et al., 2012). The visuals below from Chappuis et al. (2012) and Stiggins (2005) show how some assessment methods are better suited to certain learning targets than others.

Target-Assessment Method Match


Links between achievement targets and assessment methods. Source: Student-involved assessment for learning (Stiggins, 2005)

In his book Grading Smarter Not Harder, Myron Dueck provides suggestions on how teachers might vary traditional multiple-choice tests to allow students to share their thinking. Consider how this option might change a test for your students. Dueck proposes an alternate response sheet that encourages students to place the choice they think is correct in the first space. If students are torn between two answers, or believe there is more than one correct response, they can place the second letter in the space provided. For each question where students give more than one response, they must also provide a written explanation of their thinking.

The first and arguably most common form of assessment used in secondary classrooms is selected response. By asking various questions at varying levels of knowledge, selected-response assessments are an efficient way to measure student knowledge and understanding. However, multiple-choice, true-false, matching, and fill-in-the-blank style assessments can provide only limited evidence of student reasoning skills and cannot demonstrate a student's ability to apply skills. A benefit of selected-response assessments is that they collect information quickly and are easy to grade, thus shortening the feedback loop. Selected response can therefore be a great tool for formative assessment. That is not to say it can't or shouldn't be used as a summative assessment tool, but if your learning objectives require action beyond recall of knowledge, you should probably look for another method.

The second form of assessment often used is constructed response. Constructed responses are often chosen to elicit evidence of students’ thinking regarding reasoning, understanding of connections, and application of content knowledge. This assessment form may be more heavily used in some disciplines than others. Lastly, the third type of assessment is the performance assessment. Performance tasks are best suited for gathering evidence of a student’s ability to perform a specific skill or create a product. With the increased pressure on schools to prepare students for college and careers, there has been a push to integrate more performance-type assessments into the teaching and learning process. The idea is that by adding more performance-based assessments, students will develop a deeper understanding of content and be able to not only retain information but also apply and transfer that knowledge to new areas.

Understanding which assessment method to use is crucial to accurately assess student learning. However, learning when and how to use assessment to further learning and measure learning is also necessary. Consider reviewing the Teacher Made Assessment Strategies resource for a deeper dive into the strengths and weaknesses of different assessment types. In the next sections, we will look at how to ensure that our assessments measure accurately.

Considerations for Formatting Assessments

If you choose to summatively assess your students with a performance assessment, then a well-designed rubric can provide students with feedback on how they did on each objective. However, traditional assessments (multiple choice, free response, etc.) often lack detailed feedback on student learning objectives. To provide better feedback, consider either grouping assessment items based on learning objectives or tagging items with information that points back to specific objectives or standards for reference.

Grouping or tagging assessment items allows a teacher to track student progress and provide specific feedback to students. Tracking individual learning objectives on an assessment provides a clearer picture of student learning of the objectives than an overall score. By providing subscores for each learning objective, students can see their strengths and weaknesses and use your feedback to guide any remediation efforts. If your assessments are broken into sections based on learning objectives, you might allow students to re-test specific sections of a unit versus taking the whole assessment again. This could save time and stress for students and the teacher.
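As a sketch of this idea, tagging each assessment item with its learning objective turns per-objective subscores into a simple grouping exercise. The objective names, point values, and earned scores below are hypothetical:

```python
# Hypothetical sketch: items tagged with the learning objective they assess.
# Objective names, point values, and earned scores are invented for illustration.
from collections import defaultdict

items = [
    {"objective": "2.1 Add fractions",     "possible": 4, "earned": 4},
    {"objective": "2.1 Add fractions",     "possible": 4, "earned": 2},
    {"objective": "2.2 Compare fractions", "possible": 6, "earned": 3},
    {"objective": "2.2 Compare fractions", "possible": 6, "earned": 3},
]

def subscores(items):
    """Per-objective percentages, so strengths and weaknesses show separately."""
    totals = defaultdict(lambda: {"possible": 0, "earned": 0})
    for item in items:
        totals[item["objective"]]["possible"] += item["possible"]
        totals[item["objective"]]["earned"] += item["earned"]
    return {obj: round(100 * t["earned"] / t["possible"])
            for obj, t in totals.items()}

print(subscores(items))
# {'2.1 Add fractions': 75, '2.2 Compare fractions': 50}
```

An overall score of 63% would hide the pattern; the per-objective report makes clear that remediation should target comparing fractions.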

High-Quality Assessments*

To be able to select and administer appropriate assessment techniques, teachers need to know about the variety of techniques that can be used as well as what factors ensure that the assessment techniques are high quality. We begin by considering high-quality assessments. For an assessment to be high quality, it needs to have good validity and reliability as well as the absence of bias.

Validity  is the evaluation of the  “adequacy and appropriateness of the interpretations and uses of assessment results”  for a given group of individuals (Linn & Miller, 2005, p. 68).

For example, is it appropriate to conclude that the results of a mathematics test on fractions given to recent immigrants accurately represent their understanding of fractions?

Is it appropriate for the teacher to conclude, based on her observations, that a kindergarten student, Jasmine, has Attention Deficit Disorder because she does not follow the teacher’s oral instructions?

Obviously, in each situation, other interpretations are possible: the immigrant students may have poor English skills rather than poor mathematics skills, or Jasmine may be hearing impaired.

It is important to understand that validity refers to the interpretation and uses made of the results of an assessment procedure, not the assessment procedure itself. For example, making judgments about the results of the same test on fractions may be valid if all the students understand English well. A teacher's conclusion from her observations that the kindergarten student has Attention Deficit Disorder (ADD) may be appropriate if the student has been screened for hearing and other disorders (although the classification of a disorder like ADD cannot be made by one teacher). Validity involves making an overall judgment of the degree to which the interpretations and uses of the assessment results are justified. Validity is a matter of degree (e.g. high, moderate, or low validity) rather than all-or-none (e.g. totally valid vs. invalid) (Linn & Miller, 2005).

Content validity  evidence is associated with the question: How well does the assessment include the content or tasks it is supposed to?  For example, suppose your educational psychology instructor devises a mid-term test and tells you this includes chapters one to seven in the textbook.  All the items in the test should be based on the content from educational psychology, not your methods or cultural foundations classes. Also, the items in the test should cover content from all seven chapters and not just chapters three to seven—unless the instructor tells you that these chapters have priority.

Teachers have to be clear about their purposes and priorities for instruction before they can begin to gather evidence related to content validity. Content validation determines the degree to which assessment tasks are relevant to and representative of the tasks judged by the teacher (or test developer) to represent their goals and objectives (Linn & Miller, 2005). In their book, The Understanding by Design Guide to Creating High-Quality Units, Wiggins & McTighe share a method that teachers can use to determine the validity of their assessments. Consider how the Two Question Validity Test (Wiggins & McTighe, 2011, p. 91) might help you evaluate how well your assessment measures student understanding versus recall abilities, effort, creativity, or presentation skills.

Construct validity evidence is more complex than content validity evidence. Often, we are interested in making broader judgments about students’ performances than specific skills such as doing fractions. The focus may be on constructs such as mathematical reasoning or reading comprehension.

A construct is a characteristic of a person we assume exists to help explain behavior.

For example, we use the concept of test anxiety to explain why some individuals, when taking a test, have difficulty concentrating, experience physiological reactions such as sweating, and perform poorly on tests but not on class assignments. Similarly, mathematics reasoning and reading comprehension are constructs, as we use them to help explain performance on an assessment.

Construct validation  is the process of determining the extent to which performance on an assessment can be interpreted in terms of the intended constructs and is not influenced by factors irrelevant to the construct.

For example, judgments about recent immigrants’ performance on a mathematical reasoning test administered in English will have low construct validity if the results are influenced by English language skills that are irrelevant to mathematical problem-solving. Similarly, construct validity of end-of-semester examinations is likely to be poor for those students who are highly anxious when taking major tests but not during regular class periods or when doing assignments. Teachers can help increase construct validity by trying to reduce factors that influence performance but are irrelevant to the construct being assessed. These factors include anxiety, English language skills, and reading speed  (Linn & Miller 2005).

The third form of validity evidence is called criterion-related validity. Selective colleges in the USA use the ACT or SAT, among other criteria, to choose who will be admitted because these standardized tests help predict freshman grades, i.e. they have high criterion-related validity. Some K-12 schools give students math or reading tests in the fall semester to predict which students are likely to do well on the annual state tests administered in the spring and which are unlikely to pass and will need additional assistance. If the tests administered in the fall do not predict students' performances accurately, the additional assistance may be given to the wrong students, illustrating the importance of criterion-related validity.

Reliability

Reliability refers to the consistency of the measurement (Linn & Miller, 2005). Suppose Mr. Garcia is teaching a unit on food chemistry in his tenth-grade class and gives an assessment at the end of the unit using test items from the teachers' guide. Reliability is related to questions such as: How similar would the scores of the students be if they had taken the assessment on a Friday or a Monday? Would the scores have varied if Mr. Garcia had selected different test items, or if a different teacher had graded the test? An assessment provides information about students by using a specific measure of performance at one particular time. Unless the results from the assessment are reasonably consistent over different occasions, different raters, or different tasks (in the same content domain), confidence in the results will be low, and they cannot be used to improve student learning.

We cannot expect perfect consistency. Students' memory, attention, fatigue, effort, and anxiety fluctuate and so influence performance. Even trained raters vary somewhat when grading assessments such as essays, science projects, or oral presentations. Also, the wording and design of specific items influence students' performances. However, some assessments are more reliable than others, and there are several strategies teachers can use to increase reliability.

  • First, assessments with more tasks or items typically have higher reliability.

To understand this, consider two tests: one with five items and one with 50. Chance factors influence the shorter test more than the longer one. If a student misunderstands one of the items on the first test, the total score is heavily influenced (it would be reduced by 20 percent). In contrast, if one item on the 50-item test is confusing, the total score is influenced much less (by only 2 percent). This does not mean that assessments should be inordinately long, but, on average, enough tasks should be included to reduce the influence of chance variations.

  • Second, clear directions and tasks help increase reliability.

If the directions or wording of specific tasks or items are unclear, then students have to guess what they mean, undermining the accuracy of the results.

  • Third, clear scoring criteria are crucial in ensuring high reliability  (Linn & Miller, 2005).
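The arithmetic behind the first strategy is worth making concrete. This minimal sketch shows how much of the total score rides on a single equally weighted item:

```python
# A minimal sketch of the chance-factor arithmetic above: with equally
# weighted items, one misread item carries 1/N of the total score.
def impact_of_one_item(num_items):
    """Percentage of the total score riding on a single item."""
    return 100 / num_items

print(impact_of_one_item(5))   # 20.0: one bad item swings a 5-item test by 20%
print(impact_of_one_item(50))  # 2.0: the same slip moves a 50-item test by 2%
```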

Absence of bias

Bias occurs in assessment when there are components in the assessment method or the administration of the assessment that distort the performance of the student because of their characteristics such as gender, ethnicity, or social class (Popham, 2005).

Two types of assessment bias are important: offensiveness and unfair penalization.

An assessment is most likely to be offensive to a subgroup of students when negative stereotypes are included in the test. For example, an assessment in a health class could include items in which all the doctors are men and all the nurses are women, or a series of questions in a social studies class could portray Latinos and Asians as immigrants rather than native-born Americans. In these examples, some female, Latino, or Asian students are likely to be offended by the stereotypes, which can distract them from performing well on the assessment.

Unfair penalization occurs when items disadvantage one group not because they may be offensive but because of differential background experiences. For example, an item for math assessment that assumes knowledge of a particular sport may disadvantage groups not as familiar with that sport (e.g. American football for recent immigrants). Or an assessment on teamwork that asks students to model their concept of a team on a symphony orchestra is likely to be easier for those students who have attended orchestra performances—probably students from affluent families. Unfair penalization does not occur just because some students do poorly in class. For example, asking questions about a specific sport in a physical education class when information on that sport had been discussed in class is not unfair penalization as long as the questions do not require knowledge beyond that taught in class that some groups are less likely to have.

It can be difficult for new teachers teaching in multi-ethnic classrooms to devise interesting assessments that do not penalize any groups of students. Teachers need to think seriously about the impact of students’ differing backgrounds on the assessment they use in class. Listening carefully to what students say is important as is learning about the backgrounds of the students.

Assessments in the PE Setting

If you are teaching in a PE setting and you are thinking that assessment “looks different,” then you might consider reviewing some of the resources below to see how the principles above can help you gather evidence of student learning and skill development.

Formative assessment is most commonly referred to as assessment for learning, as the purpose is to inform your instructional decisions to guide student learning. In contrast, summative assessment is referred to as assessment of learning, as the purpose is to measure what students know at the conclusion of learning. To effectively use formative or summative assessment in the classroom, teachers must clearly define their learning objectives, choose assessment techniques that provide reliable individual evidence of student learning, and use data of student understanding to adjust their instruction. Technology should be considered when planning assessments as it may assist in increasing student motivation and analyzing resulting data.

Summarizing Key Understandings

Peer Examples, References & Attributions

Attribution: “Definition of Formative Assessment” was adapted in part from GSC Lesson Planning 101 by  Deborah Kolling and Kate Shumway-Pitt, licensed CC BY-SA 4.0

Attribution: “Adjusting Instruction Based on Assessment” was adapted in part from Educational Psychology by Kelvin Seifert, licensed CC BY 3.0 . Download for free at http://cnx.org/contents/[email protected]

Attribution: “Technology & Formative Assessment” was adapted in part from Igniting Your Teaching with Educational Technology by Malikah R. Nu-Man and Tamika M. Porter, licensed CC BY 4.0

Attribution: “Summative Assessment” was adapted in part from Ch. 15 Teacher made assessment strategies by Kevin Seifert and Rosemary Sutton, licensed under a Creative Commons Attribution 4.0 International License .

Attribution: “High-Quality Assessments” section is adapted in part from Ch. 15 Teacher made assessment strategies by Kevin Seifert and Rosemary Sutton, licensed under a Creative Commons Attribution 4.0 International License .

Airasian, P. W. (2004). Classroom Assessment: Concepts and Applications 3rd ed. Boston: McGraw Hill.

Chappuis, J., Stiggins, R. J., Chappuis, S., & Arter, J. A. (2012). Classroom assessment for student learning: Doing it right – using it well. Boston, MA: Pearson.

Linn, R. L., & Miller, M. D. (2005). Measurement and Assessment in Teaching 9th ed. Upper Saddle River, NJ: Pearson.

Popham, W. J. (2005). Classroom assessment: What teachers need to know, 4th edition. Boston, MA: Pearson.

Popham, W. J. (2017). Classroom assessment: What teachers need to know, 8th edition. Boston, MA: Pearson

Popham, W. J., & Baker, E. L. (1970). Planning an instructional sequence. New Jersey: Prentice Hall.

Seifert, K. (May 11, 2011). Educational Psychology. OpenStax CNX. Download for free at http://cnx.org/contents/[email protected]

Stiggins, R. J. (2005). Student-involved assessment for learning. Upper Saddle River, NJ: Prentice Hall.

Timmis, S., Broadfoot, P., Sutherland, R., & Oldfield, A. (2016). Rethinking assessment in a digital age: Opportunities, challenges and risks. British Educational Research Journal, 42(3), 454-476.

Wiggins, G., & McTighe, J. (2011). The Understanding by Design Guide to Creating High-Quality Units. Alexandria, VA: Association for Supervision and Curriculum Development.

Wiggins, G., & McTighe, J. (2005). Understanding by design (2nd ed.). Alexandria, VA: Association for Supervision and Curriculum Development.

Planning Assessments Copyright © by Jason Proctor is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.


Center for Teaching Innovation


Using rubrics

A rubric is a type of scoring guide that assesses and articulates specific components and expectations for an assignment. Rubrics can be used for a variety of assignments: research papers, group projects, portfolios, and presentations.  

Why use rubrics? 

Rubrics help instructors: 

  • Assess assignments consistently from student to student. 
  • Save time in grading, both short-term and long-term. 
  • Give timely, effective feedback and promote student learning in a sustainable way. 
  • Clarify expectations and components of an assignment for both students and course teaching assistants (TAs). 
  • Refine teaching methods by evaluating rubric results. 

Rubrics help students: 

  • Understand expectations and components of an assignment. 
  • Become more aware of their learning process and progress. 
  • Improve work through timely and detailed feedback. 

Considerations for using rubrics 

When developing rubrics consider the following:

  • Although it takes time to build a rubric, time will be saved in the long run as grading and providing feedback on student work will become more streamlined.  
  • A rubric can be a fillable PDF that can easily be emailed to students. 
  • They can be used for oral presentations. 
  • They are a great tool to evaluate teamwork and individual contribution to group tasks. 
  • Rubrics facilitate peer-review by setting evaluation standards. Have students use the rubric to provide peer assessment on various drafts. 
  • Students can use them for self-assessment to improve personal performance and learning. Encourage students to use the rubrics to assess their own work. 
  • Motivate students to improve their work by using rubric feedback to resubmit their work incorporating the feedback. 

Getting Started with Rubrics 

  • Start small by creating one rubric for one assignment in a semester.  
  • Ask colleagues if they have developed rubrics for similar assignments or adapt rubrics that are available online. For example, the AACU has rubrics for topics such as written and oral communication, critical thinking, and creative thinking. RubiStar helps you develop your rubric based on templates. 
  • Examine an assignment for your course. Outline the elements or critical attributes to be evaluated (these attributes must be objectively measurable). 
  • Create an evaluative range for performance quality under each element; for instance, “excellent,” “good,” “unsatisfactory.” 
  • Avoid using subjective or vague criteria such as “interesting” or “creative.” Instead, outline objective indicators that would fall under these categories. 
  • The criteria must clearly differentiate one performance level from another. 
  • Assign a numerical scale to each level. 
  • Give a draft of the rubric to your colleagues and/or TAs for feedback. 
  • Train students to use your rubric and solicit feedback. This will help you judge whether the rubric is clear to them and will identify any weaknesses. 
  • Rework the rubric based on the feedback. 
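Once the elements, levels, and numerical scale are outlined, a rubric is essentially a lookup table. This hypothetical sketch (criteria and descriptors invented for illustration) shows one way to represent and score it:

```python
# Hypothetical rubric sketch: criteria and level descriptors are invented
# for illustration; real rubrics come from your own learning objectives.
rubric = {
    "Thesis": {
        3: "Clear, arguable thesis stated in the introduction",
        2: "Thesis present but vague or only partly arguable",
        1: "No identifiable thesis",
    },
    "Evidence": {
        3: "Every claim supported by a cited source",
        2: "Most claims supported; some citations missing",
        1: "Claims largely unsupported",
    },
}

def score(ratings, rubric):
    """Total a student's ratings, rejecting any level the rubric doesn't define."""
    for criterion, level in ratings.items():
        if level not in rubric.get(criterion, {}):
            raise ValueError(f"{criterion}: level {level} is not defined in the rubric")
    return sum(ratings.values())

print(score({"Thesis": 3, "Evidence": 2}, rubric))  # 5 out of a possible 6
```

Keeping the descriptors alongside the scale is what makes feedback fast: the grader records levels, and each level already carries its written explanation.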

Assessments in Education: 7 Types and How to Use Them

May 25th, 2023


Rosilyn Jackson, M.Ed.

Customer Experience Manager


Assessments in education are a combination of tools and techniques teachers use to appraise, measure, and document students' prevailing educational needs, learning progress, skill attainment, and academic standing.

While traditional assessment typically involves standardized tests administered to large student populations, forward-thinking teachers employ a range of customized assessments that target specific outcomes and promote well-rounded learning.


Types of Assessments In Education and How to Apply Them

1. Diagnostic Assessment

Diagnostic tests are administered at the beginning of the course, topic, or unit. They allow the teacher to gauge how familiar the students are with the subject, as well as their opinions and biases. Well-designed diagnostics enable teachers to draw a sound instruction plan that resonates with the students.

The test functions as a benchmark for student progress throughout the lesson. Student performance across similar tests demonstrates to the teacher how much they have understood and what they are struggling with. Teachers then use this information to determine when to move on to more complicated topics. Tests are administered progressively using quizzes, interviews, discussions, and interim assessments.

The stakes are low as scores on diagnostic tests typically don’t count for student grades. They are also sometimes administered at the end of the course, unit, or topic to assess if the learning objectives were fulfilled. Diagnostic assessments identify learning gaps so lesson plans can be improved to fill in the gaps.

2. Formative Assessment

Formative assessments are applied frequently throughout the student learning journey and offer immediate insights that can be used to guide instruction. Teachers often form or modify their instruction plan based on student feedback, for example, by changing their teaching and learning activities to accommodate students' responses and emerging needs. Unlike with diagnostic assessments, teachers don't have to wait for particular milestones to administer formative assessments.

These are also low-stakes assessments, since their ultimate goal is to inform targeted customization of instruction based on interim outcomes.

3. Summative Assessment

Teachers use summative assessments to measure students’ grasp of the lesson, unit, or course after it has been taught. The assessment scores are measured against a predetermined standard or benchmark and form part of the student’s academic records, so the stakes are higher than in most types of diagnostic or formative assessments. The summative outcomes often determine if the student will progress to the next level of learning.

Summative assessments are administered in the form of standardized tests, projects, and recitals, among others. The data can be used formatively in subsequent units or courses as it provides the basis for advancing to the next level.

4. Peer Assessment

In this form of assessment, the students critique each other's work and provide feedback with guidance from the teacher. This form of active learning enables learners to self-assess, motivating responsibility and self-improvement.

Peer assessment can be formative or summative, depending on the purpose and point of administration. The teacher gives the assignment and models how students should assess their peers' papers.


5. Ipsative Assessments

In ipsative assessments , the student’s performance is tracked by comparing their current and previous scores. It doesn’t matter if their score meets some established criteria or if their performance is better or worse than other students at the same grade level. Both students and teachers can determine if the feedback from previous assessments has made the learning process more effective.

The assessment task does not demotivate weaker students because they are not comparing themselves to stronger students.
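As a minimal sketch (the scores are hypothetical), an ipsative report reduces to comparing each attempt against the same student's previous attempt:

```python
# Hypothetical sketch: an ipsative report compares a student only to their
# own earlier attempts, never to classmates or an external benchmark.
def ipsative_deltas(scores):
    """Change from each attempt to the next for a single student."""
    return [later - earlier for earlier, later in zip(scores, scores[1:])]

print(ipsative_deltas([52, 61, 58, 70]))  # [9, -3, 12]: two gains, one dip
```

Reporting the deltas rather than the raw scores is what keeps the comparison personal: a student moving from 52 to 61 sees growth even if the class average is 80.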

6. Norm-Referenced Assessments

This method assesses a student's competency in comparison to their peers. Groups of students are ranked according to state assessment scores or how students from previous years fared under similar circumstances. It is used to determine how students are faring relative to their peers, and the next course of action is determined by how far they are above or below the median score.

7. Criterion-Referenced Assessments

In criterion-referenced assessments, the student's performance is compared to predetermined standards that are established when the assignment is set. The student's score is compared to a set learning standard or performance level. Unlike norm-referenced assessment, this method does not refer to other students' assessment results.
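The contrast between the two reference points can be sketched in a few lines (the 70-point cutoff and cohort scores are hypothetical): the same raw score can fall short of a fixed standard while still ranking ahead of much of the cohort.

```python
# Hypothetical sketch: the cutoff and cohort scores are invented for
# illustration. The same raw score is interpreted two different ways.
def criterion_referenced(score, cutoff=70):
    """Pass/fail against a fixed, predetermined standard."""
    return score >= cutoff

def norm_referenced(score, cohort):
    """Percentile rank: the share of the cohort scoring below this student."""
    below = sum(1 for s in cohort if s < score)
    return 100 * below / len(cohort)

cohort = [55, 60, 62, 65, 68, 72, 75, 80, 85, 90]
print(criterion_referenced(68))     # False: short of the 70-point standard
print(norm_referenced(68, cohort))  # 40.0: still ahead of 40% of the cohort
```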

High-Stakes vs Low-Stakes Assessments

High-stakes tests are typically standardized tests administered with accountability as the main aim. Important decisions are made based on the results. Federal, state, and local government agencies use the assessment data to measure the effectiveness of schools, teachers, or school districts.

The data impacts both negative and positive policy decisions, including sanctions, penalties or reduced funding, and rewards, such as bonuses and salary increments. On a personal level, student performance in high-stakes assessments generally determines whether they will be promoted to the next grade or be awarded a diploma at the conclusion of their high school studies.

Low-stakes test scores are important to the individual student or teacher and have no significant public consequence. An interim assessment is considered low-stakes because the classroom teacher uses the score to monitor the student’s progress in relation to their unique learning objective. Any adjustments made thereafter are between the student, their teacher, and the student’s family when their involvement is necessary.

Each of these approaches serves a specific learning objective or target student. They have been developed by authorities in education, including teachers, special education experts, mental health professionals, and select teams that combine these professions.

When students are assessed in meaningful ways, they are motivated to become agents of their own education and empowered towards achievement. Learning is much more than a single score or assessment.

Easily plan and coordinate all your state and local K-12 school assessments with TestHound . Find out more about TestHound by contacting Education Advanced .

If your school is interested in new ways to improve the learning experience for children, you may also be interested in automating tasks and streamlining processes so that your teachers have more time to teach. Education Advanced offers a large suite of tools that may be able to help. For example, four of our most popular and effective tools are:

Cardonex, our master schedule software, helps schools save time on building master schedules. Many schools used to spend weeks using whiteboards to organize the right students, teachers, and classrooms into the right order so that students could graduate on time and get their preferred classes. However, Cardonex can now be used to automate this task and deliver 90% of students' first-choice classes within a couple of days.

TestHound, our test accommodation software, helps schools coordinate thousands of students across all state and local K-12 school assessments while taking into account dozens of accommodations (reading disabilities, physical disabilities, translations, etc.) for students.

Pathways, our college and career readiness software, helps administrators and counselors create, track, and analyze graduation pathways to ensure secondary students are on track to graduate.

Evaluation, our teacher evaluation software, documents every step of the staff evaluation process, including walk-throughs, self-evaluations, supporting evidence, reporting, and performance analytics.



The Ohio State University

Designing Assessments of Student Learning

Hollie Nyseth Brehm, Associate Professor, Department of Sociology

Professor Hollie Nyseth Brehm was a graduate student the first time she taught a class: “I didn’t have any training on how to teach, so I assigned a final paper and gave them instructions: ‘Turn it in at the end of course.’ That was sort of it.” Brehm didn’t have a rubric or a process to check in with students along the way. Needless to say, the assignment didn’t lead to any major breakthroughs for her students. But it was a learning experience for Brehm. As she grew her teaching skills, she began to carefully craft assignments to align with course goals, make tasks realistic and meaningful, and break down large assignments into manageable steps. “Now I always have rubrics. … I always scaffold the assignment such that they’ll start by giving me their paper topic and a couple of sources and then turn in a smaller portion of it, and we write it in pieces. And that leads to a much better learning experience for them—and also for me, frankly, when I turn to grade it.”

Reflect  

Have you ever planned a big assignment that didn’t turn out as you’d hoped? What did you learn, and how would you design that assignment differently now? 

What are students learning in your class? Are they meeting your learning outcomes? You simply cannot answer these questions without assessment of some kind.

As educators, we measure student learning through many means, including assignments, quizzes, and tests. These assessments can be formal or informal, graded or ungraded. But assessment is not simply about awarding points and assigning grades. Learning is a process, not a product, and that process takes place during activities such as recall and practice. Assessing skills in varied ways helps you adjust your teaching throughout your course to support student learning.


Research tells us that our methods of assessment don’t only measure how much students have learned. They also play an important role in the learning process. A phenomenon known as the “testing effect” suggests students learn more from repeated testing than from repeated exposure to the material they are trying to learn (Karpicke & Roediger, 2008). While exposure to material, such as during lecture or study, helps students store new information, it’s crucial that students actively practice retrieving that information and putting it to use. Frequent assessment throughout a course provides students with the practice opportunities that are essential to learning.

In addition, we can’t assume students can transfer what they have practiced in one context to a different context. Successful transfer of learning requires understanding of deep, structural features and patterns that novices to a subject are still developing (Barnett & Ceci, 2002; Bransford & Schwartz, 1999). If we want students to be able to apply their learning in a wide variety of contexts, they must practice what they’re learning in a wide variety of contexts.

Providing a variety of assessment types gives students multiple opportunities to practice and demonstrate learning. One way to categorize the range of assessment options is as formative or summative.

Formative and Summative Assessment

Opportunities not simply to practice, but to receive feedback on that practice, are crucial to learning (Ambrose et al., 2010). Formative assessment facilitates student learning by providing frequent low-stakes practice coupled with immediate and focused feedback. Whether graded or ungraded, formative assessment helps you monitor student progress and guide students to understand which outcomes they’ve mastered, which they need to focus on, and what strategies can support their learning. Formative assessment also informs how you modify your teaching to better meet student needs throughout your course.

Technology Tip

Design quizzes in CarmenCanvas to provide immediate and useful feedback to students based on their answers. Learn more about setting up quizzes in Carmen. 
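The kind of answer-level feedback described in this tip can be sketched in a few lines of Python. This is a hypothetical, tool-agnostic illustration (not Carmen's actual interface): each answer option carries its own focused comment, so a student gets immediate, specific guidance whichever option they choose.

```python
# Hypothetical formative quiz item: every option pairs its text with a
# focused feedback comment, mirroring answer-level feedback in an LMS quiz.
QUESTION = {
    "prompt": "Which practice best exploits the testing effect?",
    "options": {
        "a": ("Re-reading the textbook chapter",
              "Re-exposure stores information but gives little retrieval practice."),
        "b": ("Taking frequent low-stakes quizzes",
              "Correct: repeated retrieval strengthens learning more than re-study."),
        "c": ("Highlighting key passages",
              "Highlighting is passive; it does not require retrieving the material."),
    },
    "correct": "b",
}

def give_feedback(question, choice):
    """Return (is_correct, focused_feedback) for a student's chosen option."""
    _text, comment = question["options"][choice]
    return choice == question["correct"], comment
```

Because each distractor explains *why* it is wrong, even an incorrect attempt becomes a learning moment rather than just a lost point.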

Summative assessment measures student learning by comparing it to a standard. Usually these types of assessments evaluate a range of skills or overall performance at the end of a unit, module, or course. Unlike formative assessment, they tend to focus more on product than process. These high-stakes experiences are typically graded and should be less frequent (Ambrose et al., 2010).

Using Bloom's Taxonomy

A visual depiction of the Bloom's Taxonomy categories positioned like the layers of a cake. [Row 1, at bottom] Remember: Recognizing and recalling facts. [Row 2] Understand: Understanding what the facts mean. [Row 3] Apply: Applying the facts, rules, concepts, and ideas. [Row 4] Analyze: Breaking down information into component parts. [Row 5] Evaluate: Judging the value of information or ideas. [Row 6, at top] Create: Combining parts to make a new whole.

Bloom’s Taxonomy is a common framework for thinking about how students can demonstrate their learning on assessments, as well as for articulating course and lesson learning outcomes .

Benjamin Bloom (alongside collaborators Max Englehart, Edward Furst, Walter Hill, and David Krathwohl) published Taxonomy of Educational Objectives in 1956.   The taxonomy provided a system for categorizing educational goals with the intent of aiding educators with assessment. Commonly known as Bloom’s Taxonomy, the framework has been widely used to guide and define instruction in both K-12 and university settings. The original taxonomy from 1956 included a cognitive domain made up of six categories: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation. The categories after Knowledge were presented as “skills and abilities,” with the understanding that knowledge was the necessary precondition for putting these skills and abilities into practice. 

A revised Bloom's Taxonomy from 2001 updated these six categories to reflect how learners interact with knowledge. In the revised version, students can:  Remember content, Understand ideas, Apply information to new situations, Analyze relationships between ideas, Evaluate information to justify perspectives or decisions, and Create new ideas or original work. In the graphic pictured here, the categories from the revised taxonomy are imagined as the layers of a cake.

Assessing students on a variety of Bloom's categories will give you a better sense of how well they understand your course content. The taxonomy can be a helpful guide to predicting which tasks will be most difficult for students so you can provide extra support where it is needed. It can also be used to craft more transparent assignments and test questions by homing in on the specific skills you want to assess and finding the right language to communicate exactly what you want students to do. See the Sample Bloom's Verbs in the Examples section below.
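As an illustration of pairing categories with task language, here is a small Python sketch. The verbs are common examples associated with the revised taxonomy, not the official list from this page's Examples section:

```python
# Illustrative (not exhaustive) verbs for each category of the revised
# Bloom's Taxonomy, useful when wording assignment prompts and test questions.
BLOOMS_VERBS = {
    "Remember":   ["define", "list", "recall", "identify"],
    "Understand": ["explain", "summarize", "classify", "paraphrase"],
    "Apply":      ["use", "demonstrate", "solve", "implement"],
    "Analyze":    ["compare", "differentiate", "organize", "attribute"],
    "Evaluate":   ["justify", "critique", "judge", "defend"],
    "Create":     ["design", "compose", "construct", "formulate"],
}

def prompt_stem(category, topic):
    """Build a transparent task stem targeting one Bloom's category."""
    verb = BLOOMS_VERBS[category][0]
    return f"{verb.capitalize()} {topic}."
```

Choosing the verb deliberately makes it clear to students which cognitive skill the question is assessing.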

Diving deeper into Bloom's Taxonomy

Like most aspects of our lives, activities and assessments in today’s classroom are inextricably linked with technology. In 2008, Andrew Churches extended Bloom’s Taxonomy to address the emerging changes in learning behaviors and opportunities as “technology advances and becomes more ubiquitous.” Consult Bloom’s Digital Taxonomy for ideas on using digital tools to facilitate and assess learning across the six categories of learning.

Did you know that the cognitive domain (commonly referred to simply as Bloom's Taxonomy) was only one of three domains in the original Bloom's Taxonomy (1956)? While it is certainly the most well-known and widely used, the other two domains— psychomotor and affective —may be of interest to some educators. The psychomotor domain relates to physical movement, coordination, and motor skills—it might apply to the performing arts or other courses that involve movement, manipulation of objects, and non-discursive communication like body language. The affective domain pertains to feelings, values, motivations, and attitudes and is used more often in disciplines like medicine, social work, and education, where emotions and values are integral aspects of learning. Explore the full taxonomy in  Three Domains of Learning: Cognitive, Affective, and Psychomotor (Hoque, 2017).

In Practice

Consider the following to make your assessments of student learning effective and meaningful.

Align assignments, quizzes, and tests closely to learning outcomes.

It goes without saying that you want students to achieve the learning outcomes for your course. The testing effect implies, then, that your assessments must help them retrieve the knowledge and practice the skills that are relevant to those outcomes.

Plan assessments that measure specific outcomes for your course. Instead of choosing quizzes and tests that are easy to grade or assignment types common to your discipline, carefully consider what assessments will best help students practice important skills. When assignments and feedback are aligned to learning outcomes, and you share this alignment with students, they have a greater appreciation for your course and develop more effective strategies for study and practice targeted at achieving those outcomes (Wang, et al., 2013).


Provide authentic learning experiences.

Consider how far removed from “the real world” traditional assessments like academic essays, standard textbook problems, and multiple-choice exams feel to students. In contrast, assignments that are authentic resemble real-world tasks. They feel relevant and purposeful, which can increase student motivation and engagement (Fink, 2013). Authentic assignments also help you assess whether students will be able to transfer what they learn into realistic contexts beyond your course.

Integrate assessment opportunities that prepare students to be effective and successful once they graduate, whether as professionals, as global citizens, or in their personal lives.

To design authentic assignments:

  • Choose real-world content . If you want students to be able to apply disciplinary methods, frameworks, and terminology to solve real-world problems after your course, you must have them engage with real-world examples, procedures, and tools during your course. Include actual case studies, documents, data sets, and problems from your field in your assessments.
  • Target a real-world audience . Ask students to direct their work to a tangible reader, listener or viewer, rather than to you. For example, they could write a blog for their peers or create a presentation for a future employer.
  • Use real-world formats . Have students develop content in formats used in professional or real-life discourse. For example, instead of a conventional paper, students could write an email to a colleague or a letter to a government official, develop a project proposal or product pitch for a community-based company, post a how-to video on YouTube, or create an infographic to share on social media.

Simulations, role plays, case studies, portfolios, project-based learning, and service learning are all great avenues to bring authentic assessment into your course.

Make sure assignments are achievable.

Your students juggle coursework from several classes, so it’s important to be conscious of workload. Assign tasks they can realistically handle at a given point in the term. If it takes you three hours to do something, it will likely take your students six hours or more. Choose assignments that assess multiple learning outcomes from your course to keep your grading manageable and your feedback useful (Rayner et al., 2016).

Scaffold assignments so students can develop knowledge and skills over time.

For large assignments, use scaffolding to integrate multiple opportunities for feedback, reflection, and improvement. Scaffolding means breaking a complex assignment down into component parts or smaller progressive tasks over time. Practicing these smaller tasks individually before attempting to integrate them into a completed assignment supports student learning by reducing the amount of information they need to process at a given time (Salden et al., 2006).

Scaffolding ensures students will start earlier and spend more time on big assignments. And it provides you more opportunities to give feedback and guidance to support their ultimate success. Additionally, scaffolding can draw students’ attention to important steps in a process that are often overlooked, such as planning and revision, leading them to be more independent and thoughtful about future work.

A familiar example of scaffolding is a research paper. You might ask students to submit a topic or thesis in Week 3 of the semester, an annotated bibliography of sources in Week 6, a detailed outline in Week 9, a first draft on which they can get peer feedback in Week 11, and the final draft in the last week of the semester.
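That scaffold can also be expressed as data, which makes it easy to print a schedule or load due dates into a course calendar. The sketch below is hypothetical; it takes the milestone weeks from the example above and assumes a 15-week semester for the final draft:

```python
# The research-paper scaffold as (week, deliverable) pairs.
# Week 15 for the final draft is an assumption (a 15-week semester).
SCAFFOLD = [
    (3,  "Topic or thesis statement"),
    (6,  "Annotated bibliography of sources"),
    (9,  "Detailed outline"),
    (11, "First draft (peer feedback)"),
    (15, "Final draft"),
]

def schedule(milestones):
    """Render the scaffold as printable lines, in week order."""
    return [f"Week {week:>2}: {task}" for week, task in sorted(milestones)]
```

Keeping the milestones in one place also makes it easy to adjust the pacing when a course runs on a different calendar.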

Your course journey is decided in part by how you sequence assignments. Consider where students are in their learning and place assignments at strategic points throughout the term. Scaffold across the course journey by explaining how each assignment builds upon the learning achieved in previous ones (Walvoord & Anderson, 2010).

Be transparent about assignment instructions and expectations. 

Communicate clearly to students about the purpose of each assignment, the process for completing the task, and the criteria you will use to evaluate it before they begin the work. Studies have shown that transparent assignments support students to meet learning goals and result in especially large increases in success and confidence for underserved students (Winkelmes et al., 2016).

To increase assignment transparency:


  • Explain how the assignment links to one or more course learning outcomes . Understanding why the assignment matters and how it supports their learning can increase student motivation and investment in the work.
  • Outline steps of the task in the assignment prompt . Clear directions help students structure their time and effort. This is also a chance to call out disciplinary standards with which students are not yet familiar or guide them to focus on steps of the process they often neglect, such as initial research.
  • Provide a rubric with straightforward evaluation criteria . Rubrics make transparent which parts of an assignment you care most about. Sharing clear criteria sets students up for success by giving them the tools to self-evaluate and revise their work before submitting it. Be sure to explain your rubric, and particularly to unpack new or vague terms; for example, language like "argue," “close reading,” "list significant findings," and "document" can mean different things in different disciplines. It is helpful to show exemplars and non-exemplars along with your rubric to highlight differences in unacceptable, acceptable, and exceptional work.

Engage students in reflection or discussion to increase assignment transparency. Have them consider how the assessed outcomes connect to their personal lives or future careers. In-class activities that ask them to grade sample assignments and discuss the criteria they used, compare exemplars and non-exemplars, engage in self- or peer-evaluation, or complete steps of the assignment when you are present to give feedback can all support student success.

Technology Tip   

Enter all assignments and due dates in your Carmen course to increase transparency. When assignments are entered in Carmen, they also populate to the Calendar, Syllabus, and Grades areas so students can easily track their upcoming work. Carmen also allows you to develop rubrics for every assignment in your course.

Sample Bloom’s Verbs


Include frequent low-stakes assignments and assessments throughout your course to provide the opportunities for practice and feedback that are essential to learning. Consider a variety of formative and summative assessment types so students can demonstrate learning in multiple ways. Use Bloom’s Taxonomy to determine—and communicate—the specific skills you want to assess.

Remember that effective assessments of student learning are:

  • Aligned to course learning outcomes
  • Authentic, or resembling real-world tasks
  • Achievable and realistic
  • Scaffolded so students can develop knowledge and skills over time
  • Transparent in purpose, tasks, and criteria for evaluation

Additional resources:

  • Collaborative Learning Techniques: A Handbook for College Faculty (book)
  • Cheating Lessons (book)
  • Minds Online: Teaching Effectively with Technology (book)
  • Assessment: The Silent Killer of Learning (video)
  • TILT Higher Ed Examples and Resources (website)
  • Writing to Learn: Critical Thinking Activities for Any Classroom (guide)

Ambrose, S.A., Bridges, M.W., Lovett, M.C., DiPietro, M., & Norman, M.K. (2010).  How learning works: Seven research-based principles for smart teaching . John Wiley & Sons. 

Barnett, S.M., & Ceci, S.J. (2002). When and where do we apply what we learn? A taxonomy for far transfer.  Psychological Bulletin , 128 (4). 612–637.  doi.org/10.1037/0033-2909.128.4.612  

Bransford, J.D, & Schwartz, D.L. (1999). Rethinking transfer: A simple proposal with multiple implications.  Review of Research in Education , 24 . 61–100.  doi.org/10.3102/0091732X024001061  

Fink, L. D. (2013).  Creating significant learning experiences: An integrated approach to designing college courses . John Wiley & Sons. 

Karpicke, J.D., & Roediger, H.L., III. (2008). The critical importance of retrieval for learning.  Science ,  319 . 966–968.  doi.org/10.1126/science.1152408  

Rayner, K., Schotter, E. R., Masson, M. E., Potter, M. C., & Treiman, R. (2016). So much to read, so little time: How do we read, and can speed reading help?.  Psychological Science in the Public Interest ,  17 (1), 4-34.  doi.org/10.1177/1529100615623267     

Salden, R.J.C.M., Paas, F., van Merriënboer, J.J.G. (2006). A comparison of approaches to learning task selection in the training of complex cognitive skills.  Computers in Human Behavior , 22 (3). 321–333.  doi.org/10.1016/j.chb.2004.06.003  

Walvoord, B. E., & Anderson, V. J. (2010).  Effective grading: A tool for learning and assessment in college . John Wiley & Sons. 

Wang, X., Su, Y., Cheung, S., Wong, E., & Kwong, T. (2013). An exploration of Biggs’ constructive alignment in course design and its impact on students’ learning approaches.  Assessment & Evaluation in Higher Education , 38 (4). 477–491.

Winkelmes, M., Bernacki, M., Butler, J., Zochowski, M., Golanics, J., & Weavil, K.H. (2016). A teaching intervention that increases underserved college students’ success.  Peer Review , 18 (1/2). 31–36. Retrieved from  https://www.aacu.org/peerreview/2016/winter-spring/Winkelmes

Related Teaching Topics

  • A Positive Approach to Academic Integrity
  • Creating and Adapting Assignments for Online Courses
  • AI Teaching Strategies: Transparent Assignment Design
  • Designing Research or Inquiry-Based Assignments
  • Using Backward Design to Plan Your Course
  • Universal Design for Learning: Planning with All Students in Mind

Assessment Rubrics

A rubric is commonly defined as a tool that articulates the expectations for an assignment by listing criteria and, for each criterion, describing levels of quality (Andrade, 2000; Arter & Chappuis, 2007; Stiggins, 2001). Criteria are used in determining the level at which student work meets expectations. Markers of quality give students a clear idea about what must be done to demonstrate a certain level of mastery, understanding, or proficiency (e.g., "Exceeds Expectations" does xyz, "Meets Expectations" does only xy or yz, "Developing" does only x or y or z). Rubrics can be used for any assignment in a course, or for any way in which students are asked to demonstrate what they've learned. They can also be used to facilitate self- and peer-reviews of student work.

Rubrics aren't just for summative evaluation. They can be used as a teaching tool as well. When used as part of a formative assessment, they can help students understand both the holistic nature and the specific analytic criteria of the learning expected, as well as the level of learning expected, and then make decisions about their current level of learning to inform revision and improvement (Reddy & Andrade, 2010).

Why use rubrics?

Rubrics help instructors:

  • Provide students with feedback that is clear, directed, and focused on ways to improve learning.
  • Demystify assignment expectations so students can focus on the work instead of guessing "what the instructor wants."
  • Reduce time spent on grading and develop consistency in how you evaluate student learning across students and throughout a class.

Rubrics help students:

  • Focus their efforts on completing assignments in line with clearly set expectations.
  • Self- and peer-reflect on their learning, making informed changes to achieve the desired learning level.

Developing a Rubric

During the process of developing a rubric, instructors might:

  • Select an assignment for your course, ideally one you identify as time-intensive to grade or one students report as having unclear expectations.
  • Decide what you want students to demonstrate about their learning through that assignment. These are your criteria.
  • Identify the markers of quality on which you feel comfortable evaluating students’ level of learning, often alongside a numerical scale (e.g., "Accomplished," "Emerging," "Beginning" for a developmental approach).
  • Give students the rubric ahead of time, and advise them to use it in guiding their completion of the assignment.

It can be overwhelming to create a rubric for every assignment in a class at once, so start by creating one rubric for one assignment. See how it goes and develop more from there! Also, do not reinvent the wheel. Rubric templates and examples exist all over the Internet, or consider asking colleagues if they have developed rubrics for similar assignments. 
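As a minimal sketch of the process described above, a rubric can be modeled as criteria crossed with markers of quality. The criteria names and point values below are illustrative inventions, not a standard:

```python
# Hypothetical analytic rubric: each criterion maps markers of quality
# to point values, making expectations (and grading) explicit.
RUBRIC = {
    "Thesis and argument": {"Accomplished": 4, "Emerging": 2, "Beginning": 1},
    "Use of evidence":     {"Accomplished": 4, "Emerging": 2, "Beginning": 1},
    "Organization":        {"Accomplished": 4, "Emerging": 2, "Beginning": 1},
}

def score(rubric, ratings):
    """Total a submission given one quality level per criterion."""
    return sum(rubric[criterion][level] for criterion, level in ratings.items())
```

Because the same criterion-by-level table is applied to every submission, scoring stays consistent across students, which is one of the main benefits rubrics offer instructors.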

Sample Rubrics

Examples of holistic and analytic rubrics : see Tables 2 & 3 in “Rubrics: Tools for Making Learning Goals and Evaluation Criteria Explicit for Both Teachers and Learners” (Allen & Tanner, 2006)

Examples across assessment types : see “Creating and Using Rubrics,” Carnegie Mellon Eberly Center for Teaching Excellence and Educational Innovation

“VALUE Rubrics” : see the Association of American Colleges and Universities set of free, downloadable rubrics, with foci including creative thinking, problem solving, and information literacy. 

Andrade, H. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13–18.

Arter, J., & Chappuis, J. (2007). Creating and recognizing quality rubrics. Upper Saddle River, NJ: Pearson/Merrill Prentice Hall.

Stiggins, R.J. (2001). Student-involved classroom assessment (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.

Reddy, Y., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448.


Social Sci LibreTexts

4.3: Types of Assignments

Ana Stevenson, James Cook University



Introduction

As discussed in the previous chapter, assignments are a common method of assessment at university. You may encounter many assignments over your years of study, yet some will look quite different from others. By recognising different types of assignments and understanding the purpose of the task, you can direct your writing skills effectively to meet task requirements. This chapter draws on the skills from the previous chapter, and extends the discussion, showing you where to aim with different types of assignments.

The chapter begins by exploring the popular essay assignment, with its two common categories, analytical and argumentative essays. It then examines assignments requiring case study responses , as often encountered in fields such as health or business. This is followed by a discussion of assignments seeking a report (such as a scientific report) and reflective writing assignments, which are common in nursing, education, and human services. The chapter concludes with an examination of annotated bibliographies and literature reviews. The chapter also has a selection of templates and examples throughout to enhance your understanding and improve the efficacy of your assignment writing skills.

Different Types of Written Assignments

Essays

At university, an essay is a common form of assessment. In the previous chapter, Writing Assignments, we discussed what is meant by academic writing and how to demonstrate it in your assignments. It is important that you consider these aspects of structure, tone, and language when writing an essay.

Components of an essay

Essays should use formal but reader-friendly language and have a clear and logical structure. They must include research from credible academic sources such as peer reviewed journal articles and textbooks. This research should be referenced throughout your essay to support your ideas (see the chapter Working with Information).

Diagram: allocating an assignment's word count across its sections

If you have never written an essay before, you may feel unsure about how to start. Breaking your essay into sections and allocating words accordingly will make this process more manageable and will make planning the overall essay structure much easier.

  • An essay requires an introduction, body paragraphs, and a conclusion.
  • Generally, an introduction and conclusion are each approximately 10% of the total word count.
  • The remaining words can then be divided into sections and a paragraph allowed for each area of content you need to cover.
  • Use your task and criteria sheet to decide what content needs to be in your plan.
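The allocation rule above is simple arithmetic. The following sketch (an illustration, not part of the original chapter) divides a word count using the 10% guideline:

```python
def allocate_words(total, body_sections):
    """Split an essay's word count: ~10% each for the introduction and
    conclusion, with the remainder shared equally among body paragraphs."""
    intro = round(total * 0.10)        # ~10% for the introduction
    conclusion = round(total * 0.10)   # ~10% for the conclusion
    per_section = (total - intro - conclusion) // body_sections
    return {
        "introduction": intro,
        "conclusion": conclusion,
        "per_body_paragraph": per_section,
    }
```

For a 2,000-word essay with four body sections, this yields 200 words each for the introduction and conclusion and 400 words per body paragraph.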

An effective essay introduction needs to inform your reader by doing four basic things:

An effective essay body paragraph needs to:

An effective essay conclusion needs to:

Diagram: elements of an essay

Common types of essays

You may be required to write different types of essays, depending on your study area and topic. Two of the most commonly used essays are analytical and argumentative . The task analysis process discussed in the previous chapter Writing Assignments will help you determine the type of essay required. For example, if your assignment question uses task words such as analyse, examine, discuss, determine, or explore, then you would be writing an analytical essay . If your assignment question has task words such as argue, evaluate, justify, or assess, then you would be writing an argumentative essay . Regardless of the type of essay, your ability to analyse and think critically is important and common across genres.

Analytical essays

These essays usually provide some background description of the relevant theory, situation, problem, case, or image that is your topic. Being analytical requires you to look carefully at the various components or sections of your topic in a methodical and logical way to create understanding.

The purpose of the analytical essay is to demonstrate your ability to examine the topic thoroughly. This requires you to go deeper than description by considering different sides of the situation, comparing and contrasting a variety of theories and the positives and negatives of the topic. Although your position on the topic may be clear in an analytical essay, it is not necessarily a requirement that you explicitly identify this with a thesis statement. In an argumentative essay, however, it is necessary that you explicitly identify your position on the topic with a thesis statement. If you are unsure whether you are required to take a position, and provide a thesis statement, it is best to check with your tutor.

Argumentative essays

These essays require you to take a position on the assignment topic. This is expressed through your thesis statement in your introduction. You must then present and develop your arguments throughout the body of your assignment using logically structured paragraphs. Each of these paragraphs needs a topic sentence that relates to the thesis statement. In an argumentative essay, you must reach a conclusion based on the evidence you have presented.

Case study responses

Case studies are a common form of assignment in many study areas and students can underperform in this genre for a number of key reasons.

Students typically lose marks for not:

  • Relating their answer sufficiently to the case details.
  • Applying critical thinking.
  • Writing with clear structure.
  • Using appropriate or sufficient sources.
  • Using accurate referencing.

When structuring your response to a case study, remember to refer to the case. Structure your paragraphs similarly to an essay paragraph structure, but include examples and data from the case as additional evidence to support your points (see Figure 68). The colours in the sample paragraph below show the function of each component.

Diagram of the structure of a case study response (Figure 68)

The Nursing and Midwifery Board of Australia (NMBA) Code of Conduct and Nursing Standards (2018) play a crucial role in determining the scope of practice for nurses and midwives. A key component discussed in the code is the provision of person-centred care and the formation of therapeutic relationships between nurses and patients (NMBA, 2018). This ensures patient safety and promotes health and wellbeing (NMBA, 2018). The standards also discuss the importance of partnership and shared decision-making in the delivery of care (NMBA, 2018, 4). Boyd and Dare (2014) argue that good communication skills are vital for building therapeutic relationships and trust between patients and care givers. This will help ensure the patient is treated with dignity and respect and improve their overall hospital experience. In the case, the therapeutic relationship with the client has been compromised in several ways. Firstly, the nurse did not conform adequately to the guidelines for seeking informed consent before performing the examination as outlined in principle 2.3 (NMBA, 2018). Although she explained the procedure, she failed to give the patient appropriate choices regarding her health care.

Topic sentence | Explanations using paraphrased evidence including in-text references | Critical thinking (asks the so what? question to demonstrate your student voice). | Relating the theory back to the specifics of the case. The case becomes a source of examples as extra evidence to support the points you are making.

Report writing

Reports are a common form of assessment at university and are also used widely in many professions, including business, government, scientific, and technical occupations.

Reports can take many different structures and are normally written to present information in a structured manner, which may include explaining laboratory experiments, technical information, or a business case. Reports may be written for different audiences, including clients, your manager, technical staff, or senior leadership within an organisation. It is important to consider what format is required; the choice of structure will depend upon professional requirements and the ultimate aims of the report. Consider some of the options in the table below (see Table 18.2).

Reflective writing

Reflective writing is a popular method of assessment at university. It is used to help you explore feelings, experiences, opinions, events, or new information to gain a clearer and deeper understanding of your learning.


A reflective writing task requires more than a description or summary. It requires you to analyse a situation, problem or experience, consider what you may have learnt, and evaluate how this may impact your thinking and actions in the future. This requires critical thinking, analysis, and usually the application of good quality research, to demonstrate your understanding or learning from a situation.

Diagram of bubbles stating What? So what? Now what?

Essentially, reflective practice is the process of looking back on past experiences, engaging with them in a thoughtful way, and drawing conclusions to inform future experiences. The reflection skills you develop at university will be vital in the workplace, helping you use feedback for growth and continuous improvement. There are numerous models of reflective writing, and you should refer to your subject guidelines for your expected format. If there is no specific framework, a simple model to help frame your thinking is What? So what? Now what? (Rolfe et al., 2001).

The Gibbs’ Reflective Cycle

The Gibbs’ Cycle of reflection encourages you to consider your feelings as part of the reflective process. There are six specific steps to work through. Following this model carefully, and being clear about the requirements of each stage, will help you focus your thinking and reflect more deeply. This model is popular in Health.

Gibbs’ reflective cycle: description, feelings, evaluation, analysis, conclusion, action plan

The 4 R’s of reflective thinking

This model (Ryan & Ryan, 2013) was designed specifically for university students engaged in experiential learning. Experiential learning includes any ‘real-world’ activities, including practice-led activities, placements, and internships. Experiential learning, and the use of reflective practice to heighten this learning, is common in Creative Arts, Health, and Education.

Annotated bibliography

What is it?

An annotated bibliography is an alphabetical list of appropriate sources (e.g. books, journal articles, or websites) on a topic, accompanied by a brief summary, evaluation, and sometimes an explanation or reflection on their usefulness or relevance to your topic. Its purpose is to teach you to research carefully, evaluate sources and systematically organise your notes. An annotated bibliography may be one part of a larger assessment item or a stand-alone assessment item. Check your task guidelines for the number of sources you are required to annotate and the word limit for each entry.

How do I know what to include?

When choosing sources for your annotated bibliography, it is important to determine:

  • The topic you are investigating and if there is a specific question to answer.
  • The type of sources on which you need to focus.
  • Whether these sources are reputable and of high quality.

What do I say?

Important considerations include:

  • Is the work current?
  • Is the work relevant to your topic?
  • Is the author credible/reliable?
  • Is there any author bias?
  • What are the strengths and limitations? (This may include an evaluation of the research methodology.)

Annotated bibliography example

Literature reviews

Generally, a literature review requires that you review the scholarly literature and establish the main ideas that have been written about your chosen topic. A literature review does not summarise and evaluate each resource you find (this is what you would do in an annotated bibliography). You are expected to analyse and synthesise or organise common ideas from multiple texts into key themes which are relevant to your topic (see Figure 18.10). You may also be expected to identify gaps in the research.

It is easy to get confused by the terminology used for literature reviews. Some tasks may be described as a systematic literature review when actually the requirement is simpler: to review the literature on the topic, but to do so in a systematic way. There is a distinct difference (see Table 15.4). As a commencing undergraduate student, it is unlikely you would be expected to complete a systematic literature review, as this is a complex and more advanced research task. It is important to check with your lecturer or tutor if you are unsure of the requirements.

When conducting a literature review, use a table or a spreadsheet (if you know how) to organise the information you find. Record the full reference details of the sources, as this will save you time later when compiling your reference list (see Table 18.5).

Table of themes

Overall, this chapter has provided an introduction to the types of assignments you can expect to complete at university, along with tips, strategies, examples, and templates for completing them. First, the chapter investigated essay assignments, including analytical and argumentative essays. It then examined case study assignments, followed by a discussion of the report format. Reflective writing, popular in nursing, education, and human services, was also considered. Finally, the chapter briefly addressed annotated bibliographies and literature reviews.

  • Not all assignments at university are the same. Understanding the requirements of different types of assignments will assist in meeting the criteria more effectively.
  • There are many different types of assignments. Most will require an introduction, body paragraphs, and a conclusion.
  • An essay should have a clear and logical structure and use formal but reader-friendly language.
  • Breaking your assignment into manageable chunks makes it easier to approach.
  • Effective body paragraphs contain a topic sentence.
  • A case study structure is similar to an essay, but you must remember to provide examples from the case or scenario to demonstrate your points.
  • The type of report you may be required to write will depend on its purpose and audience. A report requires structured writing and uses headings.
  • Reflective writing is popular in many disciplines and is used to explore feelings, experiences, opinions, or events to discover what learning or understanding has occurred. Reflective writing requires more than description. You need to be analytical, consider what has been learnt, and evaluate the impact of this on future actions.
  • Annotated bibliographies teach you to research and evaluate sources and systematically organise your notes. They may be part of a larger assignment.
  • Literature reviews require you to look across the literature and analyse and synthesise the information you find into themes.

Gibbs, G. (1988). Learning by doing: A guide to teaching and learning methods. Further Education Unit, Oxford Brookes University.

Rolfe, G., Freshwater, D., & Jasper, M. (2001). Critical reflection in nursing and the helping professions: A user’s guide. Palgrave Macmillan.

Ryan, M., & Ryan, M. (2013). Theorising a model for teaching and assessing reflective learning in higher education. Higher Education Research & Development, 32(2), 244-257. https://doi.org/10.1080/07294360.2012.661704


How to Use Rubrics


A rubric is a document that describes the criteria by which students’ assignments are graded. Rubrics can be helpful for:

  • Making grading faster and more consistent (reducing potential bias). 
  • Communicating your expectations for an assignment to students before they begin. 

Moreover, for assignments whose criteria are more subjective, the process of creating a rubric and articulating what it looks like to succeed at an assignment provides an opportunity to check for alignment with the intended learning outcomes and modify the assignment prompt, as needed.

Why rubrics?

Rubrics are best for assignments or projects that require evaluation on multiple dimensions. Creating a rubric makes the instructor’s standards explicit to both students and other teaching staff for the class, showing students how to meet expectations.

Additionally, the more comprehensive a rubric is, the more it streamlines grading: students will get informative feedback about their performance from the rubric, even if they don’t receive as many individualized comments, and grading can be more standardized and efficient across graders.

Finally, rubrics allow for reflection, as the instructor has to think about their standards and outcomes for the students. Using rubrics can help with self-directed learning in students as well, especially if rubrics are used to review students’ own work or their peers’, or if students are involved in creating the rubric.

How to design a rubric

1. Consider the desired learning outcomes

What learning outcomes is this assignment reinforcing and assessing? If the learning outcome seems “fuzzy,” iterate on the outcome by thinking about the expected student work product. This may help you more clearly articulate the learning outcome in a way that is measurable.  

2. Define criteria

What does a successful assignment submission look like? As described by Allen and Tanner (2006), it can be helpful to develop an initial list of categories in which students should demonstrate proficiency by completing the assignment. These categories should correlate with the intended learning outcomes you identified in Step 1, although they may be more granular in some cases. For example, if the task assesses students’ ability to formulate an effective communication strategy, what components of their communication strategy will you be looking for? Talking with colleagues or looking at existing rubrics for similar tasks may give you ideas for categories to consider for evaluation.

If you have assigned this task to students before and have samples of student work, it can be helpful to create a qualitative observation guide. This approach is described in Linda Suskie’s book Assessing Student Learning, where she suggests thinking about what made you decide to give one assignment an A and another a C, as well as taking notes when grading assignments and looking for common patterns. The themes you comment on repeatedly may reveal your goals and expectations for students. An example of an observation guide used to take notes on predetermined areas of an assignment is shown here.

In summary, consider the following list of questions when defining criteria for a rubric (O’Reilly and Cyr, 2006):

  • What do you want students to learn from the task?
  • How will students demonstrate that they have learned?
  • What knowledge, skills, and behaviors are required for the task?
  • What steps are required for the task?
  • What are the characteristics of the final product?

After developing an initial list of criteria, prioritize the most important skills you want to target, and eliminate inessential criteria or combine similar skills into one group. Most rubrics have between 3 and 8 criteria. Rubrics that are too lengthy are difficult to grade and make it challenging for students to understand the key skills they need to demonstrate for the given assignment.

3. Create the rating scale

According to Suskie, you will want at least 3 performance levels: adequate and inadequate performance at the minimum, plus an exemplary level to motivate students to strive for even better work. Rubrics often contain 5 levels, with an additional level between adequate and exemplary and another between adequate and inadequate. Usually, no more than 5 levels are needed, as having too many rating levels makes it hard to distinguish consistently which rating to give an assignment (such as between a 6 and a 7 out of 10). Suskie also suggests labeling each level with a name to clarify which level represents the minimum acceptable performance. Labels will vary by assignment and subject, but some examples are:

  • Exceeds standard, meets standard, approaching standard, below standard
  • Complete evidence, partial evidence, minimal evidence, no evidence

4. Fill in descriptors

Fill in descriptors for each criterion at each performance level. Expand on the list of criteria you developed in Step 2. Begin to write full descriptions, thinking about what an exemplary example would look like for students to strive towards. Avoid vague terms like “good” and make sure to use explicit, concrete terms to describe what would make a criterion good. For instance, a criterion called “organization and structure” would be more descriptive than “writing quality.” Describe measurable behavior and use parallel language for clarity; the wording for each criterion should be very similar, except for the degree to which standards are met. For example, in a sample rubric from Chapter 9 of Suskie’s book, the criterion of “persuasiveness” has the following descriptors:

  • Well Done (5): Motivating questions and advance organizers convey the main idea. Information is accurate.
  • Satisfactory (3-4): Includes persuasive information.
  • Needs Improvement (1-2): Includes persuasive information with few facts.
  • Incomplete (0): Information is incomplete, out of date, or incorrect.

These sample descriptors generally have the same sentence structure that provides consistent language across performance levels and shows the degree to which each standard is met.

5. Test your rubric

Test your rubric using a range of student work to see if the rubric is realistic. You may also consider leaving room for aspects of the assignment, such as effort, originality, and creativity, to encourage students to go beyond the rubric. If multiple instructors will be grading, it is important to calibrate the scoring by having all graders use the rubric to grade a selected set of student work and then discuss any differences in their scores. This process helps develop consistency and makes the grading more valid and reliable.

Types of Rubrics

If you would like to dive deeper into rubric terminology, this section is dedicated to discussing some of the different types of rubrics. However, regardless of the type of rubric you use, it’s still most important to focus first on your learning goals and think about how the rubric will help clarify students’ expectations and measure student progress towards those learning goals.

Depending on the nature of the assignment, rubrics can come in several varieties (Suskie, 2009):

Checklist Rubric

This is the simplest kind of rubric: it lists specific features or aspects of the assignment that may be present or absent. A checklist rubric does not involve the creation of a rating scale with descriptors. See example from 18.821 project-based math class.

Rating Scale Rubric

This is like a checklist rubric, but instead of merely noting the presence or absence of a feature or aspect of the assignment, the grader also rates quality (often on a graded or Likert-style scale). See example from 6.811 assistive technology class .

Descriptive Rubric

A descriptive rubric is like a rating scale rubric, but it includes descriptions of what performing at a certain level on each scale looks like. Descriptive rubrics are particularly useful for communicating instructors’ expectations of performance to students and for creating consistency among multiple graders on an assignment. This kind of rubric is probably what most people think of when they imagine a rubric. See example from 15.279 communications class.

Holistic Scoring Guide

Unlike the first 3 types of rubrics, a holistic scoring guide describes performance at different levels (e.g., A-level performance, B-level performance) holistically, without analyzing the assignment into several different scales. This kind of rubric is particularly useful when there are many assignments to grade and a moderate to high degree of subjectivity in the assessment of quality. It can be difficult to maintain consistency across scores, so holistic scoring guides are most helpful when making decisions quickly rather than providing detailed feedback to students. See example from 11.229 advanced writing seminar.

The kind of rubric that is most appropriate will depend on the assignment in question.

Implementation tips

Rubrics are also available to use for Canvas assignments. See this resource from Boston College for more details and guides from Canvas Instructure.

Allen, D., & Tanner, K. (2006). Rubrics: Tools for Making Learning Goals and Evaluation Criteria Explicit for Both Teachers and Learners. CBE—Life Sciences Education, 5 (3), 197-203. doi:10.1187/cbe.06-06-0168

Cherie Miot Abbanat. 11.229 Advanced Writing Seminar. Spring 2004. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu . License: Creative Commons BY-NC-SA .

Haynes Miller, Nat Stapleton, Saul Glasman, and Susan Ruff. 18.821 Project Laboratory in Mathematics. Spring 2013. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu . License: Creative Commons BY-NC-SA .

Lori Breslow, and Terence Heagney. 15.279 Management Communication for Undergraduates. Fall 2012. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu . License: Creative Commons BY-NC-SA .

O’Reilly, L., & Cyr, T. (2006). Creating a Rubric: An Online Tutorial for Faculty. Retrieved from https://www.ucdenver.edu/faculty_staff/faculty/center-for-faculty-development/Documents/Tutorials/Rubrics/index.htm

Suskie, L. (2009). Using a scoring guide or rubric to plan and evaluate an assessment. In Assessing student learning: A common sense guide (2nd ed., pp. 137-154). Jossey-Bass.

William Li, Grace Teo, and Robert Miller. 6.811 Principles and Practice of Assistive Technology. Fall 2014. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu . License: Creative Commons BY-NC-SA .

  • Center for Innovative Teaching and Learning
  • Instructional Guide
  • Formative and Summative Assessment

Assessment is the process of gathering data. More specifically, assessment is the way instructors gather data about their teaching and their students’ learning (Hanna & Dettmer, 2004). The data provide a picture of a range of activities using different forms of assessment, such as pre-tests, observations, and examinations. Once these data are gathered, you can then evaluate the student’s performance. Evaluation, therefore, draws on one’s judgment to determine the overall value of an outcome based on the assessment data. It is in the decision-making process, then, that we design ways to address recognized weaknesses, gaps, or deficiencies.

Types of Assessment

There are three types of assessment: diagnostic, formative, and summative. Although all three are generally referred to simply as assessment, there are distinct differences among them.

There are three types of assessment: diagnostic, formative, and summative.

Diagnostic Assessment

Diagnostic assessment can help you identify your students’ current knowledge of a subject, their skill sets and capabilities, and clarify misconceptions before teaching takes place (Just Science Now!, n.d.). Knowing students’ strengths and weaknesses can help you better plan what to teach and how to teach it.

Types of Diagnostic Assessments

  • Pre-tests (on content and abilities)
  • Self-assessments (identifying skills and competencies)
  • Discussion board responses (on content-specific prompts)
  • Interviews (brief, private, 10-minute interview of each student)

Formative Assessment

Formative assessment provides feedback and information during the instructional process, while learning is taking place. Formative assessment measures student progress, but it can also assess your own progress as an instructor. For example, when implementing a new activity in class, you can, through observation and/or surveying the students, determine whether the activity should be used again (or modified). A primary focus of formative assessment is to identify areas that may need improvement. These assessments typically are not graded; they act as a gauge of students’ learning progress and of teaching effectiveness (implementing appropriate methods and activities).

A primary focus of formative assessment is to identify areas that may need improvement.

Types of Formative Assessment

  • Observations during in-class activities; of students’ non-verbal feedback during lecture
  • Homework exercises (as review for exams and class discussions)
  • Reflection journals that are reviewed periodically during the semester
  • Question-and-answer sessions, both formal (planned) and informal (spontaneous)
  • Conferences between the instructor and student at various points in the semester
  • In-class activities where students informally present their results
  • Student feedback, collected by periodically answering specific questions about the instruction and their self-evaluation of performance and progress

Summative Assessment

Summative assessment takes place after the learning has been completed and provides information and feedback that sums up the teaching and learning process. Typically, no more formal learning is taking place at this stage, other than incidental learning which might take place through the completion of projects and assignments.

Rubrics, often developed around a set of standards or expectations, can be used for summative assessment. Rubrics can be given to students before they begin working on a particular project so they know what is expected of them (precisely what they have to do) for each of the criteria. Rubrics also can help you to be more objective when deriving a final, summative grade by following the same criteria students used to complete the project.

Rubrics also can help you to be more objective when deriving a final, summative grade by following the same criteria students used to complete the project.

High-stakes summative assessments typically are given to students at a set point during or at the end of the semester to assess what has been learned and how well it was learned. Grades are usually an outcome of summative assessment: they indicate whether the student has attained an acceptable level of knowledge. Is the student able to effectively progress to the next part of the class? To the next course in the curriculum? To the next level of academic standing? See the section “Grading” for further information on grading and its effect on student achievement.

Summative assessment is more product-oriented and assesses the final product, whereas formative assessment focuses on the process toward completing the product. Once the project is completed, no further revisions can be made. If, however, students are allowed to make revisions, the assessment becomes formative, where students can take advantage of the opportunity to improve.

Summative assessment...assesses the final product, whereas formative assessment focuses on the process...

Types of Summative Assessment

  • Examinations (major, high-stakes exams)
  • Final examination (a truly summative assessment)
  • Term papers (drafts submitted throughout the semester would be a formative assessment)
  • Projects (project phases submitted at various completion points could be formatively assessed)
  • Portfolios (could also be assessed during their development as a formative assessment)
  • Performances
  • Student evaluation of the course (teaching effectiveness)
  • Instructor self-evaluation

Assessment measures whether and how students are learning and whether the teaching methods are effectively relaying the intended messages. Hanna and Dettmer (2004) suggest that you should strive to develop a range of assessment strategies that match all aspects of your instructional plans. Instead of trying to differentiate between formative and summative assessments, it may be more beneficial to begin planning assessment strategies to match instructional goals and objectives at the beginning of the semester and to implement them throughout the entire instructional experience. The selection of appropriate assessments should also match course and program objectives necessary for accreditation requirements.

Hanna, G. S., & Dettmer, P. A. (2004). Assessment for effective teaching: Using context-adaptive planning. Boston, MA: Pearson A&B.

Just Science Now! (n.d.). Assessment-inquiry connection. https://www.justsciencenow.com/assessment/index.htm


Suggested citation

Northern Illinois University Center for Innovative Teaching and Learning. (2012). Formative and summative assessment. In Instructional guide for university faculty and teaching assistants. Retrieved from https://www.niu.edu/citl/resources/guides/instructional-guide



Thomas Edison University


Elizabeth Gehrig

3 Types of Assessment You’ll Take in College (And How to Approach Each One)


I’m 99.9 percent sure that most people do not like exams.

While that .1 percent is another story, there’s no way around it: throughout your time as a student, you will be tested on the knowledge and skills learned in your college courses. This requires hard work, but it also ensures you possess the expertise that your degree says you have. It confirms that after you graduate, the diploma hanging on your wall means something: to your employer, to your colleagues, to any educational institutions you might attend in the future and, most importantly, to you.

In your college classes, you will come across several different types of assessments. This may feel like a lot of pressure, but knowing what to expect and understanding the differences among them can offer a sense of relief and allow you to perform at your best. Here’s a breakdown of every kind you will encounter, what these evaluations mean and how you can prepare for whatever comes your way.

1.  Formative Assessments

Think: bite-sized learning.

Formative assessments are the type you see most frequently and include written assignments, discussion forum entries and quizzes. They are relatively low-stakes, cover a fairly small amount of material, and tell both you and your mentor where you stand in your mastery of course content. They are intended to support your learning as you go along.

As you complete a formative assessment, you will notice what material comes easily to you and what you need to keep studying or reviewing. Your mentor may offer feedback to you and your classmates based on the results. If your course has formative quizzes, each quiz may provide feedback on your performance, referring you to a particular chapter for any question that you answered incorrectly. These are the types of quizzes that you will want to take several times for additional practice to get the most benefit.  

2. Summative Assessments

Think: time to prove yourself.

Summative assessments are bigger, and perhaps more intimidating, and include final papers, projects and exams. These are higher-stakes (frequently worth 20-40 percent of your final course grade), and they cover much more material than formative assessments. Their purpose is to evaluate your cumulative learning at the middle or end of a semester.

If your course includes exams, it may provide tools to help you prepare for them, such as an exam study guide or practice exam. The good news is that, if you have taken your formative activities seriously and internalized the feedback they offer, you should be well prepared for a summative paper, project or exam.

3. Diagnostic Assessments

Think: painting the big picture.

Not everyone will take a diagnostic assessment; whether you do depends on the courses you take. For example, you may see the ETS Proficiency Profile within a Capstone course. The University uses the results to understand students’ skill levels, as a group, at the beginning or end of their studies.

Diagnostic assessments are low stakes, and there’s no grade. However, they help the University determine how to effectively support student learning and future achievement in learning outcomes. When you take a diagnostic assessment, you help the University improve its degree programs, which in turn, benefits you as well as future students.

Now that you know about the kinds of assessments you’ll find in your courses, you can feel more confident about what knowledge and skills they’re testing and how you can approach them to enhance your learning. Good luck!


National Academies Press: OpenBook

Developing Assessments for the Next Generation Science Standards (2014)


4 CLASSROOM ASSESSMENT

Assessments can be classified in terms of the way they relate to instructional activities. The term classroom assessment (sometimes called internal assessment) is used to refer to assessments designed or selected by teachers and given as an integral part of classroom instruction. They are given during or closely following an instructional activity or unit. This category of assessments may include teacher-student interactions in the classroom, observations, student products that result directly from ongoing instructional activities (called “immediate assessments”), and quizzes closely tied to instructional activities (called “close assessments”). They may also include formal classroom exams that cover the material from one or more instructional units (called “proximal assessments”). 1 This category may also include assessments created by curriculum developers and embedded in instructional materials for teacher use.

In contrast, external assessments are designed or selected by districts, states, countries, or international bodies and are typically used to audit or monitor learning. External assessments are usually more distant in time and context from instruction. They may be based on the content and skills defined in state or national standards, but they do not necessarily reflect the specific content that was covered in any particular classroom. They are typically given at a time determined by administrators, rather than by the classroom teacher. This category includes such assessments as the statewide science tests required by the No Child Left Behind Act or used for other accountability purposes (called “distal assessments”), as well as national and international assessments: the National Assessment of Educational Progress and the Programme for International Student Assessment (called “remote assessments”). Such external assessments and their monitoring function are the subject of the next chapter.

___________

1 This terminology is drawn from Ruiz-Primo et al. (2002) and Pellegrino (2013).

In this chapter, we illustrate the types of assessment tasks that can be used in the classroom to meet the goals of A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas (National Research Council, 2012a, hereafter referred to as “the framework”) and the Next Generation Science Standards: For States, By States (NGSS Lead States, 2013). We present example tasks that we judged to be both rigorous and deep probes of student capabilities and also to be consistent with the framework and the Next Generation Science Standards (NGSS). We discuss external assessments in Chapter 5 and the integration of classroom and external assessments into a coherent system in Chapter 6. The latter chapter argues that an effective assessment system should include a variety of types of internal and external assessments, with each designed to fulfill complementary functions in assessing achievement of the NGSS performance objectives.

Our starting point for looking in depth at classroom assessment is the analysis in Chapter 2 of what the new science framework and the NGSS imply for assessment. We combine these ideas with our analysis in Chapter 3 of current approaches to assessment design as we consider key aspects of classroom assessment that can be used as a component in assessment of the NGSS performance objectives.

ASSESSMENT PURPOSES: FORMATIVE OR SUMMATIVE

Classroom assessments can be designed primarily to guide instruction (formative purposes) or to support decisions made beyond the classroom (summative purposes). Assessments used for formative purposes occur during the course of a unit of instruction and may involve both formal tests and informal activities conducted as part of a lesson. They may be used to identify students’ strengths and weaknesses, assist educators in planning subsequent instruction, assist students in guiding their own learning by evaluating and revising their own work, and foster students’ sense of autonomy and responsibility for their own learning (Andrade and Cizek, 2010, p. 4). Assessments used for summative purposes may be administered at the end of a unit of instruction. They are designed to provide evidence of achievement that can be used in decision making, such as assigning grades; making promotion or retention decisions; and classifying test takers according to defined performance categories, such as “basic,” “proficient,” and “advanced” (levels often used in score reporting) (Andrade and Cizek, 2010, p. 3).

The key difference between assessments used for formative purposes and those used for summative purposes is in how the information they provide is to be used: to guide and advance learning (usually while instruction is under way) or to obtain evidence of what students have learned for use beyond the classroom (usually at the conclusion of some defined period of instruction). Whether intended for formative or summative purposes, evidence gathered in the classroom should be closely linked to the curriculum being taught. This does not mean that the assessment must use the formats or exactly the same material that was presented in instruction, but rather that the assessment task should directly address the concepts and practices to which the students have been exposed.

The results of classroom assessments are evaluated by the teacher or sometimes by groups of teachers in the school. Formative assessments may also be used for reflection among small groups of students or by the whole class together. Classroom assessments can play an integral role in students’ learning experiences while also providing evidence of progress in that learning. Classroom instruction is the focus of the framework and the NGSS, and it is classroom assessment—which by definition is integral to instruction—that will be the most straightforward to align with NGSS goals (once classroom instruction is itself aligned with the NGSS).

Currently, many schools and districts administer benchmark or interim assessments, which seem to straddle the line between formative and summative purposes (see Box 4-1). They are formative in the sense that they are used for a diagnostic function intended to guide instruction (i.e., to predict how well students are likely to do on the end-of-year tests). However, because of this purpose, the format they use resembles the end-of-year tests rather than other types of internal assessments commonly used to guide instruction (such as quizzes, classroom dialogues, observations, or other types of immediate assessment strategies that are closely connected to instruction). Although benchmark and interim assessments serve a purpose, we note that they are not the types of formative assessments that we discuss in relation to the examples presented in this chapter or that are advocated by others (see, e.g., Black and Wiliam, 2009; Heritage, 2010; Perie et al., 2007). Box 4-1 provides additional information about these types of assessments.

BENCHMARK AND INTERIM ASSESSMENTS

Currently, many schools and districts administer benchmark or interim assessments, which they treat as formative assessments. These assessments use tasks that are taken from large-scale tests given in a district or state or are very similar to tasks that have been used in those tests. They are designed to provide an estimate of students’ level of learning, and schools use them to serve a diagnostic function, such as to predict how well students will do on the end-of-year tests.

Like the large-scale tests they closely resemble, benchmark tests rely heavily on multiple-choice items, each of which tests a single learning objective. The items are developed to provide only general information about whether students understand a particular idea, though sometimes the incorrect choices in a multiple-choice item are designed to probe for particular common misconceptions. Many such tasks would be needed to provide solid evidence that students have met the performance expectations for their grade level or grade band.

Teachers use these tests to assess student knowledge of a particular concept or a particular aspect of practice (e.g., control of variables), typically after teaching a unit that focuses on specific discrete learning objectives. The premise behind using items that mimic typical large-scale tests is that they help teachers measure students’ progress toward objectives for which they and their students will be held accountable and provide a basis for deciding which students need extra help and what the teacher needs to teach again.

CHARACTERISTICS OF NGSS-ALIGNED ASSESSMENTS

Chapter 2 discusses the implications of the NGSS for assessment, which led to our first two conclusions:

  • Measuring the three-dimensional science learning called for in the framework and the Next Generation Science Standards requires assessment tasks that examine students’ performance of scientific and engineering practices in the context of crosscutting concepts and disciplinary core ideas. To adequately cover the three dimensions, assessment tasks will generally need to contain multiple components (e.g., a set of interrelated questions). It may be useful to focus on individual practices, core ideas, or crosscutting concepts in the various components of an assessment task, but, together, the components need to support inferences about students’ three-dimensional science learning as described in a given performance expectation (Conclusion 2-1).
  • The Next Generation Science Standards require that assessment tasks be designed so that they can accurately locate students along a sequence of progressively more complex understandings of a core idea and successively more sophisticated applications of practices and crosscutting concepts (Conclusion 2-2).

Students will likely need repeated exposure to investigations and tasks aligned to the framework and the NGSS performance expectations, guidance about what is expected of them, and opportunities for reflection on their performance to develop these proficiencies, as discussed in Chapter 2. The kind of instruction that will be effective in teaching science in the way the framework and the NGSS envision will require students to engage in science and engineering practices in the context of disciplinary core ideas—and to make connections across topics through the crosscutting ideas. Such instruction will include activities that provide many opportunities for teachers to observe and record evidence of student thinking, such as when students develop and refine models; generate, discuss, and analyze data; engage in both spoken and written explanations and argumentation; and reflect on their own understanding of the core idea and the subtopic at hand (possibly in a personal science journal).

The products of such instruction form a natural link to the characteristics of classroom assessment that aligns with the NGSS. We highlight four such characteristics:

  • the use of a variety of assessment activities that mirror the variety in NGSS-aligned instruction;
  • tasks that have multiple components so they can yield evidence of three-dimensional learning (and multiple performance expectations);
  • explicit attention to the connections among scientific concepts; and
  • the gathering of information about how far students have progressed along a defined sequence of learning.

Variation in Assessment Activities

Because NGSS-aligned instruction will naturally involve a range of activities, classroom assessment that is integral to instruction will need to involve a corresponding variation in the types of evidence it provides about student learning. Indeed, the distinction between instructional activities and assessment activities may be blurred, particularly when the assessment purpose is formative. A classroom assessment may be based on a classroom discussion or a group activity in which students explore and respond to each other’s ideas and learn as they go through this process.

Science and engineering practices lend themselves well to assessment activities that can provide this type of evidence. For instance, when students are developing and using models, they may be given the opportunity to explain their models and to discuss them with classmates, thus providing the teacher with an opportunity for formative assessment reflection (illustrated in Example 4, below). Student discourse can give the teacher a window into students’ thinking and help to guide lesson planning. A classroom assessment may also involve a formal test or diagnostic quiz. Or it may be based on artifacts that are the products of classroom activities, rather than on tasks designed solely for assessment purposes. These artifacts may include student work produced in the classroom, homework assignments (such as lab reports), a portfolio of student work collected over the course of a unit or a school year (which may include both artifacts of instruction as well as results from formal unit and end-of-course tests), or activities conducted using computer technology. A classroom assessment may occur in the context of group work or discussions, as long as the teacher ensures that all the students that need to be observed are in fact active participants. Summative assessments may also take a variety of forms, but they are usually intended to assess each student’s independent accomplishments.

Tasks with Multiple Components

The NGSS performance expectations each blend a practice and, in some cases, also a crosscutting idea with an aspect of a particular core idea. In the past, assessment tasks have typically focused on measuring students’ understanding of aspects of core ideas or of science practices as discrete pieces of knowledge. Progression in learning was generally thought of as knowing more or providing more complete and correct responses. Similarly, practices were intentionally assessed in a way that minimized specific content knowledge demands—assessments were more likely to ask for definitions than for actual use of the practice. Assessment developers took this approach in part to be sure they were obtaining accurate measures of clearly definable constructs. 2 However, although understanding the language and terminology of science is fundamental and factual knowledge is very important, tasks that demand only declarative knowledge about practices or isolated facts would be insufficient to measure performance expectations in the NGSS.

As we note in Chapter 3, the performance expectations provide a start in defining the claim or inference that is to be made about student proficiency. However, it is also important to determine the observations (the forms of evidence in student work) that are needed to support the claims, and then to develop tasks or situations that will elicit the needed evidence. The task development approaches described in Chapter 3 are commonly used for developing external tests, but they can also be useful in guiding the design of classroom assessments. Considering the intended inference, or claim, about student learning will help curriculum developers and classroom assessment designers ensure that the tasks elicit the needed evidence.

As we note in Chapter 2, assessment tasks aligned with the NGSS performance expectations will need to have multiple components—that is, be composed of more than one kind of activity or question. They will need to include opportunities for students to engage in practices as a means to demonstrate their capacity to apply them. For example, a task designed to elicit evidence that a student can develop and use models to support explanations about structure-function relationships in the context of a core idea will need to have several components. It may require that students articulate a claim about selected structure-function relationships, develop or describe a model that supports the claim, and provide a justification that links evidence to the claim (such as an explanation of an observed phenomenon described by the model). A multicomponent task may include some short-answer questions, possibly some carefully designed selected-response questions, and some extended-response elements that require students to demonstrate their understandings (such as tasks in which students design an investigation or explain a pattern of data). For the purpose of making an appraisal of student learning, no single piece of evidence is likely to be sufficient; rather, the pattern of evidence across multiple components can provide a sufficient indicator of student understanding.

___________

2 “Construct” is generally used to refer to concepts or ideas that cannot be directly observed, such as “liberty.” In the context of educational measurement, the word is used more specifically to refer to a particular body of content (knowledge, understanding, or skills) that an assessment is to measure. It can be used to refer to a very specific aspect of tested content (e.g., the water cycle) or a much broader area (e.g., mathematics).

Making Connections

The NGSS emphasize the importance of the connections among scientific concepts. Thus, the NGSS performance expectations for one disciplinary core idea may be connected to performance expectations for other core ideas, both within the same domain or in other domains, in multiple ways: one core idea may be a prerequisite for understanding another, or a task may be linked to more than one performance expectation and thus involve more than one practice in the context of a given core idea. NGSS-aligned tasks will need to be constructed so that they provide information about how well students make these connections. For example, a task that focused only on students’ knowledge of a particular model would be less revealing than one that probed students’ understanding of the kinds of questions and investigations that motivated the development of the model. Example 1, “What Is Going on Inside Me?” (in Chapter 2), shows how a single assessment task can be designed to yield evidence related to multiple performance expectations, such as applying physical science concepts in a life science context. Tasks that do not address these connections will not fully capture or adequately support three-dimensional science learning.

Learning as a Progression

The framework and the NGSS address the process of learning science. They make clear that students should be encouraged to take an investigative stance toward their own and others’ ideas, to be open about what they are struggling to understand, and to recognize that struggle as part of the way science is done, as well as part of their own learning process. Thus, revealing students’ emerging capabilities with science practices and their partially correct or incomplete understandings of core ideas is an important function of classroom assessment. The framework and the NGSS also postulate that students will develop disciplinary understandings by engaging in practices that help them to question and explain the functioning of natural and designed systems. Although learning is an ongoing process for both scientists and students, students are emerging practitioners of science, not scientists, and their ways of acting and reasoning differ from those of scientists in important ways. The framework discusses the importance of seeing learning as a trajectory in which students gradually progress in the course of a unit or a year, and across the whole K-12 span, and organizing instruction accordingly.

The first example in this chapter, “Measuring Silkworms” (also discussed in Chapter 3), illustrates how this idea works in an assessment that is embedded in a larger instructional unit. As they begin the task, students are not competent data analysts. They are unaware of how displays can convey ideas or of professional conventions for display and the rationale for these conventions. In designing their own displays, students begin to develop an understanding of the value of these conventions. Their partial and incomplete understandings of data visualization have to be explicitly identified so teachers can help them develop a more general understanding. Teachers help students learn about how different mathematical practices, such as ordering and counting data, influence the shapes the data take in models. The students come to understand how the shapes of the data support inferences about population growth.

Thus, as discussed in Chapter 2, uncovering students’ incomplete forms of practice and understanding is critical: NGSS-aligned assessments will need to clearly define the forms of evidence associated with beginning, intermediate, and sophisticated levels of knowledge and practice expected for a particular instructional sequence. A key goal of classroom assessments is to help teachers and students understand what has been learned and what areas will require further attention. NGSS-aligned assessments will also need to identify likely misunderstandings, productive ideas of students that can be built upon, and interim goals for learning.

The NGSS performance expectations are general: they do not specify the kinds of intermediate understandings of disciplinary core ideas students may express during instruction, nor do they help teachers interpret students’ emerging capabilities with science practices or their partially correct or incomplete understanding. To teach toward the NGSS performance expectations, teachers will need a sense of the likely progression at a more micro level, to answer such questions as:

  • For this unit, where are the students expected to start, and where should they arrive?
  • What typical intermediate understandings emerge along this learning path?
  • What common logical errors or alternative conceptions act as barriers to the desired learning, or as resources for beginning instruction?
  • What new aspects of a practice need to be developed in the context of this unit?

Classroom assessment probes will need to be designed to generate enough evidence about students’ understandings so that their locations on the intended pathway can be reliably determined, and it is clear what next steps (instructional activities) are needed for them to continue to progress. As we note in Chapter 2, only a limited amount of research is available to support detailed learning progressions: assessment developers and others who have been applying this approach have used a combination of research and practical experience to support depictions of learning trajectories.

SIX EXAMPLES

We have identified six example tasks and task sets that illustrate the elements needed to assess the development of three-dimensional science learning. As noted in Chapter 1, they all predate the publication of the NGSS. However, the constructs being measured by each of these examples are similar to those found in the NGSS performance expectations. Each example was designed to provide evidence of students’ capabilities in using one or more practices as they attempt to reach and present conclusions about one or more core ideas: that is, all of them assess three-dimensional learning. Table 1-1 shows the NGSS disciplinary core ideas, practices, and crosscutting ideas that are closest to the assessment targets for all of the examples in the report. 3

We emphasize that there are many possible designs for activities or tasks that assess three-dimensional science learning—these six examples are only a sampling of the possible range. They demonstrate a variety of approaches, but they share some common attributes. All of them require students to use some aspects of one or more science and engineering practices in the course of demonstrating and defending their understanding of aspects of a disciplinary core idea. Each of them also includes multiple components, such as asking students to engage in an activity, to work independently on a modeling or other task, and to discuss their thinking or defend their argument.

These examples also show how one can use classroom work products and discussions as formative assessment opportunities. In addition, several of the examples include summative assessments. In each case, the evidence produced provides teachers with information about students’ thinking and their developing understanding that would be useful for guiding next steps in instruction. Moreover, the time students spend in doing and reflecting on these tasks should be seen as an integral part of instruction, rather than as a stand-alone assessment task. We note that the example assessment tasks also produce a variety of products and scorable evidence. For some we include illustrations of typical student work, and for others we include a construct map or scoring rubric used to guide the data interpretation process. Both are needed to develop an effective scoring system.

___________

3 The particular combinations in the examples may not be the same as NGSS examples at that grade level, but each of these examples of classroom assessment involves integrated knowledge of the same general type as the NGSS performance expectations. However, because they predate the NGSS and its emphasis on crosscutting concepts, only a few of these examples include reference to a crosscutting concept, and none of them attempts to assess student understanding of, or disposition to invoke, such concepts.

Each example has been used in classrooms to gather information about particular core ideas and practices. The examples are drawn from different grade levels and assess knowledge related to different disciplinary core ideas. Evidence from their use documents that, with appropriate prior instruction, students can successfully carry out these kinds of tasks. We describe and illustrate each of these examples below and close the chapter with general reflections about the examples, as well as our overall conclusions and recommendations about classroom assessment.

Example 3: Measuring Silkworms

The committee chose this example because it illustrates several of the characteristics we argue an assessment aligned with the NGSS must have: in particular, it allows the teacher to place students along a defined learning trajectory (see Figure 3-13 in Chapter 3), while assessing both a disciplinary core idea and a crosscutting concept. 4 The assessment component is formative, in that it helps the teacher understand what students already understand about data display and adjust the instruction accordingly. This example, in which 3rd-grade students investigated the growth of silkworm larvae, first assesses students’ conceptions of how data can be represented visually and then engages them in conversations about what different representations of the data they had collected reveal. It is closely tied to instruction—the assessment is embedded in a set of classroom activities.

The silkworm scenario is designed so that students’ responses to the tasks can be interpreted in reference to a trajectory of increasingly sophisticated forms of reasoning. A construct map displayed in Figure 3-13 shows developing conceptions of data display. Once the students collect their data (measure the silkworms) and produce their own ways of visually representing their findings, the teacher uses the data displays as the basis for a discussion that has several objectives.

4 This example is also discussed in Chapter 3 in the context of using construct modeling for task design.

The teacher uses the construct map to identify data displays that demonstrate several levels on the trajectory. In a whole-class discussion, she invites students to consider what the different ways of displaying the data “show and hide” about the data and how they do so. During this conversation, the students begin to appreciate the basis for conventions about display. 5 For example, in their initial attempt at representing the data they have collected, many of the students draw icons to resemble the organisms that are not of uniform size (see Figure 3-14 in Chapter 3). The mismatches between their icons and the actual relative lengths of the organisms become clear in the discussion. The teacher also invites students to consider how using mathematical ideas (related to ordering, counting, and intervals) helped them develop different shapes to represent the same data.

The teacher’s focus on shape is an assessment of what is defined as the crosscutting concept of patterns in the framework and the NGSS. These activities also cultivate the students’ capacity to think at a population level about the biological significance of the shapes, as they realize what the different representations of the measurements they have taken can tell them. Some of the student displays make a bell-like shape more evident, which inspires further questions and considerations in the whole-class discussion (see Figure 3-15 in Chapter 3): students notice that the tails of the distribution are comparatively sparse, especially for the longer larvae, and wonder why. As noted in Chapter 3, they speculate about the possible reasons for the differences, which leads to a discussion and conclusions about competition for resources, which in turn leads them to consider not only individual silkworms, but the entire population of silkworms. Hence, this assessment provides students with opportunities for learning about representations, while also providing the teacher with information about their understanding of a crosscutting concept (pattern) and disciplinary core concepts (population-level descriptions of variability and the mechanisms that produce it).

Example 4: Behavior of Air

The committee chose this example to show the use of classroom discourse to assess student understanding. The exercise is designed to focus students’ attention on a particular concept: the teacher uses class discussion of the students’ models of air particles to identify misunderstandings and then support students in collaboratively resolving them. This task assesses both students’ understanding of the concept and their proficiency with the practices of modeling and developing oral arguments about what they have observed. This assessment is used formatively and is closely tied to classroom instruction.

___________

5 This is a form of meta-representational competence; see diSessa (2004).

Classroom discussions can be a critical component of formative assessment. They provide a way for students to engage in scientific practices and for teachers to instantly monitor what the students do and do not understand. This example, from a unit for middle school students on the particle nature of matter, illustrates how a teacher can use discussions to assess students’ progress and determine instructional next steps. 6

In this example, 6th-grade students are asked to develop a model to explain the behavior of air. The activity leads them to an investigation of phase change and the nature of air. The example is from a single class period in a unit devoted to developing a conceptual model of a gas as an assemblage of moving particles with space between them; it consists of a structured task and a discussion guided by the teacher (Krajcik et al., 2013; Krajcik and Merritt, 2012). The teacher is aware of an area of potential difficulty for students, namely, a lack of understanding that there is empty space between the molecules of air. She uses group-developed models and student discussion of them as a probe to evaluate whether this understanding has been reached or needs further development.

When students come to this activity in the course of the unit, they have already reached consensus on several important ideas they can use in constructing their models. They have defined matter as anything that takes up space and has mass. They have concluded that gases—including air—are matter. They have determined through investigation that more air can be added to a container even when it already seems full and that air can be subtracted from a container without changing its size. They are thus left with questions about how more matter can be forced into a space that already seems to be full and what happens to matter when it spreads out to occupy more space. The students have learned from earlier teacher-led class discussions that simply stating that the gas changes “density” is not sufficient, since it only names the phenomenon—it does not indicate what actually makes it possible for differing amounts of gas to expand or contract to occupy the same space.

In this activity, students are given a syringe and asked to gradually pull the plunger in and out of it to explore the air pressure. They notice the pressure against their fingers when pushing in and the resistance as they pull the plunger out. They find that little or no air escapes when they manipulate the plunger. They are asked to work in small groups to develop a model to explain what happens to the air so that the same amount of it can occupy the syringe regardless of the volume of space available. The groups are asked to provide models of the air with the syringe in three positions: see Figure 4-1. This modeling activity itself is not used as a formal assessment task; rather, it is the class discussion, in which students compare their models, that allows the teacher to diagnose the students’ understanding. That is, the assessment, which is intended to be formative, is conducted through the teacher’s probing of students’ understandings through classroom discussion.

___________

6 This example was drawn from research conducted on classroom enactments of the IQWST curriculum materials (Krajcik et al., 2008; Shwartz et al., 2008). In field trials of IQWST, a diverse group of students responded to the task described in this example: 43% were white/Asian and 57% were non-Asian/minority; and 4% were English learners (Banilower et al., 2010).

Figure 4-2 shows the first models produced by five groups of students to depict the air in the syringe in its first position. The teacher asks the class to discuss the different models and to try to reach consensus on how to model the behavior of air to explain their observations. The class has agreed that there should be “air particles” (shown in each of their models as dark dots) and that the particles are moving (shown in some models by the arrows attached to the dots).

Most of their models are consistent in representing air as a mixture of different kinds of matter, including air, odor, dust, and “other particles.” What is not consistent in their models is what is represented as between the particles: groups 1 and 2 show “wind” as the force moving the air particles; groups 3, 4, and 5 appear to show empty space between the particles. Exactly what, if anything, is in between the air particles emerges as a point of contention as the students discuss their models. After the class agrees that the consensus model should include air particles shown with arrows to demonstrate that the particles “are coming out in different directions,” the teacher draws several particles with arrows and asks what to put next into the model. The actual classroom discussion is shown in Box 4-2 .

The discussion shows how students engage in several scientific and engineering practices as they construct and defend their understanding about a disciplinary core idea. In this case, the key disciplinary idea is that there must be empty space between moving particles, which allows them to move, either to become more densely packed or to spread apart. The teacher can assess the way the students have drawn their models, which reveals that their understanding is not complete. They have agreed that all matter, including gas, is made of particles that are moving, but many of the students do not understand what is in between these moving particles. Several students indicate that they think there is air between the air particles, since “air is everywhere,” and some assert that the particles are all touching. Other students disagree that there can be air between the particles or that air particles are touching, although they do not yet articulate an argument for empty space between the particles, an idea that students begin to understand more clearly in subsequent lessons. Drawing on her observations, the teacher asks questions and gives comments that prompt the students to realize that they do not yet agree on the question of what is between the particles. The teacher then uses this observation to make instructional decisions. She follows up on one student’s critique of the proposed addition to the consensus model to focus the students on their disagreement and then sends the class back into their groups to resolve the question.

images

FIGURE 4-1 Models for air in a syringe in three situations for Example 4 , “Behavior of Air.” SOURCE: Krajcik et al. (2013). Reprinted with permission from Sangari Active Science.

images

FIGURE 4-2 First student models for Example 4 , “Behavior of Air.”

SOURCE: Reiser et al. (2013). Copyright by the author; used with permission.

In this example, the students’ argument about the models plays two roles: it is an opportunity for students to defend or challenge their existing ideas, and it is an opportunity for the teacher to observe what the students are thinking and to decide that she needs to pursue the issue of what is between the particles of air. It is important to note that the teacher does not simply bring up this question, but instead uses the disagreement that emerges from the discussion as the basis for the question. (Later interviews with the teacher reveal that she had in fact anticipated that the empty space between particles would come up and was prepared to take advantage of that opportunity.) The discussion thus provides insights into students’ thinking beyond their written (and drawn) responses to a task. The models themselves provide a context in which the students can clarify their thinking and refine their models in response to the critiques, to make more explicit claims to explain what they have observed. Thus, this activity focuses their attention on key explanatory issues (Reiser, 2004).

This example also illustrates the importance of engaging students in practices to help them develop understanding of disciplinary core ideas while also giving teachers information to guide instruction. In this case, the teacher’s active probing of students’ ideas demonstrates the way that formative assessment strategies can be effectively used as a part of instruction. The discussion of the models not only reveals the students’ understanding about the phenomenon, but also allows the teacher to evaluate progress, uncover problematic issues, and help students construct and refine their models.

Example 5: Movement of Water

The committee chose this example to show how a teacher can monitor developing understanding in the course of a lesson. “Clicker technology” 7 is used to obtain individual student responses that inform teachers about what the students have learned from an activity and that then serve as the basis for structuring small-group discussions to address misunderstandings. This task assesses both understanding of a concept as it develops in the course of a lesson and students’ discussion skills. The assessments are used formatively and are closely tied to classroom instruction.

In the previous example ( Example 4 ), the teacher orchestrates a discussion in which students present alternative points of view and then come to consensus about a disciplinary core idea through the practice of argumentation. However, many teachers may find it challenging to track students’ thinking while also promoting the development of understanding for the whole class. The example on the movement of water was developed as part of a program for helping teachers learn to lead students in “assessment conversations” (Duschl and Gitomer, 1997). 8 In the

7 Clicker technology, also known as classroom response systems, allows students to use hand-held clickers to respond to questions from a teacher. The responses are gathered by a central receiver and immediately tallied for the teacher—or the whole class—to see.

8 This example is taken from the Contingent Pedagogies Project, which provides formative assessment tools for middle schools and supports teachers in integrating assessment activities into discussions for both small groups and entire classes. Of the students who responded to the task, 46 percent were Latino. For more information, see http://contingentpedagogies.org [October 2013].

STUDENT-TEACHER DIALOGUE

Haley’s objection: air is everywhere

Ms. B: OK. Now what?

S: Just draw like little. . . .

Haley: I think you should color the whole circle in, because dust . . . I mean air is everywhere, so. . . .

Miles: The whole circle?

Ms. B: So, I color the whole thing in.

Haley: Yeah.

Ms. B: So, if I do one like that, because I haven’t seen one up here yet. If I color this whole thing in. . . .

[Ms. B colors in the whole region completely to show the air as Haley suggests.]

Michael: Then how would you show that . . . ?

Ms. B: Then ask . . . ask Haley some questions.

Students: How could that be? How would you show that?

Ms. B: Haley, people have some questions for you.

Some students object to Haley’s proposal:

Frank: How would you show air?

Haley: Air is everywhere, so the air would be everything.

Alyssa: But then, how would you show the other molecules? I mean, you said air is everything, but then how would you show the other . . .?

Students: Yeah, because . . . [Multiple students talking]

Haley: What? I didn’t hear your question.

Alyssa: Um, I said if . . . You said air is everywhere, right?

Haley: Yeah.

Alyssa: So, that’s why you wanted to color it in. But there’s also other particles other than air, like dust and etc. and odors and things like that, so, how would you show that?

Miles: How are we going to put in the particles?

Ms. B: Haley, can you answer her?

Ms. B: Why?

Haley: I don’t know.

Other student: Because there is no way.

Ms. B: Why can’t you answer?

Haley: What? I don’t know.

Ms. B: Is what she’s saying making sense?

Ms. B: What is it that you’re thinking about?

Haley: Um . . . that maybe you should take . . . like, erase some of it to show the odors and stuff.

Addison: No, wait, wait!

Ms. B: All right, call on somebody else.

Addison proposes a compromise, and Ms. B pushes for clarification

Addison: Um, I have an idea. Like since air is everywhere, you might be able to like use a different colored marker and put like, um, the other molecules in there, so you’re able to show that those are in there and then air is also everywhere.

Jerome: Yeah. I was gonna say that, or you could like erase it. If you make it all dark, you can just erase it and all of them will be.

Frank: Just erase some parts of the, uh . . . yeah, yeah, just to show there’s something in between it.

Ms. B: And what’s in between it?

Students: The dust and the particles. Air particles. Other odors.

Miles: That’s like the same thing over there.

Alyssa: No, the colors are switched.

Ms. B: Same thing over where?

Alyssa: The big one, the consensus.

Ms. B: On this one?

Alyssa: Yeah.

Ms. B: Well, what she’s saying is that I should have black dots every which way, like that. [Ms. B draws the air particles touching one another in another representation, not in the consensus model, since it is Haley’s idea.]

Students: No what? Yeah.

Ms. B: Right?

Students: No. Sort of. Yep.

Ms. B: OK. Talk to your partners. Is this what we want? [pointing to the air particles touching one another in the diagram]

Students discuss in groups whether air particles are touching or not, and what is between the particles if anything.

task, middle school students engage in argumentation about disciplinary core ideas in earth science. As with the previous example, the formative assessment activity is more than just the initial question posed to students; it also includes the discussion that follows from student responses to it and the teacher’s decisions about what to do next, after she brings the discussion to a close.

In this activity, which also takes place in a single class session, the teacher structures a conversation about how the movement of water affects the deposition of surface and subsurface materials. The activity involves disciplinary core ideas (similar to Earth’s systems in the NGSS) and engages students in practices, including modeling and constructing explanations. It also requires students to reason about models of geosphere-hydrosphere interactions, which is an example of the crosscutting concept pertaining to systems and system models. 9

Teachers use classroom clicker technology to pose multiple-choice questions that are carefully designed to elicit students’ ideas related to the movement of water. These questions have been tested in classrooms, and the response choices reflect common student ideas, including those that are especially problematic. In the course of both small-group and whole-class discussions, students construct and challenge possible explanations of the process of deposition. If students have difficulty in developing explanations, teachers can guide students to activities designed to improve their understanding, such as interpreting models of the deposition of surface and subsurface materials.

When students begin this activity, they will just have completed a set of investigations of weathering, erosion, and deposition that are part of a curriculum on investigating Earth systems. 10 Students will have had the opportunity to build physical models of these phenomena and frame hypotheses about how water will move sediment using stream tables. 11 The teacher begins the formative assessment activity by projecting on a screen a question about the process of deposition designed to check students’ understanding of the activities they have completed: see Figure 4-3 . Students select their answers using clickers.

9 The specific NGSS core idea addressed is similar to MS-ESS2.C: “How do the properties and movement of water shape Earth’s surface and affect its systems?” The closest NGSS performance expectation is MS-ESS2-c: “Construct an explanation based on evidence for how geoscience processes have changed Earth’s surface at varying time and spatial scales.”

10 This curriculum, for middle school students, was developed by the American Geosciences Institute. For more information, see http://www.agiweb.org/education/ies [July 2013].

11 Stream tables are models of stream flows set up in large boxes filled with sedimentary material and tilted so that water can flow through.

images

FIGURE 4-3 Sample question for Example 5 , “Movement of Water.”

The green areas marked above show the place where a river flows into an ocean. Why does this river look like a triangle (or fan) where it flows into the ocean? Be prepared to explain your response.

Answer A: Sediment is settling there as the land becomes flatter.

Answer B: The water is transporting all the sediment to the ocean, where it is being deposited.

Answer C: The water is moving faster near the mouth of the delta.

(The correct answer is A.)

SOURCE: NASA/GSFC/JPL/LaRC, MISR Science Team (2013) and Los Angeles County Museum of Art (2013).

Pairs or small groups of students then discuss their reasoning and offer explanations for their choices to the whole class. Teachers help students begin the small-group discussions by asking why someone might select A, B, or C, implying that any of them could be a reasonable response. Teachers press students for their reasoning and invite them to compare their own reasoning to that of others, using specific discussion strategies (see Michaels and O’Connor, 2011; National Research Council, 2007). After discussing their reasoning, students again vote, using their clickers. In this example, the student responses recorded using the clicker technology are scorable. A separate set of assessments (not discussed here) produces scores to evaluate the efficacy of the project as a whole.
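The vote–discuss–revote cycle used here is easy to picture as a simple tally of clicker responses before and after discussion. The sketch below is purely illustrative: the class size and vote counts are hypothetical, and no actual clicker-system API is shown.

```python
from collections import Counter

def summarize_votes(responses):
    """Tally clicker responses and report the share of students choosing each option."""
    counts = Counter(responses)
    total = len(responses)
    return {choice: round(counts[choice] / total, 2) for choice in sorted(counts)}

# Hypothetical class of 24 students voting on the deposition question.
before = ["A"] * 10 + ["B"] * 9 + ["C"] * 5   # first vote, before small-group discussion
after = ["A"] * 18 + ["B"] * 4 + ["C"] * 2    # second vote, after discussion

print(summarize_votes(before))  # {'A': 0.42, 'B': 0.38, 'C': 0.21}
print(summarize_votes(after))   # {'A': 0.75, 'B': 0.17, 'C': 0.08}
```

Comparing the two distributions is the kind of at-a-glance information the central receiver makes available to the teacher, who can then decide whether further discussion is needed.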

The program materials include a set of “contingent activities” for teachers to use if students have difficulty meeting a performance expectation related to an investigation. Teachers use students’ responses to decide which contingent activities are needed, and thus they use the activity as an informal formative assessment. In these activities, students might be asked to interpret models, construct explanations, and make predictions using those models as a way to deepen their understanding of Earth systems. In this example about the movement of water, students who are having difficulty understanding can view an animation of deposition and then make a prediction about a pattern they might expect to find at the mouth of a river where sediment is being deposited.

The aim of this kind of assessment activity is to guide teachers in using assessment techniques to improve student learning outcomes. 12 The techniques used in this example demonstrate a means of rapidly assessing how well students have mastered a complex combination of practices and concepts in the midst of a lesson, which allows teachers to immediately address areas students do not understand well. The contingent activities that provide alternative ways for students to master the core ideas (by engaging in particular practices) are an integral component of the formative assessment process.

Example 6: Biodiversity in the Schoolyard

The committee chose this example to show the use of multiple interrelated tasks to assess a disciplinary core idea, biodiversity, with multiple science practices. As part of an extended unit, students complete four assessment tasks. The first three serve formative purposes and are designed to function close to instruction, informing the teacher about how well students have learned key concepts and mastered practices. The last assessment task serves a summative purpose, as an end-of-unit test, and is an example of a proximal assessment. The tasks address concepts related to biodiversity and science practices in an integrated fashion.

This set of four assessment tasks was designed to provide evidence of 5th-grade students’ developing proficiency with a body of knowledge that blends a disciplinary core idea (biodiversity; LS4 in the NGSS; see Box 2-1 in Chapter 2 ) and a crosscutting concept (patterns) with three different practices: planning and carrying out investigations, analyzing and interpreting data, and constructing explanations (see Songer et al., 2009; Gotwals and Songer, 2013). These tasks, developed by researchers as part of an examination of the development of complex reasoning, are intended for use in an extended unit of study. 13

12 A quasi-experimental study compared the learning gains for students in classes that used the approach of the Contingent Pedagogies Project with gains for students in other classes in the same school district that used the same curriculum but not that approach. The students whose teachers used the Contingent Pedagogies Project demonstrated greater proficiency in earth science objectives than did students in classrooms in which teachers only had access to the regular curriculum materials (Penuel et al., 2012).

13 The tasks were given to a sample of 6th-grade students in the Detroit Public School System, the majority of whom were racial/ethnic minority students (for details, see Songer et al., 2009).

Formative Assessment Tasks

Task 1: Collect data on the number of animals (abundance) and the number of different species (richness) in schoolyard zones.

Instructions: Once you have formed your team, your teacher will assign your team to a zone in the schoolyard. Your job is to go outside and spend approximately 40 minutes observing and recording all of the animals and signs of animals that you see in your schoolyard zone during that time. Use the BioKIDS application on your iPod to collect and record all your data and observations.

In responding to this task, students use an Apple iPod to record their information. The data from each iPod is uploaded and combined into a spreadsheet that contains all of the students’ data; see Figure 4-4 . Teachers use data from individual groups or from the whole class as assessment information to provide formative information about students’ abilities to collect and record data for use in the other tasks.
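Under the unit’s working definitions, abundance is the total number of animals recorded in a zone and richness is the number of distinct species. The sketch below illustrates that summary step; it is not the actual BioKIDS software, and the record format and species names are invented for illustration.

```python
from collections import defaultdict

def summarize_zones(observations):
    """Compute animal abundance and species richness per schoolyard zone.

    `observations` is a list of (zone, species, count) records, a hypothetical
    stand-in for the rows of the uploaded class spreadsheet.
    """
    abundance = defaultdict(int)   # total animals seen per zone
    species = defaultdict(set)     # distinct species seen per zone
    for zone, name, count in observations:
        abundance[zone] += count
        species[zone].add(name)
    return {zone: {"abundance": abundance[zone], "richness": len(species[zone])}
            for zone in abundance}

# Invented example records from three zones.
records = [("A", "ant", 12), ("A", "sparrow", 2), ("A", "beetle", 5),
           ("B", "ant", 20), ("B", "ant", 3),
           ("C", "worm", 1), ("C", "spider", 1)]
print(summarize_zones(records))
```

A zone scoring high on both measures would be the one students should identify as most biodiverse in Tasks 3 and 4.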

Task 2: Create bar graphs that illustrate patterns in abundance and richness data from each of the schoolyard zones.

Task 2 assesses students’ ability to construct and interpret graphs of the data they have collected (an important element of the NGSS practice “analyzing and interpreting data”). The exact instructions for Task 2 appear in Figure 4-5 . Teachers use the graphs the students create for formative purposes, for making decisions about further instruction students may need. For example, if students are weak on the practices, the teacher may decide to help them with drawing accurate bars or the appropriate labeling of axes. Or if the students are weak on understanding of the core idea, the teacher might review the concepts of species abundance or species richness.

Task 3: Construct an explanation to support your answer to the question: Which zone of the schoolyard has the greatest biodiversity?

Before undertaking this task, students have completed an activity that helped them understand a definition of biodiversity: “An area is considered biodiverse if it has both a high animal abundance and high species richness.” The students were also given hints (reminders) that there are three key parts of an explanation: a claim, more than one piece of evidence, and reasoning. The students are also given the definitions of relevant terms. This task allows the teacher to see how well students have understood the concept and can support their ideas about it. Instructions for Task 3 and student answers are shown in Box 4-3 .

images

FIGURE 4-4 Class summary of animal observations in the schoolyard, organized by region (schoolyard zones), for Example 6 , “Biodiversity in the Schoolyard.”

images

FIGURE 4-5 Instructions for Task 2 for Example 6 , “Biodiversity in the Schoolyard.”

NOTE: See text for discussion.

Summative Assessment Task

Task 4: Construct an explanation to support an answer to the question: Which zone of the schoolyard has the greatest biodiversity?

For the end-of-unit assessment, the task presents students with excerpts from a class data collection summary, shown in Table 4-1 , and asks them to construct an explanation, as they did in Task 3. The difference is that in Task 4, the hints are removed: at the end of the unit, students are expected to show that they understand what constitutes a full explanation without a reminder. The task and coding rubric used for Task 4 are shown in Box 4-4 .

The Set of Tasks

This set of tasks illustrates two points. First, using tasks to assess several practices in the context of a core idea together with a crosscutting concept can provide a wider range of information about students’ progression than would tasks that focused on only one practice. Second, classroom assessment tasks in which core ideas, crosscutting concepts, and practices are integrated can be used for both formative and summative purposes. Table 4-2 shows the core idea, crosscutting concept, practices, assessment purposes, and performance expectation targets for assessment for each of the tasks. Each of these four tasks was designed to provide information about a single performance expectation related to the core idea, and each performance expectation focused on one of three practices. Figure 4-6 illustrates the way these elements fit together to identify the target for assessment of Tasks 3 and 4.

In addition, the design of each task was determined by its purpose (formative or summative) and the point in the curriculum at which it was to be used. Assessment tasks may, by design, include more or less guidance for students, depending on the type of information they are intended to collect. Because learning is a process that occurs over time, a teacher might choose an assessment task with fewer guides (or scaffolds) for students as they progress through a curriculum to gather evidence of what students can demonstrate without assistance. Thus, the task developers offered a practice progression to illustrate the different levels of

INSTRUCTIONS AND SAMPLE STUDENT ANSWERS FOR TASK 3 IN EXAMPLE 6 , “BIODIVERSITY IN THE SCHOOLYARD”

Instructions: Using what you have learned about biodiversity, the information from your class summary sheet, and your bar charts for abundance and richness, construct an explanation to answer the following scientific question:

Scientific Question: Which zone in the schoolyard has the highest biodiversity?

My Explanation

Make a CLAIM: Write a complete sentence that answers the scientific question.

Zone A has the greatest biodiversity.

Hint: Look at your abundance and richness data sheets carefully.

Give your REASONING: Write the scientific concept or definition that you thought about to make your claim.

Hint: Think about how biodiversity is related to abundance and richness.

Biodiversity is related to abundance and richness because it shows the two amounts in one word.

Give your EVIDENCE: Look at your data and find two pieces of evidence that help answer the scientific question.

Hint: Think about which zone has the highest abundance and richness.

1. Zone A has the most richness.

2. Zone A has a lot of abundance.

NOTES: Student responses are shown in italics. See text for discussion.

TABLE 4-1 Schoolyard Animal Data for Example 6 Summative Task, “Biodiversity in the Schoolyard”

guidance that tasks might include, depending on their purpose and the stage students will have reached in the curriculum when they undertake the tasks.

Box 4-5 shows a progression for the design of tasks that assess one example of three-dimensional learning: the practice of constructing explanations with one core idea and crosscutting concept. This progression design was based on studies that examined students’ development of three-dimensional learning over time, which showed that students need less support in tackling assessment tasks as they progress in knowledge development (see, e.g., Songer et al., 2009).

Tasks 3 and 4, which target the same performance expectation but have different assessment purposes, illustrate this point. Task 3 was implemented midway through the curricular unit to provide formative information for the teacher on the kinds of three-dimensional learning students could demonstrate with the assistance of guides. Task 3 was classified as a Level 5 task (in terms of the progression shown in Box 4-5 ) and included two types of guides for the students (core idea guides in text boxes and practice guides that offer the definition of claim, evidence, and reasoning). Task 4 was classified as a Level 7 task because it did not provide students with any guides to the construction of explanations.

Example 7: Climate Change

The committee chose this flexible online assessment task to demonstrate how assessment can be customized to suit different purposes. It was designed to probe student understanding and to facilitate a teacher’s review of responses. Computer software allows teachers to tailor online assessment tasks to their purpose and to the stage of learning that students have reached, by offering more or less supporting information

TASK AND CODING RUBRIC FOR TASK 4 IN EXAMPLE 6 , “BIODIVERSITY IN THE SCHOOLYARD”

Write a scientific argument to support your answer for the following question.

Scientific Question: Which zone has the highest biodiversity?

4 points: Contains all parts of explanation (correct claim, 2 pieces of evidence, reasoning)

3 points: Contains correct claim and 2 pieces of evidence but incorrect or no reasoning

2 points: Contains correct claim + 1 piece correct evidence OR 2 pieces correct evidence and 1 piece incorrect evidence

1 point: Contains correct claim, but no evidence or incorrect evidence and incorrect or no reasoning
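Read as a decision procedure, the point scale can be sketched in code. This is one interpretation, not part of the published rubric; in particular, awarding 0 points when the claim itself is wrong is an assumption the rubric leaves implicit.

```python
def score_explanation(correct_claim, n_correct_evidence, n_incorrect_evidence,
                      correct_reasoning):
    """Score a written explanation on the 4-point Task 4 rubric (one interpretation)."""
    if not correct_claim:
        return 0  # assumption: no credit without a correct claim
    if n_correct_evidence >= 2 and n_incorrect_evidence == 0:
        # Correct claim and two pieces of correct evidence: reasoning decides 4 vs. 3.
        return 4 if correct_reasoning else 3
    if n_correct_evidence == 1 or (n_correct_evidence >= 2 and n_incorrect_evidence >= 1):
        # One correct piece, or two correct pieces mixed with an incorrect one.
        return 2
    # Correct claim but no usable evidence.
    return 1

print(score_explanation(True, 2, 0, True))   # 4
print(score_explanation(True, 2, 1, True))   # 2
```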

Correct Responses

Correct: Zone B has the highest biodiversity.

1. Zone B has the highest animal richness.

2. Zone B has high animal abundance.

Explicit written statement that ties evidence to claim with a reasoning statement: that is, Zone B has the highest biodiversity because it has the highest animal richness and high animal abundance. Biodiversity is a combination of both richness and abundance, not just one or the other.

and guidance. The tasks may be used for both formative and summative purposes: they are designed to function close to instruction.

TABLE 4-2 Characteristics of Tasks in Example 6 , “Biodiversity in the Schoolyard”

This online assessment task is part of a climate change curriculum for high school students. It targets the performance expectation that students use geoscience data and the results from global climate models to make evidence-based forecasts of the impacts of climate change on organisms and ecosystems. 14 This example illustrates four potential benefits of online assessment tasks:

  • the capacity to present data from various external sources to students;
  • the capacity to make information about the quality and range of student responses continuously available to teachers so they can be used for formative purposes;
  • the possibility that tasks can be modified to provide more or less support, or scaffolding, depending on the point in the curriculum at which the task is being used; and
  • the possibility that the tasks can be modified to be more or less active depending on teachers’ or students’ preferences.

14 This performance expectation is similar to two NGSS performance expectations, HS-LS2-2 and HS-ESS3-5, which cover the scientific practices of analyzing and interpreting data and obtaining, evaluating, and communicating evidence.

images

FIGURE 4-6 Combining practice, crosscutting concept, and core idea to form a blended learning performance expectation, assessed in Tasks 3 and 4, for Example 6 , “Biodiversity in the Schoolyard.”

In the instruction that takes place prior to this task, students will have selected a focal species in a particular ecosystem and studied its needs and how it is distributed in the ecosystem. They will also have become familiar with a set of model-based climate projections, called Future 1, 2, and 3, that represent more and less severe climate change effects. Those projections are taken from the Intergovernmental Panel on Climate Change (IPCC) data predictions for the year 2100 (Intergovernmental Panel on Climate Change, 2007): see Figure 4-7 . The materials provided online as part of the activity include

  • global climate model information presented in a table showing three different IPCC climate change scenarios (shown in Figure 4-7 );
  • geosciences data in the form of a map of North America that illustrates the current and the predicted distribution of locations of optimal biotic and abiotic 15 conditions for a species, as predicted by the IPCC Future 3 scenario: see Figure 4-8 ; and
  • an online guide for students in the development of predictions, which prompts them as to what is needed and records their responses in a database that teachers and students can use. (The teacher can choose whether or not to allow students access to the pop-up text that describes what is meant by a claim or by evidence.)

15 The biotic component of an environment consists of the living species that populate it, while the abiotic components are the nonliving influences such as geography, soil, water, and climate that are specific to the particular region.

PROGRESSION FOR MULTIDIMENSIONAL LEARNING TASK DESIGN

This progression covers constructing a claim with evidence and constructing explanations with and without guidance. The + and ++ symbols represent the number of guides provided in the task.

SOURCE: Adapted from Gotwals and Songer (2013).

The task asks students to make and support a prediction in answer to the question, “In Future 3, would climate change impact your focal species?” Students are asked to provide the following:

images

FIGURE 4-7 Three simplified Intergovernmental Panel on Climate Change (IPCC)-modeled future scenarios for the year 2100.

SOURCE: Adapted from Peters et al. (2012).

  • a claim (the prediction) as to whether or not they believe the IPCC scenario information suggests that climate change will affect their chosen animal;
  • reasoning that connects their prediction to the model-based evidence, such as noting that their species needs a particular prey to survive; and
  • model-based evidence that is drawn from the information in the maps of model-based climate projections, such as whether or not the distribution of conditions needed by the animal and its food source in the future scenario will be significantly different from what it is at present.

Table 4-3 shows sample student responses that illustrate both correct responses and common errors. Students 1, 3, and 4 have made accurate predictions, and supplied reasoning and evidence; students 2, 5, and 6 demonstrate common errors, including insufficient evidence (student 2), inappropriate reasoning and evidence (student 5), and confusion between reasoning and evidence (student 6). Teachers can use this display to quickly see the range of responses in the class and use that information to make decisions about future instruction.

Example 8: Ecosystems

The committee chose this example, drawn from the SimScientists project, to demonstrate the use of simulation-based modules designed to be embedded in a curriculum unit to provide both formative and summative assessment information. Middle school students use computer simulations to demonstrate their understanding of core ideas about ecosystem dynamics and the progress of their thinking as they move from exploring ecosystem components to interactions of those components to the way systems behave. Thus, the simulations also address the crosscutting concept of systems. The assessment components function close to classroom instruction.

images

FIGURE 4-8 Current and predicted Future 3 distribution for the red squirrel for Example 7 , “Climate Change.” SOURCE: Songer et al. (2013). Copyright by the author; used with permission.

In this set of classroom modules, students use simulated, dynamic representations of particular ecosystems, such as a mountain lake or grasslands, to investigate features common to all ecosystems. The students investigate the roles of and relationships among species within habitats and the effects of these interactions on population levels (Quellmalz et al., 2009). Simulations of these environments can be used both to improve students’ understanding of complex ecosystems and to assess what they have learned. The simulated environments provide multiple representations of system models at different scales. They require students to apply core ideas about ecosystems and to carry out such practices as building and using models, planning and conducting investigations (by manipulating the system elements), and interpreting patterns.

TABLE 4-3 Sample Student Responses in Example 7, “Climate Change”

NOTE: Both correct and incorrect responses are shown.

SOURCE: Songer et al. (2013). Copyright by the author; used with permission.

FIGURE 4-9 Ecosystems target model for Example 8, “Ecosystems.”

SOURCE: SimScientists Calipers II project (2013). Reprinted with permission.

Figure 4-9 shows a model of the characteristics of and changes in ecosystems as it would appear on the screen. The model would be very difficult for students to observe or investigate using printed curriculum materials. 16 For example, Figure 4-10 shows part of a simulated mountain lake environment. Students observe animations of the organisms’ interactions and are then asked to draw a food web directly on the screen to represent a model of the flow of matter and energy in the ecosystem. If a student draws an arrow that links a food consumer to the wrong source of matter and energy, a feedback box coaches the student to observe again by reviewing the animation, thus providing formative feedback.

16 These same features also make it difficult to display the full impact of the simulation in this report.

FIGURE 4-10 Screenshot of a curriculum-embedded assessment of a student constructing a food web to model the flow of matter and energy in the ecosystem (with feedback and coaching); part of Example 8, “Ecosystems.”

SOURCE: Quellmalz et al. (2012, fig. 2, p. 372). Reprinted with permission from John Wiley & Sons.

In the subsequent curriculum-embedded assessment, students investigate what happens to population levels when relative starting numbers of particular organisms are varied: see Figure 4-11. The interactive simulation allows students to conduct multiple trials to build, evaluate, and critique models of balanced ecosystems, interpret data, and draw conclusions. If the purpose of the assessment is formative, students can be given feedback and a graduated sequence of coaching by the program. Figure 4-11 shows a feedback box for this set of activities, which not only notifies the student that an error has occurred but also prompts the student to analyze the population graphs and design a third trial that maintains the survival of the organisms. As part of the assessment, students also complete tasks that ask them to construct descriptions, explanations, and conclusions. They are guided in assessing their own work by judging whether their response meets specified criteria, and then how well their response matches a sample one, as illustrated in Figure 4-12.

FIGURE 4-11 Screenshot of a curriculum-embedded assessment of a student using simulations to build balanced ecosystem population models (with feedback and coaching); part of Example 8, “Ecosystems.”

The SimScientists assessments are designed to provide feedback that addresses common student misconceptions about the ecosystem components, interactions that take place within them, or the way they behave, as well as errors in the use of science practices. The simulation generates reports to students about their progress toward goals for conceptual understanding and use of practices, and it also provides a variety of reporting options for teachers. Teachers can view progress reports for individual students as well as class-level reports (Quellmalz et al., 2012).

The SimScientists assessment system was also designed to collect summative assessment information after students complete a regular curriculum unit on ecosystems (which might have included the formative assessment modules described above). Figures 4-13 and 4-14 show tasks that are part of a benchmark assessment scenario in which students are asked to investigate ways to restore an Australian grasslands ecosystem—one that is novel to them—that has been affected by a significant fire. No feedback or coaching is provided. Students investigate the roles of and relationships among the animals, birds, insects, and grass by observing animations of their interactions. Students draw a food web representing a model of the flow of energy and matter throughout the ecosystem, based on the interactions they have observed. Students then use the simulation models to plan, conduct, interpret, explain, and critique investigations of what happens to population levels when numbers of particular organisms are varied. In a culminating task, students present their findings about the grasslands ecosystem.

FIGURE 4-12 Screenshot of a curriculum-embedded assessment of a student comparing his/her constructed response describing the mountain lake matter and energy flow model to a sample response; part of Example 8, “Ecosystems.”

These task examples from the SimScientists project illustrate ways that assessment tasks can take advantage of technology to represent generalizable, progressively more complex models of science systems, present challenging scientific reasoning tasks, provide individualized feedback, customize scaffolding, and promote self-assessment and metacognitive skills. Reports generated for teachers and students indicate the level of additional help students may need and classify students into groups for which tailored follow-on reflection activities are recommended (to be conducted during a subsequent class period).

FIGURE 4-13 Screenshot of a benchmark summative assessment of a student constructing a food web to model the flow of matter and energy in the ecosystem (without feedback and coaching); part of Example 8, “Ecosystems.”

FIGURE 4-14 Screenshot of a benchmark summative assessment of a student using simulations to build balanced ecosystem population models (without feedback and coaching); part of Example 8, “Ecosystems.”

These formative assessments also have an instructional purpose. They are designed to promote model-based reasoning about the common organization and behaviors of all ecosystems (see Figure 4-9) and to teach students how to transfer knowledge they gain about how one ecosystem functions to examples of new ecosystems (Buckley and Quellmalz, 2013). 17

LESSONS FROM THE EXAMPLES

The six examples discussed above, as well as the one in Chapter 2, demonstrate characteristics we believe are needed to assess the learning called for in the NGSS and a range of approaches to using assessments constructively in the classroom to support such learning. A key goal of classroom assessment is to elicit and make visible students’ ways of thinking and acting. The examples demonstrate that it is possible to design tasks and contexts in which teachers elicit student thinking about a disciplinary core idea or crosscutting concept by engaging them in a scientific practice. The examples involve activities designed to stimulate classroom conversations or to produce a range of artifacts (products) that provide information to teachers about students’ current ways of thinking and acting, or both. This information can be used to adjust instruction or to evaluate learning that occurred during a specified time. Some of the examples involve formal scoring, while others are used by teachers to adjust their instructional activities without necessarily assigning student scores.

Types of Assessment Activities

In “What Is Going on Inside Me?” (Example 1 in Chapter 2), students produce a written evidence-based argument for an explanation of how animals get energy from food and defend that explanation orally in front of the class. In “Measuring Silkworms” (Example 3, above, and also discussed in Chapter 3), students produce representations of data and discuss what they do and do not reveal about the data. In “Behavior of Air” (Example 4, above), models developed by groups of students are the stimulus for class discussion and argumentation that the teacher uses to diagnose and highlight discrepancies in students’ ideas. In “Movement of Water” (Example 5, above), multiple-choice questions that students answer using clickers are the stimulus for class discussion (an assessment conversation). In each of these examples, students’ writing and classroom discourse provide evidence that can be used in decisions about whether additional activities for learning might be needed and, if so, what kinds of activities might be most productive. In many of these examples, listening to and engaging with other students as they discuss and defend their responses is a part of the learning process, as students work toward a classroom consensus explanation or a model based on the evidence they have collected. The classroom discussion itself in these cases is the basis for the formative assessment process.

17 The system was designed using the evidence-centered design approach discussed in Chapter 3. Research on the assessments supports the idea that this approach could be a part of a coherent, balanced state science assessment system: see discussion in Chapter 6.

We note that when assessments are designed to be used formatively, the goal is sometimes not to assign scores to individual students but rather to decide what further instruction is needed for groups of students or the class as a whole. Thus, instead of scoring rubrics, criteria or rubrics that can help guide instructional decisions may be used. (When the goal includes assessment of both individuals and groups, both types of scoring rubrics would be needed.) Teachers need support to learn to be intentional and deliberative about such decisions. In the examples shown, designers of curriculum and instruction have developed probes that address likely learning challenges, and teachers are supported in recognizing these challenges and in the use of the probes to seek evidence of what their students have learned and not learned, along some continuum.

“Ecosystems” (Example 8, above) is a computer-based system in which students use simulations both to learn and to demonstrate what they have learned about food webs. It includes tasks that are explicitly designed for assessment. Other tasks may not be sharply distinguished from ongoing classroom activities. The data collection tasks in “Biodiversity in the Schoolyard” (Example 6, above) are part of students’ ongoing investigations, not separate from them, but they can provide evidence that can be used for formative purposes.

Similarly, in “Measuring Silkworms” (Example 3), students create displays as part of the learning process in order to answer questions about biological growth. Constructing these displays engages students in the practice of analyzing data, and their displays are also a source of evidence for teachers about students’ proficiencies in reasoning about data aggregations; thus they can be used formatively. These forms of reasoning also become a topic of instructional conversations, so that students are encouraged to consider additional aspects of data representation, including tradeoffs about what different kinds of displays do and do not show about the same data. As students improve their capacity to visualize data, the data discussion then leads them to notice characteristics of organisms or populations that are otherwise not apparent. This interplay between learning a practice (data representation as an aspect of data analysis) and learning about a core idea (variation in a population), as well as a crosscutting concept (recognizing and interpreting patterns), provides an example of the power of three-dimensional learning, as well as an example of an assessment strategy.

Interpreting Results

A structured framework for interpreting evidence of student thinking is needed to make use of the task artifacts (products), which might include data displays, written explanations, or oral arguments. As we discuss in Chapter 3, interpretation of results is a core element of assessment, and it should be a part of the assessment design. An interpretive framework can help teachers and students themselves recognize how far they have progressed and identify intermediate stages of understanding and problematic ideas. “Measuring Silkworms” shows one such framework, a learning progression for data display developed jointly by researchers and teachers. “Behavior of Air” is similarly grounded in a learning progressions approach. “Movement of Water” presents an alternative example, using what is called a facets-based approach 18 to track the stages in a learning progression (discussed in Chapter 2)—that is, to identify ideas that are commonly held by students relative to a disciplinary core idea. Although these preconceptions are often labeled as misconceptions or problematic ideas, they are the base on which student learning must be built. Diagnosing students’ preconceptions can help teachers identify the types of instruction needed to move students toward a more scientific conception of the topic.

What these examples have in common is that they allow teachers to group students into categories, which helps with the difficult task of making sense of many kinds of student thinking; they also provide tools for helping teachers decide what to do next. In “Movement of Water,” for example, students’ use of clickers to answer questions gives teachers initial feedback on the distribution of student ideas in the classroom. Depending on the prevalence of particular problematic ideas or forms of reasoning and their persistence in subsequent class discussion, teachers can choose to use a “contingent activity” that provides a different way of presenting a disciplinary core idea.

18 In this approach, a facet is a piece of knowledge constructed by a learner in order to solve a problem or explain an event (diSessa and Minstrell, 1998). Facets that are related to one another can be organized into clusters, and the basis for grouping can be either an explanation or an interpretation of a physical situation or a disciplinary core idea (Minstrell and Kraus, 2005). Clusters comprise goal facets (which are often standards or disciplinary core ideas) and problematic facets (which are related to the disciplinary idea but which represent ways of reasoning about the idea that diverge from the goal facet). The facets perspective assumes that, in addition to problematic thinking, students also possess insights and understandings about the disciplinary core idea that can be deepened and revised through additional learning opportunities (Minstrell and van Zee, 2003).

The interpretive framework for evaluating evidence has to be expressed with enough specificity to make it useful for helping teachers decide on next steps. The construct map for data display in “Measuring Silkworms” meets this requirement: a representation that articulated only the distinction between the lowest and highest levels of the construct map would be less useful. Learning progressions that articulate points of transition that take place across multiple years—rather than transitions that may occur in the course of a lesson or a unit—would be less useful for classroom decision making (although a single classroom may often include students who span such a range) (Alonzo and Gearhart, 2006).

Using Multiple Practices

The examples above involve tasks that cross different domains of science and cover multiple practices. “What Is Going on Inside Me?,” for example, requires students to demonstrate their understanding of how chemical processes support biological processes. It asks students not only to apply the crosscutting concept of energy and matter conservation, but also to support their arguments with explicit evidence about the chemical mechanism involved. In “Measuring Silkworms” and “Biodiversity in the Schoolyard,” students’ responses to the different tasks can provide evidence of their understanding of the crosscutting concept of patterns. It is important to note, however, that “patterns” in each case has a different and particular disciplinary interpretation. In “Measuring Silkworms,” students must recognize patterns in a display of data, in the form of the “shapes” the data can take, and begin to link ideas about growth and variation to these shapes. In contrast, in “Biodiversity in the Schoolyard,” students need to recognize patterns in the distribution and numbers of organisms in order to use the data in constructing arguments.

Three of the examples—“Measuring Silkworms,” “Biodiversity in the Schoolyard,” and “Climate Change”—provide some classroom-level snapshots of emerging proficiency with aspects of the practices of analyzing and interpreting data and using mathematics and computational thinking. We note, though, that each of these practices has multiple aspects, so multiple tasks would be needed to provide a complete picture of students’ capacity with each of them. Although assessment tasks can identify particular skills related to specific practices, evaluating students’ disposition to engage in these practices without prompting likely requires some form of direct observation or assessment of the products of more open-ended student projects. 19

In instruction, students engage in practices in interconnected ways that support their ongoing investigations of phenomena. Thus, students are likely to find that to address their questions, they will need to decide which sorts of data (including observational data) are needed; that is, they will need to design an investigation, collect those data, interpret the results, and construct explanations that relate their evidence to both claims and reasoning. It makes little sense for students to construct data displays in the absence of a question. And it is not possible to assess the adequacy of their displays without knowing what question they are pursuing. In the past, teachers might have tried to isolate the skill of graphing data as something to teach separately from disciplinary content, but the new science framework and the NGSS call for teachers to structure tasks and interpret evidence in a broad context of learning that integrates or connects multiple content ideas and treats scientific practices as interrelated. Similarly, assessment tasks designed to examine students’ facility with a particular practice may require students to draw on other practices as they complete the task.

We stress in Chapter 2 that a key principle of the framework is that science education should connect to students’ interests and experiences. Students are likely to bring diverse interests and experiences to the classroom from their families and cultural communities. A potential focus of classroom assessment at the outset of instruction is to elicit students’ interests and experiences that may be relevant to the goals for instruction. However, identifying interests has not often been a focus of classroom assessment research in science, although it has been used to motivate and design assessments in specific curricula. 20

One approach that could prove fruitful for classroom assessment is a strategy used in an elementary curriculum unit called Micros and Me (Tzou et al., 2007). The unit aims to engage students in the practice of argumentation to learn about key ideas in microbiology. In contrast to many curriculum units, however, this example provides students with the opportunity to pursue investigations related to issues that are relevant to them. The researchers adapted a qualitative methodology from psychology, photo-elicitation, which is used to identify these issues. Research participants take photos that become the basis for interviews that elicit aspects of participants’ everyday lives (Clark-Ibañez, 2004). In Micros and Me, at the beginning of the unit, students take photos of things or activities they do to prevent disease and stay healthy. They share these photos in class, as a way to bring personally relevant experiences into the classroom to launch the unit. Their documentation also helps launch a student-led investigation focused on students’ own questions, which are refined as students encounter key ideas in microbiology.

19 The phrase “disposition to engage” is used in the context of science education to refer to students’ degree of engagement with and motivation to persevere with scientific thinking.

20 One example is Issues, Evidence, and You: see Science Education for Public Understanding Program (SEPUP) (1995) and Wilson and Sloane (2000).

In describing the curriculum, Tzou and Bell (2010) do not call out the practice of self-documentation of students’ personally relevant experiences as a form of assessment. At the same time, they note that a key function of self-documentation is to “elicit and make visible students’ everyday expertise” relevant to the unit content (Tzou and Bell, 2010, p. 1136). Eliciting and making visible prior knowledge is an important aspect of assessment that is used to guide instruction. It holds promise as a way to identify diversity in the classroom in science that can be used to help students productively engage in science practices (Clark-Ibañez, 2004; Tzou and Bell, 2010; Tzou et al., 2007).

Professional Development

The framework emphasizes that professional development will be an indispensable component of the changes to science education it calls for (see National Research Council, 2012a, Ch. 10). The needed changes in instruction are beyond our charge, but in the context of classroom assessment, we note that significant adaptation will be asked of teachers. They will need systematic opportunities to learn how to use classroom discourse as a means to elicit, develop, and assess student thinking. The Contingent Pedagogies Project (see Example 4, above) illustrates one way to organize such professional development. In that approach, professional development included opportunities for teachers to learn how to orchestrate classroom discussion of core disciplinary ideas. Teachers also learned how to make use of specific discussion strategies to support the practice of argumentation.

Eliciting student thinking through skillful use of discussion is not enough, however. Tasks or teacher questions also have to successfully elicit and display students’ problematic ways of reasoning about disciplinary core ideas and problematic aspects of their participation in practices. They must also elicit the interests and experiences students bring, so they can build on them throughout instruction. This is part of the process of integrating teaching and assessment. Thus, both teachers and assessment developers need to be aware of the typical student ideas about a topic and the various problematic alternative conceptions that students are likely to hold. (This is often called pedagogical content knowledge.) In addition, teachers need a system for interpreting students’ responses to tasks or questions. That system should be intelligible and usable in practice: it cannot be so elaborate that teachers find it difficult to use to understand student thinking during instruction. (The construct map and its associated scoring guide shown in Chapter 3 are an example of such a system.)

CONCLUSIONS AND RECOMMENDATIONS

The primary conclusion we draw from these examples is that it is possible to design tasks and contexts in which teachers elicit students’ thinking about disciplinary core ideas and crosscutting concepts by engaging them in scientific practices. Tasks designed with the characteristics we have discussed (three dimensions, interconnections among concepts and practices, a way to identify students’ place on a continuum) produce artifacts, discussions, and activities that provide teachers with information about students’ thinking and so can help them decide how to adjust subsequent instruction or how to evaluate the learning that took place over a specified period of time.

Questions have been raised about whether students can achieve the ambitious performance expectations in the NGSS. The implementation of the NGSS is a complex subject that is beyond the scope of our charge; however, each of the examples shown has been implemented with diverse samples of students, 21 and many students succeeded on them (although some did not). The tasks in our examples assess learning that is part of a well-designed, coherent sequence of instruction, on topics and in ways that are very similar to NGSS performance expectations. Each example offers multiple opportunities to engage in scientific practices and encourages students to draw connections among ideas, thus developing familiarity with crosscutting concepts.

CONCLUSION 4-1 Tasks designed to assess the performance expectations in the Next Generation Science Standards will need to have the following characteristics:

21 Samples included students from rural and inner-city schools, from diverse racial and ethnic backgrounds, and English-language learners.

  • multiple components that reflect the connected use of different scientific practices in the context of interconnected disciplinary ideas and crosscutting concepts;
  • a structure that reflects the progressive nature of learning by providing information about where students fall on a continuum between expected beginning and ending points in a given unit or grade; and
  • an interpretive system for evaluating a range of student products that is specific enough to be useful for helping teachers understand the range of student responses and that provides tools to help them decide on next steps in instruction.

CONCLUSION 4-2 To develop the skills and dispositions to use scientific and engineering practices needed to further their learning and to solve problems, students need to experience instruction in which they (1) use multiple practices in developing a particular core idea and (2) apply each practice in the context of multiple core ideas. Effective use of the practices often requires that they be used in concert with one another, such as in supporting explanation with an argument or using mathematics to analyze data. Classroom assessments should include at least some tasks that reflect the connected use of multiple practices.

CONCLUSION 4-3 It is possible to design assessment tasks and scoring rubrics that assess three-dimensional science learning. Such assessments provide evidence that informs teachers and students of the strengths and weaknesses of a student’s current understanding, which can guide further instruction and student learning and can also be used to evaluate students’ learning.

We emphasize that implementing the conception of science learning envisioned in the framework and the NGSS will require teachers who are well trained in assessment strategies such as those discussed in this chapter. Professional development will be essential in meeting this goal.

CONCLUSION 4-4 Assessments of three-dimensional science learning are challenging to design, implement, and properly interpret. Teachers will need extensive professional development to successfully incorporate this type of assessment into their practice.

On the basis of the conclusions above, the committee offers recommendations about professional development and about curriculum and assessment development.

RECOMMENDATION 4-1 State and district leaders who design professional development for teachers should ensure that it addresses the changes called for by the framework and the Next Generation Science Standards in both the design and use of assessment tasks and instructional strategies. Professional development must support teachers in integrating practices, crosscutting concepts, and disciplinary core ideas in inclusive and engaging instruction and in using new modes of assessment that support such instructional activities.

Developing assessment tasks of this type will require the participation of several different kinds of experts. First, for the tasks to accurately reflect science ideas, scientists will need to be involved. Second, experts in science learning will also be needed to ensure that knowledge from research on learning is used as a guide to what is expected of students. Third, assessment experts will be needed to clarify relationships among tasks and the forms of knowledge and practice that the items are intended to elicit. Fourth, practitioners will be needed to ensure that the tasks and interpretive frameworks linked to them are usable in classrooms. And fifth, as we discuss further in Chapter 6, this multidisciplinary group of experts will need to include people who have knowledge of and experience with population subgroups, such as students with disabilities and students with varied cultural backgrounds, to ensure that the tasks are not biased for or against any subgroups of students for reasons irrelevant to what is being measured.

We note also that curricula, textbooks, and other resources, such as digital content, in which assessments may be embedded, will also need to reflect the characteristics we have discussed—and their development will present similar challenges. For teachers to incorporate tasks of this type into their practice, and to design additional tasks for their classrooms, they will need to have worked with many good examples in their curriculum materials and professional development opportunities.

RECOMMENDATION 4-2 Curriculum developers, assessment developers, and others who create resource materials aligned to the science framework and the Next Generation Science Standards should ensure that assessment activities included in such materials (such as mid- and end-of-chapter activities, suggested tasks for unit assessment, and online activities) require students to engage in practices that demonstrate their understanding of core ideas and crosscutting concepts. These materials should also reflect multiple dimensions of diversity (e.g., by connecting with students’ cultural and linguistic identities). In designing these materials, development teams need to include experts in science, science learning, assessment design, equity and diversity, and science teaching.

Assessments, understood as tools for tracking what and how well students have learned, play a critical role in the classroom. Developing Assessments for the Next Generation Science Standards develops an approach to science assessment to meet the vision of science education for the future as it has been elaborated in A Framework for K-12 Science Education (Framework) and Next Generation Science Standards (NGSS). These documents are brand new and the changes they call for are barely under way, but the new assessments will be needed as soon as states and districts begin the process of implementing the NGSS and changing their approach to science education.

The new Framework and the NGSS are designed to guide educators in significantly altering the way K-12 science is taught. The Framework is aimed at making science education more closely resemble the way scientists actually work and think, and making instruction reflect research on learning that demonstrates the importance of building coherent understandings over time. It structures science education around three dimensions - the practices through which scientists and engineers do their work, the key crosscutting concepts that cut across disciplines, and the core ideas of the disciplines - and argues that they should be interwoven in every aspect of science education, building in sophistication as students progress through grades K-12.

Developing Assessments for the Next Generation Science Standards recommends strategies for developing assessments that yield valid measures of student proficiency in science as described in the new Framework. This report reviews recent and current work in science assessment to determine which aspects of the Framework's vision can be assessed with available techniques and what additional research and development will be needed to support an assessment system that fully meets that vision. The report offers a systems approach to science assessment, in which a range of assessment strategies are designed to answer different kinds of questions with appropriate degrees of specificity and provide results that complement one another.

Developing Assessments for the Next Generation Science Standards makes the case that a science assessment system that meets the Framework's vision should consist of assessments designed to support classroom instruction, assessments designed to monitor science learning on a broader scale, and indicators designed to track opportunity to learn. New standards for science education make clear that new modes of assessment designed to measure the integrated learning they promote are essential. The recommendations of this report will be key to making sure that the dramatic changes in curriculum and instruction signaled by the Framework and the NGSS reduce inequities in science education and raise the level of science education for all students.


Created by the Great Schools Partnership, the Glossary of Education Reform is a comprehensive online resource that describes widely used school-improvement terms, concepts, and strategies for journalists, parents, and community members.


Formative Assessment

Formative assessment refers to a wide variety of methods that teachers use to conduct in-process evaluations of student comprehension, learning needs, and academic progress during a lesson, unit, or course. Formative assessments help teachers identify concepts that students are struggling to understand, skills they are having difficulty acquiring, or learning standards they have not yet achieved so that adjustments can be made to lessons, instructional techniques, and academic support.

The general goal of formative assessment is to collect detailed information that can be used to improve instruction and student learning while it’s happening. What makes an assessment “formative” is not the design of a test, technique, or self-evaluation, per se, but the way it is used—i.e., to inform in-process teaching and learning modifications.

Formative assessments are commonly contrasted with summative assessments, which are used to evaluate student learning progress and achievement at the conclusion of a specific instructional period—usually at the end of a project, unit, course, semester, program, or school year. In other words, formative assessments are for learning, while summative assessments are of learning. Or as assessment expert Paul Black put it, “When the cook tastes the soup, that’s formative assessment. When the customer tastes the soup, that’s summative assessment.” It should be noted, however, that the distinction between formative and summative is often fuzzy in practice, and educators may hold divergent interpretations of and opinions on the subject.

Many educators and experts believe that formative assessment is an integral part of effective teaching. In contrast with most summative assessments, which are deliberately set apart from instruction, formative assessments are integrated into the teaching and learning process. For example, a formative-assessment technique could be as simple as a teacher asking students to raise their hands if they feel they have understood a newly introduced concept, or it could be as sophisticated as having students complete a self-assessment of their own writing (typically using a rubric outlining the criteria) that the teacher then reviews and comments on. While formative assessments help teachers identify learning needs and problems, in many cases the assessments also help students develop a stronger understanding of their own academic strengths and weaknesses. When students know what they do well and what they need to work harder on, it can help them take greater responsibility over their own learning and academic progress.

While the same assessment technique or process could, in theory, be used for either formative or summative purposes, many summative assessments are unsuitable for formative purposes because they do not provide useful feedback. For example, standardized-test scores may not be available to teachers for months after their students take the test (so the results cannot be used to modify lessons or teaching and better prepare students), or the assessments may not be specific or fine-grained enough to give teachers and students the detailed information they need to improve.

The following are a few representative examples of formative assessments:

  • Questions that teachers pose to individual students and groups of students during the learning process to determine what specific concepts or skills they may be having trouble with. A wide variety of intentional questioning strategies may be employed, such as phrasing questions in specific ways to elicit more useful responses.
  • Specific, detailed, and constructive feedback that teachers provide on student work, such as journal entries, essays, worksheets, research papers, projects, ungraded quizzes, lab results, or works of art, design, and performance. The feedback may be used to revise or improve a work product, for example.
  • “Exit slips” or “exit tickets” that quickly collect student responses to a teacher’s questions at the end of a lesson or class period. Based on what the responses indicate, the teacher can then modify the next lesson to address concepts that students have failed to comprehend or skills they may be struggling with. “Admit slips” are a similar strategy used at the beginning of a class or lesson to determine what students have retained from previous learning experiences.
  • Self-assessments that ask students to think about their own learning process, to reflect on what they do well or struggle with, and to articulate what they have learned or still need to learn to meet course expectations or learning standards.
  • Peer assessments that allow students to use one another as learning resources. For example, “workshopping” a piece of writing with classmates is one common form of peer assessment, particularly if students follow a rubric or guidelines provided by a teacher.
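To make the exit-ticket example above concrete, the quick tally a teacher performs before planning the next lesson can be sketched in a few lines of Python. Everything here is hypothetical for illustration: the concept names, the response data, and the 80% mastery cut-off are invented, and a real class would substitute its own ticket data and threshold.

```python
from collections import defaultdict

# Hypothetical exit-ticket responses: each ticket records which concept a
# question targeted and whether the student's answer showed understanding.
responses = [
    {"concept": "fractions", "correct": True},
    {"concept": "fractions", "correct": False},
    {"concept": "fractions", "correct": True},
    {"concept": "decimals", "correct": False},
    {"concept": "decimals", "correct": False},
]

def concepts_to_reteach(tickets, threshold=0.8):
    """Flag concepts whose share of correct responses falls below threshold.

    The 0.8 default is an assumed mastery cut-off, not a fixed rule.
    """
    totals = defaultdict(int)    # tickets seen per concept
    correct = defaultdict(int)   # correct tickets per concept
    for ticket in tickets:
        totals[ticket["concept"]] += 1
        if ticket["correct"]:
            correct[ticket["concept"]] += 1
    return sorted(c for c in totals if correct[c] / totals[c] < threshold)

# fractions: 2/3 correct (< 0.8), decimals: 0/2 correct -> both flagged
print(concepts_to_reteach(responses))  # → ['decimals', 'fractions']
```

The same pattern applies to admit slips or ungraded quizzes: aggregate responses by concept, compare against a chosen cut-off, and let the result inform which lessons to adjust.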

In addition to the reasons addressed above, educators may also use formative assessment to:

  • Refocus students on the learning process and its intrinsic value, rather than on grades or extrinsic rewards.
  • Encourage students to build on their strengths rather than fixate or dwell on their deficits. (For a related discussion, see growth mindset.)
  • Help students become more aware of their learning needs, strengths, and interests so they can take greater responsibility over their own educational growth. For example, students may learn how to self-assess their own progress and self-regulate their behaviors.
  • Give students more detailed, precise, and useful information. Because grades and test scores only provide a general impression of academic achievement, usually at the completion of an instructional period, formative feedback can help to clarify and calibrate learning expectations for both students and parents. Students gain a clearer understanding of what is expected of them, and parents have more detailed information they can use to more effectively support their child’s education.
  • Raise or accelerate the educational achievement of all students, while also reducing learning gaps and achievement gaps.

While the formative-assessment concept has only existed since the 1960s, educators have arguably been using “formative assessments” in various forms since the invention of teaching. As an intentional school-improvement strategy, however, formative assessment has received growing attention from educators and researchers in recent decades. In fact, it is now widely considered to be one of the more effective instructional strategies used by teachers, and there is a growing body of literature and academic research on the topic.

Schools are now more likely to encourage or require teachers to use formative-assessment strategies in the classroom, and there are a growing number of professional-development opportunities available to educators on the subject. Formative assessments are also integral components of personalized learning and other educational strategies designed to tailor lessons and instruction to the distinct learning needs and interests of individual students.

While there is relatively little disagreement in the education community about the utility of formative assessment, debates or disagreements may stem from differing interpretations of the term. For example, some educators believe the term is loosely applied to forms of assessment that are not “truly” formative, while others believe that formative assessment is rarely used appropriately or effectively in the classroom.

Another common debate is whether formative assessments can or should be graded. Many educators contend that formative assessments can only be considered truly formative when they are ungraded and used exclusively to improve student learning. If grades are assigned to a quiz, test, project, or other work product, the reasoning goes, they become de facto summative assessments—i.e., the act of assigning a grade turns the assessment into a performance evaluation that is documented in a student’s academic record, as opposed to a diagnostic strategy used to improve student understanding and preparation before they are given a graded test or assignment.

Some educators also make a distinction between “pure” formative assessments—those that are used on a daily basis by teachers while they are instructing students—and “interim” or “benchmark” assessments, which are typically periodic or quarterly assessments used to determine where students are in their learning progress or whether they are on track to meet expected learning standards. While some educators may argue that any assessment method that is used diagnostically could be considered formative, including interim assessments, others contend that these two forms of assessment should remain distinct, given that different strategies, techniques, and professional development may be required.

Some proponents of formative assessment also suspect that testing companies mislabel and market some interim standardized tests as “formative” to capitalize on and profit from the popularity of the idea. Some observers express skepticism that commercial or prepackaged products can be authentically formative, arguing that formative assessment is a sophisticated instructional technique, and to do it well requires both a first-hand understanding of the students being assessed and sufficient training and professional development.

