Center for Teaching

Assessing Student Learning


Student assessment is, arguably, the centerpiece of the teaching and learning process and therefore the subject of much discussion in the scholarship of teaching and learning. Without some method of obtaining and analyzing evidence of student learning, we can never know whether our teaching is making a difference. That is, teaching requires some process through which we can come to know whether students are developing the desired knowledge and skills, and therefore whether our instruction is effective. Learning assessment is like a magnifying glass we hold up to students’ learning to discern whether the teaching and learning process is functioning well or is in need of change.

To provide an overview of learning assessment, this teaching guide has several goals: 1) to define student learning assessment and why it is important, 2) to discuss several approaches that may help to guide and refine student assessment, 3) to address various methods of student assessment, including the test and the essay, and 4) to offer several resources for further research. In addition, you may find helpful this five-part video series on assessment that was part of the Center for Teaching’s Online Course Design Institute.

What Is Student Assessment and Why Is It Important?

In their handbook for course-based review and assessment, Martha L. A. Stassen et al. define assessment as “the systematic collection and analysis of information to improve student learning” (2001, p. 5). An intentional and thorough assessment of student learning is vital because it provides useful feedback to both instructors and students about the extent to which students are successfully meeting learning objectives. In their book Understanding by Design, Grant Wiggins and Jay McTighe offer a framework for classroom instruction — “Backward Design” — that emphasizes the critical role of assessment. For Wiggins and McTighe, assessment enables instructors to determine the metrics of measurement for student understanding of and proficiency in course goals. Assessment provides the evidence needed to document and validate that meaningful learning has occurred (2005, p. 18). Their approach “encourages teachers and curriculum planners to first ‘think like an assessor’ before designing specific units and lessons, and thus to consider up front how they will determine if students have attained the desired understandings” (Wiggins and McTighe, 2005, p. 18). [1]

Not only does effective assessment provide us with valuable information to support student growth, but it also enables critically reflective teaching. Stephen Brookfield, in Becoming a Critically Reflective Teacher, argues that critical reflection on one’s teaching is an essential part of developing as an educator and enhancing the learning experience of students (1995). Critical reflection on one’s teaching has a multitude of benefits for instructors, including the intentional and meaningful development of one’s teaching philosophy and practices. According to Brookfield, referencing higher education faculty, “A critically reflective teacher is much better placed to communicate to colleagues and students (as well as to herself) the rationale behind her practice. She works from a position of informed commitment” (Brookfield, 1995, p. 17). Student evaluations and student learning assessments are one important lens through which we may reflect on our teaching. This reflection allows educators to determine where their teaching has been effective in meeting learning goals and where it has not, allowing for improvements. Student assessment, then, both develops the rationale for pedagogical choices and enables teachers to measure the effectiveness of their teaching.

Forms and Purposes of Student Assessment

The scholarship of teaching and learning discusses two general forms of assessment. The first, summative assessment, is one that is implemented at the end of the course of study, for example via comprehensive final exams or papers. Its primary purpose is to produce an evaluation that “sums up” student learning. Summative assessment is comprehensive in nature and is fundamentally concerned with learning outcomes. While summative assessment is often useful for communicating final evaluations of student achievement, it does so without providing opportunities for students to reflect on their progress, alter their learning, and demonstrate growth or improvement; nor does it allow instructors to modify their teaching strategies before student learning in a course has concluded (Maki, 2002).

The second form, formative assessment, involves the evaluation of student learning at intermediate points before any summative form. Its fundamental purpose is to help students during the learning process by enabling them to reflect on their challenges and growth so they may improve. By analyzing students’ performance through formative assessment and sharing the results with them, instructors help students to “understand their strengths and weaknesses and to reflect on how they need to improve over the course of their remaining studies” (Maki, 2002, p. 11). This is what Pat Hutchings refers to as “assessment behind outcomes”: “the promise of assessment—mandated or otherwise—is improved student learning, and improvement requires attention not only to final results but also to how results occur. Assessment behind outcomes means looking more carefully at the process and conditions that lead to the learning we care about…” (Hutchings, 1992, p. 6, original emphasis). Formative assessment includes all manner of coursework with feedback, discussions between instructors and students, and end-of-unit examinations that provide an opportunity for students to identify areas for their own growth and development (Brown and Knight, 1994).

It is important to recognize that both summative and formative assessment indicate the purpose of assessment, not the method. Different methods of assessment (discussed below) can be either summative or formative depending on when and how the instructor implements them. Sally Brown and Peter Knight in Assessing Learners in Higher Education caution against a conflation of the method (e.g., an essay) with the goal (formative or summative): “Often the mistake is made of assuming that it is the method which is summative or formative, and not the purpose. This, we suggest, is a serious mistake because it turns the assessor’s attention away from the crucial issue of feedback” (1994, p. 17). If an instructor believes that a particular method is formative, but he or she does not take the requisite time or effort to provide extensive feedback to students, the assessment effectively functions as a summative assessment despite the instructor’s intentions (Brown and Knight, 1994). Indeed, feedback and discussion are critical factors that distinguish between formative and summative assessment; formative assessment is only as good as the feedback that accompanies it.

It is not uncommon to conflate assessment with grading, but this would be a mistake. Student assessment is more than just grading. Assessment links student performance to specific learning objectives in order to provide useful information to students and instructors about learning and teaching, respectively. Grading, on the other hand, according to Stassen et al. (2001) merely involves affixing a number or letter to an assignment, giving students only the most minimal indication of their performance relative to a set of criteria or to their peers: “Because grades don’t tell you about student performance on individual (or specific) learning goals or outcomes, they provide little information on the overall success of your course in helping students to attain the specific and distinct learning objectives of interest” (Stassen et al., 2001, p. 6). Grades are only the broadest of indicators of achievement or status, and as such do not provide very meaningful information about students’ learning of knowledge or skills, how they have developed, and what may yet improve. Unfortunately, despite the limited information grades provide students about their learning, grades do provide students with significant indicators of their status – their academic rank, their credits towards graduation, their post-graduation opportunities, their eligibility for grants and aid, etc. – which can distract students from the primary goal of assessment: learning. Indeed, shifting the focus of assessment away from grades and towards more meaningful understandings of intellectual growth can encourage students (as well as instructors and institutions) to attend to the primary goal of education.

Barbara Walvoord (2010) argues that assessment is more likely to be successful if there is a clear plan, whether one is assessing learning in a course or in an entire curriculum (see also Gelmon, Holland, and Spring, 2018). Without some intentional and careful plan, assessment can fall prey to unclear goals, vague criteria, limited communication of criteria or feedback, invalid or unreliable assessments, unfairness in student evaluations, or insufficient or even unmeasured learning. There are several steps in this planning process.

  • Defining learning goals. An assessment plan usually begins with a clearly articulated set of learning goals.
  • Defining assessment methods. Once goals are clear, an instructor must decide on what evidence – assignment(s) – will best reveal whether students are meeting the goals. We discuss several common methods below, but these need not be limited by anything but the learning goals and the teaching context.
  • Developing the assessment. The next step would be to formulate clear formats, prompts, and performance criteria that ensure students can prepare effectively and provide valid, reliable evidence of their learning.
  • Integrating assessment with other course elements. Then the remainder of the course design process can be completed. In both integrated (Fink 2013) and backward course design models (Wiggins & McTighe 2005), the primary assessment methods, once chosen, become the basis for other smaller reading and skill-building assignments as well as daily learning experiences such as lectures, discussions, and other activities that will prepare students for their best effort in the assessments.
  • Communicating about the assessment. Once the course has begun, it is possible and necessary to communicate the assignment and its performance criteria to students. This communication may take many and preferably multiple forms to ensure student clarity and preparation, including assignment overviews in the syllabus, handouts with prompts and assessment criteria, rubrics with learning goals, model assignments (e.g., papers), in-class discussions, and collaborative decision-making about prompts or criteria, among others.
  • Administering the assessment. Instructors then can implement the assessment at the appropriate time, collecting evidence of student learning – e.g., receiving papers or administering tests.
  • Analyzing the results. Analysis of the results can take various forms – from reading essays to computer-assisted test scoring – but always involves comparing student work to the performance criteria and the relevant scholarly research from the field(s).
  • Communicating the results. Instructors then compose an assessment complete with areas of strength and improvement, and communicate it to students along with grades (if the assignment is graded), hopefully within a reasonable time frame. This also is the time to determine whether the assessment was valid and reliable and, if not, how to communicate this to students and adjust feedback and grades fairly. For instance, were the test or essay questions confusing, yielding invalid and unreliable assessments of student knowledge?
  • Reflecting and revising. Once the assessment is complete, instructors and students can develop learning plans for the remainder of the course so as to ensure improvements, and the assignment may be changed for future courses, as necessary.

Let’s see how this might work in practice through an example. An instructor in a Political Science course on American Environmental Policy may have a learning goal (among others) of students understanding the historical precursors of various environmental policies and how these both enabled and constrained the resulting legislation and its impacts on environmental conservation and health. The instructor therefore decides that the course will be organized around a series of short papers that will combine to make a thorough policy report, one that will also be the subject of student presentations and discussions in the last third of the course. Each student will write about an American environmental policy of their choice, with a first paper addressing its historical precursors, a second focused on the process of policy formation, and a third analyzing the extent of its impacts on environmental conservation or health. This will help students to meet the content knowledge goals of the course, in addition to its goals of improving students’ research, writing, and oral presentation skills. The instructor then develops the prompts, guidelines, and performance criteria that will be used to assess student skills, in addition to other course elements to best prepare them for this work – e.g., scaffolded units with quizzes, readings, lectures, debates, and other activities. Once the course has begun, the instructor communicates with the students about the learning goals, the assignments, and the criteria used to assess them, giving them the necessary context (goals, assessment plan) in the syllabus, handouts on the policy papers, rubrics with assessment criteria, model papers (if possible), and discussions with them as they need to prepare. The instructor then collects the papers at the appropriate due dates, assesses their conceptual and writing quality against the criteria and field’s scholarship, and then provides written feedback and grades in a manner that is reasonably prompt and sufficiently thorough for students to make improvements. Then the instructor can make determinations about whether the assessment method was effective and what changes might be necessary.

Methods of Student Assessment

Assessment can vary widely from informal checks on understanding, to quizzes, to blogs, to essays, and to elaborate performance tasks such as written or audiovisual projects (Wiggins & McTighe, 2005). Below are a few common methods of assessment identified by Brown and Knight (1994) that are important to consider.

Essays

According to Euan S. Henderson, essays make two important contributions to learning and assessment: the development of skills and the cultivation of a learning style (1980). The American Association of Colleges & Universities (AAC&U) has also found that intensive writing is a “high impact” teaching practice likely to help students in their engagement, learning, and academic attainment (Kuh, 2008).

Things to Keep in Mind about Essays

  • Essays are a common form of writing assignment in courses and can be either a summative or formative form of assessment depending on how the instructor utilizes them.
  • Essays encompass a wide array of narrative forms and lengths, from short descriptive essays to long analytical or creative ones. Shorter essays are often best suited to assessing students’ understanding of threshold concepts and discrete analytical or writing skills, while longer essays afford assessments of higher order concepts and more complex learning goals, such as rigorous analysis, synthetic writing, problem solving, or creative tasks.
  • A common challenge of the essay is that students can use it simply to regurgitate rather than analyze and synthesize information to make arguments. Students need performance criteria and prompts that urge them to go beyond mere memorization and comprehension and encourage the highest levels of learning on Bloom’s Taxonomy. This may open the possibility for essay assignments that go beyond the common summary or descriptive essay on a given topic and demand, for example, narrative or persuasive essays or more creative projects.
  • Instructors commonly assume that students know how to write essays and can encounter disappointment or frustration when they discover that this is sometimes not the case. For this reason, it is important for instructors to make their expectations clear and be prepared to assist, or to direct students to resources that will enhance their writing skills. Faculty may also encourage students to attend writing workshops at university writing centers, such as Vanderbilt University’s Writing Studio.

Exams and Time-Constrained, Individual Assessment

Examinations have traditionally been a gold standard of assessment, particularly in post-secondary education. Many educators prefer them because they can be highly effective, they can be standardized, they are easily integrated into disciplines with certification standards, and they are efficient to implement since they can allow for less labor-intensive feedback and grading. They can involve multiple forms of questions, be of varying lengths, and can be used to assess multiple levels of student learning. Like essays, they can be summative or formative forms of assessment.

Things to Keep in Mind about Exams

  • Exams typically focus on assessing students’ knowledge of facts, figures, and other discrete information crucial to a course. While they can involve questions that demand students engage in higher order demonstrations of comprehension, problem solving, analysis, synthesis, critique, and even creativity, such exams often require more time to prepare and validate.
  • Exam questions can be multiple choice, true/false, or other discrete answer formats, or they can be essay or problem-solving questions. For more on how to write good multiple choice questions, see this guide.
  • Exams can make significant demands on students’ factual knowledge and therefore can have the side effect of encouraging cramming and surface learning. Further, when exams are offered infrequently, or when they have high stakes by virtue of their heavy weighting in course grade schemes or in student goals, they may be accompanied by violations of academic integrity.
  • In the process of designing an exam, instructors should consider the following questions. What are the learning objectives that the exam seeks to evaluate? Have students been adequately prepared to meet exam expectations? What are the skills and abilities that students need to do well on the exam? How will this exam be utilized to enhance the student learning process?

Self-Assessment

The goal of implementing self-assessment in a course is to enable students to develop their own judgment and the capacities for critical meta-cognition – to learn how to learn. In self-assessment students are expected to assess both the processes and products of their learning. While the assessment of the product is often the task of the instructor, implementing student self-assessment in the classroom ensures students evaluate their performance and the process of learning that led to it. Self-assessment thus provides a sense of student ownership of their learning and can lead to greater investment and engagement. It also enables students to develop transferable skills in other areas of learning that involve group projects and teamwork, critical thinking and problem-solving, as well as leadership roles in the teaching and learning process with their peers.

Things to Keep in Mind about Self-Assessment

  • Self-assessment is not self-grading. According to Brown and Knight, “Self-assessment involves the use of evaluative processes in which judgement is involved, where self-grading is the marking of one’s own work against a set of criteria and potential outcomes provided by a third person, usually the [instructor]” (1994, p. 52). Self-assessment can involve self-grading, but instructors of record retain the final authority to determine and assign grades.
  • To accurately and thoroughly self-assess, students require clear learning goals for the assignment in question, as well as rubrics that clarify different performance criteria and levels of achievement for each. These rubrics may be instructor-designed, or they may be fashioned through a collaborative dialogue with students. Rubrics need not include any grade assignation, but merely descriptive academic standards for different criteria.
  • Students may not have the expertise to assess themselves thoroughly, so it is helpful to build their capacities for self-evaluation and to supplement their self-assessments with faculty assessments.
  • Students may initially resist instructor attempts to involve them in the assessment process. This is usually due to insecurities or a lack of confidence in their ability to objectively evaluate their own work, or possibly because of habituation to more passive roles in the learning process. Brown and Knight note, however, that when students are asked to evaluate their work, student-determined outcomes are frequently very similar to those of instructors, particularly when the criteria and expectations have been made explicit in advance (1994).
  • Methods of self-assessment vary widely and can be as unique as the instructor or the course. Common forms of self-assessment involve written or oral reflection on a student’s own work, including portfolios, logs, instructor-student interviews, learner diaries and dialog journals, post-test reflections, and the like.

Peer Assessment

Peer assessment is a type of collaborative learning technique where students evaluate the work of their peers and, in return, have their own work evaluated as well. This dimension of assessment is significantly grounded in theoretical approaches to active learning and adult learning. Like self-assessment, peer assessment gives learners ownership of learning and focuses on the process of learning as students are able to “share with one another the experiences that they have undertaken” (Brown and Knight, 1994, p. 52). However, it also provides students with other models of performance (e.g., different styles or narrative forms of writing), as well as the opportunity to teach, which can enable greater preparation, reflection, and meta-cognitive organization.

Things to Keep in Mind about Peer Assessment

  • Similar to self-assessment, students benefit from clear and specific learning goals and rubrics. Again, these may be instructor-defined or determined through collaborative dialogue.
  • Also similar to self-assessment, it is important to not conflate peer assessment and peer grading, since grading authority is retained by the instructor of record.
  • While student peer assessments are most often fair and accurate, they sometimes can be subject to bias. In competitive educational contexts, for example when students are graded normatively (“on a curve”), students can be biased or potentially game their peer assessments, giving their fellow students unmerited low evaluations. Conversely, in more cooperative teaching environments or in cases when they are friends with their peers, students may provide overly favorable evaluations. Also, other biases associated with identity (e.g., race, gender, or class) and personality differences can shape student assessments in unfair ways. Therefore, it is important for instructors to encourage fairness, to establish processes based on clear evidence and identifiable criteria, and to provide instructor assessments as accompaniments or correctives to peer evaluations.
  • Students may not have the disciplinary expertise or assessment experience of the instructor, and therefore can issue unsophisticated judgments of their peers. Therefore, to avoid unfairness, inaccuracy, and limited comments, formative peer assessments may need to be supplemented with instructor feedback.

As Brown and Knight assert, utilizing multiple methods of assessment, including more than one assessor when possible, improves the reliability of the assessment data. It also ensures that students with diverse aptitudes and abilities can be assessed accurately and have equal opportunities to excel. However, a primary challenge of the multiple-methods approach is how to weigh the scores produced by the different methods of assessment. When particular methods produce a higher range of marks than others, instructors can potentially misinterpret and mis-evaluate student learning. Ultimately, Brown and Knight caution that, when multiple methods produce different messages about the same student, instructors should be mindful that the methods are likely assessing different forms of achievement (1994).

These are only a few of the many forms of assessment that one might use to evaluate and enhance student learning (see also the ideas presented in Brown and Knight, 1994). To this list of assessment forms and methods we may add many more that encourage students to produce anything from research papers to films, theatrical productions to travel logs, op-eds to photo essays, manifestos to short stories. What may be assigned as a form of assessment is as varied as the subjects and skills we seek to empower in our students. Vanderbilt’s Center for Teaching has an ever-expanding array of guides on creative models of assessment, which are listed below, so please visit them to learn more about other assessment innovations and subjects.

Whatever plan and method you use, assessment often begins with an intentional clarification of the values that drive it. While many in higher education may argue that values do not have a role in assessment, we contend that values (for example, rigor) always motivate and shape even the most objective of learning assessments. Therefore, as in other aspects of assessment planning, it is helpful to be intentional and critically reflective about what values animate your teaching and the learning assessments it requires. There are many values that may direct learning assessment, but common ones include rigor, generativity, practicability, co-creativity, and full participation (Bandy et al., 2018). What do these characteristics mean in practice?

Rigor. In the context of learning assessment, rigor means aligning our methods with the goals we have for students, principles of validity and reliability, ethics of fairness and doing no harm, critical examinations of the meaning we make from the results, and good faith efforts to improve teaching and learning. In short, rigor suggests understanding learning assessment as we would any other form of intentional, thoroughgoing, critical, and ethical inquiry.

Generativity. Learning assessments may be most effective when they create conditions for the emergence of new knowledge and practice, including student learning and skill development, as well as instructor pedagogy and teaching methods. Generativity opens up rather than closes down possibilities for discovery, reflection, growth, and transformation.

Practicability. Practicability recommends that learning assessment be grounded in the realities of the world as it is, fitting within the boundaries of both instructor’s and students’ time and labor. While this may, at times, advise a method of learning assessment that seems to conflict with the other values, we believe that assessment fails to be rigorous, generative, participatory, or co-creative if it is not feasible and manageable for instructors and students.

Full Participation. Assessments should be equally accessible to, and encouraging of, learning for all students, empowering all to thrive regardless of identity or background. This requires multiple and varied methods of assessment that are inclusive of diverse identities – racial, ethnic, national, linguistic, gendered, sexual, class, etcetera – and their varied perspectives, skills, and cultures of learning.

Co-creation. As alluded to above regarding self- and peer-assessment, co-creative approaches empower students to become subjects of, not just objects of, learning assessment. That is, learning assessments may be more effective and generative when assessment is done with, not just for or to, students. This is consistent with feminist, social, and community engagement pedagogies, in which values of co-creation encourage us to critically interrogate and break down hierarchies between knowledge producers (traditionally, instructors) and consumers (traditionally, students) (e.g., Saltmarsh, Hartley, & Clayton, 2009, p. 10; Weimer, 2013). In co-creative approaches, students’ involvement enhances the meaningfulness, engagement, motivation, and meta-cognitive reflection of assessments, yielding greater learning (Bass & Elmendorf, 2019). The principle of students being co-creators of their own education is what motivates the course design and professional development work Vanderbilt University’s Center for Teaching has organized around the Students as Producers theme.

Below is a list of other CFT teaching guides that supplement this one and may be of assistance as you consider all of the factors that shape your assessment plan.

  • Active Learning
  • An Introduction to Lecturing
  • Beyond the Essay: Making Student Thinking Visible in the Humanities
  • Bloom’s Taxonomy
  • Classroom Assessment Techniques (CATs)
  • Classroom Response Systems
  • How People Learn
  • Service-Learning and Community Engagement
  • Syllabus Construction
  • Teaching with Blogs
  • Test-Enhanced Learning
  • Assessing Student Learning (a five-part video series for the CFT’s Online Course Design Institute)

References and Additional Resources

Angelo, Thomas A., and K. Patricia Cross. Classroom Assessment Techniques: A Handbook for College Teachers. 2nd edition. San Francisco: Jossey-Bass, 1993. Print.

Bandy, Joe, Mary Price, Patti Clayton, Julia Metzker, Georgia Nigro, Sarah Stanlick, Stephani Etheridge Woodson, Anna Bartel, and Sylvia Gale. Democratically Engaged Assessment: Reimagining the Purposes and Practices of Assessment in Community Engagement. Davis, CA: Imagining America, 2018. Web.

Bass, Randy, and Heidi Elmendorf. “Designing for Difficulty: Social Pedagogies as a Framework for Course Design.” Social Pedagogies: Teagle Foundation White Paper. Georgetown University, 2019. Web.

Brookfield, Stephen D. Becoming a Critically Reflective Teacher. San Francisco: Jossey-Bass, 1995. Print.

Brown, Sally, and Peter Knight. Assessing Learners in Higher Education. 1st edition. London; Philadelphia: Routledge, 1998. Print.

Cameron, Jeanne, et al. “Assessment as Critical Praxis: A Community College Experience.” Teaching Sociology 30.4 (2002): 414–429. Web.

Fink, L. Dee. Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses. 2nd edition. San Francisco, CA: Jossey-Bass, 2013. Print.

Gelmon, Sherril B., Barbara Holland, and Amy Spring. Assessing Service-Learning and Civic Engagement: Principles and Techniques. 2nd edition. Stylus, 2018. Print.

Gibbs, Graham, and Claire Simpson. “Conditions under which Assessment Supports Student Learning.” Learning and Teaching in Higher Education 1 (2004): 3–31. Print.

Henderson, Euan S. “The Essay in Continuous Assessment.” Studies in Higher Education 5.2 (1980): 197–203. Web.

Kuh, George. High-Impact Educational Practices: What They Are, Who Has Access to Them, and Why They Matter. American Association of Colleges & Universities, 2008. Web.

Maki, Peggy L. “Developing an Assessment Plan to Learn about Student Learning.” The Journal of Academic Librarianship 28.1 (2002): 8–13. Web.

Sharkey, Stephen, and William S. Johnson. Assessing Undergraduate Learning in Sociology. ASA Teaching Resource Center, 1992. Print.

Walvoord, Barbara. Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education. 2nd edition. San Francisco, CA: Jossey-Bass, 2010. Print.

Weimer, Maryellen. Learner-Centered Teaching: Five Key Changes to Practice. 2nd edition. San Francisco, CA: Jossey-Bass, 2013. Print.

Wiggins, Grant, and Jay McTighe. Understanding by Design. 2nd expanded edition. Alexandria, VA: Association for Supervision & Curriculum Development, 2005. Print.

[1] For more on Wiggins and McTighe’s “Backward Design” model, see our teaching guide here.



Assessment and Feedback in Higher Education

Assessment is an essential part of education. Robust assessment processes are critical for a rigorous evaluation of the level of student learning. Beyond judgement, the modes of assessment we select will shape not just what, but how, students learn. Empowering and engaging learners through assessment design and providing opportunities for dialogic feedback are central to learning and the student experience.

Modes of assessment are evolving, encompassing assessment for, of and as learning, allowing the evaluation of a broader range of professional and subject-specific competencies and providing greater student choice and opportunities for students to showcase their talents.

“Designing a diverse diet of authentic, valid and verifiable assessment tasks that excite and engage learners together with constructive and timely feedback are the most influential means we have as teachers to direct and support learning.” Dr Catherine (Kay) Hack (PFHEA), Principal Adviser (Learning and Teaching), Advance HE


Framework for Enhancing Assessment

Assessment and feedback practices are at the core of higher education (HE). Assessment provides judgement, so it needs to be valid and reliable. It also supports student learning and feedback, so it should be designed to promote desirable learning behaviours and to develop subject-specific and professional competencies as well as other graduate attributes.

This framework is designed for all stakeholders: educators; those in charge of policy, quality assurance, and quality enhancement; and those responsible for leading education at all levels, from pro vice-chancellors to programme leaders.


Impacts of higher education assessment and feedback policy and practice on students: a review of the literature 2016-2021

Conducted by Edd Pitt, PhD SFHEA, and Kathleen M Quinlan, PhD PFHEA, from the University of Kent, the review explores evidence from peer-reviewed journal articles relating to assessment and feedback in higher education published from 2016 to 2021. The review highlights evidence-based assessment and feedback policies or practices that have had a demonstrable impact on key student outcomes including performance, engagement and satisfaction, and prompts a rethinking of traditional views of assessment and feedback.

Advance HE members can access the literature review, a searchable dataset and an infographic exploring the research.


Degree Standards - Professional Development and Calibration for the External Examining System in the UK

The project on degree standards, running from 2016 to 2021, was managed by the Office for Students on behalf of England and the devolved administrations in Northern Ireland and Wales. OfS contracted Advance HE to work across the UK to facilitate a sector-owned development process focusing on the professional development for external examiners. The project has two interrelated parts:

  • Working with a range of higher education providers to design and pilot generic professional development for external examiners
  • Exploring different forms of calibration exercises with subject associations and Professional, Statutory and Regulatory Bodies (PSRBs)

Assessment and Feedback reports, publications and resources

Advance HE has curated a number of resources to support the understanding of the role of Assessment and Feedback in higher education. These cover areas such as learner-focused feedback, student partnerships and guides to our frameworks and are available to download below.

Essential Frameworks For Enhancing Student Success: Transforming Assessment

Our framework for transforming assessment is an integral part of the Advance HE Frameworks. This report provides an updated guide to the framework.  Download the guide.  

The guide is open to colleagues at Advance HE member organisations only.

On Your Marks: Learner-focused Feedback Practices and Feedback Literacy

The publication contains a diverse range of expertise in assessment and feedback, bringing together experts in a number of disciplines to present ideas on learner-centred feedback.

The main goal of the collection is to showcase learner-focused feedback practices that make an impact on student learning.  Download the publication.

More resources, reports and publications

You will find links to more resources, reports and publications in our information pack on assessment and feedback in higher education.

Ethnicity awarding gaps in UK higher education in 2019/20

Advance HE has published data and findings on the size of ethnicity awarding gaps (the difference in the proportions of white students and students from minority ethnic backgrounds awarded a first/2:1 degree) since 2005 as part of our annual Equality in higher education statistical reports. However, this is the first dedicated report to explore ethnicity awarding gaps in detail across individual and course-level characteristics, specifically for the 2019-20 cohort of qualifiers. Download the report.

Student Partnerships in Assessment (SPiA)

The Student Partnerships in Assessment (SPiA) Connect Member Benefit Series was coordinated by Advance HE through the spring and summer of 2021. This guide is informed by the contributions of a global community engaging with this initiative and features some overarching principles for SPiA, current thinking, and how those principles can be applied in practice. Access the guide.

Assessment and Feedback in Law

This publication arose from an Advance HE Collaboration Project that took place between 2020 and 2021, during the Covid-19 pandemic. It consists of nine case study papers, each focusing on a particular issue or innovation in Law at UK higher education institutions. Download the publication.


Teaching life cycle assessment in higher education

  • Life Cycle Sustainability Assessment
  • Open access
  • Published: 17 December 2020
  • Volume 26, pages 511–527 (2021)


Tobias Viere, Ben Amor, Nicolas Berger, Ruba Dolfing Fanous, Rachel Horta Arduin, Regula Keller, Alexis Laurent, Philippe Loubet, Philip Strothmann, Steffi Weyand, Laurie Wright, and Guido Sonnemann


Purpose

Scientific Life Cycle Assessment (LCA) literature provides some examples of LCA teaching in higher education, but not a structured overview of LCA teaching contents and related competencies. Hence this paper aims at assessing and highlighting trends in LCA learning outcomes, teaching approaches and developed content used to equip graduates for their future professional practices in sustainability.

Methods

Based on a literature review on teaching LCA in higher education and a collaborative consensus building approach through expert group panel discussions, an overview of LCA learning and competency levels with related teaching contents and corresponding workload is developed. The levels are built on the European Credit Transfer and Accumulation System (ECTS) and Bloom’s taxonomy of learning.

Results and discussion

The paper frames five LCA learning and competency levels that differ in terms of study program integration, workload, cognitive domain categories, learning outcomes, and envisioned professional skills. It furthermore provides insights into teaching approaches and content, including software use, related to these levels.

Conclusions and recommendations

This paper encourages and supports higher educational bodies to implement a minimum of ‘life cycle literacy’ into students’ curriculum across various domains by increasing the availability, visibility and quality of their teaching on life cycle thinking and LCA.


1 Introduction

Throughout the last decades, a shift has been seen in many societies: politicians are introducing stronger regulations focused on sustainability, customers are demanding more sustainable products, and companies are increasingly offering them (Mittelstaedt et al. 2014). Consequently, there is a need for sustainability professionals who can guide institutions and organizations of all types and sizes through a transition process towards sustainable societies.

In this setting, universities play an important role. Around the world, they have reacted to these increasing societal needs with the development of entire sustainability programs and/or the introduction of sustainability aspects into their existing curricula (Shriberg and MacDonald 2013). Along with this increased focus on sustainability, several universities have integrated the concept of life cycle thinking into their programmes. Life cycle thinking and the associated life cycle assessment (LCA) methodology are increasingly taught to equip students with an ability to address complex sustainability challenges (Roure et al. 2018).

Life cycle thinking aims to increase the sustainability of a product or system along its entire value chain by reducing environmental impacts and at the same time increasing socio-economic performance (UNEP 2020a). LCA focuses primarily on environmental impacts alone (ISO 14040/14044 2006), quantitatively assessing the environmental impacts of products and services along their value chains. Over the past decades, methodological developments in the direction of environmental and societal Life Cycle Costing (Swarr et al. 2011), Social LCA (Benoit and Mazijn 2009) and Life Cycle Sustainability Assessment (Klöpffer 2008; Valdivia et al. 2012) have also been made, as have numerous differentiations and brandings, including carbon and water footprints (ISO 14067 2018; ISO 14046 2014) or the EU’s Product Environmental Footprint and Organisation Environmental Footprint (EC JRC 2020; Bach et al. 2018; Pant et al. 2012). In the following, the term LCA is used in a broad sense and captures these different concepts and methodologies.

According to the authors’ experience, teaching in this field usually starts with a short introduction to life cycle thinking, followed by the core methodology of environmental LCA, and then conveys the developments and specializations mentioned above. Depending on their future careers, students may require different levels of understanding of LCA. For example, some might only be required to understand the concept of life cycle thinking and its importance in assessing and managing sustainability aspects. Others may need to be LCA-literate so they can understand and use LCA results for their decision-making. Still others may need to become fully proficient in LCA application to be able to perform full-scale LCA studies. The level of LCA competency required from students therefore varies to a high degree. Depending on the university and scientific field, the level of LCA competency that is currently taught can also vary considerably (Olsen et al. 2018). While some universities only introduce the concept as (a small) part of regular courses, others offer dedicated courses, modules, minors, or majors on the subject. A systematic integration of life cycle approaches and tools in a curriculum is still rare, as outlined by Roure et al. (2018) and Cosme et al. (2019).

While there are some examples in the scientific literature on how LCA is taught – which will be further explored within this paper – more structured guidance and an overview of LCA learning and competency levels, associated with matching teaching approaches and content, is not available. Such guidance could contribute to a better understanding and implementation of LCA teaching in higher education. It could enable a better exchange of knowledge among educators on the subject and allow students to identify available programs around the world. It would also enable prospective employers to better understand students’ qualifications and could support and encourage more universities to introduce LCA courses. It could also help curriculum developers to decide on appropriate methods and credits to provide the desired qualifications to students.

This paper aims to fill this gap and intends to shed some light on LCA teaching in higher education by answering the research question: What LCA learning outcomes, teaching approaches and contents can be recommended to equip graduates for their respective future professional practices?

Initially, this question had been raised within the Forum for Sustainability through Life Cycle Innovation (FSLCI) in 2017. FSLCI was created as a global network for LCA professionals in 2015, in response to, and to further strengthen, the global mainstreaming of LCA (FSLCI 2020). The initial question led to the foundation of a working group on LCA in higher education with the mission to encourage and support higher educational bodies to provide students across various domains with a minimum of literacy in life cycle thinking and LCA, referred to here in short as ‘life cycle literacy’, by increasing the availability, visibility and quality of their teaching on life cycle thinking and LCA.

2 Methods and materials

To describe and structure learning outcomes and LCA qualifications at various levels and across a range of subjects, from marketing through engineering to dedicated industrial ecology courses, the state of the art of studies on LCA teaching is first identified and evaluated through a literature review. Second, a generic framework on LCA learning and competency is developed, making use of Bloom’s taxonomy of learning. To showcase the use of the framework for teaching LCA in higher education, LCA teaching activities, materials, and contents are categorized according to the proposed framework.

2.1 Literature review

A literature review was performed to identify the relevant scientific papers published in recent years on LCA teaching. The review did not collect all possible types of documentation and material from the numerous LCA courses taught around the world (many of them are not publicly accessible). Rather, it aimed at summarizing the current state of research on teaching LCA in higher education. Hence, the literature review covered databases of scientific research (JSTOR, ScienceDirect, ResearchGate) using the keywords “teaching”, “higher education” and “learning” combined with “Life Cycle Assessment”, “Life Cycle Analysis”, “LCA”, or “lifecycle” (with different spellings).
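To make the combination of search terms concrete, the sketch below enumerates the keyword pairs described above. It is illustrative only: the paper does not specify the exact query syntax used for each database, and the additional spelling variant “life cycle” is an assumption based on the phrase “with different spellings”.

```python
from itertools import product

# Illustrative sketch of the keyword combinations described above.
# The exact query syntax differs per database (JSTOR, ScienceDirect,
# ResearchGate); "life cycle" is an assumed spelling variant.
TEACHING_TERMS = ["teaching", "higher education", "learning"]
LCA_TERMS = ["Life Cycle Assessment", "Life Cycle Analysis", "LCA",
             "lifecycle", "life cycle"]

queries = [f'"{t}" AND "{l}"' for t, l in product(TEACHING_TERMS, LCA_TERMS)]

for query in queries:
    print(query)  # e.g. "teaching" AND "Life Cycle Assessment"
```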

Since many of the search results also described course formats in more detail, all studies identified were assessed based on the following criteria:

• Scope: description of a teaching experience (in a university and/or country) or a generic proposal of a curriculum and/or insights to develop an LCA course;

• Focus: teaching method, LCA competences, learning outcomes or literature review;

• Course content: details of the content, including topics taught;

• LCA software: use of spreadsheet, streamlined software or full LCA software;

• Number of hours and/or credits: workload of the course;

• Students’ background: engineering and technical sciences, or business or social sciences;

• Target audience of the course: bachelor, master or doctoral;

• General information: country and journal.

2.2 Nominal group technique for conducting expert panels

The development of the framework and the categorization of LCA contents and activities are results of expert panels. Expert panels or stakeholder workshops are commonly used to discuss and develop consensus on complex subjects, from medical practice to specific resource efficiency topics. In the field of LCA, expert panels have been used to build consensus on the development of guiding principles for LCA databases (e.g. Pennington et al. 2010), the definition of impact assessment indicators (e.g. Frischknecht et al. 2016) or weighting factors (e.g. Pizzol et al. 2017).

In this study, expert panels were conducted based on the five-step nominal group technique as described in Potter et al. (2004) and Harvey and Holmes (2012) to build stepwise consensus on the learning outcomes and achievable competencies in higher education on LCA. FSLCI invited about 40 internationally recognized universities and academic institutions and announced open expert panel workshops at the 2017 and 2019 Life Cycle Management Conferences. Three semi-structured expert panels were conducted: in Luxembourg in 2017, at the University of Bordeaux in 2019, and, due to the COVID-19 crisis, online in 2020. The experts who participated in the panels represented the following stakeholders of higher education in LCA:

• Université de Bordeaux and the engineering school Bordeaux INP (France), where LCA is integrated into general and professional bachelor studies (natural sciences and engineering), in master (chemistry and engineering), and PhD programs.

• Technical University of Denmark (DTU), which is an engineering university, where sustainability assessment and life cycle thinking are introduced to all students at BSc, MSc and PhD levels (via mandatory courses), and where LCA has been comprehensively taught at MSc level for the past 20 years.

• Technische Universität Darmstadt, where the Chair of Material Flow Management and Resource Economy provides LCA teaching for bachelor (environmental and civil engineering), master (various engineering and material sciences) and PhD programs.

• Pforzheim University’s business school (Germany), where LCA is the core content of a master program provided by the Institute for Industrial Ecology and a major part of several PhD studies.

• PRé (the Netherlands) as one provider of LCA software used in higher education.

• Université de Sherbrooke (Canada), where the Interdisciplinary Research Laboratory in Life Cycle Assessment and Circular Economy (LIRIDE) integrated the concept of sustainable development and life cycle thinking into the bachelor, master and PhD programs (various disciplines of engineering).

• Solent University, Southampton (UK) where LCA is taught as an integrated topic across the engineering bachelor programmes and several PhD studies.

• The Zurich University of Applied Sciences (ZHAW), where life cycle thinking and the application of LCA results are integrated in various bachelor and master programmes (engineering and life sciences), and in-depth LCA competences are taught in a specific LCA minor for environmental engineers (bachelor) and in advanced LCA courses in the master’s programme on natural resource sciences.

The first expert panel provided an opportunity to discuss the learning outcomes of different study programmes with incorporated LCA topics and the key barriers to LCA teaching and education. The second panel further developed a common understanding of the learning outcomes and competency levels achieved in different LCA courses. In the third panel, learning and teaching contents, materials, activities, and approaches were identified and categorized.

Given the large number of academic institutions teaching LCA, the expert panels, which were carried out to complement the literature review, provided a highly valuable yet limited insight into how LCA is taught around the world. Going forward, it is planned to carry out a comprehensive global survey to enhance the representativeness of the findings of the panels outlined in this paper, as highlighted in our final chapter.

2.3 Learning competences

The pedagogical methods and metrics behind the development of the framework on teaching LCA in higher education are described in this section.

An important aspect identified already in the first expert panel meeting was the need to adjust the content and level of complexity to the audience, considering the overall workload and specific LCA learning outcomes. To quantify the comparative time spent studying LCA topics, the European Credit Transfer and Accumulation System (ECTS) was used as a reference; it presents a system for the transfer and comparability of accredited learning (EC 2015). Credits are awarded for completed learning, where one academic year normally corresponds to 60 ECTS-credits (equivalent to approximately 1500–1800 h of study, irrespective of qualification or standard). The system provides for transferability between countries and equivalence with country and university systems outside the European Higher Education Area, where a multitude of crediting approaches exist, as exemplified in Table 1.

We applied the ECTS system to provide a gauge against which the comparative time spent studying LCA topics can be measured. The more time spent, the greater the credit allocation and the deeper the LCA teaching can be. The more time spent on study, broadly speaking, the higher the cognitive level at which learning can be achieved.
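As a rough illustration of the workload implied by an ECTS allocation, the sketch below converts credits into the approximate study hours stated above (60 ECTS-credits correspond to roughly 1500–1800 hours, i.e. about 25–30 hours per credit). The function name and the example course size are illustrative, not taken from the paper.

```python
# Approximate study hours implied by an ECTS credit allocation,
# based on the figures above: 60 ECTS-credits ~ 1500-1800 hours,
# i.e. roughly 25-30 hours per credit.
HOURS_PER_CREDIT = (1500 / 60, 1800 / 60)  # (25.0, 30.0)

def ects_to_hours(credits):
    """Return a (low, high) estimate of study hours for a given ECTS load."""
    low, high = HOURS_PER_CREDIT
    return credits * low, credits * high

if __name__ == "__main__":
    # A hypothetical 5-ECTS LCA course corresponds to roughly 125-150 hours of study.
    print(ects_to_hours(5))  # (125.0, 150.0)
```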

In line with these cognitive levels, Bloom’s taxonomy is a set of hierarchical models used to classify educational learning outcomes into categories of complexity and specificity (Bloom et al. 1956), and it has been used here to differentiate cognitive skill levels within LCA teaching. Originally presented to help develop rubrics (criteria for grading) and measure learning, the taxonomy encompasses six categories within the cognitive domain: knowledge, comprehension, application, analysis, synthesis, and evaluation (Bloom et al. 1956). Bloom’s taxonomy is widely used for analysing and classifying cognitive skills in higher education and has been used in the LCA teaching context before (e.g. Favi et al. 2019; Olsen et al. 2018; Roure et al. 2018).

Bloom’s taxonomy has been modified by various authors since its original conception, with a range of variations and additions (e.g. Anderson et al. 2001; Lytras and Pouloudi 2006). Most widely applied is the addition of a further category of ‘creation’ to the taxonomy, representing learners’ ability to create new knowledge once they have achieved a sophisticated understanding of a subject (Anderson et al. 2001). Table 2 lists and defines the categories of the revised taxonomy.

A final important aspect to address is learning outcomes. They are defined as the expected goals of a course, lesson or activity in terms of demonstrable skills or knowledge, which students acquire as a result of the delivery of taught content or teaching activities. It is useful to consider these skills in relation to Bloom’s taxonomy to provide appropriately levelled outcomes (Anderson et al. 2001). The terms learning outcomes and learning objectives are often used interchangeably. This paper uses the term learning outcomes to describe the intended overarching achievements of students, in contrast to instructional and course-specific learning objectives (cf. e.g. Allan 1996; Harden 2002).

2.4 Framework development

To develop a framework on teaching LCA in higher education, the results of the stakeholder workshops were structured into several categories: the degree of LCA integration into study programs measured in ECTS and workload, the associated cognitive domain categories according to Bloom’s taxonomy, LCA learning outcomes, and envisioned professional LCA competency. These categories are used to define distinct LCA learning and competency levels. To further develop the framework, common LCA teaching content and approaches are described, analyzed and categorized within the given levels.

3 Results and discussion

Within this chapter, the outcome of the scientific literature review on LCA teaching (Section 3.1) and the results of the expert panel discussions using the nominal group technique are used to conceptualize LCA learning and competency levels (Section 3.2) and to link these to respective LCA teaching approaches and content with related workloads (Section 3.3). Unless stated otherwise, the results of Sections 3.2 and 3.3 derive directly from the expert panel discussions.

3.1 LCA teaching experiences in the literature

Twenty-eight studies were identified in the literature and are summarized in Table 3. The review aimed to identify published experience on teaching LCA in higher education. As highlighted by Burnley et al. (2019), LCA is taught in a significant number of higher education institutes across the world, but many do not publish papers on their teaching experience. The review therefore reveals the current state of scientific discourse on teaching LCA and provides a basis for the expert-panel-based assessment in later sections of this paper.

Through this literature review, it was confirmed that several universities have implemented courses on LCA and environmental assessment tools in recent years. Most of the papers (26 out of 28) report an experience with an LCA course held at a specific university or at different universities in the same country (identified as “course experience” in Table 3). For instance, Burnley et al. (2019) present a software tool developed at Cranfield University to allow part-time distance learning students to gain an understanding and experience of LCA. Olsen et al. (2018) and Cosme et al. (2019) present the LCA course experience at the Technical University of Denmark, including how students and companies are engaged in the course, creating a win–win situation for all stakeholders. De Souza et al. (2014) summarize experiences of graduate education in Brazil, considering input from five different universities, and identified, among other things, common challenges such as the lack of national inventories for modelling LCA studies. In turn, Mälkki and Alanne (2017) aimed at understanding how LCA can be useful in renewable and sustainable energy education; a literature review was the starting point for the authors, and among other things they concluded that LCA should be integrated into the learning outcomes of energy degree programmes. This last study was classified as a “generic course proposal”, although the authors do not suggest a detailed curriculum in the paper. As such, the different studies reflect different experiences from educators, but there is little discussion of or comparison between these experiences.

In geographical terms, a predominant share of the studies (eleven) is based on experiences in European countries. The second most represented region is North America with eight studies (seven from the United States and one from Canada), followed by Oceania (Australia) with three studies, South America (Brazil) with two studies and Asia (Malaysia) with one study. Three studies included several countries. Overall, this geographical distribution reflects the continental and regional evolution of LCA over the past 30 years, e.g. its use in industry (Stewart et al. 2018) or its spread via LCA networks (Bjørn et al. 2013), where the European and North American regions have seen the largest developments, while other regions like Asia or Africa lag behind with respect to LCA uptake.

Among the papers, ten included related approaches besides LCA, e.g. ecodesign, green chemistry, sustainable development, energy efficiency and circular economy (Loste et al. 2020; Oude Luttikhuis et al. 2015; Roure et al. 2018; Sahakian and Seyfang 2018). Although there is a consensus on the importance of considering sustainability in the academic curriculum of all areas, the majority of the LCA courses identified (21) are aimed at graduate or undergraduate students following an engineering or technology path. This may be explained by the traditional use of LCA mainly as a micro-level decision-support tool in industry for product or technology development, although its use has started to diversify in recent years with broader, large-scale assessments of organizations, sectors or countries (EC 2020; Laurent and Owsianiak 2017).

Different authors reported the relevance of project-based learning (e.g. Lockrey and Bissett Johnson 2013; Margallo et al. 2019; Piekarski et al. 2019; Sriraman et al. 2017a, b), most of the time through a life cycle assessment project developed by the students using one or several commercial LCA software packages. Some studies presented experiences with cases developed in partnership with industrial partners (Cosme et al. 2019; Piekarski et al. 2019). As emphasized by Sriraman et al. (2017a, b), an appropriate pedagogy is essential to activate student engagement in the learning process and to facilitate deeper learning.

Regarding the structure of the courses, only a few studies presented the content of the lectures and/or the learning outcomes in detail (e.g. Cosme et al. 2019; Gilmore 2016; Margallo et al. 2019; Olsen 2010; Roure et al. 2018). Gilmore (2016) highlights the importance of adjusting the content and level of complexity to the audience and the background of the students, but there is an evident absence of a generic LCA course framework based on specific learning outcomes and teaching methods.

3.2 LCA learning and competency framework

Figure  1 and Table 4 summarize the outcome of our expert panel research with regard to LCA learning and competency levels. Depending on the degree of study program integration and the respective workload in ECTS and hours, LCA teaching covers smaller or larger parts of Bloom’s taxonomy and pursues different learning outcomes which eventually result in nuanced expectations regarding the students’ professional life cycle literacy.

Figure 1: LCA learning outcomes and competency levels proposed within the frame of Bloom’s revised taxonomy

Given the modular structure of most higher education courses in LCA, we use typical ECTS awards and Bloom’s taxonomy to derive four broad conceptual levels of learning outcomes and cognitive competency. Higher LCA competency levels involve higher levels of Bloom’s taxonomy: from the ability to gain or apply knowledge, such as the basic concept of life cycle thinking, to the ability to critically analyse and evaluate the quality of one’s own LCA study, and finally the ability to combine LCA results with insights from other disciplines to create new methods (see Fig. 1). These levels are intended to reflect the time spent by students studying LCA, expressed by the weight of ECTS credits available for part of a course, entire courses, modules, minors and majors, or complete programs. A fifth learning and competency level is additionally considered to capture research work undertaken by postgraduate research students, particularly in thesis work (MSc or PhD students). This reflects the suggestion of various revisions of Bloom’s taxonomy to add ‘creation’ as a further category of the cognitive domain, which is not conceptual knowledge per se but rather builds on the understanding and skills gained through the preceding cognitive categories (Anderson et al. 2001). This fifth level is very flexible depending on the course-specific focus and is therefore not detailed in further sections.

In Table 4 we suggest generic learning outcomes for LCA taught content corresponding to the five levels defined above. By defining topics, time, and cognitive categories for their courses, instructors are able to define consistent learning outcomes more clearly. We recognize that instructors or curricula may have specific requirements or demands and therefore suggest that these outcomes be applied and adapted as required. Based on the learning outcomes, we derive an envisioned professional LCA competency as the final outcome for students with regard to their future role as sustainability professionals. After their studies, some students will merely be aware of the life cycle concept and might commission LCA studies for decision-making in the future (levels 1 and 2), some will be users of LCA information and contribute to its generation (level 3), while others will be able to conduct full-scale LCA studies (level 4), either in an accompanied or an independent manner. As such, the level of LCA competency that will be required from students varies considerably.
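
As an illustrative summary of this section (not a substitute for Table 4), the envisioned professional competencies described above can be sketched as a simple mapping; the wording is condensed from the text, and the full definitions should be taken from Fig. 1 and Table 4.

# Condensed sketch of the envisioned professional LCA competencies per level,
# as described in this section; full definitions are given in Fig. 1 and Table 4.
ENVISIONED_COMPETENCY = {
    (1, 2): "aware of the life cycle concept; may commission LCA studies for decision-making",
    (3,):   "user of LCA information who also contributes to its generation",
    (4,):   "able to conduct full-scale LCA studies, accompanied or independently",
    (5,):   "postgraduate research: combines LCA with other disciplines to create new methods",
}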

3.3 Application of the framework in higher education

The substantial differences in learning outcomes and envisioned professional LCA competencies not only require distinct workloads and tailored study program integration, but also result in a large spectrum of LCA teaching activities, materials, and content. This section therefore showcases the use of the framework in higher education. Based on the expert panel research, we decided to focus on three broad types of LCA teaching: lecturing and similar activities, case studies and project work, and LCA software and database use. The latter is mostly part of case studies and project work and of particular relevance for LCA teaching in higher education.

The three types resemble Atkins and Brown’s continuum of teaching methods (2002), where lectures represent one end of the continuum with a high degree of engagement and control by the lecturer. The degree of student participation and independence then steadily increases via small group teaching, research supervision and lab work up to individual work (e.g. PhD studies) at the other end of the continuum. Accordingly, the cognitive domain categories of Bloom’s taxonomy that are addressed change from one type to the next.

The following sections further elaborate the three types (Sections  3.3.1 – 3.3.3 ), while typical teaching contents are addressed in Section  3.3.4 . Unless stated otherwise, the results are the direct outcome of the nominal group technique.

3.3.1 Lecturing and teaching material

Lecturing is a typical means of communicating basic LCA content to students, with lecturers presenting LCA topics, instructing small exercises and encouraging discussion. Given that lectures are the oldest and most common means of university teaching, they require no further explanation here. More student-centred learning activities are gaining importance in LCA teaching, e.g. e-learning or flipped classroom concepts. The latter moves lecture material out of the classroom (e.g. via recordings that students watch before coming to class) and uses classroom time for active learning, e.g. exercises and discussion (Lage et al. 2000). Teaching approaches are expected to change even faster due to the COVID-19 pandemic, which has already prompted a worldwide shift from classroom teaching to online teaching (cf. e.g. Bao 2020).

LCA lecturing is supported by textbooks, published guidelines, and online resources. Table 5 lists such LCA teaching documents, indicates the type of each document and its relation to the teaching and competency levels specified in Table 4 . The specific role of standards and guidelines in teaching is to support and complement textbooks and other course material. A review of English textbooks is provided by Laurent et al. ( 2020 ).

3.3.2 Case studies and project work

Case studies and project work are very common and important elements of teaching LCA and appear at all levels of the teaching and competency level framework (Table 4 ). While case studies are often used for illustration and interpretation purposes at levels 1 and 2, the conduct of LCA case studies and projects, including data collection and other real-life challenges of LCA, takes place at levels 3 and 4.

At the first levels of the framework, two main approaches can be distinguished:

• Case studies may be embedded into the teaching of LCA basics to support the understanding of the main steps and procedures. For instance, after the functional unit and reference flow terminology has been introduced in a lecture (e.g. at level 2), groups of students reflect on it by working through a given case study. Thereafter, the same sequence is repeated for the product system, system boundaries, and so forth. Some textbooks (e.g. Klöpffer and Grahl 2014) support this “red thread” approach by integrating a continuous case study into each book chapter.

• Case studies are used following introductory lectures. Groups of students are, for example, asked to read and work on a given case study or to elaborate a highly simplified case.

Typically, the case studies at levels 1 and 2 relate to pre-made cases and do not involve external collaboration. In some cases, software is used to support the case study work at these first levels (see also the following section).

At higher levels of the competency framework (i.e. levels 3 and 4), the complexity and magnitude of case study and project work increase accordingly. Instead of just reading and interpreting elements of case studies or doing guided exercises on simplified case study material, students at levels 3 and above start to interpret and compare “full” LCA case studies and can also plan and execute full-fledged LCA studies by themselves, including parts of the data collection and system modeling in LCA software. As a step further, student groups can conduct these LCA studies on real-life cases in collaboration with external partners (e.g. companies, municipalities), where they experience the challenges (e.g. data collection) and benefits (e.g. interactions and communication with engaged stakeholders) of real-life application of LCA.

These findings of the expert panels suggest that group work and project work approaches, which stimulate active engagement by the students, are important for achieving the desired LCA learning outcomes. This mirrors the outcome of the literature review in Section 3.1, where several authors (e.g. Margallo et al. 2019; Sriraman et al. 2017a, b) emphasize the importance of project- and problem-based learning approaches and others provide examples of real-life company cases as the core element of their teaching concepts (see Cosme et al. 2019; Lockrey and Bissett Johnson 2013; Piekarski et al. 2019).

3.3.3 Spreadsheet, software and database use

IT applications form an important part of LCA teaching and are often used to handle the quantity of data needed for Life Cycle Inventory (LCI) analysis and Life Cycle Impact Assessment (LCIA). While IT support can increase the understanding of LCA applications and better prepare students for potential future careers as LCA practitioners, it also bears the risk of replacing or impeding the comprehension and critical questioning of LCA methodology. Defining the right dose of software use is hence an important part of LCA teaching. Besides the use of specialized LCA software and databases, spreadsheet exercises are a common part of LCA teaching. Spreadsheet or by-hand exercises enable first small LCA computations and help to illustrate LCA methods that otherwise run automatically, and therefore remain hidden, in specialized software and database solutions. LCA teachers are hence encouraged not to rely on LCA software alone but to ensure that the mechanics behind LCA computations are well understood by the students, e.g. through small spreadsheet exercises that mimic the computations of the LCA software at a smaller scale (see Cosme et al. 2019 for an example).
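
The following minimal sketch shows the kind of spreadsheet-style computation meant here: a small inventory of elementary flows is multiplied by pre-calculated characterisation factors to obtain a single impact score. All numbers are hypothetical teaching values, not data from any particular database.

# Life cycle inventory per functional unit (kg of emission), hypothetical values
inventory = {"CO2": 2.0, "CH4": 0.01, "N2O": 0.001}

# Pre-calculated characterisation factors for climate change (kg CO2-eq per kg),
# illustrative GWP100-type values
characterisation = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

# Impact score = sum over elementary flows of (amount x characterisation factor)
score = sum(amount * characterisation[flow] for flow, amount in inventory.items())
print(f"Climate change: {score:.3f} kg CO2-eq per functional unit")  # 2.545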

Four types of LCA software for teaching purposes can be distinguished. Spreadsheet exercises (type 1) allow simple case studies or LCA calculations by displaying LCA results obtained from simple LCI data multiplied by pre-calculated emission factors. Streamlined software applications (type 2) usually focus on one application (such as eco-design or footprinting), limited boundaries (e.g., cradle-to-gate) or limited sectors (e.g., buildings). Professional LCA software packages (type 3; e.g., SimaPro (PRé Consultants 2020), openLCA (GreenDelta 2020), GaBi (Sphera 2020), or Umberto (ifu Hamburg 2020)) enable modelling of the life cycle of any product and calculation of the associated environmental impacts based on several possible LCI databases for the background system and several already implemented LCIA methods. Advanced use of LCA software through coding or programming (type 4) includes (i) stand-alone programming frameworks (e.g., Brightway2 (Mutel 2017)) and (ii) COM interfaces that run LCA software automatically from a third-party application or programming language.
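
As a rough indication of what “type 4” use looks like in practice, the sketch below runs a single impact calculation with the Brightway2 framework (Mutel 2017). It assumes a project with a background LCI database and LCIA methods already installed; the project, database, search term and method names are placeholders, not prescriptions.

import brightway2 as bw  # Brightway2 metapackage (bw2data + bw2calc)

bw.projects.set_current("lca_teaching")            # hypothetical project name
db = bw.Database("teaching_background_db")         # hypothetical database name
activity = db.search("electricity production")[0]  # pick some activity from the database

method = ("IPCC 2013", "climate change", "GWP 100a")  # assumed LCIA method key
lca = bw.LCA({activity: 1.0}, method)  # demand: 1 unit of the chosen activity
lca.lci()   # build and solve the inventory
lca.lcia()  # apply the characterisation factors
print(lca.score)  # single impact score for the chosen method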

As presented in Table 3, software of types 1, 2 and 3 is used as a support for LCA classes and for doing exercises or conducting student projects. Type 4 is mostly used at PhD level. However, the development of programming classes in curricula might foster the use of type 4 software in LCA classes at competency levels 3 and 4.

When designing an LCA class and selecting LCA software, several aspects need to be considered, including financial feasibility (“cost range”), the product or activity to be studied (“applicability”), the ease of use during the learning process (“usability”), the different functionalities available in the software (“versatility”), the time dedicated to learning and using the software, the need to conduct a partial or full LCA, the need to use several LCI databases and/or several LCIA methods, the level of the class with regard to LCA (according to Table 4), and the associated competencies to be learned. Figure 2 represents and compares most of these aspects for the types of software mentioned above.

Figure 2: Comparison of four types of software to teach LCA (Levels: B = Bachelor, M = Master, D = Doctoral)

Access to LCI databases is also an important aspect when using LCA software for teaching. Several educational and free-of-charge LCI database packages are available. However, advanced LCI modelling as well as conducting real-life case studies require access to comprehensive databases that are often commercial (e.g. ecoinvent and GaBi). At advanced LCA competency levels, students might work on LCI datasets from such databases, e.g. to perform adaptations to specific regional settings or to conduct sensitivity analyses.

3.3.4 LCA teaching content

As LCA is an interdisciplinary approach, its concepts and topics are taught at various levels and across a range of subjects, from engineering and management to dedicated industrial ecology courses. Consequently, the time spent, the teaching content and methods covered, and the depth of learning vary widely, from single classes and modules to entire courses, at both undergraduate and postgraduate levels. The number of students/participants and the available teaching staff resources also affect teaching content and related activities.

LCA teaching content ranges from topics like an introduction to life cycle thinking, via life cycle inventory modeling and environmental impact assessment methods, to the mathematical foundations of LCA. The typical workload for students goes from awareness-raising and basic knowledge acquisition of up to 30 h as part of courses, through entire courses and modules of up to 150 h, to specialized minors and majors with more than 360 h in studies of different disciplines. Not all of this workload is necessarily dedicated to lecturing; in particular at higher competency levels, case studies and group work are a major element of the course (see Section 3.3.2), meaning that lecturing may only represent a third to half of the total workload (see Table 4).

The teaching generally starts by introducing the wider context of LCA (e.g. sustainable development, resource efficiency, circular economy, environmental management and eco-design), a basic understanding of environmental impacts (e.g. climate change and acidification), and the concept of life cycle thinking as a way of providing a holistic understanding of value chains and product systems. Often jointly with this concept, trade-offs and burden shifting across life cycle stages and impacts are explained. This broad introduction is followed by a description of the purposes of LCA, illustrated with use cases corresponding to the students’ area of study. As a next step, the ISO 14040/44 framework, including the LCA phases and their terminology (functional unit, system boundaries, etc.), is generally clarified. These teaching contents are considered necessary to achieve the learning outcome that students understand life cycle thinking, its utilization and its relevance.

Students who are expected to use LCA methodology and understand the basics of LCA studies need to learn more details on goal and scope definition, inventory modeling, impact assessment methods and the interpretation of results. They should also be aware of special types of LCA, such as carbon and water footprinting, life cycle sustainability assessment (including social LCA and life cycle costing), organisational LCA, and input–output and hybrid LCA. Moreover, the applications of LCA in specific sectors of relevance for the students (e.g. buildings, materials or energy) are explained in more depth. Finally, a basic understanding of how LCA is implemented in industry for innovation and communication, through the concept of Life Cycle Management, is often part of this level.

At higher learning and competency levels, classes often also cover sensitivity and uncertainty analyses and the mathematical foundations of LCA, and stimulate systems thinking by combining LCA with other system-analytical tools like material flow analysis (MFA) and environmental risk assessment (ERA). Moreover, the more hours the topics mentioned above are taught, the higher the level, with the exception of level 5, where students mostly learn autodidactically within a project coached by a professor or researcher rather than through classroom formats.

To systematize this diversity of LCA teaching content, the expert panels combined the LCA learning and competency levels from Table 4, the collection of teaching topics from their workshops, and the three types of LCA teaching approaches from the previous sections. Subsequently, the expert panels assigned typical student workloads for each topic at each level. Table 6 consolidates these findings and presents an overview of typical teaching content at the different levels. For instance, the typical workload for introducing the wider context of LCA (e.g. its role within eco-design) is about 2 h at level 1, but up to 8 h at higher levels (see the first line in the classroom format category). While level 1 aims to provide a basic understanding of environmental impacts such as climate change within 4 h of workload, higher levels typically double this (see the second line in the classroom category). In addition, levels 2, 3, and 4 enable more specific insights into environmental impact assessment methods, with typical workloads of 8, 16, and 32 h respectively (see the row on LCIA in the lectures category). While Table 6 provides a good overview of typical topics and time requirements at each LCA competency level, it is not meant to be exhaustive. It furthermore does not detail the abilities required for each item. These abilities are reflected by Bloom’s cognitive categories, which are juxtaposed with the LCA levels in Fig. 1. It is recommended to incorporate these categories into the planning and conduct of LCA courses for each given topic in Table 6.
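
For orientation, the indicative workloads mentioned in this paragraph can be collected as follows; only the values stated above are included, and the full per-topic breakdown should be taken from Table 6.

# Indicative workloads in hours, as stated in the text for three example topics.
indicative_workload_h = {
    "wider context of LCA (e.g. role within eco-design)": {"level 1": 2, "higher levels": 8},
    "basic understanding of environmental impacts":       {"level 1": 4, "higher levels": 8},
    "environmental impact assessment (LCIA) methods":     {"level 2": 8, "level 3": 16, "level 4": 32},
}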

Table 6 also illustrates the increasing diversity of teaching approaches, with a higher degree of student participation and independence as the learning and competency levels increase (for competency levels, see Section 3.2). Case study and project work as well as software and database use form larger portions of the overall workload, increasing from about a quarter of the total workload at levels 1 and 2 to more than half of the workload at level 4.

4 Conclusions and perspectives

The framework developed in this paper describes five LCA learning and competency levels in higher education institutions. It aligns the topics that should be addressed with the total workload needed to achieve the different competency levels. The learning outcomes of the LCA competency levels are linked with Bloom’s taxonomy, with the higher levels covering a larger share of Bloom’s cognitive categories, also reflecting the importance of critical and systems thinking within LCA. The framework is proposed as a practical tool for those involved in the delivery of LCA education in higher education. On the one hand, it provides guidance on the workload that needs to be invested in order to fulfil the learning outcomes for the different LCA competency levels. On the other hand, it allows determining which approaches and content can be taught given a pre-set workload or number of available hours. Moreover, teaching documents that can be used as support material are cited.

This work does not include empirical research but represents the consolidated findings of LCA teachers and other stakeholders on the topic, obtained through the nominal group technique. The results represent an average of various study programs and disciplinary backgrounds and thus cannot reflect each individual study program in all aspects. They are based on the views of various experts, mainly from Europe but also from North America, who have experience in teaching LCA. Since these experts do not adequately represent the perspectives of Africa, Asia and Latin America, the results are clearly not representative of the world as a whole, in particular not of emerging economies. Overall, the paper contributes to a better understanding of teaching LCA in higher education by providing structured guidance and a framework on LCA learning and competency levels with related teaching approaches and content. It encourages and supports higher education bodies and their staff to impart a minimum of ‘life cycle literacy’ to students across various disciplines by increasing the availability, visibility and quality of their teaching on life cycle thinking and LCA. The framework also reveals that educating competent and independent LCA experts requires comprehensive and interdisciplinary LCA teaching, which goes beyond simplified and mechanistic approaches and encourages critical systems thinking.

In that setting, we provide sets of recommendations for users of the framework:

• Curriculum developers can use the framework to determine the number of credits that would need to be assigned per course to allow students to acquire the different competency levels.

• University professors who are interested in offering courses on the subject can use it for orientation and, based on Table 6, can choose the content that corresponds to the level of their students as well as the available time.

• Students can use the overview to compare their course descriptions and contents with the competency levels listed in Table 4 in order to self-assess their own capabilities, and they can manage their expectations of which time investment will lead to which level of competency in LCA.

• Providers of LCA software and LCI databases can benefit from the overview of skills listed for the different competency levels to create suitable services for each level.

• Prospective employers who want to recruit a person with an LCA background can use the framework to better determine the actual qualifications that they expect from their candidates.

Future research might enhance the framework by triangulating its results further, e.g. by conducting empirical surveys to gather the knowledge and experience of LCA teachers in higher education and beyond around the globe, using a bottom-up approach instead of the top-down approach applied in this paper. This could extend the insights into teaching LCA in higher education by providing a mapping of all programs and courses available worldwide. It could also pave the way for developing a repository or platform providing all LCA teaching resources available in the world.

The results of such a new research project would help students to find the qualification and education that best fits their prospective careers, and would show educational institutions and their staff which options for integrating LCA are available. In addition, the overview could provide contact details that would help students to orientate themselves and would help institutions to set up free online platforms to foster knowledge exchange on teaching practices.

Furthermore, the current focus of the framework on higher education could be broadened by including lifelong learning activities, as envisioned for instance by the Life Cycle Assessment Certified Professional programs of the American Center for Life Cycle Assessment (ACLCA 2020). Lifelong learning is crucial since it ensures that practitioners maintain or increase their LCA competency level after their initial training and can continuously provide high-quality LCA studies.

Finally, opportunities exist to use the framework to foster ‘life cycle literacy’ in a more systematic way by working not only with curriculum developers but also with policy-makers who want to promote a more sustainable economy based on scientifically sound information. These stakeholders may have an interest in ensuring that ‘life cycle literacy’ is built, and could thus use the framework to find out what kind of knowledge is currently taught by universities in their region. They could then engage in collaborations with curriculum developers to foster the teaching of life cycle thinking and LCA in their higher education institutions. The FSLCI and other organizations working on encouraging the use of life cycle information worldwide could act as facilitators, promoting the growth of ‘life cycle literacy’ among those stakeholders based on the LCA learning and competency level framework for higher education presented in this paper.

ACLCA (2020) Life Cycle Assessment Certified Professional. American Center for Life Cycle Assessment, USA. https://aclca.org/lcacp-certification/ Accessed 01 May 2020

Allan J (1996) Learning outcomes in higher education. Studies in Higher Education 21(1):93–108

Anderson LW, Krathwohl DR, Airasian PW, Cruikshank KA, Mayer RE, Pintrich PR, Raths J, Wittrock MC (2001) A taxonomy for learning, teaching, and assessing: A revision of Bloom’s Taxonomy of Educational Objectives. Longman, New York

Atkins M, and Brown G (2002). Effective teaching in higher education. Routledge.

Bach V, Lehmann A, Görmer M, Finkbeiner M (2018) Product Environmental Footprint (PEF) Pilot Phase — Comparability over Flexibility? Sustainability 10:2898

Balan P, Manickam G (2013) Promoting holistic education through design of meaningful and effective assignments in sustainable engineering. In: Proceedings of 2013 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE). IEEE: 382–385

Bao W (2020) COVID-19 and Online Teaching in Higher Education: A Case Study of Peking University. Human Behavior and Emerging Technologies 2:113–115

Bauman H, Tillman AM (2004) The Hitch Hiker’s Guide to LCA: An orientation in life cycle assessment methodology and application. Studentlitteratur, Gothenburg, Sweden

Benoit C, Mazijn B (2009) Guidelines for Social Life Cycle Assessment of Products. UNEP/SETAC Life Cycle Initiative, Paris

Bjørn A, Owsianiak M, Laurent A, Westh TB, Molin C, Hauschild MZ (2013) Mapping and characterization of LCA networks. Int J Life Cycle Assess 18:812–827

Bloom BS, Engelhart MD, Furst EJ, Hill WH, Krathwohl DR (1956) Taxonomy of educational objectives: The classification of educational goals. Handbook 1: Cognitive domain. David McKay, New York

Burnley S, Wagland S, Longhurst P (2019) Using life cycle assessment in environmental engineering education. High Educ Pedagog 4:64–79

Cosme N, Hauschild MZ, Molin C, Rosenbaum RK, Laurent A (2019) Learning-by-doing: experience from 20 years of teaching LCA to future engineers. Int J Life Cycle Assess 24:553–565

Crossin E, Carre A, Grant T et al. (2011) Teaching life cycle assessment: ‘greening’ undergraduate engineering students at RMIT University. In: Proceedings of the 7th Australian Conference on Life Cycle Assessment: Revealing the Secrets of a Green Market, Melbourne, Australia, March 9–10

Curran MA (2012) Life Cycle Assessment Handbook: A Guide for Environmentally Sustainable Products. Scrivener Publishing, Beverly, MA

De Souza XL, Peixoto JAA, De Souza CG, Pontes AT, Futuro DO (2014) Life cycle thinking in graduate education: An experience from Brazil. Int J Life Cycle Assess 19:1433–1444

EC (2015) ECTS users’ guide. European Commission, Directorate-General for Education, Youth, Sport and Culture, Brussels

EC (2020). Results and deliverables of the Environmental Footprint pilot phase, European Commission, Brussels, BE. Retrieved from http://ec.europa.eu/environment/eussd/smgp/PEFCR_OEFSR_en.htm . Accessed 3 July 2020

EC JRC (2010) International Reference Life Cycle Data System (ILCD) Handbook - General guide for Life Cycle Assessment - Detailed guidance. EUR 24708 EN. European Commission, Joint Research Centre, Luxembourg

Evans GM, Galvin KP, Doroodchi E (2008) Introducing quantitative life cycle analysis into the chemical engineering curriculum. Educ Chem Eng 3:e57–e65

Favi C, Marconi M, Germani M (2019) Teaching eco-design by using LCA analysis of company’s product portfolio: The case study of an Italian manufacturing firm. Procedia CIRP 80:452–457

Finkbeiner M (2016) Special Types of Life Cycle Assessment. Springer Netherlands, Dordrecht

Uni Freiburg (2020) Industrial Ecology Open Online Course. Industrial Ecology Freiburg, Research group at the Faculty of Environment and Natural Resources Universität Freiburg, Germany  http://www.teaching.industrialecology.uni-freiburg.de/ . Accessed 03 May 2020

Frischknecht R (2020) Lehrbuch der Ökobilanzierung. Springer Spectrum, Berlin

Frischknecht R, Fantke P, Tschümperlin L, Niero M, Antón A, Bare J et al. (2016) Global guidance on environmental life cycle impact assessment indicators: progress and case study. Int J Life Cycle Assess 21(3):429–442

FSLCI Forum for Sustainability through Life Cycle Innovation (2020) Vision, mission and goals  https://fslci.org/vision-mission/ . Accessed 24 September 2020

Fullana P, Puig R (1994) Análisis del ciclo de la vida. Rubes, Barcelona

Gilmore KR (2016) Teaching life cycle assessment in environmental engineering: a disinfection case study for students. Int J Life Cycle Assess 21:1706–1718

GreenDelta (2020) openLCA, Berlin, Germany. http://www.openlca.org/

Grisel L, Osset P (2004) L’Analyse du Cycle de Vie d’un produit ou d’un service : Applications et mise en pratique. AFNOR, La Plaine Saint-Denis, France

Guinée J (2002) Handbook on Life Cycle Assessment: Operational Guide to the ISO Standards. Institute of Environmental Sciences (CML) - Leiden University, Springer Netherlands, Dordrecht

Harden RM (2002) Learning outcomes and instructional objectives: is there a difference? Med Teach 24(2):151–155

Harding TS (2004) Life cycle assessment as a tool for Green manufacturing education. ASEE Annu Conf Proc 9089–9098

Harvey N, Holmes CA (2012) Nominal group technique: an effective method for obtaining group consensus. Int J Nurs Pract 18:188–194

Hauschild MZ, Huijbregts MAJ (2015) Life Cycle Impact Assessment. Springer Netherlands, Dordrecht

Hauschild M, Rosenbaum RK, Olsen S (2018) Life Cycle Assessment - Theory and Practice. Springer International Publishing, Berlin

Heijungs R, Suh S (2002) The Computational Structure of Life Cycle Assessment. Springer, Netherlands, Dordrecht

Ifu Hamburg GmbH (2020) Umberto LCA Software, Hamburg, Germany  https://www.ifu.com/en/umberto/lca-software/ . Accessed 24 September 2020

ILCA (2020) Open teaching material. International Life Cycle Academy, Barcelona. https://ilca.es/teaching-materials/open-teaching-material/ . Accessed 03 May 2020

ISO 14040 (2006) Environmental management — Life cycle assessment — Principles and framework. International Organization for Standardization, Geneva

ISO 14044 (2006) Environmental management — Life cycle assessment — Requirements and guidelines. International Organization for Standardization, Geneva

ISO 14046 (2014) Environmental management — Water footprint — Principles, requirements and guidelines. International Organization for Standardization, Geneva

ISO 14067 (2018) Greenhouse gases — Carbon footprint of products — Requirements and guidelines for quantification. International Organization for Standardization, Geneva

Jolliet O, Saade-Sbeih M, Shaked S, Jolliet A, Pierre Crettaz P (2015) Environmental Life Cycle Assessment. CRC Press, Boca Raton, FL

JRC EC (2012) Product Environmental Footprint (PEF) and Organisation Environmental Footprint (OEF) Guide. European Commission, Joint Research Centre, Ispra

Klöpffer W (2008) Life cycle sustainability assessment of products. Int J Life Cycle Assess 13:89

Klöpffer W, Grahl B (2014) Life Cycle Assessment (LCA): A Guide to Best Practice. Wiley‐VCH Verlag

Krathwohl D (2002) A Revision of Bloom’s Taxonomy: An Overview. Theory Pract 41(4):212–218

Lage MJ, Platt GJ, Treglia M (2000) Inverting the Classroom: A Gateway to Creating an Inclusive Learning Environment. The Journal of Economic Education 31(1):30–43

Laurent A, Olsen SI, Fantke P, Andersson PH (2015) Active learning in sustainability teaching. In: Exploring teaching for active learning in engineering education, ETALEE conference, pp. 77-78

Laurent A, Owsianiak M (2017) Potentials and limitations of footprints for gauging environmental sustainability. Curr Opin Environ Sustain 25:20–27

Laurent A, Weidema B, Bare J, Liao X, Maia de Souza D, Pizzol M, Sala S, Schreiber H, Thonemann N, Verones F (2020) Methodological review and detailed guidance for the life cycle interpretation phase. J Ind Ecol 1–18 (In Press)

Lockrey S, Bissett Johnson K (2013) Designing pedagogy with emerging sustainable technologies. J Clean Prod 61:70–79

Loste N, Chinarro D, Gomez M, Roldán E, Giner B (2020) Assessing awareness of green chemistry as a tool for advancing sustainability. J Clean Prod 256. https://doi.org/10.1016/j.jclepro.2020.120392

Lytras MD, Pouloudi A (2006) Towards the development of a novel taxonomy of knowledge management systems from a learning perspective: an integrated approach to learning and knowledge infrastructures. J Knowl Manag 10(6):64–68

Mälkki H, Alanne K (2017) An overview of life cycle assessment (LCA) and research-based teaching in renewable and sustainable energy education. Renew Sustain Energy Rev 69:218–231

Mälkki H, Alanne K, Hirsto L, Soukka R (2016) Life cycle assessment (LCA) as a sustainability and research tool in energy degree programmes. 44th Annu. Conf. Eur. Soc. Eng. Educ. - Eng. Educ Top World Ind Coop SEFI 2016:12–15

Margallo M, Dominguez-Ramos A, Aldaco R (2019) Incorporating life cycle assessment and ecodesign tools for green chemical engineering: A case study of competences and learning outcomes assessment. Educ Chem Eng 26:89–96

Masanet E, Chang Y, Yao Y et al. (2014) Reflections on a massive open online life cycle assessment course. Int J Life Cycle Assess 19:1901–1907

Matthews HS, Hendrickson CT, Matthews DH (2020). Life Cycle Assessment - Quantitative Approaches for Decisions That Matter - Free Textbook and Other Learning Resources for the Global LCA Community  https://www.lcatextbook.com/ . Accessed 03 May 2020

Meo M, Bowman K, Brandt K et al. (2014) Teaching Life-Cycle Assessment with Sustainable Minds - A Discussion with Examples of Student Projects. J Sustain Educ 7:11

Mittelstaedt JD, Shultz CJ, Kilbourne WE, Peterson M (2014) Sustainability as Megatrend: Two Schools of Macromarketing Thought. J Macromarketing 34(3):253–264

Mulder-Nijkamp M, de Koeijer B, Torn RJ (2018) Synthesizing sustainability considerations through educational interventions. Sustain 11:1–37

Mutel C (2017) Brightway: An open source framework for Life Cycle Assessment. J Open Source Softw 12:2

Olsen SI (2010) A Strategy for Teaching Sustainability Assessment. 3rd Int. Symp. Eng. Educ. 2010

Olsen SI, Fantke P, Laurent A, Birkved M, Bey N, Hauschild MZ (2018) Sustainability and LCA in Engineering Education - A Course Curriculum. Procedia CIRP 69:627–632

Oude Luttikhuis EJ, Toxopeus ME, Lutters E (2015) Effective integration of life cycle engineering in education. Procedia CIRP 29:550–555

Pant R, Chomkhamsri K, Pelletier N, Manfredi S, Galatola M, Bedo I (2012) European Guidelines for measuring the Environmental Footprint of Products and Organisations

Pennington DW, Chomkhamsri K, Pant R, Wolf MA, Bidoglio G, Kögler K et al. (2010) ILCD Handbook Public Consultation Workshop. Int J Life Cycle Assess 15(3):231–237

Perini S, Luglietti R, Margoudi M et al. (2018) Learning and motivational effects of digital game-based learning (DGBL) for manufacturing education –The Life Cycle Assessment (LCA) game. Comput Ind 102:40–49

Piekarski CM, Puglieri FN, de Araújo CKC, Barros MV, Salvador R (2019) LCA and ecodesign teaching via university-industry cooperation. Int J Sustain High Educ 20:1061–1079

Pizzol M, Laurent A, Sala S, Weidema B, Verones F, Koffler C (2017) Normalisation and weighting in life cycle assessment: quo vadis? Int J Life Cycle Assess 22(6):853–866

Potter M, Gordon S, Hamer PW (2004) The Nominal Group Technique: A useful consensus methodology in physiotherapy research. New Zealand Journal of Physiotherapy 32:70–75

PRé Consultants B.V. (2020) SimaPro, Amersfoort, Netherlands. https://simapro.com/education/

Roure B, Anand C, Bisaillon V, Amor B (2018) Systematic curriculum integration of sustainable development using life cycle approaches: The case of the Civil Engineering Department at the Université de Sherbrooke. Int J Sustain High Educ 19:589–607

Sahakian M, Seyfang G (2018) A sustainable consumption teaching review: From building competencies to transformative learning. J Clean Prod 198:231–241

Schenk R, White P (2014) Environmental Life Cycle Assessment: Measuring the environmental performance of products. American Center for Life Cycle Assessment, USA

Shriberg M, MacDonald L (2013) Sustainability Leadership Programs: Emerging Goals, Methods & Best Practices. Journal of Sustainability Education 5

Sonnemann G, Margni M (2015) Life Cycle Management. Springer. https://link.springer.com/book/10.1007%2F978-94-017-7221-1

Sonnemann G, Vigon B (2011) Global guidance principles for life cycle assessment databases. UNEP/SETAC Life Cycle Initiative, Paris

Sphera (2020) GaBi LCA Software, Leinfelden-Echterdingen, Germany. http://www.gabi-software.com/international/software/gabi-universities/

Sriraman V, Torres A, Ortiz AM (2017) Teaching sustainable engineering and industrial ecology using a hybrid problem-project based learning approach. J Eng Technol 34:8–15

Sriraman V, Torres A, Ortiz A (2017) Teaching Sustainable Engineering and Industrial Ecology using a Hybrid Problem-Project Based Learning Approach. In: 2017 ASEE Annual Conference & Exposition Proceedings. ASEE Conferences, pp 8–15

Stewart R, Fantke P, Bjørn A, Owsianiak M, Molin C, Hauschild MZ, Laurent A (2018) Life cycle assessment in corporate sustainability reporting: Global, regional, sectoral and company-level trends. Bus Strategy Environ 27:1751–1764

Swarr TE, Hunkeler D, Klöpffer W, Pesonen HL, Ciroth A, Brent AC, Pagan R (2011) Environmental life cycle costing: a code of practice. Society of Environmental Chemistry and Toxicology (SETAC), Pensacola

Tasdemir C, Gazo R (2020) Integrating sustainability into higher education curriculum through a transdisciplinary perspective. J Clean Prod 265:121759

TCD – Trinity College Dublin (2020) ECTS Equivalents Credit Table for UK & Non-European Study Abroad Exchanges  https://www.tcd.ie/study/assets/PDF/ECTS_Equivalents_Credit_Table_UK.pdf . Accessed 03 July 2020

UNEP (2020a) What is life cycle thinking? United Nations Environment Programme hosted Life Cycle Initiative  https://www.lifecycleinitiative.org/starting-life-cycle-thinking/what-is-life-cycle-thinking/ . Accessed 01 June 2020

UNEP (2020b) Training materials. United Nations Environment Programme hosted Life Cycle Initiative  https://www.lifecycleinitiative.org/resources/training/ . Accessed 03 May 2020

Valdivia S, Ugaya C, Sonnemann G, Hildenbrand J (2012) Towards a Life Cycle Sustainability Assessment. UNEP/SETAC Life Cycle Initiative, Paris

Vallero DA, Braiser C (2008) Teaching Green Engineering: The Case of Ethanol Lifecycle Analysis. Bull Sci Technol Soc 28:236–243

Weber NR, Strobel J, Dyehouse MA et al. (2014) First-Year Students’ Environmental Awareness and Understanding of Environmental Sustainability Through a Life Cycle Assessment Module. J Eng Educ 103:154–181

Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and affiliations.

Institute for Industrial Ecology (INEC), Pforzheim University, Pforzheim, Germany

Tobias Viere

Interdisciplinary Research Laboratory in Life Cycle Assessment and Circular Economy (LIRIDE), Université de Sherbrooke, Quebec, Canada

ISM, UMR 5255, Université de Bordeaux, Talence, France

Nicolas Berger, Rachel Horta Arduin, Philippe Loubet & Guido Sonnemann

PRé Sustainability B.V, Amersfoort, Netherlands

Ruba Dolfing Fanous

Institute of Natural Resource Sciences, Zurich University of Applied Sciences, Wädenswil, Switzerland

Regula Keller

Section for Quantitative Sustainability Assessment, Technical University of Denmark (DTU), Kgs. Lyngby, Denmark

Alexis Laurent

Forum for Sustainability Through Life Cycle Innovation e.V. (FSLCI), Berlin, Germany

Philip Strothmann

Institute IWAR, Material Flow and Resource Economy, Technical University of Darmstadt, Darmstadt, Germany

Steffi Weyand

Solent University Southampton, Southampton, UK

Laurie Wright

Corresponding author

Correspondence to Tobias Viere.

Additional information

Communicated by Arnaud Hélias.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Viere, T., Amor, B., Berger, N. et al. Teaching life cycle assessment in higher education. Int J Life Cycle Assess 26 , 511–527 (2021). https://doi.org/10.1007/s11367-020-01844-3

Received : 30 July 2020

Accepted : 16 November 2020

Published : 17 December 2020

Issue Date : March 2021

DOI : https://doi.org/10.1007/s11367-020-01844-3

Keywords: Life cycle thinking; Learning outcomes; Competency levels; Teaching approaches and content

9 assessment trends to watch for in 2024

Assessment plays a pivotal role in the future of higher education as institutions constantly seek ways to improve the student experience, boost enrollment and retention, and prepare students for successful careers. 

While the COVID-19 pandemic, the growth of artificial intelligence (AI), and cultural shifts in America’s opinion of higher education have resulted in many challenges, they have also forced institutions of higher learning to adopt new teaching and learning methods, ultimately shaping assessment processes.     

Defining assessment in higher education

In simple terms, assessment in higher education measures what students know and what they can do. Educators and administrators gather information to better understand how students learn and discover strengths and weaknesses, not just in students but also in curricula and teaching methods. 

Assessments also measure how student learning impacts the institution, making it a mutually beneficial process. Institutions have actionable data on which to base improvements, and students benefit from these improvements. In addition, assessments can inform resource allocation and educational planning. In short, assessment helps institutions confidently understand how to make the most significant impact in the lives of the students they serve. 

9 trends in higher education assessment

How we assess learning is the backbone of student success. It drives institutional improvement and gives institutions valuable insights into how students perform and prepare for the workplace. Assessments present opportunities for improvement in teaching, assessing, and engaging students. 

Over the years, student assessment has evolved to reflect the ever-changing educational landscape. The COVID-19 pandemic forced institutions to abruptly change their teaching and assessment strategies. AI has forced institutions to engage with what will likely be a permanent tool for both faculty and students. Accreditation agencies are recognizing the need for more disaggregated data that shows student learning for different demographics of students. These changes and others have prompted new trends in higher education assessment, including:

1. Shifting toward new assessment types

Summative assessment has long been the go-to assessment model for higher learning institutions. In recent years, however, the limitations of this model have become increasingly apparent. Summative assessments give students only one way to demonstrate their knowledge at the end of an instructional period and often favor some students over others. They also limit ongoing feedback and widen the divide between students and educators.

In response, institutions have initiated more formative assessments throughout the course. This interactive format allows educators to tweak their approaches throughout an instructional period, addressing challenges or gaps in student knowledge as they arise. 

Strategic assessment, using different assessment methods designed across an entire curriculum to assess specific outcomes in a given year, is another noteworthy trend. Educators assess students on activities that allow insight into how well they achieve program goals. Instead of legacy methods in which programs were required to capture data on all outcomes every semester, more institutions are focusing on a sample of outcomes each semester, relieving some of the data-gathering pressure experienced by faculty.

2. Data-driven innovation

Data-driven decision-making has become the new norm in many institutions, which are using data to keep students engaged, increase retention, identify and support at-risk students, and improve student outcomes such as degree completion.

With the enhanced ability to gather student data, institutions are making that data actionable and using it to drive improvement at all stages of the student journey. The ability to use data to drive success has resulted in a shift towards better assessment technology. In the long term, administrators and decision-makers can use the data to drive instructional best practices, increase student engagement, and inform curricula. 

3. Leveraging data-informed, proactive advising

Formative assessment models combined with data can provide educators and success coaches with information on which students are struggling and how to engage with them. Early alerts notify advisors when students struggle, prompted by low scores on early assessments or low course enrollment. With this data, advisors can intervene at critical moments, providing students with actionable strategies for getting back on track and encouraging them to take advantage of study groups and learning centers.
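
As a purely illustrative sketch of the kind of rule behind such early alerts (the thresholds and field names here are hypothetical and not those of any specific platform):

# Flag a student for advisor outreach when an early assessment score or the
# number of enrolled credit hours falls below locally agreed thresholds.
def needs_early_alert(early_assessment_score: float, enrolled_credit_hours: int) -> bool:
    return early_assessment_score < 60 or enrolled_credit_hours < 12  # hypothetical cut-offs

print(needs_early_alert(early_assessment_score=52, enrolled_credit_hours=15))  # True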

In addition to facilitating advisor interventions, assessment data can prompt course redesign and further investigation into courses that many students don't finish. This assessment model has significant implications for institutions, and a proactive approach to supporting struggling students can increase retention, a positive outcome in times of declining student enrollment.

4. Increasing leadership and stakeholder engagement

Assessment is becoming more of a culture than a practice in many institutions as more institution leaders and critical stakeholders experience the value of supporting student learning with data-driven decisions. When leaders use data to drive student participation and change educational programming to foster student engagement, this culture of improvement and student support filters throughout the institution. 

Institutions will likely continue using more effective communication methods with stakeholders to gather feedback on how their assessments support student learning experiences. This trend represents a fundamental shift toward collaboration in higher education institutions as higher-level administrators are beginning to receive more insight into institutional effectiveness activities.   

5. Increasing inclusivity

Each individual student who enters higher education has a background that influences their learning. Students from privileged backgrounds often find it easier to transition to the demands of higher education. In contrast, students from less privileged backgrounds may require more support to receive equal access to different resources and experiences. 

Assessment findings can provide valuable information on which groups of students are leveraging the resources at their disposal and which may be falling behind. This data helps educators focus on closing the gaps and prioritizing engagement with all students through learning experiences that incorporate global affairs and diverse perspectives. Assessments are shifting away from standardization, focusing instead on allowing students to demonstrate their full potential in the ways that make the most sense to them.

Additionally, more accreditation agencies are requiring institutions to report on student success disaggregated by student demographics. This kind of focus on a more granular approach to assessment helps institutions put resources towards students who may need additional support or curricula that may need improvement.   

6. Expanding access to career-related experiences

Institutions are consistently looking beyond academic achievement. Instead, they focus on readying students for the challenges of entering a successful career. Experiential learning activities are essential for students to make informed career choices and enter the workplace with the required skills and confidence. 

Experiential learning includes internships, field-based learning, service learning, and mentored research, all of which require a different approach to assessment. Assessment methods will likely go beyond measuring attendance to focus on the specific skills that help students clarify career goals and build relevant knowledge.

In addition, more institutions recognize that the use of AI in daily work is likely to increase. Helping students think critically about how to use AI ethically and efficiently will encourage the development of a new generation of workforce leaders who can leverage emerging technology to meet their goals.

7. Upgrading assessment technology for data-driven improvements

Assessment practices continue to move away from limited and inefficient manual assessments in favor of faster and more secure digital processes. Institutions are no longer asking whether to integrate technology but rather how to integrate it best. Robust assessment technology provides students with many opportunities to demonstrate their skills and understanding, and stakeholders expect more frequent insights into student and educator performance. 

Innovative assessment technology gives institutions increasing insights into the needs of each student. Gathering data from students creates an obligation to use that data to drive their success.  


8. Prioritizing ongoing professional development

Ongoing professional development remains vital to creating and sustaining effective assessment processes. As assessment practices change, educators must have the opportunity to understand them. Ongoing professional development opportunities ensure that all educators and administrators at an institution have the tools to use innovative assessment practices. Institutions that embrace continued learning at all levels can also look forward to greater consistency in facilitating student success. 

Institutions are likely to offer more workshops and partnerships to raise technology adoption rates and give students the full benefit of modern assessment methods and tools. Investing in the workforce also fosters collaboration and creates cohesive assessment strategies that benefit educators, decision-makers, administrators, students, and the institution itself.

9. Continuing to innovate

Assessment is key to ongoing student success in higher education. As the educational landscape evolves, institutions will continue to revise what they assess and the methods they choose for promoting student success. Trends that evaluate the whole student will continue to evolve, and more effective approaches will undoubtedly be explored.

The future of assessment in higher education

As assessments continue to transform, higher education faces an exciting balance between embracing innovation and personalizing education. More than ever, assessment data will drive student success as institutions can preemptively intervene with struggling students, tailor the learning experience to each individual, and drive outcomes that prepare students for successful careers. 

Educators will likely encounter new, more sophisticated alternatives to conventional systems for engaging with students individually and helping produce well-rounded, responsive members of society. Although this transition to technology-driven assessment may present some challenges, there are significant benefits on the other side. Most importantly, student assessment will remain focused on elevating the student experience through data so that every student might achieve their academic and professional goals.

Using assessment trends to elevate student success

The primary goal of assessment is to elevate the student experience and facilitate student success. Higher education institutions have a unique opportunity to balance innovative technology and a personalized approach to reshape the student experience. 

Ready to reimagine your institution’s assessment goals? Request a demo today to learn how a robust digital assessment solution like Watermark Planning & Self-Study can help.
