
My Environmental Education Evaluation Resource Assistant

Evaluation: What is it and why do it?

  • Planning and Implementing an EE Evaluation
  • Step 1: Before You Get Started
  • Step 2: Program Logic
  • Step 3: Goals of Evaluation
  • Step 4: Evaluation Design
  • Step 5: Collecting Data
  • Step 6: Analyzing Data
  • Step 7: Reporting Results
  • Step 8: Improve Program
  • Related Topics
  • Sample EE Evaluations
  • Links & Resources

Evaluation. What associations does this word bring to mind? Do you see evaluation as an invaluable tool to improve your program? Or do you find it intimidating because you don't know much about it? Regardless of your perspective on evaluation, MEERA is here to help! The purpose of this introductory section is to provide you with some useful background information on evaluation.

Table of Contents

  • What is evaluation?
  • Should I evaluate my program?
  • What type of evaluation should I conduct and when?
  • What makes a good evaluation?
  • How do I make evaluation an integral part of my program?
  • How can I learn more?

What is evaluation?

Evaluation is a process that critically examines a program. It involves collecting and analyzing information about a program’s activities, characteristics, and outcomes. Its purpose is to make judgments about a program, to improve its effectiveness, and/or to inform programming decisions (Patton, 1987).

Experts stress that evaluation can:

Improve program design and implementation.

It is important to periodically assess and adapt your activities to ensure they are as effective as they can be. Evaluation can help you identify areas for improvement and ultimately help you realize your goals more efficiently. Additionally, when you share your results about what was more and less effective, you help advance environmental education.

Demonstrate program impact.

Evaluation enables you to demonstrate your program’s success or progress. The information you collect allows you to better communicate your program's impact to others, which is critical for public relations, staff morale, and attracting and retaining support from current and potential funders.

Video: "Why conduct evaluations?" (approx. 2 minutes), with Gus Medina, Project Manager, Environmental Education and Training Partnership.

Note that there are some situations where evaluation may not be a good idea.

Evaluations fall into one of two broad categories: formative and summative. Formative evaluations are conducted during program development and implementation and are useful if you want direction on how to best achieve your goals or improve your program. Summative evaluations should be completed once your programs are well established and will tell you to what extent the program is achieving its goals.

Within the categories of formative and summative, there are different types of evaluation.

Which of these evaluations is most appropriate depends on the stage of your program.

Make evaluation part of your program; don’t tack it on at the end!


Adapted from:

Norland, E. (2004, September). From education theory... to conservation practice. Presented at the Annual Meeting of the International Association of Fish and Wildlife Agencies, Atlantic City, NJ.

Pancer, S. M., & Westhues, A. (1989). "A developmental stage approach to program planning and evaluation." Evaluation Review, 13: 56-77.

Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach. Thousand Oaks, CA: Sage Publications.

For additional information on the differences between outcomes and impacts, including lists of potential EE outcomes and impacts, see MEERA's Outcomes and Impacts page.

A well-planned and carefully executed evaluation will reap more benefits for all stakeholders than an evaluation that is thrown together hastily and retrospectively. Though you may feel that you lack the time, resources, and expertise to carry out an evaluation, learning about evaluation early on and planning carefully will help you navigate the process.

MEERA provides suggestions for all phases of an evaluation. But before you start, it will help to review the following characteristics of a good evaluation (list adapted from resource formerly available through the University of Sussex, Teaching and Learning Development Unit Evaluation Guidelines and John W. Evans' Short Course on Evaluation Basics):

Good evaluation is tailored to your program and builds on existing evaluation knowledge and resources.

Your evaluation should be crafted to address the specific goals and objectives of your EE program. However, it is likely that other environmental educators have created and field-tested similar evaluation designs and instruments. Rather than starting from scratch, looking at what others have done can help you conduct a better evaluation. See MEERA’s searchable database of EE evaluations to get started.

Good evaluation is inclusive.

It ensures that diverse viewpoints are taken into account and that results are as complete and unbiased as possible. Input should be sought from all of those involved in and affected by the evaluation, such as students, parents, teachers, program staff, or community members. One way to ensure your evaluation is inclusive is by following the practice of participatory evaluation.

Good evaluation is honest.

Evaluation results are likely to suggest that your program has strengths as well as limitations. Your evaluation should not be a simple declaration of program success or failure. Evidence that your EE program is not achieving all of its ambitious objectives can be hard to swallow, but it can also help you learn where to best put your limited resources.

Good evaluation is replicable and its methods are as rigorous as circumstances allow.

A good evaluation is one that is likely to be replicable, meaning that someone else should be able to conduct the same evaluation and get the same results. The higher the quality of your evaluation design, its data collection methods and its data analysis, the more accurate its conclusions and the more confident others will be in its findings.

Consider doing a “best practices” review of your program before proceeding with your evaluation.

Making evaluation an integral part of your program means evaluation is a part of everything you do. You design your program with evaluation in mind, collect data on an on-going basis, and use these data to continuously improve your program.

Developing and implementing such an evaluation system has many benefits including helping you to:

  • better understand your target audiences' needs and how to meet these needs
  • design objectives that are more achievable and measurable
  • monitor progress toward objectives more effectively and efficiently
  • learn more from evaluation
  • increase your program's productivity and effectiveness

To build and support an evaluation system:

Couple evaluation with strategic planning.

As you set goals, objectives, and a desired vision of the future for your program, identify ways to measure these goals and objectives and how you might collect, analyze, and use this information. This process will help ensure that your objectives are measurable and that you are collecting information that you will use. Strategic planning is also a good time to create a list of questions you would like your evaluation to answer.
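To make this concrete, here is a minimal, purely hypothetical sketch in Python (MEERA itself suggests no code, and every objective, indicator, and data source below is invented) of how a program might pair each objective with its measures during strategic planning and flag objectives that are not yet measurable:

    # Hypothetical sketch: pairing program objectives with measures.
    # All objectives, indicators, and data sources are invented examples.
    from dataclasses import dataclass, field

    @dataclass
    class Objective:
        statement: str                                    # what the program aims to achieve
        indicators: list = field(default_factory=list)    # how progress will be measured
        data_sources: list = field(default_factory=list)  # where the evidence will come from

    plan = [
        Objective(
            statement="Increase participants' knowledge of local watershed issues",
            indicators=["change in pre/post quiz scores"],
            data_sources=["pre-program quiz", "post-program quiz"],
        ),
        # A vague objective with no measure attached yet.
        Objective(statement="Inspire environmental stewardship"),
    ]

    # Flag objectives that cannot yet be evaluated, so they can be revised
    # into something measurable while the plan is still on the drawing board.
    for obj in plan:
        if not obj.indicators or not obj.data_sources:
            print(f"Not yet measurable: {obj.statement!r}")

Keeping your list of evaluation questions alongside such a structure makes the later data collection and analysis steps far easier to scope.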

Revisit and update your evaluation plan and logic model

(see Step 2) to make sure you are on track. Update these documents on a regular basis, adding new strategies, changing unsuccessful strategies, revising relationships in the model, and adding unforeseen impacts of an activity (EMI, 2004).

Build an evaluation culture

by rewarding participation in evaluation, offering evaluation capacity building opportunities, providing funding for evaluation, communicating a convincing and unified purpose for evaluation, and celebrating evaluation successes.

The following resource provides more depth on integrating evaluation into program planning:

Best Practices Guide to Program Evaluation for Aquatic Educators (.pdf). Recreational Boating and Fishing Foundation (2006). Level: Beginner to Intermediate.

Chapter 2 of this guide, “Create a climate for evaluation,” gives advice on how to fully institutionalize evaluation into your organization. It describes features of an organizational culture, and explains how to build teamwork, administrative support and leadership for evaluation. It discusses the importance of developing organizational capacity for evaluation, linking evaluation to organizational planning and performance reviews, and unexpected benefits of evaluation to organizational culture.

If you want to learn more about how to institutionalize evaluation, check out the following resources on adaptive management, an approach to conservation management based on learning from systematic, ongoing monitoring and evaluation, and on adapting and improving programs in light of those findings.

  • Adaptive Management: A Tool for Conservation Practitioners. Salafsky, N., Margoluis, R., & Redford, K. (2001). Biodiversity Support Program. Level: Beginner. This guide provides an overview of adaptive management, defines the approach, describes the conditions under which adaptive management makes most sense, and outlines the steps involved.
  • Measures of Conservation Success: Designing, Managing, and Monitoring Conservation and Development Projects. Margoluis, R., & Salafsky, N. (1998). Island Press. Level: Beginner to Advanced. Available for purchase at Amazon.com. This book provides a detailed guide to project management and evaluation. The chapters and case studies describe the process step by step, from project conception to conclusion. The chapters on creating and implementing a monitoring plan, and on using the information obtained to modify the project, are particularly useful.
  • Does Your Project Make a Difference? A Guide to Evaluating Environmental Education Projects and Programs. Department of Environment and Conservation, Sydney, Australia (2004). Level: Beginner. Section 1 provides a useful introduction to evaluation in EE. It defines evaluation and explains why it is important and challenging, with quotes about the evaluation experiences of several environmental educators.
  • Designing Evaluation for Education Projects (.pdf). NOAA Office of Education and Sustainable Development (2004). Level: Beginner. In Section 3, "Why is evaluation important to project design and implementation?", nine benefits of evaluation are listed, including, for example, the value of using evaluation results for public relations and outreach.
  • Evaluating EE in Schools: A Practical Guide for Teachers (.pdf). Bennett, D. B. (1984). UNESCO-UNEP. Level: Beginner to Intermediate. The introduction of this guide explains four main benefits of evaluation in EE: 1) building greater support for your program, 2) improving your program, 3) advancing student learning, and 4) promoting better environmental outcomes.
  • Guidelines for Evaluating Non-Profit Communications Efforts (.pdf). Communications Consortium Media Center (2004). Level: Beginner to Intermediate. A section titled "Overarching Evaluation Principles" describes twelve principles of evaluation, such as the importance of being realistic about the potential impact of a project and being aware of how values shape evaluation. Another noteworthy section, "Acknowledging the Challenges of Evaluation," outlines nine substantial challenges, including the difficulty of assessing complicated changes at multiple levels of society (school, community, state, etc.). This resource focuses on evaluating public communications efforts, though most of the content is relevant to EE.

EMI (Ecosystem Management Initiative). (2004). Measuring Progress: An Evaluation Guide for Ecosystem and Community-Based Projects. School of Natural Resources and Environment, University of Michigan. Downloaded September 20, 2006 from: www.snre.umich.edu/ecomgt/evaluation/templates.htm

Patton, M. Q. (1987). Qualitative Research Evaluation Methods. Thousand Oaks, CA: Sage Publications.

Thomson, G. & Hoffman, J. (2003). Measuring the success of EE programs. Canadian Parks and Wilderness Society.

infed

education, community-building and change

Evaluation for education, learning and change – theory and practice


Evaluation for education, learning and change – theory and practice. Evaluation is part and parcel of educating – yet it can be experienced as a burden and an unnecessary intrusion. We explore the theory and practice of evaluation and some of the key issues for informal and community educators, social pedagogues, youth workers and others. In particular, we examine educators as connoisseurs and critics, and the way in which they can deepen their theory base and become researchers in practice.

Contents: introduction · on evaluation · three key dimensions · thinking about indicators · on being connoisseurs and critics · educators as action researchers · some issues when evaluating informal education · conclusion · further reading and references · acknowledgements · how to cite this article

A lot is written about evaluation in education – a great deal of which is misleading and confused. Many informal educators such as youth workers and social pedagogues are suspicious of evaluation because they see it as something that is imposed from outside. It is a thing that we are asked to do, or that people impose on us. As Gitlin and Smyth (1989) comment, from its Latin origin meaning 'to strengthen' or to empower, the term evaluation has taken a numerical turn – it is now largely about the measurement of things – and in the process can easily slip into becoming an end rather than a means. In this discussion of evaluation we will be focusing on how we can bring questions of value (rather than numerical worth) back into the centre of the process. Evaluation is part and parcel of educating. As informal educators we are constantly called upon to make judgements, to make theory, and to discern whether what is happening is for the good. We have, in Elliot W. Eisner's words, to be connoisseurs and critics. In this piece we explore some important dimensions of this process: the theories involved; the significance of viewing ourselves as action researchers; and some issues and possibilities around evaluation in informal and community education, youth work and social pedagogy. First, however, we need to spend a little time on the notion of evaluation itself.

On evaluation

Much of the current interest in evaluation theory and practice can be directly linked to the expansion of government programmes (often described as the 'New Deal') during the 1930s in the United States and the implementation of various initiatives during the 1960s (such as Johnson's 'War on Poverty') (see Shadish, Cook and Leviton 1991). From the 1960s on, 'evaluation' grew as an activity, a specialist field of employment with its own professional bodies, and as a body of theory. With large sums of state money flowing into new agencies (with projects and programmes often controlled or influenced by people previously excluded from such political power), officials and politicians looked to increased monitoring and review both to curb what they saw as 'abuses' and to increase the effectiveness and efficiency of their programmes. A less charitable reading would be that they were concerned both with micro-managing initiatives and with controlling the activities of new agencies and groups. They were aided in this by developments in social scientific research. Of special note here are the activities of Kurt Lewin and the interest in action research after the Second World War.

As a starter I want to offer an orienting definition:

Evaluation is the systematic exploration and judgement of working processes, experiences and outcomes. It pays special attention to aims, values, perceptions, needs and resources.

There are several things that need to be said about this.

First, evaluation entails gathering, ordering and making judgments about information in a methodical way. It is a research process.

Second, evaluation is something more than monitoring. Monitoring is largely about 'watching' or keeping track, and may well involve things like performance indicators. Evaluation involves making careful judgements about the worth, significance and meaning of phenomena.

Third, evaluation is a sophisticated process. There is no simple way of making good judgements. It involves, for example, developing criteria or standards that are both meaningful and that honour the work and those involved.

Fourth, evaluation operates at a number of levels. It is used to explore and judge both practice and programmes and projects (see below).

Last, evaluation, if it is to have any meaning, must look at the people involved, the processes, and any outcomes we can identify. Appreciating and getting a flavour of these involves dialogue. This makes the focus enquiry rather than measurement – although some measurement might be involved (Rowlands 1991). The result has to be an emphasis upon negotiation and consensus concerning both the process of evaluation and the conclusions reached.

Three key dimensions

Basically, evaluation is either about proving something is working or needed, or improving practice or a project (Rogers and Smith 2006). The first often arises out of our accountability to funders, managers and, crucially, the people we are working with. The second is born of a wish to do what we do better. We look to evaluation as an aid to strengthen our practice, organization and programmes (Chelimsky 1997: 97-118).

To help make sense of the development of evaluation, I want to explore three key dimensions or distinctions and some of the theory associated with them.

Programme or practice evaluation? First, it is helpful to make a distinction between programme and project evaluation, and practice evaluation. Much of the growth in evaluation has been driven by the former.

Programme and project evaluation. This form of evaluation is typically concerned with making judgements about the effectiveness, efficiency and sustainability of pieces of work. Here evaluation is essentially a management tool. Judgements are made in order to reward the agency or the workers, and/or to provide feedback so that future work can be improved or altered. The former may well be related to some form of payment by results, such as the giving of bonuses for 'successful' activities, the invoking of penalty clauses for those deemed not to have met the objectives set for them, and decisions about further funding. The latter is important and necessary for the development of the work.

Practice evaluation. This form of evaluation is directed at the enhancement of work undertaken with particular individuals and groups, and at the development of participants (including the informal educator). It tends to be an integral part of the working process. In order to respond to a situation, workers have to make sense of what is going on, and how they can best intervene (or not intervene). Similarly, other participants may also be encouraged, or take it upon themselves, to make judgements about the situation. In other words, they evaluate the situation and their part in it. Such evaluation is sometimes described as educative or pedagogical, as it seeks to foster learning. But this is only part of the process. The learning involved is oriented to future or further action. It is also informed by certain values and commitments (informal educators need an appreciation of what might make for human flourishing and what is 'good'). For this reason we can say the approach is concerned with praxis – action that is informed and committed.

These two forms of evaluation will tend to pull in different directions. Both are necessary – but just how they are experienced will depend on the next two dimensions.

Summative or formative evaluation? Evaluations can be summative or formative. Evaluation can be primarily directed at one of two ends:

  • To enable people and agencies to make judgements about the work undertaken; to identify their knowledge, attitudes and skills, and to understand the changes that have occurred in these; and to increase their ability to assess their learning and performance (formative evaluation).
  • To enable people and agencies to demonstrate that they have fulfilled the objectives of the programme or project, or to demonstrate that they have achieved the standard required (summative evaluation).

Either can be applied to a programme or to the work of an individual. Our experience of evaluation is likely to be different according to the underlying purpose. If it is to provide feedback so that programmes or practice can be developed we are less likely, for example, to be defensive about our activities. Such evaluation isn’t necessarily a comfortable exercise, and we may well experience it as punishing – especially if it is imposed on us (see below). Often a lot more is riding on a summative evaluation. It can mean the difference between having work and being unemployed!

Banking or dialogical evaluation? Last, it is necessary to explore the extent to which evaluation is dialogical. As we have already seen, much evaluation is imposed or required by people external to the situation. The nature of the relationship between those requiring evaluation and those being evaluated is, thus, of fundamental importance. Here we can usefully contrast the dominant or traditional model, which tends to see the people involved in a project as objects, with an alternative, dialogical approach that views all those involved as subjects. This division has many affinities with Freire's (1972) split between banking and dialogical models of education.

Exhibit 1: Rowlands on traditional (banking) and alternative (dialogical) evaluation

Joanna Rowlands has provided us with a useful summary of these approaches to evaluation. She was particularly concerned with the evaluation of social development projects.

The characteristics of the traditional (banking) approach to evaluation:

1.     A search for objectivity and a ‘scientific approach’, through standardized procedures. The values used in this approach… often reflect the priorities of the evaluator.

2.     An over-reliance on quantitative measures. Qualitative aspects…, being difficult to measure, tend to be ignored.

3.     A high degree of managerial control, whereby managers can influence the questions being asked. Other people, who may be affected by the findings of an evaluation, may have little input, either in shaping the questions to be asked or in reflecting on the findings.

4.     Outsiders are usually contracted as evaluators in the belief that this will increase objectivity, and there may be a negative perception of them by those being evaluated.

The characteristics of the alternative (dialogical) approach to evaluation

1.     Evaluation is viewed as an integral part of the development or change process and involves ‘reflection-action’. Subjectivity is recognized and appreciated.

2.     There is a focus on dialogue, enquiry rather than measurement, and a tendency to use less formal methods like unstructured interviews and participant observation.

3.     It is approached as an 'empowering process' rather than control by an external body. There is a recognition that different individuals and groups will have different perceptions. Negotiation and consensus are valued concerning the process of evaluation, the conclusions reached, and the recommendations made.

4.     The evaluator takes on the role of facilitator, rather than being an objective and neutral outsider. Such evaluation may well be undertaken by ‘insiders’ – people directly involved in the project or programme.

Adapted from Joanna Rowlands (1991) How do we know it is working? The evaluation of social development projects, as discussed in Rubin (1995: 17-23).

We can see in these contrasting models important questions about power and control, and about the way in which those directly involved in programmes and projects are viewed. Dialogical evaluation places the responsibility for evaluation squarely on the educators and the other participants in the setting (Jeffs and Smith 2005: 85-92).

Thinking about indicators

The key part of evaluation, some may argue, is framing the questions we want to ask, and the information we want to collect such that the answers provide us with the indicators of change.  Unfortunately, as we have seen, much of the talk and practice around indicators in evaluation has been linked to rather crude measures of performance and the need to justify funding (Rogers and Smith 2006). We want to explore the sort of indicators that might be more fitting to the work we do.

In common usage an indicator points to something; it is a sign or symptom. The difficulty facing us is working out just what the things we see might be signs of. In informal education – and any authentic education – the results of our labours may only become apparent some time later, in the way that people live their lives. In addition, any changes in behaviour we see may be specific to the situation or relationship (see below). Further, it is often difficult to identify who or what was significant in bringing about change. Last, when we look at, or evaluate, the work, as E. Lesley Sewell (1966) put it, we tend to see what we are looking for. For these reasons, a lot of the outcomes claimed in evaluations and reports about work with particular groups or individuals have to be taken with a large pinch of salt.

Luckily, in trying to make sense of our work and the sorts of indicators that might be useful in evaluation, we can draw upon wisdom about practice, broader research findings, and our values.

Exhibit 2: Evaluation – what might we need indicators for?

We want to suggest four possible areas that we might want indicators for:

The number of people we are in contact with and working with. In general, as informal educators we should expect to make and maintain a lot of contacts. This is so people know about us, and the opportunities and support we can offer. We can also expect to involve smaller numbers of participants in groups and projects, and an even smaller number as 'clients' in intensive work. The numbers we might expect – and the balance between them – will differ from project to project (Jeffs and Smith 2005: 116-121). However, through dialogue it does seem possible to come to some agreement about these – and in the process we gain a useful tool for evaluation (a hypothetical sketch of such a tally appears at the end of this section).

The nature of the opportunities we offer. We should expect to be asked questions about the nature and range of opportunities we offer. For example, do young people have a chance to talk freely and have fun, expand and enlarge their experience, and learn? As informal educators we should also expect to work with people to build varied programmes, groups and activities with different foci.

The quality of relationships available. Many of us talk about our work in terms of 'building relationships'. By this we often mean that we work both through relationship, and for relationship (see Smith and Smith forthcoming). This has come under attack from those advocating targeted and more outcome-oriented work. However, the little sustained research that has been done confirms that it is the relationships that informal educators and social pedagogues form with people, and encourage them to develop with others, that really matter (see Hirsch 2005). Unfortunately, identifying sensible indicators of progress here is not easy – and the job of evaluation becomes difficult as a result.

How well people work together and for others. Within many of the arenas where informal education flourishes, there is a valuing of working so that people may organize things for themselves, and be of service to others. The respect in which this is held is also backed up by research. We know, for example, that people involved in running groups generally grow in self-confidence and develop a range of skills (Elsdon 1995). We also know that those communities where a significant number of people are involved in organizing groups and activities are healthier, have more positive experiences of education, are more active economically, and have less crime (Putnam 2000). (Taken from Rogers and Smith 2006)

For some of these areas it is fairly easy to work out indicators. However, when it comes to things like relationships, as Lesley Sewell noted many years ago, ‘Much of it is intangible and can be felt in atmosphere and spirit. Appraisal of this inevitably depends to some extent on the beholders themselves’ (1966: 6). There are some outward signs – like the way people talk to each other. In the end though, informal education is fundamentally an act of faith. However, our faith can be sustained and strengthened by reflection and exploration.
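By way of illustration only (this is not part of the article's argument), the first area in Exhibit 2 is the most readily countable. The following minimal Python sketch, which assumes an invented attendance log and invented category labels, shows how the three levels of involvement might be tallied so that each person is counted once, at their deepest level:

    # Hypothetical sketch: tallying contact indicators from an invented log.
    from collections import Counter

    # Each entry records a person and their level of involvement in one session.
    log = [
        ("A", "contact"), ("B", "contact"), ("C", "participant"),
        ("A", "participant"), ("D", "client"), ("B", "contact"),
    ]

    # Count each person once, at their deepest level of involvement.
    depth = {"contact": 0, "participant": 1, "client": 2}
    deepest = {}
    for person, level in log:
        if person not in deepest or depth[level] > depth[deepest[person]]:
            deepest[person] = level

    print(Counter(deepest.values()))
    # -> Counter({'participant': 2, 'contact': 1, 'client': 1})

The counting is the easy part; what balance between the three levels is right for a given project is exactly the kind of question the text suggests settling through dialogue.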

On being connoisseurs and critics

Informal education involves more than gaining and exercising technical knowledge and skills. It depends on us also cultivating a kind of artistry. In this sense, educators are not engineers applying their skills to carry out a plan or drawing; they are artists who are able to improvise and devise new ways of looking at things. We have to work within a personal but shared idea of the 'good' – an appreciation of what might make for human flourishing and well-being (see Jeffs and Smith 1990). What is more, there is little that is routine or predictable in our work. As a result, central to what we do as educators is the ability to 'think on our feet'. Informal education is driven by conversation and by certain values and commitments (Jeffs and Smith 2005).

Describing informal education as an art does sound a bit pretentious. It may also appear twee. But there is a serious point here. When we listen to other educators, for example in team meetings, or have the chance to observe them in action, we inevitably form judgments about their ability. At one level, for example, we might be impressed by someone’s knowledge of the income support system or of the effects of different drugs. However, such knowledge is useless if it cannot be used in the best way. We may be informed and be able to draw on a range of techniques, yet the thing that makes us special is the way in which we are able to combine these and improvise regarding the particular situation. It is this quality that we are describing as artistry.

For Donald Schön (1987: 13) artistry is an exercise of intelligence, a kind of knowing. Through engaging with our experiences we are able to develop maxims about, for example, group work or working with an individual. In other words, we learn to appreciate – to be aware and to understand – what we have experienced. We become what Eisner (1985; 1998) describes as 'connoisseurs'. This involves very different qualities to those required by dominant models of evaluation.

Connoisseurship is the art of appreciation. It can be displayed in any realm in which the character, import, or value of objects, situations, and performances is distributed and variable, including educational practice. (Eisner 1998: 63)

The word connoisseurship comes from the Latin cognoscere, to know (Eisner 1998: 6). It involves the ability to see, not merely to look. To do this we have to develop the ability to name and appreciate the different dimensions of situations and experiences, and the way they relate one to another. We have to be able to draw upon, and make use of, a wide array of information. We also have to be able to place our experiences and understandings in a wider context, and connect them with our values and commitments. Connoisseurship is something that needs to be worked at – but it is not a technical exercise. The bringing together of the different elements into a whole involves artistry.

However, educators need to become something more than connoisseurs. We need to become critics.

If connoisseurship is the art of appreciation, criticism is the art of disclosure. Criticism, as Dewey pointed out in Art as Experience, has at its end the re-education of perception… The task of the critic is to help us to see.
Thus… connoisseurship provides criticism with its subject matter. Connoisseurship is private, but criticism is public. Connoisseurs simply need to appreciate what they encounter. Critics, however, must render these qualities vivid by the artful use of critical disclosure. (Eisner 1985: 92-93)

Criticism can be approached as the process of enabling others to see the qualities of something. As Eisner (1998: 6) puts it, 'effective criticism functions as the midwife to perception. It helps it come into being, then later refines it and helps it to become more acute'. The significance of this for those who want to be educators is thus clear. Educators also need to develop the ability to work with others so that they may discover the truth in situations, experiences and phenomena.

Educators as action researchers

Schön (1987) talks about professionals being ‘researchers in the practice context’. As Bogdan and Biklen (1992: 223) put it, ‘research is a frame of mind – a perspective people take towards objects and activities’. For them, and for us here, it is something that we can all undertake. It isn’t confined to people with long and specialist training. It involves (Stringer 1999: 5):

• A problem to be investigated.

• A process of enquiry.

• Explanations that enable people to understand the nature of the problem.

Within the action research tradition there have been two basic orientations. The British tradition – especially that linked to education – tends to view action research as research oriented toward the enhancement of direct practice. For example, Carr and Kemmis provide a classic definition:

Action research is simply a form of self-reflective enquiry undertaken by participants in social situations in order to improve the rationality and justice of their own practices, their understanding of these practices, and the situations in which the practices are carried out (Carr and Kemmis 1986: 162).

The second tradition – perhaps more widely encountered within the social welfare field, and certainly the broader understanding in the USA – is of action research as 'the systematic collection of information that is designed to bring about social change' (Bogdan and Biklen 1992: 223). Its practitioners, Bogdan and Biklen continue, marshal evidence or data to expose unjust practices or environmental dangers and recommend actions for change. This tradition has been linked to citizen action and community organizing, but in more recent years has been adopted by workers in very different fields.

In many respects, this distinction mirrors one we have already been using – between programme evaluation and practice evaluation. In the latter, we may well set out to explore a particular piece of work. We may think of it as a case study – a detailed examination of one setting, or a single subject, a single depository of documents, or one particular event (Merriam 1988). We can explore what we did as educators: what were our aims and concerns; how did we act; what were we thinking and feeling and so on? We can look at what may have been going on for other participants; the conversations and interactions that took place; and what people may have learnt and how this may have affected their behaviour. Through doing this we can develop our abilities as connoisseurs and critics. We can enhance what we are able to take into future encounters.

When evaluating a programme or project we may ask other participants to join with us to explore and judge the processes they have been involved in (especially if we are concerned with a more dialogical approach to evaluation). Our concern is to collect information, to reflect upon it, and to make some judgements as to the worth of the project or programme, and how it may be improved. This takes us into the realm of what a number of writers have called community-based action research. We have set out one example of this below.

Exhibit 3: Stringer on community-based action research

A fundamental premise of community-based action research is that it commences with an interest in the problems of a group, a community, or an organization. Its purpose is to assist people in extending their understanding of their situation and thus resolving problems that confront them….

Community-based action research is always enacted through an explicit set of social values. In modern, democratic social contexts, it is seen as a process of inquiry that has the following characteristics:

  • It is democratic, enabling the participation of all people.
  • It is equitable, acknowledging people's equality of worth.
  • It is liberating, providing freedom from oppressive, debilitating conditions.
  • It is life enhancing, enabling the expression of people's full human potential. (Stringer 1999: 9-10)
The action research process

Action research works through three basic phases:

Look – building a picture and gathering information. When evaluating we define and describe the problem to be investigated and the context in which it is set. We also describe what all the participants (educators, group members, managers etc.) have been doing.

Think – interpreting and explaining. When evaluating we analyse and interpret the situation. We reflect on what participants have been doing. We look at areas of success and any deficiencies, issues or problems.

Act – resolving issues and problems. In evaluation we judge the worth, effectiveness, appropriateness, and outcomes of those activities. We act to formulate solutions to any problems.

(Stringer 1999: 18, 43-44, 160)

We could contrast this with a more traditional, banking, style of research in which an outsider (or just the educators working on their own) collects information, organizes it, and comes to some conclusions as to the success or otherwise of the work.

Some issues when evaluating informal education

In recent years informal educators have been put under great pressure to provide ‘output indicators’, ‘qualitative criteria’, ‘objective success measures’ and ‘adequate assessment criteria’. Those working with young people have been encouraged to show how young people have developed ‘personally and socially through participation’. We face a number of problems when asked to approach our work in such ways. As we have already seen, our way of working as informal educators places us within a more dialogical framework. Evaluating our work in a more bureaucratic and less inclusive fashion may well compromise or cut across our work.

There are also some basic practical problems. Here we explore four particular issues identified by Jeffs and Smith (2005) with respect to programme or project evaluations.

The problem of multiple influences. The different things that influence the way people behave can’t be easily broken down. For example, an informal educator working with a project to reduce teen crime on two estates might notice that the one with a youth club open every weekday evening has less crime than the estate without such provision. But what will this variation, if it even exists, prove? It could be explained, as research has shown, by differences in the ethos of local schools, policing practices, housing, unemployment rates, and the willingness of people to report offences.

The problem of indirect impact.  Those who may have been affected by the work of informal educators are often not easily identified. It may be possible to list those who have been worked with directly over a period of time. However, much contact is sporadic and may even take the form of a single encounter. The indirect impact is just about impossible to quantify. Our efforts may result in significant changes in the lives of people we do not work with. This can happen as those we work with directly develop. Consider, for example, how we reflect on conversations that others recount to us, or ideas that we acquire second- or third-hand. Good informal education aims to achieve a ripple effect. We hope to encourage learning through conversation and example and can only have a limited idea of what the true impact might be.

The problem of evidence. Change can rarely be monitored even on an individual basis. For example, informal educators who focus on alcohol abuse within a particular group can face an insurmountable problem if challenged to provide evidence of success. They will not be able to measure use levels prior to intervention, during contact or subsequent to the completion of their work. In the end all the educator will be able to offer, at best, is vague evidence relating to contact or anecdotal material.

The problem of timescale. Change of the sort with which informal educators are concerned does not happen overnight. Changes in values, and the ways that people come to appreciate themselves and others, are notoriously hard to identify – especially as they are happening. What may seem ordinary at the time can, with hindsight, be recognized as special.

Workarounds

There are two classic routes around such practical problems. We can use both as informal educators.

The first is to undertake the sort of participatory action research we have been discussing here. When setting up and running programmes and projects we can build in participatory research and evaluation from the start. We make it part of our way of working. Participants are routinely invited and involved in evaluation. We encourage them to think about the processes they have been participating in, the way in which they have changed and so on. This can be done in ways that fit in with the general run of things that we do as informal educators.

The second route is to make linkages between our own activities as informal educators and the general research literature. An example here is group or club membership. We may find it very hard to identify the concrete benefits for individuals of being a member of a particular group such as a football team or social club. What we can do, however, is look to the general research on such matters. We know, for example, that involvement in such groups builds social capital. We have evidence that:

  • In those countries where the state invested most in cultural and sporting facilities, young people responded by investing more of their own time in such activities (Gauthier and Furstenberg 2001).
  • The more involved people are in structured leisure activities, good social contacts with friends, and participation in the arts, cultural activities and sport, the more likely they are to do well educationally, and the less likely they are to be involved even in low-level delinquency (Larson and Verma 1999).
  • There appears to be a strong relationship between the possession of social capital and better health. 'As a rough rule of thumb, if you belong to no groups but decide to join one, you cut your risk of dying over the next year in half. If you smoke and belong to no groups, it's a toss-up statistically whether you should stop smoking or start joining' (Putnam 2000: 331).
  • Regular club attendance, volunteering, entertaining, or church attendance is the happiness equivalent of getting a college degree or more than doubling your income. Civic connections rival marriage and affluence as predictors of life happiness (Putnam 2000: 333).

This approach can work where there is some freedom in the way that we can respond to funders and others with regard to evaluation. Where we are forced to fill in forms that require answers to certain set questions, we can still use the evaluations that we have undertaken in a participatory manner – and there may even be room to bring in some references to the broader literature. The key here is to remember that we are educators – and that we have a responsibility to foster learning, not only among those we work with in a project or programme, but also among funders, managers and policymakers. We need to view their requests for information as opportunities to work at deepening their appreciation and understanding of informal education and the issues and questions with which we work.

The purpose of evaluation, as Everitt et al. (1992: 129) put it, is to reflect critically on the effectiveness of personal and professional practice. It is to contribute to the development of 'good' rather than 'correct' practice.

Missing from the instrumental and technicist ways of evaluating teaching are the kinds of educative relationships that permit the asking of moral, ethical and political questions about the ‘rightness’ of actions. When based upon educative (as distinct from managerial) relations, evaluative practices become concerned with breaking down structured silences and narrow prejudices. (Gitlin and Smyth 1989: 161)

Evaluation is not primarily about the counting and measuring of things. It entails valuing – and to do this we have to develop as connoisseurs and critics. We have also to ensure that this process of ‘looking, thinking and acting’ is participative.

Further reading and references

For the moment I have listed some guides to evaluation. At a later date I will be adding in some more contextual material concerning evaluation in informal education.

Berk, R. A. and Rossi, P. H. (1990) Thinking About Program Evaluation , Newbury Park: Sage. 128 pages. Clear introduction with chapters on key concepts in evaluation research; designing programmes; examining programmes (using a chronological perspective). Useful US annotated bibliography.

Eisner, E. W. (1985) The Art of Educational Evaluation. A personal view , Barcombe: Falmer. 272 + viii pages. Wonderful collection of material around scientific curriculum making and its alternatives. Good chapters on Eisner’s championship of educational connoisseurship and criticism. Not a cookbook, rather a way of orienting oneself.

Eisner, E. W. (1998) The Enlightened Eye. Qualitative inquiry and the enhancement of educational practice , Upper Saddle River, NJ: Prentice Hall. 264 + viii pages. Re-issue of a 1990 classic in which Eisner plays with the ideas of educational connoisseurship and educational criticism. Chapters explore these ideas, questions of validity, method and evaluation. An introductory chapter explores qualitative thought and human understanding and final chapters turn to ethical tensions, controversies and dilemmas; and the preparation of qualitative researchers.

Everitt, A. and Hardiker, P. (1996) Evaluating for Good Practice , London: Macmillan. 223 + x pages. Excellent introduction that takes care to avoid technicist solutions and approaches. Chapters examine purposes; facts, truth and values; measuring performance; a critical approach to evaluation; designing critical evaluation; generating evidence; and making judgements and effecting change.

Hirsch, B. J. (2005) A Place to Call Home. After-school programs for urban youth , New York: Teachers College Press. A rigorous and insightful evaluation of the work of six inner city boys and girls clubs that concludes that the most important thing they can and do offer is relationships (both with peers and with the workers) and a ‘second home’.

Patton, M. Q. (1997) Utilization-Focused Evaluation. The new century text 3e, Thousand Oaks, Ca.: Sage. 452 pages. Claimed to be the most comprehensive review and integration of the literature on evaluation. Sections focus on evaluation use; focusing evaluations; appropriate methods; and the realities and practicalities of utilization-focused evaluation.

Rossi, P. H., Freeman, H. and Lipsey, M. W. (2004) Evaluation. A systematic approach 7e, Newbury Park, Ca.: Sage. 488 pages. Practical guidance from diagnosing problems through to measuring and analysing programmes. Includes material on formative evaluation procedures, practical ethics, and cost-benefits.

Stringer, E. T. (1999) Action Research 2e, Thousand Oaks, CA.: Sage. 229 + xxv pages. Useful discussion of community-based action research directed at practitioners.

Bogdan, R. and Biklen, S. K. (1992) Qualitative Research for Education, Boston: Allyn and Bacon.

Carr, W. and Kemmis, S. (1986) Becoming Critical. Education, knowledge and action research , Lewes: Falmer.

Chelimsky, E. (1997) 'Thoughts for a new evaluation society', Evaluation 3(1): 97-118.

Elsdon, K. T. with Reynolds, J. and Stewart, S. (1995) Voluntary Organizations. Citizenship, learning and change , Leicester: NIACE.

Freire, P. (1972) Pedagogy of the Oppressed , London: Penguin.

Gauthier, A. H. and Furstenberg, F. F. (2001) ‘Inequalities in the use of time by teenagers and young adults’ in K. Vleminckx and T. M. Smeeding (eds.) Child Well-being, Child Poverty and Child Policy in Modern Nations Bristol: Policy Press.

Gitlin, A. and Smyth, J. (1989) Teacher Evaluation. Critical education and transformative alternatives , Lewes: Falmer Press.

Jeffs, T. and Smith, M. (eds.) (1990) Using Informal Education , Buckingham: Open University Press.

Jeffs, T. and Smith, M. K. (2005) Informal Education. Conversation, democracy and learning 3e, Nottingham: Educational Heretics Press.

Larson, R. W. and Verma, S. (1999) 'How children and adolescents spend time across the world: work, play and developmental opportunities', Psychological Bulletin 125(6).

Merriam, S. B. (1988) Case Study Research in Education , San Francisco: Jossey-Bass.

Putnam, R. D. (2000) Bowling Alone: The collapse and revival of American community, New York: Simon and Schuster.

Rogers, A. and Smith, M. K. (2006) Evaluation: Learning what matters , London: Rank Foundation/YMCA George Williams College. Available as a pdf: www.ymca.org.uk/rank/conference/evaluation_learning_what_matters.pdf .

Rubin, F. (1995) A Basic Guide to Evaluation for Development Workers , Oxford: Oxfam.

Schön, D. A. (1983) The Reflective Practitioner. How professionals think in action , London: Temple Smith.

Sewell, L. (1966) Looking at Youth Clubs , London: National Association of Youth Clubs. Available in the informal education archives : http://www.infed.org/archives/nayc/sewell_looking.htm .

Shadish, W. R., Cook, T. D. and Leviton, L. C. (1991) Foundations of Program Evaluation , Newbury Park C.A.: Sage.

Smith, H. and Smith, M. K. (forthcoming) The Art of Helping Others . Being around, being there, being wise . See www.infed.org/helping .

Acknowledgements and credits : Alan Rogers and Sarah Lloyd-Jones were a great help when updating this article – and some of the material in this piece first appeared in Rogers and Smith 2006.

The picture – Office of Mayhem Evaluation – is by xiaming and is reproduced here under a Creative Commons Attribution-Non-Commercial-Share Alike 2.0 Generic licence. Flickr: http://www.flickr.com/photos/xiaming/78385893/

How to cite this article : Smith, M. K. (2001, 2006). Evaluation for education, learning and change – theory and practice, The encyclopedia of pedagogy and informal education. [ https://infed.org/mobi/evaluation-theory-and-practice/ . Retrieved: insert date]

© Mark K. Smith 2001, 2006

Last Updated on April 7, 2021 by infed.org


Educational Evaluation: What Is It & Importance

An educational evaluation may be based on the professional judgment of the people doing it. Find out about it, its importance & principles.

Educational evaluation is the process of acquiring and analyzing data to determine how each student's behavior evolves during their academic career.

Evaluation is a continual process, concerned more with a student's informal academic growth than with his or her formal academic performance. It is interpreted as an individual's growth toward a desired behavioral shift in the relationship between his feelings, thoughts, and deeds. A student interest survey helps customize teaching methods and curriculum to make learning more engaging and relevant to students' lives.


The practice of determining something's worth through a particular appraisal is called evaluation. This blog discusses educational evaluation, its importance, and its principles.


What is educational evaluation?

An educational evaluation comprises standardized tests that evaluate a child’s academic aptitude in several topics.

The assessment will show whether a child is falling behind evenly across all subject areas or whether specific barriers are preventing that student from performing at grade level in a particular subject.

Educational evaluators generally hold a master’s or doctoral degree in education or psychology, and assessments take three to five hours to complete.

Examining the success of program interventions is part of educational evaluation. In education, these interventions usually concern learning (such as reading), behavioral, emotional, and social development (such as antibullying programs), or broader system-wide changes (such as a move to inclusive education).

Importance of educational evaluation

In the teaching-learning process, educational evaluation is crucial since it serves a common goal.

  • Diagnostic: Evaluation is a thorough, ongoing process. It helps a teacher identify problems and work out how to solve them with his or her students.
  • Remedial: Remedial work means that, once issues are identified, an appropriate resolution is found. With a teacher's help, a student's personality can develop and the desired change in behavior can be achieved.
  • To make education goals clear: It is also crucial to define the goals of schooling. The purpose of education is to alter a student's behavior, and through evaluation a teacher can demonstrate how a learner's conduct has changed.
  • It offers guidance: A teacher can only provide sound advice if he or she is adequately informed about the students, and counsel can only be provided after a thorough assessment that considers all aspects of aptitude, interest, intelligence, and so on.
  • Classification aid: Evaluation is a way for teachers to classify their pupils and assist them by determining their students' intelligence, ability, and interest levels.
  • Beneficial for improving the learning and teaching process: Through evaluation, a teacher can enhance a student's personality and learning, and can also gauge the effectiveness of his or her instruction. As a result, evaluation helps enhance the teaching and learning process.

Principles of educational evaluation

The following principles form the foundation of educational evaluation:

  • The principle of continuity: Evaluation is a continuous process for as long as the student is in school. Evaluation in education is an integral part of the teaching-learning process.

Whatever the learner does should be evaluated every day; only then can the learner gain a better grasp of what is being taught.

  • The principle of comprehensiveness: By "comprehensiveness" we mean looking at all aspects of the learner's personality; evaluation is concerned with the child's development in every area.
  • The principle of objectives: Evaluation should be based on the goals of education. It should help determine where the learner's behavior needs to be changed or stopped.
  • The principle of learning experience: Evaluation is also related to the learner's experiences.

In this process, we look not just at the learner's schoolwork but also at his extracurricular activities; both types of activities can help learners gain more experience.

  • The principle of Broadness: Evaluation should be broad enough to embrace all elements of life.
  • The principle of child-centeredness: The child is at the center of the evaluation process, and the child's behavior is the most important thing to consider when judging.

It helps a teacher know how much a child can understand and how valuable the teaching material is.

  • The principle of Application: During the teaching and learning process, a child may learn many things, but they may not be helpful in everyday life. If he can’t use it, then it’s useless to find. It can be seen through evaluation.

Evaluation decides which student is better at using his knowledge and understanding in different situations to help him succeed.

Educational evaluations are meant to present evidence-based arguments about whether educational results can be improved by implementing intervention measures. As the parameters of educational assessment broaden, so do the objectives of evaluation.

Understanding the various learning tests and evaluations will help you identify the kind of testing most helpful for your child and the causes of any issues or learning disparities they may be experiencing.

You might need a professional's help to decide whether your child needs an evaluation and what kind of assessment they need.

Students have a lot to say, so it is important to let them know their input is needed to shape how they are taught. Tools like QuestionPro and LivePolls make it quick to create surveys that can help improve academic performance and enrich the student experience.



The Federal Evaluation Toolkit (BETA)

Evaluation 101

What is evaluation? How can it help me do my job better? Evaluation 101 provides resources to help you answer those questions and more. You will learn about program evaluation and why it is needed, along with some helpful frameworks that place evaluation in the broader evidence context. Other resources provide helpful overviews of specific types of evaluation you may encounter or be considering, including implementation, outcome, and impact evaluations, and rapid cycle approaches.

What is Evaluation?

Heard the term "evaluation" but are still not quite sure what it means? These resources help you answer the question "What is evaluation?" and learn more about how evaluation fits into a broader evidence-building framework.

What is Program Evaluation?: A Beginner's Guide

Program evaluation uses systematic data collection to help us understand whether programs, policies, or organizations are effective. This guide explains how program evaluation can contribute to improving program services. It provides a high-level, easy-to-read overview of program evaluation from start (planning and evaluation design) to finish (dissemination), and includes links to additional resources.

Types of Evaluation

What's the difference between an impact evaluation and an implementation evaluation? What does each type of evaluation tell us? Use these resources to learn more about the different types of evaluation, what they are, how they are used, and what types of evaluation questions they answer.

Common Framework for Research and Evaluation (The Administration for Children & Families Common Framework for Research and Evaluation, OPRE Report #2016-14. Office of Planning, Research, and Evaluation, U.S. Department of Health and Human Services. https://www.acf.hhs.gov/sites/default/files/documents/opre/acf_common_framework_for_research_and_evaluation_v02_a.pdf)

Building evidence is not one-size-fits all, and different questions require different methods and approaches. The Administration for Children & Families Common Framework for Research and Evaluation describes, in detail, six different types of research and evaluation approaches – foundational descriptive studies, exploratory descriptive studies, design and development studies, efficacy studies, effectiveness studies, and scale-up studies – and can help you understand which type of evaluation might be most useful for you and your information needs.

Formative Evaluation Toolkit (Formative evaluation toolkit: A step-by-step guide and resources for evaluating program implementation and early outcomes. Washington, DC: Children's Bureau, Administration for Children and Families, U.S. Department of Health and Human Services.)

Formative evaluation can help determine whether an intervention or program is being implemented as intended and producing the expected outputs and short-term outcomes. This toolkit outlines the steps involved in conducting a formative evaluation and includes multiple planning tools, references, and a glossary. Check out the overview to learn more about how this resource can help you.

Introduction to Randomized Evaluations

Randomized evaluations, also known as randomized controlled trials (RCTs), are one of the most rigorous evaluation methods used to conduct impact evaluations to determine the extent to which your program, policy, or initiative caused the outcomes you see. They use random assignment of people/organizations/communities affected by the program or policy to rule out other factors that might have caused the changes your program or policy was designed to achieve. This in-depth resource introduces randomized evaluations in a non-technical way, provides examples of RCTs in practice, describes when RCTs might be the right approach, and offers a thorough FAQ about RCTs.
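To make that logic concrete, here is a minimal Python sketch of the random-assignment-and-compare step at the heart of a randomized evaluation. The unit names, score distribution, and 3-point effect are invented purely for illustration; a real RCT also involves power calculations, pre-registration, and attrition handling.

    import random
    import statistics

    rng = random.Random(42)

    # Hypothetical units (e.g., schools) to be assigned to the program.
    units = [f"school_{i}" for i in range(40)]

    # 1. Random assignment: chance, not selection, decides who gets the
    #    program, so the two groups are comparable on average.
    rng.shuffle(units)
    treatment, control = units[:20], units[20:]

    # 2. Simulate observed outcomes: a common baseline plus noise, with a
    #    made-up +3 point program effect for treated units.
    observed = {
        u: 70 + rng.gauss(0, 5) + (3 if u in treatment else 0)
        for u in units
    }

    # 3. Because assignment was random, the difference in group means
    #    estimates the causal impact of the program.
    impact = (statistics.mean(observed[u] for u in treatment)
              - statistics.mean(observed[u] for u in control))
    print(f"Estimated impact: {impact:.1f} points")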

Rapid Cycle Evaluation at a Glance (Rapid Cycle Evaluation at a Glance, OPRE #2020-152. Office of Planning, Research, and Evaluation, U.S. Department of Health and Human Services. https://www.acf.hhs.gov/opre/report/rapid-cycle-evaluation-glance)

Rapid Cycle Evaluation (RCE) can be used to efficiently assess implementation and inform program improvement. This brief provides an introduction to RCE, describing what it is, how it compares to other methods, when and how to use it, and includes more in-depth resources. Use this brief to help you figure out whether RCE makes sense for your program.


Evaluation in Education

Naftaly S. Glasman (University of California, Santa Barbara, USA) and David Nevo (Tel Aviv University, Israel)

Part of the book series: Evaluation in Education and Human Services Series (EEHS, volume 19)

In this chapter we attempt to clarify the meaning of evaluation as it has been conceptualized and practiced in recent years in the field of education.


Glasman, N.S., Nevo, D. (1988). Evaluation in Education. In: Evaluation in Decision Making. Evaluation in Education and Human Services Series, vol 19. Springer, Dordrecht. https://doi.org/10.1007/978-94-009-2669-1_3


Evaluation in Education: Meaning, Types, Importance, Principles & Characteristics

Before delving into evaluation in education, its meaning, types, and importance, let's start with a definition of evaluation.

What exactly is evaluation?

Evaluation is a procedure that reviews a program critically. It involves carefully gathering and analyzing data on a program's activities, features, and consequences. Its objective is to judge programs, improve program effectiveness, and inform programming decisions.

The efficacy of program interventions is assessed through educational evaluation. These interventions often address learning (such as reading); emotional, behavioral, and social development (such as anti-bullying initiatives); or wider subjects, such as whole-school improvements like inclusive education. Within the research community, debates have raged over methodology, specifically the use of qualitative versus quantitative approaches in evaluating program efficacy. There has also been significant political involvement, with some governments taking positions on the sort of evidence required for assessment studies, with a particular focus on randomized controlled trials (RCTs).

The initial goal of program assessment is to determine the effectiveness of the intervention. This can be done on a small scale, such as a school studying the implementation of a new reading scheme, or on a large scale at the district, school, state (local authority), or national level. The availability of national or state data collections, such as the United Kingdom Government's National Pupil Database and its student-level School Census, provides opportunities for large-scale evaluations of educational interventions, such as curricular reforms or the differential development of groups of children (e.g., the relationship between identification of special educational needs and ethnicity). However, the importance of evaluating both the effectiveness of the program itself and its implementation is increasingly recognized.

Other definitions of evaluation by other authors:

"The technique of obtaining and assessing information about changes in the conduct of all children as they advance through school," says Hanna.

According to Muffat, "evaluation is a continual process that is concerned with more than the official academic accomplishment of students. It is viewed in terms of the individual's growth in desired behavioral change in relation to his feelings, thoughts, and actions."

Evaluation is a crucial topic in both the first and second years of a B.Ed. program; every B.Ed. student should understand the notions of evaluation and assessment.

Types of Evaluation in Education

  • Formative Evaluation
  • Summative Evaluation
  • Prognostic Evaluation
  • Diagnostic Evaluation
  • Norm Referenced Evaluation
  • Criterion Referenced Evaluation
  • Quantitative Evaluation
  • Qualitative Evaluation

Each of these types of evaluation in education is explained below.

1. Formative Evaluation

  • This form of evaluation takes place during the instructional process. Its goal is to offer students and teachers continual feedback.
  • That feedback aids in making modifications to the instruction process as needed. It considers smaller, self-contained curricular sections, and pupils are ultimately assessed through short tests.
  • It assesses pupils' understanding and identifies which parts of their work need more attention. A teacher can assess pupils while teaching a class or a lesson, or after the topic has been completed, to see whether modifications in teaching approach are required.
  • It is very useful for making timely modifications or corrections to both pupils' learning and the teaching style.

2. Summative Evaluation

  • Summative evaluation occurs at the end of the school year or session. It assesses the achievement of objectives and changes in a student's general personality at the end of the session.
  • Summative evaluations address a wide range of aspects of learning. They combine formative assessment ratings with student tests taken after course completion to produce final grades and feedback for students.
  • Summative evaluation is used to grade students.

3. Prognostic Evaluation

  • Prognostic evaluations are used to estimate and anticipate a person's future development or career.
  • A prognostic evaluation adds a new dimension to the findings of an assessment of talents and potential: the person's likely future growth, together with the necessary conditions, timeline, and constraints.

4. Diagnostic Evaluation

  • As the phrase implies, diagnosis is the process of determining the root cause of a problem. In this examination, a teacher attempts to assess each student on many characteristics in order to understand each pupil's capabilities.
  • A diagnostic assessment is a type of pre-evaluation in which teachers assess students' strengths, weaknesses, knowledge, and abilities before the teaching-learning process begins.
  • It necessitates specially designed diagnostic tests as well as several additional observational procedures.
  • It is useful for developing the course and curriculum around the learner's ability.

5. Norm Referenced Evaluation

  • This type of assessment centers on students' relative performance, either by comparing the outputs of individual learners within the group being evaluated or by comparing their performance to that of others of similar age, experience, and background (contrast this with criterion-referenced evaluation in the sketch after item 6).
  • It informs the placement of pupils within the group.

6. Criterion Referenced Evaluation

  • Criterion-referenced evaluation describes a person's performance in relation to a predetermined performance benchmark.
  • It describes a student's performance accuracy, that is, how well the individual performs relative to a given standard.
  • In other words, it compares a student's performance to a predetermined benchmark rather than to other students; the sketch below contrasts the two reference systems.
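The contrast between the two reference systems can be shown in a few lines of Python. The cohort scores and the 60-mark benchmark below are hypothetical, chosen only to illustrate the two ways of interpreting the same raw score.

    def percentile_rank(score, cohort_scores):
        # Norm-referenced question: where does this score sit within the group?
        below = sum(1 for s in cohort_scores if s < score)
        return 100 * below / len(cohort_scores)

    def meets_benchmark(score, benchmark):
        # Criterion-referenced question: does the score meet a fixed standard?
        return score >= benchmark

    cohort = [38, 45, 52, 58, 61, 67, 73, 80]  # hypothetical class scores
    score = 61

    print(f"Percentile rank: {percentile_rank(score, cohort):.0f}")     # norm-referenced
    print(f"Meets the 60-mark benchmark: {meets_benchmark(score, 60)}")  # criterion-referenced

The same raw score of 61 reads as "middle of the group" under a norm-referenced lens and simply "pass" under a criterion-referenced one.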

7. Quantitative Evaluation

Quantitative evaluations employ scientific instruments and measures; the outcomes can be counted or measured.

Quantitative evaluation techniques or tools:

  • Performance evaluation

8. Qualitative Evaluation

  • Qualitative evaluation relies on qualitative observations, described in science as any observation made using the five senses; these are more subjective than quantitative measures.
  • It entails value judgement.

Qualitative evaluation techniques or tools:

9. Cumulative Records

The school keeps such records to document pupils' overall development.

10. Anecdotal Records

These records preserve descriptions of noteworthy events or student efforts.

11. Observation

This is the most popular method of qualitative student assessment and the only practical way to assess classroom interaction.

12. Checklist

Checklists specify precise criteria and allow instructors and students to collect data and make judgments about what pupils know and can do in relation to the intended outcomes. They provide systematic methods for gathering data on specific behaviors, knowledge, and abilities, as the sketch below illustrates.
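As a rough illustration of a checklist as a systematic data-gathering tool, the Python sketch below records whether each criterion was observed and summarizes what the pupil can already do; the criteria themselves are hypothetical.

    # Each criterion is marked True (observed) or False (not yet observed).
    checklist = {
        "reads aloud fluently": True,
        "identifies the main idea of a passage": True,
        "supports answers with evidence from the text": False,
        "writes a structured summary": False,
    }

    met = [c for c, observed in checklist.items() if observed]
    not_met = [c for c, observed in checklist.items() if not observed]

    print(f"Demonstrated ({len(met)}/{len(checklist)}): " + "; ".join(met))
    print("Needs support: " + "; ".join(not_met))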


Functions and Importance of Evaluation in Education

The primary goal of the teaching-learning process is to enable the student to achieve the desired learning outcomes. The learning objectives are established during this phase, and learning progress is then reviewed on a regular basis using tests and other assessment tools.

The evaluation process's role may be described as follows:

1. Evaluation aids in the preparation of instructional objectives: The evaluation results may be used to fix the learning goals expected from classroom discussion.

  • What kind of knowledge and comprehension should the learner gain?
  • What skills should they demonstrate?
  • What kinds of interests and attitudes should they cultivate?

This is achievable only when we determine the instructional objectives and communicate them clearly in terms of expected learning outcomes. Only a thorough evaluation method allows us to create a set of sound instructional objectives.

2. The evaluation method aids in analyzing the learner's requirements: It is critical to understand the needs of the learners during the teaching-learning process. The instructor must be aware of the knowledge and abilities the pupils must master.

3. Evaluation aids in delivering feedback to students: An evaluation procedure assists the instructor in identifying the students' learning difficulties. It contributes to the improvement of many school practices and assures proper follow-up services.

4. Evaluation aids in the preparation of programmed materials: Programmed instruction is a continuous succession of learning sequences. First, a limited quantity of teaching content is offered; then a test checks the learner's response to that material; finally, feedback is given based on the accuracy of the response. Programmed learning is therefore impossible without an adequate evaluation method; a short sketch of this cycle follows below.
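Here is a minimal Python sketch of that present-test-feedback cycle, with hypothetical lesson frames; real programmed instruction would typically branch to remedial frames rather than simply repeating the question.

    # Hypothetical two-frame lesson: present a small unit, test, give feedback.
    lessons = [
        {"content": "A noun names a person, place, or thing.",
         "question": "Is 'river' a noun? (y/n): ", "answer": "y"},
        {"content": "A verb expresses an action or a state.",
         "question": "Is 'blue' a verb? (y/n): ", "answer": "n"},
    ]

    def run_programmed_instruction(lessons, ask=input):
        for frame in lessons:
            print(frame["content"])              # 1. present a limited amount of material
            while True:
                reply = ask(frame["question"]).strip().lower()
                if reply == frame["answer"]:     # 2. test the response
                    print("Correct - moving to the next frame.")
                    break
                print("Not quite - re-read the material and try again.")  # 3. feedback

    # run_programmed_instruction(lessons)  # uncomment to run interactively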

5. Evaluation aids in curriculum development: Curriculum creation is an essential component of the educational process. Evaluation data support curriculum development, help determine the efficacy of new methods, and identify areas that require change. Evaluation also helps determine the effectiveness of an existing curriculum. Thus, assessment data aid both in developing new curricula and in evaluating existing ones.

6. Evaluation aids in communicating students' development to their parents: A structured evaluation approach gives an objective and complete view of each student's development. This comprehensive nature of the assessment procedure enables the instructor to report to the parents on the pupil's overall growth. This sort of objective information about the student serves as the foundation for the most successful collaboration between parents and instructors.

7. Evaluation data are valuable in guidance and counseling: Educational, vocational, and personal guidance all require evaluation methods. To help students address difficulties in the educational, vocational, and personal domains, the counselor must have an objective understanding of the students' talents, interests, attitudes, and other personal traits. A successful assessment system aids in forming a complete picture of the student, which leads to appropriate guidance and counseling.

8. Evaluation data aid good school administration: Evaluation data assist administrators in determining the extent to which the school's objectives are met, identifying the strengths and weaknesses of the curriculum, and planning special school programs.

9. Evaluation data are useful in school research: Research is required to improve the effectiveness of the school program. Data from evaluations aid research in fields such as comparative studies of different curricula, the efficacy of different approaches, the effectiveness of different organizational designs, and so on.

Principles of Evaluation

The following concepts guide evaluation:

  • Continuity principle: Evaluation is a continual process that continues as long as the student is involved in education. It is an integral part of the teaching-learning process; whatever the student learns should be examined regularly, for only then will the student gain greater mastery of the subject.
  • The comprehensiveness principle: We must evaluate all parts of the learner's personality; evaluation is concerned with the child's whole development.
  • The principle of objectives: Evaluation should always be based on educational objectives and should aid in determining where the learner's behavior needs to change.
  • Child-centeredness principle: The child is at the center of the evaluation process, and the child's conduct is the focal point of evaluation. It assists a teacher in determining a child's grasping ability and the effectiveness of teaching content.
  • The principle of broadness: Evaluation should be wide enough to encompass all areas of life.
  • Principle of application: The child may learn many things during the teaching-learning process, yet they may be of no use in daily life. Knowledge the child cannot apply is of little value, and evaluation reveals whether that application is happening. Evaluation determines whether a student can use knowledge and understanding in various circumstances in order to thrive in life.

Eight Characteristics of Evaluation in Education

  • Ongoing process: Evaluation is a never-ending process; it runs alongside the teaching-learning process.
  • Comprehensive: Evaluation is comprehensive because it encompasses everything that can be reviewed.
  • Child-centered: Evaluation is a child-centered technique that emphasizes the learning process rather than the teaching process.
  • Remedial: Although evaluation remarks on outcomes, it is not itself a remedy; its purpose is to identify problems so they can be corrected.
  • Cooperative process: Evaluation is a collaborative process that involves students, instructors, parents, and peer groups.
  • Teaching approaches: The effectiveness of various teaching methods is assessed.
  • Common practice: Evaluation is a standard practice for supporting the optimal mental and physical development of the child.
  • Multiple aspects: It is concerned with pupils' whole personalities.

In summary, evaluating educational programs requires evidence of efficacy, often from randomized controlled trials, together with evidence about the quality of program planning and implementation. To provide useful data for policy, program evaluations must show both that a program can work under ideal, controlled conditions and that it will work when carried out on a broad scale in community settings. To address these many dimensions of effectiveness, evaluation benefits from a mixed-methods strategy.


What is evaluation?

There are many different ways that people use the term 'evaluation'. 

At BetterEvaluation, when we talk about evaluation, we mean:

any systematic process to judge merit, worth or significance by combining evidence and values

That means we consider a broad range of activities to be evaluations, including some you might not have thought of as 'evaluations' before. We might even consider you to be an evaluator, even if you have never thought of yourself as an evaluator before!

Different labels for evaluation

When we talk about evaluation, we also include evaluation known by different labels:

  • Impact analysis
  • Social impact analysis
  • Appreciative inquiry
  • Cost-benefit assessment

Different types of evaluation

When we talk about evaluation we include many different types of evaluation - before, during and after implementation, such as:

  • Needs analysis — which analyses and prioritises needs to inform planning for an intervention
  • Ex-ante impact evaluation — which predicts the likely impacts of an intervention to inform resource allocation
  • Process evaluation — which examines the nature and quality of implementation of an intervention
  • Outcome and impact evaluation — which examines the results of an intervention
  • Sustained and emerging impacts evaluations — which examine the enduring impacts of an intervention some time after it has ended
  • Value-for-money evaluations — which examine the relationship between the cost of an intervention and the value of its positive and negative impacts (a toy cost-benefit sketch follows this list)
  • Syntheses of multiple evaluations — which combine evidence from multiple evaluations
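As a toy illustration of the value-for-money idea from the list above, the sketch below compares an intervention's cost with the monetised value of its impacts; all figures are invented.

    cost = 120_000.0              # programme cost
    positive_impacts = 200_000.0  # monetised value of positive impacts
    negative_impacts = 30_000.0   # monetised value of negative impacts

    net_benefit = positive_impacts - negative_impacts - cost
    bcr = (positive_impacts - negative_impacts) / cost  # benefit-cost ratio

    print(f"Net benefit: {net_benefit:,.0f}")
    print(f"Benefit-cost ratio: {bcr:.2f}")  # a ratio above 1 suggests value for money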

Monitoring and evaluation

When we talk about evaluation we include discrete evaluations and ongoing monitoring, including:

  • Performance indicators and metrics
  • Integrated monitoring and evaluation systems

Evaluations by different groups

When we talk about evaluation we include evaluations done by different groups, such as:

  • External evaluators
  • Internal staff
  • Communities
  • A hybrid team

Evaluation for different purposes

When we talk about evaluation we include evaluations that are intended to be used for different purposes:

  • Formatively, to make improvements
  • Summatively, to inform decisions about whether to start, continue, expand or stop an intervention.

Formative evaluation is not the same as process evaluation. Formative evaluation refers to the intended use of an evaluation (to make improvements); process evaluation refers to the focus of an evaluation (how it is being implemented).

As you can see, our definition of evaluation is broad. The resources on BetterEvaluation are designed with this in mind, and we hope they will help you in a range of evaluative activities.

How is this different to what other people mean by 'evaluation'?

Not everyone defines evaluation in this way, because of differences in professional and educational backgrounds, training, and organisational context. Be aware that people might define evaluation differently, and consider the implications of the labels and definitions that are used.

For example, some organisations use a definition of evaluation that focuses only on understanding whether or not an intervention has met its goals. However, this definition would not include a process evaluation, which might be used to check the quality of implementation and provide timely information to guide improvements. And it would not include a more comprehensive impact evaluation that considered unintended impacts (positive and negative) as well as intended impacts identified as goals.

Some organisations refer only to formal evaluations that are contracted out to external evaluators, which leaves out important methods for self-evaluation, peer evaluation and community-led evaluation.

A brief (4-page) overview that presents a statement from the American Evaluation Association defining evaluation as "a systematic process to determine merit, worth, value or significance".




Evaluation in Education: Meaning, Principles and Functions


After reading this article you will learn about: 1. Meaning of Evaluation 2. Principles of Evaluation 3. Functions of Evaluation.

Meaning of Evaluation:

Evaluation is a broader term than measurement; it is more comprehensive and inclusive than the term measurement. It goes beyond measurement, which simply indicates a numerical value, by adding a value judgement to that numerical value. It includes both tangible and intangible qualities.

Different educationists have defined evaluation as follows:

James M. Bradfield:

Evaluation is the assignment of symbols to phenomena in order to characterize the worth or value of a phenomenon, usually with reference to some cultural or scientific standards.

Thorndike and Hagen:

The term evaluation is closely related to measurement. It is in some respects more inclusive, including informal and intuitive judgement of the pupil's progress. Evaluation is describing something in terms of selected attributes and judging the degree of acceptability or suitability of that which has been described.

Norman E. Gronlund and Robert L. Linn:

Evaluation is a systematic process of collecting, analysing and interpreting information to determine the extent to which pupils are achieving instructional objectives.

The process of ascertaining or judging the value or amount of something by use of a standard of appraisal includes judgement in terms of internal evidence and external criteria. From the above definitions it can be said that evaluation is a much more comprehensive and inclusive term than measurement or testing. A test is a set of questions; measurement is assigning numbers to the results of a test according to specific rules; evaluation, on the other hand, adds value judgement.

For example, when we say Rohan secured 45 marks in Arithmetic, this only indicates 'how much' Rohan has successfully answered; it does not include any qualitative description, i.e., 'how good' he is at Arithmetic. Evaluation, on the other hand, includes both quantitative description (measurement) and qualitative description (non-measurement), along with value judgements. This relationship between measurement, non-measurement and evaluation is illustrated in the following diagram (1.1).

[Diagram 1.1: Relationship between Measurement, Non-Measurement and Evaluation]
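A tiny Python sketch makes the distinction concrete: the function returns both the measurement ('how much') and a value judgement ('how good'). The grade bands are hypothetical, chosen only to illustrate adding judgement to a number.

    def evaluate(marks, out_of=100):
        # Return the measurement plus a qualitative value judgement.
        measurement = f"{marks}/{out_of}"   # quantitative description
        ratio = marks / out_of
        if ratio >= 0.75:
            judgement = "distinction"       # qualitative description
        elif ratio >= 0.60:
            judgement = "good"
        elif ratio >= 0.40:
            judgement = "satisfactory"
        else:
            judgement = "needs remedial support"
        return measurement, judgement

    print(evaluate(45))  # ('45/100', 'satisfactory'): Rohan's 45 marks plus a judgement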

Principles of Evaluation:

Evaluation is a systematic process of determining the extent to which instructional objectives have been achieved. Therefore the evaluation process must be carried out with effective techniques.

The following principles will help to make the evaluation process an effective one:

1. It must be clearly stated what is to be evaluated:

A teacher must be clear about the purpose of evaluation. He must formulate the instructional objectives and define them clearly in terms of students' observable behaviour. Before selecting the achievement measures, the intended learning outcomes must be specified clearly.

2. A variety of evaluation techniques should be used for a comprehensive evaluation:

It is not possible to evaluate all aspects of achievement with the help of a single technique. For better evaluation, techniques like objective tests, essay tests and observational techniques should be used, so that a complete picture of the pupil's achievement and development can be assessed.

3. An evaluator should know the limitations of different evaluation techniques:

Evaluation can be done with the help of simple observation or highly developed standardized tests. But whatever the instrument or technique, it has its own limitations. There may be measurement errors; sampling error is a common factor in educational and psychological measurements. An achievement test may not cover the whole course content. Errors can also arise from students guessing on objective tests and from incorrect interpretation of test scores.

4. The technique of evaluation must be appropriate for the characteristics or performance to be measured:

Every evaluation technique is appropriate for some uses and inappropriate for others. Therefore, while selecting an evaluation technique, one must be well aware of the strengths and limitations of the techniques.

5. Evaluation is a means to an end but not an end in itself:

The evaluation technique is used to make decisions about the learner. It is not merely the gathering of data about the learner, because blind collection of data is a waste of both time and effort; evaluation is meant to serve a useful purpose.

Functions of Evaluation:

The main aim of the teaching-learning process is to enable the pupil to achieve intended learning outcomes. In this process the learning objectives are fixed; then, after instruction, learning progress is periodically evaluated by tests and other evaluation devices.

The functions of the evaluation process can be summarized as follows:

1. Evaluation helps in preparing instructional objectives:

Learning outcomes expected from classroom discussion can be fixed by using evaluation results:

What type of knowledge and understanding should the student develop?

What skills should they display?

What interests and attitudes should they develop?

This is possible only when we identify the instructional objectives and state them clearly in terms of intended learning outcomes. Only a good evaluation process helps us fix a set of sound instructional objectives.

2. Evaluation process helps in assessing the learner’s needs:

In the teaching-learning process it is very necessary to know the needs of the learners. The instructor must know the knowledge and skills to be mastered by the students. Evaluation helps to determine whether the students possess the required knowledge and skills to proceed with the instruction.

3. Evaluation helps in providing feedback to the students:

An evaluation process helps the teacher to know the learning difficulties of the students. It helps to bring about an improvement in different school practices. It also ensures an appropriate follow-up service.

4. Evaluation helps in preparing programmed materials:

Programmed instruction is a continuous series of learning sequences. First the instructional material is presented in a limited amount; then a test is given to check the response to the instructional material; next, feedback is provided on the basis of the correctness of the response made. Without an effective evaluation process, programmed learning is not possible.

5. Evaluation helps in curriculum development:

Curriculum development is an important aspect of the instructional process. Evaluation data enable curriculum developers to determine the effectiveness of new procedures and to identify areas where revision is needed. Evaluation also helps to determine the extent to which an existing curriculum is effective. Thus evaluation data are helpful in constructing a new curriculum and in evaluating the existing one.

6. Evaluation helps in reporting pupil’s progress to parents:

A systematic evaluation procedure provides an objective and comprehensive picture of each pupil's progress. This comprehensive nature of the evaluation process helps the teacher to report on the total development of the pupil to the parents. This type of objective information about the pupil provides the foundation for the most effective co-operation between parents and teachers.

7. Evaluation data are very much useful in guidance and counselling:

Evaluation procedures are very necessary for educational, vocational and personal guidance. In order to assist pupils in solving their problems in the educational, vocational and personal fields, the counsellor must have objective knowledge of the pupils' abilities, interests, attitudes and other personal characteristics. An effective evaluation procedure helps in building a comprehensive picture of the pupil, which leads to effective guidance and counselling.

8. Evaluation helps in effective school administration:

Evaluation data help the administrators to judge the extent to which the objectives of the school are being achieved, to find out strengths and weaknesses of the curriculum and to arrange special school programmes. They also help in decisions concerning admission, grouping and promotion of the students.

9. Evaluation data are helpful in school research:

In order to make the school programme more effective, research is necessary. Evaluation data help in research areas like comparative study of different curricula, effectiveness of different methods, effectiveness of different organisational plans, etc.


  • Open access
  • Published: 01 May 2024

Evaluation and students’ perception of a health equity education program in physical therapy: a mixed methods pilot study

  • Alexis A. Wright 1 ,
  • Dominique Reynolds 1 &
  • Megan Donaldson 2  

BMC Medical Education, volume 24, Article number: 481 (2024)

Health equity is a common theme discussed in health professions education, yet only some researchers have addressed it in entry-level education.

The purpose of this study is to serve as an educational intervention pilot to 1) evaluate students' perception of the effectiveness of the DPT program in providing a foundation for health equity education, with or without the benefit of a supplemental resource, and 2) establish priorities for the program related to educating students on health inequities in physical therapy clinical practice.

A mixed method design with a focus-group interview was utilized to explore students’ perceptions of the DPT program's commitment to advancing health equity.

A three-staged sequential mixed methods study was conducted. Stage 1 began with quantitative data collection after completing the DEI Bundle utilizing the Tripod DEI survey. Stage 2 involved identifying themes from the Tripod Survey data and creating semi-structured interview questions. Stage 3 consisted of a focus group interview process.

A total of 78 students completed the Tripod DEI survey upon completing 70% of the curriculum. Thirty-five students, eight core faculty, 13 associated faculty, and four clinical instructors completed the APTA DEI Bundle Course Series. According to the Tripod DEI Survey results, program stakeholders found the program’s commitment to DEI and overall climate to be inclusive, fair, caring, safe, welcoming, and understanding of individuals from different backgrounds, including a sense of student belonging where students feel valued and respected. Three themes emerged from the qualitative focus group interviews, including the value of inclusivity, health equity curricular foundations, and DEI in entry-level DPT education.

Conclusions

This study highlights the value of incorporating health equity and DEI topics into curricula while fostering an inclusive program culture.


Introduction

Racial and ethnic disparities in healthcare are a longstanding and well-documented crisis in the United States [ 1 ]. A strategic goal of the American Physical Therapy Association (APTA) is to increase diversity, equity, and inclusion within the profession to better serve society's health. At its core, physical therapy is rooted in optimizing overall health and decreasing preventable illness and injury. Additionally, physical therapists are trained to be adaptive and to respond to the social and environmental influences that impact patients' health outcomes. These foundational traits uniquely position these healthcare providers with the skills to respond to health inequities. Yet the effectiveness and implementation of education and training for health providers are rarely studied [ 1 , 2 ]. Specifically, diversity, equity, and inclusion (DEI) education provides a basis for confronting systemic racism and improving health equity, and physical therapy programs are being called to action [ 2 ]. However, the measurement of learners' awareness and perceived effectiveness of educational interventions has lagged [ 1 ].

The literature review on this topic includes a study by the Institute of Medicine (IOM), which has provided recommendations for addressing and eliminating racial/ethnic disparities in healthcare. These recommendations include increasing healthcare providers' awareness of racial/ethnic disparities in healthcare and educating health providers on health disparities, cultural competence, and the impact of race/ethnicity on clinical decision-making [ 3 ]. A developing entry-level Doctor of Physical Therapy (DPT) program intentionally designed curricula aligned with the IOM recommendations. Curricular topics were informed by the Clinical Prevention and Population Health Curriculum Framework, a product of the Healthy People Curriculum Task Force established in 2002 by the Association for Prevention Teaching and Research (APTR) [ 4 ]. Knowledge-based activities were designed to further awareness and understanding of the social determinants of health, health prevention, cultural awareness, health inequities, healthcare accessibility, systems thinking, and implicit and explicit bias among entry-level DPT students. The DPT curriculum is grounded in the theoretical framework of constructivism, which holds that learners actively construct knowledge by linking new information to what they have previously learned and by incorporating new experiences into their knowledge base, and that learners' knowledge structures are continually constructed and reconstructed [ 5 ].

Additionally, co-curricular educational activities were promoted throughout the program.

The theoretical framework for co-curricular educational activities is based on relational learning. Specifically, this model has been used for health promotion and inclusion [ 6 , 7 ]. The co-curriculum does what the standard academic curriculum generally does not: it is developmental, transformative, and future-focused. For example, as a program, sessions were provided for learners to attend speaker sessions on DEI topics, apply for leadership roles (including the Diversity, Equity, and Anti-Racism (DEAR) Council), and engage in service activities, all grounded in an expectation of professional behaviors that encourage intellectual discussions on complex topics in an environment free of criticism, discrimination, harassment or any other emotional or physical harm.

The purpose of this study is to serve as an educational intervention pilot to 1) evaluate students’ perceptions of the effectiveness of the DPT program in providing a foundation for health equity education, with or without the benefit of a supplemental resource, and 2) establish priorities for the program related to educating students on health inequities in physical therapy clinical practice.

Materials and methods

Participants and study design

Determining the research question(s) is a vital step in the mixed research process, which is interactive, emergent, fluid, and evolving [ 8 ]. As Leech and Onwuegbuzie [ 8 ] defined, "mixed methods research questions combine or mix both the quantitative and qualitative research questions necessitating the resulting data be collected and analyzed." Mixed research sampling designs can be classified according to (a) the time orientation of the components (e.g., whether the qualitative and quantitative phases occur concurrently or sequentially) and (b) the relationship of the qualitative and quantitative samples (e.g., identical vs. parallel vs. nested vs. multilevel).

Design:  To address the objectives of this study, a partially mixed-method design with a sequential and nested relationship was selected. The nested structure implies that individuals chosen for one phase of the study (qualitative focus group interviews) constitute a subset of those selected in the preceding phase (participants in the quantitative surveys) [ 8 , 9 ]. Nonetheless, qualitative and quantitative research methodologies hold equal significance in this study's design and analytical approach.

Sampling Strategy: Participant enrichment refers to the mixing of qualitative and quantitative techniques with the rationale of optimizing the sample. Beginning with Phase 1, a total of 153 participants, comprising 81 students from the Class of 2022 (as pre-professionals) together with program faculty (16), associated faculty (36), and clinical instructors (20) (as post-professionals), were offered the option to participate in this mixed methods study. An email describing the purpose of the study was sent to all participants.

Within mixed-method designs, instrument fidelity is essential; researchers use it to maximize the appropriateness and utility of the quantitative and qualitative instruments used in the study. These included the Tripod DEI survey, the APTA Diversity, Equity, and Inclusion (DEI) Bundle, and the qualitative semi-guided interview process. Stage 1 began with quantitative data collection after completing the Diversity, Equity, and Inclusion Bundle utilizing the Tripod DEI survey. Stage 2 involved identifying themes from the Tripod Survey data and creating semi-structured interview questions. Stage 3 consisted of the focus group interview process. Further details outlining the timeline and phases of the study appear in Fig. 1.

[Figure 1: Timeline and Process for Study]

The research implementation began with the quantitative survey, in which all students were surveyed using the Tripod DEI survey, which was deployed after semester 4 of the program, reflecting 70% completion of the curriculum [ 10 ]. Students were allowed to participate in the voluntary, supplementary APTA DEI Bundle beginning in Semester 5 [ 11 ]. Before participating in the APTA DEI Bundle, the Tripod DEI Survey was readministered to all students, program faculty, associated faculty, and clinical instructors who elected to participate [ 10 ]. Following completion of the APTA DEI Bundle, the Tripod DEI Survey was readministered a second time to all students, program faculty, associated faculty, and clinical instructors who completed the APTA DEI Bundle course series [ 10 , 11 ]. The pre-test and post-test methodologies explored differences between adding the American Physical Therapy Association DEI Bundle to the program’s curriculum and co-curricular activities [ 11 ].

The study commenced once approval to conduct it was obtained from the Institutional Review Board at the university. After the submission was reviewed, the Tufts University IRB office determined that the proposed activity was not deemed human research as defined by DHHS and FDA regulations. (IRB ID:STUDY00002820).

Research planning: quantitative study instrument

Tripod Education Partners works with programs to gather, organize, and report on student and teacher perspectives [ 10 ]. The Tripod DEI survey captures student perceptions of how diversity, equity, and inclusion issues play out in their school. The survey collects feedback from teachers about their experiences as teachers and perspectives about strengths and opportunities for improvement. Permission and funding for survey distribution were obtained before disseminating the survey.

The survey consisted of a total of 38 questions with eight distinct measures: 1) School commitment to DEI ( N  = 3), 2) School climate overall ( N  = 4), 3) School climate for DEI ( N  = 4), 4) Classroom teaching supporting DEI ( N  = 7), 5) Co-curricular activities supporting DEI ( N  = 3), 6) Everyday discrimination by students ( N  = 6), 7) Everyday discrimination by teachers ( N  = 6), and 8) Meaningful interactions across difference ( N  = 5) (Tripod Education Partners, 2019). School commitment to DEI is scored on a Likert scale from 1 (totally untrue) to 5 (totally true). School climate overall and DEI are scored as ordinal variables, with 2 being more favorable. Classroom teaching supporting DEI is scored on a Likert scale from 1 (none) to 5 (all). Co-curricular activities supporting DEI is scored on a Likert scale from 1 (my school doesn't sponsor things like this) to 6 (very often). Everyday discrimination by students and teachers and meaningful interactions across differences are scored on a Likert scale from 1 (never) to 5 (very often).

The “overall sense of belonging” ( N  = 3) was scored on a Likert scale from 1 (totally untrue) to 5 (totally true).
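As an illustration of how such Likert subscales are commonly scored, the Python sketch below computes a per-student subscale score as the mean of its items and then the group mean and standard deviation. The items and responses are invented, and the actual scoring procedure of the Tripod DEI instrument may differ.

    from statistics import mean, stdev

    # Hypothetical responses: each row is one student's answers to the three
    # "School commitment to DEI" items, 1 (totally untrue) to 5 (totally true).
    responses = [
        [4, 5, 4],
        [5, 4, 4],
        [3, 4, 4],
    ]

    subscale_scores = [mean(r) for r in responses]  # one subscale score per student
    print(f"M = {mean(subscale_scores):.1f}, SD = {stdev(subscale_scores):.1f}")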

The Tripod DEI survey development shows good construct validity and internal consistency [ 10 ]. Diverse student populations are at the center of the survey. Reports disaggregate findings by social identities across various groups, including but not limited to race, gender, and socioeconomic status. This breakdown allows programs to pinpoint groups of students reporting less-than-positive experiences and take action to address their needs.

Research planning: description of the DEI training bundle

The optional training program was conducted through asynchronous electronic delivery of the APTA DEI bundle [ 11 ]. This program is a three-part series exploring foundational concepts related to diversity, equity, and inclusion and is led by Diana Lautenberger, MA, co-lead of the Association of American Medical Colleges' leadership development seminar program. The three-part series utilizes a highly reflective approach whereby participants learn about identity, privilege, bias, and allyship as foundational pillars to achieving DEI. In addition, participants engage in self-reflection throughout the series to apply concepts to their clinical and personal lives to create more respectful and inclusive environments.

The series consists of three two-hour sessions: Part 1 – Unconscious Bias in the Health Professions; Part 2 – Power, Privilege, and Microaggressions; Part 3 – Responding to Microaggressions Through Allyship. The bundle's stated objectives ask learners to 1) understand how their various identities carry social capital or power, 2) describe aspects of a dominant culture that advantage some and disadvantage others, and 3) utilize allyship and bystander intervention strategies that reduce harm to create more respectful and inclusive environments [ 11 ]. This program requires the completion of an assessment for the training. Viewers who completed all three sessions and scored at least 70% on each session's assessment (built into the modules) were also able to earn 0.6 CEUs (six contact hours) and a certificate of completion.

Research planning: qualitative focus group interviews

Using an explanatory sequential mixed methods study, the qualitative portion aimed to further understand the students’ perceptions, establish priorities for the program related to educating students on health inequities in physical therapy clinical practice, and evaluate the effectiveness of adding the DEI Bundle. Based on the results of the quantitative portion of the study, two researchers created questions that would be used in the focus group interviews. The a priori semi-structured question guide in Table  1 was designed to allow emergent focus group discussion to explore concepts further.

Data analysis plan

Quantitative data collection and analysis

The data analysis program IBM SPSS 28.0 was utilized to store and analyze data from the Tripod DEI survey. For all the Tripod DEI survey subscales, items were summed, and scores were calculated. Descriptive statistics were utilized to calculate means, standard deviations, and 95% confidence intervals for each of the eight domains and Overall Sense of Belonging. Paired sample t-tests were conducted to compare pre-test and post-test scores. Summary independent samples t-tests compared the entire sample data ( N  = 81) to the post-DEI Bundle Series data.
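The paired pre/post comparison described above can be sketched in Python with SciPy (the authors report using IBM SPSS 28.0); the scores below are invented solely to show the shape of the analysis.

    from scipy import stats

    # Invented pre- and post-Bundle subscale scores for the same eight students.
    pre  = [4.0, 3.7, 4.3, 4.0, 3.3, 4.7, 4.0, 3.7]
    post = [4.3, 3.7, 4.0, 4.3, 3.7, 4.7, 4.0, 4.0]

    t_stat, p_value = stats.ttest_rel(pre, post)   # paired-samples t-test
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p > .05 would mirror the reported null result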

Qualitative data collection and analysis

The semi-structured focus group interview guide questions (Table  1 ) were designed after the quantitative data collection was completed, and the data assessment revolved around concepts collected from the survey data.

A variety of data collection strategies were used, including (a) a mixture of open- and closed-ended items within the questionnaires that guided the focus group interview process, (b) a mixture of a priori (from the quantitative results) and additional emergent/flowing focus-group strategies through a semi-guided interview process. The Standards for Reporting Qualitative Research (SRQR) checklist was utilized for reporting.

Given the small sample size, no statistical software was utilized. Coding was used to assign labels to data segments to capture their meaning and allow comparison to identify themes or patterns. Both researchers used qualitative content analysis to systematically categorize transcribed content into topic areas from the thick descriptions provided. Qualitative fields were created to organize data by topic counts of language content areas (such as “DEI” and “belonging” quotes). The preliminary or open coding was done first and then refined to a higher level to reflect broader categories. All coding stages were done separately and then together to ensure improved accuracy. Then, the researchers used the comparison analysis and consensus approach to categorize and interpret data to identify patterns and content themes during the analysis. The analysis used a matrix table as a visual spreadsheet, where the rows represented participants, and the columns represented codes identified.
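The participants-by-codes matrix described above might look like the following pandas sketch; the participant labels, code names (echoing the three reported themes), and counts are hypothetical placeholders.

    import pandas as pd

    # Rows: focus-group participants; columns: codes; cells: how often each
    # participant's transcript segments were tagged with each code.
    matrix = pd.DataFrame(
        [[3, 1, 2],
         [1, 2, 0],
         [2, 2, 1]],
        index=["P1", "P2", "P3"],
        columns=["value of inclusivity", "curricular foundations", "DEI in DPT education"],
    )

    print(matrix)
    print(matrix.sum())  # total mentions of each code across participants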

Researcher characteristics and reflexivity: The background and experience of the researchers could have influenced the research as two of the researchers had routine involvement with the participants within the study. The same researchers that conducted the study design and implementation conducted the focus group interviews via Zoom while participants were on clinical rotations. The focus-group interviews were audio-recorded and transcribed by an administrative coordinator who supported the faculty and had limited student interactions during daily work.

Techniques to enhance trustworthiness: The research team, consistent throughout the study, undertook the quantitative and qualitative data analysis. To maintain objectivity, they devised a set of a priori questions for interviews, steering clear of leading inquiries or interpretations. Subsequently, they conducted content analysis directly from transcriptions. Reflexivity strategies encompassed credibility checks via member validation and a post-session peer debriefing (between researchers), ensuring accuracy in focus group interviews. The research coordinator, unbiased to quantitative analysis, remained uninvolved in question formulation, solely providing session transcriptions for analysis. Furthermore, thick descriptions were provided, and qualitative counts of language content areas were evenly applied to promote the transferability of qualitative findings. By integrating these measures, the study aimed to mitigate inherent limitations in its design and bolster the credibility, transferability, dependability, and confirmability of its qualitative research, thus enhancing the trustworthiness and reliability of its findings.

Quantitative analysis and results

A total of 78 students completed the Tripod DEI survey upon completing Semester 4 of the curriculum. A total of 42 students, eight core faculty, 16 associated faculty, and four clinical instructors elected to participate and complete the voluntary, supplementary pre-APTA DEI bundle Tripod DEI survey beginning Semester five. A total of 35 students, eight core faculty, 13 associated faculty, and four clinical instructors completed the APTA DEI Bundle Course Series. Thirty-two students, eight core faculty, 13 associated faculty, and four clinical instructors completed the post-APTA DEI Bundle Tripod DEI Survey.

Student results

Demographics of the full sample of 78 students can be found in Table  2 .

Survey results following the completion of Semester 4 are summarized below and reported as means and standard deviations.

School Commitment to DEI (1 = totally untrue; to 5 = totally true)

Students generally found the program's commitment to DEI to be inclusive, fair, and understanding of individuals from different backgrounds (M = 4.1, SD = 0.9) or “mostly true”.

School Climate Overall (1 = less favorable; 2 = favorable)

Students reported the program's climate/culture as caring, respectful, safe, and welcoming (M = 2.0, SD = 0.1) where 2 is scored as caring, respectful, safe, and welcoming.

School Climate for DEI (1 = less favorable; 2 = favorable)

Students rated the program's climate/culture for DEI as “equally fair” to all students regardless of social identity (M = 1.9, SD = 0.2), where 2 indicates equally fair to all students. This domain included questions related to race, ethnicity, sexual orientation, socioeconomic status, and gender.

Classroom Teaching Supporting DEI (1 = none; 5 = all)

For classroom teaching supporting diversity, equity, and inclusion, students rated “most but not all” faculty (M = 4.1, SD = 0.8) as having integrated material on different social identities, discussed issues of social inequality, and used student-centered teaching methods. This domain included questions about helping students think about how to improve the world, leading discussions about why some people have difficult lives and others have easier lives, connecting classroom content to problems or issues in the world as well as to students’ own lives and interests, helping students think about how to improve other people’s lives, assigning readings or materials about people from different backgrounds or places, and teaching about influential people from many different cultures.

Co-Curricular Activities Supporting DEI (1 = my school doesn’t sponsor things like this; 6 = very often)

With regard to co-curricular activities supporting diversity, equity, and inclusion, students reported on average that they “hardly ever” participated in a school-sponsored group for students of different racial, ethnic, socioeconomic, gender, sexual orientation, or ability groups; attended a school-sponsored event related to diversity, fairness, or inclusion; or participated in a program-sponsored group working to make the world a better place (M = 3.3, SD = 1.0).

Everyday Discrimination by Students (1 = never; 5 = very often)

Students reported everyday discrimination by other students as occurring “never to hardly ever” across items covering courtesy, respect, assumptions about intelligence, acting better than others, being bullied or threatened, and insults (M = 1.8, SD = 0.7).

Everyday Discrimination by Teachers (1 = never; 5 = very often)

Students reported everyday discrimination by faculty as occurring “never to hardly ever” across the same items (M = 1.4, SD = 0.6).

Meaningful Interactions Across Differences (1 = never; 5 = very often)

Students reported meaningful interactions across differences as occurring “fairly often,” including honest discussions with other students whose religion, family income, culture, or race differed from their own (M = 3.8, SD = 0.9).

Belonging (1 = totally untrue; 5 = totally true)

Finally, students rated their sense of belonging in the program as “mostly true to totally true,” indicating that they felt valued, respected, and that they belonged (M = 4.4, SD = 0.8).

Comparison of Tripod survey pre-post

Thirty-two students elected to participate in and completed the APTA DEI Bundle Series with both pre- and post-Bundle survey data. Demographic information on student participation in the DEI Bundle can be found in Table 3. After completion of the APTA DEI Bundle Series, we found no significant difference in any of the eight domains or in Sense of Belonging. We also found no significant differences in any domain between the full sample (N = 78) and the post-DEI Bundle Series sample (N = 32); an illustrative sketch of such a paired pre/post comparison follows.
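
The paired comparison can be illustrated with a short sketch. The exact statistical test is not restated in this section, so a Wilcoxon signed-rank test is shown here as one reasonable choice for a small paired sample; the data are invented for illustration.

```python
# Paired pre/post comparison of one survey domain for n = 32 participants.
# Data are invented; a non-significant p-value mirrors the reported result.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
pre = rng.integers(3, 6, size=32).astype(float)  # hypothetical pre-Bundle scores
post = pre + rng.normal(0.0, 0.3, size=32)       # hypothetical post-Bundle scores

stat, p = wilcoxon(pre, post)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.3f}")
```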

Post-professional stakeholder results

Twenty-five of the post-professional stakeholders elected to participate in and completed the APTA DEI Bundle Series with both pre- and post-Bundle survey data. After completion of the APTA DEI Bundle Series, we found no significant difference in any of the eight domains or in Sense of Belonging.

Similarly, the post-professional stakeholders generally found the program's commitment to DEI to be inclusive, fair, and understanding of individuals from different backgrounds (M = 4.2, SD = 1.2).

Post-professionals reported the program’s climate/culture overall as caring, respectful, safe, and welcoming (M = 2.0, SD = 0.0).

Post-professionals rated the program’s climate/culture for DEI as “equally fair” to all students, regardless of their social identity (M = 2.0, SD = 0.1). This included questions related to race, ethnicity, sexual orientation, socioeconomic status, and gender.

Classroom Teaching Supporting DEI (1 = none; 5 = all)

Post-professionals rated “most but not all” faculty (M = 3.8, SD = 1.0) as having integrated material on different social identities, discussed issues of social inequality, and used student-centered teaching methods. This domain included the same questions asked of students: helping them think about how to improve the world, leading discussions about why some people have difficult lives and others have easier lives, connecting classroom content to problems or issues in the world as well as to their own lives and interests, helping them think about how to improve other people’s lives, assigning readings or materials about people from different backgrounds or places, and teaching about influential people from many different cultures.

With regard to co-curricular activities supporting diversity, equity, and inclusion, post-professionals reported on average that they “hardly ever” participated in a school-sponsored group for students of different racial, ethnic, socioeconomic, gender, sexual orientation, or ability groups; attended a school-sponsored event related to diversity, fairness, or inclusion; or participated in a program-sponsored group working to make the world a better place (M = 2.9, SD = 1.0).

Post-professionals reported “never to hardly ever” concerning everyday discrimination by students (M = 1.3, SD = 0.5).

Post-professionals reported “never to hardly ever” concerning everyday discrimination by teachers (M = 1.4, SD = 0.5).

Post-professionals reported meaningful interactions across differences as occurring “fairly often,” including honest discussions with other students whose religion, family income, culture, or race differed from their own (M = 3.1, SD = 0.9).

Finally, post-professionals rated their sense of belonging in the program as “mostly true to totally true,” indicating that they felt valued, respected, and that they belonged (M = 4.5, SD = 1.0).

Results of qualitative focus group content analysis

Of the participants who completed the quantitative portion of the study, a nested sub-group of students (n = 9) volunteered to participate in the semi-structured focus group interview following completion of the DEI Bundle. Demographic information on student participation in the interviews can be found in Table 4.

The interview guide prompted rich discussion around two topics: 1) DEI, with or without the training supplement, as related to health equity in physical therapy, and 2) the program’s commitment to training students on topics associated with health equity. Three themes emerged from the final qualitative content analysis of the focus group interviews.

Theme 1: students’ perceived value of inclusivity

Theme one was the value of inclusivity, with three associated sub-themes: fairness, actions, and communication. In higher education, inclusivity is the ongoing process of improving the education system to meet the needs of all students, especially those in marginalized groups. It involves reimagining educational services to serve a diverse audience and making learning materials and teaching methods accessible to as many students as possible, considering a range of student identities including race, gender, sexuality, and ability. “The program does make an effort, especially with adjuncts that we bring in, ableism talks, and people from different backgrounds speaking to us in classes on Zoom.”

Additionally, participants considered it essential that the program provide sessions to improve inclusivity and that it communicate and demonstrate actions consistent with that value. “Being a member of the gay community, having a faculty in class that you feel you belong in and are not outcasted in is super important.” Participants valued being included in activities and receiving communicated support during school and personal life challenges. They recognized the challenge of finding people from different backgrounds who meet the expectations and specialties needed to teach within the program. They noted that visual diversity was at times limited within the core faculty but perceived an intention toward greater racial and ethnic inclusivity within the associated faculty and lecturer roles.

Within the value of inclusivity, there is also an inherent limitation in who can afford a DPT graduate-level program at a private university. Hybrid education offers geographical convenience and reaches a more diverse student group; however, current students felt that financial concerns could be a barrier to inclusivity, especially for those in marginalized groups. As one participant noted, the “program doesn’t have control over the cost of tuition but does communicate what is available as far as opportunities for financial aid.” Students felt that communication about the costs of the hybrid program and the financial aid available was essential.

Theme 2: students’ perceived value of health equity curricular foundations

Theme two was the value of health equity curricular foundations, with three sub-themes: representation in assignments, system resources, and practice issues. Health equity is the goal of helping people reach their highest level of health; it means everyone has a fair chance to achieve optimal health regardless of race, ethnicity, gender identity, or socioeconomic status, and it can be promoted through DEI initiatives that emphasize acceptance and inclusiveness. The focus group reported that health equity topics associated with race, social determinants, and access were satisfactorily addressed within the curriculum. However, there were opportunities to better integrate formative activities with how health issues affect those with visual diversity: “Activities within the program should also include skin tone other than white throughout systems-focused curriculum case studies, mannequins, and simulation/standardized patients.”

Theme 3: students’ perceived value of DEI in entry-level PT education

Lastly, one remaining theme specifically addressed DEI supplementation to the curriculum. Theme three was the value of DEI in entry-level physical therapy education, with three sub-themes: the timing of content, planned redundancy of learning, and the limited value of a stand-alone DEI bundle. The students in the focus group reached consensus on their perceived confidence in, and appropriate knowledge of, social determinants of health when working with underserved populations during their clinical education exposures. However, the focus group also agreed on “concerns about generalizing their feelings to all classmates, as some students may have had different experiences based on their final clinical education setting and exposure.”

Additionally, students perceived that inclusivity and health equity values should be blended across the curriculum so that supporting and training those with different backgrounds can be promoted through DEI initiatives. They offered rich context on program and curriculum initiatives that would be more “inclusive and supportive of a health equity curricular track and activities threaded throughout the curriculum rather than a stand-alone module.” Mirroring the quantitative results, the focus group reached consensus that there was “limited value in the DEI Bundle as a stand-alone module outside of the curriculum”; instead, the students preferred a curriculum designed to cover these topics sufficiently within systems and population coursework.

The mixed methods analysis allows a fuller explanation of students’ perceptions by blending the results of the study’s qualitative and quantitative portions. Both portions showed that the program climate/culture is essential, especially as students connect inclusivity and acceptance of others with learning to value DEI from a health equity perspective. Students valued their education more when content related to health equity and diversity was blended across the curriculum, and they found value when more than just content was presented: a program culture, planned curriculum content, and co-curricular (outside-of-class) support for health equity and inclusivity of the populations health care providers serve. As educators look to reduce variation in essential content across healthcare disciplines, a structured format (toolkit or bundle) could benefit students educationally but may be valued less by them.

Discussion

Our study aimed to explore the students’ perceptions and establish priorities for the program regarding educating students on health inequities in physical therapy clinical practice.

Health equity is a common theme in health professions education, yet few have published methods for addressing it in entry-level education. National organizations recommend that medical schools and health professions programs train students in the social determinants of health, providing the opportunity to educate the next generation of healthcare professionals about sensitive yet essential issues.

Given the complexity of this topic, we utilized a three-staged sequential mixed methods approach to generate the results presented in this study. We found the program’s commitment to DEI and its overall climate to be inclusive, fair, caring, safe, welcoming, and understanding of individuals from different backgrounds, including a sense of student belonging in which students feel valued and respected. Additionally, the sample provided feedback on the educational approach and format of the DEI Bundle. A modular curricular approach (not integrated through a course) was used in this study, so the results for the APTA’s DEI Bundle should be considered in the context of this “addition to” delivery format. Given this format, the DEI Bundle produced no significant change, likely because a threaded curricular approach was already in place within the program, as reflected in both the Tripod DEI survey and the qualitative focus group themes. This finding aligns with other recommendations for curriculum approaches to health equity [ 12 ] that integrate health equity content longitudinally and alongside other topics, with the goal of eliminating views of health equity and healthcare as separate [ 13 ].

Few studies explore the style, content, and delivery of health equity topics within entry-level healthcare professional education programs. However, the Association of American Medical Colleges recommends that medical educators expose their students to content about health disparities [ 14 ]. There are challenges to implementing these recommendations [ 15 ], further complicated by the absence of guidance on format, delivery, and the requisite degree of competency, all of which remain poorly defined. Several resources are available but are not easily found across all health professions disciplines. Nevertheless, several studies highlight the importance of health equity education and its impact on therapeutic relationships (trust and caring), and identify the consequences of implicit bias for patient adherence and outcomes [ 16 ].

Significant work must be done to unite the health professions on strategies for implementing health equity curricula. An external resource strategy or modular approach could be effective given limited resources and a lack of topic expertise within the program faculty, but it should be used within an integrated approach and placed intentionally within the curriculum design, with opportunities for integration across courses, case studies to facilitate thinking and reasoning, and a culminating competency-type assessment. Curriculum toolkits provided by professional associations may be one way to unite the disciplines in supporting health equity education in the health professions [ 17 ]. An excellent example of this approach is the American Academy of Family Physicians Health Equity Curricular Toolkit, developed with over 40 content experts [ 18 , 19 ]. A threaded curriculum, together with a program culture willing to utilize health equity curriculum toolkits, is essential for our next generation of health practitioners. These toolkits are resources for learning and for reducing variability in education [ 18 ]. Exploring outcomes associated with toolkits may be one way to begin identifying best practices in curriculum delivery that maximize learning outcomes and competency in health equity [ 20 ]. Lastly, any health equity resource or curricular approach should facilitate exploration of some of the most pressing questions around social determinants of health, vulnerable populations, economics, and policy from an evidence-informed perspective.

Limitations

There are several limitations we would like to address. Within the quantitative portion of the study, the Tripod DEI survey adequately assessed overall student perception of the DPT program’s commitment to DEI; however, it may lack responsiveness to changes specific to the APTA DEI Bundle. Within any mixed methods design, it is important to address data fidelity during the qualitative portion. A non-investigator conducted the survey distribution and outcome assessment, but the focus group interviews were conducted by two study investigators, both of whom are on the program’s leadership team; this may have compromised fidelity, trustworthiness, or participants’ willingness to share, and the researchers’ involvement in the education itself is a limitation. Although a safe-space and relational learning theory approach is utilized within the program, this involvement may have limited exploration of sensitive topics or themes. A non-investigator recorded and transcribed what was shared in the focus groups for the data analysis. The second limitation of the qualitative focus groups was the small size and limited diversity of the sample; specifically, the individuals who made time to participate were not notably diverse in race or sex. The third limitation is the inability to identify how many students responded on the basis of participation in additional co-curricular activities that supplemented their DEI learning.

Significant work remains to unite the health professions on strategies for implementing health equity curricula. It was essential to gain insight into students’ perceptions and to establish priorities for the current curriculum and entry-level program culture related to educating students on health inequities in physical therapy clinical practice. Given limited resources and a lack of health equity topic expertise among program administrators and faculty, an external resource strategy or modular approach could be effective. Based on our study, however, program culture matters as it relates to DEI from a health equity perspective, and it should be evident to students as we influence them to become the next generation of health professionals.

Lastly, intentional curriculum design should include opportunities for integration across courses with case studies, culminating in a competency-type assessment, even if an external resource is used. Resources are available to support health equity education in the health professions, including health equity curriculum toolkits that provide free links and resources for learning and may help reduce variability in education [ 15 ]. Any health equity resource or curricular approach should support faculty’s willingness to include some of the most pressing questions around social determinants of health, vulnerable populations, economics, and policy within their current or future curricula. Motivating incremental changes in entry-level professional teaching methods and working intentionally to integrate health equity into clinic- and classroom-based environments are tangible next steps. Best practices from education through implementation are not yet well established, and this study provides only a pilot for future work.

Availability of data and materials

The data supporting this study's findings are available from the corresponding author upon request.

Fitzgerald C, Hurst S. Implicit bias in healthcare professionals: a systematic review. BMC Med Ethics. 2017;18:19.


Matthews ND, Rowley KM, Dusing SC, Krause L, Yamaguchi N, Gordon J. Beyond a Statement of Support: Changing the Culture of Equity, Diversity, and Inclusion in Physical Therapy. Phys Ther. 2021;101(12):pzab212. https://doi.org/10.1093/ptj/pzab212 .

Institute of Medicine (US) Committee on Understanding and Eliminating Racial and Ethnic Disparities in Health Care. Unequal treatment: confronting racial and ethnic disparities in health care. In: Smedley BD, Stith AY, Nelson AR, editors. Washington (DC): National Academies Press (US); 2003. PMID: 25032386.

Association for Prevention Teaching and Research. Clinical prevention and population health curriculum framework. https://www.teachpopulationhealth.org/ .

Dong H, Lio J, Sherer R, Jiang I. Some Learning Theories for Medical Educators. Med Sci Educ. 2021;31(3):1157–72. https://doi.org/10.1007/s40670-021-01270-6 . PMID:34457959; PMCID:PMC8368150.

Nieminen JH. Assessment for inclusion: rethinking inclusive assessment in higher education. Teach High Educ. 2022. https://doi.org/10.1080/13562517.2021.2021395 .

Rogers DT. The working alliance in teaching and learning: theoretical clarity and research implications. International Journal for the Scholarship of Teaching and Learning. 2009;3(2):Article 28.

Leech NL, Onwuegbuzie AJ. Guidelines for Conducting and Reporting Mixed Research in the Field of Counseling and Beyond. J Couns Dev. 2010;88:61–9.

Leech NL, Onwuegbuzie AJ. A typology of mixed methods research designs. Quality and Quantity: International Journal of Methodology. 2009;43:265–75.

Tripod Education Partners. Tripod’s Diversity, Equity, and Inclusion (DEI Survey) Technical Manual; 2019.


American Physical Therapy Association. APTA’s new DEI course series helps you work toward more inclusivity. October 22, 2021. https://www.apta.org/news/2021/10/22/dei-bundle .

Butler L, Arya V, Nonyel N, Moore T. The Rx-HEART framework to address health equity and racism within pharmacy education. Am J Pharm Educ. 2021;85. https://doi.org/10.5688/ajpe8590 .

Landry AM. Integrating Health Equity Content Into Health Professions Education. AMA J Ethics. 2021;23(3):E229–234.

Association of American Medical Colleges. Quality improvement and patient safety competencies across the learning continuum. Association of American Medical Colleges: New and Emerging Areas in Medicine Series; 2019.

Liburd L, Hall J, Mpofu J, Williams S, Bouye K, Penman-Aguilar A. Addressing Health Equity in Public Health Practice: Frameworks, Promising Strategies, and Measurement Considerations. Annu Rev Public Health. 2020. https://doi.org/10.1146/annurev-publhealth-040119-094119 .

Paterick TE, Patel N, Tajik AJ, Chandrasekaran K. Improving health outcomes through patient education and partnerships with patients. Proc (Bayl Univ Med Cent). 2017;30(1):112–3.

Ueffing E, Tugwell P, Roberts J, Walker P, Hamel N, Welch V. Equity-oriented toolkit for health technology assessment and knowledge translation: application to scaling up of training and education for health workers. Hum Resour Health. 2009;7:67. https://doi.org/10.1186/1478-4491-7-67 .

American Academy of Family Physicians. Health Equity Curricular Toolkit. Accessed 6 Jun 2023. https://www.aafp.org/family-physician/patient-care/the-everyone-project/health-equity-tools.html .

Martinez-Bianchi V, Frank B, Edgoose J, Michener L, Rodriguez M, Gottlieb L, Reddick B, Kelly C, Yu K, Davis S, Carr J, Lee JW, Smith KL, New RD. Addressing family medicine’s capacity to improve health equity through collaboration, accountability and coalition-building. Fam Med. 2019;51(2):198–203.

Porroche-Escudero A, Popay J. The Health Inequalities Assessment Toolkit: supporting integration of equity into applied health research. J Public Health (Oxf). 2020;43:567–72. https://doi.org/10.1093/pubmed/fdaa047 .


Acknowledgements

Not applicable (NA).

Funding

This project was funded through an internal Tufts University School of Medicine Innovations in Diversity Education Awards (IDEAS) program.

Author information

Authors and affiliations

Doctor of Physical Therapy Program, Department of Rehabilitation Sciences, Tufts University, Boston, MA, 02111, USA

Alexis A. Wright & Dominique Reynolds

Department of Rehabilitation Sciences, Medical University of South Carolina, Charleston, SC, 29425, USA

Megan Donaldson


Contributions

Authors AW and MD contributed to the work's concept, design, and data analysis; drafting and final approval of the work; and agree to be accountable for all aspects of the work. Author DR contributed to acquiring the data for the work; reviewing and final approval of the work; and agrees to be accountable for the accuracy.

Corresponding author

Correspondence to Alexis A. Wright.

Ethics declarations

Ethics approval and consent to participate

This research study was performed in accordance with the Declaration of Helsinki. It was submitted to the Tufts University Social, Behavioral, & Educational Research IRB on 6/9/2022 and was determined not to meet the criteria for human research (study Protocol STUDY00002820). The IRB determined “that the proposed activity is not research as defined by DHHS and FDA regulations. IRB review and approval by this organization is not required. This determination applies only to the activities described in the IRB submission and does not apply should any changes be made.” The research team nevertheless followed all requirements for ethical research, including the consent process. Informed consent to participate in the study was obtained from all participants whose data are reported in this study.

Consent for publication

Competing interests

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Wright, A.A., Reynolds, D. & Donaldson, M. Evaluation and students’ perception of a health equity education program in physical therapy: a mixed methods pilot study. BMC Med Educ 24 , 481 (2024). https://doi.org/10.1186/s12909-024-05471-6


Received: 27 June 2023

Accepted: 25 April 2024

Published: 01 May 2024

DOI: https://doi.org/10.1186/s12909-024-05471-6


Keywords

  • Doctor of physical therapy
  • Health equity
  • Health professions



K-5 SEL Pilot Grant Evaluation

In 2022-2024 CADRE researchers partnered with the Colorado Department of Education to evaluate a state grant pilot program. This program emerged from Colorado HB 19-1017, which provided pilot K-5 Social-Emotional Learning (SEL) grants to 14 schools. One central purpose of these grants was to ensure that school health professionals (SHPs) at each elementary school could provide SEL support to all students, while also identifying and matching those with more severe behavioral and mental health needs with community partners to assist them and their families outside of school. Another purpose was to reduce demands on teachers who are often the first ones called upon to provide social-emotional, behavioral, and mental supports to students. The hope is that support provided to students by SHPs hired through this grant will reduce the number of students requiring more intensive interventions.

The schools that participated in this study are located in seven distinct geographical regions across the state of Colorado. Twelve of the 14 schools have a Title I designation because over 40% of their student body is eligible for free and reduced-price lunch. While this grant was intended to be implemented in schools beginning in the 2020-21 school year, the launch of these grants was delayed until the 2021-22 school year due to the COVID-19 pandemic.

To evaluate the effectiveness of the K-5 SEL pilot and to chart the ongoing SEL work taking place at all 14 sites, we collected and analyzed data for this report using a combination of qualitative and quantitative methods. We used data collected from the 14 schools to track the extent to which the pilot grants met their intended objectives. These data included surveys of teachers and SHPs, mental health systems assessments, and performance measures that tracked services provided for students. Within the initial 14 schools in the pilot, we conducted an exploratory case study in 2022-24 at four schools. Our goal was to gain an understanding of the context for the implementation work underway. We consulted with CDE staff to select four schools with distinct characteristics and levels of SEL program implementation so that we could learn about the common and unique ways in which SHPs provided students with SEL, as well as other mental and behavioral supports. At these four schools we conducted interviews with SHPs and principals, focus groups with teachers, and observations of both whole class and small group SEL activities. 

Key findings from this report include:  

  • SHP hires supported by this grant have lowered the service ratio of mental health professionals to students, and this appears to have increased each pilot site’s capacity to respond to students in crisis and to provide more personalized interventions. Compared to the first year of the grant (2021-22) in which several schools indicated a need to broaden the levels or tiers of behavioral support provided to students, all pilot schools reported in 2022-23 that mental health practices were implemented.  
  • Although the COVID-19 pandemic is no longer creating massive disruptions (i.e., closures), a common theme across the case study sites is that behavioral disruptions persist, even though teachers reported spending less time on behavioral management in year 2 of the grant than in year 1.

The study results point to positive developments achieved by grantees, but also indicate challenges for sustaining this work over time. The grants increased the capacity of schools to address student emotional needs by hiring new SHPs or expanding existing SHP roles; this is encouraging. However, personnel at case study sites and the secondary data analyzed both indicate that further resources and support are needed to help mitigate the persisting and growing behavioral and mental health needs at all 14 elementary schools.

This pilot was initiated prior to the COVID-19 pandemic, in response to a reported trend of increased mental health and behavioral needs in elementary schools. The massive disruptions and closures occasioned by the COVID-19 pandemic exacerbated this trend in the schools involved in this study, and likely in many others across the state. Negative ramifications from pandemic-related disruptions linger at all 14 schools in this study, indicating the need to continue attending to the behavioral and mental health needs of elementary students.

Our work continues in 2024 as we complete the final report on the pilot program. In addition to ongoing review of secondary data across all of the pilot schools, our case study analysis will explore themes related to improvements in SEL practices and school-wide coordination, the expanded use of SEL screeners in some schools, and communication with families by SEL specialists.


Precision Education in Medical Education: A Paradigm Shift in Learner Assessment

Apr 29, 2024 • By James Bretz & Jessica Bartolone

Medical education has undergone significant transformations in recent years, with a growing emphasis on precision education. This innovative approach tailors learning experiences to individual learners, recognizing the diversity of students and their unique needs. In this article, we will delve into the basics of precision education within the medical education industry, specifically focusing on learner assessment.

Precision Education Defined

Precision education, also referred to as personalized or adaptive learning, is a contemporary approach to education that utilizes data and technology to create a tailored learning experience for each individual student.

Within this model, educators utilize a variety of methods and resources to gather information on students’ unique learning styles, strengths, weaknesses, and preferences. This may include utilizing assessments, making observations, seeking student feedback, and incorporating data from educational technology platforms.

By carefully analyzing this data, educators are able to gain a deeper understanding of each student’s learning profile. Armed with this knowledge, they can then craft personalized learning opportunities that are specifically designed to meet the individual needs and aspirations of each student.

Learner Assessment in Precision Education

Adaptive Assessments

In the world of education, standardized assessments have been the norm for a long time. However, with the advent of precision education, we are now entering a new era where adaptive assessments are used to measure a learner’s performance. These assessments are dynamic in nature, adjusting the difficulty level based on the individual’s abilities. This means that learners are neither overwhelmed nor bored, resulting in optimal engagement.
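To make the idea concrete, here is a minimal sketch of a difficulty-adjustment rule, assuming a simple "staircase" policy; real adaptive tests typically rely on item response theory, so this is illustrative only and all names and values are hypothetical.

```python
# Staircase rule: raise item difficulty after a correct answer, lower it
# after an incorrect one, clamped to the allowed range. Illustrative only;
# production adaptive tests usually estimate ability with IRT models.

def next_difficulty(current: int, correct: bool, step: int = 1,
                    lo: int = 1, hi: int = 10) -> int:
    """Return the next item's difficulty level."""
    proposed = current + step if correct else current - step
    return max(lo, min(hi, proposed))

difficulty = 5  # start mid-range
for answer_correct in [True, True, False, True, False]:  # hypothetical responses
    difficulty = next_difficulty(difficulty, answer_correct)
    print(f"next item difficulty: {difficulty}")
```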

One of the major benefits of precision education is the incorporation of continuous feedback mechanisms. This means that learners receive real-time updates on their progress, allowing them to identify their strengths and weaknesses. With this timely information, educators can intervene and provide appropriate support and interventions to address areas that need improvement.

Moreover, with the help of data analytics tools, educators can gain valuable insights into individual and group learning patterns. These analytics allow them to identify trends and patterns, facilitating the optimization of teaching methods to suit the specific needs of learners. With precision education, the potential for growth and improvement is endless.

Outcome-based Evaluation

Outcome-based evaluation in precision education reshapes the traditional approach by shifting the emphasis to measurable outcomes. It goes beyond student grades to encompass a holistic assessment of critical thinking, clinical reasoning, and practical application of knowledge in various real-world situations.

In this dynamic educational model, precision education strives for continuous improvement. By closely analyzing data on an ongoing basis, educators are able to pinpoint areas where the program can be enhanced. This ensures that the curriculum constantly evolves to meet the changing demands of the ever-evolving healthcare landscape. One of the key components of program evaluation in precision education is the development of individualized learning plans. With a personalized approach, these plans cater to the unique needs and goals of each student, leading to a more effective and tailored learning experience.

Precision education is a groundbreaking approach to medical education that prioritizes personalized learning opportunities and tailored interventions. Through careful learner assessment and program evaluation, this approach paves the way for a more efficient and adaptive medical education system. With the adoption of precision education, the medical education industry has the potential to equip future healthcare professionals with the necessary skills and knowledge to tackle the ever-changing landscape of healthcare.


Oak Ridge school superintendent's contract extended to 2028

Borchers got an outstanding evaluation from the school board, chairman says.


Oak Ridge School Superintendent Bruce Borchers' contract has been extended another year.

The Oak Ridge Board of Education approved the one-year contract extension at the board's Jan. 22 meeting. All four members present voted in favor of the extension, with member Heather Hartmann absent from the meeting at the School Administration Building.

As part of the annual contract extension vote, the board members evaluate Borchers and have the opportunity to meet with him one-on-one to discuss matters and the evaluation.

"His evaluation was an outstanding evaluation," Board Chairman Keys Fillauer said. He added that Borchers tends to give credit for accomplishments to others on his staff and in the individual schools, but that credit ultimately must be given to him as leader of the school system.

Borchers is currently serving his 11th year as superintendent. His new four-year contract will go into effect on June 18. With the one-year extension, his contract will expire on June 17, 2028. A four-year contract is the longest the board can offer the superintendent under state law.

The contract extension carries no increase in pay or benefits; raises are handled, as they are for other Oak Ridge educators, through the approval of a new school system budget each year. Borchers currently makes an annual salary of $235,455.

The Oak Ridger's News Editor Donna Smith covers Oak Ridge area news. Email her at  [email protected]  and follow her on X, the social media platform formerly known as Twitter, @ridgernewsed.  

Support The Oak Ridger by subscribing. Offers available at  https://subscribe .
