
Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research to Make Curricular & Instructional Decisions

Paula J. Stanovich and Keith E. Stanovich University of Toronto

Produced by RMC Research Corporation, Portsmouth, New Hampshire

This publication was produced under National Institute for Literacy Contract No. ED-00CO-0093 with RMC Research Corporation. Sandra Baxter served as the contracting officer's technical representative. The views expressed herein do not necessarily represent the policies of the National Institute for Literacy. No official endorsement by the National Institute for Literacy or any product, commodity, service, or enterprise is intended or should be inferred.

The National Institute for Literacy

Sandra Baxter, Interim Executive Director
Lynn Reddy, Communications Director

To order copies of this booklet, contact the National Institute for Literacy at EdPubs, PO Box 1398, Jessup, MD 20794-1398. Call 800-228-8813 or email [email protected].

The National Institute for Literacy, an independent federal organization, supports the development of high quality state, regional, and national literacy services so that all Americans can develop the literacy skills they need to succeed at work, at home, and in the community.

The Partnership for Reading, a project administered by the National Institute for Literacy, is a collaborative effort of the National Institute for Literacy, the National Institute of Child Health and Human Development, the U.S. Department of Education, and the U.S. Department of Health and Human Services to make evidence-based reading research available to educators, parents, policy makers, and others with an interest in helping all people learn to read well.

Editorial support provided by C. Ralph Adler and Elizabeth Goldman, and design/production support provided by Diane Draper and Bob Kozman, all of RMC Research Corporation.

Introduction

In the recent move toward standards-based reform in public education, many educational reform efforts require schools to demonstrate that they are achieving educational outcomes with students performing at a required level of achievement. Federal and state legislation, in particular, has codified this standards-based movement and tied funding and other incentives to student achievement.

At first, demonstrating student learning may seem like a simple task, but reflection reveals that it is a complex challenge requiring educators to use specific knowledge and skills. Standards-based reform has many curricular and instructional prerequisites. The curriculum must represent the most important knowledge, skills, and attributes that schools want their students to acquire because these learning outcomes will serve as the basis of assessment instruments. Likewise, instructional methods should be appropriate for the designed curriculum. Teaching methods should lead to students learning the outcomes that are the focus of the assessment standards.

Standards- and assessment-based educational reforms seek to obligate schools and teachers to supply evidence that their instructional methods are effective. But testing is only one of three ways to gather evidence about the effectiveness of instructional methods. Evidence of instructional effectiveness can come from any of the following sources:

  • Demonstrated student achievement in formal testing situations implemented by the teacher, school district, or state;
  • Published findings of research-based evidence that the instructional methods being used by teachers lead to student achievement; or
  • Proof of reason-based practice that converges with a research-based consensus in the scientific literature. This type of justification of educational practice becomes important when direct evidence may be lacking (a direct test of the instructional efficacy of a particular method is absent), but there is a theoretical link to research-based evidence that can be traced.

Each of these methods has its pluses and minuses. While testing seems the most straightforward, it is not necessarily the clear indicator of good educational practice that the public seems to think it is. The meaning of test results is often not immediately clear. For example, comparing averages or other indicators of overall performance from tests across classrooms, schools, or school districts takes no account of the resources and support provided to a school, school district, or individual professional. Poor outcomes do not necessarily indict the efforts of physicians in Third World countries who work with substandard equipment and supplies. Likewise, objective evidence of below-grade or below-standard mean performance of a group of students should not necessarily indict their teachers if essential resources and supports (e.g., curriculum materials, institutional aid, parental cooperation) to support teaching efforts were lacking. However, the extent to which children could learn effectively even in under-equipped schools is not known because evidence-based practices are, by and large, not implemented. That is, there is evidence that children experiencing academic difficulties can achieve more educationally if they are taught with effective methods; sadly, scientific research about what works does not usually find its way into most classrooms.

Testing provides a useful professional calibrator, but it requires great contextual sensitivity in interpretation. It is not the entire solution for assessing the quality of instructional efforts. This is why research-based and reason-based educational practice are also crucial for determining the quality and impact of programs. Teachers thus have the responsibility to be effective users and interpreters of research. Providing a survey and synthesis of the most effective practices for a variety of key curriculum goals (such as literacy and numeracy) would seem to be a helpful idea, but no document could provide all of that information. (Many excellent research syntheses exist, such as the National Reading Panel, 2000; Snow, Burns, & Griffin, 1998; Swanson, 1999, but the knowledge base about effective educational practices is constantly being updated, and many issues remain to be settled.)

As professionals, teachers can become more effective and powerful by developing the skills to recognize scientifically based practice and, when the evidence is not available, use some basic research concepts to draw conclusions on their own. This paper offers a primer for those skills that will allow teachers to become independent evaluators of educational research.

The Formal Scientific Method and Scientific Thinking in Educational Practice

When you go to your family physician with a medical complaint, you expect that the recommended treatment has proven to be effective with many other patients who have had the same symptoms. You may even ask why a particular medication is being recommended for you. The doctor may summarize the background knowledge that led to that recommendation and very likely will cite summary evidence from the drug's many clinical trials and perhaps even give you an overview of the theory behind the drug's success in treating symptoms like yours.

All of this discussion will probably occur in rather simple terms, but that does not obscure the fact that the doctor has provided you with data to support a theory about your complaint and its treatment. The doctor has shared knowledge of medical science with you. And while everyone would agree that the practice of medicine has its "artful" components (for example, the creation of a healing relationship between doctor and patient), we have come to expect and depend upon the scientific foundation that underpins even the artful aspects of medical treatment. Even when we do not ask our doctors specifically for the data, we assume it is there, supporting our course of treatment.

Actually, Vaughn and Dammann (2001) have argued that the correct analogy is to say that teaching is in part a craft, rather than an art. They point out that craft knowledge is superior to alternative forms of knowledge such as superstition and folklore because, among other things, craft knowledge is compatible with scientific knowledge and can be more easily integrated with it. One could argue that in this age of education reform and accountability, educators are being asked to demonstrate that their craft has been integrated with science--that their instructional models, methods, and materials can be likened to the evidence a physician should be able to produce showing that a specific treatment will be effective. As with medicine, constructing teaching practice on a firm scientific foundation does not mean denying the craft aspects of teaching.

Architecture is another professional practice that, like medicine and education, grew from being purely a craft to a craft based firmly on a scientific foundation. Architects wish to design beautiful buildings and environments, but they must also apply many foundational principles of engineering and adhere to structural principles. If they do not, their buildings, however beautiful they may be, will not stand. Similarly, a teacher seeks to design lessons that stimulate students and entice them to learn--lessons that are sometimes a beauty to behold. But if the lessons are not based in the science of pedagogy, they, like poorly constructed buildings, will fail.

Education is informed by formal scientific research through the use of archival research-based knowledge such as that found in peer-reviewed educational journals. Preservice teachers are first exposed to the formal scientific research in their university teacher preparation courses (it is hoped), through the instruction received from their professors, and in their course readings (e.g., textbooks, journal articles). Practicing teachers continue their exposure to the results of formal scientific research by subscribing to and reading professional journals, by enrolling in graduate programs, and by becoming lifelong learners.

Scientific thinking in practice is what characterizes reflective teachers--those who inquire into their own practice and who examine their own classrooms to find out what works best for them and their students. What follows in this document is, first, a "short course" on how to become an effective consumer of the archival literature that results from the conduct of formal scientific research in education and, second, a section describing how teachers can think scientifically in their ongoing reflection about their classroom practice.

Being able to access mechanisms that evaluate claims about teaching methods and to recognize scientific research and its findings is especially important for teachers because they are often confronted with the view that "anything goes" in the field of education--that there is no such thing as best practice in education, that there are no ways to verify what works best, that teachers should base their practice on intuition, or that the latest fad must be the best way to teach, please a principal, or address local school reform. The "anything goes" mentality actually represents a threat to teachers' professional autonomy. It provides a fertile environment for gurus to sell untested educational "remedies" that are not supported by an established research base.

Teachers as independent evaluators of research evidence

One factor that has impeded teachers from being active and effective consumers of educational science has been a lack of orientation and training in how to understand the scientific process and how that process results in the cumulative growth of knowledge that leads to validated educational practice. Educators have only recently attempted to resolve educational disputes scientifically, and teachers have not yet been armed with the skills to evaluate disputes on their own.

Educational practice has suffered greatly because its dominant model for resolving or adjudicating disputes has been more political (with its corresponding factions and interest groups) than scientific. The field's failure to ground practice in the attitudes and values of science has made educators susceptible to the "authority syndrome" as well as fads and gimmicks that ignore evidence-based practice.

When our ancestors needed information about how to act, they would ask their elders and other wise people. Contemporary society and culture are much more complex. Mass communication allows virtually anyone (on the Internet, through self-help books) to proffer advice, to appear to be a "wise elder." The current problem is how to sift through the avalanche of misguided and uninformed advice to find genuine knowledge. Our problem is not information; we have tons of information. What we need are quality control mechanisms.

Peer-reviewed research journals in various disciplines provide those mechanisms. However, even with mechanisms like these in behavioral science and education, it is all too easy to do an "end run" around the quality control they provide. Powerful information dissemination outlets such as publishing houses and mass media frequently do not discriminate between good and bad information. This provides a fertile environment for gurus to sell untested educational "remedies" that are not supported by an established research base and, often, to discredit science, scientific evidence, and the notion of research-based best practice in education. As Gersten (2001) notes, both seasoned and novice teachers are "deluged with misinformation" (p. 45).

We need tools for evaluating the credibility of these many and varied sources of information; the ability to recognize research-based conclusions is especially important. Acquiring those tools means understanding scientific values and learning methods for making inferences from the research evidence that arises through the scientific process. These values and methods were recently summarized by a panel of the National Academy of Sciences convened on scientific inquiry in education (Shavelson & Towne, 2002), and our discussion here will be completely consistent with the conclusions of that NAS panel.

The scientific criteria for evaluating knowledge claims are not complicated and could easily be included in initial teacher preparation programs, but they usually are not (which deprives teachers of an opportunity to become more efficient and autonomous in their work right at the beginning of their careers). These criteria include:

  • the publication of findings in refereed journals (scientific publications that employ a process of peer review),
  • the duplication of the results by other investigators, and
  • a consensus within a particular research community on whether there is a critical mass of studies that point toward a particular conclusion.

In their discussion of the evolution of the American Educational Research Association (AERA) conference and the importance of separating research evidence from opinion when making decisions about instructional practice, Levin and O'Donnell (2000) highlight the importance of enabling teachers to become independent evaluators of research evidence. Being aware of the importance of research published in peer-reviewed scientific journals is only the first step because this represents only the most minimal of criteria. Following is a review of some of the principles of research-based evaluation that teachers will find useful in their work.

Publicly verifiable research conclusions: Replication and Peer Review

Source credibility: the consumer protection of peer-reviewed journals.

The front line of defense for teachers against incorrect information in education is the existence of peer-reviewed journals in education, psychology, and other related social sciences. These journals publish empirical research on topics relevant to classroom practice and human cognition and learning. They are the first place that teachers should look for evidence of validated instructional practices.

As a general quality control mechanism, peer-reviewed journals provide a "first pass" filter that teachers can use to evaluate the plausibility of educational claims. To put it more concretely, one ironclad criterion that will always work for teachers when presented with claims of uncertain validity is the question: Have findings supporting this method been published in recognized scientific journals that use some type of peer review procedure? The answer to this question will almost always separate pseudoscientific claims from the real thing.

In peer review, authors submit a paper to a journal for publication, where it is critiqued by several scientists. The critiques are reviewed by an editor (usually a scientist with an extensive history of work in the specialty area covered by the journal). The editor then decides whether the weight of opinion warrants immediate publication, publication after further experimentation and statistical analysis, or rejection because the research is flawed or does not add to the knowledge base. Most journals carry a statement of editorial policy outlining their exact procedures for publication, so it is easy to check whether a journal is, in fact, peer-reviewed.

Peer review is a minimal criterion, not a stringent one. Not all information in peer-reviewed scientific journals is necessarily correct, but it has at the very least undergone a cycle of peer criticism and scrutiny. However, it is because the presence of peer-reviewed research is such a minimal criterion that its absence becomes so diagnostic. The failure of an idea, a theory, an educational practice, behavioral therapy, or a remediation technique to have adequate documentation in the peer-reviewed literature of a scientific discipline is a very strong indication to be wary of the practice.

The mechanisms of peer review vary somewhat from discipline to discipline, but the underlying rationale is the same. Peer review is one way (replication of a research finding is another) that science institutionalizes the attitudes of objectivity and public criticism. Ideas and experimentation undergo a honing process in which they are submitted to other critical minds for evaluation. Ideas that survive this critical process have begun to meet the criterion of public verifiability. The peer review process is far from perfect, but it really is the only external consumer protection that teachers have.

The history of reading instruction illustrates the high cost that is paid when the peer-reviewed literature is ignored, when the normal processes of scientific adjudication are replaced with political debates and rhetorical posturing. A vast literature has been generated on best practices that foster children's reading acquisition (Adams, 1990; Anderson, Hiebert, Scott, & Wilkinson, 1985; Chard & Osborn, 1999; Cunningham & Allington, 1994; Ehri, Nunes, Stahl, & Willows, 2001; Moats, 1999; National Reading Panel, 2000; Pearson, 1993; Pressley, 1998; Pressley, Rankin, & Yokol, 1996; Rayner, Foorman, Perfetti, Pesetsky, & Seidenberg, 2002; Reading Coherence Initiative, 1999; Snow, Burns, & Griffin, 1998; Spear-Swerling & Sternberg, 2001). Yet much of this literature remains unknown to many teachers, contributing to the frustrating lack of clarity about accepted, scientifically validated findings and conclusions on reading acquisition.

Teachers should also be forewarned about the difference between professional education journals that are magazines of opinion in contrast to journals where primary reports of research, or reviews of research, are peer reviewed. For example, the magazines Phi Delta Kappan and Educational Leadership both contain stimulating discussions of educational issues, but neither is a peer-reviewed journal of original research. In contrast, the American Educational Research Journal (a flagship journal of the AERA) and the Journal of Educational Psychology (a flagship journal of the American Psychological Association) are both peer-reviewed journals of original research. Both are main sources for evidence on validated techniques of reading instruction and for research on aspects of the reading process that are relevant to a teacher's instructional decisions.

This is true, too, of presentations at conferences of educational organizations. Some are data-based presentations of original research. Others are speeches reflecting personal opinion about educational problems. While these talks can be stimulating and informative, they are not a substitute for empirical research on educational effectiveness.

Replication and the importance of public verifiability.

Research-based conclusions about educational practice are public in an important sense: they do not exist solely in the mind of a particular individual but have been submitted to the scientific community for criticism and empirical testing by others. Knowledge considered "special"--the province of the thought of an individual and immune from scrutiny and criticism by others--can never have the status of scientific knowledge. Research-based conclusions, when published in a peer-reviewed journal, become part of the public realm, available to all, in a way that claims of "special expertise" are not.

Replication is the second way that science makes research-based conclusions concrete and "public." In order to be considered scientific, a research finding must be presented to other researchers in the scientific community in a way that enables them to attempt the same experiment and obtain the same results. When the same results occur, the finding has been replicated. This process ensures that a finding is not the result of the errors or biases of a particular investigator. Replicable findings become part of the converging evidence that forms the basis of a research-based conclusion about educational practice.
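Why replication matters can be made concrete with a toy simulation. The sketch below is purely illustrative (the "true effect," sample sizes, and noise levels are all invented numbers): any single study of a teaching method can land well off the true effect because of sampling error, while the pooled estimate from many replications converges on it.

```python
import random

random.seed(42)

TRUE_EFFECT = 5.0   # hypothetical true gain (in test points) from a teaching method
NOISE_SD = 8.0      # hypothetical student-to-student variability

def run_study(n_students=30):
    """Simulate one study: the observed effect is the true effect plus sampling error."""
    gains = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(n_students)]
    return sum(gains) / len(gains)

single = run_study()                               # one study, taken alone
replications = [run_study() for _ in range(50)]    # fifty independent replications
pooled = sum(replications) / len(replications)     # converging evidence

print(f"one study:        {single:+.2f}")
print(f"pooled (50 reps): {pooled:+.2f}  (true effect {TRUE_EFFECT:+.2f})")
```

Any individual study in `replications` may overstate or understate the effect, but the pooled average sits close to the truth; this is the statistical face of "converging evidence."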

John Donne told us that "no man is an island." Similarly, in science, no researcher is an island. Each investigator is connected to the research community and its knowledge base. This interconnection enables science to grow cumulatively and for research-based educational practice to be built on a convergence of knowledge from a variety of sources. Researchers constantly build on previous knowledge in order to go beyond what is currently known. This process is possible only if research findings are presented in such a way that any investigator can use them to build on.

Philosopher Daniel Dennett (1995) has said that science is "making mistakes in public. Making mistakes for all to see, in the hopes of getting the others to help with the corrections" (p. 380). We might ask those proposing an educational innovation for the evidence that they have in fact "made some mistakes in public." Legitimate scientific disciplines can easily provide such evidence. For example, scientists studying the psychology of reading once thought that reading difficulties were caused by faulty eye movements. This hypothesis has been shown to be in error, as has another that followed it, that so-called visual reversal errors were a major cause of reading difficulty. Both hypotheses were found not to square with the empirical evidence (Rayner, 1998; Share & Stanovich, 1995). The hypothesis that reading difficulties can be related to language difficulties at the phonological level has received much more support (Liberman, 1999; National Reading Panel, 2000; Rayner, Foorman, Perfetti, Pesetsky, & Seidenberg, 2002; Shankweiler, 1999; Stanovich, 2000).

After making a few such "errors" in public, reading scientists have begun, in the last 20 years, to get it right. But the only reason teachers can have confidence that researchers are now "getting it right" is that researchers made it open, public knowledge when they got things wrong. Proponents of untested and pseudoscientific educational practices will never point to cases where they "got it wrong" because they are not committed to public knowledge in the way that actual science is. These proponents do not need, as Dennett says, "to get others to help in making the corrections" because they have no intention of correcting their beliefs and prescriptions based on empirical evidence.

Education is so susceptible to fads and unproven practices because of its tacit endorsement of a personalistic view of knowledge acquisition--one that is antithetical to the scientific value of the public verifiability of knowledge claims. Many educators believe that knowledge resides within particular individuals--with particularly elite insights--who then must be called upon to dispense this knowledge to others. Indeed, some educators reject public, depersonalized knowledge in social science because they believe it dehumanizes people. Science, however, with its conception of publicly verifiable knowledge, actually democratizes knowledge. It frees practitioners and researchers from slavish dependence on authority.

Subjective, personalized views of knowledge degrade the human intellect by creating conditions that subjugate it to an elite whose "personal" knowledge is not accessible to all (Bronowski, 1956, 1977; Dawkins, 1998; Gross, Levitt, & Lewis, 1997; Medawar, 1982, 1984, 1990; Popper, 1972; Wilson, 1998). Empirical science, by generating knowledge and moving it into the public domain, is a liberating force. Teachers can consult the research and decide for themselves whether the state of the literature is as the expert portrays it. All teachers can benefit from some rudimentary grounding in the most fundamental principles of scientific inference. With knowledge of a few uncomplicated research principles, such as control, manipulation, and randomization, anyone can enter the open, public discourse about empirical findings. In fact, with the exception of a few select areas such as the eye movement research mentioned previously, much of the work described in noted summaries of reading research (e.g., Adams, 1990; Snow, Burns, & Griffin, 1998) could easily be replicated by teachers themselves.

There are many ways that the criteria of replication and peer review can be utilized in education to base practitioner training on research-based best practice. Take continuing teacher education in the form of inservice sessions, for example. Teachers and principals who select speakers for professional development activities should ask speakers for the sources of their conclusions in the form of research evidence in peer-reviewed journals. They should ask speakers for bibliographies of the research evidence published on the practices recommended in their presentations.

The science behind research-based practice relies on systematic empiricism

Empiricism is the practice of relying on observation. Scientists find out about the world by examining it. It was long believed, however, that knowledge was best obtained through pure thought or by appealing to authority; the refusal of some scholars to look through Galileo's telescope is a famous example of how empiricism has been resisted at certain points in history. Galileo claimed to have seen moons around the planet Jupiter. Another scholar, Francesco Sizi, attempted to refute Galileo, not with observations, but with the following argument:

There are seven windows in the head, two nostrils, two ears, two eyes and a mouth; so in the heavens there are two favorable stars, two unpropitious, two luminaries, and Mercury alone undecided and indifferent. From which and many other similar phenomena of nature such as the seven metals, etc., which it were tedious to enumerate, we gather that the number of planets is necessarily seven...ancient nations, as well as modern Europeans, have adopted the division of the week into seven days, and have named them from the seven planets; now if we increase the number of planets, this whole system falls to the ground...moreover, the satellites are invisible to the naked eye and therefore can have no influence on the earth and therefore would be useless and therefore do not exist. (Holton & Roller, 1958, p. 160)

Three centuries of the demonstrated power of the empirical approach give us an edge on poor Sizi. Take away those years of empiricism, and many of us might have been there nodding our heads and urging him on. In fact, the empirical approach is not necessarily obvious, which is why we often have to teach it, even in a society that is dominated by science.

Empiricism pure and simple is not enough, however. Observation itself is fine and necessary, but pure, unstructured observation of the natural world will not lead to scientific knowledge. Write down every observation you make from the time you get up in the morning to the time you go to bed on a given day. When you finish, you will have a great number of facts, but you will not have a greater understanding of the world. Scientific observation is termed systematic because it is structured so that the results of the observation reveal something about the underlying causal structure of events in the world. Observations are structured so that, depending upon the outcome of the observation, some theories of the causes of the outcome are supported and others rejected.

Teachers can benefit by understanding two things about research and causal inferences. The first is the simple (but sometimes obscured) fact that statements about best instructional practices are statements that contain a causal claim. These statements claim that one type of method or practice causes superior educational outcomes. Second, teachers must understand how the logic of the experimental method provides the critical support for making causal inferences.
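The logic of the experimental method can be sketched in a purely hypothetical simulation (the effect size, ability scores, and noise levels below are all invented for illustration). Because assignment to the two methods is decided at random, pre-existing differences in student ability cannot systematically favor either group, so the difference in group means estimates the causal effect of the method itself:

```python
import random
import statistics

random.seed(1)

# Hypothetical population: each student has a prior ability level (a potential confound).
students = [{"ability": random.gauss(100, 15)} for _ in range(200)]

METHOD_EFFECT = 6.0  # assumed true advantage of the new method, in test points

def outcome(student, new_method):
    """Test score = prior ability + method effect (if any) + individual noise."""
    boost = METHOD_EFFECT if new_method else 0.0
    return student["ability"] + boost + random.gauss(0, 5)

# Manipulation plus randomization: group membership is decided by chance,
# so ability is balanced across groups on average rather than self-selected.
random.shuffle(students)
treatment, control = students[:100], students[100:]

t_mean = statistics.mean(outcome(s, True) for s in treatment)
c_mean = statistics.mean(outcome(s, False) for s in control)

print(f"treatment mean:   {t_mean:.1f}")
print(f"control mean:     {c_mean:.1f}")
print(f"estimated effect: {t_mean - c_mean:+.1f} (true {METHOD_EFFECT:+.1f})")
```

Without the `random.shuffle` step (for example, if more able students chose the new method), the difference in means would mix the method's effect with the ability difference, and the causal claim would no longer be supported.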

Science addresses testable questions

Science advances by positing theories to account for particular phenomena in the world, by deriving predictions from these theories, by testing the predictions empirically, and by modifying the theories based on the tests (the sequence is typically theory -> prediction -> test -> theory modification). What makes a theory testable? A theory must have specific implications for observable events in the natural world.

Science deals only with a certain class of problem: the kind that is empirically solvable. That does not mean that the division between solvable and unsolvable problems is fixed forever. Quite the contrary: some problems that are currently unsolvable may become solvable as theory and empirical techniques become more sophisticated. For example, decades ago historians would not have believed that the controversial issue of whether Thomas Jefferson had a child with his slave Sally Hemings was an empirically solvable question. Yet, by 1998, this problem had become solvable through advances in genetic technology, and a paper was published in the journal Nature (Foster, Jobling, Taylor, Donnelly, de Knijff, Mieremet, Zerjal, & Tyler-Smith, 1998) on the question.

The criterion of whether a problem is "testable" is called the falsifiability criterion: a scientific theory must always be stated in such a way that the predictions derived from it can potentially be shown to be false. The falsifiability criterion states that, for a theory to be useful, the predictions drawn from it must be specific. The theory must go out on a limb, so to speak, because in telling us what should happen, the theory must also imply that certain things will not happen. If these latter things do happen, it is a clear signal that something is wrong with the theory. It may need to be modified, or we may need to look for an entirely new theory. Either way, we will end up with a theory that is closer to the truth.

In contrast, if a theory does not rule out any possible observations, then the theory can never be changed, and we are frozen into our current way of thinking with no possibility of progress. A successful theory cannot posit or account for every possible happening. Such a theory robs itself of any predictive power.

What we are talking about here is a certain type of intellectual honesty. In science, the proponent of a theory is always asked to address this question before the data are collected: "What data pattern would cause you to give up, or at least to alter, this theory?" In the same way, the falsifiability criterion is a useful consumer protection for the teacher when evaluating claims of educational effectiveness. Proponents of an educational practice should be asked for evidence; they should also be willing to admit that contrary data will lead them to abandon the practice. True scientific knowledge is held tentatively and is subject to change based on contrary evidence. Educational remedies not based on scientific evidence will often fail to put themselves at risk by specifying what data patterns would prove them false.

Objectivity and intellectual honesty

Objectivity, another form of intellectual honesty in research, means that we let nature "speak for itself" without imposing our wishes on it--that we report the results of experimentation as accurately as we can and that we interpret them as fairly as possible. (The fact that this goal is unattainable for any single human being should not dissuade us from holding objectivity as a value.)

In the language of the general public, open-mindedness means being open to possible theories and explanations for a particular phenomenon. But in science it means that, and something more. Philosopher Jonathan Adler (1998) teaches us that science values another aspect of open-mindedness even more highly: "What truly marks an open-minded person is the willingness to follow where evidence leads. The open-minded person is willing to defer to impartial investigations rather than to his own predilections...Scientific method is attunement to the world, not to ourselves" (p. 44).

Objectivity is critical to the process of science, but this does not mean that every individual scientist must be objective for science as a whole to work. Jacob Bronowski (1973, 1977) often argued that the unique power of science to reveal knowledge about the world does not arise because scientists are uniquely virtuous (that they are completely objective or that they are never biased in interpreting findings, for example). It arises because fallible scientists are immersed in a process of checks and balances--a process in which scientists are always there to criticize and to root out errors. Philosopher Daniel Dennett (1999/2000) points out that "scientists take themselves to be just as weak and fallible as anybody else, but recognizing those very sources of error in themselves...they have devised elaborate systems to tie their own hands, forcibly preventing their frailties and prejudices from infecting their results" (p. 42). More humorously, psychologist Ray Nickerson (1998) makes the related point that the vanities of scientists are actually put to use by the scientific process, noting that it is "not so much the critical attitude that individual scientists have taken with respect to their own ideas that has given science its success...but more the fact that individual scientists have been highly motivated to demonstrate that hypotheses that are held by some other scientists are false" (p. 32). These authors suggest that the strength of scientific knowledge comes not from the virtue of individual scientists but from the social process in which scientists constantly cross-check each other's knowledge and conclusions.

The public criteria of peer review and replication of findings exist in part to keep checks on the objectivity of individual scientists. Individuals cannot hide bias and nonobjectivity by personalizing their claims and keeping them from public scrutiny. Science does not accept findings that have failed the tests of replication and peer review precisely because it wants to ensure that all findings in science are in the public domain, as defined above. Purveyors of pseudoscientific educational practices fail the test of objectivity and are often identifiable by their attempts to do an "end run" around the public mechanisms of science by avoiding established peer review mechanisms and the information-sharing mechanisms that make replication possible. Instead, they attempt to promulgate their findings directly to consumers, such as teachers.

The principle of converging evidence

The principle of converging evidence has been well illustrated in the controversies surrounding the teaching of reading. The methods of systematic empiricism employed in the study of reading acquisition are many and varied. They include case studies, correlational studies, experimental studies, narratives, quasi-experimental studies, surveys, epidemiological studies and many others. The results of many of these studies have been synthesized in several important research syntheses (Adams, 1990; Ehri et al., 2001; National Reading Panel, 2000; Pressley, 1998; Rayner et al., 2002; Reading Coherence Initiative, 1999; Share & Stanovich, 1995; Snow, Burns, & Griffin, 1998; Snowling, 2000; Spear-Swerling & Sternberg, 2001; Stanovich, 2000). These studies were used in a process of establishing converging evidence, a principle that governs the drawing of the conclusion that a particular educational practice is research-based.

The principle of converging evidence is applied in situations requiring a judgment about where the "preponderance of evidence" points. Most areas of science contain competing theories. The extent to which a particular study can be seen as uniquely supporting one particular theory depends on whether other competing explanations have been ruled out. A particular experimental result is never equally relevant to all competing theories. An experiment may be a very strong test of one or two alternative theories but a weak test of others.

Contrast this idea of converging evidence with the mistaken view that a problem in science can be solved with a single, crucial experiment, or that a single critical insight can advance theory and overturn all previous knowledge. This view of scientific progress fits nicely with the operation of the news media, in which history is tracked by presenting separate, disconnected "events" in bite-sized units. This is a gross misunderstanding of scientific progress and, if taken too seriously, leads to misconceptions about how conclusions are reached about research-based practices.

One experiment rarely decides an issue, supporting one theory and ruling out all others. Issues are most often decided when the community of scientists gradually begins to agree that the preponderance of evidence supports one alternative theory rather than another. Scientists do not evaluate data from a single experiment that has finally been designed in the perfect way. They most often evaluate data from dozens of experiments, each containing some flaws but providing part of the answer.

Although there are many ways in which an experiment can go wrong (or become confounded), a scientist with experience working on a particular problem usually has a good idea of what most of the critical factors are, and there are usually only a few. The idea of converging evidence tells us to examine the pattern of flaws running through the research literature because the nature of this pattern can either support or undermine the conclusions that we might draw.

For example, suppose that the findings from a number of different experiments were largely consistent in supporting a particular conclusion. Given the imperfect nature of experiments, we would evaluate the extent and nature of the flaws in these studies. If all the experiments were flawed in a similar way, this circumstance would undermine confidence in the conclusions drawn from them because the consistency of the outcome may simply have resulted from a particular, consistent flaw. On the other hand, if all the experiments were flawed in different ways, our confidence in the conclusions increases because it is less likely that the consistency in the results was due to a contaminating factor that confounded all the experiments. As Anderson and Anderson (1996) note, "When a conceptual hypothesis survives many potential falsifications based on different sets of assumptions, we have a robust effect." (p. 742).

Suppose that five different theoretical summaries (call them A, B, C, D, and E) of a given set of phenomena exist at one time and are investigated in a series of experiments. Suppose that one set of experiments represents a strong test of theories A, B, and C, and that the data largely refute theories A and B and support C. Imagine also that another set of experiments is a particularly strong test of theories C, D, and E, and that the data largely refute theories D and E and support C. In such a situation, we would have strong converging evidence for theory C. Not only do we have data supportive of theory C, but we have data that contradict its major competitors. Note that no one experiment tests all the theories, but taken together, the entire set of experiments allows a strong inference.

In contrast, if the two sets of experiments each represent strong tests of B, C, and E, and the data strongly support C and refute B and E, the overall support for theory C would be less strong than in our previous example. The reason is that, although data supporting theory C have been generated, there is no strong evidence ruling out two viable alternative theories (A and D). Thus research is highly convergent when a series of experiments consistently supports a given theory while collectively eliminating the most important competing explanations. Although no single experiment can rule out all alternative explanations, taken collectively, a series of partially diagnostic experiments can lead to a strong conclusion if the data converge in the manner of our first example.
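
The elimination logic of the first example can be sketched in a few lines of code. This is a minimal illustration with hypothetical theory names and outcomes, not an analysis of any real study: each set of experiments is a strong test of only some theories and supports one while refuting the rest.

```python
theories = {"A", "B", "C", "D", "E"}

experiment_sets = [
    # (theories strongly tested, theory the data supported)
    ({"A", "B", "C"}, "C"),
    ({"C", "D", "E"}, "C"),
]

supported, refuted = set(), set()
for tested, winner in experiment_sets:
    supported.add(winner)
    refuted |= tested - {winner}  # strongly tested theories that failed

# Convergence is strong when the supported theory's rivals are all ruled out.
survivors = theories - refuted
print(sorted(survivors))  # ['C']
```

In the weaker scenario, where only B, C, and E are ever strongly tested, A and D would remain in the survivor set alongside C, and the case for C would be correspondingly weaker.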

Increasingly, the combining of evidence from disparate studies to form a conclusion is being done more formally through the statistical technique termed meta-analysis (Cooper & Hedges, 1994; Hedges & Olkin, 1985; Hunter & Schmidt, 1990; Rosenthal, 1995; Schmidt, 1992; Swanson, 1999), which has been used extensively to establish whether various medical practices are research based. In a medical context, meta-analysis:

involves adding together the data from many clinical trials to create a single pool of data big enough to eliminate much of the statistical uncertainty that plagues individual trials...The great virtue of meta-analysis is that clear findings can emerge from a group of studies whose findings are scattered all over the map. (Plotkin, 1996, p. 70)

Meta-analysis is used to determine whether educational practices are research based in just the same way as in medicine. The effects obtained when one practice is compared against another are expressed in a common statistical metric that allows comparison of effects across studies. The findings are then statistically amalgamated in some standard ways (Cooper & Hedges, 1994; Hedges & Olkin, 1985; Swanson, 1999), and a conclusion about differential efficacy is reached if the amalgamation process passes certain statistical criteria. In some cases, of course, no conclusion can be drawn with confidence, and the result of the meta-analysis is inconclusive.
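
As a rough sketch of the amalgamation step, the standard fixed-effect approach weights each study's effect by the inverse of its sampling variance, so that more precise studies count more. The effect sizes and variances below are invented for illustration, not drawn from any cited study.

```python
import math

# Hypothetical standardized mean differences (effect sizes) and their
# sampling variances from five studies comparing one practice to another.
effects = [0.45, 0.30, 0.62, 0.15, 0.50]
variances = [0.04, 0.09, 0.05, 0.02, 0.08]

# Fixed-effect inverse-variance weighting: more precise studies count more.
weights = [1.0 / v for v in variances]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))

# An approximate 95% confidence interval around the pooled effect.
low, high = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled d = {pooled:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
# pooled d = 0.34, 95% CI = (0.16, 0.52)
```

Note how the pooled interval is much narrower than any single study could provide: this is the "single pool of data" that Plotkin describes, emerging from findings that are individually scattered.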

More and more commentators on the educational research literature are calling for a greater emphasis on meta-analysis as a way of dampening the contentious disputes about conflicting studies that plague education and other behavioral sciences (Kavale & Forness, 1995; Rosnow & Rosenthal, 1989; Schmidt, 1996; Stanovich, 2001; Swanson, 1999). The method is useful for ending disputes that seem to be nothing more than a "he-said, she-said" debate. An emphasis on meta-analysis has often revealed that we actually have more stable and useful findings than is apparent from a perusal of the conflicts in our journals.

The National Reading Panel (2000) found just this in their meta-analysis of the evidence surrounding several issues in reading education. For example, they concluded that the results of a meta-analysis of the results of 66 comparisons from 38 different studies indicated "solid support for the conclusion that systematic phonics instruction makes a bigger contribution to children's growth in reading than alternative programs providing unsystematic or no phonics instruction" (p. 2-84). In another section of their report, the National Reading Panel reported that a meta-analysis of 52 studies of phonemic awareness training indicated that "teaching children to manipulate the sounds in language helps them learn to read. Across the various conditions of teaching, testing, and participant characteristics, the effect sizes were all significantly greater than chance and ranged from large to small, with the majority in the moderate range. Effects of phonemic awareness training on reading lasted well beyond the end of training" (p. 2-5).
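
Effect sizes of the kind the Panel describes are commonly standardized mean differences (Cohen's d). A small worked example, with invented scores rather than the Panel's data, shows how one is computed:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical reading scores: a treated group vs. a comparison group.
# By the usual convention, d near 0.2 is small, 0.5 moderate, 0.8 large.
d = cohens_d(mean1=78, sd1=10, n1=30, mean2=72, sd2=10, n2=30)
print(f"d = {d:.2f}")  # d = 0.60
```

Expressing every study's outcome on this common metric is what allows results from dozens of different instructional comparisons to be amalgamated at all.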

A statement by a task force of the American Psychological Association (Wilkinson, 1999) on statistical methods in psychology journals provides an apt summary for this section. The task force stated that investigators should not "interpret a single study's results as having importance independent of the effects reported elsewhere in the relevant literature" (p. 602). Science progresses by convergence upon conclusions. The outcomes of one study can only be interpreted in the context of the present state of the convergence on the particular issue in question.

The logic of the experimental method

Scientific thinking is based on the ideas of comparison, control, and manipulation. In a true experimental study, these characteristics of scientific investigation must be arranged to work in concert.

Comparison alone is not enough to justify a causal inference. In methodology texts, correlational investigations (which involve comparison only) are distinguished from true experimental investigations that warrant much stronger causal inferences because they involve comparison, control, and manipulation. The mere existence of a relationship between two variables does not guarantee that changes in one are causing changes in the other. Correlation does not imply causation.

There are two potential problems with drawing causal inferences from correlational evidence. The first is called the third-variable problem. It occurs when the correlation between the two variables does not indicate a direct causal path between them but arises because both variables are related to a third variable that has not even been measured.

The second is called the directionality problem. It creates potential interpretive difficulties because even if two variables have a direct causal relationship, the direction of that relationship is not indicated by the mere presence of the correlation. In short, a correlation between variables A and B could arise because changes in A are causing changes in B or because changes in B are causing changes in A. The mere presence of the correlation does not allow us to decide between these two possibilities.
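
A short simulation makes the third-variable problem concrete. The variables here are entirely hypothetical: A and B correlate only because both are driven by an unmeasured Z, even though neither causes the other.

```python
import random

random.seed(1)

# Unmeasured third variable Z drives both A and B; A and B have no
# direct causal link in either direction.
n = 5000
z = [random.gauss(0, 1) for _ in range(n)]
a = [zi + random.gauss(0, 1) for zi in z]
b = [zi + random.gauss(0, 1) for zi in z]

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx * vy) ** 0.5

print(round(corr(a, b), 2))  # close to 0.5, despite no causal path between A and B
```

An observer who saw only A and B would find a substantial correlation and might wrongly infer that one causes the other.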

The heart of the experimental method lies in manipulation and control. In contrast to a correlational study, where the investigator simply observes whether the natural fluctuation in two variables displays a relationship, the investigator in a true experiment manipulates the variable thought to be the cause (the independent variable) and looks for an effect on the variable thought to be the effect (the dependent variable) while holding all other variables constant by control and randomization. This method addresses the third-variable problem directly: in the natural world, many different things are related, and the experimental method may be viewed as a way of prying apart these naturally occurring relationships. It does so by isolating one particular variable (the hypothesized cause), manipulating it, and holding everything else constant (control).

When manipulation is combined with a procedure known as random assignment (in which the subjects themselves do not determine which experimental condition they will be in but, instead, are randomly assigned to one of the experimental groups), scientists can rule out alternative explanations of data patterns. By using manipulation, experimental control, and random assignment, investigators construct stronger comparisons so that the outcome eliminates alternative theories and explanations.
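
A simulation with invented numbers shows why random assignment matters. Here each student has an unobserved prior ability, and a new teaching method truly adds 5 points; when stronger students self-select into the method, the naive comparison badly overstates its effect, while random assignment recovers it.

```python
import random

random.seed(42)

# Each student has an unobserved prior ability; the new method truly
# adds 5 points to the outcome.
students = [random.gauss(100, 15) for _ in range(2000)]
TRUE_EFFECT = 5.0

def mean(xs):
    return sum(xs) / len(xs)

# Self-selection: stronger students opt into the new method, so the
# naive comparison mixes the treatment effect with prior differences.
ranked = sorted(students)
rest, chosen = ranked[:1000], ranked[1000:]
biased = mean([s + TRUE_EFFECT for s in chosen]) - mean(rest)

# Random assignment: a coin flip decides the group, so prior ability is
# balanced on average and the difference isolates the treatment effect.
random.shuffle(students)
treat, control = students[:1000], students[1000:]
unbiased = mean([s + TRUE_EFFECT for s in treat]) - mean(control)

print(round(biased, 1), round(unbiased, 1))  # biased estimate far above 5; unbiased near 5
```

The only difference between the two comparisons is how students ended up in their groups, which is precisely the alternative explanation that random assignment rules out.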

The need for both correlational methods and true experiments

As strong as they are methodologically, studies employing true experimental logic are not the only type that can be used to draw conclusions. Correlational studies have value. The results from many different types of investigation, including correlational studies, can be amalgamated to derive a general conclusion, and the basis for that conclusion rests on the convergence observed across the variety of methods used. This is most certainly true in classroom and curriculum research. It is necessary to amalgamate the results not only from experimental investigations but also from correlational studies, nonequivalent control group studies, time series designs, and various other quasi-experimental and multivariate correlational designs. All have their strengths and weaknesses. For example, it is often (but not always) the case that experimental investigations are high in internal validity but limited in external validity, whereas correlational studies are often high in external validity but low in internal validity.

Internal validity concerns whether we can infer a causal effect for a particular variable. The more a study employs the logic of a true experiment (i.e., includes manipulation, control, and randomization), the more we can make a strong causal inference. External validity concerns the generalizability of the conclusion to the population and setting of interest. Internal and external validity are often traded off across different methodologies. Experimental laboratory investigations are high in internal validity but may not fully address concerns about external validity. Field classroom investigations, on the other hand, are often quite high in external validity but because of the logistical difficulties involved in carrying them out, they are often quite low in internal validity. That is why we need to look for a convergence of results, not just consistency from one method. Convergence increases our confidence in the external and internal validity of our conclusions.

Again, this underscores why correlational studies can contribute to knowledge. First, some variables simply cannot be manipulated for ethical reasons (for instance, human malnutrition or physical disabilities). Other variables, such as birth order, sex, and age, are inherently correlational because they cannot be manipulated, and therefore the scientific knowledge concerning them must be based on correlational evidence. Finally, logistical difficulties in classroom and curriculum research often make it impossible to achieve the logic of the true experiment. However, this circumstance is not unique to educational or psychological research. Astronomers obviously cannot manipulate all the variables affecting the objects they study, yet they are able to arrive at conclusions.

Complex correlational techniques are essential in the absence of experimental research because statistics such as multiple regression, path analysis, and structural equation modeling allow for the partial control of third variables when those variables can be measured. These statistics allow us to recalculate the correlation between two variables after the influence of other variables is removed. If a potential third variable can be measured, complex correlational statistics can help us determine whether that third variable is determining the relationship. These correlational statistics and designs help to rule out certain causal hypotheses, even if they cannot demonstrate the true causal relation definitively.
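
The simplest of these techniques, the first-order partial correlation, can be sketched with simulated data. The formula is standard; the variables are hypothetical: Z drives both A and B, so the raw A-B correlation is sizable, but it collapses toward zero once the measured Z is statistically removed.

```python
import random

random.seed(7)

# Z is a measured third variable that drives both A and B; A and B have
# no direct causal link.
n = 5000
z = [random.gauss(0, 1) for _ in range(n)]
a = [zi + random.gauss(0, 1) for zi in z]
b = [zi + random.gauss(0, 1) for zi in z]

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx * vy) ** 0.5

# Standard first-order partial correlation of A and B controlling for Z.
r_ab, r_az, r_bz = corr(a, b), corr(a, z), corr(b, z)
partial = (r_ab - r_az * r_bz) / (((1 - r_az**2) * (1 - r_bz**2)) ** 0.5)

print(round(r_ab, 2), round(partial, 2))  # raw correlation sizable; partial near 0
```

This is exactly the logic of ruling out a causal hypothesis without an experiment: if the A-B relationship vanishes once Z is controlled, the hypothesis that A directly causes B loses support.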

Stages of scientific investigation: the role of case studies and qualitative investigations

The educational literature includes many qualitative investigations that focus less on issues of causal explanation and variable control and more on thick description, in the manner of the anthropologist (Geertz, 1973, 1979). The context of a person's behavior is described as much as possible from the standpoint of the participant. Many different fields (e.g., anthropology, psychology, education) contain case studies where the focus is detailed description and contextualization of the situation of a single participant (or very few participants).

The usefulness of case studies and qualitative investigations is strongly determined by how far scientific investigation has advanced in a particular area. The insights gained from case studies or qualitative investigations may be quite useful in the early stages of an investigation of a certain problem. They can help us determine which variables deserve more intense study by drawing attention to heretofore unrecognized aspects of a person's behavior and by suggesting how understanding of behavior might be sharpened by incorporating the participant's perspective.

However, when we move from the early stages of scientific investigation, where case studies may be very useful, to the more mature stages of theory testing--where adjudicating between causal explanations is the main task--the situation changes drastically. Case studies and qualitative description are not useful at the later stages of scientific investigation because they cannot be used to confirm or disconfirm a particular causal theory. They lack the comparative information necessary to rule out alternative explanations.

Where qualitative investigations are useful relates strongly to a distinction in philosophy of science between the context of discovery and the context of justification. Qualitative research, case studies, and clinical observations support a context of discovery where, as Levin and O'Donnell (2000) note in an educational context, such research must be regarded as "preliminary/exploratory, observational, hypothesis generating" (p. 26). They rightly point to the essential importance of qualitative investigations because "in the early stages of inquiry into a research topic, one has to look before one can leap into designing interventions, making predictions, or testing hypotheses" (p. 26). The orientation provided by qualitative investigations is critical in such cases. Even more important, the results of quantitative investigations--which must sometimes abstract away some of the contextual features of a situation--are often contextualized by the thick situational description provided by qualitative work.

However, in the context of justification, variables must be measured precisely, large groups must be tested to make sure the conclusion generalizes and, most importantly, many variables must be controlled because alternative causal explanations must be ruled out. Gersten (2001) summarizes the value of qualitative research accurately when he says that "despite the rich insights they often provide, descriptive studies cannot be used as evidence for an intervention's efficacy...descriptive research can only suggest innovative strategies to teach students and lay the groundwork for development of such strategies" (p. 47). Qualitative research does, however, help to identify fruitful directions for future experimental studies.

Nevertheless, here is why sole reliance on qualitative techniques to determine the effectiveness of curricula and instructional strategies has become problematic. A researcher generally wants to do one of two things.

Objective A

The researcher wishes to make some type of statement about a relationship, however minimal. That is, the researcher at least wants to use terms like greater than, less than, or equal to, saying that such and such an educational program or practice is better than another. "Better than" and "worse than" are, of course, quantitative statements, and, in the context of what leads to or fosters greater educational achievement, they are causal statements as well. As quantitative causal statements, the support for such claims must be found in the experimental logic outlined above. To justify such statements, the researcher must adhere to the canons of quantitative research logic.

Objective B

The researcher seeks to adhere to an exclusively qualitative path that abjures statements about relationships and never uses comparative terms of magnitude. The investigator desires to simply engage in thick description of a domain that may well prompt hypotheses when later work moves on to the more quantitative methods that are necessary to justify a causal inference.

Investigators pursuing Objective B are doing essential work. They provide qualitative information and suggestions for richer hypotheses to study. In education, however, investigators sometimes claim to be pursuing Objective B but slide over into Objective A without realizing they have made a crucial switch. They want to make comparative, or quantitative, statements, but have not carried out the proper types of investigation to justify them. They want to say that a certain educational program is better than another (that is, it causes better school outcomes). They want to give educational strictures that are assumed to hold for a population of students, not just for the single or few individuals who were the participants in the qualitative study. They want to condemn an educational practice (and, by inference, deem an alternative quantitatively and causally better). But instead of taking the necessary course of pursuing Objective A, they carry out their investigation in the manner of Objective B.

Let's recall why the use of single case or qualitative description as evidence in support of a particular causal explanation is inappropriate. The idea of alternative explanations is critical to an understanding of theory testing. The goal of experimental design is to structure events so that support of one particular explanation simultaneously disconfirms other explanations. Scientific progress can occur only if the data that are collected rule out some explanations. Science sets up conditions for the natural selection of ideas. Some survive empirical testing and others do not.

This is the honing process by which ideas are sifted so that those that contain the most truth are found. But there must be selection in this process: data collected as support for a particular theory must not leave many other alternative explanations as equally viable candidates. For this reason, scientists construct control or comparison groups in their experimentation. These groups are formed so that, when their results are compared with those from an experimental group, some alternative explanations are ruled out.

Case studies and qualitative description lack the comparative information necessary to prove that a particular theory or educational practice is superior, because they fail to test an alternative; they rule nothing out. Take the seminal work of Jean Piaget for example. His case studies were critical in pointing developmental psychology in new and important directions, but many of his theoretical conclusions and causal explanations did not hold up in controlled experiments (Bjorklund, 1995; Goswami, 1998; Siegler, 1991).

In summary, as educational psychologist Richard Mayer (2000) notes, "the domain of science includes both some quantitative and qualitative methodologies" (p. 39), and the key is to use each where it is most effective (see Kamil, 1995). Likewise, in their recent book on research-based best practices in comprehension instruction, Block and Pressley (2002) argue that future progress in understanding how comprehension works will depend on a healthy interaction between qualitative and quantitative approaches. They point out that getting an initial idea of the comprehension processes involved in hypertext and Web-based environments will involve detailed descriptive studies using think-alouds and assessments of qualitative decision making. Qualitative studies of real reading environments will set the stage for more controlled investigations of causal hypotheses.

The progression to more powerful methods

A final useful concept is the progression to more powerful research methods ("more powerful" in this context meaning more diagnostic of a causal explanation). Research on a particular problem often proceeds from weaker methods (ones less likely to yield a causal explanation) to ones that allow stronger causal inferences. For example, interest in a particular hypothesis may originally emerge from a particular case study of unusual interest. This is the proper role for case studies: to suggest hypotheses for further study with more powerful techniques and to motivate scientists to apply more rigorous methods to a research problem. Thus, following the case studies, researchers often undertake correlational investigations to verify whether the link between variables is real rather than the result of the peculiarities of a few case studies. If the correlational studies support the relationship between relevant variables, then researchers will attempt experiments in which variables are manipulated in order to isolate a causal relationship between the variables.

Summary of principles that support research-based inferences about best practice

Our sketch of the principles that support research-based inferences about best practice in education has revealed that:

  • Science progresses by investigating solvable, or testable, empirical problems.
  • To be testable, a theory must yield predictions that could possibly be shown to be wrong.
  • The concepts and theories of science evolve as evidence accumulates. Scientific knowledge is not infallible knowledge, but knowledge that has at least passed some minimal tests. The theories behind research-based practice can be proven wrong, and therefore they contain a mechanism for growth and advancement.
  • Theories are tested by systematic empiricism. The data obtained from empirical research are in the public domain in the sense that they are presented in a manner that allows replication and criticism by other scientists.
  • Data and theories in science are considered in the public domain only after publication in peer-reviewed scientific journals.
  • Empiricism is systematic because it strives for the logic of control and manipulation that characterizes a true experiment.
  • Correlational techniques are helpful when the logic of an experiment cannot be approximated, but because these techniques only help rule out hypotheses, they are considered weaker than true experimental methods.
  • Researchers use many different methods to arrive at their conclusions, and the strengths and weaknesses of these methods vary. Most often, conclusions are drawn only after a slow accumulation of data from many studies.

Scientific thinking in educational practice: Reason-based practice in the absence of direct evidence

Some areas in educational research, to date, lack a research-based consensus, for a number of reasons. Perhaps the problem or issue has not been researched extensively. Perhaps research into the issue is in the early stages of investigation, where descriptive studies are suggesting interesting avenues, but no controlled research justifying a causal inference has been completed. Perhaps many correlational studies and experiments have been conducted on the issue, but the research evidence has not yet converged in a consistent direction.

Even if teachers know the principles of scientific evaluation described earlier, the research literature sometimes fails to give them clear direction. They will have to fall back on their own reasoning processes as informed by their own teaching experiences. In those cases, teachers still have many ways of reasoning scientifically.

Tracing the link from scientific research to scientific thinking in practice

Scientific thinking in practice can be done in several ways. Earlier we discussed different types of professional publications that teachers can read to improve their practice. The most important defining feature of these outlets is whether they are peer reviewed. Another defining feature is whether the publication contains primary research rather than opinion pieces or essays on educational issues. If a journal presents primary research, we can evaluate the research using the formal scientific principles outlined above.

If the journal is presenting opinion pieces about what constitutes best practice, we need to trace the link between those opinions and archival peer-reviewed research. We would look to see whether the authors have based their opinions on peer-reviewed research by reading the reference list. Do the authors provide a significant amount of original research citations (is their opinion based on more than one study)? Do the authors cite work other than their own (have the results been replicated)? Are the cited journals peer-reviewed? For example, in the case of best practice for reading instruction, if we came across an article in an opinion-oriented journal such as Intervention in School and Clinic, we might look to see if the authors have cited work that has appeared in such peer-reviewed journals as Journal of Educational Psychology , Elementary School Journal , Journal of Literacy Research , Scientific Studies of Reading , or the Journal of Learning Disabilities .

These same evaluative criteria can be applied to presenters at professional development workshops or papers given at conferences. Are they conversant with primary research in the area on which they are presenting? Can they provide evidence for their methods and does that evidence represent a scientific consensus? Do they understand what is required to justify causal statements? Are they open to the possibility that their claims could be proven false? What evidence would cause them to shift their thinking?

An important principle of scientific evaluation--the connectivity principle (Stanovich, 2001)--can be generalized to scientific thinking in the classroom. Suppose a teacher comes upon a new teaching method, curriculum component, or process. The method is advertised as totally new, which provides an explanation for the lack of direct empirical evidence for the method. A lack of direct empirical evidence should be grounds for suspicion, but should not immediately rule it out. The principle of connectivity means that the teacher now has another question to ask: "OK, there is no direct evidence for this method, but how is the theory behind it (the causal model of the effects it has) connected to the research consensus in the literature surrounding this curriculum area?" Even in the absence of direct empirical evidence on a particular method or technique, there could be a theoretical link to the consensus in the existing literature that would support the method.

For further tips on translating research into classroom practice, see Warby, Greene, Higgins, & Lovitt (1999). They present a format for selecting, reading, and evaluating research articles, and then importing the knowledge gained into the classroom.

Let's take an imaginary example from the domain of treatments for children with extreme reading difficulties. Imagine that two treatments have been introduced to a teacher. No direct empirical tests of efficacy have been carried out using either treatment. The first, Treatment A, is a training program to facilitate the awareness of the segmental nature of language at the phonological level. The second, Treatment B, involves giving children training in vestibular sensitivity by having them walk on balance beams while blindfolded. Treatments A and B are equal in one respect--neither has had a direct empirical test of its efficacy, which reflects badly on both. Nevertheless, one of the treatments has the edge when it comes to the principle of connectivity. Treatment A makes contact with a broad consensus in the research literature that children with extraordinary reading difficulties are hampered because of insufficiently developed awareness of the segmental structure of language. Treatment B is not connected to any corresponding research literature consensus. Reason dictates that Treatment A is a better choice, even though neither has been directly tested.

Direct connections with research-based evidence and use of the connectivity principle when direct empirical evidence is absent give us necessary cross-checks on some of the pitfalls that arise when we rely solely on personal experience. Drawing upon personal experience is necessary and desirable in a veteran teacher, but it is not sufficient for making critical judgments about the effectiveness of an instructional strategy or curriculum. The insufficiency of personal experience becomes clear if we consider that the educational judgments--even of veteran teachers--often are in conflict. That is why we have to adjudicate conflicting knowledge claims using the scientific method.

Let us consider two further examples that demonstrate why we need controlled experimentation to verify even the most seemingly definitive personal observations. In the 1990s, considerable media and professional attention were directed at a method for aiding the communicative capacity of autistic individuals. This method is called facilitated communication. Autistic individuals who had previously been nonverbal were reported to have typed highly literate messages on a keyboard when their hands and arms were supported over the typewriter by a so-called facilitator. These startlingly verbal performances by autistic children who had previously shown very limited linguistic behavior raised incredible hopes among many parents of autistic children.

Unfortunately, claims for the efficacy of facilitated communication were disseminated by many media outlets before any controlled studies had been conducted. Since then, many studies have appeared in journals in speech science, linguistics, and psychology, and each has unequivocally demonstrated the same thing: the autistic child's performance is dependent upon tactile cueing from the facilitator. In the experiments, it was shown that when both child and facilitator were looking at the same drawing, the child typed the correct name of the drawing. When the viewing was occluded so that the child and the facilitator were shown different drawings, the child typed the name of the facilitator's drawing, not the one that the child herself was looking at (Beck & Pirovano, 1996; Burgess, Kirsch, Shane, Niederauer, Graham, & Bacon, 1998; Hudson, Melita, & Arnold, 1993; Jacobson, Mulick, & Schwartz, 1995; Wheeler, Jacobson, Paglieri, & Schwartz, 1993). The experimental studies directly contradicted the extensive case studies of the experiences of the facilitators of the children. These individuals invariably deny that they have inadvertently cued the children. Their personal experience, honest and heartfelt though it is, suggests the wrong model for explaining this outcome. The case study evidence told us something about the social connections between the children and their facilitators. But that is different from what we learned from the controlled experimental studies, which provided direct tests of the claim that the technique unlocks hidden linguistic skills in these children. Even if the claim had turned out to be true, verification of its truth would have come not from the case studies or personal experiences, but from the necessary controlled studies.

Another example of the need for controlled experimentation to test the insights gleaned from personal experience is provided by the concept of learning styles--the idea that various modality preferences (or variants of this theme in terms of analytic/holistic processing or "learning styles") will interact with instructional methods, allowing teachers to individualize learning. The idea seems to "feel right" to many of us. It does seem to have some face validity, but it has never been demonstrated to work in practice. Its modern incarnation (see Gersten, 2001; Spear-Swerling & Sternberg, 2001) takes a particularly harmful form, one in which students identified as auditory learners are matched with phonics instruction and visual and/or kinesthetic learners are matched with holistic instruction. The newest form is particularly troublesome because the major syntheses of reading research demonstrate that many children can benefit from phonics-based instruction, not just "auditory" learners (National Reading Panel, 2000; Rayner et al., 2002; Stanovich, 2000). Excluding students identified as "visual/kinesthetic" learners from effective phonics instruction is a bad instructional practice--bad not only because it lacks a research base, but because it is actually contradicted by research.

A thorough review of the literature by Arter and Jenkins (1979) found no consistent evidence for the idea that modality strengths and weaknesses could be identified in a reliable and valid way that warranted differential instructional prescriptions. A review of the research evidence by Tarver and Dawson (1978) found likewise that the idea of modality preferences did not hold up to empirical scrutiny. They concluded, "This review found no evidence supporting an interaction between modality preference and method of teaching reading" (p. 17). Kampwirth and Bates (1980) confirmed the conclusions of the earlier reviews, although they stated their conclusions a little more baldly: "Given the rather general acceptance of this idea, and its common-sense appeal, one would presume that there exists a body of evidence to support it. Unfortunately...no such firm evidence exists" (p. 598).

More recently, the idea of modality preferences (also referred to as learning styles, holistic versus analytic processing styles, and right versus left hemispheric processing) has again surfaced in the reading community. The focus of the recent implementations refers more to teaching to strengths, as opposed to remediating weaknesses (the latter being more the focus of the earlier efforts in the learning disabilities field). The research of the 1980s was summarized in an article by Steven Stahl (1988). His conclusions are largely negative because his review of the literature indicates that the methods that have been used in actual implementations of the learning styles idea have not been validated. Stahl concludes: "As intuitively appealing as this notion of matching instruction with learning style may be, past research has turned up little evidence supporting the claim that different teaching methods are more or less effective for children with different reading styles" (p. 317).

Obviously, such research reviews cannot prove that there is no possible implementation of the idea of learning styles that could work. However, the burden of proof in science rests on the investigator who is making a new claim about the nature of the world. It is not incumbent upon critics of a particular claim to show that it "couldn't be true." The question teachers might ask is, "Have the advocates for this new technique provided sufficient proof that it works?" Their burden of responsibility is to provide proof that their favored methods work. Teachers should not allow curricular advocates to avoid this responsibility by introducing confusion about where the burden of proof lies. For example, it is totally inappropriate and illogical to ask "Has anyone proved that it can't work?" One does not "prove a negative" in science. Instead, hypotheses are stated, and then must be tested by those asserting the hypotheses.

Reason-based practice in the classroom

Effective teachers engage in scientific thinking in their classrooms in a variety of ways: when they assess and evaluate student performance, develop Individual Education Plans (IEPs) for their students with disabilities, reflect on their practice, or engage in action research. For example, consider the assessment and evaluation activities in which teachers engage. The scientific mechanisms of systematic empiricism--iterative testing of hypotheses that are revised after the collection of data--can be seen when teachers plan for instruction: they evaluate their students' previous knowledge, develop hypotheses about the best methods for attaining lesson objectives, develop a teaching plan based on those hypotheses, observe the results, and base further instruction on the evidence collected.

This assessment cycle looks even more like the scientific method when teachers (as part of a multidisciplinary team) are developing and implementing an IEP for a student with a disability. The team must assess and evaluate the student's learning strengths and difficulties, develop hypotheses about the learning problems, select curriculum goals and objectives, base instruction on the hypotheses and the goals selected, teach, and evaluate the outcomes of that teaching. If the teaching is successful (goals and objectives are attained), the cycle continues with new goals. If the teaching has been unsuccessful (goals and objectives have not been achieved), the cycle begins again with new hypotheses. We can also see the principle of converging evidence here. No one piece of evidence might be decisive, but collectively the evidence might strongly point in one direction.

Scientific thinking in practice occurs when teachers engage in action research. Action research is research into one's own practice that has, as its main aim, the improvement of that practice. Stokes (1997) discusses how many advances in science came about as a result of "use-inspired research" which draws upon observations in applied settings. According to McNiff, Lomax, and Whitehead (1996), action research shares several characteristics with other types of research: "it leads to knowledge, it provides evidence to support this knowledge, it makes explicit the process of enquiry through which knowledge emerges, and it links new knowledge with existing knowledge" (p. 14). Notice the links to several important concepts: systematic empiricism, publicly verifiable knowledge, converging evidence, and the connectivity principle.

Teachers and Researchers: Commonality in a "what works" epistemology

Many educational researchers have drawn attention to the epistemological commonalities between researchers and teachers (Gersten, Vaughn, Deshler, & Schiller, 1997; Stanovich, 1993/1994). A "what works" epistemology is a critical source of underlying unity in the world views of educators and researchers (Gersten & Dimino, 2001; Gersten, Chard, & Baker, 2000). Empiricism, broadly construed (as opposed to the caricature of white coats, numbers, and test tubes that is often used to discredit scientists), is about watching the world, manipulating it when possible, observing outcomes, and trying to associate outcomes with features observed and with manipulations. This is what the best teachers do. And this is true despite the grain of truth in the statement that "teaching is an art." As Berliner (1987) notes: "No one I know denies the artistic component to teaching. I now think, however, that such artistry should be research-based. I view medicine as an art, but I recognize that without its close ties to science it would be without success, status, or power in our society. Teaching, like medicine, is an art that also can be greatly enhanced by developing a close relationship to science" (p. 4).

In his review of the work of the Committee on the Prevention of Reading Difficulties for the National Research Council of the National Academy of Sciences (Snow, Burns, & Griffin, 1998), Pearson (1999) warned educators that resisting evaluation by hiding behind the "art of teaching" defense will eventually threaten teacher autonomy. Teachers need creativity, but they also need to demonstrate that they know what evidence is, and that they recognize that they practice in a profession based in behavioral science. While making it absolutely clear that he opposes legislative mandates, Pearson (1999) cautions:

We have a professional responsibility to forge best practice out of the raw materials provided by our most current and most valid readings of research...If professional groups wish to retain the privileges of teacher prerogative and choice that we value so dearly, then the price we must pay is constant attention to new knowledge as a vehicle for fine-tuning our individual and collective views of best practice. This is the path that other professions, such as medicine, have taken in order to maintain their professional prerogative, and we must take it, too. My fear is that if the professional groups in education fail to assume this responsibility squarely and openly, then we will find ourselves victims of the most onerous of legislative mandates (p. 245).

Those hostile to a research-based approach to educational practice like to imply that the insights of teachers and those of researchers conflict. Nothing could be farther from the truth. Take reading, for example. Teachers often do observe exactly what the research shows--that most of their children who are struggling with reading have trouble decoding words. In an address to the Reading Hall of Fame at the 1996 meeting of the International Reading Association, Isabel Beck (1996) illustrated this point by reviewing her own intellectual history (see Beck, 1998, for an archival version). She relates her surprise upon coming as an experienced teacher to the Learning Research and Development Center at the University of Pittsburgh and finding "that there were some people there (psychologists) who had not taught anyone to read, yet they were able to describe phenomena that I had observed in the course of teaching reading" (Beck, 1996, p. 5). In fact, what Beck was observing was the triangulation of two empirical approaches to the same issue--two perspectives on the same underlying reality. And she also came to appreciate how these two perspectives fit together: "What I knew were a number of whats--what some kids, and indeed adults, do in the early course of learning to read. And what the psychologists knew were some whys--why some novice readers might do what they do" (pp. 5-6).

Beck speculates on why the disputes about early reading instruction have dragged on so long without resolution and posits that it is due to the power of a particular kind of evidence--evidence from personal observation. The determination of whole language advocates is no doubt sustained because "people keep noticing the fact that some children or perhaps many children--in any event a subset of children--especially those who grow up in print-rich environments, don't seem to need much more of a boost in learning to read than to have their questions answered and to point things out to them in the course of dealing with books and various other authentic literacy acts" (Beck, 1996, p. 8). But Beck points out that it is equally true that proponents of the importance of decoding skills are also fueled by personal observation: "People keep noticing the fact that some children or perhaps many children--in any event a subset of children--don't seem to figure out the alphabetic principle, let alone some of the intricacies involved without having the system directly and systematically presented" (p. 8). But clearly we have lost sight of the basic fact that the two observations are not mutually exclusive--one doesn't negate the other. This is just the type of situation for which the scientific method was invented: a situation requiring a consensual view, triangulated across differing observations by different observers.

Teachers, like scientists, are ruthless pragmatists (Gersten & Dimino, 2001; Gersten, Chard, & Baker, 2000). They believe that some explanations and methods are better than others. They think there is a real world out there--a world in flux, obviously--but still one that is trackable by triangulating observations and observers. They believe that there are valid, if fallible, ways of finding out which educational practices are best. Teachers believe in a world that is predictable and controllable by manipulations that they use in their professional practice, just as scientists do. Researchers and educators are kindred spirits in their approach to knowledge, an important fact that can be used to forge a coalition to bring hard-won research knowledge to light in the classroom.

  • Adams, M. J. (1990). Beginning to read: Thinking and learning about print . Cambridge, MA: MIT Press.
  • Adler, J. E. (1998, January). Open minds and the argument from ignorance. Skeptical Inquirer , 22 (1), 41-44.
  • Anderson, C. A., & Anderson, K. B. (1996). Violent crime rate studies in philosophical context: A destructive testing approach to heat and Southern culture of violence effects. Journal of Personality and Social Psychology , 70 , 740-756.
  • Anderson, R. C., Hiebert, E. H., Scott, J., & Wilkinson, I. (1985). Becoming a nation of readers . Washington, D. C.: National Institute of Education.
  • Arter, A., & Jenkins, J. (1979). Differential diagnosis-prescriptive teaching: A critical appraisal. Review of Educational Research , 49 , 517-555.
  • Beck, A. R., & Pirovano, C. M. (1996). Facilitated communications' performance on a task of receptive language with children and youth with autism. Journal of Autism and Developmental Disorders , 26 , 497-512.
  • Beck, I. L. (1996, April). Discovering reading research: Why I didn't go to law school . Paper presented at the Reading Hall of Fame, International Reading Association, New Orleans.
  • Beck, I. (1998). Understanding beginning reading: A journey through teaching and research. In J. Osborn & F. Lehr (Eds.), Literacy for all: Issues in teaching and learning (pp. 11-31). New York: Guilford Press.
  • Berliner, D. C. (1987). Knowledge is power: A talk to teachers about a revolution in the teaching profession. In D. C. Berliner & B. V. Rosenshine (Eds.), Talks to teachers (pp. 3-33). New York: Random House.
  • Bjorklund, D. F. (1995). Children's thinking: Developmental function and individual differences (Second Edition) . Pacific Grove, CA: Brooks/Cole.
  • Block, C. C., & Pressley, M. (Eds.). (2002). Comprehension instruction: Research-based best practices . New York: Guilford Press.
  • Bronowski, J. (1956). Science and human values . New York: Harper & Row.
  • Bronowski, J. (1973). The ascent of man . Boston: Little, Brown.
  • Bronowski, J. (1977). A sense of the future . Cambridge: MIT Press.
  • Burgess, C. A., Kirsch, I., Shane, H., Niederauer, K., Graham, S., & Bacon, A. (1998). Facilitated communication as an ideomotor response. Psychological Science , 9 , 71-74.
  • Chard, D. J., & Osborn, J. (1999). Phonics and word recognition in early reading programs: Guidelines for accessibility. Learning Disabilities Research & Practice , 14 , 107-117.
  • Cooper, H. & Hedges, L. V. (Eds.), (1994). The handbook of research synthesis . New York: Russell Sage Foundation.
  • Cunningham, P. M., & Allington, R. L. (1994). Classrooms that work: They all can read and write . New York: HarperCollins.
  • Dawkins, R. (1998). Unweaving the rainbow . Boston: Houghton Mifflin.
  • Dennett, D. C. (1995). Darwin's dangerous idea: Evolution and the meanings of life . New York: Simon & Schuster.
  • Dennett, D. C. (1999/2000, Winter). Why getting it right matters. Free Inquiry , 20 (1), 40-43.
  • Ehri, L. C., Nunes, S., Stahl, S., & Willows, D. (2001). Systematic phonics instruction helps students learn to read: Evidence from the National Reading Panel's Meta-Analysis. Review of Educational Research , 71 , 393-447.
  • Foster, E. A., Jobling, M. A., Taylor, P. G., Donnelly, P., Deknijff, P., Renemieremet, J., Zerjal, T., & Tyler-Smith, C. (1998). Jefferson fathered slave's last child. Nature , 396 , 27-28.
  • Fraenkel, J. R., & Wallen, N. R. (1996). How to design and evaluate research in education (Third Edition). New York: McGraw-Hill.
  • Geertz, C. (1973). The interpretation of cultures . New York: Basic Books.
  • Geertz, C. (1979). From the native's point of view: On the nature of anthropological understanding. In P. Rabinow & W. Sullivan (Eds.), Interpretive social science (pp. 225-242). Berkeley: University of California Press.
  • Gersten, R. (2001). Sorting out the roles of research in the improvement of practice. Learning Disabilities: Research & Practice , 16 (1), 45-50.
  • Gersten, R., Chard, D., & Baker, S. (2000). Factors enhancing sustained use of research-based instructional practices. Journal of Learning Disabilities , 33 (5), 445-457.
  • Gersten, R., & Dimino, J. (2001). The realities of translating research into classroom practice. Learning Disabilities: Research & Practice , 16 (2), 120-130.
  • Gersten, R., Vaughn, S., Deshler, D., & Schiller, E. (1997).What we know about using research findings: Implications for improving special education practice. Journal of Learning Disabilities , 30 (5), 466-476.
  • Goswami, U. (1998). Cognition in children . Hove, England: Psychology Press.
  • Gross, P. R., Levitt, N., & Lewis, M. (1997). The flight from science and reason . New York: New York Academy of Science.
  • Hedges, L. V., & Olkin, I. (1985). Statistical Methods for Meta-Analysis . New York: Academic Press.
  • Holton, G., & Roller, D. (1958). Foundations of modern physical science . Reading, MA: Addison-Wesley.
  • Hudson, A., Melita, B., & Arnold, N. (1993). A case study assessing the validity of facilitated communication. Journal of Autism and Developmental Disorders , 23 , 165-173.
  • Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in research findings . Newbury Park, CA: Sage.
  • Jacobson, J. W., Mulick, J. A., & Schwartz, A. A. (1995). A history of facilitated communication: Science, pseudoscience, and antiscience. American Psychologist , 50 , 750-765.
  • Kamil, M. L. (1995). Some alternatives to paradigm wars in literacy research. Journal of Reading Behavior , 27 , 243-261.
  • Kampwirth, R., & Bates, E. (1980). Modality preference and teaching method: A review of the research. Academic Therapy , 15 , 597-605.
  • Kavale, K. A., & Forness, S. R. (1995). The nature of learning disabilities: Critical elements of diagnosis and classification . Mahweh, NJ: Lawrence Erlbaum Associates.
  • Levin, J. R., & O'Donnell, A. M. (2000). What to do about educational research's credibility gaps? Issues in Education: Contributions from Educational Psychology , 5 , 1-87.
  • Liberman, A. M. (1999). The reading researcher and the reading teacher need the right theory of speech. Scientific Studies of Reading , 3 , 95-111.
  • Magee, B. (1985). Philosophy and the real world: An introduction to Karl Popper . LaSalle, IL: Open Court.
  • Mayer, R. E. (2000). What is the place of science in educational research? Educational Researcher , 29 (6), 38-39.
  • McNiff, J., Lomax, P., & Whitehead, J. (1996). You and your action research project . London: Routledge.
  • Medawar, P. B. (1982). Pluto's republic . Oxford: Oxford University Press.
  • Medawar, P. B. (1984). The limits of science . New York: Harper & Row.
  • Medawar, P. B. (1990). The threat and the glory . New York: Harper Collins.
  • Moats, L. (1999). Teaching reading is rocket science . Washington, DC: American Federation of Teachers.
  • National Reading Panel: Reports of the Subgroups. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction . Washington, DC.
  • Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology , 2 , 175-220.
  • Pearson, P. D. (1993). Teaching and learning to read: A research perspective. Language Arts , 70 , 502-511.
  • Pearson, P. D. (1999). A historically based review of preventing reading difficulties in young children. Reading Research Quarterly , 34 , 231-246.
  • Plotkin, D. (1996, June). Good news and bad news about breast cancer. Atlantic Monthly , 53-82.
  • Popper, K. R. (1972). Objective knowledge . Oxford: Oxford University Press.
  • Pressley, M. (1998). Reading instruction that works: The case for balanced teaching . New York: Guilford Press.
  • Pressley, M., Rankin, J., & Yokol, L. (1996). A survey of the instructional practices of outstanding primary-level literacy teachers. Elementary School Journal , 96 , 363-384.
  • Rayner, K. (1998). Eye movements in reading and information processing: 20 Years of research. Psychological Bulletin , 124 , 372-422.
  • Rayner, K., Foorman, B. R., Perfetti, C. A., Pesetsky, D., & Seidenberg, M. S. (2002, March). How should reading be taught? Scientific American , 286 (3), 84-91.
  • Reading Coherence Initiative. (1999). Understanding reading: What research says about how children learn to read . Austin, TX: Southwest Educational Development Laboratory.
  • Rosenthal, R. (1995). Writing meta-analytic reviews. Psychological Bulletin , 118 , 183-192.
  • Rosnow, R. L., & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist , 44 , 1276-1284.
  • Shankweiler, D. (1999). Words to meaning. Scientific Studies of Reading , 3 , 113-127.
  • Share, D. L., & Stanovich, K. E. (1995). Cognitive processes in early reading development: Accommodating individual differences into a model of acquisition. Issues in Education: Contributions from Educational Psychology , 1 , 1-57.
  • Shavelson, R. J., & Towne, L. (Eds.) (2002). Scientific research in education . Washington, DC: National Academy Press.
  • Siegler, R. S. (1991). Children's thinking (Second Edition) . Englewood Cliffs, NJ: Prentice Hall.
  • Snow, C. E., Burns, M. S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children . Washington, DC: National Academy Press.
  • Snowling, M. (2000). Dyslexia (Second Edition) . Oxford: Blackwell.
  • Spear-Swerling, L., & Sternberg, R. J. (2001). What science offers teachers of reading. Learning Disabilities: Research & Practice , 16 (1), 51-57.
  • Stahl, S. (1988, December). Is there evidence to support matching reading styles and initial reading methods? Phi Delta Kappan , 317-327.
  • Stanovich, K. E. (1993/1994). Romance and reality. The Reading Teacher , 47 (4), 280-291.
  • Stanovich, K. E. (2000). Progress in understanding reading: Scientific foundations and new frontiers . New York: Guilford Press.
  • Stanovich, K. E. (2001). How to think straight about psychology (Sixth Edition). Boston: Allyn & Bacon.
  • Stokes, D. E. (1997). Pasteur's quadrant: Basic science and technological innovation . Washington, DC: Brookings Institution Press.
  • Swanson, H. L. (1999). Interventions for students with learning disabilities: A meta-analysis of treatment outcomes . New York: Guilford Press.
  • Tarver, S. G., & Dawson, E. (1978). Modality preference and the teaching of reading: A review. Journal of Learning Disabilities , 11 , 17-29.
  • Vaughn, S., & Dammann, J. E. (2001). Science and sanity in special education. Behavioral Disorders , 27, 21-29.
  • Warby, D. B., Greene, M. T., Higgins, K., & Lovitt, T. C. (1999). Suggestions for translating research into classroom practices. Intervention in School and Clinic , 34 (4), 205-211.
  • Wheeler, D. L., Jacobson, J. W., Paglieri, R. A., & Schwartz, A. A. (1993). An experimental assessment of facilitated communication. Mental Retardation , 31 , 49-60.
  • Wilkinson, L. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist , 54 , 595-604.
  • Wilson, E. O. (1998). Consilience: The unity of knowledge . New York: Knopf.


Date Published: 2003 Date Posted: March 2010



Educational Research: What It Is + How to Do It


Education is a pillar of modern society: it provides the tools to develop critical thinking, decision-making, and social skills, and it equips individuals with the research skills they need to secure jobs or become entrepreneurs in new technologies. This is where educational research plays an important role in the overall improvement of the education system (pedagogy, learning programs, investigation, etc.).

Educational research spans multiple fields of knowledge, addressing the different research problems of the learning system and providing a variety of perspectives for solving them and improving education in general. Educators need ways to filter through the noise of available information to find the best practices for doing their jobs well and serving their students better. This is why educational research that adheres to the scientific method, creating better ideas and new knowledge, is essential.

What is educational research?

Educational research is collecting and systematically analyzing information on education methods to explain them better. It should be viewed as a critical, reflexive, and professional activity that adopts rigorous methods to gather data, analyze it, and solve educational challenges to help advance knowledge.

Educational research typically begins with identifying a problem or an academic issue. From there, the researcher gathers the relevant data, which must then be analyzed and interpreted. The process ends with a report that presents the results in an understandable form, which can be used by both the researcher and the educational community.

Why is educational research important?

The primary purpose of educational research is to improve existing knowledge of pedagogy and the educational system as a whole. Improving learning practices and developing new ways of teaching can be achieved more efficiently when information is shared by the entire community rather than guarded by a single institution. Simply put, the three main reasons to conduct educational research are:

  • To explore issues . Undertaking research leads to finding answers to specific questions that can help students, teachers, and administrators. Why is student experience design important in new university models? What is the impact of education on new generations? Why is language important when drafting a survey for a Ph.D.?
  • To shape policy . This type of educational research is conducted to collect information that supports sound judgments, which can then inform societies and institutions and improve the governance of education.
  • To improve the quality . Trying to do something better than what is done now is a common reason for conducting educational research. What if we could improve the quality of education by adopting new processes? What if we could achieve the same outcomes with fewer resources? Questions like these are common in the educational system, but to adapt, institutions must have a solid base of information, which can be obtained by conducting educational research.

Educational Research Methods

Educational research methods are the tools used to carry out research and to test (support or refute) the study's hypothesis.

Interview

An interview is a qualitative research technique that allows the researcher to gather data from the subject using open-ended questions. The most important aspect of an interview is how it is conducted: typically, it is a one-on-one conversation that focuses on the substance of what is asked.

Focus Group

Focus groups are another qualitative approach to gathering information. The main difference from an interview is that the group is composed of 6–10 people purposely selected to understand the perceptions of a social group. Rather than trying to understand a larger population in the form of statistics, a focus group is directed by a moderator who keeps the conversation on topic, so that all the participants contribute to the research.

Observation

Observation is a method of data collection that places the researcher in the natural setting where the participants or the phenomenon of interest can be found. This enables the researcher to see what is happening in real time, eliminating some of the bias that interviews or focus groups can introduce when a moderator intervenes with the subjects.

Survey

A survey is a research method used to collect data from a defined population to gain information on a subject of interest. Surveys can be administered at any given time and typically take little time to complete, depending on the research. Another benefit of a survey is its quantitative approach, which makes the results easier to present comprehensively.

How to do educational research

Like any other type of research, educational research involves steps that must be followed to make the information gathered from it valuable and usable. 

  • Identifying the problem. The first step in the process is to identify the problem or formulate a research question.
  • Formulating objectives and hypotheses. Research objectives are the goals the research intends to reach; they must be explicit at the beginning of the research and related to the problem. The hypothesis is a tentative statement of the expected outcome; it helps the researcher decide which research method will be used and what data needs to be collected.
  • Deciding the method of research. There are plenty of research methods, but deciding which one is best for each case depends on the researcher's objectives and the hypothesis formulated in the previous step.
  • Collecting the data. The research method determines how the data will be collected, whether through an interview, a focus group, or a survey.
  • Analyzing and interpreting the data. This step involves arranging and organizing the data collected and making the necessary calculations. A correct interpretation of the data is essential so that everyone, not only the researcher, can understand it.
  • Writing a report. After the analysis and interpretation of the data, the researcher forms a conclusion that can be shared with everyone. This is done through a report or thesis, which includes all the information related to the research, with a detailed summary of the work and findings from the research process.
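To make the "analyzing and interpreting the data" step concrete, here is a minimal sketch that summarizes hypothetical responses to a 5-point Likert-scale survey question (1 = strongly disagree, 5 = strongly agree). The data and variable names are illustrative only, not from any real study.

```python
# Summarize hypothetical Likert-scale survey responses: sample size,
# central tendency, and the distribution of answers.
from collections import Counter
from statistics import mean, median

responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]  # hypothetical survey answers

summary = {
    "n": len(responses),
    "mean": round(mean(responses), 2),
    "median": median(responses),
    "distribution": dict(sorted(Counter(responses).items())),
}
print(summary)
```

Even a simple summary like this turns raw answers into something a reader can interpret: how many people responded, where the typical answer falls, and how spread out opinions are.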

Educational research is crucial for the improvement of the education system; improving the teaching/learning process relies on the information available in the field. Statements without research evidence are nothing but opinions. The gathering and distribution of information are fundamental to improving our educational system, because research provides explanations to the big questions and a bigger picture for future generations.

As stated before, educational research is crucial for improving the education system. At QuestionPro, we believe in providing academic researchers with the best tools to keep creating valuable knowledge.


Rick Hess Straight Up

Education policy maven Rick Hess of the American Enterprise Institute think tank offers straight talk on matters of policy, politics, research, and reform. Read more from this blog.

How, Exactly, Is Research Supposed to Improve Education?


This week Dylan Wiliam , eclectic Wales native and emeritus professor at University College London , takes over the blog. Dylan began his career as a math teacher in London (having followed his “jazz-folk” band to the capital city) before eventually stumbling into academe. His books include Creating the Schools Our Children Need and Leadership for Teacher Learning . Across a varied career, he has taught in urban public schools, directed a large-scale testing program, spent three years as senior research director at the Educational Testing Service (ETS), and served a number of roles in university administration, including dean of a school of education at King’s College London. Dylan, whose advice usually costs a pretty penny, will spend the week offering pro bono thoughts and tips to educators struggling to get school going this fall.

As educators, schools and districts work to overcome the damage to students’ education caused by the coronavirus pandemic, it seems obvious that our efforts should be guided by the best evidence we can find about what is likely to be most effective.

The good news is that, over the last 20 or so years, there have been substantial improvements in the way that research findings are summarized and made available to practitioners and policymakers. Increasingly, educational researchers have stepped out of their “ivory towers” and tackled issues of immediate and direct relevance. And, just as importantly, they have taken seriously the task of communicating their research findings to those who might actually use them in real settings (see, for example, here and here ).

The bad news is that producing meaningful syntheses of research turns out to be much more difficult in education than it is in, say, medicine. Too little attention is given to figuring out how even well-established research findings might be implemented in real schools and classrooms.

Let’s start with research synthesis. For many years, research reviews typically took a “narrative” approach. Researchers read the relevant studies and then figured out the best story to tell. Some reviewers were more systematic, tallying the number of positive and negative results in a particular field, but such an approach treated all experiments as if they were the same size and had the same size of impact. In 1976, Gene V. Glass proposed a technique that he called “meta-analysis” by which the results of different studies could be expressed on a common metric, called “effect size,” and thus compared more meaningfully. This approach is now the standard approach to research synthesis in the health sciences.
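Glass's idea can be illustrated with a short, hedged sketch: Cohen's d (a common effect-size measure, the standardized mean difference between two groups) for a single study, followed by a simple sample-size-weighted average across studies. The study values below are hypothetical, and real meta-analyses typically weight by inverse variance rather than raw sample size.

```python
# Effect size (Cohen's d) for one study, then a crude pooled estimate
# across several studies expressed on that common metric.
from statistics import mean, stdev
from math import sqrt

def cohens_d(treatment, control):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical results from three studies: (effect size, sample size).
studies = [(0.40, 120), (0.25, 300), (0.60, 60)]

# Simple sample-size-weighted average: larger studies count for more,
# unlike a bare tally of positive and negative results.
weighted_d = sum(d * n for d, n in studies) / sum(n for _, n in studies)
print(round(weighted_d, 3))
```

The point of the common metric is visible here: three studies with different designs and sample sizes can be compared and combined on one scale, which a simple count of "worked / didn't work" cannot do.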

Unfortunately, as I explain here , meta-analysis is much harder to do well in education for a variety of reasons. Combining different reports on “cooperative learning” might group together studies with very different approaches to cooperative learning, studies with younger students tend to produce larger effect sizes, and different ways of assessing student achievement can produce very different results for the same experiment. Also, since it is much easier to get studies published if the results are dramatic, the studies that are published tend to be the ones where everything went just right, so the published studies tend to overstate the effects in other settings.

These problems are compounded when the results of different meta-analyses are combined in a process sometimes called “meta-meta-analysis.” The number of assumptions made in these analyses make it impossible to determine what is really going on. The whole thing is reminiscent of the old joke about someone who, after a speed-reading course, said, “I was able to go through War and Peace in 20 minutes. It’s about Russia.”

Even when studies are conducted and reported carefully, it is not at all obvious that the results would be relevant across different contexts. While it may seem obvious that studies conducted on undergraduate students are unlikely to provide meaningful insights about how to teach kindergarten, or that those conducted in urban settings may not generalize to rural settings, that intuition can easily get lost when trying to digest the large amounts of information presented in a meta-analysis. It is also important to note that even when we have a well-designed randomized control trial, all we know about is the differences between the control and experimental groups. The schools and districts that chose to participate in the experiment may be different from those that did not.

Class-size-reduction studies are a case in point. Any class-size-reduction program requires additional teachers, so the quality of those teachers is a crucial determinant of the success of the program. Probably the best known such study—the Tennessee STAR study—required recruiting an extra 50 teachers, and it is reasonable to suppose that these were as good as those already employed. However, when such a program requires hundreds, or thousands of extra teachers, it is not at all clear that the additional teachers employed will be as good as those already working in the schools. Worse, the quality of available teachers is likely to vary from district to district. A class-size-reduction program may increase student achievement in one district, yet reduce it in another, due to the difficulty of recruiting good teachers. Similar arguments apply to multitiered systems of support. More intensive instruction in smaller groups is likely to be effective when those teaching the smaller groups are as effective as those teaching the class from which the students were withdrawn. But if those teaching the smaller groups are less effective, then a multitiered system may actually reduce student achievement .

The important point here is that those “on the ground”—the administrators in that district—will know far more about teacher recruitment than those in a state department of education. They have to look at whether the research solves a problem that the district has, how much extra student achievement will be generated, how much it will cost (in money and in educator time), and whether it is possible to implement the reform in their own setting. Research is essential in helping districts avoid things that are unlikely to benefit students, like catering to students’ preferred learning styles, and can identify some “best bets” for schools and districts, but research can never provide step-by-step instructions on how to improve student outcomes.

To simplify somewhat, everything works somewhere, and nothing works everywhere. The important question is, “Under what circumstances might this work?” and that is something that those “on the ground” are best placed to determine.

Educators cannot afford to ignore research evidence—there are just too many “blind alleys” that look attractive but are unlikely to help students—but they have to choose judiciously. Some interventions may have small effects, but if they do not take up too much time, they can be highly cost-effective. Others may have larger effects but will take time and energy to implement, and, crucially, what works best for one district may not work for the next district. It is imperative, now more than ever amid unprecedented educational challenges, that district leaders become “critical consumers” of research.

The opinions expressed in Rick Hess Straight Up are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.


What are the benefits of educational research for teachers?

Ask an Expert Rebecca Austin Researching Primary Education

Cultivating a research-based approach to developing your practice provides evidence to effect change in your teaching, your classroom, your school, and beyond. Rebecca Austin, author of Researching Primary Education and Senior Lecturer at the School of Teacher Education and Development at Canterbury Christ Church University, highlights the benefits of research for your practice…

In the context of the debate about what works and why, there is a wide range of benefits to researching your own practice, whether directly feeding into improvement through action research or, more broadly, gaining understanding and knowledge on themes of interest and relevance. This is why research is embedded in initial teacher education. As research becomes embedded in your practice, you can gain a range of benefits. Research can:

  • clarify purposes, processes and priorities when introducing change – for example, to  curriculum, pedagogy or assessment  
  • develop your agency, influence, self-efficacy and voice within your own school and  more widely within the profession.

Each of these can involve investigation using evidence from your own setting, along with wider research evidence. 


What is Educational Research? + [Types, Scope & Importance]

busayo.longe

Education is an integral aspect of every society and in a bid to expand the frontiers of knowledge, educational research must become a priority. Educational research plays a vital role in the overall development of pedagogy, learning programs, and policy formulation. 

Educational research is a spectrum that spans multiple fields of knowledge, which means that it draws from different disciplines. As a result, its findings are multi-dimensional and can be restricted by the characteristics of the research participants and the research environment.

What is Educational Research?

Educational research is a type of systematic investigation that applies empirical methods to solving challenges in education. It adopts rigorous and well-defined scientific processes in order to gather and analyze data for problem-solving and knowledge advancement. 

J. W. Best defines educational research as that activity that is directed towards the development of a science of behavior in educational situations. The ultimate aim of such a science is to provide knowledge that will permit the educator to achieve his goals through the most effective methods.

The primary purpose of educational research is to expand the existing body of knowledge by providing solutions to different problems in pedagogy while improving teaching and learning practices. Educational researchers also seek answers to questions concerning learner motivation, development, and classroom management.

Characteristics of Educational Research

While educational research can take numerous forms and approaches, several characteristics define its process and approach. Some of them are listed below:

  • It sets out to solve a specific problem.
  • Educational research adopts primary and secondary research methods in its data collection process . This means that in educational research, the investigator relies on first-hand sources of information and secondary data to arrive at a suitable conclusion. 
  • Educational research relies on empirical evidence . This results from its largely scientific approach.
  • Educational research is objective and accurate because it measures verifiable information.
  • In educational research, the researcher adopts specific methodologies, detailed procedures, and analyses to arrive at the most objective responses.
  • Educational research findings are useful in the development of principles and theories that provide better insights into pressing issues.
  • This research approach combines structured, semi-structured, and unstructured questions to gather verifiable data from respondents.
  • Many educational research findings are documented for peer review before their presentation. 
  • Educational research is interdisciplinary in nature because it draws from different fields and studies complex factual relations.

Types of Educational Research 

Educational research can be broadly categorized into three types: descriptive research , correlational research , and experimental research . Each of these has distinct and overlapping features.

Descriptive Educational Research

In this type of educational research, the researcher merely seeks to collect data on the status quo or present situation of things. The core of descriptive research lies in defining the state and characteristics of the research subject under study.

Because of its emphasis on the “what” of the situation, descriptive research can be termed an observational research method . In descriptive educational research, the researcher makes use of quantitative research methods including surveys and questionnaires to gather the required data.

Typically, descriptive educational research is the first step in solving a specific problem. Here are a few examples of descriptive research: 

  • A reading program to help you understand student literacy levels.
  • A study of students’ classroom performance.
  • Research to gather data on students’ interests and preferences. 

From these examples, you would notice that the researcher does not need to create a simulation of the natural environment of the research subjects; rather, he or she observes them as they engage in their routines. Also, the researcher is not concerned with creating a causal relationship between the research variables. 

Correlational Educational Research

This is a type of educational research that seeks insights into the statistical relationship between two research variables. In correlational research, the researcher studies two variables intending to establish a connection between them. 

The correlation can be positive, negative, or non-existent. A positive correlation occurs when an increase in variable A leads to an increase in variable B, while a negative correlation occurs when an increase in variable A results in a decrease in variable B.

When a change in one variable does not trigger a corresponding change in the other, the correlation is non-existent. Also, in correlational educational research, the researcher does not need to alter the natural environment of the variables; that is, there is no need for external conditioning.

Examples of educational correlational research include: 

  • Research to discover the relationship between students’ behaviors and classroom performance.
  • A study into the relationship between students’ social skills and their learning behaviors. 
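As a hedged illustration of correlational analysis, the sketch below computes a Pearson correlation coefficient between two hypothetical variables, weekly study hours and test scores. The data and names are made up for illustration; a real study would use actual measurements.

```python
# Pearson's r in pure Python: a value near +1 indicates a positive
# correlation, near -1 a negative one, and near 0 no linear relationship.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

hours = [1, 2, 3, 4, 5]        # hypothetical weekly study hours
scores = [52, 60, 61, 70, 75]  # hypothetical test scores

r = pearson_r(hours, scores)
print(round(r, 3))  # r close to +1 suggests a positive correlation
```

Note that even a strong r only establishes association, not cause and effect; establishing causation is the job of experimental research, described next.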

Experimental Educational Research

Experimental educational research is a research approach that seeks to establish the causal relationship between two variables in the research environment. It adopts quantitative research methods in order to determine the cause and effect in terms of the research variables being studied. 

Experimental educational research typically involves two groups – the control group and the experimental group. The researcher introduces some changes to the experimental group such as a change in environment or a catalyst, while the control group is left in its natural state. 

The introduction of these catalysts allows the researcher to determine the causative factor(s) in the experiment. At the core of experimental educational research lies the formulation of a hypothesis, so the overall research design relies on statistical analysis to support or reject this hypothesis.

Examples of Experimental Educational Research

  • A study to determine the best teaching and learning methods in a school.
  • A study to understand how extracurricular activities affect the learning process. 
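As a hedged sketch of the control-versus-experimental comparison described above, the code below computes an equal-variance two-sample t statistic on hypothetical score data. In practice a researcher would typically use a statistics package (for example, `scipy.stats.ttest_ind`) and report a p-value; everything here, including the scores, is illustrative.

```python
# Two-sample t statistic (equal-variance form): how large is the gap
# between the experimental and control group means, relative to the
# variability within the groups?
from statistics import mean, stdev
from math import sqrt

def two_sample_t(experimental, control):
    n1, n2 = len(experimental), len(control)
    s1, s2 = stdev(experimental), stdev(control)
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (mean(experimental) - mean(control)) / se

# Hypothetical test scores after a new teaching method vs. the usual one.
experimental = [78, 85, 90, 74, 88, 81]
control = [70, 75, 68, 72, 74, 71]

t = two_sample_t(experimental, control)
print(round(t, 2))  # a large |t| suggests the groups genuinely differ
```

The t statistic formalizes the comparison between the two groups: the larger it is, the less plausible it becomes that the difference arose by chance alone.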

Based on functionality, educational research can be classified into fundamental research , applied research , and action research. The primary purpose of fundamental research is to provide insights into the research variables; that is, to gain more knowledge. Fundamental research does not solve any specific problems. 

Just as the name suggests, applied research is a research approach that seeks to solve specific problems. Findings from applied research are useful in solving practical challenges in the educational sector such as improving teaching methods, modifying learning curricula, and simplifying pedagogy. 

Action research is tailored to solve immediate problems that are specific to a context, such as educational challenges in a local primary school. The goal of action research is to proffer solutions that work in that specific context, rather than to solve general or universal challenges in the educational sector.

Importance of Educational Research

  • Educational research plays a crucial role in knowledge advancement across different fields of study. 
  • It provides answers to practical educational challenges using scientific methods.
  • Findings from educational research, especially applied research, are instrumental in policy reformulation.
  • For the researcher and other parties involved in this research approach, educational research improves learning, knowledge, skills, and understanding.
  • Educational research improves teaching and learning methods by empowering you with data to help you teach and lead more strategically and effectively.
  • Educational research helps students apply their knowledge to practical situations.

Educational Research Methods 

  • Surveys/Questionnaires

A survey is a research method that is used to collect data from a predetermined audience about a specific research context. It usually consists of a set of standardized questions that help you to gain insights into the experiences, thoughts, and behaviors of the audience. 

Surveys can be administered physically using paper forms, face-to-face conversations, telephone conversations, or online forms. Online forms are easier to administer because they help you to collect accurate data and to reach a larger sample size. Creating your online survey on a data-gathering platform like Formplus also allows you to analyze survey respondents' data easily.

In order to gather accurate data via your survey, you must first identify the research context and the research subjects that will make up your sample. Next, you need to choose an online survey tool like Formplus to help you create and administer your survey with little or no hassle.

  • Interview

An interview is a qualitative data collection method that helps you to gather information from respondents by asking questions in a conversation. It is typically a face-to-face conversation with the research subjects in order to gather insights that will prove useful to the specific research context.

Interviews can be structured, semi-structured , or unstructured . A structured interview is a type of interview that follows a premeditated sequence; that is, it makes use of a set of standardized questions to gather information from the research subjects. 

An unstructured interview is a type of interview that is fluid; that is, it is non-directive. During an unstructured interview, the researcher does not use a set of predetermined questions; rather, he or she spontaneously asks questions to gather relevant data from the respondents.

A semi-structured interview is the mid-point between structured and unstructured interviews. Here, the researcher makes use of a set of standardized questions, yet still makes inquiries outside these premeditated questions as dictated by the flow of the conversation in the research context.

Data from Interviews can be collected using audio recorders, digital cameras, surveys, and questionnaires. 

  • Observation

Observation is a method of data collection that entails systematically selecting, watching, listening, reading, touching, and recording behaviors and characteristics of living beings, objects, or phenomena. In the classroom, teachers can adopt this method to understand students’ behaviors in different contexts. 

Observation can be qualitative or quantitative in approach. In quantitative observation, the researcher aims to collect statistical information from respondents, and in qualitative observation, the researcher aims to collect descriptive, non-numerical data from respondents. 

Qualitative observation can further be classified into participant or non-participant observation. In participant observation, the researcher becomes a part of the research environment and interacts with the research subjects to gather info about their behaviors. In non-participant observation, the researcher does not actively take part in the research environment; that is, he or she is a passive observer. 

How to Create Surveys and Questionnaires with Formplus

  • On your dashboard, choose the “create new form” button to access the form builder. You can also choose from the available survey templates and modify them to suit your needs.
  • Save your online survey to access the form customization section. Here, you can change the physical appearance of your form by adding preferred background images and inserting your organization’s logo.
  • Formplus has a form analytics dashboard that allows you to view insights from your data collection process such as the total number of form views and form submissions. You can also use the reports summary tool to generate custom graphs and charts from your survey data. 
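The metrics such an analytics dashboard reports come down to simple counting. As a rough illustration, assuming an invented event log (the field names here are made up for the example and are not part of any Formplus API), the same summary can be computed in plain Python:

```python
# Minimal sketch of the metrics a form-analytics dashboard reports:
# total views, total submissions, and a completion rate.
# The event log below is invented for illustration.
events = [
    {"type": "view"}, {"type": "view"}, {"type": "submission"},
    {"type": "view"}, {"type": "submission"},
]

views = sum(1 for e in events if e["type"] == "view")
submissions = sum(1 for e in events if e["type"] == "submission")
completion_rate = submissions / views if views else 0.0

print(f"views={views} submissions={submissions} rate={completion_rate:.0%}")
```

The same counts are what a dashboard turns into graphs and charts; the only real work is classifying events and dividing submissions by views.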

Steps in Educational Research

Like other types of research, educational research involves several steps. Following these steps allows the researcher to gather objective information and arrive at valid findings that are useful to the research context. 

  • Define the research problem clearly. 
  • Formulate your hypothesis. A hypothesis is the researcher’s reasonable guess based on the available evidence, which he or she seeks to test in the course of the research.
  • Determine the methodology to be adopted. Educational research methods include interviews, surveys, and questionnaires.
  • Collect data from the research subjects using one or more educational research methods. You can collect research data using Formplus forms.
  • Analyze and interpret your data to arrive at valid findings. In the Formplus analytics dashboard, you can view important data collection insights and you can also create custom visual reports with the reports summary tool. 
  • Create your research report. A research report details the entire process of the systematic investigation plus the research findings. 
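The analysis step above can be sketched in miniature. Assuming invented 5-point Likert ratings in place of real survey data, descriptive statistics per group can be computed with Python's standard library:

```python
# Step 5 in miniature: analyze survey data to arrive at findings.
# The 5-point Likert ratings below are invented for illustration.
import statistics

responses = {
    "group_a": [4, 5, 3, 4, 5, 4],
    "group_b": [2, 3, 3, 2, 4, 3],
}

# Descriptive statistics per group: mean rating and sample standard deviation.
summary = {
    group: (statistics.mean(r), statistics.stdev(r))
    for group, r in responses.items()
}

for group, (avg, sd) in summary.items():
    print(f"{group}: mean={avg:.2f} sd={sd:.2f}")
```

A real analysis would go further (significance tests, effect sizes), but even means and spreads like these are often enough to surface a trend worth reporting.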

Conclusion 

Educational research is crucial to the overall advancement of different fields of study and of learning as a whole. Data in educational research can be gathered via surveys and questionnaires, observation methods, or interviews – structured, unstructured, and semi-structured. 

You can create a survey or questionnaire for educational research with Formplus. As a top-tier data tool, Formplus makes it easy for you to create your educational research survey in the drag-and-drop form builder and share it with respondents using one or more of the form-sharing options. 


The role of research in teacher education: Reviewing the evidence

Report, 1 January 2014

While many might assume that research should make some contribution to high quality teacher education, BERA and RSA ask precisely what that contribution should be: to initial teacher education, to teachers’ continuing professional development and to school improvement. They also ask how different teacher education systems currently engage with research and what international evidence there is that linking research and teacher education is effective.


At a time when virtually every government around the world is asking how it can improve the quality of its teaching force, the British Educational Research Association (BERA) and the RSA have come together to consider what contribution research can make to that improvement.

High quality teaching is now widely acknowledged to be the most important school-level factor influencing student achievement. This in turn has focused attention on the importance of teacher education, from initial training and induction for beginning teachers, to on-going professional development to help update teachers’ knowledge, deepen their understanding and advance their skills as expert practitioners. Policy-makers around the world have approached the task of teacher preparation and professional development in different ways, reflecting their distinctive values, beliefs and assumptions about the nature of professional knowledge and how and where such learning takes place.

At a time when teacher education is under active development across the four nations of the United Kingdom, an important question for all those seeking to improve the quality of teaching and learning is how to boost the use of research to inform the design, structure and content of teacher education programmes.

The Inquiry aims to shape debate, inform policy and influence practice by investigating the contribution of research in teacher education and examining the potential benefits of research-based skills and knowledge for improving school performance and student outcomes.

There are four main ways that research can contribute to programmes of teacher education:

  • The content of such programmes may be informed by research-based knowledge and scholarship, emanating from a range of academic disciplines and epistemological traditions.
  • Research can be used to inform the design and structure of teacher education programmes.
  • Teachers and teacher educators can be equipped to engage with and be discerning consumers of research.
  • Teachers and teacher educators may be equipped to conduct their own research, individually and collectively, to investigate the impact of particular interventions or to explore the positive and negative effects of educational practice.

At present, there are pockets of excellent practice in teacher education in different parts of the UK, including some established models and some innovative new programmes based on the model of ‘research-informed clinical practice’. However, in each of the four nations there is not yet a coherent and systematic approach to professional learning from the beginning of teacher training and sustained throughout teachers’ working lives.

There has been a strong focus on the use of data to inform teaching and instruction over the past 20 years. There now needs to be a sustained emphasis on creating ‘research-rich’ and ‘evidence-rich’ (rather than simply ‘data-rich’) schools and classrooms. Teachers need to be equipped to interrogate data and evidence from different sources, rather than just describing the data or trends in attainment.

The priority for all stakeholders (Government, national agencies, schools, universities and teachers’ organisations) should be to work together to create a national strategy for teacher education and professional learning that reflects the principles of ‘research-informed clinical practice’. Rather than privileging one type of institutional approach, these principles should be applied to all institutional settings and organisations where teacher education and professional learning takes place.

Further consideration needs to be given to the best ways of developing such a strategy, in consultation with all the relevant partners.


The importance of research and its impact on education

Tertiary education is indeed a big investment, so looking for the right university takes time, patience, and dedication. You may have noticed that most universities highlight research as one of their most distinguished and competitive strengths. But the question here is: why?

From an individual point of view, the advantages of research extend beyond having an impressive degree certificate. Through detailed research, students develop critical thinking expertise, as well as effective analytical, research, and communication skills that are globally sought after and incredibly beneficial. Ultimately, research is essential to the economic and social development of our globalised society, forming the foundation of governmental policies around the world.

“Knowledge generated by research is the basis of sustainable development, which requires that knowledge be placed at the service of development, be converted into applications, and be shared to ensure widespread benefits,” says Mary-Louise Kearney, Director of the UNESCO Forum on Higher Education, Research and Knowledge.


One institution which understands this is the University of Skovde. Though the university is much younger than most, its education and internationally competitive research are highly respected, particularly within the School of Bioscience. The university has a well-developed collaboration between education, research, the business community, and society at both the national and international levels.

The school has three divisions: the Division for Bioinformatics and Ecology, the Division for Cognitive Neuroscience and Philosophy, and the Division for Molecular Biology, which together offer a total of 13 academic programs at the undergraduate and advanced levels. Besides providing an impressive array of courses, the school has its own research centre, the Systems Biology Research Center, where research is conducted in the following areas: Infection Biology, Bioinformatics, Biotechnology, Cognitive Neuroscience and Philosophy, and Ecological Modeling. Here are some examples of what Skovde’s experienced research teams have been working on:


Infection Biology

Research from the Infection Biology Group focuses on the development of mathematical and statistical models as well as experimental methodology used to help understand the complex systems that make up infection biology. The Group’s current funded projects focus on:

  • Identifying new biomarkers to help diagnose sepsis patients at an early stage
  • Identifying biomarker profiles for immunosuppressive drugs
  • Developing new methods for the detection of plant pathogens

Bioinformatics

Formed by computer science researchers, this area of investigation focuses on the development and application of algorithms for the analysis of biological data. Skovde’s research has incorporated the development of algorithms, software, and databases, as well as solving biological research problems with these tools. As part of the research team, students get to work with other researchers, such as stem cell and tumour biologists from other groups within the university, industrial partners, and other establishments.

Cognitive neuroscience and philosophy

Research in cognitive neuroscience seeks to increase knowledge and understanding of human abilities, reflected in the form of cerebral activity. One of the main goals of the research is to increase our understanding of human consciousness, as well as to study methods that might increase human wellbeing.


Biotechnology

Biotechnology, defined as the application of biological organisms, systems, or processes by various industries, stakeholders, and researchers, encompasses science and life, improving the value of materials and organisms through pharmaceuticals, crops, livestock, and the environment. While research in plant biotechnology seeks to identify specific genes to eliminate various forms of arsenic contamination, Skovde’s research projects in microbial biotechnology can also be used to develop microbial bioreactors. The mussel research project, run together with ecologists, includes the development of molecular markers that will enable scientists to identify different mussel species.

Skovde’s high-impact research also extends far beyond the laboratory, where the school collaborates with  Life Science companies, public organisations and other universities to strengthen its research. Through these partnerships, the school has access to specific expertise, highly advanced laboratories and equipment, as well as PhD studies.

Additionally, the university works with business partners like AstraZeneca, Abbott Diagnostics AB, and EnviroPlanning to teach industrial PhD students and to conduct research in the interest of the company. Students who register for an industrial PhD will be employed by these companies and receive PhD training at the School of Bioscience, a great career experience that enhances their CV tremendously.

In short, studying at a university with a reputable research foundation not only gives you a firm platform on which to continue learning, but the skills you master also provide a real advantage in the real world.


How Teachers Can Learn Through Action Research

A look at one school’s action research project provides a blueprint for using this model of collaborative teacher learning.


When teachers redesign learning experiences to make school more relevant to students’ lives, they can’t ignore assessment. For many teachers, the most vexing question about real-world learning experiences such as project-based learning is: How will we know what students know and can do by the end of this project?

Teachers at the Siena School in Silver Spring, Maryland, decided to figure out the assessment question by investigating their classroom practices. As a result of their action research, they now have a much deeper understanding of authentic assessment and a renewed appreciation for the power of learning together.

Their research process offers a replicable model for other schools interested in designing their own immersive professional learning. The process began with a real-world challenge and an open-ended question, involved a deep dive into research, and ended with a public showcase of findings.

Start With an Authentic Need to Know

Siena School serves about 130 students in grades 4–12 who have mild to moderate language-based learning differences, including dyslexia. Most students are one to three grade levels behind in reading.

Teachers have introduced a variety of instructional strategies, including project-based learning, to better meet students’ learning needs and also help them develop skills like collaboration and creativity. Instead of taking tests and quizzes, students demonstrate what they know in a PBL unit by making products or generating solutions.

“We were already teaching this way,” explained Simon Kanter, Siena’s director of technology. “We needed a way to measure: Was authentic assessment actually effective? Does it provide meaningful feedback? Can teachers grade it fairly?”

Focus the Research Question

Across grade levels and departments, teachers considered what they wanted to learn about authentic assessment, which the late Grant Wiggins described as engaging, multisensory, feedback-oriented, and grounded in real-world tasks. That’s a contrast to traditional tests and quizzes, which tend to focus on recall rather than application and have little in common with how experts go about their work in disciplines like math or history.

The teachers generated a big research question: Is using authentic assessment an effective and engaging way to provide meaningful feedback for teachers and students about growth and proficiency in a variety of learning objectives, including 21st-century skills?

Take Time to Plan

Next, teachers planned authentic assessments that would generate data for their study. For example, middle school science students created prototypes of genetically modified seeds and pitched their designs to a panel of potential investors. They had to not only understand the science of germination but also apply their knowledge and defend their thinking.

In other classes, teachers planned everything from mock trials to environmental stewardship projects to assess student learning and skill development. A shared rubric helped the teachers plan high-quality assessments.

Make Sense of Data

During the data-gathering phase, students were surveyed after each project about the value of authentic assessments versus more traditional tools like tests and quizzes. Teachers also reflected after each assessment.

“We collated the data, looked for trends, and presented them back to the faculty,” Kanter said.

Among the takeaways:

  • Authentic assessment generates more meaningful feedback and more opportunities for students to apply it.
  • Students consider authentic assessment more engaging, with increased opportunities to be creative, make choices, and collaborate.
  • Teachers are thinking more critically about creating assessments that allow for differentiation and that are applicable to students’ everyday lives.
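The "collate the data and look for trends" step Kanter describes can be sketched in a few lines of Python. The engagement ratings below are hypothetical stand-ins for Siena's survey data, not the school's actual results:

```python
# "Collate the data and look for trends" in miniature: compare mean
# student ratings of authentic vs. traditional assessments.
# The ratings are hypothetical, not Siena's actual survey results.
from statistics import mean

ratings = {
    "authentic": [5, 4, 4, 5, 3, 5],
    "traditional": [3, 2, 4, 3, 3, 2],
}

# One summary number per assessment style, then pick the higher-rated one.
trend = {style: mean(scores) for style, scores in ratings.items()}
more_engaging = max(trend, key=trend.get)
print(trend, "->", more_engaging)
```

Reducing each condition to a single comparable number is exactly what makes trends easy to present back to a faculty.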

To make their learning public, Siena hosted a colloquium on authentic assessment for other schools in the region. The school also submitted its research as part of an accreditation process with the Middle States Association.

Strategies to Share

For other schools interested in conducting action research, Kanter highlighted three key strategies.

  • Focus on areas of growth, not deficiency:  “This would have been less successful if we had said, ‘Our math scores are down. We need a new program to get scores up,’” Kanter said. “That puts the onus on teachers. Data collection could seem punitive. Instead, we focused on the way we already teach and thought about, how can we get more accurate feedback about how students are doing?”
  • Foster a culture of inquiry:  Encourage teachers to ask questions, conduct individual research, and share what they learn with colleagues. “Sometimes, one person attends a summer workshop and then shares the highlights in a short presentation. That might just be a conversation, or it might be the start of a school-wide initiative,” Kanter explained. In fact, that’s exactly how the focus on authentic assessment began.
  • Build structures for teacher collaboration:  Using staff meetings for shared planning and problem-solving fosters a collaborative culture. That was already in place when Siena embarked on its action research, along with informal brainstorming to support students.

For both students and staff, the deep dive into authentic assessment yielded “dramatic impact on the classroom,” Kanter added. “That’s the great part of this.”

In the past, he said, most teachers gave traditional final exams. To alleviate students’ test anxiety, teachers would support them with time for content review and strategies for study skills and test-taking.

“This year looks and feels different,” Kanter said. A week before the end of fall term, students were working hard on final products, but they weren’t cramming for exams. Teachers had time to give individual feedback to help students improve their work. “The whole climate feels way better.”

How innovations in teaching and learning help education leapfrog

By Lauren Ziegler, Former Project Director, Leapfrogging in Education, Global Economy and Development, Center for Universal Education, and Alejandro Paniagua, Consultant, Brookings Institution

September 19, 2019

This piece is a summary of the new report, “ Learning to leapfrog: Innovative pedagogies to transform education .”

As the 74th session of the United Nations General Assembly (UNGA) begins, attention is focused on progress toward the Sustainable Development Goals (SDGs). With only a decade left to achieve them, it could take a century, at the current pace of progress, before all children are learning at the levels needed to thrive today and in the future. If we are serious about meeting SDG 4 for education, we must reframe our mindset toward the concept of leapfrogging, or rapidly accelerating education progress. New research by the Center for Universal Education (CUE) at Brookings, “Learning to leapfrog: Innovative pedagogies to transform education,” focuses on how innovations in teaching and learning can take root and scale, putting SDG 4 within reach. The report brings together previous OECD research on innovative pedagogies and CUE’s “Leapfrogging Inequality” book to rethink and refine six innovative pedagogies that can transform teaching and learning (see figure).

(Figure: the six clusters of innovative pedagogies)

Key finding 1: Innovative pedagogies are needed to transform learning

The report examines how innovative pedagogies are ripe for leapfrogging, that is, the pedagogies target skills that most impact students’ job prospects and social lives and secure the necessary depth and breadth of skills needed for lifelong learning. Many of these approaches can be used in low-resource settings and without technology, which can help close gaps in equity by helping learners who are furthest behind.

Building on the six innovative pedagogies, the report narrows in on example approaches for each. One of the approaches most ripe for leapfrogging is storytelling, an approach based on the gamification pedagogy. Storytelling can be especially impactful for teachers who base their teaching on lectures and traditional drilling, as it raises the quality of teacher narratives and engages students through powerful stories. Storytelling favors dialogue and brings together the apparently opposite “teacher-centered” and “student-centered” approaches, showing how innovation can build on teacher capacities and local beliefs.

Another approach ripe for leapfrogging is problem solving , an approach of the computational thinking pedagogy. This approach both develops a critical skill young people will need throughout their lives, and contrary to popular belief, can work well in low-tech settings. As an example, the CS Unplugged project offers off-line activities such as games, magic tricks, and competitions to show children the kind of thinking that is expected of a computer scientist. The project has become quite popular around the globe and has been translated into 12 languages, making this approach quite accessible.

Peter Tabichi, the 2019 Global Teacher Prize Winner from Kenya, shared how he uses innovations to leapfrog progress in education at the launch of the Brookings report at UNGA in September 2019.

Key finding 2: Three structural changes are needed for innovative pedagogies to flourish

Leapfrogging in education is an ambitious and challenging goal; it cannot be achieved solely through better conceptualizations and awareness of innovative pedagogies. “Learning to leapfrog” argues that policies need to be framed at multiple levels, which include teachers’ personal dispositions and skills, local conditions, and the wider national context of curriculum and policy priorities. In particular, the report calls for three structural changes for these transformations to take root:

  • At the level of the teaching workforce, education decisionmakers should invest in teacher learning and professional development to ensure foundations for quality teaching . These foundations include pedagogical and content knowledge, teaching across a range of student abilities, and ample time for classroom instruction.
  • Looking beyond the existing teaching workforce, education decisionmakers can widen the profile of who can be considered an educator . Doing so brings in specialized expertise and, in many cases (though not all) can “unburden” teachers from administrative responsibilities.
  • The third structural change needed is to properly scaffold and manage hybrid learning environments —partnerships and models that blend formal and nonformal learning and are prevalent in today’s complex education landscape. Scaffolding through model approaches and support materials can ensure these hybrid arrangements offer quality learning that moves the needle on leapfrogging.

Key finding 3: Scaling innovative pedagogies requires systems transformation and leveraging networks

The three structural changes highlighted above point to the importance of scale, and how the adoption of innovative pedagogies will ultimately require system transformation. The report discusses how the goals of scaling should go beyond the quantity of learners reached and, rather, scale deep change, which includes altering beliefs and norms, diffusing innovation, shifting ownership to those closest to innovation, and continual learning. The report argues that one way to scale deep change is by leveraging education networks. Teachers and other education actors will undoubtedly engage and learn from their peers to implement innovative practices; thus, an important route to scaling will come through the density and dynamism of education networks, such as chains of schools, communities of practice, and teacher networks.

With a decade left to reach the SDGs, the time to leapfrog is now. We hope decisionmakers will take note of the report and its findings. It will be featured amid the many UNGA events in an afternoon session on innovations in teaching and learning on Monday, September 23rd, together with the Education Workforce Initiative, which will launch a separate report on new approaches for how the education workforce can be designed, trained, and developed.


Our research has found a way to help the teacher shortage and boost student learning

Jenny Gore, Laureate Professor of Education, Director of the Teachers and Teaching Research Centre, University of Newcastle

Andrew Miller, Senior Lecturer in Education, University of Newcastle

Disclosure statement

Jenny Gore receives funding from the Paul Ramsay Foundation, NSW Department of Education, Australian Research Council and the Australian Government.

Andrew Miller receives funding from the Paul Ramsay Foundation, NSW Department of Education, Australian Research Council and the Australian Government.

University of Newcastle provides funding as a member of The Conversation AU.


Australian schools are facing unsustainable pressures. There are almost daily reports of too many students falling behind and not enough teachers to teach them. Meanwhile, the teachers we do have are stressed, overworked and lack adequate support in the classroom.

Governments are well aware of these challenges and there is no shortage of efforts to tackle them . We have tutoring schemes for students who are struggling, wellbeing programs for burnt-out teachers, and leadership programs to develop senior teachers.

But this approach is uneven: some measures improve, others don’t; some schools see success, others don’t.

Over more than two decades we have refined a process to help teachers improve their teaching. Over the past five years we have been rigorously testing its impact on Australian students and schools.

Our findings suggest it will not only improve teaching but boost teachers’ morale and lift student outcomes at the same time.


Our approach

More than 20 years ago, we developed a framework called the Quality Teaching Model. This is based on decades of research on what kinds of teaching make a difference to student learning. The model is centred on three big ideas:

intellectual quality: students develop a deep understanding of important knowledge

quality learning environment: classrooms are positive and boost learning

significance: learning is connected to students’ lives and the wider world.

Each of these three ideas contains six elements, which are explained in detail in an accompanying guide .

How teachers learn the approach

We then developed “Quality Teaching Rounds”. This is a professional development program in which teachers learn to embed the Quality Teaching Model in the way they teach.

It brings four teachers together, face-to-face or online, within or across schools. They observe and analyse each other’s lessons based on our model. Then they discuss ways to improve teaching and learning throughout their school.

It requires at least two teachers per school to attend a two-day workshop followed by four teachers participating in four days of rounds, with no further external input.

The Quality Teaching Rounds program treats teachers as professionals, building on what they already know and do, and honouring the complexity of teaching. It aims to create a non-threatening environment for their development.

It does not dictate particular teaching methods but focuses attention on teachers’ core business: ensuring high-quality student learning experiences.

As a teacher in a regional primary school explains:

It’s increased our [student] engagement because suddenly every component of the lesson is differentiated properly. It’s focused on [students’] own knowledge and building up their knowledge to the next part.

Testing our approach

After a 2014–15 study, we knew Quality Teaching Rounds improved teaching quality and teacher morale. It also improved teachers' understanding of good teaching and their professional relationships with colleagues.

But we didn’t yet know if it improved student outcomes.

So, between 2019 and 2023, we extensively tested Quality Teaching Rounds using the "gold standard" for research: randomised controlled trials.

In these trials, participants are randomly allocated to either the new approach or a standard ("control") condition. This allows us to compare whether the new approach makes a difference.

Randomised controlled trials are common in medicine but much rarer in education: they require very large samples and are very expensive to run (they are not possible without major government or philanthropic support*).
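The logic of random allocation can be sketched in a few lines of Python. This is a purely illustrative toy, not the study's analysis; all numbers and the `run_trial` function are hypothetical:

```python
import random
import statistics

def run_trial(baselines, effect=0.0, seed=42):
    """Toy randomised controlled trial: randomly allocate
    participants to treatment or control groups, then compare
    mean outcomes. 'effect' is a hypothetical true treatment
    effect added only for the treatment group."""
    rng = random.Random(seed)
    allocated = baselines[:]
    rng.shuffle(allocated)          # random allocation
    half = len(allocated) // 2
    treatment, control = allocated[:half], allocated[half:]
    # Outcome = baseline score + noise (+ effect for treatment).
    t_scores = [b + rng.gauss(effect, 1.0) for b in treatment]
    c_scores = [b + rng.gauss(0.0, 1.0) for b in control]
    return statistics.mean(t_scores) - statistics.mean(c_scores)

# With no true effect the group difference hovers near zero;
# with a real effect it shifts by roughly that amount.
diff_null = run_trial(list(range(100)), effect=0.0)
diff_real = run_trial(list(range(100)), effect=2.0)
```

Because the control condition is drawn at random from the same pool, any systematic difference between the groups can be attributed to the intervention rather than to pre-existing differences.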

Our trials involved 1,400 teachers and 14,500 students from 430 schools across New South Wales, Victoria and Queensland.

Testing maths and reading achievement

Our trials measured student achievement in maths and reading in years 3, 4, 5 and 6. We compared results between students whose teachers had been randomly allocated to do Quality Teaching Rounds and control groups of students whose teachers did whatever other professional development was happening in their school.

As well as randomised controlled trials, our research included case studies, longitudinal research (where we tracked teachers over several years) and evaluation of whole-school approaches embedding Quality Teaching Rounds. This helped us understand not only whether the approach worked, but how and why.

Across the entire five-year research program, we conducted more than 65,000 student achievement tests, almost 1,500 lesson observations, and more than 27,000 surveys and 400 interviews with students, teachers and school leaders.

We believe no other intervention has been so thoroughly tested in Australian schools or amassed such a comprehensive body of evidence.

Our findings

Backing up the 2014–15 trial, we found participation in Quality Teaching Rounds significantly improved teaching quality, teacher morale, teacher efficacy and school culture.

Most importantly, three of the four trials also produced robust evidence of positive effects on student achievement.

This amounted to 2–3 months of additional growth in maths and reading compared with the control groups. Across the studies, such effects were found for students from Year 3 to Year 6, and in both NSW and Queensland. These results were slightly stronger in disadvantaged schools, signalling the potential of Quality Teaching Rounds to contribute to greater equity in education.

As a teacher in a metropolitan primary school told us:

For my students, I’ve already seen so many improvements in their confidence and in the quality of the work that they’re producing.

Our broader research found this is because the model gives teachers a shared language for understanding good teaching. The rounds then provide sustained time to work on improving teaching, while respecting and empowering teachers.

Because it focuses on the quality of teaching, it works for teachers across different subjects, faculties and school types. In our study, it has been implemented in public and private schools, primary and secondary schools and even universities.

So far, the federal government has funded 1,600 early-career teachers to do Quality Teaching Rounds between 2023 and 2026, as part of its response to teacher shortages. This is providing access for schools right around the country.

We are also now focusing on a new project designed to support 25 disadvantaged NSW public schools.

Australian governments and education experts know major changes are needed to improve schooling.

Our research shows the Quality Teaching approach has clear potential to address the most pressing concerns – supporting the teaching workforce and achieving excellence and equity for Australian students.

It would cost A$242 million over three years for every teacher in NSW to participate in Quality Teaching Rounds. For comparison, the NSW government invested about $900 million over three years in its catch-up tutoring program for students post-COVID. A December 2023 evaluation found that, so far, this has had “minimal effect” on student learning.

Schools would still need to make time for teachers to engage in the program. In the middle of a teacher shortage that’s not always simple.

But our research shows the payoffs are worth it. Teachers in our study have already embraced the Quality Teaching Rounds approach: it helps them feel more confident and connected. Perhaps even more importantly, it enhances their work and improves outcomes for their students.

*Our Quality Teaching research has been supported by a philanthropic donation from the Paul Ramsay Foundation and by the NSW Department of Education.



Cureus, vol. 14(7), July 2022

Empowering Patients: Promoting Patient Education and Health Literacy

Pradnya Brijmohan Bhattad, Luigi Pacifico

Cardiovascular Medicine, Saint Vincent Hospital, University of Massachusetts Chan Medical School, Worcester, USA

Patients are generally keen to understand and obtain more information about their medical conditions. There exists a need to develop updated and thorough yet concise patient education handouts and to encourage healthcare providers (HCPs) to use uniform patient education methods.

A thorough review of the literature on patient education materials was performed prior to starting the study, and different resources were compared for the appropriateness of their patient education content. HCPs were educated to effectively use patient education materials incorporated into the electronic health record (EHR) system, including electronic methods such as a patient portal.

Strategies were formulated to reduce the amount of processing and attending time required for fetching appropriate materials and lead to fast, efficient, and effective patient education. To improve the physical and psychosocial wellbeing of a patient, personalized patient education handouts, in addition to verbal education by the HCPs, augment the betterment of patient care via shared decision making and by improving patient satisfaction and health literacy.

Introduction

Patients are often eager to understand and know more about their medical conditions and health situation, and educating them with the most relevant, current, consistent, and updated information helps patients and their families significantly in the medical care and decision-making process [ 1 ]. 

Patients need formal education on their disease condition: they need to know their ailment, understand their symptoms, and be educated on diagnostics, appropriate medication use, and when to call for help. Several patient education handouts are available for various conditions, and there is a need to assess which is best suited for a particular disease or condition and provides concise information. Patient education materials educate patients on their health conditions, improve their health literacy, and promote informed decision-making based on the most current and updated medical and clinical evidence as well as patient preference [ 2 ].

The aim of this study was to develop updated patient education handouts and materials, in addition to verbal counseling, to help patients understand their disease condition, diagnostic studies, proper medication use, and when to call for help, and to encourage healthcare providers (HCPs) to use uniform patient education materials.

The objectives of this study are 1) the implementation of quality improvement techniques of Plan-Do-Study-Act (PDSA) cycles on patient education in clinical settings; 2) to enhance the delivery of patient education and create awareness amongst the HCPs regarding the importance of patient education and improved health literacy; 3) to verify if patient education handouts have the minimum necessary information that patient should know; 4) to compare patient education handouts from databases integrated in the electronic health record (EHR) with standard patient education database websites like the Centers for Disease Control and Prevention website, and MedlinePlus® site to make sure that they have the minimum necessary information; and 5) to educate and encourage HCPs on the use of appropriate patient education articles in the EHR and utilize an electronic patient portal for patient education, help transition the patient education to an electronic form, and increase efficacy and consistent patient education.

Materials and methods

A comprehensive review of patient education materials on the most common medical ailments in various clinical settings was performed. We compared the existing patient education database integrated in the EHR with standard resources such as the CDC and MedlinePlus, via a retrospective chart study format, to ensure the minimum necessary information was available.

A comparison of existing educational material was completed by analyzing other patient education materials from resources such as UpToDate (the basics/beyond the basics), MedlinePlus, US National Library of Medicine of NIH, CDC, and the US Department of Health and Human Services to ensure that effective, most updated, current, and evidence-based information is provided to the patients from the educational materials.

Search words were incorporated to help locate educational articles in the existing EHR by article title. The educational materials studied were relevant to the common medical ailments in various clinical settings. Patient handouts were made available so that they could be sent through an electronic patient portal or printed out.

HCPs were educated in a session with pre- and post-lecture survey qualitative and quantitative questionnaires. The impact of these interventions was further assessed by pre- and post-intervention surveys after educating the HCPs.

Uniform, updated patient education handouts were created after comparison with standard resources. A pre-test survey questionnaire was used to discuss with HCPs their current knowledge and practices regarding patient education handouts and their understanding of the EHR for delivering uniform, standardized handouts. After the education session, HCPs' knowledge of using the EHR to deliver patient education handouts effectively was tested in a post-test survey questionnaire. After HCPs completed the pre- and post-test questionnaires, the data were analyzed (Figures 1–20).
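The pre/post comparison described above can be summarized as a mean per-respondent change. The sketch below is illustrative only, with made-up scores rather than the study's actual responses:

```python
import statistics

def mean_improvement(pre, post):
    """Mean per-respondent change between matched pre- and
    post-lecture survey scores (hypothetical 5-point scale)."""
    return statistics.mean(b - a for a, b in zip(pre, post))

# Hypothetical scores for five HCPs before and after the session.
improvement = mean_improvement([2, 3, 2, 4, 3], [4, 4, 3, 5, 4])
```

Pairing each respondent's pre- and post-scores, rather than comparing group averages, ensures the measured change reflects individual learning from the session.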

[Figures 1–20: pre- and post-survey responses. HCPs: healthcare providers]

[Figure caption: "Do you feel that attending and processing times required for fetching appropriate educational articles will be reduced if standard materials are outlined?"]

[Figure caption: "Do you think that efficient patient education is effective in creating and improving adherence to treatment, medication compliance, and for improving overall patient health?"]

Quality improvement (QI), problem-solving, and gap analysis

QI techniques, including PDSA cycles, were implemented in various clinical settings to improve patient education [ 1 ].

Reasons for Action

There is a need for updated and uniform patient education materials, in addition to verbal counseling, to help patients understand their disease condition, diagnostic studies, proper medication use, and when to call for help, thereby enhancing health literacy. Several patient education materials exist for various ailments, and there is a need to assess which is best suited for a given condition and contains concise information.

Initial State

We reviewed the available patient education material from the patient education database integrated in the EHR, and compared it with current standardized resources such as MedlinePlus, US National Library of Medicine of NIH, CDC, and the US Department of Health and Human Services. A thorough review of literature on patient education material was performed prior to starting the study.

We compared more than one source regarding the appropriateness of patient education, most specifically, how to use the medications and when to call for help. The quality of educational materials regarding disease education, diagnostics education, education on medication use, and education on when to call for help was assessed. The resources described above were utilized for comparison.

Gap Analysis

A graph of the gap analysis is displayed in Figure 21 below.

[Figure 21: gap analysis]

Solution Approach

It was noticed that the educational materials were available only in printed format. Enrolling patients on the electronic patient portal helps send educational materials to the patient as a soft copy in a faster and more efficient electronic format. 

Higher attending and processing time is required for fetching appropriate materials due to the unavailability of exact materials and using non-updated educational materials. Therefore, creating an index of educational articles on commonly encountered medical situations and ensuring that these articles are current and updated might make the process more efficient. 

There is a very limited time availability to impart specific educational elements with the limited appointment times. Appropriately detailed educational materials can be sent to the patient via a patient portal even after the patient encounter has ended. For patients with limited technology/computer use, educational materials can be mailed if they're missed during the encounter. 

Inadequate educational methods had been used; therefore, educational articles from resources other than the existing EHR databases were incorporated, and the index of educational articles on commonly encountered medical situations was applied.

Inefficient use of the EHR's operational capacity for patient education, including its integrated database, and a lack of training were identified. As a result, HCPs were trained to use educational materials for their patients efficiently, and patient education was prioritized.

Rapid Experiment: Plan-Do-Study-Act Cycle

Plan: Plan to use appropriate patient education material from several sources made available in the index of the educational articles.

Do: Counsel and verbally educate the patients, along with providing educational materials. Obtain a verbal read-back from the patients about how to use medications and when to call for help.

Study: Use the teach-back method to make patients explain back the information provided in their own words to see if they understood the disease, diagnostics, medication use, and when to call for help to improve health literacy.

Act: If a patient has questions, address them appropriately and if need be, set up a follow-up appointment. 

Actions Taken

An index of educational materials relevant to the common medical ailments in various clinical settings was created. This index of educational materials was to guide HCPs in choosing appropriate and relevant articles in an efficient, quick, and timely manner for patients in various clinical settings. Effective use of patient educational materials in the database incorporated into the EHR, including electronic methods such as the use of the patient portal to help educate patients, was promoted. Alternate resources other than those from the database in the existing EHR were utilized. Educational materials in printed format were made available for patients with limited technology access. The amount of time required for fetching appropriate materials was reduced by creating and referencing to an index for commonly encountered medical situations.

Efficient, faster patient education was imparted with reduced processing and attending time. Health education was prioritized to improve health literacy, the operational capacity of the database integrated in the EHR was used efficiently, and HCPs were trained to use patient education materials effectively.

What Helped

Fast, efficient, and effective patient education helped patients and their families significantly in medical care and shared decision-making based on the most current clinical evidence and patient preference. Creating an index of educational materials relevant to commonly encountered medical conditions reduced the processing and attending time required for fetching appropriate materials. Effectively using the patient education database incorporated into the EHR, including electronic methods such as the patient portal, and sending soft (electronic) copies reduced the need for printed materials. Correcting patients' misconceptions helped improve health literacy.

What Went Well

Helping engage, encourage, and empower the patients in participating in their own health care and treatment decisions. Enhanced patient satisfaction and better outcomes (for instance, educating a patient on osteopenia encouraged them to continue/start the vitamin D supplementation, participate in regular exercise, healthy diet preferences, and health promotion). 

What Hindered

A high HCP turnover rate with changing schedules hindered consistent use of patient education materials, as did an insufficient number of HCPs trained for patient education.

What Could Improve

Incorporating educational materials in the video format for patients who do not wish to read or talk about their health situations. Enhanced training of all the HCPs for effective and efficient use of patient education resources to allow consistency in effective patient education.

Personalized patient education engages, encourages, and empowers patients to participate in their own health care and treatment decisions, leading to better outcomes, decreased need for excess diagnostic testing, and enhanced patient satisfaction [ 3 , 4 , 5 ]. This requires motivation on the part of resident doctors, nurse practitioners, physician assistants, physicians, and allied staff.

The Advisory Committee on Training in Primary Care Medicine (ACTPCMD) recommends that Health Resources & Services Administration’s (HRSA) Title VII, Part C, Section 747 and 748 education and training programs should prepare students, faculty, and practitioners to involve patients and caretakers in shared medical decision-making which can happen well with better patient education process [ 6 ].

We as HCPs should cultivate good habits amongst ourselves to ensure patients know about their condition and treatment well. This will help increase medication and treatment compliance amongst patients and enhance the physician-patient relationship to a higher level.

Conclusions

To improve the physical and psychosocial well-being of a patient, personalized patient education materials, in addition to verbal education by HCPs, improve patient care through shared decision making and greater patient satisfaction. HCPs must understand patients' concerns and provide effective patient education and counseling for effective health care delivery.

The content published in Cureus is the result of clinical experience and/or research by independent individuals or organizations. Cureus is not responsible for the scientific accuracy or reliability of data or conclusions published herein. All content published within Cureus is intended only for educational, research and reference purposes. Additionally, articles published within Cureus should not be deemed a suitable substitute for the advice of a qualified health care professional. Do not disregard or avoid professional medical advice due to content published within Cureus.

The authors have declared that no competing interests exist.

Human Ethics

Consent was obtained or waived by all participants in this study.

Animal Ethics

Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue.


Environmental justice and education: How schools can help foster a sustainable future

Climate change conference brought together artists, activists, public officials, school board members, higher education leadership, district administrators and teachers to share their commitment to sustaining the environment.

Climate Change conference 2024

On April 4, USC Rossier and Bio Equity Ed, a community-based non-profit in Los Angeles, hosted a conference on “Climate Change and Environmental Justice: The Role of Schools in Planning for a Sustainable Future.” Artists, activists, public officials, school board members, higher education leadership, district administrators and teachers convened by the dozens in the LEED-certified building of the California Endowment. Panelists and conferencegoers alike shared their commitments to transforming communities and building alliances, to sustain just and meaningful change in the face of our changing climate.

In her opening remarks, Veda Ramsay-Stamps EdD ’23, USC Rossier alumna and founder of Bio Equity Ed, described how disconnected communities of color in most urban areas feel from nature, owing to historic injustices. She recounted the stories of Black and Brown students in Los Angeles, those who grow up miles from idyllic beaches and mountains but who know only the concrete beneath the nearest river and their feet. Artist Lauren Bon then described her work to bend and regenerate the L.A. River, providing water to the Los Angeles State Historic Park, which had once been a trainyard.

As Dean Pedro Noguera told the audience in his introduction: “Every problem facing the world today is an educational challenge. We have to learn what we need to do. And if we think of the climate crisis this way, as an educational challenge, the problem itself becomes less despairing.” Dean Noguera urged those in attendance not only to meet but to act, not merely to educate others to be resilient in the face of climate change but to inspire its solutions.

In breakout sessions, Distinguished Professor Gale Sinatra moderated a discussion with Imogen Herrick PhD ’23, a USC Rossier alumna now at the University of Kansas, and Paula Carbone, USC Rossier professor of clinical education. They described successful pedagogical approaches to inspiring students into climate action, integrating students’ own identities and experiences. USC Rossier Professor Tracy Poon Tambascia spoke with researchers and architects about how to transform schools and schoolyards into biophilic environments. Community organizers and non-profit leaders shared their experiences greening urban environments with trees and micro-farms, especially in convincing schools to plant more gardens. Teachers and administrators from California’s climate-sensitive Central Valley described their fleets of electric school buses and innovative sustainability programs and curriculum to move students of color.

The conference’s keynote speaker, University of California, Merced Chancellor Juan Sánchez Muñoz  outlined his university’s foundational commitment to sustainability in the Central Valley. At UC Merced, which was already the first research university in the United States to be carbon-neutral, Chancellor Muñoz is helping construct a center for sustainable agriculture, a model for what higher education can contribute to environmental sustainability and surrounding communities.

In an especially rousing plenary session, Stephen Ritz, a high school teacher from the South Bronx, described how he had turned his classroom and his school, in one of the most economically and environmentally deprived communities in the United States, into a vegetable garden. Ritz, the subject of an upcoming documentary called Generation Growth, has not only shared his plant-based curriculum with governments and classrooms around the world through his Green Bronx Machine non-profit; he also turned the very conference room in which he spoke into an organic, immersive experience with greenery.

Naomi Riley, representing Congresswoman Sydney Kamlager-Dove of California’s 37th District, who had directed millions of dollars to sustainability projects, told those in attendance early in the day, “the folks closest to the problem are usually closest to the solution.” Most often, no one is closer to the problems in our communities than the teachers and administrators of local schools. It is they, as Noguera said, who “must begin to think creatively and critically to address climate change and give hope to kids who believe there is no future.”




Subject report series: religious education

An Ofsted subject report into factors that influence the quality of religious education (RE) in schools in England.

Applies to England

Deep and meaningful? The religious education subject report.

This subject report evaluates the common strengths and weaknesses of religious education (RE) in the schools visited.

The report shares what we’ve learned and builds on the Ofsted religious education research review which identified factors that can contribute to high-quality school religious education curriculums, the teaching of the curriculum, assessment and systems.

  • See our collection of research reviews that look at the evidence about different curriculum subjects.
  • Find out more about the principles behind Ofsted’s research reviews and subject reports .



The Guardian view on school exclusions: to help children, gaps in the system must be closed

Improving provision for pupils outside mainstream education ought to be a top priority

The doubling in the rate of children suspended from school in England since before the pandemic provides alarming evidence of worsening problems with behaviour and lack of in-school support. Persistent disruption was the reason most often cited in figures covering last spring, followed by physical assault against another pupil. The steep rise in suspensions of four- and five-year-olds is a particularly alarming trend, with 10,256 primary-age children suspended for assaulting an adult.

Being out of school is associated with a range of adverse experiences and outcomes, the vast majority of those affected being boys. Pupils with special educational needs and disabilities (Send) and those eligible for free school meals are at disproportionate risk, as are black Caribbean boys and those from some other minority ethnic groups. For this reason, a broad consensus supports the view that exclusions, especially permanent ones, should be avoided whenever possible. The rise in the rate of suspensions, from 1.62% in 2016‑17 to 3.13% last year, points to something going badly wrong.

What looks at first glance like a big jump in permanent exclusions is in fact a return to pre-pandemic normality, with the secondary-school rate of 0.07% unchanged from 2016‑17. In this case, rising numbers reflect a larger cohort. There is, however, no cause for complacency. As Ofsted’s review of alternative provision showed, the current system for excluded pupils is patchy and full of holes. This is unacceptable, especially given that 82% of alternative-provision pupils have special educational needs. Support for Send pupils urgently needs to be improved within schools, while any future curriculum changes must be made with a broad range of pupils in mind.

Permanent exclusion usually follows a failure of some sort. But provision for those who are excluded, most of which is in pupil referral units (PRUs), should be seen as an opportunity too. Only 13% of settings fall below a “good” standard, according to Ofsted. But inspectors were sharply critical of the system’s “overall lack of cohesion”. A shortage of placements, lack of coordination with mainstream schools leading to unsettling transitions, an overreliance on the private sector and poor standards in unregistered settings all cause problems. Children with education, health and care plans (EHCPs) are better looked after than those without – a finding that anyone seeking to understand the huge rise in EHCPs (by 72% since 2019) must take on board.

New national standards for Send provision are promised, and would mark a step in the right direction, along with more joined-up work between education, health and care. Huge regional variations also require attention (London has the lowest exclusion rates), as do those between school types, with some academy trusts excluding pupils at many times the average rate. Special needs funding has been increased by £1bn and will top £10bn this year, so there has been investment. But the rising cost of living and worsening poverty and mental health mean that schools face extremely challenging conditions, while cash-strapped councils lack the funds to meet their legal obligations. A less disjointed system, with strengthened local partnerships at its heart, should be the direction of travel. The promotion of inclusion should be balanced with the building and valuing of alternatives to mainstream schooling that are not overwhelmed by need – or viewed merely as stopgaps or last resorts.



How Volunteering Improves Mental Health

February 02, 2022

By Trish Lockard


While sitting in a waiting room at a doctor’s office in 2014, I struck up a conversation with the woman sitting next to me. As we got acquainted, she told me she was deeply involved with an organization called the National Alliance on Mental Illness (NAMI). I hadn’t heard of it, but I was intrigued because I was not a stranger to mental illness.

My maternal grandmother experienced debilitating depression for years, culminating in her suicide in 1939. My mother was diagnosed with depression and experienced what I believe was PTSD, following her own mother’s suicide. I had grappled with mental health challenges myself, and I had been taking medication for depression and anxiety disorders for many years.

As I learned about NAMI that day, I knew instinctively this was an organization to which I could happily devote my time and energy. I had always shied away from volunteerism because no cause had ever inspired the passion required to keep me motivated. Now, eight years later, I am still a NAMI member and vocal activist for mental health.

With some personal reflection and review of scientific literature, I’ve come to understand that volunteering itself can be an act of self-care.

The Benefits of Volunteerism

Naturally, the dialogue surrounding activism and volunteerism centers on how others will benefit from volunteer work that you do. But years of research demonstrate that there are benefits for volunteers themselves. Whether you are a family member or caregiver for someone with a mental health condition — or have the lived experience yourself — volunteering can be a positive step toward improving your health and yield many benefits:

  • Reducing Stress My work with NAMI demonstrates the ways in which volunteering can counteract the effects of stress, anger and anxiety. This kind of work was my first exploration into long-term volunteerism — and, as is my nature, I sometimes felt a little anxious as I prepared to lead an affiliate board meeting or teach or speak to a group on behalf of NAMI. But I always rose to the occasion because the cause mattered so greatly to me. And afterward, I would feel exhilarated and thrilled by my accomplishments. Gradually, my focus on the work, and the gratitude I received from it, surpassed other issues in my life that caused negative emotions. There was too much to accomplish and too much to look forward to for me to feel down. Ultimately, I noticed that I slept better at night with the knowledge that I was part of a greater good.
  • Increasing Happiness Research has found a correlation between volunteering and happiness. A 2020 study conducted in the United Kingdom found those who volunteered reported being more satisfied with their lives and rated their overall health as better. Respondents who volunteered for at least one month also reported having better mental health than those who did not volunteer.
  • Developing Confidence Volunteering is an opportunity to develop confidence and self-esteem. Your role as a volunteer can also give you a sense of pride and identity, something that can be hard to come by for people with a mental health diagnosis. The better you feel about yourself, the more likely you are to have a positive view of your life and future. Moreover, I’ve found that the sense of accomplishment from serving others can raise self-esteem and self-confidence.

I served for years as the president of my local NAMI affiliate’s board. I was the family and caregiver support group facilitator and, to this day, offer my services as a NAMI Family-to-Family certified instructor. My passion for offering education and support for people with lived experience and their families hasn’t waned because the need hasn’t waned.

Getting Started Volunteering

In 2018, my long-time friend, psychologist Terri L. Lyon, hoped to create an easy-to-follow roadmap for people to identify the cause they are most passionate about (because focusing on one issue is more effective) and determine how to use the gifts they already possess to make a difference for that cause. With me as her editor, she published the book “What’s On Your Sign?” in which she introduced her unique “5-Step Activism Path.” The steps are:

  • Find your passion by creating a vision of how you want to change the world
  • Identify the unique gifts you can bring to this activism
  • Craft a unique activism opportunity ideally suited to you
  • Monitor your long-term effectiveness
  • Stay motivated and avoid burnout

Perhaps these steps seem intimidating at first glance – but with reflection and time, they can lead to a meaningful new path. One example of following these steps is Knoxville jewelry artist Christinea Beane. As someone with mental illness, Christinea makes jewelry for other people struggling with their mental health, to offer hope, raise awareness and remind them that they are not alone.

As I address in the book I co-wrote with Dr. Lyon, “Make a Difference with Mental Health Activism,” we can’t underestimate the personal and wide-reaching impacts of volunteering and activism, particularly in the mental health field. Your work could not only boost your emotional well-being, it could also be a critical step toward ending stigma, achieving parity, and increasing mental health services and support. You can make a difference.

Trish Lockard has been a volunteer for NAMI Tennessee since 2014. Mental health care became her personal passion following her family’s experience with mental illness. Trish is a nonfiction editor, specializing in memoir, and a nonfiction writing coach at Strike The Write Tone. Contact Trish at [email protected] .


Prime Minister of Canada Justin Trudeau

Strengthening Canadian research and innovation

Gen Z and millennials are the engine of our economy. Everything that is created, built, served, and sold in Canada is increasingly being done by millennials and Gen Z. They’re the young parents, the students doing cutting-edge research, the young entrepreneurs with startup ideas. Canada’s success depends on their success.

To secure Canada’s competitive edge, we need to support and empower tomorrow’s problem solvers and make sure every generation reaches their full potential. That’s why we’re investing in cutting-edge research – to create more good jobs, including in innovation and technology – while making education more affordable.

The Prime Minister, Justin Trudeau, today highlighted an over $4.6 billion package of measures from Budget 2024 to strengthen Canadian research and innovation.

Here’s what we’re doing:

Providing $2.6 billion in core research grant funding, scholarships, and fellowships to support our researchers and their ground-breaking discoveries:

  • This includes $1.8 billion in core research grant funding, a 30 per cent increase over five years in Canada’s core research grant programs that support faculty-led research projects. It will indirectly support thousands of graduate students and post-doctoral fellows in their research, including their work on climate action, health emergencies, artificial intelligence, and psychological health.
  • And $825 million over five years to the granting councils to increase the annual value of master’s and doctoral students’ scholarships to $27,000 and $40,000, respectively, and post-doctoral fellowships to $70,000. To make it easier for students and fellows to access support, the enhanced suite of scholarships and fellowship programs will be streamlined into one talent program. This new program will also increase the number of graduate students and post-doctoral fellows benefiting from research scholarships and fellowships by approximately 1,720 each year.
  • This funding will also provide $30 million over three years for Indigenous researchers and their communities, distributed as $10 million each for First Nation, Métis, and Inuit partners.
  • To provide better co-ordination across the federally funded research ecosystem, we will bring together our three research funding organizations within a single new capstone research funding organization. The granting councils will continue to exist within this new organization, and continue supporting excellence in investigator-driven research, including linkages with the health portfolio.
  • Together, these measures will play a critical role in not only supporting Canadian researchers in solving the world’s greatest challenges – but building a generation of highly educated, highly skilled individuals as a foundation of Canada’s future economic growth and prosperity.

Investing $1.3 billion to keep post-secondary education affordable:

  • This funding will extend for an additional year the increase in full-time Canada Student Grants from $3,000 to $4,200 per year and in interest-free Canada Student Loans from $210 to $300 per week. It also increases other Canada Student Grants by 40 per cent.
  • It will also increase the housing allowances used by the Canada Student Financial Assistance Program when determining financial need, which will provide additional student aid to approximately 79,000 students each year.
  • These investments will make sure that our younger generations can access quality post-secondary education at an affordable cost.

Investing $734 million to support Canada’s world-leading research infrastructure and institutes :

  • Supporting TRIUMF, Canada’s sub-atomic physics research laboratory, located at the University of British Columbia. This investment will upgrade infrastructure at the facility, keep Canada at the forefront of physics research, and enable new medical breakthroughs and treatments, from drug development to cancer therapy.
  • Investing in CANARIE, a national not-for-profit organization that manages Canada’s ultra high-speed network to connect researchers, educators, and innovators.
  • Providing funding to Saskatoon-based Canadian Light Source, helping scientists and researchers to continue making breakthroughs in areas ranging from climate-resistant crop development to sustainable mining processes.
  • Supporting the Arthur B. McDonald Canadian Astroparticle Physics Research Institute, headquartered at Queen’s University. This funding will help engineers, researchers, and scientists innovate in areas like clean technology and medical imaging.
  • Investing in the University of Saskatchewan’s Centre for Pandemic Research, advancing the study of high-risk pathogens to support vaccine and therapeutic development.

These investments will unlock and accelerate economic growth for Canada. We’re creating opportunities, boosting innovation, and accelerating economic growth – and that’s just some of the things that we are proposing in Budget 2024. Alongside these measures, we’re building more homes faster, investing in health care, and making life more affordable to make sure every generation can get ahead.

“Budget 2024 is about ensuring fairness for the next generation. With these historic investments, we’re investing in Canadian students, researchers, and innovators so they can solve the problems of tomorrow. This will unlock massive economic growth and make Canada stronger, fairer, and more prosperous.” The Rt. Hon. Justin Trudeau, Prime Minister of Canada
“Our government is securing the future of top-tier research and innovation in Canada by investing in younger generations today. This is about fostering homegrown research talent and encouraging Canadian brainpower to scale-up their innovative ideas in Canada ‒ all as part of our work to help younger generations get ahead.” The Hon. Chrystia Freeland, Deputy Prime Minister and Minister of Finance
“Today’s research is tomorrow’s economy. That’s why Budget 2024 supports Canadian researchers at the forefront of discovery and innovation as they continue to position Canada as a global leader in science research. These investments reflect the ambition and vision of our next generation of researchers.” The Hon. François-Philippe Champagne, Minister of Innovation, Science and Industry

Quick Facts

  • An estimated total cost of $1.1 billion in 2024-25 for the increased student grants and loans, which will be available for the 2024-25 school year.
  • An estimated cost of $154.6 million over five years, starting in 2024-25, and $32.3 million per year ongoing to modernize shelter allowances.
  • $399.8 million over five years, starting in 2025-26, for TRIUMF.
  • $176 million over five years, starting in 2025-26, for CANARIE.
  • $83.5 million over three years, starting in 2026-27, for Canadian Light Source.
  • $45.5 million over five years, starting in 2024-25, for the Arthur B. McDonald Canadian Astroparticle Physics Research Institute.
  • $30 million over three years, starting in 2024-25, for the University of Saskatchewan’s Centre for Pandemic Research at the Vaccine and Infectious Disease Organization.
  • Since 2016, the federal government has committed: more than $16 billion to support scientific discovery, develop Canadian research talent, and attract top researchers from around the planet; and over $2 billion to foster growth across Canada’s AI ecosystem and digital infrastructure.
  • Since 2016, the federal government has supported more than 638,000 post-secondary students per year, on average, with more than $38.4 billion in up-front grants and interest-free loans – enabling young Canadians to pursue their education, regardless of their background. To ensure this support keeps up with the cost of an education, the government permanently increased Canada Student Grants by 50 per cent to $3,000. As outlined above, Budget 2024 announced the government’s intention to extend for an additional year the increase in full-time Canada Student Grants from $3,000 to $4,200 per year, and interest-free Canada Student Loans from $210 to $300 per week.
  • The Government of Canada’s Budget 2024 was tabled in the House of Commons by the Deputy Prime Minister and Minister of Finance on April 16, 2024.
  • The Strategic Science Fund, which announced the results of its first competition in December 2023, will provide support to 24 third-party science and research organizations starting in 2024-25.
  • Canada recently concluded negotiations to be an associate member of Horizon Europe, which would enable Canadians to access a broader range of research opportunities under the European program starting this year.
  • Federal funding for extramural and intramural science and technology was 44 per cent higher in 2023 than in 2015.
  • Budget 2024 also includes a $2.4 billion package of measures to accelerate job growth in Canada’s AI sector, boost productivity by helping researchers and businesses develop and adopt AI, and ensure this is done responsibly.

Related Products

  • Backgrounder: Economic Growth and Productivity
  • Backgrounder: Fairness for Younger Generations

Associated Links

  • Fairness for Every Generation
  • Budget 2024: Fairness for Every Generation
