National Research Council (US) Committee on Assessing Behavioral and Social Science Research on Aging; Feller I, Stern PC, editors. A Strategy for Assessing Science: Behavioral and Social Research on Aging. Washington (DC): National Academies Press (US); 2007.

4 Progress in Science

This chapter examines theories and empirical findings on the overlapping topics of progress in science and the factors that contribute to scientific discoveries. It also considers the implications of these findings for behavioral and social science research on aging. The chapter first draws on contributions from the history and sociology of science to consider the nature of scientific progress and the paths that lead to realizing the potential scientific and societal outcomes of scientific activity. It considers indicators that might be used to assess progress toward these outcomes. The chapter then examines factors that contribute to scientific discovery, drawing eclectically on the history and sociology of science as well as on theories and findings from organizational behavior, policy analysis, and economics.

THEORIES OF SCIENTIFIC PROGRESS

The history and sociology of science have produced extensive bodies of scholarship on some of these themes, generating in the process significant ongoing disagreements among scholars (see, e.g., Krige, 1980; Cole, 1992; Rule, 1997; Bowler and Morus, 2005). Most of this work focuses on processes and historical events in the physical and life sciences; relatively little of it addresses the social and behavioral sciences (or engineering, for that matter), except possibly subfields of psychology (e.g., Stigler, 1999). It is legitimate to ask whether this research even applies to the behavioral and social sciences (Smelser, 2005).

We attempt neither encyclopedic coverage nor a resolution of the debates, past and continuing, on such questions. Rather, we draw on this research to make more explicit the main issues underlying the prospective assessment of scientific fields for the purpose of setting priorities in federal research agencies, given the uncertain outcomes of research.

The history of science has produced several general theories about how science develops and evolves over long periods of time. A 19th-century view is that of Auguste Comte, who argued that there is a hierarchy of the sciences, from the most general (astronomy), followed historically and in other ways by physics, chemistry, biology, and sociology. Sciences atop the hierarchy are characterized as having more highly developed theories; greater use of mathematical language to express ideas; higher levels of consensus on theory, methods, and the significance of problems and contributions to the field; greater use of theory to make verifiable predictions; faster obsolescence of research, with citations dropping off rapidly over time; and relatively fast progress. Sciences at the bottom of the hierarchy are said to exhibit the opposite characteristics (Cole, 1983).

Many adherents to this hierarchical view place the natural sciences toward the top of the hierarchy and the social sciences toward the bottom. In this view, advances in the “higher” sciences, conceived in terms of findings, concepts, methodologies, or technologies that are thought to be fundamental, are held to flow down to the “lower” sciences, while the reverse flow rarely occurs. Although evidence of such a unidirectional flow from donor to borrower disciplines does exist (Losee, 1995), there are counterexamples. Historians and sociologists of science have offered evidence against several of these propositions, and they particularly dispute the claimed association of natural science with the top of the hierarchy and social science with the bottom (e.g., Bourdieu, 1988; Knorr Cetina, 1999; Steinmetz, 2005). The picture is more complex, as noted below.

By far the best known modern theory of scientific progress is that of Thomas Kuhn (1962), which focuses on the major innovations that have punctuated the history of science in the past 350 years, associated with such investigators as Copernicus, Galileo, Lavoisier, Darwin, and Einstein. Science, in Kuhn’s view, is usually a problem-solving activity within clear and accepted frameworks of theory and practice, or “paradigms.” Revolutions occur when disparities or anomalies arise between theoretical expectation and research findings that can be resolved only by changing fundamental rules of practice. These changes occur suddenly, Kuhn claims, in a process akin to Gestalt shifts: in a relative instant, the perceived relationships among the parts of a picture shift, and the whole takes on a new meaning. Canonical examples include the Copernican idea that the Earth revolves around the Sun, Darwin’s evolutionary theory, relativity in physics, and the helical model of DNA.

A quite different account is that of John Desmond Bernal (1939). Inspired by Marxist social science and ideals of planned social progress, Bernal saw basic science progressing most vigorously when it was harnessed to practical efforts to serve humanity’s social and economic needs (material well-being, public health, social justice). Whereas in Kuhn’s view science progressed according to its inner logic, Bernal asserted that intellectual and practical advances could be engineered and managed.

Another tradition of thought, stemming from Derek Price’s (1963) vision of a quantitative “science of science,” has focused less on how innovations arise than on how they spread and how their full potential is exploited by small armies of scientists. Mainly pursued by sociologists of science, this line of analysis has focused on the social structure of research communities (e.g., Hagstrom, 1965), competition and cooperation in institutional systems (Merton, 1965; Ben-David, 1971), and structured communication in schools of research or “invisible colleges” (e.g., Crane, 1972). These efforts, while focused mainly on how science works, may imply principles for stimulating scientific progress and innovation.

There are also evolutionary models of scientific development, such as that of the philosopher David Hull (1988). Extending Darwin’s account of evolution by variation and selection, Hull argues that scientific concepts evolve in the same way, by social or communal selection of the diverse work of individual scientists. In evolutionary views, science continually produces new ideas, which, like genetic mutations, are essentially unpredictable. Their ability to survive and expand their niches depends on environmental factors.

Bruno Latour and Steve Woolgar (1979) also offer an account of a selective struggle for viability among scientific producers. The vast majority of scientific papers quickly disappear into the maw of the scientific literature. The few that are used by other scientists in their work are the ones that determine the general direction of scientific progress. In evolutionary and competitive models, a possible function of science managers is to shape the environment that selects for ideas so as to propagate research that is judged to promote the agency’s scientific and societal goals.

Stephen Cole (1992) emphasized a distinction between the frontier and the core of science that seems consistent with an evolutionary view. Work at the frontiers of science is characterized by considerable disagreement; as science progresses over time, disagreement decreases as processes such as empirical confirmation and paradigm shift winnow out some ideas, while others become part of the received wisdom.

Although the view that different sciences have similar features at their respective frontiers is not unchallenged (Hicks, 2004), we have found the idea of frontier and core science to be useful in examining the extent to which insights from the history and sociology of science, fields that have concentrated their attention predominantly on the natural sciences, also apply to the social and behavioral sciences.

Cole (1983, 1992) reports considerable evidence suggesting that different fields of science have similar features at the frontier, even if they are very different at the core. Regarding the review of research proposals and journal submissions, an activity at the frontier of knowledge, he concludes that consensus about the quality of research is not systematically higher in the natural sciences than in the social sciences, citing, for example, standard deviations of reviewers’ ratings of proposals to the National Science Foundation that were twice as large in meteorology as in economics.

In the core, represented by undergraduate textbooks, the situation appears to be quite different. Cole (1983) found that in textbooks published in the 1970s, the median publication date of the references cited in both physics and chemistry was before 1900, while the median publication date in sociology was post-1960. Sociology texts cited an average of about 800 references, while chemistry and physics texts generally cited only about 100. Moreover, a comparison of texts from the 1950s and the 1970s indicated that in physics and chemistry the material covered, as well as the sources cited, was much the same in both periods, whereas in sociology the newer texts cited only a small proportion of the sources cited in the earlier texts.

Cole interpreted these findings as indicating that core knowledge in physics and chemistry was both more consensual and more stable over time than core knowledge in sociology. Such findings suggest that even though sciences may differ greatly at the core, for the purpose of assessing the progress of science at the frontiers of research fields, insights from the study of the natural sciences are likely to apply to the social sciences as well. They also point to the need to differentiate between “vitality,” as indicated by ferment at the frontier, and scientific progress, as indicated by movement of knowledge from the frontier to the core. These findings suggest that the policy challenges for research managers making prospective judgments at the frontiers of research fields are quite similar across the sciences.

NATURE OF SCIENTIFIC PROGRESS

Scientific progress can be of various types—discoveries of phenomena, theoretical explanations or syntheses, tests of theories or hypotheses, acceptance or rejection of hypotheses or theories by the relevant scientific communities, development of new measurement or analytic techniques, application of general theory to specific theoretical or practical problems, development of technologies or useful interventions that improve human health and well-being, and so forth. Consequently, many different developments might be taken as indicators, or measures, of progress in science.

Science policy decision makers need to consider the progress and potential of scientific fields in multiple dimensions, accepting that the absence of detectable advance on a particular dimension is not necessarily evidence of failure or poor performance. Drawing on Weinberg’s (1963) classification of internal and external criteria for formulating scientific choices, we make the practical distinction between internally defined types of scientific progress, that is, elements of progress defined by intellectual criteria, and externally defined types of progress, defined in terms of the contributions of science to society. Managers of public investments in science need to be concerned with both.

Scientific Progress Internally Defined

The literatures in the history of science and in science studies include various analyses and typologies of scientific and theoretical progress (e.g., Rule, 1997; Camic and Gross, 1998; Lamont, 2004). This section presents a distillation of insights from this research into a short checklist of major types of scientific progress. The list is intended to remind participants in science policy decisions that assess scientific fields of the variety of kinds of progress science can make. These broad categories overlap and are interdependent, with each kind of progress having the potential to influence the others, directly or indirectly; the list is intended to simplify a very complex phenomenon to a manageable level.

Types of Scientific Progress

Discovery. Science makes progress when it demonstrates the existence of previously unknown phenomena or relationships among phenomena, or when it discovers that widely shared understandings of phenomena are wrong or incomplete.

Analysis. Science makes progress when it develops concepts, typologies, frameworks of understanding, methods, techniques, or data that make it possible to uncover phenomena or test explanations of them. Thus, knowing where and how to look for discoveries and explanations is an important type of scientific progress. Improved theory, rigorous and replicable methods, measurement techniques, and databases all contribute to analysis.

Explanation. Science makes progress when it discovers regularities in the ways phenomena change over time or finds evidence that supports, rules out, or leads to qualifications of possible explanations of these regularities.

Integration. Science makes progress when it links theories or explanations across different domains or levels of organization. Thus, science progresses when it produces and provides support for theories and explanations that cover broader classes of phenomena or that link understandings emerging from different fields of research or levels of analysis.

Development. Science makes progress when it stimulates additional research in a field or discipline, including research critical of past conclusions, and when it stimulates research outside the original field, including interdisciplinary research and research on previously underresearched questions. A field also develops when it attracts new people to work on an important research problem.

Recent scientific activities supported by the Behavioral and Social Research (BSR) Program of the National Institute on Aging (NIA) have yielded progress in the form of scientific advances of most of the above types. We cite only a few examples.

  • Discovery: The improving health of elderly populations. One example is the analysis of data from Sweden, which has the longest-running national data set on longevity; it has shown that the maximum human life span has been increasing since the 1860s, that the rate of increase has accelerated since 1969, and that most of the change is due to improved probabilities of survival past age 70 (Wilmoth et al., 2000). Parallel trends have been discovered among the elderly: physical disability in the United States declined from 26 percent of the elderly population in 1982 to 20 percent in 1999 (e.g., Manton and Gu, 2001), as did cognitive impairment (e.g., Freedman et al., 2001, 2002). Such findings together suggest overall improvements in the health of elderly populations in high-income countries.
  • Analysis: Longitudinal datasets for understanding processes of aging. The Health and Retirement Study (Juster and Suzman, 1995), a major ongoing longitudinal study of the health and socioeconomic condition of aging Americans in which BSR played a central entrepreneurial role, has provided data that made possible, among other things, some of the discoveries about declining disability already noted. International comparative data sets on health risk factors and health outcomes, such as the Global Burden of Disease dataset (Ezzati et al., 2002), have also made significant scientific progress possible.
  • Explanation: Questioning and refining understandings. Several BSR-funded research programs have yielded findings that called into question widely held views about aging processes. Examples include findings that question the beliefs that more health care spending leads to better health outcomes (Fisher et al., 2003a, 2003b), that increasing life expectancy implies increased health care expenditures (Lubitz et al., 2003), that unequal access to health care is the main explanation for higher mortality rates among older people of lower socioeconomic status (e.g., Adda et al., 2003; Adams et al., 2003), and that aging is a purely biological process unaffected by personal or cultural beliefs (Levy, 2003). Other BSR-sponsored research has provided evidence that a previously noted association of depression with heart disease may be explained in part by a process in which negative affect suppresses immune responses (Rosenkranz et al., 2003).
  • Integration and development: Creating a biodemography of aging. BSR supported and brought together “demographers, evolutionary theorists, genetic epidemiologists, anthropologists, and biologists from many different scientific taxa” (National Research Council, 1997:v) to seek coherent understandings of human longevity that are consistent with knowledge at levels from genes to populations and with data from human and nonhuman species. This effort has helped to attract researchers from other fields into longevity studies, add vigor to this research field, and put the field on a broader and firmer interdisciplinary base of knowledge.

Paths to Scientific Progress

Scientific progress is widely recognized as nonlinear. Some new ideas have led to rapid revolutions, while other productive ideas have had lengthy gestation periods or met protracted resistance. Still other new ideas have achieved overly rapid, faddish acceptance followed by quick dismissal. An earlier generation of research in the history and sociology of science documented variety and surprise as characteristics of scientific progress, but it was not followed by broad transdisciplinary studies that developed and tested general theories of scientific progress.

No theory of scientific progress exists, or is on the horizon, that allows prediction of the future development of new scientific ideas or specifies how the different types of scientific progress influence each other—although they clearly are interdependent. Rather, recent studies by historians of science and practicing scientists typically emphasize the uncertainty surrounding which of a series of findings emerging at any point in time will be determinative of the most productive path for future scientific inquiries and indeed of the ways in which these findings will be used. Only in hindsight does the development of various experimental claims and theoretical generalizations appear to have the coherence that creates a sense of a linear, inexorable path.

Science policy seems to be in particular need of improved basic understanding of the apparently uncertain paths of scientific progress as a basis for making wiser, more efficient investments. Without this improved understanding, extensive investments in collecting and analyzing data on scientific outputs are unlikely to provide valid predictors of some of the most important kinds of scientific progress. Political and bureaucratic pressures to plan for steady progress and to assess it with reliable and valid performance indicators will not eliminate the gaps in basic knowledge that must be filled in order to develop such indicators.

Despite the incompleteness of knowledge, the findings of earlier research remain a suggestive and potentially useful resource for practical research managers. They suggest a variety of state-of-knowledge propositions that are consistent with our collective experience on multiple advisory and review panels across several federal science agencies. We consider the following propositions worthy of consideration in discussions of how science managers can best promote scientific progress:

  • Scientific discoveries are initially the achievements of individuals or small groups and arise in varied and largely unpredictable ways: the larger and more important the discovery, the less predictable it would have been.
  • The great majority of scientific products have limited impact on their fields; there are only a few major or seminal outputs. Whether or not new scientific ideas or methods become productive research traditions depends on an uncertain process that may extend over considerable time. Sometimes the impacts of research are quite different from those anticipated by the initial research sponsors, the researchers, or the individuals or organizations that first make use of it. For example, the Internet, which was developed as a means of fostering scientific communication among geographically dispersed researchers, has now become a leading channel for entertainment and retail business, among other things.
  • Existing procedures for allocating federal research funds are most effective at the mid-level of scientific innovation, where there is consensus among established fields about the importance of questions and the direction and content of emerging questions in those fields.
  • The uncertainties of scientific discovery and the difficulties of accurately identifying turning points and sharp departures in scientific inquiry suggest that research managers will do best with a varied portfolio of projects, including both mainstream and discontinuous or exploratory research projects. These uncertainties also suggest that assessment of a program’s investments in research is most appropriately made at the portfolio rather than the project level.
  • The portfolio concept also applies to a program’s investments in analysis: in advancing the state of theoretical understanding, tools, and databases. Scientific progress in both the natural and social sciences may either follow or precede the development of new tools (instruments, models, algorithms, databases) that apply to many problems. Contrary to simple models of scientific progress that have theory building as the grounding for empirical research or data collection as the foundation for theory building, the process is not linear or unidirectional. Program investments in theory building, tool development, and data collection can all contribute to scientific progress, but it is very difficult to predict which kinds of investments will be most productive at any given time (see National Research Council, 1986, 1988; Smelser, 1986).
  • Scientific progress sometimes arises from efforts to solve technological or social problems in environments that combine concerns with basic research and with application. It can also arise in environments insulated from practical concerns. And progress can involve first one kind of setting and then the other (see Stokes, 1997).

Interdisciplinarity and Scientific Progress

The claim that the frontiers of science are generally located at the interstices between and intersections among disciplines deserves explicit attention because it is increasingly found in the conclusions and recommendations of national commissions and NRC committees (e.g., National Research Council, 2000b; Committee on Science, Engineering, and Public Policy, 2004) and in statements by national science leaders. Scholarship in the history and sociology of science is consistent with competing views on this claim. A considerable body of recent scholarship has noted that exciting developments often come at the edges of established research fields and at the boundaries between fields (Dogan and Pahre, 1990; Galison, 1999; Boix-Mansilla and Gardner, 2003; National Research Council, 2005b). Moreover, interdisciplinary thinking has become more integral to many areas of research because of the need to understand “the inherent complexity of nature and society” and “to solve societal problems” (National Research Council, 2005b:2).

The idea is that scientific advances are most likely to arise, or are most easily promoted, when scientists from different disciplines are brought together and encouraged to free themselves from disciplinary constraints. A good example supporting this idea is the rapid expansion and provocative results of research on the biodemography of aging that followed the 1996 NRC workshop on this topic (National Research Council, 1997). The workshop occasioned serious efforts to develop and integrate related research fields.

To the extent that interdisciplinarity is important to scientific progress and to gaining the potential societal benefits of science, it is important for research managers to create favorable conditions for interdisciplinary contact and collaboration. In fact, for some time BSR has been seeking explicitly to promote both multidisciplinarity and interdisciplinarity (Suzman, 2004). For example, when the Health and Retirement Study was started in 1990, it was explicitly designed to be useful to economists, demographers, epidemiologists, and psychologists, and explicit efforts were made to convince those research communities that the study was not for economists only. BSR has reorganized itself and redefined its areas of interest on issue-oriented, interdisciplinary lines; sought out leading researchers and funded them to do what was expected to be ground-breaking and highly visible research in interdisciplinary fields; supported workshops and studies to define new interdisciplinary fields (e.g., National Research Council, 1997, 2000a, 2001c); created broadly based multidisciplinary panels to review proposals in emerging interdisciplinary areas; and funded databases designed to be useful to researchers in multiple disciplines for addressing the same problems, thus creating pressure for communication across disciplines. Some of the results, such as those already mentioned, have been notably productive and potentially useful.

The available studies seem to support the following conclusions about the favorable conditions for interdisciplinary science (Klein, 1996; Rhoten, 2003; National Research Council, 2005b):

  • Successful interdisciplinary research requires both disciplinary depth and breadth of interests, visions, and skills, integrated within research groups.
  • The success of interdisciplinary research groups depends on institutional commitment and research leadership with clear vision and teambuilding skills.
  • Interdisciplinary research requires communication among people from different backgrounds. This may take extra time and require special efforts by researchers to learn the languages of other fields and by team leaders to make sure that all participants both contribute and benefit.
  • New modes of organization, new methods of recruitment, and modified reward structures may be necessary in universities and other research organizations to facilitate interdisciplinary interactions.
  • Problem-oriented organization of research institutions, and the ability to reorganize as problems change, both facilitate interdisciplinary research.
  • Funding organizations may need to design their proposal and review criteria to encourage interdisciplinary activities.

Several conditions favorable to interdisciplinary collaboration can be affected by the actions of funders of research. For example, science agencies can encourage or require interdisciplinary collaboration in the research they support, support activities that specifically bring researchers together from different disciplines to address a problem of common interest, provide additional funds or time to allow for the development of effective interdisciplinary communication in research groups or communities, and organize their programs internally and externally around interdisciplinary themes. They can ask review panels to consider how well groups and organizations that propose interdisciplinary research provide conditions, such as those above, that are commonly associated with successful interdisciplinary research. And they might also ensure that groups reviewing interdisciplinary proposals include individuals who have successfully led or participated in interdisciplinary projects.

Encouraging interdisciplinary research may have pitfalls, though. It is possible for funds to be offered but for researchers to fail to propose the kinds of interdisciplinary projects that were hoped for. Sometimes interdisciplinary efforts take hold, but they fail to produce important scientific advances or societal benefits. Interdisciplinarity can also become a mantra. If disciplines are at times presented as silos—independent units with no connections among them—interdisciplinary fields may also become silos that happen to straddle two fields. At any point in time, an observer can identify numerous new research trajectories, several involving novel combinations of existing disciplines. Thus, alongside recently institutionalized fields, such as biotechnology, materials science, information sciences, and cognitive (neuro)sciences, are claimants for scientific attention and programmatic support, such as vulnerability sciences, prevention science, and neuroeconomics.

Little is known about how to predict whether a new interdisciplinary field will take off in a productive way. Floral metaphors about budding fields are not always carried to the desired conclusion: many budding fields lack the intellectual or methodological germplasm to do more than pop up and quickly wither. It is at least as difficult to assess the prospects of interdisciplinary fields as of disciplinary ones, and probably more so (Boix-Mansilla and Gardner, 2003; National Research Council, 2005b).

Federal agency science managers can act as entrepreneurs of interdisciplinary fields, so the expansion of a field from the interest of a small number of researchers into a recognizable cluster of activity may reflect the level of external support from federal agencies and foundations. As a field develops, though, a good indicator of vitality may be the exchange of ideas with other fields and particularly the export of ideas from the new field to other scientific fields or to practical use. But progress in interdisciplinary fields may be hard to determine from such indicators alone. Fields can be vital without exporting ideas to other fields. Policy analysis, now a well-established academic field of instruction and research, engages researchers from several social science disciplines, but it is a net importer of ideas (MacRae and Feller, 1998; Reuter and Smith-Ready, 2002).

It is worth noting that support for interdisciplinary research, although it has unique benefits, may be a relatively high-risk proposition because it requires high-level leadership skills and innovative organizational structures. These characteristics of interdisciplinary research may pose special challenges for research managers in times of tightening budgets, when pressures for risk aversion may conflict with the need to develop innovative approaches to scientific questions and societal needs.

Contributions of Science to Society

In government agencies with practical missions, investments in science are appropriately judged both on internal scientific grounds and on the basis of their contributions to societal objectives. In the case of NIA, these objectives largely concern the improved longevity, health, and well-being of older people (National Institute on Aging, 2001). There are many ways research can contribute to these objectives. For simplicity, we group the societal objectives of science into four broad categories.

Identifying issues. Science can contribute to society by identifying problems relating to the health and well-being of older people that require societal action or sometimes showing that a problem is less serious than previously believed.

Finding solutions. Science can contribute to society by developing ways to address issues or solve problems, for example, by improving prevention or treatment of diseases, improving health care delivery systems, improving access to health care, or developing new products or services that contribute to the longevity, health, or quality of life for older people in America.

Informing choices. Science can contribute to society by providing accurate and compelling information to public officials, health care professionals, and the public and thus promoting better informed choices about life and health by older people and better informed policy decisions affecting them.

Educating the society. Science can contribute to society by producing fundamental knowledge and developing frameworks of understanding that are useful for people facing their own aging and the aging of family members, making decisions in the private sector, and participating as citizens in public policy decisions. Science can also contribute by educating the next generation of scientists.

Research on science utilization, a field that was most vital in the 1970s and has seen some revival recently, has examined the ways in which scientific results, particularly social science results, may be used, notably in government decisions (for recent reviews, see Landry et al., 2003, and Romsdahl, 2005; for some classic treatments, see Caplan, 1976; Weiss, 1977, 1979; Lindblom and Cohen, 1979). In terms of the above typology, this research mainly examines the use or nonuse of research results for informing choices by public policy actors. It does not much address the use of results by ordinary citizens, medical practitioners, the mass media, or other users involved in identifying issues and finding solutions, other than policy solutions. The most general classification in this research tradition distinguishes the use of social science for enlightenment (i.e., providing a broad conceptual base for decisions) from its use as instrumental input (e.g., providing specific policy-relevant data). In addition, researchers note that social science results may be used to provide justification or legitimization for decisions already reached or as a justification for postponing decisions (Weiss, 1979; Oh, 1996; Romsdahl, 2005).

Federal science program managers face the challenges of establishing causal linkages between past research program activities and societal impacts and of projecting societal impacts from current and planned research activities. The challenges are substantial. Even when findings from social and behavioral science research influence policies and practices in the public and private sectors and may therefore be presumed to contribute to human well-being, they are seldom determinative. Indicators exist or could be created for many societal impacts of research (Cozzens et al., 2002; Bozeman and Sarewitz, 2005). In addition, evidence that the results of research are used, for example, in government decisions, may be considered an interim indicator of ultimate societal benefit, presuming that the decisions promote people’s well-being.

Limits exist, however, to the ability of a mission agency to translate findings from the research it funds into practice. For the research findings of the National Institutes of Health (NIH) in general and NIA-BSR in particular, contributions to societal or individual well-being require the complementary actions of myriad other actors and organizations in government and the private sector, including state and local governments, insurance companies, nursing homes, physicians’ practices, and individuals. According to Balas and Boren (2000:66), “studies suggest that it takes an average of 17 years for research evidence to reach clinical practice.” Similarly lengthy processes and circuitous connections link research findings to more enlightened or informed policy making (Lynn, 1978).

A scientific development also may contribute to society in the above ways even if working scientists do not judge it to be a significant contribution on scientific grounds. For example, surveys sponsored by BSR produce data, such as data on declining rates of disability among older people, that may be very useful for health care planning without, by themselves, contributing anything more to science than a phenomenon to be explained. Thus, it is appropriate for assessments of research progress to consider separately the effects of research activity on scientific and societal criteria. Scientific activities and outputs may contribute to either of these two kinds of desirable outcomes or to both.

Interpreting Scientific Progress

The extent to which particular scientific results constitute progress in knowledge or contribute to societal well-being is often contested. This is especially the case when scientific findings are uncertain or controversial and when they can be interpreted to support controversial policy choices. Many results in applied behavioral and social science have these characteristics. Disagreements arise over which research questions are important enough to deserve support (that is, over which issues constitute significant social problems), over whether a finding resolves a scientific dispute or has unambiguous policy implications, and over many other aspects of the significance of scientific outputs. The more controversial the underlying social issues, the further such disagreements are likely to penetrate into the details of scientific method. Interested parties may use their best rhetorical tools to “frame” science policy issues and may even attempt to exercise power by influencing legislative or administrative decision makers to support or curtail particular lines of research.

These aspects of the social context of science are relevant for the measurement and assessment of scientific progress and its societal impact. They underline the recognition that the meaning of assessments of scientific progress may not follow in any straightforward way from the evidence the assessments produce. Assessing science, no matter how rigorous the methods used, is ultimately a matter of interpretation. The possibility of competing interpretations of evidence is ever-present when using science indicators or applying any other analytic method for measuring the progress and impact of science. In Chapter 5, we discuss a strategy for assessing science that recognizes this social context while also seeking an appropriate role for indicators and other analytic approaches.

INDICATORS OF SCIENTIFIC PROGRESS

Research managers understandably want early indicators of scientific progress to inform decisions that must be made before the above types of substantive progress can be definitively shown. Although scientific progress is sometimes demonstrable very quickly, recent histories of science, as noted above, tend to emphasize not only the length of time required for research findings to generate a new consensus but also the uncertainties, at the time of discovery, about what precisely constitutes the nature of the discovery. Time lag and impact may depend on various factors, including the type of research and the publication and citation practices of the field. A longitudinal research project, for example, can be expected to take longer to yield demonstrable progress than a more conceptual project.

Research Vitality and Scientific Progress

Expressions of scientific interest and intellectual excitement, sometimes referred to as the vitality of a research field, have been suggested as a useful source of early indicators of scientific progress as defined from an internal perspective. Such indications of the development of science are of particular interest to science managers because many of them might potentially be converted into numerical indicators (a simple sketch of such a conversion follows the list below). They include the following:

  • Established scientists begin to work in a new field.
  • Students are increasingly attracted to a field, as indicated by enrollments in new courses and programs in the field.
  • Highly promising junior scientists choose to pursue new concepts, methods, or lines of inquiry.
  • The rate of publications in a field increases.
  • Citations to publications in the field increase both in number and range across other scientific fields.
  • Publications in the new field appear in prominent journals.
  • New journals or societies appear.
  • Ideas from a field are adopted in other fields.
  • Researchers from different preexisting fields collaborate to work on a common set of problems.
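
To make the idea of numerical vitality indicators concrete, the following sketch shows how two of the signals above (a rising rate of publications and the range of citing fields) might be turned into numbers. It is a minimal illustration only: the input records, field names, and figures are hypothetical, and real bibliometric work would draw on curated databases.

```python
# A minimal, illustrative sketch (hypothetical data throughout) of
# converting two vitality signals into numbers: growth in the rate of
# publications and the breadth of outside fields citing the new field.
from collections import Counter

def publication_growth(counts_by_year: dict[int, int]) -> float:
    """Mean year-over-year growth rate of publication counts."""
    years = sorted(counts_by_year)
    rates = [
        (counts_by_year[b] - counts_by_year[a]) / counts_by_year[a]
        for a, b in zip(years, years[1:])
        if counts_by_year[a] > 0
    ]
    return sum(rates) / len(rates) if rates else 0.0

def citing_field_breadth(citing_fields: list[str]) -> int:
    """Number of distinct outside fields citing work in the field."""
    return len(Counter(citing_fields))

# Hypothetical records for an emerging field.
pubs = {2001: 40, 2002: 55, 2003: 80, 2004: 130}
cites = ["economics", "demography", "epidemiology", "economics"]

print(f"mean yearly publication growth: {publication_growth(pubs):.0%}")  # 48%
print(f"citing-field breadth: {citing_field_breadth(cites)}")             # 3
```

As the paragraphs that follow emphasize, numbers of this kind measure activity rather than progress, and they require interpretation before they can inform funding decisions.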

Research on the nanoscale is an area that illustrates vitality by such indicators and that is beginning to have an impact on society and the economy. Zucker and Darby (2005:9) point to the rate of increase in publishing and patenting in nanotechnology since 1986 as being of approximately the same order of magnitude as the “remarkable increase in publishing and patenting that occurred during the first twenty years of the biotechnology revolution…. Since 1990 the growth in nano S&T articles has been remarkable, and now exceeds 2.5 percent of all science and engineering articles.” Major scientific advances are often marked by flurries of research activity, and many observers expect that such indications of research vitality presage major progress in science and applications.

However, research vitality does not necessarily imply future scientific progress. For example, research on cold fusion was vital for a time precisely because most scientists believed it would not lead to progress. In the social sciences, many fields have shown great vitality for a period of time, as indicated by numbers of research papers and citations to the central works, only to decline rapidly in subsequent periods. Rule (1997), in his study of progress in social science, discusses several examples from sociology, including the grand social theory of Talcott Parsons (1937, 1964), ethnomethodology (e.g., Garfinkel, 1967), and interaction process analysis (e.g., Bales, 1950). Although these fields were vital for a time, in longer retrospect many observers considered them far less important to scientific progress than they had earlier appeared to be. Rule suggests several possible interpretations of this kind of historical trajectory: the fields that looked vital were in fact intellectual dead ends; research in the fields did make important contributions that were so thoroughly integrated into thinking in the field that they became common knowledge and were no longer commonly cited; or the fields represented short-term intellectual tastes that lost currency with a shift in theoretical concerns. With enough hindsight, it may be possible to decide which interpretation is most correct, although disagreements remain in many specific cases. But the resource allocation challenge for a research manager, given multiple alternative fields whose aggregate claims for support exceed his or her program budget, is to interpret research vitality correctly in prospect: that is, to project whether the field will be judged in hindsight to have produced valuable contributions or to have been no more than a fad or an intellectual dead end.

Another trajectory of research is problematic for research managers who would use vitality as an indicator of future potential. Some research findings or fields lie dormant for considerable periods without showing signs of vitality before the seminal contributions gain recognition as major scientific advances. Such findings have been labeled “premature discoveries” (Hook, 2002) and “sleeping beauties” (van Raan, 2004b). These are not findings that are resisted or rejected; rather, they are unappreciated, or their uses or implications are not initially recognized (Stent, 2002). In effect, the contribution of such discoveries to scientific progress or societal needs or both lies dormant until some combination of independent discoveries reveals the potency of the initial discovery. In such cases, vitality indicators focused predominantly on the discovery and its related line of research would have been misleading as predictors of long-term scientific importance.

An instructive example of the limitations of vitality measures as early indicators in the social sciences is the intellectual history of John Nash’s approach to game theory—an approach that was recognized, applied, and then dismissed as having limited utility, only to reemerge as a major construct (the Nash equilibrium), not only in the social and behavioral sciences but also in the natural sciences. As recounted by Nasar (1998), the years following Nash’s seminal work at RAND in the early 1950s were a period of flagging interest in game theory. Luce and Raiffa’s authoritative overview of the field in 1957 observed: “We have the historical fact that many social scientists have become disillusioned with game theory. Initially there was a naïve band-wagon feeling that game theory solved innumerable problems of sociology and economics, or that, at least it made their solution a practical matter of a few years’ work. This has not turned out to be the case” (quoted in Nasar, 1998:122). In later retrospect, game theory became widely influential in the social and natural sciences, and Nash was awarded the Nobel Memorial Prize in Economics in 1994.

The complexity of the relationship between the quantity of scientific activity under way during a specific period and the pace of scientific progress (or the rate at which significant discoveries are made) can perhaps be illustrated by analogy to a bicycle race: a group of researchers, analogous to the peloton or pack in a bicycle race, proceeds together over an extended period until a single individual or a small group attempts a breakaway to win the race. Some breakaways succeed and some fail, but because of the difficulties of making progress by working alone (wind resistance, in the bicycle race analogy), individuals need the cooperation of a group to make progress over the long run and to create the conditions for racing breakaways or scientific breakthroughs. When scientific progress follows this model, fairly intense activity is a necessary but not sufficient condition for progress. Alternatively, the pack may remain closely clustered together for extended periods of time, advancing apace yet with a sense that little progress toward victory, however specified, is being made (Horan, 1996).

In our judgment, these various trajectories of scientific progress imply that quantitative indicators, such as citation counts, require interpretation if they are to be used as part of the prospective assessment of fields. Moreover, the implications of intensified activity in a research area may be quite different depending on the mission goals and the perspective of the agency funding the work. Significant research investments can create activity in a field by encouraging research and supporting communication among communities of researchers. But activity need not imply progress, at least not in terms of some of the indicators listed above, such as the export of ideas to other fields. If research managers conflate the concepts of scientific activity and progress, they can create self-fulfilling prophecies by simply creating scientific activity. These warnings become increasingly important as technical advances in data retrieval and mining make it easier to create and access quantitative indicators of research vitality and as precepts of performance assessment increase pressures on research managers to use quantitative indicators to assess the progress and value of the research they support.

Indicators of Societal Impact

A variety of events may indicate that scientific activities have generated results that are likely to have practical value, even though such value may not (yet) have been realized. Such events might function as leading indicators of the societal value of research. These events typically occur outside research communities. For example:

  • Research is cited as the basis for patents that lead to licenses.
  • Research is used to justify policies or laws or cited in court opinions.
  • Research is prominently discussed in trade publications of groups that might apply it.
  • Research is used as a basis for practice or training in medicine or other relevant fields of application.
  • Research is cited and discussed in the popular press as having implications for personal decisions or for policy.
  • Research attracts investments from other sources, such as philanthropic foundations.

Some of these potential indicators are readily quantifiable, so, like bibliometric indicators, they are attractive means by which science managers can document the value of their programs. But as with quantitative indicators of research vitality, the meaning of quantitative indicators of societal impact is subject to differing interpretations. For example, as studies of science utilization have emphasized, the use of research to justify policy changes may mean that the research has changed policy makers’ thinking or only that it provides legitimation for previously determined positions. Moreover, policy makers have been known to use research to justify a policy when the relevant scientific community is in fact sharply divided about the importance or even the validity of the cited research. Such research nevertheless has societal impact, even if not of the type the scientists may have expected.
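
As a parallel illustration on the societal side, the sketch below tallies hypothetical impact events of the kinds listed above. The event records and category labels are invented for illustration; the point is only that such counts are easy to produce, while their meaning (changed thinking versus legitimation of prior positions) still requires interpretation.

```python
# An illustrative tally of leading indicators of societal impact.
# All event records and category labels here are hypothetical.
from collections import Counter

events = [
    {"category": "patent_citation", "year": 2003},
    {"category": "policy_citation", "year": 2004},
    {"category": "policy_citation", "year": 2005},
    {"category": "press_coverage", "year": 2005},
    {"category": "foundation_investment", "year": 2006},
]

tally = Counter(event["category"] for event in events)
for category, count in tally.most_common():
    print(f"{category}: {count}")
# Prints policy_citation: 2, then the remaining categories with 1 each.
# The counts alone cannot say whether the cited research changed
# policy makers' thinking or merely legitimated prior positions.
```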

FACTORS THAT CONTRIBUTE TO SCIENTIFIC DISCOVERIES

Historically, analysis of the factors that contribute to scientific discoveries has occurred at at least three different levels. Macro-level studies have considered the effects of the structures of societies—their philosophical, social, political, religious, cultural, and economic systems (Hart, 1999; Jones, 1988; Shapin, 1996). Meso-level analyses have examined the effects of functional and structural features of “national research and innovation systems”—for example, the relative apportionment of responsibility and public funding for scientific inquiry among government entities, quasi-independent research institutes, and universities (Nelson, 1993). Micro-level studies have examined the associations between indicators of progress and such factors as the organization of research units and the age of the researcher (Deutsch et al., 1971).

The programmatic latitude of any single federal science unit to adjust its actions to promote scientific discovery relates almost exclusively to micro-level factors. Even then, agency policies, legislation, and higher level executive branch policies may limit an agency’s options. For this reason, we look most closely at micro-level factors. It is nevertheless worth examining the larger structural factors affecting conditions for scientific discovery, if only to understand the implicit assumptions likely to be accepted by BSR’s advisers and staff.

A convenient means of documenting contemporary thinking on the factors that contribute to scientific advances is to examine the series of “benchmarking” studies of the international standing of U.S. science in the fields of materials science, mathematics, and immunology made by panels of scientists under the auspices of the National Academies’ Committee on Science, Engineering, and Public Policy (COSEPUP). The benchmarking was conducted as a methodological experiment in response to a series of studies that had sought to establish national goals for U.S. science policy and to mesh these goals with the performance reporting requirements of the Government Performance and Results Act (Committee on Science, Engineering, and Public Policy, 1993, 1999a; National Research Council, 1995a).

The benchmarking reports covered the fields of mathematics (Committee on Science, Engineering, and Public Policy, 1997), materials science (Committee on Science, Engineering, and Public Policy, 1998), and immunology (Committee on Science, Engineering, and Public Policy, 1999b); they represented attempts to assess whether U.S. science was achieving the stated goals of the National Goals report (Committee on Science, Engineering, and Public Policy, 1993) that the United States should be among the world leaders in all major areas of science and should maintain clear leadership in some major areas of science. These reports can be used to infer the collective beliefs of a broad range of the U.S. scientific community about the factors that contribute to U.S. scientific leadership and, implicitly, about the factors that foster major scientific discoveries. The reports are also of interest because several of the factors they cite—for example, initiation of proposals by individual investigators and reliance on peer-based merit review—are the cynosures of proposals to modify the U.S. science system.

Across the three benchmarking reports, the core factors repeatedly cited as necessary for scientific progress were adequate facilities, the quality and quantity of graduate students attracted to a field (and their subsequent early career successes in the field), diversity in funding sources, and adequate funding. In addition, with regard to the comparative international strength and the leadership position of U.S. science in these fields, the reports placed special emphasis on the “structure and financial-support mechanisms of the major research institutions in the United States” and on the organization of higher education research (Committee on Science, Engineering and Public Policy, 1999b:35). Also highlighted as a contributing factor in “fostering innovation, creativity and rapid development of new technologies” was the “National Institutes of Health (NIH) model of research-grant allocation and funding: almost all research (except small projects funded by contracts) is initiated by individual investigators, and the decision as to merit is made by a dual review system of detailed peer review by experts in each subfield of biomedical science” (p. 36).

We accept the proposition that adequate funds to support research are a necessary condition for sustained progress in a scientific field. Research progress also depends on the supply of researchers (including the number, age, and creativity of current and prospective researchers) and on the organization of research, including the number and disciplinary mix of researchers engaged in a project or program and the structure of the research team.

Supply of Researchers

The number, creativity, and age distribution of researchers in a field together affect the pace of scientific progress in the field. Numbers are important to the extent that the ability to generate scientific advances is randomly distributed through a population of comparably trained researchers: fields with a larger number of active researchers can be expected to generate more scientific advances than fields with smaller numbers. The pace of scientific advance across fields presumably also varies with their ability to attract the most able, creative, and productive scientists. The attractiveness of a field at any point in time is likely to depend on its intellectual excitement (the challenge of the puzzles it poses), its societal significance, the resources flowing to it to support research, and the prospects for longer term productive and gainful careers. Fields that exhibit these characteristics are likely to attract relatively larger cohorts of younger scientists; if scientific creativity is inversely correlated with age, such fields may be expected to exhibit greater vitality than those with aging cohorts of scientists.

This view is supported by much expert judgment and a number of empirical studies. For example, a study by the National Research Council (1998:1) noted that “The continued success of the life-science research enterprise depends on the uninterrupted entry into the field of well-trained, skilled, and motivated young people. For this critical flow to be guaranteed, young aspirants must see that there are exciting challenges in life science research and they need to believe that they have a reasonable likelihood of becoming practicing independent scientists after their long years of training to prepare for their careers.”

Career opportunities for scientists affect the flow of young researchers into fields. Recent studies of career opportunities in the life sciences have noted that a “crisis of expectations” arises when career prospects fall short of scientific promise (Freeman et al., 2001). Similar observations have been made at other times about physics, mathematics, computer science, and some fields of engineering. Studies also point, in general, to a decline in research productivity around midcareer. As detailed by Stephan and Levin (1992), the decline reflects influences on both the willingness and the ability of researchers to do scientific research. Older scientists are also seen to be slower than younger scientists to accept new ideas and techniques.

Organization of Research

Since World War II, the social contract by which the federal government supports basic research has involved channeling large amounts of this support through awards to universities, much of that through grants to individual investigators. It is appropriate to consider whether such choices continue to be optimal and to consider related questions concerning the determinants of the research performance of individual faculty and of specific institutions or sets of institutions ( Guston and Keniston, 1994 ; Feller, 1996 ).

As detailed above, U.S. support of academic research across many fields, including aging research, is predicated on the proposition that “little science is the backbone of the scientific enterprise…. For those who believe that scientific discoveries are unpredictable, supporting many creative researchers who contribute to S&T, or the science base is prudent science policy” ( U.S. Congress Office of Technology Assessment, 1991 :146). Against this principle, trends toward “big science” and the requirements of interdisciplinary research have opened up the question of the optimal portfolio of funding mechanisms and award criteria to be employed by federal science agencies. Of special interest here as an alternative to the traditional model of single investigator–initiated research are what have been termed “industrial” models of research ( Ziman, 1984 ) or Mode II research; that is, research undertakings characterized by collaboration or teamwork among members of research groups participating in formally structured centers or institutes. Requests for proposals directed toward specific scientific, technological, and societal objectives; initiatives supporting collaborative, interdisciplinary modes of inquiry organized as centers rather than as single principal investigator projects; and use of selection criteria in addition to scientific merit are by now well-established parts of the research programs of federal science agencies, including NIH and the National Science Foundation. 9

A recurrent issue for federal science managers and for scientific communities is the relative rate of return to alternative arrangements, such as funding mechanisms. Making such comparisons is challenging. First, different research modes (e.g., single investigator–initiated proposals and multidisciplinary, center-based proposals submitted in response to a Request for Application) may produce different kinds of outputs. Single-investigator awards, typically described as the backbone of science, are intended cumulatively to build a knowledge base that affects clinical practice or public policy, to support the training of graduate students, to promote the development of networks of researchers and practitioners, and more—but no single awardee is expected to do all these things. Center awards also are expected to contribute to scientific progress—indeed to yield “value added” above the progress that can come from multiple single-investigator awards—but unlike single-investigator awards, they are typically expected to devote explicit attention to the other outcomes, such as translating the results of basic research into clinical practice. Because different modes of research support are expected to support different mixes of program objectives, direct comparisons of “performance” or “productivity” between or among them involve a complex set of weightings and assessments, both in terms of defining and measuring scientific progress and in assigning weights to the different kinds of scientific, programmatic, and societal objectives against which research is evaluated.
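A stylized sketch, not taken from the committee's text, of why such comparisons hinge on the weights chosen; every score and weight below is invented for illustration:

    # Hypothetical outcome scores (0-10) for two funding modes on three objectives.
    scores = {
        "single_investigator": {"discovery": 8, "training": 7, "translation": 3},
        "center":              {"discovery": 6, "training": 6, "translation": 8},
    }

    def weighted_score(mode, weights):
        return sum(weights[k] * v for k, v in scores[mode].items())

    discovery_heavy   = {"discovery": 0.6, "training": 0.3, "translation": 0.1}
    translation_heavy = {"discovery": 0.3, "training": 0.2, "translation": 0.5}

    for weights in (discovery_heavy, translation_heavy):
        ranking = sorted(scores, key=lambda m: weighted_score(m, weights), reverse=True)
        print(ranking)  # the "better" mode flips when the weighting changes

The point is the structure, not the numbers: the ranking is an artifact of the weights, and the weights embody programmatic priorities rather than anything measured.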

Little empirical evidence exists to inform comparisons among modes of research support. Empirical studies, most frequently in the form of bibliometric analyses, exist to compare the productivity of interdisciplinary research units, but these studies are not designed to answer the question of how much scientific progress would have been achieved had the funds allocated to such units been apportioned instead among a larger and more diverse number of single investigator awards ( Feller, 1992 ). Detailed criteria, for example, have been advanced to evaluate the performance of NIH’s center programs ( Institute of Medicine, 2004 ), and a number of center programs have been evaluated. However, these evaluations have not added up to a systematic assessment. 10

Expert judgment, historical assessment, and analysis of trends in science provide some support for core propositions about the sources of the vitality of U.S. science: adequate and sustainable funding; multiple, decentralized, funding streams; strong reliance on investigator-initiated proposals selected through competitive, merit-based review; coupling basic research with graduate education; and supplementary funding for capital-intensive modes of inquiry, interdisciplinary collaboration, targeted research objectives, and translation of basic research findings into clinical practice or technological innovations. Still, these principles may not provide wise guidance for the support of behavioral and social science research on aging, for three reasons. First, these observations come from experience with the life sciences, engineering sciences, and physical sciences, and it is not known whether the dynamics of scientific inquiry and progress are the same in the social and behavioral sciences. Second, it is not known whether recent trends in scientific inquiry, such as in the direction of interdisciplinarity, will continue, stop, or soon lead to a fundamental transformation in the way in which cutting-edge science (including in research on aging) is done. Third and perhaps most important, applying these principles presumes an environment of increasing total funds for research. In the more austere budget environment now projected for NIH and its subunits, it will not be possible to increase funding for all modes of support. Turning to existing research for guidance may prove of limited value for making trade-offs among competing funding paradigms.

IMPLICATIONS FOR DECISION MAKING
  • No theory exists that can reliably predict which research activities are most likely to lead to scientific advances or to societal benefit. The gulf between the decision-making environment of the research manager and that of the historian or other researcher retrospectively examining the emergence and subsequent development of a line of research is reflected in Weinberg’s (2001:196) observation, “In judging the nature of scientific progress, we have to look at mature scientific theories, not theories at the moments when they are coming into being.” The history of science shows that evidence of past performance and current vitality, that is, of interest among scientists in a topic or line of research, is an imperfect predictor of future progress. Thus, although it seems reasonable to expect that a newly developing field that generates excitement among scientists from other fields is a good bet to make progress in the near future, this expectation rests more on anecdote than on systematic empirical research. Notwithstanding the continuing search for improved quantitative measures and indicators for prospective assessment of scientific fields, practical choices about research investments will continue to depend on judgment. We address the prospects and potential roles of quantitative and other methods of science assessment in Chapter 5.
  • Science produces diverse kinds of benefits; consequently, assessing the potential of lines of research is a challenging task. Assessments should carefully apply multiple criteria of benefit. Science proceeds toward improving understanding and benefiting society on several fronts, but often at an uneven pace, so that a line of research may show rapid progress on one dimension or by one indicator while showing little or no progress on another. In setting research priorities among lines of research, it is important to consider evidence of past accomplishments on the several dimensions of scientific advances (discovery, analysis, explanation, integration, and development) and of contributions to society (e.g., identifying issues, finding solutions, informing choices). The policy implications of a finding that a line of research is not currently making much progress on one or more dimensions are not self-evident. Such an assessment might be used as a rationale for decreasing support (because the funds may be expected to be poorly spent), for increasing support (for example, if the poor performance is attributed to past underfunding), or for making investments to redirect the field so as to reinvigorate it. A field that appears unproductive may be stagnant, fallow, or pregnant. Telling which is not easy. Judgment can be aided by the assessments of people close to the field, although not just those so close as to have a vested interest in its survival or growth. The same kind of advice is useful for judging the proper timing for efforts to invest in fields in order to keep them alive or to reinvigorate them.
  • Portfolio diversification strategies that involve investment in multiple fields and multiple kinds of research are appropriate for decision making, considering the inherent uncertainties of scientific progress. Through such strategies, research managers can minimize the consequences of overreliance on any single indicator of research quality or progress or any single presumption about what kinds of research are likely to be most productive. It is appropriate to diversify along several dimensions, including disciplines, modes of support, emphasis on theoretical or applied objectives, and so forth. Diversification is also advisable in terms of the kinds of evidence relied on to make decisions about what to support. For example, when quantitative indicators and informed peer judgment suggest supporting different lines of research, it is worth considering supporting some of each. (A stylized numerical sketch of the diversification point appears after this list.)
  • Research managers should emphasize investing where their support is most likely to add value. This consideration may affect emphasis on types of scientific progress, research organizations and modes of support, and areas of support.
  • Types of scientific progress. Even as they continue to pursue support of major scientific and programmatic advances, research managers may also find it productive to support improvements in databases and analytic techniques, efforts to integrate knowledge across fields and levels of analysis, efforts to examine underresearched questions, and the entry of new people to work on research problems.
  • Research organizations and modes of support. Research managers should consider favoring research organizations, or modes of support, that have been shown to have characteristics likely to promote progress, either generally or for specific fields or lines of scientific inquiry. NIH has multiple funding mechanisms available that would allow support for particular types of organizations (Institute of Medicine, 2004). An ongoing study by Hollingsworth (2003:8) identifies six organizational characteristics as “most important in facilitating the making of major discoveries” (see Box 4-1). Research managers might consider the findings of such studies in making choices about what kinds of organizations to support, especially in efforts to promote scientific innovation.
  • Areas of support. Some fields may have sufficient other sources of funds that they do not need NIA support, or may need only small investments from NIA to leverage funds from other sources. In other fields, however, BSR may be the only viable sponsor for the research. BSR managers may reasonably choose to emphasize supporting research in such fields, where no other funds can be leveraged and BSR support therefore adds the most value. The value-added issue also affects decisions on modes of support and types of research to support.
  • Interdisciplinary research. BSR should continue to support issue-focused interdisciplinary research to promote scientific activities and collaborations related to its mission that might not emerge from existing scientific communities and organizations structured around disciplines. Interdisciplinary research has significant potential to advance scientific objectives that research management can promote, such as scientific integration and development and scientists’ attention to societal objectives of science consistent with BSR’s mission. Moreover, BSR has a good track record of promoting these objectives through its support of selected areas of interdisciplinary, issue-focused research. BSR should continue to solicit research in areas that require interdisciplinary collaboration, to support data sets that can be used readily across disciplines, to fund interdisciplinary workshops and conferences, and to support cross-institution, issue-focused interdisciplinary research networks. Supporting such research requires special efforts and skills of research managers but holds the promise of yielding major advances that would not come from business-as-usual science.
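The stylized simulation promised above: assuming, purely for illustration, that the payoff of any single line of research is highly skewed and unpredictable (here, lognormal), spreading a fixed budget across more lines raises the typical outcome and narrows the downside. None of the numbers comes from the report.

    import random

    random.seed(1)

    def payoff():
        # Hypothetical skewed payoff per unit invested: many modest outcomes, rare large wins.
        return random.lognormvariate(0.0, 2.0)

    def portfolio_outcome(n_lines, budget=100.0):
        per_line = budget / n_lines
        return sum(per_line * payoff() for _ in range(n_lines))

    for n_lines in (1, 20):
        outcomes = sorted(portfolio_outcome(n_lines) for _ in range(10_000))
        # 10th-percentile and median outcomes both improve as the portfolio broadens.
        print(n_lines, round(outcomes[1_000]), round(outcomes[5_000]))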

BOX 4-1 Characteristics of Organizations That Produced Major Biomedical Discoveries: The Hollingsworth Study. Rogers Hollingsworth and colleagues (Hollingsworth and Hollingsworth, 2000; Hollingsworth, 2003) have been examining the characteristics of biomedical (more...)

It is often argued that progress in the behavioral and social sciences is qualitatively different from progress in the natural sciences. As noted in a National Research Council review of progress in the behavioral and social sciences ( Gerstein, 1986 :17), “Because they are embedded in social and technological change, subject to the unpredictable incidence of scientific ingenuity and driven by the competition of differing theoretical ideas, the achievements of behavioral and social science research are not rigidly predictable as to when they will occur, how they will appear, or what they might lead to.” The unstated (and untested) implication is that this unpredictability is more characteristic of the social sciences than the natural sciences. Another view states: “In the natural sciences, a sharp division of labor between the information-gathering and the theory-making functions is facilitated by an approximate consensus on the definition of research purposes and on the conceptual economizers guiding the systematic selection and organization of information. In the social sciences, where the subject matter of research and the comparatively lower level of theoretical agreement generally do not permit comparable consensus on the value and utility of information extracted from phenomena, sharp division of labor between empirical and theoretical tasks is less warranted” ( Ezrahi, 1978 :288). Even the same techniques are thought to have quite different roles in the social and natural sciences: “The role of statistics in social science is thus fundamentally different from its role in much of the physical science, in that it creates and defines the objects of study much more directly. Those objects are no less real than those of the physical science. They are even more often much better understood. But despite the unity of statistics—the same methods are useful in all areas—there are fundamental differences, and these have played a role in the historical development of all these fields” ( Stigler, 1999 :199).

Some observers even question the claims of the behavioral and social sciences to standing as sciences. As observed in a recent text on the history of science, “In the end, perhaps the most interesting question is: Did the drive to create a scientific approach to the study of human nature achieve its goal? For all the money and effort poured into creating a body of practical information on the topic, many scientists in better established areas remain suspicious, pointing to a lack of theoretical coherence that undermines the analogy with the ‘hard’ sciences” ( Bowler and Morus, 2005 :314–315).

According to Cole (2001 :37), “The problem with fields like sociology is that they have virtually no core knowledge. Sociology has a booming frontier but none of the activity at that frontier seems to enter the core.”

As noted by Galison (1999 :143), “Experimentalists … do not march in lockstep with theory…. Each subculture has its own rhythms of change, each has its own standards of demonstration, and each is embedded differently in the wider culture of institutions, practices, inventions and ideas.”

Rita Colwell, former director of the National Science Foundation, has stated that “Interdisciplinary connections are absolutely fundamental. They are synapses in this new capability to look over and beyond the horizon. Interfaces of the sciences are where the excitement will be the most intense” ( Colwell, 1998 ).

As stated in a recent National Research Council (2005b :150) report, “A remaining challenge is to determine what additional measures, if any, are needed to assess interdisciplinary research and teaching beyond those shown to be effective in disciplinary activities. Successful outcomes of an interdisciplinary research (IDR) program differ in several ways from those of a disciplinary program. First, a successful IDR program will have an impact on multiple fields or disciplines and produce results that feed back into and enhance disciplinary research. It will also create researchers and students with an expanded research vocabulary and abilities in more than one discipline and with an enhanced understanding of the interconnectedness inherent in complex problems.”

Consistent with the belief that competitive, merit-based review is key to creating the best possible conditions for scientific advance is the articulation of how “quality” is to be achieved and gauged under the Research and Development Investment Criteria established by the Office of Science and Technology Policy and the Office of Management and Budget on June 5, 2005: “A customary method for promoting quality is the use of a competitive, merit-based process” (http://www.whitehouse.gov/omb/memoranda/m03-15.pdf, p. 7).

As Max Planck famously remarked, “a new scientific truth does not triumph by convincing its opponents and making them see the light, but because its opponents eventually die, and a new generation grows up that is familiar with it.” Stephan and Levin (1992:83) write: “empirical studies of Planck’s principle for the most part confirm the hypothesis that older scientists are slower than their younger colleagues are to accept new ideas and that eminent older scientists are the most likely to resist. The operative factor in resistance, however, is not age per se but, rather, the various indices of professional experience and prestige correlated with age …. [Y]oung scientists … may also be less likely to embrace new ideas, particularly if they assess such a course as being particularly risky.” Thus, a graying scientific community affects the rate of scientific innovation directly by being less productive and indirectly by being slow to accept new ideas as they emerge.

Interdisciplinary research and the industrial model of research are often found together, but they are not identical. One may organize centers based primarily on researchers from a single discipline, and researchers from several disciplines may collaborate, as co-principal investigators or as loosely coupled teams, on one-time awards. At NIH, research center grants “are awarded to extramural research institutions to provide support for long-term multidisciplinary programs of medical research. They also support the development of research resources, aim to integrate basic research with applied research and transfer activities, and promote research in areas of clinical applications with an emphasis on intervention, including prototype development and refinement of products, techniques, processes, methods, and practices” ( Institute of Medicine, 2004 ).

“NIH does not have formal regular procedures or criteria for evaluating center programs. From time to time, institutes conduct internal program reviews or appoint external review panels, but these ad hoc assessments are usually done in response to a perception that the program is no longer effective or appropriate rather than as part of a regular evaluation process. Most of these reviews rely on the judgment of experts rather than systematically collected objective data, although some formal program evaluations have been performed by outside firms using such data” ( Institute of Medicine, 2004 :121).


A Beginner's Guide to Starting the Research Process

When you have to write a thesis or dissertation, it can be hard to know where to begin, but there are some clear steps you can follow.

The research process often begins with a very broad idea for a topic you’d like to know more about. You do some preliminary research to identify a problem. After refining your research questions, you can lay out the foundations of your research design, leading to a proposal that outlines your ideas and plans.

This article takes you through the first steps of the research process, helping you narrow down your ideas and build up a strong foundation for your research project.

Step 1: Choose your topic

First you have to come up with some ideas. Your thesis or dissertation topic can start out very broad. Think about the general area or field you’re interested in—maybe you already have specific research interests based on classes you’ve taken, or maybe you had to consider your topic when applying to graduate school and writing a statement of purpose.

Even if you already have a good sense of your topic, you’ll need to read widely to build background knowledge and begin narrowing down your ideas. Conduct an initial literature review to begin gathering relevant sources. As you read, take notes and try to identify problems, questions, debates, contradictions and gaps. Your aim is to narrow down from a broad area of interest to a specific niche.

Make sure to consider the practicalities: the requirements of your programme, the amount of time you have to complete the research, and how difficult it will be to access sources and data on the topic. Before moving on to the next stage, it’s a good idea to discuss the topic with your thesis supervisor.



Step 2: Identify a problem

So you’ve settled on a topic and found a niche—but what exactly will your research investigate, and why does it matter? To give your project focus and purpose, you have to define a research problem.

The problem might be a practical issue—for example, a process or practice that isn’t working well, an area of concern in an organization’s performance, or a difficulty faced by a specific group of people in society.

Alternatively, you might choose to investigate a theoretical problem—for example, an underexplored phenomenon or relationship, a contradiction between different models or theories, or an unresolved debate among scholars.

To put the problem in context and set your objectives, you can write a problem statement. This describes who the problem affects, why research is needed, and how your research project will contribute to solving it.


Step 3: Formulate research questions

Next, based on the problem statement, you need to write one or more research questions. These target exactly what you want to find out. They might focus on describing, comparing, evaluating, or explaining the research problem.

A strong research question should be specific enough that you can answer it thoroughly using appropriate qualitative or quantitative research methods. It should also be complex enough to require in-depth investigation, analysis, and argument. Questions that can be answered with “yes/no” or with easily available facts are not complex enough for a thesis or dissertation.
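As an added, invented example: “Do teenagers use social media?” can be answered with a yes or a single statistic, whereas “How does daily social media use relate to sleep quality among secondary-school students?” requires systematic data collection, analysis, and argument, and so could anchor a thesis.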

In some types of research, at this stage you might also have to develop a conceptual framework and testable hypotheses.


Step 4: Create a research design

The research design is a practical framework for answering your research questions. It involves making decisions about the type of data you need, the methods you’ll use to collect and analyze it, and the location and timescale of your research.

There are often many possible paths you can take to answering your questions. The decisions you make will partly be based on your priorities. For example, do you want to determine causes and effects, draw generalizable conclusions, or understand the details of a specific context?

You need to decide whether you will use primary or secondary data and qualitative or quantitative methods. You also need to determine the specific tools, procedures, and materials you’ll use to collect and analyze your data, as well as your criteria for selecting participants or sources.



Step 5: Write a research proposal

Finally, after completing these steps, you are ready to complete a research proposal. The proposal outlines the context, relevance, purpose, and plan of your research.

As well as outlining the background, problem statement, and research questions, the proposal should also include a literature review that shows how your project will fit into existing work on the topic. The research design section describes your approach and explains exactly what you will do.

You might have to get the proposal approved by your supervisor before you get started, and it will guide the process of writing your thesis or dissertation.


Other interesting articles

If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

Methodology

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias


National Institutes of Health: Science, Health, and Public Trust

September 8, 2021

Explaining How Research Works


We’ve heard “follow the science” a lot during the pandemic. But it seems science has taken us on a long and winding road filled with twists and turns, even changing directions at times. That’s led some people to feel they can’t trust science. But when what we know changes, it often means science is working.


Explaining the scientific process may be one way that science communicators can help maintain public trust in science. Placing research in the bigger context of its field and where it fits into the scientific process can help people better understand and interpret new findings as they emerge. A single study usually uncovers only a piece of a larger puzzle.

Questions about how the world works are often investigated on many different levels. For example, scientists can look at the different atoms in a molecule, cells in a tissue, or how different tissues or systems affect each other. Researchers often must choose one or a finite number of ways to investigate a question. It can take many different studies using different approaches to start piecing the whole picture together.

Sometimes it might seem like research results contradict each other. But often, studies are just looking at different aspects of the same problem. Researchers can also investigate a question using different techniques or timeframes. That may lead them to arrive at different conclusions from the same data.

Using the data available at the time of their study, scientists develop different explanations, or models. New information may mean that a novel model needs to be developed to account for it. The models that prevail are those that can withstand the test of time and incorporate new information. Science is a constantly evolving and self-correcting process.

Scientists gain more confidence about a model through the scientific process. They replicate each other’s work. They present at conferences. And papers undergo peer review, in which experts in the field review the work before it can be published in scientific journals. This helps ensure that the study is up to current scientific standards and maintains a level of integrity. Peer reviewers may find problems with the experiments or think different experiments are needed to justify the conclusions. They might even offer new ways to interpret the data.

It’s important for science communicators to consider which stage a study is at in the scientific process when deciding whether to cover it. Some studies are posted on preprint servers for other scientists to start weighing in on and haven’t yet been fully vetted. Results that haven't yet been subjected to scientific scrutiny should be reported on with care and context to avoid confusion or frustration from readers.

We’ve developed a one-page guide, "How Research Works: Understanding the Process of Science" to help communicators put the process of science into perspective. We hope it can serve as a useful resource to help explain why science changes—and why it’s important to expect that change. Please take a look and share your thoughts with us by sending an email to  [email protected].

Below are some additional resources:

  • Discoveries in Basic Science: A Perfectly Imperfect Process
  • When Clinical Research Is in the News
  • What is Basic Science and Why is it Important?
  • What is a Research Organism?
  • What Are Clinical Trials and Studies?
  • Basic Research – Digital Media Kit
  • Decoding Science: How Does Science Know What It Knows? (NAS)
  • Can Science Help People Make Decisions? (NAS)


Principles of Research Methodology, pp. 1–14

Overview of the Research Process

Phyllis G. Supino, EdD

First Online: 01 January 2012

Research is a rigorous problem-solving process whose ultimate goal is the discovery of new knowledge. Research may include the description of a new phenomenon, definition of a new relationship, development of a new model, or application of an existing principle or procedure to a new context. Research is systematic, logical, empirical, reductive, replicable and transmittable, and generalizable. Research can be classified according to a variety of dimensions: basic, applied, or translational; hypothesis generating or hypothesis testing; retrospective or prospective; longitudinal or cross-sectional; observational or experimental; and quantitative or qualitative. The ultimate success of a research project is heavily dependent on adequate planning.





Interim reports

Interim (or progress) reports present the interim, preliminary, or initial evaluation findings.

Interim reports are scheduled according to the specific needs of your evaluation users, often halfway through the execution of a project. The interim report lets a project’s stakeholders know how an intervention is going. It provides information that helps funders and other decision-makers determine whether to continue in the current direction, where to make adjustments if necessary, whether to revise goals or add more resources, or, in the worst case, whether to shut the intervention down.

An interim report is similar to a final report, in that it includes a summary, a brief description of progress, the evaluation findings thus far, and an overview of the financial situation. Any delays or deviations from the plan are included and explained, as is any comparison of actual to expected results.

Advice for using this method

To avoid critical issues being interpreted incorrectly, begin interim reports by stating the following:

  • Which data collection activities are being reported on and which are not;
  • When the final evaluation results will be available;
  • Any cautions for readers in interpreting the findings.

Advice taken from Torres et al., 2005

This detailed example of a progress report describes Oxfam's work in Haiti following a large earthquake. It is intended to account to donors, partner organizations, allies, staff, and volunteers.

"Within every picture is a hidden language that conveys a message, whether it is intended or not. This language is based on the ways people perceive and process visual information.

This book from Torres, Preskill and Piontek has been designed to support evaluators to incorporate creative techniques in the design, conduct, communication and reporting of evaluation findings.

This guide is an IDRC publication with a module dedicated to writing a research report including information on layout and design.

This guide from the University of Wisconsin Cooperative Extension, provides a range of tips and advice for planning and writing evaluation reports that are concise and free of jargon. 

Davies, L. (2012). Haiti Progress Report January-December 2011 . Oxford, UK: Oxfam GB. Retrieved from https://policy-practice.oxfam.org/resources/haiti-progress-report-january-december-2011-200732/

Oxfam GB Evaluation Guidelines (accessed 2012-05-08): https://www.alnap.org/help-library/oxfam-gb-evaluation-guidelines

Stetson, Valerie. (2008). Communicating and reporting on an evaluation: Guidelines and Tools. Catholic Relief Services and American Red Cross, Baltimore and Washington, USA. Retrieved from: https://www.alnap.org/help-library/communicating-and-reporting-on-an-evaluation-guidelines-and-tools

Torres, Rosalie T., Hallie Preskill and Mary E. Piontek. (2005). Evaluation Strategies for Communicating and Reporting: Enhancing Learning in Organizations (Second Edition). Thousand Oaks, CA: Sage Publications.

USAID. (2010). Performance monitoring & evaluation tips: Constructing an evaluation report. Retrieved from:  https://pdf.usaid.gov/pdf_docs/pnadw117.pdf



Research Priorities for Airborne Particulate Matter: III. Early Research Progress (2001)

3 REVIEW OF RESEARCH PROGRESS AND STATUS

INTRODUCTION

In this chapter, the committee reviews the progress made in implementing the particulate-matter (PM) research portfolio over the period from 1998 (the year in which the portfolio was first recommended by the committee) to the middle of 2000. Because that period represents the initial stage of the PM research program, the committee's assessment necessarily focused more on continuing and planned research projects than on published results.

For each of the 10 topics in the research portfolio, the committee first characterizes the status of relevant research and progress, including the approximate numbers of studies in progress on various subtopics (the committee did not attempt to list all relevant research projects but did attempt to capture the major studies across the spectrum of the research in progress), then considers the adequacy of the current research in addressing specific needs as identified in its first two reports, and finally applies the first three evaluation criteria discussed in Chapter 2: scientific value, decisionmaking value, and feasibility and timing. The remaining three criteria—largely cross-cutting—are considered in more general terms in Chapter 4. The committee's next report, due near the end of 2002, will consider research in relation to these criteria in more detail.

RESEARCH TOPIC 1. OUTDOOR MEASURES VERSUS ACTUAL HUMAN EXPOSURES

What are the quantitative relationships between concentrations of particulate matter and gaseous copollutants measured at stationary outdoor air-monitoring sites and the contributions of these concentrations to actual personal exposures, especially for subpopulations and individuals?

In its first report (NRC 1998), the committee recommended that information be obtained on relationships between total personal exposures and outdoor concentrations of PM. Specifically, the committee recommended longitudinal panel studies, in which groups of 10-40 persons would be studied at successive times to examine the relationship between their exposures to PM and the corresponding outdoor concentrations. The studies were intended to focus not only on the general population, but also on subpopulations that could be susceptible to the effects of PM exposures, such as the elderly, children, and persons with respiratory or cardiovascular disease. It was recommended that some of the exposure studies include measurements of PM with an aerodynamic diameter of 2.5 µm or less (PM2.5), PM with an aerodynamic diameter of 10 µm or less (PM10), and gaseous copollutants. It was expected that the investigations would quantify the contribution of outdoor sources to personal and indoor exposures. The design and execution of studies were to take about 3 years, and the suggestion was made to conduct the studies at various geographical locations in different seasons.

Research Progress and Status

Substantial research is in progress, and some studies, started before the committee's first report, have been completed. Results of recent panel studies of personal exposure conducted in Wageningen, Netherlands (Janssen et al. 1999), Boston, MA (Rojas-Bracho et al. 2000), Baltimore, MD (Sarnat 2000; Williams et al. 2000), and other places suggest that 12-15 measurements per person are sufficient to examine relationships between personal exposures and outdoor PM concentrations. These longitudinal panel studies have increased the understanding of the relationships between personal exposures and outdoor concentrations more than did earlier cross-sectional exposure studies. Several additional longitudinal panel studies are going on in other U.S. cities, including New York, NY; Atlanta, GA; Los Angeles, CA; Research Triangle Park, NC; and Seattle, WA. A number of research and funding organizations—including academic institutions, the U.S. Environmental Protection Agency (EPA), the Health Effects Institute (HEI), the Electric Power Research Institute (EPRI), and the American Petroleum Institute (API)—already have been engaged in this effort. Collectively, the studies should provide an understanding of the relationships between personal exposures and outdoor pollutant concentrations in a large number of geographic areas in the United States.
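To make the design concrete, the following minimal sketch, using simulated data, shows the kind of within-person analysis such panels support. The attenuation slope, outdoor mean, and noise level are invented for illustration, not taken from the studies cited above.

    import random

    random.seed(0)

    def simulate_subject(n_days=14, attenuation=0.6, indoor_mean=4.0, noise_sd=3.0):
        # Hypothetical model: personal exposure tracks outdoor PM2.5 through an
        # individual attenuation slope, plus variable indoor contributions.
        outdoor = [random.gauss(15.0, 5.0) for _ in range(n_days)]
        personal = [attenuation * c + random.gauss(indoor_mean, noise_sd) for c in outdoor]
        return outdoor, personal

    def ols_slope(x, y):
        # Ordinary least-squares slope for one subject's repeated measurements.
        mx, my = sum(x) / len(x), sum(y) / len(y)
        return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

    outdoor, personal = simulate_subject()
    print(round(ols_slope(outdoor, personal), 2))  # roughly recovers the assumed 0.6 slope

With roughly 12-15 repeated measurements per person, a per-subject slope of this kind can be estimated with useful precision, which is what makes the longitudinal design informative.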

Several insights have been gained from the results of completed studies (Janssen et al. 1999, 2000; Ebelt et al. 2000; Rojas-Bracho et al. 2000; Sarnat et al. 2000; Williams et al. 2000). These studies have observed significant differences among study participants in the relationship between personal exposures and outdoor concentrations. When such relationships were analyzed for each person, substantial variability was found. Because outdoor concentrations exhibited little spatial variability, the heterogeneity was attributed to differences in indoor concentrations. Indeed, indoor concentrations were found to be an excellent predictor of personal exposures for most study participants, independently of city (Baltimore or Boston), season (winter or summer), and panel (elderly; chronic obstructive pulmonary disease, or COPD; or children). The finding that indoor concentration is an excellent predictor of personal exposure is not surprising, in that people spend more than 80% of their time indoors (EPA 1996a). Apart from exposures to tobacco smoke and emissions from cooking, which produce long-term increases in PM exposures of around 30 µg/m³ (Spengler et al. 1981) and 15-20 µg/m³ (Ozkaynak 1996), respectively, home activities that were expected to produce particles, such as vacuum-cleaning and dusting (EPA 1996; Ozkaynak 1996), were found to explain very little of the total variability in personal exposures (Rojas-Bracho et al. 2000). In general, indoor sources tend to operate intermittently and, when measured by continuous monitors, can produce indoor concentrations as high as several hundred micrograms per cubic meter (Abt et al. 2000). The impact of these indoor (or other microenvironmental) peak concentrations can be captured only by real-time or semicontinuous personal monitors (Howard-Reed et al. 2000). However, when such short-term increases in concentration are averaged, their contributions to the average 24-hr indoor concentrations or personal exposures are estimated to be small.
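A back-of-the-envelope illustration of that averaging effect, with invented numbers:

    # Hypothetical: a short, high indoor peak adds little to the 24-hr mean.
    peak_conc, peak_hours = 300.0, 0.25   # 15-minute event at 300 µg/m³
    background = 10.0                     # µg/m³ over the rest of the day
    daily_mean = (peak_conc * peak_hours + background * (24.0 - peak_hours)) / 24.0
    print(round(daily_mean, 1))           # 13.0, i.e., only ~3 µg/m³ above background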

Analyses of data from the study of elderly people in Baltimore (Sarnat et al. 2000) and the study of COPD patients in Boston (Rojas-Bracho et al. 2000) have demonstrated that ventilation (rate of exchange of indoor with outdoor air) is the measure that most strongly influences the relationship of personal exposure to outdoor concentration. Personal exposure data were classified into three groups based on reported home ventilation status, a surrogate for the rate of exchange of indoor with outdoor air. Homes were classified as “well,” “moderately,” or “poorly” ventilated, as defined by the distribution of the fraction of time that windows were open while a person was in an indoor environment. When the PM datasets were stratified into these ventilation groups and analyzed cross-sectionally, strong relationships between personal exposures and outdoor concentrations were observed for well-ventilated homes and, to a lesser extent, for moderately ventilated homes. However, a low correlation coefficient was found for the poorly ventilated homes. Those findings suggest that for homes with no smokers and little cooking activity most of the variability in indoor concentrations, as well as in personal exposures of occupants, is due to the varied impact of outdoor sources on the indoor environment. That effect is underscored by the influence of air-exchange rates on the relationship between indoor and outdoor concentrations when no activities are occurring in the homes. For instance, for well-ventilated homes, indoor-to-outdoor particle ratios are close to 1.0, whereas for homes with low rates of exchange and no activities, indoor-to-outdoor ratios can be substantially lower (about 0.4-0.6) (Abt et al. 2000; Long et al. 2000).
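Those ratios are consistent with the steady-state, single-compartment mass-balance model commonly used in the exposure literature. The sketch below is illustrative rather than the committee's own calculation, and all parameter values are assumed:

    def infiltration_factor(penetration, air_exchange, deposition):
        # Steady state with no indoor sources: C_indoor / C_outdoor = P * a / (a + k),
        # where P = particle penetration efficiency, a = air-exchange rate (1/hr),
        # and k = particle deposition rate (1/hr).
        return penetration * air_exchange / (air_exchange + deposition)

    print(round(infiltration_factor(1.0, 2.0, 0.2), 2))  # well ventilated:   0.91
    print(round(infiltration_factor(1.0, 0.2, 0.2), 2))  # poorly ventilated: 0.50

With penetration near 1 and deposition around 0.2 per hour, assumed air-exchange rates of about 2 per hour and 0.2 per hour reproduce the near-1.0 and 0.4-0.6 ranges noted above.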

Home ventilation rates are expected to vary with season, geographical location, and home characteristics; that implies that the relationship of human exposures to outdoor PM concentrations will also vary with these factors. Therefore, PM risk relationships estimated from epidemiological studies might differ by city, season, and overall home characteristics. However, the additional influence of personal activity patterns on the overall relationship between human exposure and outdoor PM concentrations is also relevant to interpretation of the results of observational studies. The pattern of reported findings is still based on a small number of studies, and replication of the results will be needed from current or recently completed studies in other cities before firm conclusions can be drawn.

Adequacy of Current Research in Addressing Research Needs

Considerable effort is going into examining the relationship between ambient particle concentrations and personal exposures. Several longitudinal panel studies are being conducted in various geographic locations, including New York, NY; Atlanta, GA; Los Angeles and Fresno, CA; and Seattle, WA (see Table 3.1 ). Collectively, these studies are assessing exposures of healthy subjects and susceptible subpopulations (such as those with COPD; myocardial infarction, or MI; or asthma) to PM and some gaseous copollutants (such as ozone, sulfur dioxide, and nitrogen dioxide). The studies are expected to greatly expand the database on personal exposures, indoor and outdoor concentrations, human activities, and home characteristics. They are also expected to improve understanding of factors that influence the relationship between ambient concentrations and personal exposures. Therefore, as new information from the panel studies accumulates, it appears that, in spite of the time needed to initiate them, many of the elements of research topic 1 are being addressed. Most of the studies have not been completed; their findings are expected to appear in the peer-reviewed literature in the next several years.

TABLE 3.1 Current Studies Relevant to Research Topic 1

Many of the recently completed and current studies examine the relationship between ambient concentrations of gaseous pollutants and personal exposures. Understanding that relationship will provide profiles of multipollutant exposures that can inform understanding of research topic 7 (combined effects of PM and gaseous pollutants). In addition, understanding of differences between personal exposure and ambient concentrations for a suite of gaseous pollutants and PM will provide input into analyses of measurement error in a multipollutant context (see research topic 10, analysis and measurement).

Application of Evaluation Criteria

Scientific Value

The current panel exposure studies are straightforward and have expanded on findings from previous investigations. They have used well-established research tools for conducting personal and microenvironmental measurements. They have also relied on field protocols developed as part of previous exposure studies, such as the Particle Total Exposure Assessment Methodology (PTEAM) study (Pellizzari et al. 1993). The studies are generally designed to assess the range of exposures, including those that occur in the home, in the workplace, and while traveling. To a large extent, the scientific value of these investigations will be judged by the appropriateness of their design. It appears that the study designs (such as repeated measurements of a small number of people) can adequately address the scientific questions in research topic 1.

Completed studies have indicated key factors that influence outdoor-personal relationships. Preliminary results suggest that for homes with no smokers and little cooking activity, home ventilation rate (or air-exchange rate) is the most important modifier of personal exposure. To a great extent, ventilation rate controls the impact of both outdoor and indoor sources on the indoor environment, where people spend most of their time. If correct, this observation implies that such entities as home characteristics, season, and location could be more important determinants of personal exposure than activities and type of susceptible subpopulation studied.

The panel studies will also produce a large set of data on human activities and home characteristics. These data will substantially enrich the existing information and will be available to other researchers involved in human-exposure assessment investigations (such as EPA's National Human Exposure Assessment Survey).

Decisionmaking Value

Exposure assessment is of paramount importance for understanding the effects of ambient particles and for developing cost-effective exposure-control strategies. The current studies should allow the scientific community and decisionmakers to understand the factors that affect the relationship between personal exposure and outdoor concentrations. That will be accomplished through the continued development of personal-exposure monitoring tools that allow a better understanding of the sources of exposure, physical and chemical properties of PM, and sampling durations that could be relevant to the subpopulations being studied. Although the panel studies are based on small numbers of participants (10-50 per panel), they are addressing factors that influence relationships between outdoor air and personal exposures. This is the first step in attempts to develop a comprehensive exposure model, which is a key research tool in the source-exposure-dose-response paradigm.

Feasibility and Timing

Sampling and analytical procedures, time-activity questionnaires, and other related methods necessary for conducting the panel studies have been adequately tested. They have been implemented successfully in various geographical locations by various research groups (such as Janssen et al. 1999, 2000; Ebelt et al. 2000; Rojas-Bracho et al. 2000; Sarnat et al. 2000; Williams et al. 2000). Therefore, it is expected that the current longitudinal panel studies will be completed without great difficulty. Although there was some delay in initiating some of the studies, abundant personal and microenvironmental measurements have been collected. Reporting of results from research related to this topic began during the summer of 2000, and the remaining studies should be reported within about 2 years, a year later than originally planned.

RESEARCH TOPIC 2. EXPOSURES OF SUSCEPTIBLE SUBPOPULATIONS TO TOXIC PARTICULATE-MATTER COMPONENTS

What are the exposures to biologically important constituents and specific characteristics of particulate matter that cause responses in potentially susceptible subpopulations and the general population?

The committee recommended that, after obtaining and interpreting results of studies from research topic 1, human exposure-assessment studies examine exposures to specific chemical constituents of PM considered relevant to health effects. To make research topic 2 investigations more practicable, it will be necessary to characterize susceptible subpopulations more fully, identify toxicologically important chemical constituents or particle-size fractions, develop and field-test exposure-measurement techniques for relevant properties of PM, and design comprehensive studies to determine population exposures.

Methods of measuring personal exposures to particles of various physical properties (such as particle number and size) or chemical properties (such as sulfate, nitrate, carbon, and other elements) are available and are being field-tested. Methods of measuring personal exposures to some gaseous copollutants—such as ozone, nitrogen dioxide, and sulfur dioxide—are also used. As interest in personal-exposure measurements increases, new sampling and analytical techniques will probably emerge.

The results of the longitudinal panel studies discussed under research topic 1 should facilitate the design of cost-effective protocols for future exposure studies that focus on PM components considered important in determining toxicity. These studies will be based on toxicity and epidemiological studies that are successful in identifying particle properties of interest over the next few years; because they will probably not get under way for several years, the committee is planning to evaluate their progress in its next report.

RESEARCH TOPIC 3. CHARACTERIZATION OF EMISSION SOURCES

What are the size distribution, chemical composition, and mass-emission rates of particulate matter emitted from the collection of primary-particle sources in the United States, and what are the emissions of reactive gases that lead to secondary particle formation through atmospheric chemical reactions?

In its second report, the committee created a separate set of research recommendations that address measurement of the size distribution and chemical composition of PM emissions from sources. Characterization of the emission rates of reactive gases that can form particles on reaction in the atmosphere was also emphasized, including the need to maintain emission data on sulfur oxides, nitrogen oxides, ammonia, and volatile organic compounds (VOCs) (specifically those components that lead to particle formation).

The committee noted that traditional emission inventories have focused on representing PM mass emissions summed over all particles smaller than a given size, without detailed accounting of the particle-size distribution or chemical composition. Health-effects research recommended by the committee emphasized identification of the specific chemical components or size characteristics of the particles that are most directly related to the biological mechanisms that lead to the health effects of airborne particles. Detailed information on the size and composition of particle emissions from sources is important for this process of hazard identification and effective regulation. In the near term, toxicologists and epidemiologists need to know the size and composition of particles emitted from key emission sources to form hypotheses about the importance of particle characteristics and to give priority to their evaluation in laboratory- and field-based health-effects studies. In the longer term, detailed information on particle size and composition will be needed for the design of effective air-quality control programs if those programs become more precisely targeted at the most biologically active components of the atmospheric particle mixture.

Detailed data on the particle size distribution and chemical composition of emissions from sources are also needed to support the application and evaluation of air-quality models that relate source emissions to ambient-air pollutant concentrations and chemical composition. These models are central to the process of evaluating emission-control strategies in advance of their adoption. Source-oriented models for particle transport and new particle formation can require detailed data on particle size and composition for use in condensation-evaporation calculations. Chemical mass-balance (CMB) receptor-oriented air-quality models determine source contributions to ambient particle concentrations by computing the best-fit linear combination of source chemical-composition profiles needed to reconstruct the chemical composition of atmospheric samples. These CMB models inherently require the use of accurate data on the chemical composition of particle emissions at their source. Finally, emissions data on particle chemical composition and size will be needed in the future to support detailed studies of air-quality model performance. Even when the regulated pollutant is fine-particle mass, assurances are needed that air-quality models are getting the right answers for the right reasons. Model-evaluation studies conducted in a way that tests a model's ability to account for ambient particle size and chemical composition can be used to confirm that the model has arrived at agreement between the predicted and observed mass-concentration values for the correct reasons.
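To illustrate the CMB calculation described above, the sketch below reconstructs an ambient sample as a best-fit non-negative linear combination of source chemical-composition profiles. The profiles, species, and concentrations are invented for illustration; operational CMB software also weights the fit by measurement uncertainties (effective-variance least squares), which this sketch omits.

```python
# A minimal sketch of the chemical mass-balance (CMB) idea: ambient species
# concentrations are modeled as a linear combination of source composition
# profiles, and source contributions are recovered by non-negative least
# squares. All numbers below are illustrative, not real source-test data.

import numpy as np
from scipy.optimize import nnls

# Columns: fractional composition of emissions from each source type.
# Rows: chemical species measured in the ambient sample.
profiles = np.array([
    #  diesel  wood smoke  soil dust
    [0.60,    0.45,       0.02],   # organic carbon
    [0.30,    0.05,       0.01],   # elemental carbon
    [0.01,    0.30,       0.05],   # biomass-burning tracer (e.g., potassium)
    [0.02,    0.02,       0.70],   # crustal elements (Al, Si, Ca, Fe)
])

ambient = np.array([8.0, 2.5, 1.8, 3.0])   # µg/m³ for each species

contributions, residual = nnls(profiles, ambient)
for name, mass in zip(["diesel", "wood smoke", "soil dust"], contributions):
    print(f"{name}: {mass:.1f} µg/m³")
```

The point made in the text follows directly from this arithmetic: if the source profiles themselves are inaccurate or out of date, the recovered contributions will be wrong no matter how good the ambient measurements are.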

In light of those needs for data on the size and chemical composition of particle emissions from sources, the committee's second report outlined the following set of research needs: establish standard source-test methods for measurement of particle size and chemical composition, characterize primary particle size and composition of emissions from the most important sources, develop new measurement methods and use of data to characterize sources of gas-phase ammonia and semivolatile organic vapors, and translate new source-test procedures and source-test data into comprehensive national emission inventories.

Establish Standard Source-Test Methods for Measurement of Particle Size and Chemical Composition

Research into the establishment of new source-test methods for measurement of fine-particle chemical composition is under way at EPA. A dilution source sampler for measurement of emissions from stationary sources has been built and tested. It permits measurement of particle size distributions and elemental carbon, organic carbon, speciated organic compounds, inorganic ions, and trace elements. The inorganic ions typically would include sulfates, nitrates, ammonium, and chlorides. Catalytic trace metals are included among the more than 30 trace elements that will be measured. These measurements are aligned with many of the potentially hazardous characteristics of the particles that have been identified by the committee and include determination of size-fractionated PM mass, PM surface area, PM number concentration, transition metals, soot, polycyclic aromatic hydrocarbons (PAHs), sulfates, nitrates, and some copollutants. It is not clear whether plans are being made to measure strong acids, bioaerosols, peroxides, or free radicals, which constitute other categories of concern to the health-effects community in determining the toxicity of particles. The methods being developed can be used to collect data on volatile and semivolatile organic vapor emissions and could be adapted to measure ammonia emissions. Methods for dilution source sampling of diesel exhaust particles are also under development.

EPA has conducted field tests of these advanced emission-measurement methods for open biomass burning, residential wood stoves, heavy-duty diesel trucks, and small oil-fired boilers. Construction dust emissions have also been measured. Plans for the near future include measurement of PM emissions from diesel trucks, wood-fired boilers, large residual oil-fired boilers, jet aircraft engines, and coal-fired boilers. In addition, dilution source sampling to determine particle size and composition by comparable methods is being supported by EPA through the Science to Achieve Results (STAR) grants program (biomass smoke), the American Petroleum Institute (API) (petroleum-refinery processes), the Coordinating Research Council (CRC) (diesel trucks), the California Air Resources Board (CARB), the National Park Service (NPS), the Department of Defense (motor vehicles, boilers, and so on), and the Department of Energy (DOE).

Those dilution source-sampling methods have been developed for research purposes and are being used to gather data to prepare accurate emission inventories. However, the new methods have not yet replaced earlier methods for testing to establish and enforce emission limits. EPA's Office of Air and Radiation (OAR) is evaluating a dilution-based source-testing procedure for PM2.5 compliance source-testing that might be proposed in the 2002 Federal Register.

Characterize Primary Particle Size and Composition of Emissions

In its second report, the committee advised EPA that development of new source-test methods would probably require substantial attention during FY 2000 and 2001. It was suggested that the new methods be used to characterize a larger number of sources over a 5-year period, beginning in FY 2002, because this information will be needed to revise the nation's emission inventories. EPA's method-development effort is well under way as recommended, but it is too early to expect large-scale application of the new methods.

In the course of development and testing of the new source-measurement methods, emissions from about six important source types have been characterized by EPA according to their particle size distributions and chemical composition, and another six will be characterized in the near future. Beyond those advances, EPA OAR reports that current resources will not support plans to conduct measurements of PM emissions from other stationary sources with either newly developed or more traditional source-test methods. Historically, few states have devoted substantial resources to source testing for the purposes of emission-inventory development. Some source testing has been supported by government agencies other than EPA (such as CARB, the state of Colorado, NPS, and DOE) and by industry (for example, CRC, EPRI, and API). The committee located more than 150 projects related to source testing either under way or recently completed, with studies generally distributed as shown in Table 3.2. However, few of these studies use methods, such as the dilution source-sampling system being developed by EPA, that fully characterize particle size and chemical composition.

TABLE 3.2 PM Emissions-Related Research

The small number of sources scheduled for full characterization falls far short of a well-designed comprehensive testing program that would lead to more-accurate emission inventories. EPA has noted in its reply to the committee's questions about the range of sources to be tested that “ORD can only test a limited number of source categories annually with currently available staff and funding. In addition, the ORD method development effort is unable to test sources within any one category under the full range of operating conditions typically encountered in the field. As previously stated, the number and diversity of sources means that, at any foreseeable resource level, many years would be needed to test a representative sample of all distinctive types of sources” (EPA response to questions from the committee dated June 2000). In its second report, the committee recommended that EPA plan to systematically achieve nearly complete characterization of emissions by particle size and composition for sources that contribute about 80% of the primary particle emissions nationally. The committee notes that now is the time to begin planning the selection of sources to be tested during the 5-year cycle beginning in FY 2002 to achieve that objective.
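One simple way to operationalize the committee's 80% coverage goal is to rank inventory source categories by emissions and select the largest until the cumulative total reaches the target, as in the sketch below. The source categories and tonnages are illustrative placeholders, not actual inventory values.

```python
# A minimal sketch, assuming a toy inventory: preselect sources for testing
# so that the tested set covers roughly 80% of national primary PM emissions.

def select_sources(inventory, coverage=0.80):
    """inventory: dict of source type -> annual primary PM emissions (tons).
    Returns the smallest set of largest sources reaching the coverage goal."""
    total = sum(inventory.values())
    selected, cumulative = [], 0.0
    for source, tons in sorted(inventory.items(), key=lambda kv: -kv[1]):
        selected.append(source)
        cumulative += tons
        if cumulative / total >= coverage:
            break
    return selected, cumulative / total

inventory = {  # illustrative tonnages only
    "open biomass burning": 900_000, "paved-road dust": 700_000,
    "residential wood stoves": 450_000, "heavy-duty diesel": 300_000,
    "coal-fired boilers": 250_000, "construction dust": 200_000,
    "jet aircraft": 50_000,
}
sources, achieved = select_sources(inventory)
print(sources, f"{achieved:.0%}")
```

A real plan would also weight selection by suspected toxicity and by how out of date each category's existing profile is, not by mass alone.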

In its second report, the committee specifically recommended an expanded source-testing program at the level of an additional $5 million per year, beginning in FY 2002. That recommendation, if followed, would remove the program's current financial constraints. Therefore, it is appropriate to begin planning for a comprehensive source-testing program that will systematically measure the particle size distribution, particle chemical composition, and gaseous particle-precursor emission characteristics of a reasonably complete set of the relevant sources over a 5-year period. Consultations should be held with researchers in health effects, exposure, source-oriented air-quality modeling, and receptor-oriented air-quality modeling to solicit recommendations on sources to be tested and any additional chemical and physical dimensions that should be measured during the national source-testing program.

Develop New Measurement Methods and Use of Data to Characterize Sources of Gas-Phase Ammonia and Semivolatile Organic Vapors

Methods for measurement of ammonia from nonpoint sources, such as hog-feeding facilities and highway operation of motor vehicles, have been tested by EPA ORD during the last year. Additional measurements of ammonia emissions from animal husbandry are planned for next year. Semivolatile organic compound emissions are among the dimensions listed as measurable by the research-grade dilution source-sampling procedures developed by ORD. As in the previous discussion of fine-particle emission characterization, there appears to be no program in place that will characterize more than a handful of the relevant emission source types within the foreseeable future. Before FY 2002, a plan should be put into place for a comprehensive source-testing program that will lead to the creation of a national ammonia emission inventory based on credible and recent experimental data.

Translate New Source-Test Procedures and Source-Test Data into Comprehensive National Emission Inventories

EPA maintains a national regulatory emission inventory for PM2.5, PM10, and gases that act as particle precursors. The PM emission inventory is primarily a mass-emission inventory that does not extend to particle size distributions and particle chemical composition. EPA maintains a file of source chemical-composition profiles that can be used to estimate particle chemical composition in many cases. These source chemical-composition profiles need to be brought up to date through a continuing program of literature review and additional source testing.

Funds appear to be available to incorporate data from new emission measurements into the national emission inventory. Although the new data are incorporated into the inventory continuously as they are collected, there is no specific date for completion of a truly new inventory. This process might appear to be one of continuous improvement, but that is not necessarily the case. Technologies used in various types of emission sources change over time. As a result, older emission data can become obsolete faster than the program of continuous improvement can keep up with the changes, especially if the emission-inventory program does not have a systematic schedule for review and replacement of existing data. Highway diesel engines, for example, could be scheduled for new source-characterization experiments, but it is possible that many other diesel-engine types used in heavy-duty off-highway applications—such as construction equipment, railroad locomotives, and ships—are represented by obsolete source-test data as these technologies change over time.

The committee has recommended the compilation, beginning in FY 2006, of a thoroughly revised national emission inventory for PM as a function of particle size and composition, and for gaseous particle precursors, based on the new source-test data generated in accordance with the above recommendations. The infrastructure exists to support this work, and the committee has recommended new funds of $1 million per year to finance the effort over several years, beginning in FY 2006.

Application of Evaluation Criteria

Scientific Value

There is great scientific value in the research under way to develop new source-test methods and demonstrate their capabilities to measure particle size, particle chemical composition, and rates of emission of ammonia and semivolatile organic compounds. This information is needed to guide exposure-assessment studies and help toxicologists and epidemiologists form potential hypotheses about components of PM that could be hazardous to human health. The emission data are also needed to support tests of advanced air-quality models that seek to relate pollutant emissions to ambient-air quality. Emission data that describe particle size and chemical composition are needed to permit the calculation of gas-to-particle conversion rates and support calculations of heterogeneous chemical reactions that occur in clouds, haze, and fog. Furthermore, when emission data on particle size and composition are available, air-quality models that account for particle size and composition can be put to very demanding tests that ensure that they are producing the right answers for the right reasons.

Decisionmaking Value

Decisions about alternative emission-control policies have to be based on an accurate understanding of the relative strength and possible toxicity of emissions from various sources. Accurate emission inventories are absolutely fundamental to the decisionmaking process. Although there is scientific merit in the work that is under way to develop new source-test methods, the potentially important benefits to the decisionmaking process of more-complete and accurate knowledge of particle emissions evaluated according to size and composition can be realized only if EPA proceeds to expand its present source-testing program substantially by FY 2002, in accordance with the committee's recommendations. EPA should now develop a comprehensive plan for systematically translating the new source-test methods into a completed comprehensive national emission inventory based on contemporary source tests of comparable quality. There is still ample opportunity to plan that future source-test program. The first step would involve the systematic creation of a master list of sources that most need to be tested over a specific period. The timeline for this testing must allow for the incorporation of revised and updated data into an overall emission inventory of predetermined quality and completeness by the time the next round of PM implementation plans must be drafted.

Feasibility and Timing

In the committee's second report, it was estimated that five to 15 source-testing campaigns would need to be directed at different source types each year for a 5-year period beginning in FY 2002 to bring new source-test methods to bear in creation of a reasonably complete emission inventory for particle size and composition based on contemporary data of high quality. EPA ORD is conducting about six such testing campaigns per year, at a cost of about $2.3 million per year, while it is in the method-development phase that precedes the work of source testing for an emission inventory. That is reasonably consistent with the committee's recommendation that about $2.5 million per year should be spent during FY 2000 and 2001 on method-development research. On the basis of the observation that EPA ORD alone has been able to conduct about six source-test campaigns per year with an annual budget of $2.3 million, it seems reasonable that funds of $5-$7.5 million per year, as recommended by the committee for FY 2002-2006, will be sufficient to support the proposed testing needed for a thorough upgrade of the emission inventory. With the FY 2002-2006 timeline, EPA has at least a year in which to draft a plan that identifies the sources to be tested in the future to ensure reasonably complete representation (a goal of about 80% coverage on a mass basis) of the national fine-particle emission inventory. Although some of the remarks by EPA in reply to committee questions appear to assume that a reasonably complete reworking of the emission inventory is beyond the planning horizon of the agency, the goal of a high-quality inventory for particle size and chemical composition is not out of reach. Drafting of a comprehensive plan that preselects sources to be tested and sets priorities for the work to be done over about a 5-year period will help to ensure the success of the research effort.

RESEARCH TOPIC 4. AIR-QUALITY MODEL DEVELOPMENT AND TESTING

What are the linkages between emission sources and ambient concentrations of the biologically important components of particulate matter?

The focus of this research topic is development and testing of source-oriented and receptor-oriented models that represent the linkages between emission sources and ambient concentrations of the most biologically important components of PM. Comprehensive source-oriented models for PM are still under development; before they are ready for regulatory applications, they require more-certain emission inventories (see research topic 3) and an improved understanding of the chemical and physical processes that determine the size distribution and chemical composition of ambient particles. Receptor-oriented models have been used to apportion particle mass measurements to primary emission sources through a mathematical comparison of chemical profiles of ambient PM samples with the profiles of emission-source types. However, better mathematical tools and chemical tracers are needed to resolve additional sources and to handle secondary species. Before the models can be used with sufficient confidence, both the receptor-oriented and source-oriented approaches need to be tested with observations from intensive field programs and then compared with each other.

Air-Quality Model Development

Source-Oriented Models

EPA has developed its major new modeling platform, MODELS 3, over the last decade. MODELS 3 is just beginning to be deployed and has not yet been extensively tested. It has been developed in a specific configuration, the Community Multiscale Air Quality (CMAQ) model, primarily for modeling ozone. Scientific reviews of MODELS 3 have focused primarily on its ability to provide adequate representations of chemical processes to estimate ozone. Only recently has there been active consideration of incorporating PM formation and transport into the model.

The atmospheric-science community had limited interaction with EPA during the development of MODELS 3. In EPA's response to the committee 's questions, the agency suggested that there was limited interaction because EPA faces relatively few major uncertainties about atmospheric processes and it is simply a matter of time before all the science that is needed to produce adequate estimates will be incorporated into the model. The committee did not get an indication as to whether MODELS 3 had been sufficiently tested with regard to PM formation and transport.

Table 3.3 presents a summary of the current studies identified by EPA and others as sources of information on atmospheric processes. These studies demonstrate the efforts under way to understand the processes governing atmospheric phenomena. However, the committee does not believe that current or planned efforts are sufficiently organized to effectively assess and use the information obtained through these studies.

TABLE 3.3 Summary of Current Studies Identified by EPA as Sources of Information on Atmospheric Processes

EPA has developed a second model, the Regulatory Modeling System for Aerosols and Deposition (REMSAD), which is designed to simulate the concentrations and chemical composition of primary and secondary PM2.5, concentrations of PM10, and deposition of acids, nutrients, and toxic chemicals. To reduce computational time and costs, REMSAD uses simpler chemistry and physics modules than MODELS 3. REMSAD has been applied to model concentrations of total PM2.5 and PM2.5 species (sulfate, nitrate, organic carbon, elemental carbon, and other directly emitted PM2.5) over the conterminous United States for every hour of every day in 1990. Annual, seasonal, and daily averages from the 1990 base case have been compared with data from the Interagency Monitoring of Protected Visual Environments (IMPROVE) network and the Clean Air Status and Trends Network (CASTNET). Sensitivity analyses have also been conducted for changes in SOx, NOx, ammonia, and directly emitted PM2.5. Because of the lack or sparseness of available data for many areas of the United States (for example, IMPROVE provided only two 24-hour-average concentrations per week for a few dozen sites in 1990), there has not been an effective national evaluation of the model for PM. It is not clear whether REMSAD's simplified representations of chemistry adequately capture the complex atmospheric processes that govern observed particle concentrations.
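When suitable network observations are available, such comparisons are commonly summarized with paired model-to-observation statistics such as normalized mean bias and normalized mean error. The sketch below shows the arithmetic under assumed observed and modeled sulfate values; the numbers are illustrative and are not REMSAD output or IMPROVE data.

```python
# A minimal sketch of two conventional model-performance metrics computed
# from paired observed and predicted concentrations. Values are illustrative.

import numpy as np

def normalized_mean_bias(pred, obs):
    """Signed aggregate over- or under-prediction, relative to observations."""
    return (pred - obs).sum() / obs.sum()

def normalized_mean_error(pred, obs):
    """Unsigned aggregate discrepancy, relative to observations."""
    return np.abs(pred - obs).sum() / obs.sum()

obs = np.array([5.2, 8.1, 12.4, 6.7, 9.3])    # observed sulfate, µg/m³
pred = np.array([4.6, 9.0, 10.8, 7.5, 8.2])   # modeled sulfate, µg/m³

print(f"NMB = {normalized_mean_bias(pred, obs):+.1%}")
print(f"NME = {normalized_mean_error(pred, obs):.1%}")
```

The committee's point about sparse 1990 data applies directly here: with only two 24-hour samples per week at a few dozen sites, such statistics carry little power to distinguish compensating errors from genuine skill.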

A number of other source-oriented PM models are being developed by individual investigators at universities or consulting companies. Seigneur et al. (1998, 1999) reviewed 10 Eulerian grid models: seven for episodic applications and three for long-term applications. The episodic models are the California Institute of Technology (CIT) model, the Denver Air Quality Model (DAQM), the Gas, Aerosol, Transport, and Radiation (GATOR) model, the Regional Particulate Model (RPM), the SARMAP Air Quality Model with Aerosols (SAQM-AERO), the Urban Airshed Model Version IV with Aerosols (UAM-AERO), and the Urban Airshed Model Version IV with an aerosol module based on the Aerosol Inorganic Model (UAM-AIM). The long-term models are the REMSAD, the Urban Airshed Model Version IV with Linearized Chemistry (UAM-LC), and the Visibility and Haze in the Western Atmosphere (VISHWA) model. In addition, several university groups are developing additional PM models that are primarily extensions of the CIT model to other areas of the country.

It appears that none of the models reviewed by Seigneur et al. (1998, 1999) is suitable for simulating PM ambient concentrations under a wide range of conditions. The following limitations were identified in both episodic and long-term models:

Most models need improvement, albeit to various extents, in their treatment of sulfate and nitrate formation in the presence of fog, haze, and/or clouds.

All models need improvement, albeit to various extents, in their treatment of secondary organic particle formation.

The urban-scale models will require modifications if they are to be applied to regional scales.

All models but one lack subgrid-scale treatment of point-source plumes.

In addition to the limitations identified above, the reliability of the simplified treatments of chemistry used for estimating the effect of emission changes on PM concentrations in the long-term models has not been sufficiently tested. An alternative approach for predicting annual average PM concentrations also remains inadequately tested. This approach—to be used by EPA in applications of MODELS 3/CMAQ—is to approximate a full year by combining several typical meteorological scenarios with appropriate weighting factors and applying an episodic model separately to each scenario. The validity of the approach depends on, among other things, the meteorological representativeness of the selected scenarios. The approach has not yet been the subject of a comprehensive evaluation, so its validity is unknown.
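The scenario-weighting approach itself is arithmetically simple, as the sketch below shows: each episodic simulation is weighted by the fraction of the year its meteorology is assumed to represent. The scenarios, weights, and concentrations are hypothetical; the committee's caution applies to whether any small set of scenarios can truly represent a full year.

```python
# A minimal sketch, assuming hypothetical episodic-model results: the annual
# average is approximated as a frequency-weighted combination of episode means.

episodes = {
    # scenario: (fraction of the year it represents, mean PM2.5 in µg/m³)
    "stagnant winter inversion": (0.10, 28.0),
    "typical winter": (0.30, 14.0),
    "typical summer": (0.40, 11.0),
    "regional summer haze": (0.20, 19.0),
}

weights = [w for w, _ in episodes.values()]
assert abs(sum(weights) - 1.0) < 1e-9  # weights must cover the full year

annual_avg = sum(w * c for w, c in episodes.values())
print(f"approximate annual average: {annual_avg:.1f} µg/m³")
```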

In addition to the limitations indicated above regarding the formulations of PM source models, it must be noted that the application of PM source models requires input data for emissions, meteorology, and ambient concentrations of PM and gases. For example, it might be possible to improve the models by incorporating more information on atmospheric processes, but any apparent improvements will need to be tested for their success in reproducing observations during specific meteorological situations. Substantial uncertainties are still associated with PM emission inventories, as described under research topic 3. Ammonia and VOC emission inventories involve large gaps that will affect the predictions of secondary PM, and uncertainties in natural and anthropogenic emissions of mineral dust will affect the predictions of primary PM. Moreover, the most comprehensive modeling input databases are for California regions, and there are insufficient data for other states.

Emission and process data related to specific components of PM are notably lacking. A long-term research goal is to identify specific physical or chemical components in PM that are primarily responsible for the adverse health effects. EPA models focus on total mass concentrations and major PM constituents, such as sulfate and nitrate. However, future models could expand their focus to size distributions and other chemical constituents.

Efforts are under way at academic and other research institutions (such as EPRI) to improve air-quality models. Efforts are also under way to link and integrate air-quality models with exposure and dose models. However, it appears that there is no coordinated effort to compare the various models with one another or to use improvements developed for one model to improve the others, particularly those earmarked for regulatory applications. It is not clear that the appropriate commitment is being made to have the best models available at the local air-quality management level for use in PM planning efforts.

Receptor-Oriented Models

There has been very little support for the development and testing of new receptor-oriented models. Such models are used to identify the sources of an ambient PM sample from a given location (receptor) and to apportion the sample quantitatively among those sources. The CMB model has been rewritten to run under the Windows operating system, and EPA has supported factor-analysis model development under a single STAR grant. Both products are now in the process of external review, with probable release at the end of 2000. A new version of the EPA source-profile library will run under modern PC operating systems. However, only 16 profiles have been added to the library since its revision in 1992—an indication of the lack of incorporation of recently published source profiles. Seigneur et al. (1998, 1999) reviewed existing receptor models, including back-trajectory-based analyses to locate candidate source areas, alternative factor-analysis models based on least-squares fitting, and alternative solution methods for the CMB. However, further development and testing are required before these models can be widely distributed for air-quality management purposes.

There is a particular need for data-analysis tools to handle newly emerging monitoring technologies, such as aerosol mass spectrometry. A number of approaches have been presented in the literature, but they are typically applied to only a single location or region. There has not been an extensive effort to test the effectiveness of these alternative methods or to review their potential use in future development of air-quality management strategies.

Air-Quality Model Testing

To test the predictive capability of models, it is necessary to have both the model input data and appropriately detailed sets of air-quality observations to compare with the model outputs. Earlier field campaigns—such as the Southern Oxidants Study (SOS; www2.ncsu.edu/ncsu/CIL/southern_oxidants/index.html), the Northern Front Range Air Quality Study (NFRAQS; www.nfraqs.colostate.edu/index2.html), the Southern California Air Quality Study (SCAQS; Lawson 1990), and the San Joaquin Valley Air Quality Study (SJVAQS; Lagarias and Sylte 1991; Chow et al. 1998)—provide the necessary data for model testing. (SOS, until recently, and SJVAQS have limited utility for PM models because they focused primarily on ozone.) Although those earlier efforts provided insights into basic atmospheric processes, only some of the supersite monitoring stations are likely to have sufficient data to validate regional-scale air-quality models. It is clear that more field campaigns are needed to provide data to test the predictive capability of state-of-the-science PM models.

A number of large studies, such as the supersite efforts, are just starting, and others, such as SOS, are continuing. However, there do not appear to be plans to use these databases fully for testing air-quality models. ORD personnel have suggested that EPA will use the data to test MODELS 3/CMAQ, but it is not clear to the committee that there will be effective internal and external review. The committee is aware of no defined plans to compare MODELS 3/CMAQ with any of the other similar-scale models. Such comparisons are needed, and a plan for evaluation and revision of the EPA model should be developed as part of EPA's PM research program.

This lack of effort to use available data effectively highlights the need for EPA to be active in defining the nature of the data needed to test MODELS 3/CMAQ fully and compare it with other independently developed models. There is now an agreement among the Baltimore, New York City, and Pittsburgh supersites and the NARSTO NE-OPS (Northeast Oxidant and Particle Study) program in Philadelphia to operate in an intensive mode during July 2001. There will be only sparse upper-air measurements using LIDAR in Baltimore and Philadelphia. Although it might be too late to organize extensive additional upper-air measurements for those particular studies, this is one example of the kind of opportunity that EPA should be actively seeking, particularly in the eastern and southeastern United States, where such large-scale, detailed data are lacking.

It might be possible to build on the speciation network, once it is in place, to develop appropriate field campaigns. By operating these systems more intensively (more frequently than daily) and supplementing the speciation network monitors with particle-size measurement devices, it would be possible to provide a suitable database for regional-scale model testing.

The testing of receptor models is similarly incomplete. It appears that there has been no clear plan for development, testing, and deployment of additional receptor-modeling tools. CRC recently supported an effort to evaluate receptor models for VOCs, using grid models to produce test data. EPA has had a small effort to compare factor-analysis models, but a more extensive program will be needed to provide the full range of tools necessary for a comprehensive analysis, particularly for PM2.5.

Application of Evaluation Criteria

Scientific Value

There is substantial support for current studies that are expected to make substantial contributions to the understanding of atmospheric processes. The development of source-oriented models represents the codification of a portion of that new knowledge into an organized framework for application. The testing of individual models and the comparison of the results of multiple models can help to identify the effects of different approaches to incorporating the knowledge into model-based predictions. Such comparisons will help to refine the available knowledge. The development of better algorithms for source- and receptor-oriented models will also represent a substantial scientific advance.

Decisionmaking Value

Air-quality models are essential for making regulatory decisions. They provide the critical information required to develop the effective and efficient air-quality management strategies that are needed for state implementation plans (SIPs), which are developed when areas are found to be in nonattainment of the PM National Ambient Air Quality Standards (NAAQS). Regional-scale models are needed for reducing visibility impairment, acid precipitation, and other adverse environmental effects. Improved models would also provide critical exposure-related data that could be used in health studies to examine the relationships between ambient PM concentrations and health. There is insufficient effort to test the models developed by EPA and others or to use extensive comparison with other models to ascertain the differences and similarities in the results. Such efforts would provide further improvements in the models and greater confidence in the decisions based on the model results. In addition, it is important to link air-quality models with exposure models. EPA is collaborating with other organizations to develop a population exposure model for PM to provide such a linkage.

Feasibility and Timing

The development and testing of models are highly feasible. The increase in computational power permits the incorporation of greater numbers of observations and improved understanding of atmospheric processes into source-oriented models. The same computational power permits much more sophisticated methods of data analysis to be used in receptor-oriented models.

The new PM monitoring program provides a base from which data can be obtained for testing of the models. If research topic 3 is appropriately implemented, the necessary source data and source-oriented models will be readily available. The source profiles developed under research topic 3 will also provide data to introduce into receptor-oriented models. Thus, the plan for testing models presented in the committee's second report can be used to provide the necessary tests and improvements for models. However, the planning process should be started immediately to allow time for important regional-scale field studies. It should be noted that the data needed for model testing and evaluation are not necessarily the same as data needed for developing exposure metrics, which is discussed later in this chapter.

There is also time to develop, test, and deploy advanced receptor models more fully before requirements arise from the SIP process. However, there must be a more concerted effort to recognize the need for improved models and improved data for existing models.

Integration and Planning

There appears to be insufficient effort in organizing and carrying out the field studies that would provide the data for thorough evaluation of existing models; only a small effort is being made to leverage the investment in the PM monitoring program to provide these data. There is a large body of historical data that would be of great value in model testing if they could be processed in a standard way into a central repository from which they can be easily accessed. It appears that EPA does not yet recognize the need for full model testing, so it has not mobilized the needed resources.

It appears that there has been no comprehensive planning for the development and deployment of receptor-oriented models. The current ad hoc approach to receptor-model development will not provide the additional tools essential to develop future state implementation plans. With the development of several new factor-analysis models, there has been some effort to compare them, but there is still no evidence of a plan for developing and implementing improved models in the context and timeframe of what will be needed for the PM2.5 SIP process.

RESEARCH TOPIC 5. ASSESSMENT OF HAZARDOUS PARTICULATE-MATTER COMPONENTS

What is the role of physicochemical characteristics of particulate matter in eliciting adverse health effects?

The initial research portfolio (NRC 1998) outlined a research agenda designed to improve the understanding of the roles of specific characteristics of ambient PM (such as particle size distribution, particle shape, and chemical constituents) in determining the toxicity underlying adverse health outcomes associated with PM exposure. The research plan indicated not only studies aimed at determining the relevance of those characteristics, but also work designed to evaluate the dose metrics that have been used to relate PM exposure to health effects in epidemiological and toxicological evaluations. Research was also needed to develop PM surrogates, that is, materials with specified characteristics for use in toxicity studies. In its second report, the committee (NRC 1999) reconfirmed the importance of this kind of investigation.

The nature of the chemical or physical characteristics of ambient PM that might account for its biological activity remains a critically important component of the PM research portfolio. In addition to providing mechanistic plausibility for epidemiological findings related to PM, an understanding of the relationship between mechanisms of biological action and specific PM characteristics will be a key element in selecting future control strategies. The following list of particle characteristics potentially relevant to health risk is large and possibly variable across health effects:

Size-fractionated PM mass concentration

PM surface area

PM number concentration

Transition metals

Soot and organic chemicals

Bioaerosols

Sulfate and nitrate

Peroxides and other free radicals

Those particle characteristics may be associated with cardiovascular disease, acute respiratory infection, chronic obstructive pulmonary disease, asthma, and mortality. Inspection of this list, which could be expanded, makes clear the challenge that is faced by the investigative community and by research managers who need to focus resources on the key relationships between particle characteristics and health effects.

In addressing this research topic, both toxicological and epidemiological approaches are needed. Hypotheses advanced from data in one domain need to be tested in the other, complementary domain. Greater certainty will be achieved as evidence from the laboratory and the population converges and as integrative research models merge the population and laboratory data into a common framework. For example, particles obtained from filters in the Utah Valley, the site of epidemiological studies of health risks posed by particles from a steel mill, have been assessed for toxicity in laboratory systems (Frampton et al. 1999; Soukup et al. 2000; Dye et al. in press). The availability of particle concentrators could also facilitate the implementation of integrative research models, in that animals and people can be exposed to a comparable mixture of real-world particles.

The general methodological issues arising in connection with this research topic are akin to the problem of assessing the toxicity of a mixture and determining the specific characteristics that are responsible for its toxicity. Particles, in fact, constitute a mixture: urban atmospheres are contaminated by diverse sources, and the characteristics of particles can change and vary among regions. The difficulties of studying mixtures have been addressed by numerous panels, including committees of the National Research Council (NRC 1988). Accepted and informative research models have not yet been developed, and even attempting to characterize several toxicity-determining characteristics of mixtures has proved challenging.

One objective of research related to this topic was to assess relevant dose metrics for PM to explain adverse health outcomes. EPA routinely measures size-specific mass concentration, previously PM10 and now PM2.5 as well. The selection of these concentrations and the timeframe over which they are measured (24 hours) reflects technological feasibility more than fit with the time-exposure-response relationships of PM with health risk. Routine regulatory monitoring provides only 24-hour averaged mass concentrations, but special monitoring programs—including those instituted in support of epidemiological studies, the speciation sites, and the supersites program—offer the opportunity to explore alternative dose (exposure) metrics. Newer techniques to monitor PM2.5 over shorter periods, and even continuously, are being developed and tested.

Another objective, to evaluate the role of particle size in toxicological responses to PM and in related epidemiological outcomes, focuses on the size of particles that are relevant to the health effects observed in the epidemiological studies. To date, the associations of PM with both illness and death have been demonstrated in studies using indexes that incorporate particles with a large range of sizes (such as total suspended particles, or TSP, and PM10). These studies have drawn on the available data. PM10, of course, includes particles in all smaller size categories and thus includes PM2.5 and ultrafine particles (those smaller than 0.1 µm in diameter). Ultrafine particles probably make up a very small fraction of PM10 mass, but pathophysiological considerations and some initial toxicological findings have focused attention on the hypothesis that such smaller particles may be responsible for some toxicological responses that lead to the epidemiological findings.
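For reference, the nesting of these size fractions can be made explicit in code: fine mass is PM2.5 (which contains the ultrafine fraction), and coarse mass is the PM10 minus PM2.5 difference. A minimal sketch, with illustrative measurements:

```python
# A minimal sketch of the monitoring-network size-fraction conventions.
# The measured values are illustrative, not data from any monitoring site.

def coarse_pm(pm10, pm25):
    """Coarse-fraction mass (PM10 minus PM2.5), floored at zero so that
    sampler noise cannot produce a negative mass when PM10 and PM2.5
    are nearly equal."""
    return max(pm10 - pm25, 0.0)

pm10, pm25 = 34.0, 21.0                      # collocated measurements, µg/m³
print("fine (PM2.5):", pm25)                 # includes the ultrafine fraction
print("coarse (PM10 - PM2.5):", coarse_pm(pm10, pm25))
```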

New work directed at this research topic has been based largely on toxicological approaches. Forty-eight toxicology projects described in the HEI database were identified as potentially related to the topic.

Ambient PM is a complex mixture that contains various chemical components in various size fractions. Evaluation of whether biological responses to PM are nonspecific—that is, are due merely to inhalation of any particle—or depend on specific PM properties is a critical focus of current research. Regarding the latter possibility, research performed in the recent past has indicated some specific characteristics that appear to be involved in PM-induced health effects. A compilation of these (Mauderly et al. 1998) is as follows: size-fractionated particle mass concentration; particle surface area; particle number concentration, which is generally related to the ultrafine component of PM; transition metals (especially the fraction soluble in vivo); acids; organic compounds (especially PAHs); bioaerosols; sulfates and nitrates, typically existing in ambient air as ammonium or sodium compounds; peroxides and free radicals that can accompany, and help to form, particles; soot (elemental carbon, black carbon, or light-absorbing carbon); and correlated cofactors (such as the presence of gaseous pollutants and variations in meteorology). The current toxicological research portfolio as reported in the HEI database was examined with regard to each of those specific chemical or physical characteristics.

Size-Fractionated Particle Mass Concentration. The mass concentration—the mass (weight) of collected PM per unit volume of sampled air, generally within some selected particle-size range or below some upper size cutoff—was the exposure metric most commonly evaluated in relation to health effects in the studies in the HEI database. More-recent epidemiological studies of ambient particles have generally focused on the 0.1- to 2.5-µm size range, although there have been a few studies of coarse particles. In a number of cases, the particle-size cutoffs used in toxicity studies differed from those commonly used to define size fractions obtained with ambient monitoring networks, namely PM2.5, coarse PM (PM10 minus PM2.5), PM10, and TSP. For example, some toxicity studies have used mass in sizes termed PM1.7 and PM3.7-20, whereas others have included particles of up to 4 µm in the definition of “fine” particles. Definitions of specific size fractions, such as “fine” and “coarse,” used in toxicological research should be consistent with those used in ambient monitoring studies and in epidemiological studies.

PM Surface Area. A few studies in the HEI database address particle surface area in the context of particle size, specifically in terms of its relation to health effects. Surface area is also relevant to the adsorption of gases onto particle surfaces.

PM Number Concentration. A few studies address the issue of particle number concentration, which is generally used to describe exposures to the ultrafine particles in ambient PM. These include in vivo and in vitro studies, the former including clinical-exposure studies involving particles ranging from 0.01 to 0.1 µm.

Transition Metals. The transition metals include titanium (Ti), vanadium (V), chromium (Cr), manganese (Mn), iron (Fe), cobalt (Co), nickel (Ni), copper (Cu), zinc (Zn), cadmium (Cd), and mercury (Hg). Some—Cr, Mn, Co, Ni, Cd, and Hg—are both transition metals and EPA-listed hazardous air pollutants. Although a number of studies address the issue of toxic metals, many use material containing a mix of various metals, and few specify single metals for evaluation. For example, residual-oil fly ash particles containing nickel and vanadium are commonly used in toxicity studies related to PM; such studies have involved both animal in vivo and in vitro study designs. Other studies have examined pulmonary inflammation related to Fe, V, Zn, and Ni; DNA damage related to Cr(VI); and oxidative stress after exposure to Fe. In general, however, exposure doses in these studies have been high and not relevant to ambient exposure. Metal concentrations found in ambient and source samples should serve as broad exposure guidelines for these experiments. Because in vitro studies often involve material extracted from ambient-air filters, methods of filter extraction (for example, water extraction vs. acid digestion) and analysis need to be standardized.

Acids. Health effects of exposure to acid aerosols have been extensively studied in the past but are specifically addressed in only a few studies in the current controlled-exposure research portfolio reported in the HEI database. However, ambient acidity is generally not analyzed in filters obtained from studies that use ambient-particle concentrators. Such data would be valuable for comparison with published epidemiological data on health effects. Furthermore, little information on specific types of acids is given in the project descriptions in the HEI database.

Soot, Organic Compounds, and Associated PAHs. The effects of PAHs are specifically addressed in only one in vivo study described in the HEI database. Characteristics of combustion-related organic chemicals will be explored further by the EPA-sponsored PM centers. Studies of diesel exhaust, diesel soot, and black carbon focus mainly on elemental carbon. More research with emphasis on organic speciation is needed to evaluate potential health effects. Because there are so many organic compounds in ambient PM, a subset specifically related to pollution sources needs to be defined for in vivo and in vitro studies.

Bioaerosols. One of the subjects clearly in need of evaluation is the role of biological agents in adverse health effects related to ambient PM exposure. Biological agents that might be involved in PM-induced response are diverse, and few are being evaluated. One class, endotoxins, has been identified as having the ability to induce or potentiate adverse health effects induced by PM, and some studies are addressing this issue. Another antigen being evaluated for its role in PM-related health effects is that derived from dust mites. Although exposure to dust mites largely occurs indoors, it may offer an informative example.

Sulfates and Nitrates. Sulfate has been examined in several studies, but nitrate and other nitrogen species have been largely ignored except as components of complex particle mixtures.

Peroxides and Other Free Radicals. One in vivo study addresses the role of peroxide in PM toxicity. This is a subject on which further research is needed.

Copollutants. There is an increasing effort in the research portfolio to evaluate the potential for interaction between PM and gaseous copollutants. The gases of potential concern include O3, NO2, SO2, CO, and irritant hydrocarbons. Data on precursor gases (especially HNO3, NH3, and SO2) are important to relate ambient secondary particles to health effects. This subject is discussed further with regard to research topic 7.

The committee is unable to identify any studies reported in the HEI database that address the issue of experimental PM surrogates that can mimic daily, seasonal, and regional particle characteristics. Specific in vivo and in vitro tests provide snapshots of adverse effects. To improve understanding of the role of PM characteristics in eliciting biological responses, measurements of PM components or analyses of aerosol filters should include extensive chemical speciation, consistent with the national PM2.5 chemical-speciation network, in which mass, elements (40 elements from sodium to uranium), ions (nitrate, sulfate, ammonium, and water-soluble sodium and potassium), and carbon (organic and elemental) are determined. Furthermore, there needs to be a reconciliation of ambient concentrations with the exposures used in controlled studies. Concentrations of specific chemical compounds in PM vary widely. For example, V and Ni are often found at less than 0.01 µg/m³ in ambient samples, although most other metals are typically found at 0.01-0.5 µg/m³. But crust-related materials (such as aluminum (Al), silicon (Si), calcium (Ca), Fe, and Mn) are often present at 0.5-10 µg/m³, and many toxicity studies use concentrations as high as about 100-5,000 µg/m³. The relevance of such high-exposure studies for materials present at much lower concentrations in ambient air must be considered in controlled-exposure studies. Furthermore, the ratio of specific chemical species in ambient air to those occurring in experimental atmospheres must be considered in experimental-study designs.
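The reconciliation suggested above can begin with a simple screening calculation: compare each component's experimental concentration with a typical ambient level and flag large ratios. The sketch below does this for a few metals; all concentrations are illustrative placeholders, not values from any particular study.

```python
# A minimal sketch, assuming illustrative concentrations: flag PM components
# whose experimental exposure exceeds typical ambient levels by a large factor.

ambient = {"V": 0.008, "Ni": 0.006, "Fe": 1.2, "Si": 2.5}        # µg/m³
experimental = {"V": 50.0, "Ni": 40.0, "Fe": 300.0, "Si": 500.0}  # µg/m³

for species in ambient:
    ratio = experimental[species] / ambient[species]
    flag = "  <-- far above ambient" if ratio > 100 else ""
    print(f"{species}: {ratio:,.0f}x ambient{flag}")
```

Even this crude check makes the committee's concern visible: under these assumed values, every experimental concentration exceeds its ambient counterpart by two or more orders of magnitude.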

Epidemiology

Epidemiologists have approached the problem of mixtures or, in this instance, the toxicity-determining characteristics of particles, by evaluating risk in relation to heterogeneity in exposure, whether over time or across geographical regions. For example, time-series studies identified particles in urban air as a key determinant of morbidity and mortality by evaluating risk for events on a day-by-day basis in relation to changing daily concentrations of particles and other pollutants. Statistical models, such as Poisson regression, are used to “separate” the effects of one pollutant from those of another. Comparisons have also been made across regions that have different pollution characteristics. Panel studies can also be used for this purpose.
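A minimal sketch of such a two-pollutant Poisson time-series regression follows, using simulated daily counts. Real analyses control for season, weather, and day of week with smooth terms; everything here (series lengths, coefficients, simulated pollutant levels) is illustrative only.

```python
# A minimal sketch, assuming simulated data: daily death counts regressed on
# PM2.5 and a correlated gaseous copollutant with Poisson regression, the
# approach used in the time-series studies described above.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 365
pm25 = rng.gamma(4.0, 4.0, n)                  # daily PM2.5, µg/m³ (illustrative)
ozone = 0.3 * pm25 + rng.gamma(5.0, 5.0, n)    # copollutant correlated with PM2.5

# Simulate counts in which only PM2.5 affects the daily death rate.
deaths = rng.poisson(np.exp(np.log(20.0) + 0.004 * pm25))

# Two-pollutant model: the fitted coefficients "separate" the associations.
X = sm.add_constant(np.column_stack([pm25, ozone]))
fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
print(fit.params)  # [intercept, PM2.5 coefficient, ozone coefficient]
```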

In applying the epidemiological approach in investigating particle characteristics, there is a need to have measurements of particles in general and of the specific characteristics of interest over the period of the study. Because monitoring for particle characteristics of specific interest has been limited, opportunities for testing hypotheses related to those characteristics have also been somewhat limited, and few studies that incorporate substantial monitoring of both particle concentration and other specific characteristics have been carried out. One example is afforded by the work carried out in Erfurt, Germany, where particle mass and number concentrations have been carefully tracked for a decade. The resulting data have been used to support several epidemiological studies of health effects in that community (Peters et al. 1997). EPA's supersite program will also offer a platform for carrying out observational studies on particle characteristics related to health risk.

Epidemiological data, if sufficiently abundant, can be used for testing alternative dose metrics. Statistical modeling approaches can be used to test which exposure metrics are most consistent with the data; with this general approach, the fit of the statistical model to the data is compared across exposure metrics, and the metric that best fits the data is given preference, assuming that its biological plausibility is at least equivalent to that of alternatives. For example, 2-day running averages might be compared with 24-hour averages, or peak values obtained with continuous monitoring might be contrasted with averages over longer periods. If a strong preference for one metric over alternatives is to be gained, the data requirements of this approach are substantial. Epidemiological studies relevant to this research topic need to be large and require data on the exposure metrics to be compared.
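This metric-comparison strategy can be sketched by fitting the same Poisson model with alternative exposure metrics and comparing goodness of fit, here by AIC. The simulated data are deliberately constructed so that a 2-day running average drives risk; all values are illustrative.

```python
# A minimal sketch, assuming simulated data: fit the same Poisson health model
# with two alternative exposure metrics and prefer the better-fitting one
# (lower AIC), provided it is at least as plausible biologically.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 365
pm_24h = rng.gamma(4.0, 4.0, n)                # 24-hour average PM2.5, µg/m³

pm_2day = (pm_24h + np.roll(pm_24h, 1)) / 2.0  # mean of today and yesterday
pm_2day[0] = pm_24h[0]                         # no prior day for the first observation

# Construct counts so that the 2-day metric is the one that drives risk.
deaths = rng.poisson(np.exp(np.log(20.0) + 0.004 * pm_2day))

for name, metric in [("24-h average", pm_24h), ("2-day running average", pm_2day)]:
    fit = sm.GLM(deaths, sm.add_constant(metric),
                 family=sm.families.Poisson()).fit()
    print(f"{name}: AIC = {fit.aic:.1f}")
```

The large sample sizes the text calls for matter here: with short series or weak effects, the AIC difference between highly correlated metrics is typically too small to support a firm preference.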

Table 3.4 shows that the number of potentially informative epidemiological studies is small. Studies in Erfurt, Germany, and in Atlanta capture mass concentrations, particle counts, and acidity; other studies are addressing ultrafine particles and risk of myocardial infarction in Augsburg, Germany, and in Atlanta. Several time-series studies include measurements of sulfate and acid aerosols, and a number of panel studies also incorporate measurements of a variety of particle characteristics. PM components are being considered in a small number of existing or planned studies. More data will probably be needed, particularly to obtain evidence related to the general issue of exposure metrics as applied to population risk. A number of current studies should provide information on the risks posed by ultrafine particles; these risks are the focus of one of the PM centers. Panel studies conducted by EPA will also contribute useful information.

TABLE 3.4 Number of Epidemiological Studies Relating Health Outcomes to Target Pollutants

There is considerable effort in evaluating physicochemical properties of PM in relation to biological effects. However, it has generally been concerned with only a few chemical characteristics; the largest body of work involves metals. Other potentially important PM characteristics, as illustrated in Table 3.4, have received less attention. Current work is beginning to address the issue of exposure or dose metrics other than mass concentration, although most studies continue to evaluate health effects in terms of total mass concentration during exposure. The relevance of the high doses used in many controlled-exposure studies to the much lower ambient exposures to some PM components must be considered more adequately in study design than it is now.

In its second report, the committee noted that although most of the research activities recommended in its first report were being addressed or planned by EPA or other organizations, studies in one cross-cutting research topic of critical importance did not yet appear to be adequately under way or planned: studies of the effects of long-term exposure to PM and other major air pollutants. The committee recommended that efforts be undertaken to conduct epidemiological studies of the effects of long-term exposures to particle constituents, including ultrafine particles. There does not yet appear to be a systematic, sustained plan for implementing studies of human chronic exposure, including examination of ultrafine particles.

This research topic is a key scientific question in the understanding of PM and health: Are effects of PM nonspecific—that is, determined only by the mass dose delivered to target sites—or do they depend on the specific physical and/or chemical characteristics of the particles? Data relevant to this question would be informative as to cardiopulmonary and/or systemic effects and therefore would guide mechanistic research. Thus, the scientific value of this research topic remains high. Identification of characteristics that produce adverse responses in controlled studies will allow comparison with PM properties obtained from epidemiological evaluations and will thus provide important confirmation of the role of specific properties in adverse health outcomes. There should be coordination between toxicological and epidemiological studies, including use of a consistent terminology for such PM characteristics as specific size fractions, so that study comparisons are possible not only between the two disciplines, but also among different controlled-exposure studies.

Integration across exposure assessment, toxicology, and epidemiology will be critical for obtaining a comprehensive body of evidence on this research topic that can guide decisionmakers from health effects back to responsible emission sources. Epidemiological studies need to include sufficient exposure assessment to guide toxicity studies of PM characteristics. Opportunities should be sought to apply hybrid research models that combine toxicological and epidemiological research.

Evidence on the particle characteristics that determine risk could have a profound influence on decisionmaking. At present, an approach of regulating particle mass in general is followed, in the recognition that particles vary substantially in size, makeup, and chemical properties. There are multiple sources of PM, and decisionmakers need guidance on whether some sources are producing more hazardous particles or whether all sources produce particles of equivalent toxicity.

Epidemiological research alone will not provide sufficiently certain evidence on this research topic; joint toxicological and epidemiological study is required. However, epidemiological data will be critical for decisionmakers, in that such data will confirm laboratory-based findings and hypotheses.

This is one of the most challenging research topics in the committee's research portfolio. In the laboratory setting, characteristics of particles can be controlled through experimental design, so particles that have specific characteristics can be assessed in carefully controlled studies. In the population setting, in contrast, participants in epidemiological studies inhale PM that has multiple sources and that changes in characteristics as participants move from location to location over the day, and possibly even in one location at different times. Data on substantial numbers of persons will be needed to test hypotheses related to particle characteristics. Nonetheless, epidemiological studies can be carried out for this purpose; one of the most effective approaches is likely to be the panel study, with specific, tailored monitoring for particle characteristics of interest. Such studies are feasible, as shown, for example, by the studies in Erfurt (Peters et al. 1997).
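A minimal sketch of how such a panel study might be analyzed follows: repeated daily measurements are clustered within participants, and a generalized estimating equation (GEE) accounts for the within-subject correlation. The data, effect sizes, and variable names are fabricated for illustration, and the statsmodels GEE interface is assumed; analyses of real panel data, such as those from Erfurt, are considerably more elaborate.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n_subjects, n_days = 50, 60
    records = []
    for sid in range(n_subjects):
        frailty = rng.normal(0.0, 0.3)          # subject-level susceptibility
        pm = rng.gamma(4.0, 10.0, n_days)       # daily personal PM, ug/m3
        symptom = 1.0 + 0.005 * pm + frailty + rng.normal(0.0, 0.5, n_days)
        for day in range(n_days):
            records.append((sid, pm[day], symptom[day]))
    df = pd.DataFrame(records, columns=["subject", "pm", "symptom"])

    # GEE with an exchangeable working correlation handles the repeated
    # measurements on each participant.
    X = sm.add_constant(df["pm"])
    gee = sm.GEE(df["symptom"], X, groups=df["subject"],
                 cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(gee.summary())  # slope: symptom change per ug/m3 of personal PM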

RESEARCH TOPIC 6. DOSIMETRY: DEPOSITION AND FATE OF PARTICLES IN THE RESPIRATORY TRACT

What are the deposition patterns and fate of particles in the respiratory tract of individuals belonging to presumed susceptible subpopulations?

The committee's recommended research portfolio (NRC 1998) outlined research needed to improve understanding of the deposition of particles in the respiratory tract, their translocation, and their clearance. The recommendations encompassed the development of new data and predictive models and the validation of the models for respiratory-tract structure; respiratory variables; total, regional, and local deposition; and particle clearance. Also included were the microdosimetry of particles and particle-derived hazardous chemical species and metabolites in intrapulmonary and extrapulmonary tissues.

Information on dosimetry is important for decisionmaking because it is critical to an understanding of the exposure-dose-response relationship that is key to setting the NAAQS. It is also important for understanding how exposure-dose-response relationships differ between normal and especially susceptible subpopulations, if the standard is to be adjusted to protect sensitive people. Knowledge of interspecies differences is important for extrapolating results from animals to humans.

The committee's recommendations focused on dosimetry in people potentially more susceptible to particles because of respiratory abnormalities or age (children and the elderly). A large portion of the population is in one or more of the categories of concern. Most people spend at least one-fourth of their lives in stages during which lungs are developing or senescent. In 1997, an estimated 44.3 million adults were former smokers and 48 million were current smokers (ALA 2000a); many smokers develop some degree of airway abnormality. Asthma afflicts over 17 million Americans, including 5 million children whose lungs are still developing (ALA 2000b). COPD afflicts about 16.4 million people (ALA 2000c). All respiratory diseases together kill one of seven Americans (ALA 2000d). The focus of past dosimetry research—almost entirely on normal young adult humans and animals—leaves us with little ability to estimate exposure-dose-response relationships in the above subpopulations.

In its second report (NRC 1999), the committee confirmed its initial recommendations, added a recommendation for research to bolster interspecies dosimetry extrapolation models, and re-emphasized the need for dosimetric research in animals to focus on models of human susceptibility factors.

Several sources of information were examined to assess current research and recent research progress on the dosimetry of PM. The review of current research centered on the HEI-EPA database on PM research. The database was examined as of August 2000 for research projects and programs whose abstracts included dosimetric research. Numerous additional past or current projects were evident from published reports, but because of uncertainty as to whether those projects were continuing, only the projects listed in the HEI database were included in Table 3.5. In all, 22 project descriptions were identified as apparently responsive to the dosimetry research needs.

New information since the 1996 criteria document was assessed by examining published papers and abstracts from meetings. A search of the recent published literature was conducted by using numerous key words pertaining to the research recommendations. The first external review draft of the new criteria document for PM (EPA 1999) was examined for its portrayal of new information since the last criteria document, published in 1996. References added to the revised dosimetry chapter as of September 2000 were also reviewed. Published abstracts from 1999 and 2000 meetings of the American Thoracic Society (ATS 1999, 2000), HEI (HEI 1999, 2000), and the Society of Toxicology (SOT 1999, 2000) were examined for relevant research, as were the abstracts from the 1999 meeting of the International Society for Aerosols in Medicine (ISAM 1999). The abstracts and papers from the Third Colloquium on Particulate Air Pollution and Human Health in June 1999 (Phalen and Bell 1999) and from the “PM2000” conference in January 2000 (AWMA 2000) were also examined for relevant completed research. The evaluation of reports was limited to review of abstracts. Published papers were not reviewed in detail, and authors were not queried.

TABLE 3.5 Summary of Dosimetry Projects and Reports

In all, 62 papers and 59 presentation abstracts were identified as potentially relevant to the dosimetry research needs as set forth by the committee. On review of abstracts, some proved to fall outside the scope of the recommended research portfolio, and many more related to the recommendations only indirectly. A total of 96 reports were considered relevant to the dosimetry research needs. Although this review undoubtedly missed some potentially relevant reports, it was considered sufficient to provide a reasonable evaluation of the extent to which the recommendations are being addressed. The results of the review are summarized below, by categories according to the committee's research recommendations (in italics). A numerical summary of the projects and reports is presented in Table 3.5.

Conduct research on deposition of particles in the respiratory tracts of individuals having respiratory abnormalities presumed to increase susceptibility to particles, and on the differences in deposition between these susceptible subpopulations and normals.

Obtain quantitative data on lung morphology and respiration for individuals of different ages and having respiratory abnormalities.

Research using advanced imaging and reconstruction techniques is producing new information on the effects of age, sex, and several types of abnormalities on airway dimensions. This information can serve as the foundation of mathematical models of deposition in abnormal airways. Some researchers are using stereolithography to construct physical models of airways from stereo images and computer-controlled etching of solid media. Other researchers are using magnetic resonance imaging to create airway images and develop digital data from which structures can be modeled or physical replicas can be machined. These techniques show promise for obtaining new morphological data useful for modeling deposition in a broad range of airway abnormalities. It is likely that, in some cases, these approaches will allow acquisition of data for more varied subjects and at a greater rate than is practical with traditional postmortem airway casting.

A modest amount of work is continuing with the more traditional methods of evaluating solid casts made from cadaver lungs and airways and measurements of airway dimensions with light microscopy of lung sections.

Information on the effect of age and respiratory abnormalities on breathing patterns and dosimetry in humans has been expanded substantially in the last 2 years. The EPA intramural program is the strongest contributor in this field. Laboratories working in this field are addressing the variables of age, sex, asthma, COPD, and cystic fibrosis. Inclusion of a broader range of susceptibility factors and particle types is needed. For example, there is little emphasis on people who have respiratory infections or edema related to cardiopulmonary failure. Most studies have measured only total particle uptake; information on regional and local dosimetry is also needed.

Determine the effects on deposition of particle size, hygroscopicity, and respiratory variables in individuals with respiratory abnormalities.

New information has been obtained on the influence of sex on regional (pulmonary vs. tracheobronchial and extrathoracic) fractional deposition and on differences between children and adults, and these data are being extended by current projects. Work on the effects of respiratory abnormalities on regional or local deposition has been limited largely to modeling or work with airway replicas. There has been little validation of the models with measurements of living subjects. An important advance has been the finding that total fractional deposition is greater in people who have asthma and COPD and in smokers than in people who have normal lungs. Total fractional deposition has been found to be similar in normal elderly people and young adults. More emphasis is needed on regional and local deposition in lungs and airways of susceptible subjects.

The influences of particle size and hygroscopicity on deposition have been addressed by some studies, but only a small portion of this work has included subjects or airway replicas that have abnormalities or different ages. There appears to be little emphasis on the influence of particle and respiratory variables on deposition in susceptible people or on the development of predictive models that incorporate these variables. In addition, only a few particle types and only a few of the many common combinations of ambient particle sizes and compositions have been studied.

As in the past, there continues to be only modest effort aimed at identifying the type and location of particles retained in lungs at autopsy. Although the locations are sometimes characterized as reflecting sites of particle deposition, the results typically reflect sites of retention of only the most biopersistent classes of deposited particles and might not reflect accurately the sites of deposition or the dose of the full spectrum of inhaled particles. When coupled with evaluations of accompanying tissue changes, this approach provides useful information on the relationship between long-term particle retention and disease.

Develop mathematical models for predicting particle deposition in susceptible individuals and validate the models by measurements in individuals having those conditions.

Several recently completed or current efforts have led, and will continue to lead, to the development and refinement of models for predicting deposition in abnormal lungs. Most efforts have focused on the effect of flow limitation in conducting airways and on the heterogeneity of local particle deposition.

Very few efforts have included validation of models by measurements in living subjects. Only two of the reports in this category involved validation experiments—one for deposition in asthmatics and one for the effect of particle size on deposition in rats. More emphasis is needed on model validation and on modeling a greater range of susceptibility factors.

Develop information on interspecies differences and similarities in the deposition of ultrafine particles in abnormal vs. normal respiratory tracts.

Although this recommendation focused on ultrafine particles, there is a dearth of information on deposition of particles of any size in animals that have respiratory abnormalities. As noted in the toxicology sections that follow, continued effort is needed to develop, refine, and validate animal models of human respiratory abnormalities. Progress has been made, but it has been accompanied by little effort to examine particle dosimetry in the models. Although a few laboratories are attempting to develop and refine mathematical models for interspecies adjustments in particle deposition, there is still little attempt to validate the models by comparing deposition in animals and humans directly, and only one group is generating comparative data on the deposition of ultrafine particles.

Several projects have developed models to predict comparative deposition in normal rats and humans, and most can be adapted for ultrafine particles. Other animal species have been largely ignored. The committee's first report recommended increased development and use of animal models of human susceptibility factors, as described in other sections. Because differences in deposited dose can contribute substantially to differences in the models' response, there is a need for more work on particle deposition in animal models of respiratory abnormalities.
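To illustrate why deposited dose, rather than air concentration alone, must drive interspecies comparisons, the following sketch computes dose normalized to alveolar surface area for a rat and a human from the standard relationship dose = concentration x minute ventilation x duration x deposition fraction. Every parameter value is a round-number approximation inserted for illustration only; none comes from the studies reviewed here.

    # Deposited dose per unit alveolar surface area, rat vs. human.
    # All parameter values are illustrative approximations.

    def surface_dose(conc_ugm3, vent_l_min, hours, dep_fraction, surface_m2):
        """Return deposited dose in ug per m2 of alveolar surface."""
        inhaled_m3 = vent_l_min * 60.0 * hours / 1000.0   # liters to m3
        return conc_ugm3 * inhaled_m3 * dep_fraction / surface_m2

    conc = 150.0  # ug/m3, a hypothetical chamber concentration

    # Arguments: ventilation (L/min), duration (h), deposition fraction,
    # alveolar surface area (m2).
    rat = surface_dose(conc, 0.2, 6, 0.10, 0.4)
    human = surface_dose(conc, 7.5, 6, 0.15, 100.0)

    print(f"rat:   {rat:.2f} ug/m2")
    print(f"human: {human:.2f} ug/m2")
    # Equal air concentrations yield unequal tissue doses, which is why
    # extrapolation models must adjust for ventilation, deposition
    # fraction, and lung surface area.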

Translocation, Clearance, and Bioavailability

Conduct research on the translocation and clearance of particles and the bioavailability of particle-borne compounds in the respiratory tracts of individuals having respiratory abnormalities presumed to confer increased susceptibility. Determine differences in the disposition of particles between these susceptible subpopulations and normals.

New information is beginning to accumulate that shows that respiratory abnormalities can have variable effects on short-term clearance of inhaled particles deposited on conducting airways. As is the case for deposition, information on clearance is being developed in both the pharmaceutical and environmental fields. There are data on short-term airway clearance in adult humans who have asthma, chronic bronchitis, and COPD, including comparisons with normal subjects.

Although the available information is still sketchy, it reveals both the potential importance and the complexity of the issue. For example, Svartengren et al. (1996) did not find that clearance from small ciliated airways of unprovoked asthmatics differed from that of normal people, but later (Svartengren et al. 1999) found that more particles were retained in airways of asthmatics than of normal subjects when the allergic asthmatics were challenged with allergen before deposition of the particles. There is little information on the influence of respiratory abnormalities on longer-term clearance from the pulmonary region and little information on age-related differences. Some data suggest that there is little influence of age or sex on particle clearance in normal humans.

Several recent studies have demonstrated the importance of the bioavailability (solubility) of particleborne metals in eliciting adverse responses. A modest amount of work is being done on the bioavailability of particleborne organic compounds. Little if any effort is being expended to determine differences in bioavailability or the importance of bioavailability between normal and abnormal respiratory tracts.

It appears that differences in particle clearance are not yet being incorporated into models for predicting differences between normal and susceptible people in the dosimetry of particles or particle-associated compounds.

Determine the disposition of ultrafine particles after deposition in the respiratory tract, and whether respiratory abnormalities alter the disposition pathways or rates.

Despite the current interest in potential differences between the disposition of fine and ultrafine particles after deposition in the respiratory tract, little progress has been made, and little work appears to be under way. The technical difficulty of measuring small amounts of ultrafine particles in various intrapulmonary and extrapulmonary locations continues to be a deterrent to progress. The recent development of 13C-labeled ultrafine carbon particles is likely to advance this field, and tracer technologies need to be developed and applied for use with other types of ultrafines.

Sufficient work has been done to confirm that solid ultrafine particles can penetrate into the circulatory system and reach other organs, but quantitative data are still lacking. There has been no apparent effort to study the dosimetry of nonsolid ultrafine condensates. Moreover, there has been no work on the disposition of ultrafines in either humans or animals that have respiratory abnormalities. As investigative techniques are developed, it is important that they be applied to both normal and abnormal subjects.

Develop information on interspecies differences and similarities in the translocation, bioavailability, and clearance of particles in abnormal vs. normal respiratory tracts.

Little research appears to have been completed recently or to be under way addressing interspecies differences in particle clearance, translocation, or bioavailability in either normal or abnormal respiratory tracts. Recent work demonstrated marked differences in the sites of retention of fine particles in lungs of normal rats and nonhuman primates, but at lung loadings much higher than would result from environmental exposures. There are some new data and reviews on particle clearance in different species, but the committee is unable to identify any direct intercomparisons among species or comparisons in the presence of respiratory abnormalities.

Adequacy of Current Research in Addressing Information Needs

Although the volume of dosimetric work shown in Table 3.5 reflects a level of effort commensurate with the committee's recommendations, there is not yet an adequate focus on the specific information needs described by the committee. Only a portion of the work has addressed characteristics other than age and sex; there has been insufficient work on the impact of respiratory abnormalities. The committee called for development and validation of mathematical models for predicting deposition and clearance in abnormal lungs. There has been only modest advancement in the modeling of dosimetry in susceptible people and little effort to validate the models. Efforts to improve interspecies extrapolation models continue in a few laboratories, but, again, there has been little effort to validate the models. There has been little effort to assess dosimetry of any type in animal models of human respiratory abnormalities. Many potentially important aspects of dosimetry in respiratory abnormalities—such as microdosimetry in tissues and cells, bioavailability of particleborne compounds, translocation and clearance, and handling of diverse particle types—have been addressed little or not at all. Although the level of effort might appear adequate, the degree of focus is not yet adequate.

Among the many programs, studies, and recent reports contributing new information on the dosimetry of particles, only a portion are focused specifically on dosimetric issues. Much of the information was produced as a byproduct of research focused on health responses to inhaled particles, rather than on particle dosimetry. That is appropriate, but effort is needed to make investigators broadly aware of the need for dosimetric information to encourage them to develop and publish the data as a specific, albeit opportunistic, product of their research. In a related vein, our review demonstrated that relevant information is being produced as a byproduct of pharmaceutical research. That suggests the importance of looking beyond the traditional environmental research community when searching for and summarizing information relevant to environmental dosimetric issues.

The information on particle deposition in potentially susceptible subgroups has grown since the 1996 PM criteria document; results have demonstrated important differences in total fractional deposition in some disease states. The findings support the importance of the committee's recommendations. Work is needed on a wider range of susceptibility conditions, and more emphasis is needed on regional and local deposition (deposition “hot spots”) in susceptible people.

Much less information has been, or is apparently being, produced on differences in the clearance and translocation of deposited particles and in bioavailability of and cellular response to particleborne compounds due to age or respiratory abnormalities. Although many adverse responses might be most strongly moderated by deposition, some might be more strongly influenced by the amount and location of retained dose. Translocation and bioavailability issues remain important for an understanding of response mechanisms.

The research recommendations noted ultrafine particles as a specific class on which more dosimetric information is needed. The effort focused on ultrafines is modest and addresses a narrow range of ultrafine-particle types. Like coarse and fine particles, ultrafines include diverse physicochemical classes that can be expected to behave differently when deposited.

There is not yet an adequate effort to determine the dosimetry of particles of any type in animals that are used to study characteristics of human susceptibility. If the animal models of susceptibility are to be useful, differences in particle deposition and disposition, as well as differences in response, must be considered. Not only might differences in dosimetry help to explain differences in response on a total or regional dose basis, but the models might also be useful for predicting the influences of abnormalities on local deposition in susceptible people on whom such data might never be obtained directly. Research sponsors need to explicitly encourage investigators to evaluate dosimetry as an integral component of the characterization of the responses of animal models.

The scientific value of this research is generally high. Nearly all the work noted above builds on previous knowledge in a logical way that will lead to a more integrated understanding of PM-related health effects. Most of the dosimetric data collected in response to PM research needs will also have high value for other purposes, such as understanding and predicting the dosimetry of inhaled pharmaceuticals in normal vs. abnormal respiratory tracts and in animal models vs. humans. Findings pointing toward differences in respiratory control, anatomy, and defenses are raising issues likely to lead to more studies that will provide a more complete understanding of respiratory-tract structure and function.

Although insufficient effort is being expended to evaluate dosimetry in animal models of respiratory abnormalities, the resulting data will have high scientific value for determining the extent to which differences in health responses between normal and susceptible people are due to differences in dose and differences in responsiveness. This information is important for the selection and interpretation of the animal models.

Considering the previous lack of data on dosimetry in people who have respiratory abnormalities or animal models of these conditions, almost any such data would have scientific value. As results accumulate, it will be important to focus on more-specific and more-detailed issues, for example, on local and regional deposition rather than total deposition.

The results of this research will have a direct bearing on the setting of air-quality standards in two principal ways: providing the dose component of dose-response information required to set the standard, and providing information on the dose component of susceptibility as input into the adjustment of the standard for protection of sensitive subpopulations.

Knowledge of differences between the deposited doses received by normal people and those who have respiratory abnormalities will play a direct role in estimating safe and hazardous PM exposures. In this role, dosimetry is an equal partner in the exposure-dose-response paradigm that is integral to risk assessment. In addition, knowledge of dosimetry in animal models of susceptibility will play an indirect role in decisionmaking by influencing the selection of appropriate models, the interpretation of results obtained with the models, and the understanding of the role of dose variables in the susceptibility of humans.

Lack of feasibility is not impeding the progress of dosimetric research. As noted in the committee's first report (NRC 1998), there are few technical limitations on obtaining the needed data. An exception might be current technical limitations on detecting ultrafine particles in tissues and fluids.

The research gaps identified above result from inadequate coverage of topics, not from inadequate research tools or personnel. It remains true, as stated in the first report, that modest funding directed toward key information gaps could resolve most dosimetric issues soon. It is clear that not all important topics are being covered, although most of the time originally projected for this work has been spent. Without greater attention to targeting particular gaps, key issues might not be adequately resolved.

RESEARCH TOPIC 7. COMBINED EFFECTS OF PARTICULATE MATTER AND GASEOUS POLLUTANTS

How can the effects of particulate matter be disentangled from the effects of other pollutants? How can the effects of long-term exposure to particulate matter and other pollutants be better understood?

PM exists in outdoor air in a pollutant mixture that also contains gases. Thus, biological effects attributed to PM alone in an observational study might also include those of other pollutants that arise independently or through interactions with PM. There might be chemical interactions between gases and PM, or gases can be adsorbed onto particles and thus carried into the lung. Interactions can also occur in the process of deposition on lung airway surfaces and later through lung injury. Research relevant to this topic includes toxicological and clinical studies that examine the effects of gaseous copollutants on the health impacts of PM.

The committee's first two reports (NRC 1998, 1999) indicated that it is important to consider the effects of combined exposures to particles and copollutants when characterizing health risks associated with PM exposure. This research topic remains of critical importance because epidemiological studies might not be able to characterize fully the specific contributions of PM and gases in causing health outcomes. Thus, mechanistic studies are needed to determine the relative roles that various components of ambient pollution play in observed health effects of exposure to atmospheric mixtures.

The HEI database was examined to determine the research status of this topic. A number of current studies involve pre-exposure to high levels of ambient gases (such as ozone and sulfur dioxide) to induce pulmonary pathology in animals so that effects of PM in a compromised host model can be assessed. However, those types of studies are not considered to fit this research theme. A number of studies are using concentrated ambient PM (CAP), and such exposure atmospheres might include ambient gases unless they are specifically scrubbed out before entering the exposure system. However, it was often not possible from a study description in the database to determine whether the effects of these gases on response to PM were being examined. One group of researchers is exposing animals specifically to highly complex emission atmospheres to determine the relative contributions of PM and gaseous copollutants to various health effects.

Studies of the interactions of gaseous copollutants with PM are being conducted in both animals and controlled human exposures. Fewer studies are examining such effects in vitro. Endpoints span the array of effects observed in populations but focus largely on cardiovascular effects, inflammatory response, and mediators. Some animal studies and some human studies also involve the use of compromised hosts to compare effects with those occurring in normal animals and humans. As with all animal toxicity studies, it is important to be able to relate responses to human responses. That is specifically addressed as a goal in only one study program being performed at one of the EPA-sponsored PM centers.

One of the gaseous copollutants of major concern with regard to interaction with PM is ozone, and this copollutant is the subject of the greatest research effort. That is evident in Table 3.6, which shows the list of gaseous pollutants being studied and the number of research projects addressing them. However, some attention is also being given to other gases of potential concern, such as sulfur dioxide and nitrogen dioxide. Other suggested modulators of PM-induced effects are receiving little attention. The role of ambient gases should receive more attention in studies with CAP because these types of exposures are the most realistic and do not require the generation of “surrogate” atmospheres. Opportunities should be sought to augment CAP with concentrated gaseous pollutants or to scrub out specific residual gases.

TABLE 3.6 Gaseous-Copollutant Studies

PM in outdoor air is one component of a complex mixture that varies over time and also geographically on both small and large spatial scales. PM is one of the six pollutants in outdoor air regulated as “criteria pollutants.” Driven in part by the needs of evidence-based regulation, epidemiologists and other researchers have attempted to separate the effects of PM from those of other pollutants, even though they are often components of the same mixtures and their concentrations are often correlated, reflecting their shared sources. The effects of the individual components of the mixture can be assessed in time-series approaches with multivariate statistical methods or in designs that incorporate contrasts in exposures to mixtures by drawing participants from locations that have different pollutant mixtures (for example, with higher and lower ozone concentrations).

In addressing the “combined effects” of PM and other pollutants, one of the scientific questions of interest is whether the risks to health associated with PM exposure vary with the concentrations of other pollutants. For example, are risks posed by PM to children who have asthma higher in communities that typically have higher background concentrations of ozone than in other communities? Epidemiologists refer to this phenomenon as “effect modification,” and its presence is generally assessed with statistical methods that test for interaction in multivariable models. Effect modification that is positive, or synergistic, results in greater risks than would be predicted on the basis of estimates of risk posed by PM itself. Studies of effect modification need substantial sample sizes if statistical power is to be sufficient.
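A minimal sketch of such an interaction test follows: a product term between PM and ozone is added to a Poisson model, and an interaction coefficient that differs from zero indicates effect modification. The data are simulated with a small built-in synergy so that the test has something to detect; all names and values are illustrative, and the statsmodels package is assumed.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 2000
    pm = rng.gamma(4.0, 10.0, n)       # PM concentration, ug/m3
    ozone = rng.gamma(3.0, 15.0, n)    # ozone concentration, ppb

    # Simulated counts with a weak synergistic PM-ozone term.
    log_rate = np.log(25) + 0.0006 * pm + 0.0002 * ozone + 1e-5 * pm * ozone
    counts = rng.poisson(np.exp(log_rate))

    # The product term tests effect modification: if its coefficient is
    # nonzero, the PM effect depends on the ozone concentration.
    X = sm.add_constant(np.column_stack([pm, ozone, pm * ozone]))
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    print(fit.params[-1], fit.pvalues[-1])  # interaction estimate and Wald p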

Studies on combined effects need to include information on PM and the copollutants of interest. Epidemiological studies of diverse design are potentially relevant to this topic. As for studies of mixtures generally, a precise characterization of combined effects requires a substantial body of data.

Examination of the HEI research inventory shows that many studies in progress should provide relevant information on modification of PM risks by other pollutants. The range of PM indicators across the studies is broad, but most studies include monitoring results for the principal gaseous pollutants of concern. Sample sizes range from too small to be informative to large enough to provide insights into combined effects.

Although attention to the issue of effects of gaseous copollutants on the toxicity of PM is increasing, the current controlled-exposure research portfolio aimed at assessing the role of gaseous pollutants in health effects of PM is not adequate. The use of CAP can provide valuable information on effects of exposure to complex mixtures. Furthermore, the research effort in evaluating the role of gases in influencing particle effects seems to be lagging behind the effort devoted to studying specific components of PM in the absence of gaseous copollutants. The epidemiological research portfolio on this topic is relatively substantial, as most epidemiological studies of PM include data on gaseous copollutants. There does not yet appear to be a systematic, sustained plan for implementing studies of chronic exposure.

The criteria pollutants have long been addressed as though their effects on health were independent, with recognition that they exist as components of complex mixtures in the air. Rather than seeking to characterize mixture toxicity overall, researchers have sought to determine, experimentally or in observational data, whether the presence of one pollutant changes the effect of another (a phenomenon referred to in epidemiology as “effect modification”). Findings on effect modification inform estimates of risk posed by mixtures and suggest hypotheses for followup laboratory investigation.

Present regulations are based on the tenet that effects of individual pollutants are independent and that public-health goals can be met by keeping individual pollutants at or below mandated concentrations. Epidemiological demonstration of effect modification for PM effects by other pollutants, such as ozone, would indicate that the regulatory structure does not fully reflect the actual risks to the population.

Epidemiological and controlled-exposure studies of effect modification or interaction can be carried out; in fact, most contemporary studies include the requisite data on other pollutants. Thus, studies could be readily carried out now to explore whether other prevalent pollutants affect risks posed by PM. Methods for experiments involving mixed atmospheres are available. Analytical information derived from evaluation of atmospheres in epidemiological studies can help to determine specific components of mixed atmospheres to be used in controlled-exposure protocols.

RESEARCH TOPIC 8. SUSCEPTIBLE SUBPOPULATIONS

What subpopulations are at increased risk of adverse health outcomes from particulate matter?

A number of subgroups within the population at large are postulated to be susceptible to the effects of inhaled PM. They include people who have COPD, asthma, or coronary heart disease; the elderly; and infants. Also, fetuses are possibly susceptible. Those groups have long been assumed to be susceptible to the effects of air pollution, in general, and therefore assumed to be at risk from PM. Epidemiological data support that assumption, as does understanding of the compromised organ systems of people with chronic heart and lung diseases and of the physiologic and immunologic vulnerability of infants and the elderly. A number of epidemiological and controlled-exposure investigations are now directed at characterizing health effects of PM in those subpopulations. Other populations might also be at excess risk from PM, and the committee considers that this research topic includes both subpopulations already considered susceptible and others yet to be identified.

In susceptible subpopulations, there is likely to be a range of vulnerability reflecting the severity of underlying disease. For example, in persons with asthma, there is a broad distribution of level of lung function and of increased nonspecific airway responsiveness, a hallmark of the disease. The degree of susceptibility can also depend on the temporal exposure pattern. However, data to support such biologically based speculations are still notably lacking. For example, whether all children are equally at risk or only children who are exercising or who have specific predisposing factors, such as a history of atopy or asthma or other respiratory disease history, is unknown. In adults, the interplay among factors that determine susceptibility, such as the presence of both COPD and coronary heart disease, is not yet understood. Findings of both acute and chronic morbidity and mortality studies suggest that those with prior respiratory disease are more susceptible to acute changes in ambient PM concentrations.

Although from the early days of air-pollution research hypotheses have been proposed related to increased susceptibility of selected fractions of the population, much of that work has been directed at identifying acute morbid events during acute exposures. For example, research in London in the 1950s followed up on the observation that many of the excess deaths noted in the December 1952 fog were of persons who were already quite sick, many with heart or lung disease. Panels of people with chronic bronchitis were followed during the 1950s and 1960s with monitoring of pulmonary function and symptoms. Those studies followed a design now referred to as a panel study, which involves following a susceptible subpopulation with relatively detailed tracking of their status. This model is particularly useful for assessing acute effects of exposure and can provide evidence relevant from both the clinical and the public-health perspectives. More recently, work has been directed toward testing whether exposure to particles can contribute to initiation of disease, as well as exacerbating existing conditions. To date, the collective evidence indicates that there are susceptible subpopulations, particularly of people who have chronic heart or lung diseases.

Controlled-Exposure Studies

The committee identified 53 animal and human studies in the HEI database that specifically addressed the issue of subpopulations susceptible to PM-induced diseases ( Table 3.7 ). In several cases, a study identified more than one susceptible subpopulation; for these, each population group was entered into the table.

Almost all the studies concern diseases of the respiratory and cardiac systems; only one concerns increased susceptibility to cancer induction. Twelve studies concern age as a risk factor. The disease states of concern include pulmonary allergies, asthma, bronchitis, emphysema, COPD, and cardiac disease. Twenty-four of the studies involve human subjects, and 29 use animal models intended to mimic human disease.

The particulate atmospheres most frequently being used for toxicity studies are those with CAPs, carbon black, and residual-oil fly ash delivered via inhalation or intratracheal instillation. The duration of the exposures is variable but typically only hours or a few days; this contrasts with epidemiological studies that involve chronically exposed populations.

A strength of the studies is their focus on the major human diseases that have been identified by epidemiological studies as placing people at risk from exposure to PM. An additional strength is that epidemiological studies in which exposures cannot be controlled are complemented with controlled-exposure studies of humans and laboratory animals.

TABLE 3.7 Controlled-Exposure Studies on Effects of PM on Susceptible Subpopulations

There are difficulties in investigating susceptible populations. The effect of PM exposure is not large enough to be readily and precisely detected without carrying out fairly large studies. Identifying study participants can be difficult, particularly if emphasis is placed on the most susceptible persons. Frail elderly persons and persons with advanced heart and lung disease, for example, might be reluctant to participate if study protocols are demanding. In contrast, experimental studies involve very small populations and typically short observation periods. In laboratory-animal studies, investigators typically attempt to circumvent this issue of population size by increasing the level of exposure or dose. It is common for all the treated animals to manifest disease or some other response. However, a critical question is whether the disease states observed and the underlying mechanisms of pathogenesis with short-term high exposures (doses) studied over periods of days, and occasionally weeks, are relevant to assessing the risks posed by exposure over long periods, even at high ambient doses.

Beyond the extrapolation issue, the adequacy of the design of each toxicity study must be addressed. For example, investigators are typically studying relatively young animals, usually in the first fourth of their normal life span, whereas in humans a substantial portion of the disease of concern occurs in the last fourth of the normal life span. Many of the human diseases of concern are chronic, with periods of acute exacerbation. It is crucial that additional effort be directed at evaluating the animal models to assess the degree to which they mimic human disease.

Rationales for selection of the exposure atmospheres, exposure concentrations, and exposure durations were not always readily apparent from the project descriptions. There is also a special need to articulate the relevance of using intratracheal instillation, which delivers a large dose of particles at once, in contrast with the chronic exposures of concern for human populations.

A general concern for the health effects of air pollution, including particles, on susceptible persons has permeated epidemiological research on air pollution. This concern has become increasingly focused as the body of evidence has expanded and led to hypothesis-driven studies of susceptible subpopulations. In addition, with the recognition that much of the morbidity and mortality associated with PM exposure appears to have been from cardiovascular diseases, the efforts to understand susceptibility have expanded greatly beyond considerations of chronic respiratory conditions, particularly asthma and COPD, to include persons with underlying heart disease.

About 70 funded studies using epidemiological databases were reviewed to identify those directed to understanding the impact of particulate pollution on susceptible subjects, patients, or populations. In general, the studies can be divided into those related to people who have an underlying chronic condition (such as asthma, COPD, or pre-existing coronary arterial disease), those related to persons free of disease but considered to be at increased risk because of a relatively high pollution dose resulting from exertion or exercise, and those related to persons generally at risk for increased morbidity or mortality (such as the elderly). Across the studies, a wide variety of measures of exposure are used, and insights can be gained on some aspects of particle characteristics and toxicity. However, only a few of the strata of the matrix defined by subpopulation and particle characteristics are being addressed.

Taken together, the efforts under way indicate that a rigorous evaluation of risks posed by PM exposure of susceptible subpopulations with established diseases—such as asthma, COPD, coronary arterial disease, heart failure, and hypertension—can be expected. The evidence will be primarily in relation to PM mass as the exposure metric. The groups that have been or are being studied, as summarized in the examined HEI database, include subjects potentially at risk and patients. Few studies are identified specifically as targeted to ethnic minority populations.

Efforts are also under way to explore pathogenesis and intermediate markers of risk, including changes in blood concentrations of inflammatory markers and clotting factors; additional predictors of cardiac risk, including changes in heart-rate variability; and other risk factors for sudden death.

Several studies directed toward an understanding of mechanisms of putative cardiac effects in humans are being carried out in the EPA, at several of the EPA-sponsored PM centers, and through other funding agencies. These include panel studies and clinical studies of healthy persons and potentially high-risk persons exposed to ambient PM and to CAP. These studies are being conducted by multidisciplinary teams that include expertise in exposure assessment, epidemiology, and clinical toxicology. Investigating the role of PM in initiating disease is more challenging, and less progress can be expected in understanding how susceptibility plays a role in initiation of chronic diseases, simply because the susceptible groups have been less well defined.

In all the studies mentioned above, most of the efforts are directed to explaining acute effects of relatively short-term modeled or directly measured exposures to ambient particles. In a few instances, copollutants or other gases are also being considered. In only a very few cases are effects of chronic exposure being considered; in those cases, long-term exposure is being modeled from relatively recently measured exposures and historical extrapolations of known industrial or ambient particles. Better modeling of past exposure is needed to develop new efforts directed toward the understanding of chronic effects in potentially susceptible groups. Such data would also be useful in conjunction with studies of factors that determine the development of susceptibility.

There is increasing use of animal models and humans with chronic heart or lung disease in studies to evaluate effects of PM exposure. However, the animal studies need to mimic the human disease state of interest properly.

The hypothesis that particular groups in the population have increased susceptibility has long been advanced and is supported by substantial epidemiological evidence. In fact, general acceptance of the hypothesis has led to the focusing of effort in a large number of projects on the assessment of acute air-pollution effects on morbidity and mortality in selected groups of potentially susceptible persons. The results have been relatively consistent in demonstrating modest effects of particles as measured by mass. The same susceptible subpopulations will need to be reinvestigated, and previously unrecognized subpopulations will need to be considered, as hypotheses concerning toxicity-determining characteristics of particles are increasingly refined.

Data on susceptible populations are critical to decisionmakers because the Clean Air Act requires that protection against risks posed by air pollution be extended to almost all persons. Standards are, in fact, intended to provide protection with “an adequate margin of safety.” Sufficient studies are under way to identify and reduce uncertainty related to susceptible groups with respect to acute effects of particle mass. However, for each individual study and for the studies as a group, it is important to anticipate how the results will influence decisions in establishing a NAAQS for PM—that is, will the information obtained provide an improved scientific basis for a decision on appropriate standards for ambient PM? It appears that few of the investigators have adequately considered this matter in a critical manner, especially for the controlled-exposure studies.

The only practical way to increase the number of investigations with regard to either acute or chronic exposures is to undertake studies in conjunction with current supersite or speciation-site data collections or with the use of additional exposure-data sources in the future. There is continuing development of animal models that mimic various aspects of potentially susceptible human conditions. Thus, this field continues to evolve.

RESEARCH TOPIC 9. MECHANISMS OF INJURY

What are the underlying mechanisms (local pulmonary and systemic) that can explain the epidemiological findings of mortality/morbidity associated with exposure to ambient particulate matter?

Epidemiological studies have associated various health outcomes with exposure to ambient PM. Controlled-exposure studies are attempting to provide plausible underlying biological mechanisms for these health effects. The results have indicated a number of potential biological responses that could underlie pulmonary or systemic effects of PM exposures, many of which have been related to specific particle characteristics, such as chemical composition or particle size. The major potential biological responses that have been suggested as underlying the reported human health effects of ambient PM exposures include oxidative stress, pulmonary inflammation, airway hyperreactivity, and alterations in the cardiovascular system, such as changes in blood viscosity, rate and pattern of heartbeat, and heart-rate variability. The issue of mechanistic plausibility has been addressed with animal models, in vitro systems, and clinical models. Of the studies described in the HEI database, about 50% involve animal toxicology, and the other 50% are roughly evenly divided between in vitro and clinical studies. The relative apportionment of research effort for specific mechanisms of PM-induced responses and the allocation of these efforts among the three research approaches are indicated in Table 3.8.

Research Topic 9a. Animal Models

What are the appropriate animal models to use in studies of particulate matter toxicity?

As previously noted, epidemiological studies suggest that exposure to low concentrations of PM is associated with morbidity or mortality in susceptible people and not in normal healthy people.

TABLE 3.8 Mechanistic Studies

Experimental data show that healthy animals exposed to similar low concentrations of PM also show little to no effect. Animal models are needed to mimic susceptible human subpopulations because, without supporting data from animal studies, it is difficult to identify individual toxic materials in ambient PM and the mechanisms by which they induce damage to human pulmonary and cardiovascular systems. The occurrence of some pathological conditions in an exposed population can establish the probability that some or all of the pollutants produce damage, but, in any reasonable time frame, it cannot always differentiate the effects, if any, of specific pollutants or the mechanisms of their action. That will ultimately require controlled exposures of animals to individual pollutants and relevant mixtures and then measurements of response. In the initial stages of investigating the toxicity of PM and copollutants, it was sufficient to determine a correlation between their presence in inspired air and disease. Now, however, animal models are clearly needed to establish causality, help to unravel cellular mechanisms, and help to elucidate specific PM components that produce responses.

In assessing progress toward the development of animal models, the committee found projects to be distinguished by their heterogeneity. Of the 47 relevant studies identified, most used young normal animals, which do not model susceptible subpopulations. Fewer studies used older animals as models to evaluate the effects of age, and others used animal models of disease, such as asthma and hypersensitivity, chronic lung diseases, and cardiac dysfunction. Normal or mutant animals were used in some studies.

There are a number of difficulties in developing animal models of human diseases. Deposition of particles in animal lungs differs in both rate and location from that in human lungs, and there is a need for detailed knowledge of the distribution of deposition in animal lungs so that it can be related to deposition in human lungs (see research topic 6). Advanced scaling and modeling of the lung airways in animals should be encouraged. The cellular mechanisms by which the pertinent lung and cardiovascular diseases are produced in humans, and by which particles exacerbate or initiate these conditions, are not understood, so it is difficult to produce analogous pathological conditions in animals. The lung contains more types of cells than most other organs; it thus provides the opportunity for numerous types of interactions between cells exposed to PM atmospheres and increases the complexity of particle-tissue interaction.

It has been possible to mimic some aspects of specific human diseases in animals. Therefore, it might be necessary to be satisfied with modeling and studying only part of a disease constellation at a time. For example, “asthma-like” allergic conditions have been modeled by sensitizing animals to various foreign proteins. That might produce marked contraction of airway smooth muscle on appropriate challenge but not involve other aspects of human asthma, such as inflammation and mucus gland hypertrophy.

It is encouraging that numerous animal models are being used to measure the effects of exposure to PM. However, a substantial number of studies exposed healthy normal animals to particles, and this is not necessarily a useful model of exposure of susceptible humans. Even though animal models of cardiac and lung disease are being used to investigate the effect of particles, relevance to the human situation must be considered. Research to develop models that more closely mimic the natural history of human diseases caused by air pollution should be emphasized. Models need to be well characterized and validated before use.

The use of animal models that mimic susceptible human populations is important for the study of effects of ambient or surrogate PM. However, all models must be validated for their relevance to the human condition. Validated models will provide important insights into the mechanisms of action of ambient PM and associated pollutants.

Studies that use validated animal models will assist in the evaluation of particle characteristics that underlie human health effects of exposure to ambient PM. They will provide input into the standards-setting process by contributing information needed to determine margins of safety for exposure.

Continued development and use of appropriate animal models are required. The necessary tools for such development are readily available.

Research Topic 9b. In Vitro Studies

What are the appropriate in vitro models to use in studies of particulate-matter toxicity?

In vitro studies are important in helping to determine underlying toxicological mechanisms. They remain a necessary complement to animal and clinical evaluations.

The HEI database and the proceedings of the PM 2000 meeting list 34 studies related to this research topic. However, three of the studies do not deal with in vitro methods, and four are not relevant to the PM issue but rather address occupational and fibrogenic particle exposure. Most in vitro studies with PM are still conducted without considering the important issue of relevant doses or, at a minimum, without incorporating dose-response assessment into the study design. Many studies also focus on only one particle type collected from different ambient sources without including any control particles; in general, this type of study design should be avoided.
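To make concrete what a dose-response design buys, the sketch below fits a four-parameter log-logistic (Hill) curve to hypothetical in vitro data. All values, units, and the choice of end point are invented for illustration and are not drawn from the studies discussed here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical in vitro data: particle dose (ug/mL) vs. a cytokine response
# (e.g., IL-8 release relative to untreated control). Illustrative values only.
dose = np.array([0.1, 1.0, 10.0, 50.0, 100.0, 200.0])
response = np.array([1.05, 1.3, 2.1, 3.8, 4.6, 4.9])

def hill(d, bottom, top, ec50, slope):
    """Four-parameter log-logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / d) ** slope)

params, _ = curve_fit(hill, dose, response, p0=[1.0, 5.0, 20.0, 1.0], maxfev=10_000)
bottom, top, ec50, slope = params
print(f"Estimated EC50: {ec50:.1f} ug/mL (dose producing half-maximal response)")
```

A design like this supports interpolation down toward realistic ambient doses, which a single high dose cannot.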

Several in vitro studies reported in the database are based on findings of animal studies that use very high doses of a specific particle type. Although state-of-the-art methods of cellular and molecular toxicology are applied, the lack of an adequate justification for doses, the lack of control particles, and insufficient discussion of these important issues make the interpretation of results difficult. The results, contrary to what investigators of those studies conclude, will not be directly applicable to an understanding of pathophysiological mechanisms of PM action, nor will they be useful for the validation of high-dose animal studies as models of human respiratory-tract responses to much lower doses. Conclusions that are based on high doses do not provide arguments for the biological plausibility of effects of ambient PM. At best, the studies could contribute mechanistic information on PM effects in occupationally exposed workers, whose lungs are generally exposed to a particulate compound at several milligrams per cubic meter.

On the positive side, several studies that are under way do use appropriate dose-response designs. Recognizing the need to use lower doses, these studies compare the toxicity of different particle types and responses in animal vs. human cells; this facilitates extrapolation of in vivo responses from animals to humans. Although high doses are also delivered, the studies are valuable with respect to a toxicological evaluation of potentially reactive components, but they will require follow-up studies with more realistic doses. Another well-designed study includes a comparison of responses in airway biopsy cells from normal and asthmatic subjects for an in vitro determination of relative sensitivities to ambient PM. One study in this category of comparative in vitro studies evaluated the response of human bronchial epithelial cells to PM collected before and after a steel mill closure; the goal was to identify the importance of differing PM composition—in this case related to transition metals—for inducing adverse health effects.

Several other studies use methods of in vitro priming—for example, with lipopolysaccharides—of specific respiratory-tract cells, including alveolar macrophages and epithelial cells, to compare oxidative-stress induction by PM in sensitized and normal cells. These studies are aimed at assessing mechanistic concepts of PM toxicity and help establish a good basis for designing further in vivo studies.

Two planned in vitro studies are designed to investigate age differences by using cells from young and old animals and applying a variety of doses down to very low ones. Plans of one group of investigators include delivery of particles in the airborne state to in vitro cell cultures so that the dosing will be similar to in vivo conditions. The importance of coculture of different cell types is recognized in one study in which an in vitro lung-slice technology is used to compare responses to a variety of PM from different sources and to surrogate control particles. One in vitro study is aimed at evaluating mutagenic effects of airborne PM and associated organic compounds, addressing long-term effects. However, administered doses and the use of a dose-response design are not indicated, and it is necessary to consider these issues in studies addressing potential long-term effects.

The current and planned in vitro studies are designed to investigate several components of PM by using a number of end points, such as changes in the levels of inflammatory cytokines and chemokines, release of oxidants, and oxidative stress responses. The issues of age-dependent responses and modulation of responses in cells from susceptible subjects are also being investigated. However, many current in vitro studies do not use or consider appropriate doses but instead use unrealistically high doses; a dose-response design is still the exception in these types of studies. Despite those shortcomings, which need to be rectified, comparative in vitro toxicity studies to establish concepts and elucidate mechanistic events of PM toxicity are valuable additions to the database.

Specific mechanistic hypotheses related mainly to PM-induced effects are being tested at several laboratories. Although in vitro models are used for investigating mechanisms of PM-induced toxicity, the relevance of identified mechanistic pathways is highly questionable when they are based on high doses, as is the case in most of the current studies. A major gap is a lack of testing of the validity of conclusions for specific mechanisms by using relevant low doses; this is due in large part to the lack of a demonstrated causal relationship between relatively low PM exposures and adverse effects in controlled in vivo studies. Thus, in vitro studies have their greatest scientific value when they are designed on the basis of results of controlled whole-animal or clinical studies, involve relatively realistic exposures, and test specific mechanistic hypotheses.

Mechanistic information at the cellular and molecular levels obtained from well-designed in vitro studies can contribute to the weight of evidence regarding a causal relationship between PM exposure and health effects. That will reduce uncertainties related to the plausibility of observed adverse PM effects. Knowledge gained about mechanisms of PM toxicity will contribute greatly to the scientific justification of the PM standards.

In vitro studies clearly are feasible in many laboratories. It is important that special attention be directed toward the use of relevant doses. Moreover, the development of appropriate new methods for in vitro studies should be encouraged, including airborne-particle exposures of cell cultures, use of cells from compromised lungs, and use of genetically modified cells. Because the developmental phase of these models is potentially long, useful results might not become available very soon.

Research Topic 9c. Clinical Models

What are the appropriate clinical models to use in studies of particulate matter toxicity?

Clinical studies are controlled exposures of humans. In the case of PM, such studies are designed to use laboratory-generated surrogate particles or concentrated ambient-air particles (CAP). The use of human subjects avoids the need to extrapolate results from other species. Both normal and susceptible subpopulations can be studied, and physiologic, cellular, immunologic, electrocardiographic, and vascular end points, as well as symptoms, can be assessed. Elucidation of responses in humans is key to understanding the importance of ambient pollution and determining the nature of adverse health effects of PM exposure.

Review of the HEI database and proceedings of the PM 2000 meeting identified about 10 active human-exposure studies. All are using particles of concern, which include CAP, ultrafine carbon, ultrafine acidic sulfates, diluted diesel exhaust, and smoke from burning of vegetable matter. Studies are under way in healthy volunteers, asthmatics, and atopic people. Studies in people who have chronic obstructive pulmonary disease (COPD) or cardiac disease are planned. The clinical studies focus on evaluation of pulmonary and systemic responses, such as pulmonary inflammation and injury to epithelial cells; cardiac rhythm, rate, and variability; initiation of the coagulation cascade; and symptoms.

Few laboratories are equipped to perform clinical studies of PM. However, the similarities in their protocols enhance the likelihood of obtaining useful data. For example, studies with CAP and ultrafine particles have incorporated prolonged electrocardiographic monitoring after exposure. All studies include physiologic assessments of lung function, and indicators of airway inflammation in nasal or bronchoalveolar lavage fluid, induced sputum, or exhaled air (such as nitric oxide). In addition, coagulation indexes in blood are examined in some of the studies. In selected cases, efforts have been made to centralize analytical studies in a core laboratory for standardization of techniques.

There are a number of difficulties in establishing clinical models to study PM. Although the particle concentrators allow exposure to relevant atmospheres, the mixtures vary from day to day and, typically, minimal chemical analyses of the particles are performed. If responses to CAP are variable, it is not possible to determine whether the variability resulted from differences in human susceptibility or in particle chemistry. In contrast, studies with surrogate particles result in reproducible exposures but mimic only selected aspects of ambient particulate pollution. Furthermore, the epidemiological data suggest that the most severely ill are at risk of pollutant effects; these subgroups cannot be used in controlled clinical studies. Because clinical studies by design are limited to short-term exposures, they will rarely be able to contribute to an understanding of the development of chronic disease secondary to exposure to particles.

The particle-exposure systems used in clinical studies include environmental chambers, facemasks, and mouthpieces. Each design offers specific advantages; for example, the mouthpiece studies with ultrafine particles have incorporated measurements of total particle deposition. One clinical study will investigate the interaction of particles with ozone, another plans to incorporate metals into the particles, and virtually all include some level of exercise to enhance minute ventilation, thus increasing the inhaled dose of pollutants.
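As a rough illustration of why exercise matters, a first-order dosimetry approximation estimates the deposited dose as the product of concentration, minute ventilation, exposure duration, and deposition fraction. The sketch below uses assumed, illustrative values; in practice the deposition fraction varies strongly with particle size and breathing pattern.

```python
def inhaled_dose_ug(conc_ug_m3: float, minute_vent_L: float,
                    duration_min: float, deposition_fraction: float) -> float:
    """First-order estimate of deposited particle dose.

    dose (ug) = concentration (ug/m^3) x ventilation (m^3/min) x time (min)
                x fraction of inhaled particles deposited in the respiratory tract.
    """
    vent_m3_per_min = minute_vent_L / 1000.0  # L/min -> m^3/min
    return conc_ug_m3 * vent_m3_per_min * duration_min * deposition_fraction

# Illustrative values: a 100 ug/m^3 CAP exposure for 2 hours.
rest = inhaled_dose_ug(100.0, 7.5, 120.0, 0.3)       # resting ventilation ~7.5 L/min
exercise = inhaled_dose_ug(100.0, 30.0, 120.0, 0.3)  # moderate exercise ~30 L/min
print(f"rest: {rest:.0f} ug deposited; exercise: {exercise:.0f} ug deposited")
```

With these assumed numbers, moderate exercise roughly quadruples the deposited dose for the same exposure.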

The current and planned clinical studies are designed to investigate CAP and several specific components of PM (such as size, acids, metals, and diesel exhaust) with a number of pulmonary and systemic end points. Studies are under way in susceptible subpopulations and are planned in other subgroups with pre-existing disease. Despite the limited facilities available for clinical research, the array of studies under way should provide valuable information on PM toxicity.

Clinical studies present an opportunity to examine responses to PM in both healthy and susceptible subpopulations. Carefully designed controlled exposures provide information on symptomatic, physiologic, and cellular responses in both healthy and at-risk groups. They also provide important insights into mechanisms of action of PM. Such studies can provide needed information on PM deposition and retention in healthy and susceptible subpopulations (see research topic 6).

Clinical studies often provide important information for regulatory decisions. Assessing acute responses in groups that have chronic diseases will provide important insights into plausible mechanistic pathways. In addition, such studies provide crucial data on relative differences in responsiveness between healthy and potentially at-risk populations.

Studies are under way in several laboratories. They should provide highly relevant information for the next review of PM for regulatory decisions.

RESEARCH TOPIC 10. ANALYSIS AND MEASUREMENT

To what extent does the choice of statistical methods in the analysis of data from epidemiological studies influence estimates of health risks from exposures to particulate matter? Can existing methods be improved? What is the effect of measurement error and misclassification on estimates of the association between air pollution and health?

The first report of this committee (NRC 1998) outlined several methodological issues that needed further study. These included the choice of statistical methods for analyzing data obtained from other studies, especially epidemiologic studies. Because more than one method can be used to analyze data, it will be important to understand the extent to which alternative approaches can influence analytical results. In addition, new study designs will require new approaches to analyze the data. These include development of analytical methods to examine several constituents and fractions of PM in an effort to understand their associations with health end points and design of models and approaches to incorporate new biological insights. Specific attention was given to measurement error, an issue inherent in most epidemiological studies that use ambient-air data to characterize subjects' exposure. The committee's second report (NRC 1999) reiterated those needs and noted the existence of relevant research and papers nearing completion.

Review of scientific literature, meeting abstracts, and the HEI database identified extensive progress on several methodological subjects. The review was intended to evaluate the extent to which the research needs previously identified by the committee are being addressed and to stimulate further targeted research.

General Methodological Issues

Model development and evaluation.

Over the last several years, there has been considerable development of time-series data-analysis methods, which have provided much of the evidence on the association between PM exposures and health effects. The methods assess the variation in day-to-day mortality or morbidity counts with variation in PM concentrations on the same or previous day. Although systematic and comprehensive comparisons of alternative methods have not been reported, limited comparisons have suggested that results are relatively robust to the statistical approach used. However, the choices of input variables and data have been shown occasionally to influence results (Lipfert et al. 2000). That is particularly true with respect to the choice of pollution variables in the statistical models. The presence of other variables in the models can influence the association between health measures and particulate air pollution.

The application of the time-series studies has been facilitated by recent advances in hardware and software and by the development of statistical approaches that can appropriately account for the data structure of the daily time series. Time-series analyses were initially conducted on single locations that had been selected primarily on the basis of data availability rather than selection from a defined sampling frame. Meta-analysis was then used to summarize the data and to gain a more precise estimate of the effect of PM on mortality or morbidity. Recently, studies with more-formal multicity designs have been conducted. These approaches have a priori plans for selecting locations and standardized statistical methods across locations. The Air Pollution and Health: A European Approach (APHEA) project (Katsouyanni et al. 1995) is a pioneering effort that initially analyzed routinely collected data from 15 European cities in 10 countries with a common statistical protocol, examining mortality and emergency hospitalizations in some cities. In the United States, the HEI has funded the National Morbidity, Mortality and Air Pollution Study (NMMAPS) (Samet et al. 2000, 2001). The NMMAPS includes analyses of mortality and morbidity separately; a joint analysis of morbidity and mortality is planned. For the mortality analysis, the NMMAPS investigators used a sampling frame defined by U.S. counties. The 90 largest urban areas (by population) were selected, and the daily mortality data for 1987-1994 were analyzed to assess associations with PM and other pollutants.
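The following is a minimal sketch of the kind of single-city Poisson time-series regression described above, run on synthetic data with smooth terms for long-term trend and temperature. It is illustrative only and does not reproduce the NMMAPS or APHEA protocols.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic daily series standing in for one city's data (illustrative only).
rng = np.random.default_rng(1)
n = 730
time = np.arange(n)
temp = 15 + 10 * np.sin(2 * np.pi * time / 365) + rng.normal(0, 2, n)
pm10 = np.clip(30 + 15 * rng.normal(size=n) + 0.3 * temp, 1, None)
# The true log-rate includes season, temperature, and a small PM effect.
log_mu = np.log(20) + 0.1 * np.sin(2 * np.pi * time / 365) + 0.0008 * pm10
deaths = rng.poisson(np.exp(log_mu))
df = pd.DataFrame({"deaths": deaths, "pm10": pm10, "temp": temp, "time": time})

# Poisson GLM: daily deaths vs. same-day PM10, with spline terms (patsy's bs)
# controlling for long-term trend/season and temperature.
model = smf.glm("deaths ~ pm10 + bs(time, df=7) + bs(temp, df=4)",
                data=df, family=sm.families.Poisson()).fit()
beta = model.params["pm10"]
print(f"Estimated % change in mortality per 10 ug/m^3 PM10: "
      f"{100 * (np.exp(10 * beta) - 1):.2f}%")
```

The qualitative point stands regardless of the toy data: the PM coefficient is only interpretable after the smooth confounder terms absorb seasonality and weather.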

The methods used in the APHEA project and the NMMAPS show the potential power of multicity approaches. The potential selection bias of using only a single location or a few locations is avoided. Combining information across locations increases statistical power and allows heterogeneity in effects among locations to be examined. In addition, health effects can be compared between regions that have similar air-pollution levels.

Other research efforts involving model development are the exploration of distributed-lag models (Schwartz 2000a; Zanobetti et al. 2000), efforts to understand the dose-response relationship between PM exposure and health effects (Schwartz 2000b; Smith et al. 2000; Schwartz and Zanobetti 2000), and examination of alternative ways of analyzing the relationship between air-quality data and health end points (Beer and Ricci 1999; Sunyer et al. 2000; Tu and Piegorsch 2000; Zhang et al. 2000). Other research efforts have also aimed at combining results from several studies, including those by Stroup et al. (2000) and Sutton et al. (2000).
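A minimal sketch of an unconstrained distributed-lag specification follows, again on synthetic data. Published analyses (e.g., Schwartz 2000a) use richer confounder control and often constrain the lag structure; everything here is illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic daily data of the same kind as the previous sketch.
rng = np.random.default_rng(1)
n = 730
time = np.arange(n)
pm10 = np.clip(30 + 15 * rng.normal(size=n), 1, None)
deaths = rng.poisson(20 * np.exp(0.0008 * pm10))
df = pd.DataFrame({"deaths": deaths, "pm10": pm10, "time": time})

# Unconstrained distributed lag: enter PM10 on the same day and each of the
# 3 preceding days; the sum of the lag coefficients estimates the cumulative effect.
max_lag = 3
for lag in range(1, max_lag + 1):
    df[f"pm10_l{lag}"] = df["pm10"].shift(lag)
df = df.dropna()

terms = " + ".join(["pm10"] + [f"pm10_l{k}" for k in range(1, max_lag + 1)])
fit = smf.glm(f"deaths ~ {terms} + bs(time, df=7)",
              data=df, family=sm.families.Poisson()).fit()
cumulative = fit.params[["pm10"] + [f"pm10_l{k}" for k in range(1, max_lag + 1)]].sum()
print(f"Cumulative log-relative-rate per ug/m^3 over lags 0-3: {cumulative:.5f}")
```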

Measurement Error

The difference between actual exposures and measured ambient-air concentrations is termed measurement error. Measurement error can occur when measures of ambient air pollution are used as an index of personal exposure. For PM, the three sources of measurement error are instrument error (the accuracy and precision of the monitoring instrument), error resulting from the nonrepresentativeness of a monitoring site (reflected by the spatial variability of the pollutant measured), and differences between the average personal exposure to a pollutant and the monitored concentration (influenced by microenvironmental exposures).

With regard to assessing the impact of outdoor exposures, the most important source of measurement error is related to the representativeness of the placement of monitors. In acute studies, other sources of error will not vary substantially from day to day. But in chronic studies, the most important errors are those associated with microenvironmental exposures. The presence of indoor sources of PM and the influence of home characteristics on penetration of outdoor particles into the indoor environment can be a source of substantial exposure error. The influence of home characteristics is important because it varies with geographical location, climate, socioeconomic factors, and season. Because those factors could introduce systematic errors, they must be considered in the analysis and interpretation of results of chronic epidemiological studies. They are often taken into account not by using direct measures of exposure but by using surrogate measures that would influence the exposures, such as smoking in the household, the presence of gas stoves, and air conditioning.

Measurement error is of particular concern in studies intended to isolate the effects of particles from those of gases or to distinguish the effects of individual particle species or size fractions from each other. When several pollutant variables are included in the same analyses and the different variables have different magnitudes and types of measurement error, the issue of estimating the associations between health responses and specific variables is even more complicated. A well-measured but benign substance might serve as the best empirical predictor of community health effects, rather than a poorly measured but toxic substance that is similarly distributed in the atmosphere. The problem is that most pollutants tend to be similarly distributed, so collocated time series of pollutant measurements tend to covary because all pollutants are modulated by synoptic meteorological conditions. Long-term averages of pollutant concentrations tend to covary across cities because the rates of many categories of emissions tend to increase roughly with population. Various methods are available to adjust statistical analyses for the effects of differential measurement error (Fuller 1987; Carroll et al. 1995).

Several statistical issues must be considered in addressing measurement error. A full discussion of these issues is found in Fuller (1987) and Carroll et al. (1995). The most important is the type of model in which the measurement error is embedded. Generally, in linear models, measurement error can be understood if it is assumed that the errors are independent of each other and of other variables in the model and that they follow the same statistical distribution. However, it is common for the measurement-error distribution and its properties not to be readily apparent, for example in ambient-air quality data, because “true” measurements of personal exposure have not been available. Recent studies have generated data that will provide a better understanding of the properties of measurement error. Until its specific properties are understood, its consequences will be unclear. For instance, Stefanski (1997) cites examples from a linear model in which the regression coefficient could be biased in either direction or unbiased, depending on the characteristics of the measurement error. The issues become increasingly complex as one moves to multiple-regression models (Carroll and Galindo, 1998) and then to nonlinear models.
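The classical linear-model case can be illustrated with a toy simulation: independent, additive error in the exposure attenuates the estimated slope toward zero by the reliability ratio var(x) / (var(x) + var(u)). The sketch below is illustrative only and is not drawn from the studies discussed.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
x_true = rng.normal(50, 10, n)                  # unobserved "personal" exposure
y = 2.0 + 0.05 * x_true + rng.normal(0, 1, n)   # outcome with true slope 0.05

for sigma_u in (0.0, 5.0, 10.0):
    x_obs = x_true + rng.normal(0, sigma_u, n)  # classical, independent error
    slope = np.polyfit(x_obs, y, 1)[0]          # ordinary least-squares slope
    lam = 10.0**2 / (10.0**2 + sigma_u**2)      # reliability ratio var(x)/(var(x)+var(u))
    print(f"error sd {sigma_u:>4.1f}: estimated slope {slope:.4f} "
          f"(theory {0.05 * lam:.4f})")
```

As the Stefanski (1997) examples warn, this neat attenuation result does not carry over automatically to correlated errors, multiple regressors, or nonlinear models.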

A framework has recently been developed for considering the effects of measurement error on population-mortality relative risks (Zeger et al. 2000). The framework demonstrates that, for a wide range of circumstances, the impacts of measurement error will either lead to underestimates of association or have a negligible effect. Combined with some of the data now being generated, the framework promises considerable progress toward an understanding of measurement error.

Harvesting is an issue raised by time-series mortality studies. The term “harvesting” refers to the question of whether deaths from air pollution occur in people who are highly susceptible and near death (and die a few days earlier because of air pollution than they otherwise would have) or whether air pollution leads to the deaths of people who are not otherwise near death.

Many studies have identified associations between daily mortality and air-quality variables measured at the same time or a few days before the deaths, but none of them has been able to address fully the issue of harvesting, although several recent analyses (Zeger et al. 1999; Schwartz 2000c) suggest that the findings of daily time-series studies do not reflect mortality displacement alone. Several analytical approaches have been proposed to address harvesting, and they need to be tried on additional data sets and refined to quantify better the degree of life-shortening associated with PM and other pollutants. Four recent papers examine this issue from different perspectives (Smith et al. 1999; Zeger et al. 1999; Murray and Nelson 2000; Schwartz 2000c).
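A toy simulation can make the displacement idea concrete: if pollution only advances imminent deaths by a few days, the excess on a high-pollution day is offset by a deficit immediately afterward. Everything below is invented for illustration and is not one of the published harvesting analyses.

```python
import numpy as np

rng = np.random.default_rng(7)
days = 60
pollution_day = 30
# Day on which each of 50,000 frail individuals would die absent pollution.
natural_day = rng.integers(0, days, size=50_000)

# Pure displacement: anyone within 2 days of death dies on the pollution day instead.
death_day = natural_day.copy()
advanced = (natural_day > pollution_day) & (natural_day <= pollution_day + 2)
death_day[advanced] = pollution_day

daily = np.bincount(death_day, minlength=days)
baseline = daily[:pollution_day].mean()
excess = daily[pollution_day] - baseline
deficit = 2 * baseline - daily[pollution_day + 1 : pollution_day + 3].sum()
print(f"excess deaths on the pollution day: {excess:.0f}")
print(f"offsetting deficit over the next 2 days: {deficit:.0f}")
# Under pure harvesting the excess and deficit cancel, so analyses over longer
# windows can, in principle, separate displacement from net life-shortening.
```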

Spatial Analytical Methods

An important issue in the analysis of data from studies that examine the association between city-specific mortality and long-term average pollutant concentrations is whether observations of individual subjects are independent or correlated. Spatial correlation in mortality can result from common social and physical environments among residents of the same city. Air pollution can be spatially autocorrelated as a result of broad regional patterns stemming from source and dispersion patterns.

In a recent reanalysis of data from the study by Pope et al. (1995), which examined associations between mortality in 154 cities throughout the United States and fine-particle and sulfate concentrations, Krewski et al. (2000) developed and applied new methods to allow for the presence of spatial autocorrelation in the data. The methods included two-stage random-effects regression, which was used to account for spatial patterns in mortality data and between- and within-city particle air pollution levels, and application of spatial filtering to remove regional patterns in the data. Taking spatial autocorrelation into account in this manner increased the estimate of the mortality ratios associated with exposure to PM and led to wider confidence limits than in the original analysis, in which all individuals in the study were assumed to represent independent observations.
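As a sketch of the second stage of such a two-stage approach, the code below pools hypothetical city-specific estimates with DerSimonian-Laird random-effects weights. It deliberately omits the spatial-autocorrelation adjustments and spatial filtering that distinguish the Krewski et al. (2000) methods; all numbers are invented.

```python
import numpy as np

# Stage 1 output (hypothetical): city-specific log-relative-risk estimates per
# 10 ug/m^3 of fine particles, with their standard errors.
beta = np.array([0.04, 0.10, 0.02, 0.07, 0.05, 0.12])
se = np.array([0.03, 0.04, 0.02, 0.05, 0.03, 0.06])

# Stage 2: DerSimonian-Laird random-effects pooling.
w_fixed = 1 / se**2
beta_fixed = np.sum(w_fixed * beta) / np.sum(w_fixed)
Q = np.sum(w_fixed * (beta - beta_fixed) ** 2)          # heterogeneity statistic
k = len(beta)
tau2 = max(0.0, (Q - (k - 1)) /
           (np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)))

w_re = 1 / (se**2 + tau2)   # weights shrink when between-city variance is large
beta_re = np.sum(w_re * beta) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"pooled log-RR: {beta_re:.3f} +/- {1.96 * se_re:.3f} (tau^2 = {tau2:.5f})")
```

Spatially correlated city estimates would require replacing the independent weights above with a full between-city covariance matrix, which is exactly the complication the reanalysis addresses.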

The initial work on the development of analytical methods for the analysis of community-level data that exhibit clear spatial patterns warrants further investigation. Failure to take such spatial patterns into account can lead to bias in the estimates of mortality associated with long-term exposure to fine particles and to inaccurate indications of statistical significance.

The recent research appears to address the research gaps and needs identified by the committee. That is especially true for the measurement-error and harvesting issues. Because this research is new, it needs to be digested and applied to several data sets to increase our understanding. Data that are available or being collected allow further testing of the applications and methods. However, several subjects warrant further research: elucidation of the statistical properties of the new spatial approaches discussed, consideration of alternative ways of addressing spatial autocorrelation in the data, and application of such spatial analytical methods to additional data sets.

The research has been well conducted with strong statistical tools. In addition, it has taken advantage of the existing literature and methods while applying them to new subjects. However, the statistical tools have been applied to few data sets. The value of the research will increase as it is applied to more data sets and as approaches and results from the various studies are compared, synthesized, and reconciled.

The research can contribute substantially to decisionmaking. Understanding the potential influence of modeling approaches on results is key to adequate use of the research findings. Because measurement error can affect the results, insights into its influence will assist in the interpretation of the results and ultimately increase their influence on decisionmaking. Understanding of harvesting will help to place estimates of effects on mortality in a public-health perspective.

Feasibility is not a deterrent to the research in this field. It appears that extensive results will be available within the timeframe laid out by this committee.

Regulatory standards are already on the books at the U.S. Environmental Protection Agency (EPA) to address health risks posed by inhaling tiny particles from smoke, vehicle exhaust, and other sources.

At the same time, Congress and EPA have initiated a multimillion dollar research effort to better understand the sources of these airborne particles, the levels of exposure to people, and the ways that these particles cause damage.

To provide independent guidance to the EPA, Congress asked the National Research Council to study the relevant issues. The result is a series of four reports on the particulate-matter research program. The first two books offered a conceptual framework for a national research program, identified the 10 most critical research needs, and described the recommended timing and estimated costs of such research.

This, the third volume, begins the task of assessing the progress made in implementing the research program. The National Research Council ultimately concludes that the ongoing program is appropriately addressing many of the key uncertainties. However, it also identifies a number of critical specific subjects that should be given greater attention. Research Priorities for Airborne Particulate Matter focuses on the most current and planned research projects with an eye toward the fourth and final report, which will contain an updated assessment.


Perspectives on scientific progress

Michela Massimi (ORCID: orcid.org/0000-0001-6626-9174)

Nature Physics volume 18, pages 604–606 (2022). Published 06 June 2022. https://doi.org/10.1038/s41567-022-01637-5


Against the backdrop of various philosophical accounts, this Comment argues for the need for a human rights approach to scientific progress, which requires us to rethink how we view scientific knowledge.





Research Process – Steps, Examples and Tips

Research Process

Definition:

Research Process is a systematic and structured approach that involves the collection, analysis, and interpretation of data or information to answer a specific research question or solve a particular problem.

Research Process Steps

Research Process Steps are as follows:

Identify the Research Question or Problem

This is the first step in the research process. It involves identifying a problem or question that needs to be addressed. The research question should be specific, relevant, and focused on a particular area of interest.

Conduct a Literature Review

Once the research question has been identified, the next step is to conduct a literature review. This involves reviewing existing research and literature on the topic to identify any gaps in knowledge or areas where further research is needed. A literature review helps to provide a theoretical framework for the research and also ensures that the research is not duplicating previous work.

Formulate a Hypothesis or Research Objectives

Based on the research question and literature review, the researcher can formulate a hypothesis or research objectives. A hypothesis is a statement that can be tested to determine its validity, while research objectives are specific goals that the researcher aims to achieve through the research.

Design a Research Plan and Methodology

This step involves designing a research plan and methodology that will enable the researcher to collect and analyze data to test the hypothesis or achieve the research objectives. The research plan should include details on the sample size, data collection methods, and data analysis techniques that will be used.

Collect and Analyze Data

This step involves collecting and analyzing data according to the research plan and methodology. Data can be collected through various methods, including surveys, interviews, observations, or experiments. The data analysis process involves cleaning and organizing the data, applying statistical and analytical techniques to the data, and interpreting the results.
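For readers who want a concrete picture of the cleaning-and-organizing part of this step, here is a minimal pandas sketch; the column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical survey responses; in practice these would be read from a file,
# e.g., df = pd.read_csv("survey_responses.csv").
df = pd.DataFrame({
    "respondent": [1, 2, 3, 4, 5, 6],
    "group": [" Treatment", "control", "Control ", "treatment", None, "control"],
    "score": [72.0, 65.5, None, 80.0, 70.0, 68.5],
})

# Cleaning: drop incomplete records and normalize the categorical field.
df = df.dropna(subset=["group", "score"])
df["group"] = df["group"].str.strip().str.lower()

# Organizing and summarizing: descriptive statistics per study group.
print(df.groupby("group")["score"].agg(["count", "mean", "std"]))
```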

Interpret the Findings and Draw Conclusions

After analyzing the data, the researcher must interpret the findings and draw conclusions. This involves assessing the validity and reliability of the results and determining whether the hypothesis was supported or not. The researcher must also consider any limitations of the research and discuss the implications of the findings.

Communicate the Results

Finally, the researcher must communicate the results of the research through a research report, presentation, or publication. The research report should provide a detailed account of the research process, including the research question, literature review, research methodology, data analysis, findings, and conclusions. The report should also include recommendations for further research in the area.

Review and Revise

The research process is an iterative one, and it is important to review and revise the research plan and methodology as necessary. Researchers should assess the quality of their data and methods, reflect on their findings, and consider areas for improvement.

Ethical Considerations

Throughout the research process, ethical considerations must be taken into account. This includes ensuring that the research design protects the welfare of research participants, obtaining informed consent, maintaining confidentiality and privacy, and avoiding any potential harm to participants or their communities.

Dissemination and Application

The final step in the research process is to disseminate the findings and apply the research to real-world settings. Researchers can share their findings through academic publications, presentations at conferences, or media coverage. The research can be used to inform policy decisions, develop interventions, or improve practice in the relevant field.

Research Process Example

Following is a Research Process Example:

Research Question: What are the effects of a plant-based diet on athletic performance in high school athletes?

Step 1: Background Research Conduct a literature review to gain a better understanding of the existing research on the topic. Read academic articles and research studies related to plant-based diets, athletic performance, and high school athletes.

Step 2: Develop a Hypothesis Based on the literature review, develop a hypothesis that a plant-based diet positively affects athletic performance in high school athletes.

Step 3: Design the Study Design a study to test the hypothesis. Decide on the study population, sample size, and research methods. For this study, you could use a survey to collect data on dietary habits and athletic performance from a sample of high school athletes who follow a plant-based diet and a sample of high school athletes who do not follow a plant-based diet.

Step 4: Collect Data Distribute the survey to the selected sample and collect data on dietary habits and athletic performance.

Step 5: Analyze Data Use statistical analysis to compare the data from the two samples and determine if there is a significant difference in athletic performance between those who follow a plant-based diet and those who do not.
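If the performance measure is a simple numeric score, the comparison in Step 5 could be as basic as a two-sample t-test. The sketch below uses invented scores and assumes independent groups; a real analysis would also check assumptions and adjust for confounders.

```python
from scipy import stats

# Hypothetical performance scores (e.g., fitness-test results on a 0-100 scale)
# for the two survey samples.
plant_based = [78, 82, 75, 90, 85, 79, 88, 73, 84, 81]
omnivore = [74, 80, 72, 83, 78, 76, 85, 70, 79, 77]

# Welch's t-test (does not assume equal variances between groups).
t_stat, p_value = stats.ttest_ind(plant_based, omnivore, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value below the chosen significance level would suggest a difference in
# mean performance, subject to the design limitations discussed in Step 6.
```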

Step 6: Interpret Results Interpret the results of the analysis in the context of the research question and hypothesis. Discuss any limitations or potential biases in the study design.

Step 7: Draw Conclusions Based on the results, draw conclusions about whether a plant-based diet has a significant effect on athletic performance in high school athletes. If the hypothesis is supported by the data, discuss potential implications and future research directions.

Step 8: Communicate Findings Communicate the findings of the study in a clear and concise manner. Use appropriate language, visuals, and formats to ensure that the findings are understood and valued.

Applications of Research Process

The research process has numerous applications across a wide range of fields and industries. Some examples of applications of the research process include:

  • Scientific research: The research process is widely used in scientific research to investigate phenomena in the natural world and develop new theories or technologies. This includes fields such as biology, chemistry, physics, and environmental science.
  • Social sciences: The research process is commonly used in social sciences to study human behavior, social structures, and institutions. This includes fields such as sociology, psychology, anthropology, and economics.
  • Education: The research process is used in education to study learning processes, curriculum design, and teaching methodologies. This includes research on student achievement, teacher effectiveness, and educational policy.
  • Healthcare: The research process is used in healthcare to investigate medical conditions, develop new treatments, and evaluate healthcare interventions. This includes fields such as medicine, nursing, and public health.
  • Business and industry: The research process is used in business and industry to study consumer behavior and market trends and to develop new products or services. This includes market research, product development, and customer satisfaction research.
  • Government and policy: The research process is used in government and policy to evaluate the effectiveness of policies and programs and to inform policy decisions. This includes research on social welfare, crime prevention, and environmental policy.

Purpose of Research Process

The purpose of the research process is to systematically and scientifically investigate a problem or question in order to generate new knowledge or solve a problem. The research process enables researchers to:

  • Identify gaps in existing knowledge: By conducting a thorough literature review, researchers can identify gaps in existing knowledge and develop research questions that address these gaps.
  • Collect and analyze data: The research process provides a structured approach to collecting and analyzing data. Researchers can use a variety of research methods, including surveys, experiments, and interviews, to collect data that is valid and reliable.
  • Test hypotheses: The research process allows researchers to test hypotheses and make evidence-based conclusions. Through the systematic analysis of data, researchers can draw conclusions about the relationships between variables and develop new theories or models.
  • Solve problems: The research process can be used to solve practical problems and improve real-world outcomes. For example, researchers can develop interventions to address health or social problems, evaluate the effectiveness of policies or programs, and improve organizational processes.
  • Generate new knowledge: The research process is a key way to generate new knowledge and advance understanding in a given field. By conducting rigorous and well-designed research, researchers can make significant contributions to their field and help to shape future research.

Tips for Research Process

Here are some tips for the research process:

  • Start with a clear research question: A well-defined research question is the foundation of a successful research project. It should be specific, relevant, and achievable within the given time frame and resources.
  • Conduct a thorough literature review: A comprehensive literature review will help you to identify gaps in existing knowledge, build on previous research, and avoid duplication. It will also provide a theoretical framework for your research.
  • Choose appropriate research methods: Select research methods that are appropriate for your research question, objectives, and sample size. Ensure that your methods are valid, reliable, and ethical.
  • Be organized and systematic: Keep detailed notes throughout the research process, including your research plan, methodology, data collection, and analysis. This will help you to stay organized and ensure that you don’t miss any important details.
  • Analyze data rigorously: Use appropriate statistical and analytical techniques to analyze your data. Ensure that your analysis is valid, reliable, and transparent.
  • Interpret results carefully: Interpret your results in the context of your research question and objectives. Consider any limitations or potential biases in your research design, and be cautious in drawing conclusions.
  • Communicate effectively: Communicate your research findings clearly and effectively to your target audience. Use appropriate language, visuals, and formats to ensure that your findings are understood and valued.
  • Collaborate and seek feedback: Collaborate with other researchers, experts, or stakeholders in your field. Seek feedback on your research design, methods, and findings to ensure that they are relevant, meaningful, and impactful.



Why Are Progress Reports Important in Research?


Did your supervisor ask you to write a progress report? A significant part of your supervisor’s job is to be a project manager and be accountable to funders. Remember, your research project is just one cog in the wheel of science in your lab. Therefore, your supervisor will need reports on the progress of each project. Their job is to evaluate your progress and adjust your action plan if things go wrong. These reports are not simply a report of your results.

Set SMART Goals

First, set “SMART” goals for your project (SMART = Specific, Measurable, Attainable, Realistic and Time-targeted goals). I like SMART goals because they are more than a list of things to achieve. They are part of an action plan. Your progress reports will help you check whether your goals are on track.

Progress Report Intervals

Your supervisor will tell you how often they need progress reports. The time interval depends on the nature of your work. I like to monitor my projects regularly, on a daily, weekly, and monthly basis.

  • Daily: this is an informal walk around the lab where I have a quick chat with my team members on what they are doing. I also set myself a daily “to do” list.
  • Weekly: my team meets once a week for coffee and an informal journal club. After discussing a paper, we give a quick verbal report on our progress during the past week and any challenges that have arisen. If we find that someone is struggling or their work needs further discussion, we schedule a meeting.
  • Monthly: once a month we write a formal progress report of our work and meet on a one-to-one basis to discuss it.

Purpose of Progress Reports

Your report should include your results obtained so far, experiments you are working on, plans for future work and any problems you might have with your work. It is a report on your overall plan. This plan needs constant assessment to ensure you reach your goals and to help you make informed decisions and justify any changes.

Progress reports also keep stakeholders informed. Anyone involved with your project wants to know:

  • That you are working on the project and making progress.
  • What findings you have made so far.
  • That you are evaluating your work as you go.
  • Whether you foresee any challenges.
  • That any problems you encounter are being addressed.
  • That you can manage your plan and schedule.
  • That your project will be completed on time.

How to Write a Progress Report

Ask your supervisor if they have a template that they want you to use. Supervisors who manage many projects find it easier to keep track of all the information if it is presented in a consistent format. Write your report in concise, simple language. Progress report styles vary; however, most reports require the following sections:

  • Project information: State the project name, any project ID codes, the names of all the researchers involved, the report date, and the anticipated completion date.
  • Introduction: This is a summary of your report. Write a short overview of the purpose of the project and its main objectives. You could add a summary of the results obtained so far, future goals, how much of the project has been completed, whether it will be completed on time, and whether you are within the budget.
  • Progress: This section gives details of your objectives and how much you have completed so far. List your milestones, give details of your results, and include any tables and figures here. Some stakeholders like a completion rate, which can be given as a percentage.
  • Risks and issues: Discuss any challenges that have arisen or that you foresee. Describe how you plan to solve them. If you need to make changes to your project, give reasons in this section.
  • Conclusion: Round off with a reassuring paragraph that your research is on schedule. Give a summary of goals you will be working on next and when you expect to complete them.

Progress reports are an essential part of research. They help to manage projects and secure funding. Many stakeholders need to know that you have completed certain stages of your project before releasing further funds.




Research Process: 8 Steps in Research Process

What Is the Research Process?

The research process starts with identifying a research problem and conducting a literature review to understand the context. The researcher sets research questions, objectives, and hypotheses based on the research problem.

A research study design is formed to select a sample size and collect data; after the collected data are processed and analyzed, the research findings are presented in a research report.

What is the Research Process?

There are a variety of approaches to research in any field of investigation, irrespective of whether it is applied research or basic research. Each research study will be unique in some ways because of the particular time, setting, environment, and place it is being undertaken.

Nevertheless, all research endeavors share a common goal of furthering our understanding of the problem, and thus, all traverse through certain primary stages, forming a process called the research process.

Understanding the research process is necessary to effectively carry out research and sequence the stages inherent in the process.

How Does the Research Process Work?

The eight-step research process is, in essence, part and parcel of a research proposal. It is an outline of the commitment that you intend to follow in executing a research study.

A close examination of the above stages reveals that each of these stages, by and large, is dependent upon the others.

One cannot analyze data (step 7) unless the data have been collected (step 6). One cannot write a report (step 8) unless the data have been collected and analyzed (steps 6 and 7).

Research then is a system of interdependent related stages. Violation of this sequence can cause irreparable harm to the study.

It is also true that several alternatives are available to the researcher during each stage stated above. A research process can be compared with a route map.

The map analogy is useful for the researcher because several alternatives exist at each stage of the research process.

Choosing the best alternative at each stage, in terms of time constraints, money, and human resources, is the researcher's primary goal.

Before explaining the stages of the research process, we explain the term ‘iterative’ appearing within the oval-shaped diagram at the center of the schematic diagram.

The key to a successful research project ultimately lies in iteration: the process of returning again and again to the identification of the research problems, methodology, data collection, etc., which leads to new ideas, revisions, and improvements.

By discussing the research project with advisers and peers, one will often find that new research questions need to be added, variables to be omitted, added or redefined, and other changes to be made. As a proposed study is examined and reexamined from different perspectives, it may begin to transform and take a different shape.

This is expected and is an essential component of a good research study.

Besides, examining study methods and data collected from different viewpoints is important to ensure a comprehensive approach to the research question.

In conclusion, there is seldom any single strategy or formula for developing a successful research study, but it is essential to realize that the research process is cyclical and iterative.

What is the primary purpose of the research process?

The research process aims to identify a research problem, understand its context through a literature review, set research questions and objectives, design a research study, select a sample, collect data, analyze the data, and present the findings in a research report.

Why is the research design important in the research process?

The research design is the blueprint for fulfilling objectives and answering research questions. It specifies the methods and procedures for collecting, processing, and analyzing data, ensuring the study is structured and systematic.

8 Steps of the Research Process

Step #1: Identifying the Research Problem

The first and foremost task in the entire process of scientific research is to identify a research problem.

A well-identified problem will lead the researcher through all the important phases of the research process, from setting objectives to selecting the research methodology.

But the core question is: do all problems require research?

We have countless problems around us, but not all of them qualify as research problems, and thus not all need to be researched.

Keeping this point in mind, we must draw a line between research and non-research problems.

Intuitively, researchable problems are those that can be thoroughly verified and investigated through the collection and analysis of data. In contrast, non-research problems do not need to go through these processes.

Researchers need to be able to identify both:

Non-Research Problems


A non-research problem does not require any research to arrive at a solution; its answer is already known or obvious.

It is a managerial or built-in problem that may be solved at the administrative or management level. The answer to any question raised in a non-research setting is almost always obvious.

A cholera outbreak following a severe flood, for example, is a common phenomenon in many communities. The reason for it is known. It is thus not a research problem.

Similarly, the reasons for the sudden rise in prices of many essential commodities following the announcement of the budget by the Finance Minister need no investigation. Hence it is not a problem that needs research.

How is a research problem different from a non-research problem?

A research problem is a perceived difficulty that requires thorough verification and investigation through data analysis and collection. In contrast, a non-research problem does not require research for a solution, as the answer is often obvious or already known.

Non-Research Problems Examples

A recent survey in Town-A found that 1,000 women were continuous users of contraceptive pills.

But last month’s service statistics indicate that none of these women were using contraceptive pills (Fisher et al. 1991:4).

The discrepancy is that all 1,000 women should have been using the pill, yet none is doing so. The question is: why does the discrepancy exist?

Well, the fact is, a monsoon flood has prevented all new supplies of pills from reaching town- A, and all old supplies have been exhausted. Thus, although the problem situation exists, the reason for the problem is already known.

Therefore, assuming all the facts are correct, there is no reason to research the factors associated with pill discontinuation among women. This is, thus, a non-research problem.

A pilot survey by university students revealed that goiter prevalence among school children is as high as 80% in Rural Town-A, while in the neighboring Rural Town-B it is only 30%. Why is there a discrepancy?

Upon inquiry, it was seen that some three years back, UNICEF launched a lipiodol injection program in the neighboring Rural Town-B.

This program acted as a preventive measure against goiter. The reason for the discrepancy is known; hence, we do not consider the problem a research problem.

A hospital treated a large number of cholera cases with penicillin, but the treatment with penicillin was not found to be effective. Do we need research to know the reason?

Here again, there is one single reason: Vibrio cholerae is not sensitive to penicillin, and therefore penicillin is not the drug of choice for this disease.

In this case, too, as the reasons are known, it is unwise to undertake any study to find out why penicillin does not improve the condition of cholera patients. This is also a non-research problem.

In the tea marketing system , buying and selling tea starts with bidders. Blenders purchase open tea from the bidders. Over the years, marketing cost has been the highest for bidders and the lowest for blenders. What makes this difference?

The bidders pay exorbitantly high transport costs, which constitute about 30% of their total cost.

Blenders have significantly fewer marketing functions involving transportation, so their marketing cost remains minimal.

Hence no research is needed to identify the factors that make this difference.

Here are some of the problems we frequently encounter, which may well be considered non-research problems:

  • Rises in the prices of warm clothes during winter;
  • Preference for admission to public universities over private universities;
  • The crisis of accommodation in sea resorts during summer;
  • Traffic jams in city streets after office hours;
  • High sales in department stores after an offer of a discount.

Research Problem

In contrast to a non-research problem, a research problem is of primary concern to a researcher.

A research problem is a perceived difficulty, a feeling of discomfort, or a discrepancy between a common belief and reality.

As noted by Fisher et al. (1993), a problem will qualify as a potential research problem when the following three conditions exist:

  • There should be a perceived discrepancy between “what it is” and “what it should have been.” This implies that there should be a difference between “what exists” and the “ideal or planned situation”;
  • A question about “why” the discrepancy exists. This implies that the reason(s) for this discrepancy is unclear to the researcher (so that it makes sense to develop a research question); and
  • There should be at least two possible answers or solutions to the questions or problems.

The third point is important. If there is only one possible and plausible answer to the question about the discrepancy, then a research situation does not exist.

It is a non-research problem that can be tackled at the managerial or administrative level.

Research Problem Examples

Research Problem – Example #1

While visiting a rural area, the UNICEF team observed that some villages have female school attendance rates as high as 75%, while some have as low as 10%, although all villages should have a nearly equal attendance rate. What factors are associated with this discrepancy?

We may enumerate several reasons for this:

  • Villages differ in their socio-economic background.
  • In some villages, the Muslim population constitutes a large proportion of the total population. Religion might play a vital role.
  • Schools are far away from some villages. The distance thus may make this difference.

Because there is more than one answer to the problem, it is considered a research problem, and a study can be undertaken to find a solution.

Research Problem – Example #2

The Government has been making all-out efforts to ensure a regular flow of credit to rural areas at concessionary rates through a liberal lending policy and the establishment of many bank branches in rural areas.

Knowledgeable sources indicate that expected development in rural areas has not yet been achieved, mainly because of improper credit utilization.

More than one reason is suspected for such misuse or misdirection.

These include, among others:

  • Diversion of credit money to unproductive sectors;
  • Transfer of credit money to other people, such as money lenders, who exploit the rural people with this money; and
  • Lack of knowledge of the proper utilization of the credit.

Here too, reasons for misuse of loans are more than one. We thus consider this problem as a researchable problem.

Research Problem – Example #3

Let’s look at a news headline: Stock Exchange observes the steepest-ever fall in stock prices; several injured as retail investors clash with police; vehicles ransacked.

The investors’ demonstration, protest, and clash with police pose a problem. Still, it is certainly not a research problem, since there is only one known reason for it: the Stock Exchange experienced the steepest-ever fall in stock prices. But what caused this unprecedented fall in the share market?

Experts felt that no single reason could be attributed to the problem. It is a mix of several factors and is a research problem. The following were assumed to be some of the possible reasons:

  • The merchant banking system;
  • Liquidity shortage because of the hike in the rate of cash reserve requirement (CRR);
  • IMF’s warnings and prescriptions on the commercial banks’ exposure to the stock market;
  • Increase in supply of new shares;
  • Manipulation of share prices;
  • Lack of knowledge of the investors on the company’s fundamentals.

The choice of a research problem is not as easy as it appears. Researchers are generally guided by their:

  • own intellectual orientation,
  • level of training,
  • experience,
  • knowledge of the subject matter, and
  • intellectual curiosity.

Theoretical and practical considerations also play a vital role in choosing a research problem, as do societal needs.

Once we have chosen a research problem, a few more related steps must be followed before a decision is taken to undertake a research study.

These include, among others, the following:

  • Statement of the problem.
  • Justifying the problem.
  • Analyzing the problem.

A detailed exposition of these issues is undertaken in Chapter Ten, which discusses proposal development.

A clear and well-defined problem statement is considered the foundation for developing the research proposal.

It enables the researcher to systematically point out why the proposed research on the problem should be undertaken and what he hopes to achieve with the study’s findings.

A well-defined statement of the problem will lead the researcher to formulate the research objectives, understand the background of the study, and choose a proper research methodology.

Once the problem situation has been identified and clearly stated, it is important to justify the importance of the problem.

In justifying the problem, we ask questions such as why the problem of the study is important, how large and widespread it is, and whether others can be convinced of its importance.

Answers to the above questions should be reviewed and presented in one or two paragraphs that justify the importance of the problem.

As a first step in analyzing the problem, critical attention should be given to accommodating the viewpoints of managers, users, and researchers through thorough discussion.

The next step is identifying the factors that may have contributed to the perceived problems.

Issues of Research Problem Identification

There are several ways to identify, define, and analyze a problem, obtain insights, and get a clearer idea about these issues. Exploratory research is one of the ways of accomplishing this.

The purpose of the exploratory research process is to progressively narrow the scope of the topic and transform the undefined problems into defined ones, incorporating specific research objectives.

The exploratory study entails a few basic strategies for gaining insights into the problem. It is accomplished through such efforts as:

Pilot Survey

A pilot survey collects proxy data from the ultimate subjects of the study to serve as a guide for the larger study. A pilot study generates primary data, usually for qualitative analysis.

This characteristic distinguishes a pilot survey from secondary data analysis, which gathers background information.

Case Studies

Case studies are quite helpful in diagnosing a problem and paving the way to defining it. A case study investigates one or a few situations identical to the researcher’s problem.

Focus Group Interviews

Focus group interviews, unstructured free-flowing interviews with small groups of people, may also be conducted to understand and define a research problem.

Experience Survey

An experience survey is another strategy for identifying and defining the research problem.

It is an exploratory research endeavor in which individuals knowledgeable and experienced in a particular research problem are intimately consulted to understand the problem.

These persons are sometimes known as key informants, and an interview with them is popularly known as the Key Informant Interview (KII).

Step #2: Reviewing the Research Literature

A review of relevant literature is an integral part of the research process. It enables the researcher to formulate his problem in terms of the specific aspects of his general area of interest that have not been researched so far.

Such a review provides exposure to a larger body of knowledge and equips the researcher to follow the research process efficiently.

Through a proper review of the literature, the researcher can develop coherence between the results of his study and those of others.

A review of previous documents on similar or related phenomena is essential even for beginning researchers.

Ignoring the existing literature may lead to wasted effort on the part of the researchers.

Why spend time merely repeating what other investigators have already done?

Suppose the researcher is aware of earlier studies of his topic or related topics. In that case, he will be in a much better position to assess the significance of his own work and to convince others that it is important.

A confident and expert researcher will also be more critical in questioning others’ methodology, their choice of data, and the quality of the inferences drawn from their results.

In sum, we enumerate the following arguments in favor of reviewing the literature:

  • It avoids duplication of the work that has been done in the recent past.
  • It helps the researcher discover what others have learned and reported on the problem.
  • It enables the researcher to become familiar with the methodology followed by others.
  • It allows the researcher to understand what concepts and theories are relevant to his area of investigation.
  • It helps the researcher to understand if there are any significant controversies, contradictions, and inconsistencies in the findings.
  • It allows the researcher to understand if there are any unanswered research questions.
  • It might help the researcher to develop an analytical framework.
  • It will help the researcher consider including variables in his research that he might not have thought about.

Why is reviewing literature crucial in the research process?

Reviewing literature helps avoid duplicating previous work, discovers what others have learned about the problem, familiarizes the researcher with relevant concepts and theories, and ensures a comprehensive approach to the research question.

What is the significance of reviewing literature in the research process?

Reviewing relevant literature helps formulate the problem, understand the background of the study, choose a proper research methodology, and develop coherence between the study’s results and previous findings.

Step #3: Setting Research Questions, Objectives, and Hypotheses

After discovering and defining the research problem, researchers should make a formal statement of the problem leading to research objectives.

An objective will precisely say what should be researched, delineate the type of information that should be collected, and provide a framework for the scope of the study. A well-formulated, testable research hypothesis is the best expression of a research objective.

A hypothesis is an unproven statement or proposition that can be refuted or supported by empirical data. Hypothetical statements assert a possible answer to a research question.
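To make the idea of supporting or refuting a hypothesis with empirical data concrete, here is a minimal Python sketch. It is our own illustration, not part of the original text: it borrows the school-attendance example above, and the data, variable names, and use of Welch's t-statistic are assumptions for illustration only.

```python
# Minimal sketch: confronting a hypothesis with empirical data.
# Hypothetical hypothesis: "Villages with a nearby school have higher
# female attendance rates than distant villages." Data are invented.
from statistics import mean, stdev
from math import sqrt

near = [72, 75, 68, 80, 77]   # attendance %, school nearby
far = [35, 42, 28, 50, 33]    # attendance %, school far away

# Welch's t-statistic for two independent samples
t = (mean(near) - mean(far)) / sqrt(stdev(near) ** 2 / len(near)
                                    + stdev(far) ** 2 / len(far))
print(f"mean near = {mean(near):.1f}, mean far = {mean(far):.1f}, t = {t:.2f}")
# A large |t| (judged against the t-distribution) supports the hypothesis;
# a small |t| fails to support it. Empirical data can support or refute a
# hypothesis, but never prove it.
```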

Step #4: Choosing the Study Design

The research design is the blueprint or framework for fulfilling objectives and answering research questions.

It is a master plan specifying the methods and procedures for collecting, processing, and analyzing the collected data. There are four basic research designs that a researcher can use to conduct a study:

  • survey,
  • experiment,
  • secondary data study, and
  • observational study.

The type of research design to be chosen from among the above four methods depends primarily on four factors:

  • The type of problem;
  • The objectives of the study;
  • The existing state of knowledge about the problem being studied; and
  • The resources available for the study.

Step #5: Deciding on the Sample Design

Sampling is an important and separate step in the research process. The basic idea of sampling is to use a procedure that selects a relatively small number of items or portions (called a sample) from a universe (called the population) in order to draw conclusions about the whole population.

It contrasts with the process of complete enumeration, in which every member of the population is included.

Such a complete enumeration is referred to as a census.

A population is the total collection of elements about which we wish to make some inference or generalization.

A sample is a part of the population, carefully selected to represent that population. If certain statistical procedures are followed in selecting the sample, it should have the same characteristics as the population. These procedures are embedded in the sample design.

Sample design refers to the methods followed in selecting a sample from the population and the estimating technique, that is, the formula for computing the sample statistics.

The fundamental question is, then, how to select a sample.

To answer this question, we must have acquaintance with the sampling methods.

These methods are basically of two types:

  • probability sampling, and
  • non-probability sampling.

Probability sampling ensures that every unit in the target population has a known, nonzero probability of selection.

If there is no feasible alternative, a non-probability sampling method may be employed. The basis of selection then depends entirely on the researcher’s discretion; such approaches are called judgment sampling, convenience sampling, accidental sampling, or purposive sampling.

The most widely used probability sampling methods are simple random sampling, stratified random sampling, cluster sampling, and systematic sampling. They are classified by their representation basis and unit selection techniques.

Two other variations in wide use are multistage sampling and probability proportional to size (PPS) sampling.

Multistage sampling is most commonly used in drawing samples from very large and diverse populations.

The PPS sampling is a variation of multistage sampling in which the probability of selecting a cluster is proportional to its size, and an equal number of elements are sampled within each cluster.
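The differences between these sampling methods are easiest to see in code. The following Python sketch is our own illustration on a made-up population of 1,000 units; the strata, sample size, and field names are assumptions, and only the standard library is used.

```python
# Illustrative sketch of three probability sampling methods.
# The population, strata, and sample size are invented for this example.
import random

random.seed(42)
population = [{"id": i, "stratum": "urban" if i % 4 == 0 else "rural"}
              for i in range(1000)]
n = 100  # desired sample size

# 1. Simple random sampling: every unit has an equal chance of selection.
srs = random.sample(population, n)

# 2. Systematic sampling: random start, then every k-th unit.
k = len(population) // n
start = random.randrange(k)
systematic = population[start::k][:n]

# 3. Stratified random sampling: draw within each stratum in proportion
#    to the stratum's share of the population.
stratified = []
for name in ("urban", "rural"):
    stratum = [u for u in population if u["stratum"] == name]
    share = round(n * len(stratum) / len(population))
    stratified.extend(random.sample(stratum, share))

print(len(srs), len(systematic), len(stratified))  # -> 100 100 100
```

In a real study, the selection probabilities implied by the chosen method would feed into the estimating technique mentioned above when computing the sample statistics.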

Step #6: Collecting Data From the Research Sample

Data gathering may range from simple observation to a large-scale survey in any defined population. There are many ways to collect data. The approach selected depends on the objectives of the study, the research design, and the availability of time, money, and personnel.

With the variation in the type of data (qualitative or quantitative) to be collected, the method of data collection also varies.

The most common means of collecting quantitative data is the structured interview.

Studies that obtain data by interviewing respondents are called surveys. Data can also be collected using self-administered questionnaires. Telephone interviewing is another way in which data may be collected.

Other means of data collection include secondary sources, such as the census, vital registration records, official documents, previous surveys, etc.

Qualitative data are collected mainly through in-depth interviews, focus group discussions, Key Informant Interviews (KII), and observational studies.

Step #7: Processing and Analyzing the Collected Research Data

Data processing generally begins with the editing and coding of data. Data are edited to ensure consistency across respondents and to locate omissions, if any.

In survey data, editing reduces errors in the recording, improves legibility, and clarifies unclear and inappropriate responses. In addition to editing, the data also need coding.

Because it is impractical to place raw data into a report, alphanumeric codes are used to reduce the responses to a more manageable form for storage and future processing.

This coding process facilitates the processing of the data. The personal computer offers an excellent opportunity for data editing and coding processes.
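As a concrete illustration, here is a short Python sketch of editing and coding; it is our own example, and the codebook, categories, and raw responses are invented for illustration.

```python
# Illustrative sketch of editing (cleaning) and coding survey responses.
# The codebook and raw answers are invented for this example.
CODEBOOK = {"yes": "Y", "no": "N", "don't know": "DK"}

raw_responses = ["Yes", " no ", "DON'T KNOW", "maybe", ""]

def edit_and_code(response: str) -> str:
    """Edit (standardize case/spacing) a raw answer, then code it."""
    cleaned = response.strip().lower()       # editing step
    return CODEBOOK.get(cleaned, "MISSING")  # coding step

print([edit_and_code(r) for r in raw_responses])
# -> ['Y', 'N', 'DK', 'MISSING', 'MISSING']
```

Responses the codebook cannot classify are flagged rather than guessed, so omissions and inappropriate answers located during editing remain visible in later processing.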

Data analysis usually involves reducing accumulated data to a manageable size, developing summaries, searching for patterns, and applying statistical techniques for understanding and interpreting the findings in light of the research questions.

Further, based on this analysis, the researcher determines whether the findings are consistent with the formulated hypotheses and theories.

The techniques used in analyzing data may range from simple graphical techniques to very complex multivariate analyses depending on the study’s objectives, the research design employed, and the nature of the data collected.

As in the case of data collection methods, an analytical technique appropriate in one situation may not be suitable for another.
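At the simple end of that range, the Python sketch below (our own, with invented scores and group names) shows the kind of data reduction and pattern-searching described above: raw observations reduced to per-group summaries that can then be checked against the hypotheses.

```python
# Illustrative sketch of simple descriptive analysis: reduce raw scores
# to per-group summaries and look for a pattern. Data are invented.
from statistics import mean, stdev

scores = {
    "group_a": [61, 72, 58, 90, 75, 66],
    "group_b": [55, 59, 62, 57, 60, 58],
}

for group, values in scores.items():
    print(f"{group}: n={len(values)}  mean={mean(values):.1f}  "
          f"sd={stdev(values):.1f}  range={min(values)}-{max(values)}")
# A higher mean and wider spread in one group is a pattern worth testing
# formally (e.g., with the t-statistic sketched earlier) before
# interpreting it against the research hypotheses.
```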

Step #8: Writing the Research Report – Developing the Research Proposal, Writing the Report, Disseminating and Utilizing Results

The entire plan of a research study is set out in a document called a proposal or research proposal.

A research proposal is a work plan, prospectus, outline, offer, and statement of intent or commitment from an individual researcher or an organization to produce a product or render a service to a potential client or sponsor.

The proposal is prepared following the sequence of the research process. It tells us what will be done, how, where, and for whom.

It must also show the benefit of doing it. It always includes an explanation of the purpose of the study (the research objectives) or a definition of the problem.

It systematically outlines the particular research methodology and details the procedures utilized at each stage of the research process.

The end goal of a scientific study is to interpret the results and draw conclusions.

To this end, it is necessary to prepare a report and transmit the findings and recommendations to administrators, policymakers, and program managers so they can make decisions.

There are various types of research reports: term papers, dissertations, journal articles, papers for presentation at professional conferences and seminars, books, theses, and so on. The results of a research investigation prepared in any form are of little utility if they are not communicated to others.

The primary purpose of a dissemination strategy is to identify the most effective media channels to reach different audience groups with study findings most relevant to their needs.

The dissemination may be made through a conference, a seminar, a report, or an oral or poster presentation.

The style and organization of the report will differ according to the target audience, the occasion, and the purpose of the research. Reports should be developed from the client’s perspective.

A report is an excellent means of establishing the researcher’s credibility. At a bare minimum, a research report should contain sections on:

  • An executive summary;
  • Background of the problem;
  • Literature review;
  • Methodology;
  • Findings;
  • Discussion;
  • Conclusions; and
  • Recommendations.

The study results can also be disseminated through peer-reviewed journals published by academic institutions and reputed publishers both at home and abroad. The report should be properly evaluated before submission.

These journals have their own formats and editorial policies. Contributors can submit their manuscripts, adhering to those policies and formats, for possible publication.

There are now ample opportunities for researchers to publish their work online.

Many interesting studies have been conducted without ever affecting actual settings. Ideally, the concluding step of a scientific study is to plan for its utilization in the real world.

Although researchers are often not in a position to implement a plan for utilizing research findings, they can contribute by including in their research reports a few recommendations regarding how the study results could be utilized for policy formulation and program intervention.

Why is the dissemination of research findings important?

Dissemination of research findings is crucial because the results of a research investigation have little utility if not communicated to others. Dissemination ensures that the findings reach relevant stakeholders, policymakers, and program managers to inform decisions.

How should a research report be structured?

A research report should contain sections on an executive summary, background of the problem, literature review, methodology, findings, discussion, conclusions, and recommendations.

Why is it essential to consider the target audience when preparing a research report?

The style and organization of a research report should differ based on the target audience, occasion, and research purpose. Tailoring the report to the audience ensures that the findings are communicated effectively and are relevant to their needs.


7. COMMON DOCUMENT TYPES

7.3 Progress Reports

You write a progress report to inform a supervisor, associate, or client about progress you have made on a project over a specific period of time. Periodic progress reports are common on projects that go on for several months (or more). Whoever is paying for this project wants to know whether tasks are being completed on schedule and on budget. If the project is not on schedule or on budget, they want to know why and what additional costs and time will be needed.

Progress reports answer the following questions for the reader:

  •  How much of the work is complete?
  • What part of the work is currently in progress?
  • What work remains to be done?
  • When and how will the remaining work be completed?
  • What changes, problems or unexpected issues, if any, have arisen?
  • How is the project going in general?

Purpose of a Progress Report

The main function of a progress report is persuasive:  to reassure clients and supervisors that you are making progress, that the project is going smoothly, and that it will be completed by the expected date — or to give reasons why any of those might not be the case. They also offer the opportunity to do the following:

  • Provide a brief look at preliminary findings or in-progress work on the project
  • Give your clients or supervisors a chance to evaluate your work on the project and to suggest or request changes
  • Give you a chance to discuss problems in the project and thus to forewarn the recipients
  • Force you to establish a work schedule, so that you will complete the project on time.

Format of a Progress Report

Depending on the size of the progress report, the length and importance of the project, and the recipient, a progress report can take forms ranging from a short informal conversation to a detailed, multi-paged report. Most commonly, progress reports are delivered in the following forms:

  • Memo: a short, semi-formal report to someone within your organization (can range in length from 1-4 pages)
  • Letter: a short, semi-formal report sent to someone outside your organization
  • Formal report: a long, formal report sent to someone within or outside of your organization
  • Presentation: an oral presentation given directly to the target audience.

Organizational Patterns for Progress Reports

The recipient of a progress report wants to see what you’ve accomplished on the project, what you are working on now, what you plan to work on next, and how the project is going in general. The information is usually arranged with a focus either on time or on task, or a combination of the two:

  • Focus on time: shows time periods (previous, current, and future) and the tasks completed or scheduled to be completed in each period
  • Focus on specific tasks: shows the order of tasks (defined milestones) and the progress made in each time period
  • Focus on larger goals: focuses on the overall effect of what has been accomplished.

Information can also be arranged by report topic. You should refer to established milestones or deliverables outlined in your original proposal or job specifications. Whichever organizational strategy you choose, your report will likely contain the elements described below.

1. Introduction

Review the details of your project’s purpose, scope, and activities. The introduction may also contain the following:

  • date the project began; date the project is scheduled to be completed
  • people or organization working on the project
  • people or organization for whom the project is being done
  • overview of the contents of the progress report.

2. Project status

This section (which could have sub-sections) should give the reader a clear idea of the current status of your project. It should review the work completed, work in progress, and work remaining, organized into sub-sections by time, task, or topic. These sections might include:

  • Direct reference to milestones or deliverables established in previous documents related to the project
  • Timeline for when remaining work will be completed
  • Any problems encountered or issues that have arisen that might affect completion, direction, requirements, or scope.

3.  Conclusion

The final section provides an overall assessment of the current state of the project and its expected completion, usually reassuring the reader that all is going well and on schedule. It can also alert recipients to unexpected changes in direction or scope, or problems in the project that may require intervention.

4.  References section if required.

Technical Writing Essentials Copyright © by Suzan Last and UNH College of Professional Studies Online is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.


ORIGINAL RESEARCH article

Analyzing the efficacy of comprehensive testing: a comprehensive evaluation (provisionally accepted)

  • 1 Qassim University, Saudi Arabia

The final, formatted version of the article will be published soon.

This study aimed to examine the variations in comprehensive exam results in the English department at Qassim University in Saudi Arabia across six semesters, focusing on average score, range, and standard deviation, as well as overall student achievements. Additionally, it sought to assess the performance levels of male and female students in comprehensive tests and determine how they differ over the past six semesters. The research design utilized both analytical and descriptive approaches, with quantitative analysis of the data using frequency statistics such as mean, standard deviation, and range. The data consisted of scores from six consecutive exit exams. The findings reveal that male students scored slightly higher on average than female students, with minimal difference (p=.07). Moreover, male scores exhibited more variability and spread, indicating varying performance levels. These results suggest the need for further investigation into the factors that contribute to gender-based differences in test performance. Furthermore, longitudinal studies tracking individual student performance over multiple semesters could offer a more in-depth understanding of academic progress and the efficacy of comprehensive exam practices.

Keywords: comprehensive testing, EFL students, Evaluation, gender differences, quantitative research

Received: 15 Nov 2023; Accepted: 09 Apr 2024.

Copyright: © 2024 Alolaywi, Alkhalaf and Almuhilib. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Mx. Yasamiyan Alolaywi, Qassim University, Buraidah, Saudi Arabia



US Economy News Today: Bowman Sees Risk of Another Rate Hike If Inflation Progress Stalls


Welcome to Investopedia's economics live blog, where we explain what the day's news says about the state of the U.S. economy and how that's likely to affect your finances. Here we compile data releases, economic reports, quotes from expert sources and anything else that helps explain economic issues and why they matter to you.

Today, the economy gained jobs for the 39th consecutive month, coming in far above economists' expectations. Later, economists will get additional information on households' credit levels.

Fed Officials Say Higher Neutral Rate Could Mean Fewer Fed Rate Cuts

While all eyes are on how Federal Reserve officials seem to be diverging in opinion on the correct path ahead, they have also been steadily shifting their thoughts on where rates will ultimately end up.

During their last meeting, officials raised their long-term interest rate forecast for the first time since 2019, now projecting that interest rates will settle at a “neutral rate” of 2.6%.

This neutral rate represents where interest rates should be set under normal economic conditions to maintain low unemployment while keeping inflation at the Fed's 2% annual target.

Fed officials have said in the past that the neutral rate is difficult to calculate. But some economists suggest the neutral rate may be higher than the current estimates.

In that case, interest rates may not be high enough now to be sufficiently “restrictive." That could mean fewer rate cuts would be needed over time, or that rates could possibly need to be raised higher, former U.S. Treasury Secretary Larry Summers said today in an interview with Bloomberg.

“It seems the evidence is overwhelming that the neutral rate is far higher than the Federal Reserve supposes,” Summers said.

In addition to questioning the timing of rate cuts, some Fed officials are joining Summers in wondering whether their projections for the neutral rate are too low.

“In my view, given potential structural changes in the economy, like higher investment demand relative to available savings, it is quite possible that the level of the federal funds rate consistent with low and stable inflation will be higher than before the pandemic,” Federal Reserve Gov. Michelle Bowman said Friday.

In her remarks the same day at Duke University, Dallas Fed President Lorie Logan said several factors could suggest the neutral rate is higher than current estimates. Those include growing federal deficits, the energy transition, nearshoring the supply chain, and productivity increases spurred through technology like artificial intelligence (AI).

“As I formulate my views on appropriate policy, I’m taking evidence of sustained shifts in the neutral rate into account, alongside all the other evidence on the economic and financial outlook,” Logan said. 

Fed’s Bowman Raises Possibility of Rate Hikes, Says Not at Point for Rate Cuts

Citing the risks of long-term inflation on the economy, Federal Reserve Governor Michelle Bowman held open the possibility of further rate hikes in prepared comments given at the Manhattan Institute in New York.

“While it is not my baseline outlook, I continue to see the risk that at a future meeting we may need to increase the policy rate further should progress on inflation stall or even reverse,” she said.

While there was progress on inflation last year, she said, it was still unclear whether that would continue into 2024. The Fed was still not at a point where it was “appropriate” to lower interest rates, she added.

“Reducing our policy rate too soon or too quickly could result in a rebound in inflation, requiring further future policy rate increases to return inflation to 2% over the longer run,” she said. 

-Terry Lane

Fed’s Logan Says ‘Too Soon’ to Consider Rate Cuts

In highlighting worries over recent inflation readings, Dallas Federal Reserve President Lorie Logan said she had concerns high interest rates weren’t doing enough to reduce price pressures.

“I believe it’s much too soon to think about cutting interest rates. I will need to see more of the uncertainty resolved about which economic path we’re on,” Logan said in remarks to Duke University.

Logan’s comments come as the Federal Reserve held interest rates at 23-year highs at its last meeting but forecast three interest rate cuts in 2024. However, continued strong economic data has raised questions about whether inflation will remain too high to allow the Fed to cut rates.

So far, inflation reports in 2024 have been higher than expected. Logan described January’s inflation report as “disappointing” and February’s data as “better,” but still not great.

“To be clear, the key risk is not that inflation might rise—though monetary policymakers must always remain on guard against that outcome—but rather that inflation will stall out and fail to follow the forecast path all the way back to two percent in a timely way,” Logan said. 

Overall, Logan said businesses in her region were optimistic over a “soft landing,” with a survey showing that half of the firms surveyed in Texas expected demand to pick up during the next six months.

“The risk of an abrupt deterioration in economic activity appears to be fading,” she said.

Yellen Visits China, Talks Manufacturing Capacity

U.S. Treasury Secretary  Janet Yellen  said on Friday that concerns are growing over the global economic fallout from China's excess manufacturing capacity as she kicked off a four-day visit there.

"Overcapacity can lead to large volumes of exports at depressed prices," she said in a speech in the southern city of Guangzhou, noting that overcapacity would undercut American businesses, as well as those globally, including those from India and Mexico. "And it can lead to over concentration of supply chains, posing a risk to global economic resilience," she said.

She said the European Union had launched an investigation into subsidies for Chinese electric vehicle exports to Europe, and that many countries see overcapacity from China "really growing as a threat" and are "quite determined to make sure that their industries—particularly EVs, batteries, solar—do not go out of business because of artificially cheap exports from China."

"Overcapacity isn’t a new problem, but it is intensified and we are seeing emerging risks in new sectors; specifically, direct and indirect government support is currently leading to production capacity that significantly exceeds China’s domestic demand as well as what the global market can bear," she said. Apart from saying China’s factories risk producing more than the world can easily absorb, she also criticized China’s government for “unfair” treatment of American and other foreign companies.


-Fatima Attarwala

Jobs Surprise Economists With More Growth Than Anticipated

The economy gained 303,000 jobs in March, according to a report from the Bureau of Labor Statistics released Friday. That's far more than the 200,000 expected by economists surveyed by Dow Jones Newswires and The Wall Street Journal. It's up from February's revised 270,000 and still far above the 191,000 average in the five years leading up to the pandemic.

The economy has gained jobs every month since December 2020, with March marking the 39th straight month of job gains.

The  unemployment rate declined to 3.8% from 3.9%, not far from the 50-year low of 3.4% it reached last April. 

Today's report is the latest evidence of the economy’s resilience under the weight of the Federal Reserve’s high interest rates, which have kept borrowing costs on all kinds of loans at their highest in decades to restrain inflation.



Federal Reserve Board. "Summary of Economic Projections."

X.com. "@LHSummers, Apr 5, 2024, 12:36 PM."

Federal Reserve Board. "Risks and Uncertainty in Monetary Policy: Current and Past Considerations."

Dallas Federal Reserve. "Sustainably Restoring Price Stability: Progress So Far and Risks Ahead."

Bureau of Labor Statistics. "Employment Situation Summary."

Federal Reserve Economic Data. "All Employees, Total Nonfarm."


We researched the astrology of a solar eclipse so you don’t have to. Here's what we found

Astrologers predict people will experience big changes, transformations or breakthroughs.


The solar eclipse is fast approaching. Also known as the “Great North American Eclipse” for its path of totality passing through several cities in the United States, next week’s eclipse will attract visitors to various parts of New York state due to the state’s proximity to the path of totality.

Next week’s eclipse is not only historically significant, but it is also significant astrologically. Eclipses are considered special, highly valued events that have great impacts on people’s lives and behaviors. Astrologers across the internet expect patterns of change and transformation to emerge in people’s lives as a result of the upcoming eclipse .

Before getting into those details, let’s cover some basics.


Why does astrology matter for the eclipse?

According to the website of West Texas A&M University physics professor Dr. Christopher S. Baird, astrology is “the belief that the alignment of stars and planets affects every individual's mood, personality, and environment, depending on when (they were) born.” Because astrology is so rooted in the positioning of stars and planets, a total solar eclipse – when the moon travels between the earth and the sun, completely blocking all light from the sun temporarily – is significant.


The astrological significance of a solar eclipse

“Eclipses have significant astrological influence,” Lauren Ash writes for Horoscope.com, “often highlighting turning points in a person’s life and bringing about profound changes and opportunities for growth.”

According to Ash, solar eclipses “symbolize new beginnings and offer opportunities for renewal.” They are powerful forces that bring about big changes that can impact the direction of people’s lives, personal aspirations, ambitions or self-image.

Astrologers also acknowledge that the impacts of eclipses are not necessarily predictable. “They are naturally erratic,” Lisa Stardust writes for Teen Vogue, “making the universe's information unpredictable, exciting, and unforeseen.”

Eclipses are known for their ability to “illuminate matters that need to be worked on,” Lisa Stardust continues. “Eclipses hold a special place in astrology, as they are believed to be moments when we step into our fate and destiny.”

Astrologer Susan Miller, who founded AstrologyZone.com, agrees.

“They are some of the most dramatic tools that the universe uses to get us to pay attention to areas in our life that need to change,” Miller writes. “Vast changes will come to your doorstep.”


What to expect from the upcoming eclipse, astrologically speaking

Next week’s eclipse is expected to similarly bring transformation and progress, according to Ash, Stardust, and Miller. All three astrologers write to expect big, bold changes, fresh starts, or breakthroughs.

“It's time to release,” Stardust writes. “Let go of whatever's not working for us or aligned with ourselves and bring in every new beginning that comes from some other beginning's end.”  

“After all,” Stardust continues, “This is a total solar eclipse of the heart, so we are moving to what we desire and love.”

But the changes may not occur all at once. Miller writes, “The changes could happen instantly, but they also can occur over a period of months.”

Activities to help embrace astrologers’ predictions for the eclipse

Astrologers have several suggestions for those interested in understanding and embracing the Great North American Eclipse’s effects. Astrologer Kerry King, who wrote about the impacts of an eclipse for Metro UK in 2022, recommends engaging in activities like meditation or journaling, spending time in nature, or spending time with the people you value most during an eclipse.


