UCLA Office of the Human Research Protection Program

Conducting Risk-Benefit Assessments and Determining Level of IRB Review

Regulatory background.

Investigators should understand the concept of minimizing risk when designing research and conduct a risk-benefit assessment to determine the level of IRB review of the research. In the protocol application the Investigator should:

  • Assess potential risks and discomforts associated with each intervention or research procedure;
  • Estimate the probability that a given harm may occur and its severity;
  • Explain measures that will be taken to prevent and minimize potential risks and discomforts;
  • Describe the benefits that may accrue directly to subjects; and
  • Discuss the potential societal benefits that may be expected from the research.

Risks to subjects who participate in research should be justified by the anticipated benefits to the subject or society. This requirement is found in all codes of research ethics and is a central requirement of the Federal regulations (45 CFR 46.111 and 21 CFR 56.111). Two of the required criteria for granting IRB approval of the research are:

  • Risks to subjects are  minimized  by using procedures which are consistent with sound research design and which do not unnecessarily expose subjects to risk, and whenever appropriate, by using procedures already being performed on the subjects for diagnostic or treatment purposes.
  • Risks to subjects are  reasonable  in relation to anticipated benefits, if any, to subjects, and the importance of the knowledge that may reasonably be expected to result. In evaluating risks and benefits, the IRB Committee will consider  only those risks and benefits that may result from the research , as distinguished from risks and benefits of therapies subjects would receive even if not participating in the research.

Definitions

Benefit:  A helpful or good effect, something intended to help, promote or enhance well-being; an advantage.

Risk: The probability of harm or injury (physical, psychological, social, or economic) occurring as a result of participation in a research study. Both the probability and magnitude of possible harm may vary from minimal to significant.

Minimal Risk:  A risk is minimal when "the probability and magnitude of harm or discomfort anticipated in the proposed research are not greater in and of themselves than those ordinarily encountered in daily life of the general population or during the performance of routine physical or psychological examinations or tests." Examples of procedures that typically are considered no more than minimal risk include: collection of blood or saliva, moderate exercise, medical record chart reviews, quality of life questionnaires and focus groups. See Expedited review categories for a complete listing.

Minimal Risk for Research involving Prisoners:  The definition of minimal risk for research involving prisoners differs somewhat from that given for non-institutionalized adults. A risk is minimal when the probability and magnitude of physical or psychological harm do not exceed those "normally encountered in the daily lives, or in the routine medical, dental or psychological examinations of healthy persons."

Privacy:  Privacy is about people and their sense of being in control of others' access to them or to information about themselves.

Confidentiality:  Confidentiality is about how identifiable, private information that has been disclosed to others is used and stored. People share private information in the context of research with the expectation that it will be kept confidential and will not be divulged except in ways that have been agreed upon.

Types of Risks to Research Subjects

Physical Harms:  Medical research often involves exposure to pain, discomfort, or injury from invasive medical procedures, or harm from possible side effects of drugs, devices or new procedures. All of these should be considered "risks" for purposes of IRB review.

  • Some medical research is designed only to measure the effects of therapeutic or diagnostic procedures applied in the course of caring for an illness. Such research may not entail any significant risks beyond those presented by medically indicated interventions.
  • Research designed to evaluate new drugs, devices or procedures typically presents more than minimal risk and may involve unforeseeable risks that could cause serious or disabling injuries.

Psychological Harms:  Participation in research may result in undesired changes in thought processes and emotion (e.g., episodes of depression, confusion, feelings of stress, guilt, and loss of self-esteem). Most psychological risks are minimal or transitory, but some research has the potential for causing serious psychological harm.

  • Stress and feelings of guilt or embarrassment may arise from thinking or talking about one's own behavior or attitudes on sensitive topics such as drug use, sexual preferences, selfishness, and violence.
  • Stress may be induced when the researchers manipulate the subjects' environment to observe their behaviors and reactions. The possibility of psychological harm is heightened when behavioral research involves an element of deception.

Social and Economic Harms:  Some losses of privacy and breaches of confidentiality may result in embarrassment within one's business or social group, loss of employment, or criminal prosecution.

  • Areas of particular sensitivity involve information regarding alcohol or drug abuse, mental illness, illegal activities, and sexual behavior.
  • Some social and behavioral research may yield information about individuals that could be considered stigmatizing to individual subjects or groups of subjects. (e.g., as actual or potential carriers of a gene; individuals prone to alcoholism). Confidentiality safeguards must be strong in these instances.
  • Participation in research may result in additional actual costs to individuals. Any anticipated costs to research participants should be described to prospective subjects during the consent process.

Privacy Risks:  Loss of privacy in the research context usually involves either covert observation or participant observation of behavior that the subjects consider private. It can also involve access to and use of private information about the subjects. The IRB must make two determinations:

  • Is the loss of privacy involved acceptable in light of the subjects' reasonable expectations of privacy in the situation under study; and
  • Is the research question of sufficient importance to justify the intrusion?

Breach of Confidentiality Risks:  Absolute confidentiality cannot be guaranteed and is always a potential risk of participation in research. A breach of confidentiality is sometimes confused with loss of privacy, but it is a different risk. Loss of privacy concerns access to private information about a person, or to a person's body or behavior, without consent; confidentiality of data concerns safeguarding information that has been given voluntarily by one person to another. It is important to recognize that a breach of confidentiality may result in psychological harm to individuals (embarrassment, guilt, stress, etc.) or in social harm.

Conducting Risk-Benefit Assessments

Role of the Investigator:  When designing research studies, investigators are responsible for conducting an initial risk-benefit assessment using the steps outlined in the diagram below.

Role of the IRB:  The IRB ultimately is responsible for evaluating the potential risks and weighing the probability of the risk occurring and the magnitude of harm that may result. It must then judge whether the anticipated benefit, either of new knowledge or of improved health for the research subjects, justifies asking any person to undertake the risks. The IRB cannot approve research in which the risks are judged unreasonable in relation to the anticipated benefits. The IRB must:

  • Identify the risks associated with the research, as distinguished from the risks of therapies the subjects would receive even if not participating in research;
  • Determine that the risks will be minimized to the extent possible;
  • Identify the probable benefits to be derived from the research;
  • Determine that the risks are reasonable in relation to the benefits to subjects, if any, and the importance of the knowledge to be gained; and
  • Assure that potential subjects will be provided with an accurate and fair description (during consent) of the risks or discomforts and the anticipated benefits.

Diagram 1: Steps for Conducting a Risk-Benefit Assessment


Ways to Minimize Risk

  • Provide complete information in the protocol regarding the experimental design and the scientific rationale underlying the proposed research, including the results of previous animal and human studies.
  • Assemble a research team with sufficient expertise and experience to conduct the research.
  • Ensure that the projected sample size is sufficient to yield useful results.
  • Collect data from conventional (standard) procedures to avoid unnecessary risk, particularly for invasive or risky procedures (e.g., spinal taps, cardiac catheterization).
  • Incorporate adequate safeguards into the research design, such as an appropriate data and safety monitoring plan and the presence of trained personnel who can respond to emergencies.
  • Store data in such a way that research data cannot be connected directly to the individuals from whom, or about whom, they were collected; limit access to key codes and store them separately from the data.
  • Incorporate procedures to protect the confidentiality of the data (e.g., encryption, codes, and passwords) and follow UCLA IRB guidelines on  Data Security in Research .
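The key-code practice in the last two bullets can be sketched in code. This is a minimal illustration only; the record structure and field names are our assumptions, not UCLA requirements.

```python
import secrets

def code_records(records):
    """Split identified records into (1) a de-identified dataset and
    (2) a key code mapping subject codes back to identities, which
    should be stored separately from the data with restricted access."""
    key_code = {}
    deidentified = []
    for rec in records:
        code = secrets.token_hex(4)  # random, meaningless subject code
        key_code[code] = rec["name"]
        # Copy everything except the direct identifier into the dataset.
        data = {k: v for k, v in rec.items() if k != "name"}
        data["subject_code"] = code
        deidentified.append(data)
    return deidentified, key_code

# The analysis dataset carries only codes, never direct identifiers.
data, keys = code_records([{"name": "Jane Doe", "score": 7}])
```

Only someone with access to both the dataset and the separately stored key code can re-identify a subject, which is the point of the safeguard.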

Levels of IRB Review

Exempt research.

Although the category is called "exempt," this type of research does require IRB review and registration. The exempt registration process is much less rigorous than an expedited or full-committee review. To qualify, research must fall into one of the eight federally defined exempt categories. These categories present the lowest amount of risk to potential subjects because, generally speaking, they involve either the collection of anonymous or publicly available data or the conduct of research procedures with the least potential for harm. For additional information see OHRPP Exempt Guidance.

  • Anonymous surveys or interviews
  • Passive observation of public behavior without collection of identifiers
  • Retrospective chart reviews with no recording of identifiers
  • Analyses of discarded pathological specimens without identifiers

Expedited Research

To qualify for expedited review, research must be no more than minimal risk and must fall into one of the nine federally defined expedited categories. These categories involve collection of samples and data in a manner that is not anonymous and that involves no more than minimal risk to subjects. For additional information see OHRPP Expedited Guidance.

  • Surveys and interviews with collection of identifiers
  • Collection of biological specimens (e.g., hair, saliva) for research by noninvasive means
  • Collection of blood samples from healthy volunteers
  • Studies of existing pathological specimens with identifiers

Full Board Research

Proposed human subject research that does not fall into either the exempt or expedited review categories must be submitted for full committee review. This is the most rigorous level of review and, accordingly, is used for research projects that present greater than minimal risk to subjects. The majority of biomedical protocols submitted to the IRB require full Committee review. For additional information see  OHRPP Full Board Guidance .

  • Clinical investigations of drugs and devices
  • Studies involving invasive medical procedures or diagnostics
  • Longitudinal interviews about illegal behavior or drug abuse
  • Treatment interventions for suicidal ideation and behavior
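As a rough illustration of the triage this section describes, the logic might be sketched as below. This is a hypothetical simplification: in practice, whether a study fits an exempt or expedited category is a judgment made against the federally defined lists, and the IRB makes the final determination.

```python
def review_level(minimal_risk: bool, fits_exempt_category: bool,
                 fits_expedited_category: bool) -> str:
    """Hypothetical triage sketch of the three levels of IRB review.
    The boolean inputs stand in for judgments that are not boolean
    in practice."""
    if minimal_risk and fits_exempt_category:
        return "exempt registration"
    if minimal_risk and fits_expedited_category:
        return "expedited review"
    # Anything else, including all greater-than-minimal-risk research,
    # goes to the full committee.
    return "full board review"

# A clinical drug trial presents greater than minimal risk:
review_level(False, False, False)  # -> "full board review"
```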

Regulations and References

  • DHHS 45 CFR 46.110
  • DHHS 45 CFR 46.111(a)(1-2)
  • FDA 21 CFR 56.110
  • FDA 21 CFR 56.111(a)(1-2)
  • OHRP IRB Guidebook, Chapter 3: Basic IRB Review, Section A, Risk/Benefit Analysis
Open access. Published: 20 April 2012.

The risk-benefit task of research ethics committees: An evaluation of current approaches and the need to incorporate decision studies methods

Rosemarie D. L. C. Bernabe, Ghislaine J. M. W. van Thiel, Jan A. M. Raaijmakers & Johannes J. M. van Delden

BMC Medical Ethics, volume 13, Article number: 6 (2012)


Research ethics committees (RECs) are tasked to assess the risks and the benefits of a trial. Currently, two procedure-level approaches are predominant: the Net Risk Test and the Component Analysis.

By looking at decision studies, we see that both procedure-level approaches conflate the various risk-benefit tasks, i.e., risk-benefit assessment, risk-benefit evaluation, risk treatment, and decision making. This conflation makes the RECs’ risk-benefit task confusing, if not impossible. We further realize that RECs are not meant to do all the risk-benefit tasks; instead, RECs are meant to evaluate risks and benefits, appraise risk treatment suggestions, and make the final decision.

As such, research ethics would benefit from looking beyond the procedure-level approaches and allowing disciplines like decision studies to be involved in the discourse on RECs’ risk-benefit task.


Research ethics committees (RECs) are tasked to do a risk-benefit assessment of proposed research with human subjects for at least two reasons: to verify the scientific/social validity of the research, since unscientific research is also unethical research; and to ensure that the risks that the participants are exposed to are necessary, justified, and minimized [1].

Since 1979, specifically through the Belmont Report, the requirement for a "systematic, nonarbitrary analysis of risks and benefits" has been called for, though up to the present, commentaries about the lack of a generally acknowledged suitable risk-benefit assessment method continue [1]. The US National Bioethics Advisory Commission (US-NBAC), for example, stated the following in its 2001 report on Ethical and Policy Issues in Research Involving Human Participants:

"An IRB's¹ assessment of risks and potential benefits is central to determining that a research study is ethically acceptable and would protect participants, which is not an easy task, because there are no clear criteria for IRBs to use in judging whether the risks of research are reasonable in relation to what might be gained by the research participant or society [2]."

¹ An institutional review board (IRB) is synonymous with an ethics committee. For consistency's sake, we shall use REC throughout this paper.

The lack of universally accepted risk-benefit assessment criteria does not mean that the research ethics literature says nothing about them. Within this same 2001 report, the US-NBAC recommended Weijer and Miller's Component Analysis to RECs for evaluating clinical research. As a reaction to Weijer and P. Miller, Wendler and F. Miller proposed the Net Risk Test. For convenience's sake, we shall use the term "procedure-level approaches" [3] to refer to the models of Weijer et al. and Wendler et al.

In spite of their ideological differences, both procedure-level approaches are procedural in the sense that both propose a step-by-step process for doing the risk-benefit assessment. In this paper, we shall not tackle their differences; rather, we are more interested in their similarities. We are of the position that both approaches fall short of providing an evaluation procedure that is systematic and nonarbitrary precisely because they conflate the various risk-benefit tasks, i.e., risk-benefit analysis, risk-benefit evaluation, risk treatment, and decision making [4-6]. As such, we recommend clarifying what these individual tasks refer to, and to whom these tasks must go. Lastly, we shall assert that RECs would benefit by looking into the current inputs of decision studies on the various risk-benefit tasks.

The procedure-level approaches

Charles Weijer and Paul Miller's Component Analysis (Figure 1) requires research protocol procedures or "components" to be evaluated separately, since the probable benefits of one component must not be used to justify the risks that another component poses [2]. In this system, RECs would need to make a distinction between procedures in the protocol that are with and those that are without therapeutic warrant, since therapeutic procedures would need to be analyzed differently compared to those that are non-therapeutic. It works on the assumption that a therapeutic warrant, that is, the reasonable belief that participants may directly benefit from a procedure, would justify more risks for the participants [7]. As such, therapeutic procedures ought to be evaluated based on the following conditions, in chronological order: that clinical equipoise exists, that is, that there is an "honest professional disagreement in the community of expert practitioners as to the preferred treatment" [8]; the "procedure is consistent with competent care; and risk is reasonable in relation to potential benefits to subjects" [7]. Non-therapeutic procedures, on the other hand, would need to be evaluated on the following conditions: the "risks are minimized and are consistent with sound scientific design; risks are reasonable in relation to knowledge to be gained; and if vulnerable population is involved, (there must be) no more than minor increase over minimal risk" [7]. Lastly, the REC would need to determine if both therapeutic and non-therapeutic procedures are acceptable [7]. If all components "pass", then the "research risks are reasonable in relation to anticipated benefits" [7].

Figure 1: Component Analysis [7, 9].
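The component-by-component flow described above can be sketched as follows. The function names and boolean inputs are our own simplification of judgments that RECs make deliberatively, not mechanically.

```python
def therapeutic_ok(clinical_equipoise, competent_care,
                   risk_reasonable_vs_benefit):
    # A therapeutic procedure passes only if all three conditions hold.
    return clinical_equipoise and competent_care and risk_reasonable_vs_benefit

def nontherapeutic_ok(risks_minimized, risk_reasonable_vs_knowledge,
                      vulnerable=False, minor_increase_only=True):
    # Non-therapeutic procedures face the knowledge-based test; with a
    # vulnerable population, no more than a minor increase over minimal risk.
    ok = risks_minimized and risk_reasonable_vs_knowledge
    return ok and (minor_increase_only if vulnerable else True)

def component_analysis(therapeutic, nontherapeutic):
    # The study passes only if every component, judged separately, passes:
    # benefits of one component never offset risks of another.
    return all(therapeutic_ok(*c) for c in therapeutic) and \
           all(nontherapeutic_ok(*c) for c in nontherapeutic)
```

The separation into two functions mirrors the approach's central claim: therapeutic and non-therapeutic components answer to different tests.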

David Wendler and Franklin Miller, on the other hand, developed the Net-Risk Test (Figure 2) as a reaction to the Component Analysis. This system requires RECs to first "minimize the risks of all interventions included in the study" [10]. After which, the REC ought to review the remaining risks by first looking at each intervention in the study, and evaluating if the intervention "offers a potential for clinical benefit that compensates for its risks and burdens" [10]. If an intervention does offer a potential benefit that can compensate for the risks, then the intervention is acceptable; otherwise, the REC would need to determine whether the net risk is "sufficiently low and justified by the social value of the intervention" [10]. By net risk, they refer to the "risks of harm that are not, or not entirely, offset or outweighed by the potential clinical benefits for participants" [11]. If the net risks are sufficiently low and are justified by the social value of the intervention, then the intervention is acceptable; otherwise, it is not. Lastly, the REC would need to "calculate the cumulative net risks of all the interventions…and ensure that, taken together, the cumulative net risks are not excessive" [10].

Figure 2: The Net Risk Test [10].
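The per-intervention logic of the Net Risk Test can be sketched as below. This is our illustrative simplification: "sufficiently low", "justified", and "not excessive" are REC judgments, not booleans.

```python
def intervention_acceptable(benefit_compensates_risk: bool,
                            net_risk_low: bool,
                            social_value_justifies: bool) -> bool:
    # First test: potential clinical benefit compensates for the
    # intervention's risks and burdens.
    if benefit_compensates_risk:
        return True
    # Otherwise the net risk must be sufficiently low AND justified
    # by the social value of the intervention.
    return net_risk_low and social_value_justifies

def net_risk_test(interventions, cumulative_net_risk_acceptable: bool) -> bool:
    # Every intervention must pass, and the cumulative net risks,
    # taken together, must not be excessive.
    return all(intervention_acceptable(*i) for i in interventions) \
        and cumulative_net_risk_acceptable
```

Note how the final cumulative check is separate from the per-intervention checks, matching the last step of the test as described above.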

Recently, Rid and Wendler elaborated the Net Risk Test through a seven-step framework (see Figure 3) that is meant to offer chronological, "systematic and comprehensive guidance" for the risk-benefit evaluations of RECs [11]. As Figure 3 shows, most of the steps are the same as those of the previously explained Net Risk Test; the main addition of the framework is the first step, which is to ensure and enhance the study's social value. In this first step, Rid and Wendler meant that RECs, at the start of their risk-benefit evaluation, ought to "ensure the study methods are sound"; "ensure that the study passes a minimum threshold of social value"; and "enhance the knowledge to be gained from the study" [11]. Only after the social value of the study has been identified, evaluated, and enhanced can the RECs identify the individual interventions and then go through the other steps, i.e., the steps we have earlier discussed in the Net Risk Test.

Figure 3: Seven-step framework for risk-benefit evaluations in biomedical research [11].

The procedure-level approaches and the conflation of risk-benefit analysis, risk-benefit evaluation, risk treatment, and decision making

These procedure-level approaches may be credited for providing some form of framework for the risk-benefit assessment tasks of RECs. They have also provided RECs with a framework that includes and puts into perspective certain ethical concepts that may or may not have been considered in REC evaluations, but are now procedurally necessary concepts. Weijer and Miller, for example, made it necessary for RECs to always consider therapeutic warrant, equipoise, and minimal risk when evaluating the risk-benefit balance of a study. Wendler and Miller, on the other hand, provided RECs with the concept of net risk. In spite of these contributions, these approaches presuppose (perhaps unwittingly) that risk-benefit analysis, risk-benefit evaluation, risk treatment, and decision making can all be conflated. This, in our view, is a major error that ought to be corrected, since from it flow other problems that unavoidably make the procedures unsystematic and arbitrary. To substantiate our view, we first have to make a necessary detour by discussing the distinction between risk-benefit analysis, risk-benefit evaluation, risk treatment, and decision making [4, 5]. After that, we shall show how the conflation is present in the procedure-level approaches and how it leads to difficult problems.

Distinction between risk-benefit analysis, risk-benefit evaluation, risk treatment, and decision making

Decisions on benefits and risks in fact involve four activities: risk-benefit analysis, risk-benefit evaluation, risk treatment, and decision making [4-6]. In the current debate, these terms are used as if they were interchangeable. Precisely because these four activities make four different demands, it must be made clear that the problem is not merely one of terminological preference; that is, the problem cannot be solved by simply "agreeing" to use one term over another. In risk studies, the risk-benefit task concretely demands four separate activities [4, 6]. Hence, these terms are not interchangeable, and their order must be chronological. The distinctions among these tasks and the necessity of their chronological ordering are as follows.

Risk-benefit analysis refers to the "systematic use of information to identify initiating events, causes, and consequences of these initiating events, and express risk (and benefit)" [4]. Thus, risk-benefit analysis refers to 1) gathering risk and benefit events, causes, and consequences; and 2) presenting this wealth of information in a systematic and comprehensive way, in accordance with the purpose for which such information is systematized in the first place. There are a number of risk analysis methods, such as fault tree analysis, event tree analysis, Bayesian networks, Monte Carlo simulation, and others [4]. The multi-criteria decision analysis (MCDA) method, mentioned by the EU Committee for Medicinal Products for Human Use (CHMP) in the Reflection Paper on Benefit Risk Assessment Methods in the Context of the Evaluation of Marketing Authorization Applications of Medicinal Products for Human Use [12], proposes the use of a value tree in analyzing the risk-benefit balance of a drug, for example. Adjusted to drug trials, a drug trial risk-benefit analysis value tree could look like the one in Figure 4.

Figure 4: Risk-benefit analysis value tree.

In this value tree (Figure 4), we used King and Churchill's typology of harms and benefits [1]. From each of the branches, the risk analyst would fill in information about a specific study. Of course, there could be more than one input under each category, depending on the nature of the drug trial being analyzed. Also, this value tree serves as an example; it is not the only way that benefits and risks may be analyzed within the context of drug trials. The best way to analyze risks and benefits within this context is something that ought to be further discussed and developed. Our aim is simply to show that a method such as a value tree is capable of encapsulating and framing the multidimensional nature of the causes and consequences of the benefits and risks of a study within one "tree." This provides a functional risk-benefit picture from which the risks and the benefits may be evaluated, i.e., risk-benefit evaluation.
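One way such a value tree might be represented for analysis is as a nested mapping that the analyst fills in per study. The branch labels below are our own generic assumptions for illustration, not King and Churchill's exact categories.

```python
# Skeleton of a risk-benefit analysis value tree; the leaf lists are
# filled in by the analyst for a specific trial. Branch labels are
# illustrative assumptions.
value_tree = {
    "benefits": {
        "to participants": [],   # e.g., direct clinical benefit
        "to society": [],        # e.g., generalizable knowledge
    },
    "risks": {
        "physical": [],
        "psychological": [],
        "social/economic": [],
    },
}

def leaves(tree):
    """Flatten the tree into ((branch, category), entries) pairs so
    every input can be reviewed, and later scored, in one pass."""
    for branch, sub in tree.items():
        for category, entries in sub.items():
            yield (branch, category), entries

# Analyst fills in a study-specific input:
value_tree["risks"]["physical"].append("injection-site pain")
```

Flattening the tree this way is what makes the subsequent evaluation step (assigning weights and scores to each input) mechanical.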

Risk-benefit evaluation refers to the "process of comparing risk (and benefit) against given risk (and benefit) criteria to determine the significance of the risk (and the benefit)" [4]. There are a number of methods to evaluate benefits and risks. Within the MCDA model, for example, the "identification of the risk-benefit criteria; assessment of the performance of each option against the criteria; the assignment of weight to each criterion; and the calculation of the weighted scores at each level and the calculation of the overall weighted scores" [13] would constitute risk evaluation. The multiattribute utility theory (MAUT) is yet another example of an evaluation method. MAUT is a theory that is basically "concerned with making tradeoffs among different goals" [14]. This theory factors in human values, values defined as "the functions to use to assign utilities to outcomes" [14]. From the value tree "inputs," the evaluator would then need to assign weights to each of these inputs. The purpose of plugging in weights is to establish the importance of each input, according to the evaluators. This is tantamount to establishing criteria, or identifying and making explicit the evaluators' definition of acceptable risk. Next, the evaluators would need to plug in numerical values as the utility values of those that are being evaluated. These values would be multiplied by the weights. The latter values, when summed, would constitute the total utility value. To illustrate, if an REC wishes to evaluate a psychotropic study drug against the standard drug, it may come up with a MAUT chart like Table 1.

Just like the value tree, our purpose is not to endorse only one way of doing the evaluation. Our purpose is merely to illustrate that such a decision study tool is capable of explicitly showing the following: a) the inputs that the evaluators think must play a role in the evaluation; b) the values of the evaluators, through the scores they have provided; c) the importance they give to each of the factors/inputs, through the weights that they have provided; d) how the things compared (in this case, the study drug and the standard drug) fare given a, b, and c; and e) a global perspective of what a, b, c, and d amount to, i.e., through the total utility value.
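The weighted-sum arithmetic behind such a MAUT chart can be made explicit. The criteria, weights, and utility scores below are invented for illustration; in practice each would come from the REC's own deliberation.

```python
def total_utility(weights, utilities):
    """MAUT aggregate: sum over criteria of weight * utility score.
    Weights make explicit how important each criterion is to the
    evaluators; scores express how an option fares on that criterion."""
    assert weights.keys() == utilities.keys()
    return sum(weights[c] * utilities[c] for c in weights)

# Invented example comparing a study drug with the standard drug on
# two criteria, using integer weights and 0-10 utility scores.
weights = {"efficacy": 3, "side_effect_profile": 2}
study_drug = {"efficacy": 8, "side_effect_profile": 5}     # 3*8 + 2*5 = 34
standard_drug = {"efficacy": 6, "side_effect_profile": 7}  # 3*6 + 2*7 = 32
```

As the text notes, the value of such totals lies less in the numbers themselves than in surfacing where evaluators' weights and scores diverge.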

In the risk-benefit literature in research ethics, we find statements that such an algorithm is undesirable because it "yields one and only one verdict about the risk-benefit profile of each possible protocol" [11]. On this issue, the CHMP's Reflection is instructive. The scores in quantitative evaluations are valuable not because of some absolute value, but because these scores can

"…focus the discussion by highlighting the divergences between the assessors and stakeholders concerning choice for weights. The benefit of such analysis methods is that the degree and nature of these divergences can be assessed, even in advance of any compound’s review. The same method might be used with the weights (e.g., of different stakeholders) and make both the differences and the consequences of those differences more explicit. If the analyses agree, decision-makers can be more comfortable with a decision. If the analyses disagree, exact sources of the differences in view will be identified, and this will help focus the discussion on those topics [12]."

Thus, the scores are meant to allow the evaluators to know each other's values, similarities, differences, and divergences. The divergences and differences could aid in focusing the REC discussion and in identifying problem areas in a deliberate, transparent, coherent, and less intuitive manner [15].

Risk-benefit analysis and evaluation together constitute risk-benefit assessment [4].

Once risks and benefits have been evaluated against the evaluators' given criteria, risk evaluation allows evaluators to decide "which risks need treatment and which do not" [6]. In decision studies, amplifying benefits and modifying risks are possible only after a global understanding of them has been achieved through risk assessment. Thus, after risk-benefit assessment comes risk treatment. By risk treatment, we refer to the "process of selection and implementation of measures to modify risk…measures may include avoiding, optimizing, transferring, or retaining risk" [4]. In terms of trials, risk treatment would refer to enhancing the trial's social value, reducing the risks to the participants, and enhancing the participants' benefits [11]. REC members who are used to minimizing risk immediately after identifying it may be concerned that this process requires them to suspend that step until risk evaluation is done, a procedure that may be counterintuitive for some. However, the process of "immediately cutting the risks" has also passed through an evaluation, albeit an intuitive and implicit one. An REC member who says that the risks of a certain procedure may be minimized, or that the risks are unnecessary given the research question, has already implicitly gone through a personal evaluation of what is and what is not necessary in such a clinical trial.

After investigating the possibilities for modifying risks and amplifying benefits, the decision makers would then have to decide whether the risks of the trial are justified given the benefits. By decision making, we refer to the final discussion of the REC on whether benefits truly outweigh risks, i.e., given all the information provided, are the risks of the trial ethically acceptable on the merits of the probable benefits?

It is important to note that in the risk literature [4, 13], the CHMP Reflection [12], and the CIOMS report [16], the risk-benefit tasks are assumed to be done interdependently and that the tasks are reflective of various values, interests, and ethical perspectives. At least for marketing authorization and marketed drug evaluation purposes, the sponsor and/or the investigator are assumed to be responsible for risk-benefit assessment and, to a certain extent, the proposal of risk treatment measures. It makes sense that the sponsor ought to be responsible for risk analysis precisely because in this task, "experts on the systems and activities being studied are usually necessary to carry out the analysis" [4]. The regulatory authorities, on the other hand, are expected to provide guidelines for the risk-benefit analysis criteria. They also ought to provide their own version of risk-benefit evaluation to determine areas of divergences and differences, to extensively discuss risk treatment measures and options, and finally to deliberate and decide based on all these inputs.

Conflation of the various risk-benefit tasks by the procedure-level approaches

At the most superficial level, we notice that Wendler and Rid used the terms "risk-benefit assessment" and "risk-benefit evaluation" interchangeably to refer to one and the same Net Risk Test [11, 17]. Nevertheless, it could be argued that this is just a matter of misuse of terms, and that this does not substantially affect the approach that is proposed. Thus, we need to look deeper into the Net Risk Test to justify our claim that it conflates the various risk-benefit tasks.

In the latest seven-step framework of the Net Risk Test, what ought to be a framework for the risk-benefit evaluation of RECs ended up incorporating aspects of risk-benefit assessment, risk treatment, and decision making. The first step, ensuring and enhancing the study’s social value, is risk treatment. The second step, identifying the research interventions, is risk analysis. The third and fourth steps, the evaluation and reduction of risks to participants and the evaluation and enhancement of potential benefits to participants, both fall under risk-benefit evaluation and risk treatment. It is worth noting that in the Net Risk Test, the evaluation and treatment of risks and benefits are not preceded by the identification of those risks and benefits; instead, what precedes the third and fourth steps is the identification of research interventions, a necessary but incomplete step in risk-benefit analysis. The fifth step, evaluating whether the interventions pose net risks, is risk-benefit evaluation. The sixth step, evaluating whether the net risks are justified by the potential benefits of other interventions, is decision making. The last step, evaluating whether the remaining net risks are justified by the study’s social value, is also decision making. Thus, the Net Risk Test in principle encompasses all the risk-benefit tasks without taking into account the distinctions among them, their chronological order, or the division of labor across the various tasks.
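The classification just described can be summarized in a small table. Below is a sketch in Python; the step wordings are paraphrased from the text, and the data structure itself is our own illustration, not part of the Net Risk Test.

```python
# Hypothetical sketch: the seven steps of the Net Risk Test, each tagged
# with the risk-benefit task(s) it performs according to the analysis
# above. Step wordings are paraphrased.
NET_RISK_TEST_STEPS = [
    ("ensure and enhance the study's social value", {"risk treatment"}),
    ("identify the research interventions", {"risk analysis"}),
    ("evaluate and reduce risks to participants",
     {"risk-benefit evaluation", "risk treatment"}),
    ("evaluate and enhance potential benefits to participants",
     {"risk-benefit evaluation", "risk treatment"}),
    ("evaluate whether interventions pose net risks",
     {"risk-benefit evaluation"}),
    ("evaluate whether net risks are justified by benefits of other "
     "interventions", {"decision making"}),
    ("evaluate whether remaining net risks are justified by the study's "
     "social value", {"decision making"}),
]

# All four distinct risk-benefit tasks appear somewhere across the seven
# steps: this is the conflation the text describes.
tasks_covered = set().union(*(tasks for _, tasks in NET_RISK_TEST_STEPS))
print(sorted(tasks_covered))
```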

The Component Analysis conflates the tasks in the same way. In distinguishing procedures as either therapeutic or non-therapeutic, the REC members would first need to identify the procedures to assess, i.e., risk analysis. The REC members would then need to evaluate therapeutic procedures differently from non-therapeutic procedures. Therapeutic procedures have to be evaluated on whether clinical equipoise exists and whether the procedure is consistent with competent care. These two criteria may be considered ethical principles that ought to be present in the deliberation towards decision making; thus, these are decision making tasks. Next, the REC members would need to determine whether the therapeutic procedure is reasonable in relation to the potential benefits to subjects. Since REC members need to answer questions of “reasonability,” this is a decision making task that presupposes risk-benefit evaluation. Non-therapeutic procedures, on the other hand, require the assessor to evaluate whether risks are minimized and consistent with sound scientific design; this is risk treatment. Next, the assessor would need to verify whether the risk of the non-therapeutic procedure is reasonable in relation to the knowledge to be gained. Again, this is a decision making task that presupposes risk-benefit evaluation. In cases where vulnerable patients are involved, the REC members would need to verify that no more than a minor increase over minimal risk is involved; this discussion is likely to be present in the deliberation towards decision making, and it also presupposes risk-benefit evaluation. Lastly, the assessor would need to decide whether both therapeutic and non-therapeutic procedures pass. This is decision making. Hence, again, what we have is a system that touches on each of the risk-benefit tasks without making a distinction among them.
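For illustration, the decision flow just described can be sketched as explicit logic. The function and field names below are our own invention, not part of the Component Analysis literature; the boolean inputs stand in for the REC's substantive judgments.

```python
# Hypothetical sketch of the Component Analysis decision flow described
# above. All names are illustrative assumptions.
def component_analysis(procedure):
    if procedure["therapeutic"]:
        # Clinical equipoise and competent care: decision-making criteria.
        if not (procedure["clinical_equipoise"]
                and procedure["competent_care"]):
            return "reject"
        # "Reasonable in relation to benefits to subjects": decision
        # making that presupposes risk-benefit evaluation.
        if not procedure["risks_reasonable_vs_benefits"]:
            return "reject"
    else:
        # Risk minimization and sound scientific design: risk treatment.
        if not (procedure["risks_minimized"] and procedure["sound_design"]):
            return "reject"
        # "Reasonable in relation to knowledge gained": decision making
        # presupposing risk-benefit evaluation.
        if not procedure["risks_reasonable_vs_knowledge"]:
            return "reject"
        # Vulnerable subjects: no more than a minor increase over
        # minimal risk.
        if procedure.get("vulnerable_subjects") and not procedure.get(
                "at_most_minor_increase_over_minimal_risk"):
            return "reject"
    return "pass"

def study_passes(procedures):
    # The final decision: the study passes only if every component
    # procedure, therapeutic or not, passes.
    return all(component_analysis(p) == "pass" for p in procedures)
```

Writing the flow out this way makes the article's point visible: a single pass through this logic silently mixes risk analysis, risk treatment, risk-benefit evaluation, and decision making.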

Since the risk-benefit tasks are conflated, the various tasks are necessarily simplified and confused. We have seen that the various risk-benefit tasks are resource intensive (since various experts must be involved), necessarily complex (since a drug trial is rarely simple), and time consuming. This is why they are done separately. To conflate the various tasks into one system that must be accomplished within the few hours that the REC convenes is an impossibility. Precisely because of this conflation, and because all the risk-benefit tasks ought to be done within the time restrictions of an REC, both procedure-level approaches accomplish the various tasks only cursorily and confusedly. As such, we cannot expect the procedure-level approaches to have the same level of robustness, transparency, explicitness, and coherence as the approaches from decision studies. Neither procedure-level approach has the robustness of, for example, the value tree in expressing and illustrating the relations between the nature, causes, consequences, and uncertainties of both risk and benefit components. Neither is transparent, explicit, and rigorous enough to capture the acceptable-risk definitions and the various weights and scores, reflective of various values and ethical dispositions, that the MAUT method provides. The two procedure-level approaches simply do not require evaluators to be explicit about their evaluative values. Though risk treatment is largely present in both procedure-level approaches, it is, at least in the Net Risk Test, sometimes confounded with risk evaluation. In the procedure-level approaches, RECs would also not have the benefit of systematically focusing the discussion on the divergences and differences that a good risk evaluation makes visible.
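As a concrete illustration of the explicitness the MAUT (multi-attribute utility theory) method affords, here is a minimal weighted-sum sketch. The attributes, weights, and scores are invented for illustration and are not drawn from any actual evaluation.

```python
# Minimal MAUT sketch. The point is explicitness: each evaluator's
# weights and scores are on the table, so divergences between
# evaluators become visible and discussable. All numbers are invented.
def maut_utility(weights, scores):
    """Weighted sum of attribute scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[a] * scores[a] for a in weights)

# Attributes of a hypothetical trial, scored 0-1 (higher = more favorable).
scores = {"efficacy": 0.7, "serious_adverse_events": 0.4,
          "reversibility_of_harm": 0.6, "social_value": 0.8}

# Two evaluators with different value dispositions: the sponsor weights
# efficacy heavily, the REC weights safety heavily.
sponsor_weights = {"efficacy": 0.4, "serious_adverse_events": 0.2,
                   "reversibility_of_harm": 0.1, "social_value": 0.3}
rec_weights = {"efficacy": 0.2, "serious_adverse_events": 0.4,
               "reversibility_of_harm": 0.3, "social_value": 0.1}

sponsor_view = maut_utility(sponsor_weights, scores)  # 0.66
rec_view = maut_utility(rec_weights, scores)          # 0.56
print(round(sponsor_view, 2), round(rec_view, 2))
```

The numerical gap between the two utilities points the discussion directly at the weights that produce it, which is exactly the systematic focus on divergences that the procedure-level approaches lack.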
Lastly, because of the conflation and confusion of the various risk-benefit tasks, REC members are left to their own devices and intuition to decide what is important to discuss and what is not, and eventually, whether the risks are justifiable relative to the benefits. Such a “procedure” could be categorized as a “taking into account and bearing in mind” process, a process that Dowie rightfully criticized as vague, general, and plainly intuitive [15].

Recommendations

We have seen that the methods from decision studies are more robust, transparent, and coherent than either of the procedure-level approaches. This is not surprising given that decision studies have been applied in many fields for quite some time. The robustness of the decision studies methods stems from the clear distinction between risk-benefit analysis, risk-benefit evaluation, risk treatment, and decision making. In decision studies, each of the risk-benefit tasks is a system in itself that ought not to be conflated. In addition, in contrast to “taking into account and bearing in mind” processes, decision studies encourage the exposure of beliefs and values [15], precisely because it is this explicitness that allows discussions to be defined and ordered. As such, we recommend the following:

RECs should make clear what their task is. RECs do not have the time and are not in the best position to do risk analyses; risk analysis must therefore be a task for the sponsor. As regards risk evaluation, RECs ought to produce their own risk-benefit evaluation to pair with the sponsor’s/investigator’s evaluation, since this is the best way to systematically point out areas of divergence and convergence. These areas would help bring order to REC discussions. The appraisal of risk treatment suggestions, and possibly the formulation of a revised or different risk treatment appraisal, ought also to form part of REC discussions. Lastly, it is obviously the REC’s task to make the final decision on whether the risks of the trial are justified given the benefits.

Precisely because such a clarification of tasks is essential if the REC is to function efficiently, RECs must look into how decision studies may be incorporated into their risk-benefit tasks. This is something we will do in our next article. For now, it is imperative to lay the theoretical groundwork for the urgency of such incorporation.

The procedure-level approaches emphasize the role of various ethical concepts, such as net risk, minimal risk, and clinical equipoise, in the risk-benefit task of RECs. These are legitimate concerns; nevertheless, RECs must know at which point these concepts play a role in the various risk-benefit tasks. Minimal risk, for example, is a concept that ought to figure in risk treatment and/or in the deliberation towards final decision making.

Both the Net Risk Test and the Component Analysis conflate risk-benefit analysis, risk-benefit evaluation, risk treatment, and decision making. This makes the risk-benefit task of RECs confusing, if not impossible. It is necessary to distinguish these four tasks if RECs are to be clear about what their task truly is. By looking at decision studies, we realize that RECs ought to evaluate risks and benefits, appraise risk treatment suggestions, and make the final decision. Further clarification and elaboration of these tasks requires research ethicists to look beyond the procedure-level approaches and to admit decision studies discourses into the current discussion on the risk-benefit tasks of RECs. Admittedly, this will take considerable time and research effort. Nevertheless, the discussion of the REC’s risk-benefit task would be more fruitful and democratic if research ethics opened its doors to other disciplines that can truly help clarify the distinctions among the risk-benefit tasks.

1. King NM, Churchill LR. Assessing and comparing potential benefits and risks of harm. In: Emanuel E, Grady C, Crouch RA, Lie RA, Miller FG, Wendler D, editors. The Oxford Textbook of Clinical Research Ethics. New York: Oxford University Press; 2008. p. 514-26.

2. US National Bioethics Advisory Commission. Ethical and Policy Issues in Research Involving Human Participants. 2001.

3. Westra AE, de Beaufort ID. The merits of procedure-level risk-benefit assessment. IRB: Ethics & Human Research. 2011.

4. Aven T. Risk Analysis: Assessing Uncertainties Beyond Expected Values and Probabilities. Chichester: Wiley; 2008.

5. Vose D. Risk Analysis: A Quantitative Guide. 3rd ed. Chichester: John Wiley & Sons; 2008.

6. European Network and Information Security Agency. Risk assessment. 2012. Available from: http://www.enisa.europa.eu/act/rm/cr/risk-management-inventory/rm-process/risk-assessment

7. Miller P, Weijer C. Evaluating benefits and harms in clinical research. In: Ashcroft RE, Dawson A, Draper H, McMillan JR, editors. Principles of Health Care Ethics. 2nd ed. Chichester: John Wiley & Sons; 2007.

8. Weijer C, Miller PB. When are research risks reasonable in relation to anticipated benefits? Nat Med. 2004;10(6):570-3. doi:10.1038/nm0604-570.

9. Weijer C. When are research risks reasonable in relation to anticipated benefits? J Law Med Ethics. 2000;28:344-61. doi:10.1111/j.1748-720X.2000.tb00686.x.

10. Wendler D, Miller FG. Assessing research risks systematically: the net risks test. J Med Ethics. 2007;33(8):481-6. doi:10.1136/jme.2005.014043.

11. Rid A, Wendler D. A framework for risk-benefit evaluations in biomedical research. Kennedy Inst Ethics J. 2011;21(2):141-79. doi:10.1353/ken.2011.0007.

12. European Medicines Agency, Committee for Medicinal Products for Human Use. Reflection paper on benefit-risk assessment methods in the context of the evaluation of marketing authorization applications of medicinal products for human use. 2008.

13. Mussen F, Salek S, Walker S. Benefit-Risk Appraisal of Medicines. Chichester: John Wiley & Sons; 2009.

14. Baron J. Thinking and Deciding. 4th ed. New York: Cambridge University Press; 2008.

15. Dowie J. Decision analysis: the ethical approach to most health decision making. In: Ashcroft RE, Dawson A, Draper H, McMillan JR, editors. Principles of Health Care Ethics. 2nd ed. John Wiley & Sons; 2007. p. 577-83.

16. Council for International Organizations of Medical Sciences Working Group IV. Benefit-Risk Balance for Marketed Drugs: Evaluating Safety Signals. 1998.

17. Rid A, Wendler D. Risk-benefit assessment in medical research: critical review and open questions. Law Probab Risk. 2010;9:151-77. doi:10.1093/lpr/mgq006.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6939/13/6/prepub


Acknowledgements

This study was performed in the context of the Escher project (T6-202), a project of the Dutch Top Institute Pharma, Leiden, The Netherlands.

Author information

Authors and affiliations

Julius Center for Health Sciences and Primary Care, Utrecht University Medical Center, Heidelberglaan 100, Utrecht, 3584CX, The Netherlands

Rosemarie D L C Bernabe, Ghislaine J M W van Thiel & Johannes J M van Delden

GlaxoSmithKline, Huis ter Heideweg 62, Zeist, 3705, LZ, The Netherlands

Jan A M Raaijmakers


Corresponding author

Correspondence to Rosemarie D L C Bernabe .

Additional information

Competing interests

RB’s PhD project is funded by the Dutch Top Institute Pharma. JR works for and holds stocks in GlaxoSmithKline. JvD and GvT have no competing interests to declare.

Authors’ contributions

All authors were involved in the design of the manuscript. RB did the research and wrote the draft and final manuscript; GvT commented on the drafts, wrote parts of the manuscript, and approved the final version; JR commented on the drafts and approved the final version of the manuscript; JvD commented on the drafts and approved the final version of the manuscript. All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Bernabe, R.D.L.C., van Thiel, G.J.M.W., Raaijmakers, J.A.M. et al. The risk-benefit task of research ethics committees: An evaluation of current approaches and the need to incorporate decision studies methods. BMC Med Ethics 13, 6 (2012). https://doi.org/10.1186/1472-6939-13-6


Received: 05 April 2012

Accepted: 11 April 2012

Published: 20 April 2012

DOI: https://doi.org/10.1186/1472-6939-13-6


  • Risk benefit assessment
  • Ethics committee
  • Decision theory
  • Net risk test
  • Component analysis

BMC Medical Ethics

ISSN: 1472-6939


Philosophical Bioethics Hub

Risk-Benefit Assessment in Research Ethics

Introduction


What is research ethics?

Research ethics deals with acceptable norms in the design and application of a research study. It governs the standards of conduct of investigators in scientific research in order to uphold important principles including autonomy, justice and beneficence (WHO).

What is Risk-Benefit Assessment?

Since the Belmont Report, most biomedical research guidelines have required some form of risk-benefit assessment in order to carry out research scientifically and ethically. The principles underlying this sort of assessment are beneficence and non-maleficence.

The justification for risk-benefit assessment is twofold: “to verify the scientific/social validity of the research since an unscientific research is also an unethical research; and to ensure that the risks that the participants are exposed to are necessary, justified, and minimized” (Bernabe et al. 2012).

Therefore, participation in research trials is best when participants derive something of value from it, whether health-related or otherwise, or when it significantly improves generalizable knowledge without inflicting unreasonable and unjust harm.

Assigned Reading

Rid, A., & Wendler, D. (2011). A framework for risk-benefit evaluations in biomedical research.  Kennedy Institute of Ethics journal ,  21 (2), 141–179. https://doi.org/10.1353/ken.2011.0007

Thesis: In their paper, Rid and Wendler (2011) note the importance of risk-benefit assessment and highlight the paucity of comprehensive and concrete guidance for performing these assessments. This has resulted in rather unsystematic methods, often largely based on intuition, and in disparate assessments across research studies. To address this gap, the authors designed the first comprehensive step-by-step guiding framework for risk-benefit assessments in research ethics. Their framework is based on extant guidelines and regulations, other relevant literature, and normative analysis.

Discussion Questions

The following questions were considered by seminar participants prior to the discussion:

  • What were some of the strengths of the framework?
  • What were some of the weaknesses or more vague elements of the framework?

Reflection Points

1. Evaluation of the requirement that studies meet a minimum threshold of “social value” – social value is a normatively laden concept (what one person sees as contributing to social value, another might see as subtracting from it). It is also vague and may be too easy to satisfy (one can hand-wave that every study has social value of some sort).

2. “Enhancement” as an idea in their framework (the requirement to “enhance” the social value of the study and to “enhance” potential benefits to participants). What is the normative justification for this? Why do researchers, or IRB members, have an obligation to do this? Also consider potential unintended consequences: it may negatively affect some of the science in the study, may impose costs on researchers or society, etc.

3. Evaluation of “clinical” benefit to participants. Does this bias against non-clinical studies? Is it too narrow (excluding benefits of a psychological or other nature, such as a fulfilled desire to be altruistic)? On the other hand, is it too broad or hand-waving (a “benefit” from talking to researchers about feelings)? Does it violate equipoise or encourage the therapeutic misconception?

4. Steps 5, 6, 7 – the “weighing steps,” where the identified risks (and their likelihood), clinical benefits, net risks, and social value are weighed to establish whether the assessment favors allowing the study. How exactly is this weighing done? The constructs of an “informed and impartial social arbitrator” and an “informed clinician” are introduced to help, but they are vague and can allow for the introduction of bias.

5. Lack of context – the framework avoids questions such as whether the study is therapeutic or non-therapeutic, or whether the study involves people who are dying.

6. May be too biased toward approval, since it primes people to consider “social value” in the first and last steps – social value is easy to come by.

References and Additional Resources

Bernabe, R.D.L.C., van Thiel, G.J.M.W., Raaijmakers, J.A.M. et al. (2012). The risk-benefit task of research ethics committees: An evaluation of current approaches and the need to incorporate decision studies methods. BMC Med Ethics 13, 6. https://doi.org/10.1186/1472-6939-13-6

Abdalla M.E. (2017) Ethical Issues Involved with the Analysis of Risks and Benefits. In: Silverman H. (eds) Research Ethics in the Arab Region. Research Ethics Forum, vol 5. Springer, Cham. https://doi.org/10.1007/978-3-319-65266-5_8

Rid, A., & Wendler, D. (2010). Risk-benefit assessment in medical research: critical review and open questions. Law, Probability and Risk, 9, 151-177. https://doi.org/10.1093/lpr/mgq006

Risk Assessment and Risk-Benefit Assessment

  • First Online: 14 July 2022



  • Jinyao Chen
  • Lishi Zhang


The framework of risk analysis has become the principal procedure for dealing with food safety issues. Risk analysis consists of three components: risk management, risk assessment, and risk communication. Risk assessment is defined as the scientific evaluation of the probability and consequences of adverse health outcomes resulting from exposure to food-borne hazards. It is a scientifically based process consisting of the following steps: hazard identification, hazard characterization, exposure assessment, and risk characterization. The procedures for assessing chemical and microbiological hazards differ somewhat; this chapter focuses on chemical hazards. On the other hand, positive and adverse effects may be induced concurrently by a single food item (e.g., fish or whole grain products) or even a single food component (e.g., folic acid or phytosterols); in such scenarios, risk-benefit assessment should be adopted. The principles and main steps of risk-benefit assessment are the same as those of risk assessment. Risk-benefit assessment comprises three parts: risk assessment, benefit assessment, and risk-benefit comparison, of which risk-benefit comparison is the trickiest, since a common metric for the health outcomes is usually needed. Risk-benefit assessment is a valuable approach to systematically integrating the current evidence to provide the best science-based answers to complicated questions in the areas of food and nutrition, especially in evaluating nutrient fortification policy, deriving a tolerable upper intake level for a nutrient, and recommending a particular dietary pattern.
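The risk-benefit comparison step can be illustrated with a minimal sketch using a common health metric (here, DALYs). The scenario and all numbers below are invented for illustration and are not from any actual assessment.

```python
# Illustrative sketch of a food risk-benefit comparison on a common
# health metric (here, DALYs per 100,000 person-years). The scenario
# and all numbers are invented.
def net_health_effect(dalys_averted, dalys_caused):
    """Positive result = benefits outweigh risks on the common metric."""
    return sum(dalys_averted.values()) - sum(dalys_caused.values())

# Hypothetical fish-consumption scenario: cardiovascular benefit versus
# contaminant-related harm, both expressed in the same DALY metric.
benefits = {"coronary_heart_disease_averted": 120.0}
risks = {"neurodevelopmental_harm": 30.0}
print(net_health_effect(benefits, risks))  # 90.0
```

Real assessments additionally quantify the uncertainty and variability around such point estimates rather than relying on a single subtraction.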



Hoekstra J, Verkaik-Kloosterman J, Rompelberg C, van Kranen H, Zeilmaker M, Verhagen H, de Jong N. Integrated risk-benefit analyses: method development with folic acid as example. Food Chem Toxicol. 2008;46:893–909.

Krul L, Kremer BHA, Luijckx NBL, Leeman WR. Quantifiable risk-benefit assessment of micronutrients: from theory to practice. Crit Rev Food Sci Nutr. 2017;57(17):3729–46.

Cohen JT, Bellinger DC, Connor WE, Kris-Etherton PM, Lawrence RS, Savitz DA, Shaywitz BA, Teutsch SM, Gray GM. A quantitative risk-benefit analysis of changes in population fish consumption. Am J Prev Med. 2005;29:325–34.

Institute of Medicine (IoM). Sea food choices. In: Balancing benefits and risks. Washington, D.C: National Academy Press; 2007.

Ginsberg GL, Toal BF. Quantitative approach for incorporating methylmercury risks and omega-3 fatty acid benefits in developing species specific fish consumption advice. Environ Health Perspect. 2009;117:267–75.

Gao YX, Zhang HX, Li JG, Zhang L, Yu XW, He JL, Shang XH, Zhao YF, Wu YN. The benefit risk assessment of consumption of marine species based on benefit-risk analysis for foods (BRAFO)-tiered approach. Biomed Environ Sci. 2015;28(4):243–52.

CAS   PubMed   Google Scholar  

Hoekstra J, Hart A, Boobis A, Claupein E, Cockburn A, Hunt A, Knudsen I, Richardson D, Schilter B, Schutte K, Torgerson PR, Verhagen H, Watzl B, Chiodini A. BRAFO tiered approach for benefit–risk assessment of foods. Food Chem Toxicol. 2012;50(Suppl 4):S684–98.

van den Berg M, Kypke K, Kotz A, Tritscher A, Lee SY, Magulova K, Fiedler H, Malisch R. WHO/UNEP global surveys of PCDDs, PCDFs, PCBs and DDTs in human milk and benefit-risk evaluation of breastfeeding. Arch Toxicol. 2017;91(1):83–96.

Download references


About this chapter

Chen, J., Zhang, L. (2022). Risk Assessment and Risk-Benefit Assessment. In: Zhang, L. (eds) Nutritional Toxicology. Springer, Singapore. https://doi.org/10.1007/978-981-19-0872-9_10


Figure 1. A, Simplified Geneva risk score. The cumulative incidence was 1.2% (95% CI, 0.3%-2.2%) for patients at low risk and 2.6% (95% CI, 1.5%-3.6%) for patients at high risk (log-rank P = .09). B, Original Geneva risk score. The cumulative incidence was 1.1% (95% CI, 0.1%-2.0%) for patients at low risk and 2.6% (95% CI, 1.5%-3.6%) for patients at high risk (log-rank P = .07). C, Padua score. The cumulative incidence was 1.4% (95% CI, 0.5%-2.3%) for patients at low risk and 2.8% (95% CI, 1.5%-4.0%) for patients at high risk (log-rank P = .08). D, IMPROVE (International Medical Prevention Registry on Venous Thromboembolism) score. The cumulative incidence was 1.8% (95% CI, 0.9%-2.6%) for patients at low risk and 2.7% (95% CI, 1.1%-4.3%) for patients at high risk (log-rank P = .26).

Figure 2. The area under the ROC curve was 58.1% (95% CI, 55.4%-60.7%) for the simplified Geneva score, 53.8% (95% CI, 51.1%-56.5%) for the original Geneva score, 56.5% (95% CI, 53.7%-59.1%) for the Padua score, and 55.0% (95% CI, 52.3%-57.7%) for the IMPROVE (International Medical Prevention Registry on Venous Thromboembolism) score.

eMethods. Variable Definitions

eFigure. Flow Chart

eTable 1. Venous Thromboembolism Risk Assessment Models

eTable 2. Venous Thromboembolism Events in Low- and High-Risk Patients According to the Four Risk Assessment Models

eTable 3. Discrimination and Goodness of Fit of Each Risk Assessment Model to Predict Hospital-Acquired Venous Thromboembolism

eTable 4. Venous Thromboembolism Events in Patients Without Pharmacological Thromboprophylaxis According to the Four Risk Assessment Models

eTable 5. Predictive Accuracy of Risk Assessment Models for Hospital-Acquired Venous Thromboembolism in Patients Without Pharmacological Thromboprophylaxis

eTable 6. Venous Thromboembolism Events in Low- and High-Risk Patients According to the Four Risk Assessment Models, Stratified by Antiplatelet Treatment During Hospitalization

eTable 7. Predictive Accuracy of Risk Assessment Models for Hospital-Acquired Venous Thromboembolism, Stratified by Antiplatelet Treatment During Hospitalization

eTable 8. Sensitivity Analysis Investigating the Discriminative Performance of Risk Assessment Models With Different Outcome Scenarios Among Patients Lost to Follow-up

eTable 9. Demographics, Predictors and Outcomes of Participants in the RISE Study and the Derivation Cohorts of the Four Risk Assessment Models

eReferences.

Data Sharing Statement

  • Invited Commentary (JAMA Network Open, May 10, 2024): Roberts LN, Arya R. VTE Risk Assessment Models for Acutely Ill Medical Patients.


Häfliger E, Kopp B, Darbellay Farhoumand P, et al. Risk Assessment Models for Venous Thromboembolism in Medical Inpatients. JAMA Netw Open. 2024;7(5):e249980. doi:10.1001/jamanetworkopen.2024.9980


Risk Assessment Models for Venous Thromboembolism in Medical Inpatients

  • 1 Department of General Internal Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
  • 2 Division of General Internal Medicine, Department of Medicine, Geneva University Hospitals, Geneva, Switzerland
  • 3 Division of Internal Medicine, Department of Medicine, Lausanne University Hospital, Lausanne, Switzerland
  • 4 CTU Bern, University of Bern, Bern, Switzerland

Question   What is the prognostic performance of the simplified Geneva score and other validated risk assessment models (RAMs) to predict venous thromboembolism (VTE) in medical inpatients?

Findings   In this cohort study providing a head-to-head comparison of validated RAMs among 1352 medical inpatients, sensitivity of RAMs to predict 90-day VTE ranged from 39.3% to 82.1% and specificity of RAMs ranged from 34.3% to 70.4%. Discrimination was poor, with an area under the receiver operating characteristic curve of less than 60% for all RAMs.

Meaning   This study suggests that the accuracy and prognostic performance of the simplified Geneva score and other validated RAMs to predict VTE is limited and their clinical usefulness is thus questionable.

Importance   Thromboprophylaxis is recommended for medical inpatients at risk of venous thromboembolism (VTE). Risk assessment models (RAMs) have been developed to stratify VTE risk, but a prospective head-to-head comparison of validated RAMs is lacking.

Objectives   To prospectively validate an easy-to-use RAM, the simplified Geneva score, and compare its prognostic performance with previously validated RAMs.

Design, Setting, and Participants   This prospective cohort study was conducted from June 18, 2020, to January 4, 2022, with a 90-day follow-up. A total of 4205 consecutive adults admitted to the general internal medicine departments of 3 Swiss university hospitals for hospitalization for more than 24 hours due to acute illness were screened for eligibility; 1352 without therapeutic anticoagulation were included.

Exposures   At admission, items of 4 RAMs (ie, the simplified and original Geneva score, the Padua score, and the IMPROVE [International Medical Prevention Registry on Venous Thromboembolism] score) were collected. Patients were stratified into high and low VTE risk groups according to each RAM.

Main Outcomes and Measures   Symptomatic VTE within 90 days.

Results   Of 1352 medical inpatients (median age, 67 years [IQR, 54-77 years]; 762 men [56.4%]), 28 (2.1%) experienced VTE. Based on the simplified Geneva score, 854 patients (63.2%) were classified as high risk, with a 90-day VTE risk of 2.6% (n = 22; 95% CI, 1.7%-3.9%), and 498 patients (36.8%) were classified as low risk, with a 90-day VTE risk of 1.2% (n = 6; 95% CI, 0.6%-2.6%). Sensitivity of the simplified Geneva score was 78.6% (95% CI, 60.5%-89.8%) and specificity was 37.2% (95% CI, 34.6%-39.8%); the positive likelihood ratio of the simplified Geneva score was 1.25 (95% CI, 1.03-1.52) and the negative likelihood ratio was 0.58 (95% CI, 0.28-1.18). In head-to-head comparisons, sensitivity was highest for the original Geneva score (82.1%; 95% CI, 64.4%-92.1%), while specificity was highest for the IMPROVE score (70.4%; 95% CI, 67.9%-72.8%). After adjusting the VTE risk for thromboprophylaxis use and site, there was no significant difference between the high-risk and low-risk groups based on the simplified Geneva score (subhazard ratio, 2.04 [95% CI, 0.83-5.05]; P = .12) and other RAMs. Discriminative performance was poor for all RAMs, with an area under the receiver operating characteristic curve ranging from 53.8% (95% CI, 51.1%-56.5%) for the original Geneva score to 58.1% (95% CI, 55.4%-60.7%) for the simplified Geneva score.

Conclusions and Relevance   This head-to-head comparison of validated RAMs found suboptimal accuracy and prognostic performance of the simplified Geneva score and other RAMs to predict hospital-acquired VTE in medical inpatients. Clinical usefulness of existing RAMs is questionable, highlighting the need for more accurate VTE prediction strategies.

Venous thromboembolism (VTE) represents one of the leading avoidable causes of death among hospitalized patients. 1 Although particularly common among patients undergoing surgery, 2 about 75% of hospital-acquired cases of VTE occur in nonsurgical patients. 3 Pharmacologic thromboprophylaxis reduces the risk of VTE among selected medical inpatients. 4 - 6 However, given the associated small increase in bleeding risk and the low baseline VTE incidence in the overall population of medical inpatients, 4 , 7 its provision should be targeted to patients at increased risk of VTE. 4 , 7 , 8

Although risk stratification in surgical patients is based mostly on the type of intervention, 2 assessment of VTE risk among medical patients is more challenging and requires integration of various individual risk factors. 9 , 10 To simplify and standardize VTE risk assessment among medical inpatients, risk assessment models (RAMs) such as the original Geneva score, 11 the Padua score, 12 or the IMPROVE (International Medical Prevention Registry on Venous Thromboembolism) score 13 have been developed, and their use is encouraged by clinical guidelines. 7 , 8 , 14 The practical usefulness of current RAMs is, however, limited by suboptimal sensitivities, 15 nonuniform cutoff values to define risk groups, 13 , 16 or a large number of variables. 11 With the aim of developing a more usable RAM, the simplified Geneva score has been derived. 17 A retrospective external validation study showed good discrimination and calibration of the simplified Geneva score 18 ; however, prospective validation is currently lacking. In addition, the comparative performance of validated RAMs has not been examined prospectively. Using data from a prospective multicenter cohort of medical inpatients, we aimed to validate the simplified Geneva score and to perform a head-to-head comparison of its prognostic performance with previously validated RAMs.

RISE (Risk Stratification for Hospital-Acquired Thromboembolism in Medical Patients) is a multicenter prospective cohort study of medical patients admitted to 3 Swiss tertiary care hospitals from June 18, 2020, to January 4, 2022 (ClinicalTrials.gov NCT04439383). The methods have been previously described. 19 Reporting conforms to the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) reporting guideline and checklist for prediction model validation. 20 The study was conducted in accordance with all applicable legal and regulatory requirements. Authorization was granted by the responsible ethics committees (Kantonale Ethikkommission Bern, Commission cantonale d’éthique de la recherche sur l’être humain CER-VD, and Commission Cantonale d’Ethique de la Recherche sur l’être humain [CCER]), and written informed consent was obtained from all study participants.

Consecutive adults hospitalized in general internal medicine were screened on weekdays, and eligible patients were enrolled within 72 hours of admission. We included acutely ill patients aged 18 years or older who were admitted for hospitalization for more than 24 hours. Exclusion criteria were indication for therapeutic anticoagulation, estimated life expectancy less than 30 days, transfer from the intensive care unit or other wards, insufficient German or French language proficiency, prior enrollment in the study, and unwillingness to provide informed consent. For patients unable to consent due to mental illness or cognitive impairment, written consent was obtained from an authorized representative.

At baseline, study personnel collected data on demographics, comorbidities, and VTE risk factors (Table 1; eMethods in Supplement 1). The simplified and original Geneva scores, the IMPROVE score, and the Padua score were calculated, and patients were categorized as high or low VTE risk according to each RAM (eTable 1 in Supplement 1). 11 - 13 , 17 Treating physicians were not informed of the scores, and the use of thromboprophylaxis was not influenced by the study. No RAM was implemented in order sets, but internal guidelines suggested using the Padua score in 2 centers and the simplified Geneva score in 1 center to assess the indication for thromboprophylaxis. These guidelines recommended pharmacologic thromboprophylaxis for patients at high risk of VTE, or nonpharmacologic prophylaxis for those at high bleeding risk.
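Conceptually, applying any of these RAMs at admission amounts to summing weighted risk items and dichotomizing the total at a cutoff. The sketch below illustrates only that mechanic; the item names, weights, and cutoff are hypothetical placeholders, not the actual simplified Geneva, Padua, or IMPROVE items (those are listed in eTable 1 in Supplement 1).

```python
# Illustrative sketch of applying a VTE risk assessment model (RAM) at admission.
# The items, weights, and cutoff below are HYPOTHETICAL placeholders, not the
# published simplified Geneva, Padua, or IMPROVE items.

def ram_score(patient: dict, weights: dict) -> int:
    """Sum the weights of all risk items present for a patient."""
    return sum(w for item, w in weights.items() if patient.get(item, False))

def classify(score: int, cutoff: int) -> str:
    """Dichotomize into the high/low VTE risk groups used in the study."""
    return "high" if score >= cutoff else "low"

# Hypothetical weights and cutoff, for illustration only.
example_weights = {"prior_vte": 2, "active_cancer": 2,
                   "immobilized_3d": 1, "age_over_60": 1}
patient = {"active_cancer": True, "age_over_60": True}

score = ram_score(patient, example_weights)
print(score, classify(score, cutoff=3))  # → 3 high
```

Because each RAM uses its own items, weights, and cutoff, the same patient can land in different risk groups under different models, which is why the proportions classified as high risk in this study range from 29.8% to 66.1%.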

The primary outcome was symptomatic, objectively confirmed fatal and nonfatal VTE, including pulmonary embolism as well as distal and proximal deep vein thrombosis of the lower and upper extremity within 90 days of admission (eMethods in Supplement 1 ). To exclude preexisting VTE, we did not consider VTE diagnosed within 48 hours of admission. 21 To assess VTE outcomes, study personnel blinded to RAM scores conducted follow-up visits on the day prior to discharge or the day of discharge, and contacted participants, their contact persons, and/or primary care physicians by telephone 90 days after admission. 11 , 22 In case of a VTE outcome, medical and radiologic reports were collected to assess the date, type, and circumstances of the event. For participants who died, the cause was recorded based on medical reports, death certificates, and autopsy reports, if available (eMethods in Supplement 1 ). All VTE outcomes and deaths were adjudicated by a committee of 3 independent clinical experts blinded to RAM scores.

The sample size was calculated to validate the simplified Geneva score for the prediction of hospital-acquired VTE. Assuming that 67% of patients would be categorized as high risk based on the simplified Geneva score, and assuming a 90-day VTE incidence of 2.8% among patients at high risk and 0.6% among patients at low risk based on a previous study, 17 we determined that recruitment of 1308 patients would be required to detect an absolute risk difference of 2.2%, with a power of 80% at a 2-sided α of .05. To account for potential dropouts, we aimed to recruit 1350 participants.
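The calculation above can be approximated with the standard normal-approximation formula for comparing two proportions under unequal allocation. The sketch below is an assumption about the authors' method, not a reproduction of it: depending on the exact formula and software used, the result lands close to, but not necessarily exactly at, the reported 1308.

```python
# Sketch of the sample-size calculation described above: comparison of two
# proportions (2.8% vs 0.6% 90-day VTE risk) with ~67% of patients allocated to
# the high-risk group, two-sided alpha = .05, power = 80%. Uses the standard
# normal-approximation formula; the published figure of 1308 may differ slightly
# depending on the exact formula and software the authors used.
import math
from statistics import NormalDist

def total_n(p_high, p_low, frac_high, alpha=0.05, power=0.80):
    k = frac_high / (1 - frac_high)            # allocation ratio n_high / n_low
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ≈ 1.96
    z_b = NormalDist().inv_cdf(power)          # ≈ 0.84
    p_bar = (k * p_high + p_low) / (k + 1)     # pooled event proportion
    num = (z_a * math.sqrt((k + 1) * p_bar * (1 - p_bar) / k)
           + z_b * math.sqrt(p_high * (1 - p_high) / k
                             + p_low * (1 - p_low))) ** 2
    n_low = num / (p_high - p_low) ** 2        # required size of low-risk group
    return math.ceil(n_low * (1 + k))          # total = n_low + n_high

print(total_n(0.028, 0.006, 0.67))  # ≈ 1316, close to the reported 1308
```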

Standard descriptive statistical tests were used to compare low and high VTE risk groups based on the simplified Geneva score. Time-to-event analyses with competing risk methods were used to assess the prognostic performance of the simplified Geneva score and the other RAMs and their association with VTE, with non-VTE death representing the competing risk, using a subdistribution hazard model of Fine and Gray. 23 Subhazard ratios with 95% CIs were calculated, first unadjusted and then adjusted for pharmacologic thromboprophylaxis use as a time-varying covariate and for study site. Cumulative incidences of VTE among patients at low risk and patients at high risk were assessed using Kaplan-Meier curves, with calculation of P values based on log-rank tests. Sensitivity, specificity, and positive and negative predictive values and likelihood ratios were determined for each RAM. The area under the curve (AUC) was calculated to assess the discriminative power of each continuous score using time-dependent receiver operating characteristic curve analysis, considering censored data and competing events. Calibration was determined using the Hosmer-Lemeshow goodness-of-fit test; use of a calibration plot was not possible because 2 of the RAMs were derived empirically (ie, based on literature or clinical expertise) rather than data driven. 12 , 17 , 18
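To illustrate why non-VTE death is handled as a competing risk rather than simply censored, the following sketch implements a minimal Aalen-Johansen cumulative incidence estimator on toy data. This is a didactic sketch only: the study additionally fits Fine-Gray subdistribution hazard models, which this code does not reproduce.

```python
# Minimal Aalen-Johansen cumulative incidence estimator for a toy cohort.
# Censoring a death would overstate VTE risk; treating it as a competing event
# correctly removes the patient's future VTE probability.
def cumulative_incidence(times, events, cause=1):
    """events: 0 = censored, 1 = VTE, 2 = competing event (non-VTE death).
    Returns {time: CIF} for `cause`. Assumes no tied event times (toy data)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, cif, out = len(times), 1.0, 0.0, {}
    for i in order:
        d_cause = 1 if events[i] == cause else 0
        d_any = 1 if events[i] != 0 else 0
        cif += surv * d_cause / at_risk   # S(t-) * dN_cause(t) / n(t)
        surv *= 1 - d_any / at_risk       # all-cause Kaplan-Meier survival
        at_risk -= 1
        out[times[i]] = cif
    return out

# Toy data: VTE at t=1, non-VTE death at t=2, VTE at t=3, censored at t=4.
cif = cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 0])
print(cif[3])  # → 0.5 (0.25 at t=1, nothing at t=2, + 0.5 * 1/2 at t=3)
```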

Patients for whom therapeutic anticoagulation was started for reasons other than VTE during follow-up were censored in the main analysis. Patients who were lost to follow-up were censored at the last visit.

We performed a subgroup analysis of patients who did not receive pharmacologic thromboprophylaxis at any time during hospitalization, and a subgroup analysis stratified by antiplatelet treatment during hospitalization. In a sensitivity analysis, we investigated how different outcome scenarios among patients lost to follow-up would affect the discriminative performance of the RAMs. The scenarios assumed that VTE occurred (1) in all patients lost to follow-up, (2) only in those at high risk, and (3) only in those at low risk.

Stata, version 17 (StataCorp LLC), and R, version 4.2.2 (R Project for Statistical Computing), were used for all analyses. A 2-sided P  < .05 was considered statistically significant.

Of 4205 patients screened, 1352 were included in the RISE cohort (eFigure in Supplement 1 ). The median age was 67 years (IQR, 54-77 years) (vs 76 years [IQR, 64-85 years] for those excluded), with 590 women (43.6%) and 762 men (56.4%) ( Table 1 ). Overall, 263 patients (19.5%) had active cancer, and 88 (6.5%) had a previous VTE event. Immobilization for 3 days or more was recorded for 382 patients (28.3%), and 698 (51.6%) had a prescription for pharmacologic or mechanical VTE prophylaxis at baseline. The median length of stay was 7 days (IQR, 5-11 days). The proportion of patients in the low-risk category receiving thromboprophylaxis was 37.9% (174 of 459) based on the original Geneva score. The proportion of patients categorized as high risk of VTE was 63.2% (n = 854) based on the simplified Geneva score, slightly higher with the original Geneva score (66.1% [n = 893]), and lower with the Padua score (47.8% [n = 646]) and the IMPROVE score (29.8% [n = 403]) (eFigure in Supplement 1 ).

Of all 1352 study participants, 10 (0.7%) were lost to follow-up and 88 (6.5%) died during the 90-day follow-up period. Venous thromboembolism occurred in 28 patients (2.1%); 18 events were pulmonary embolism (no fatal pulmonary embolism), and 10 were deep vein thrombosis.

According to the simplified Geneva score, VTE occurred in 2.6% (95% CI, 1.7%-3.9%) of patients at high risk (22 of 854) and 1.2% (95% CI, 0.6%-2.6%) of patients at low risk (6 of 498) (eTable 2 in Supplement 1 ). Similarly, VTE risk was 2.6% (95% CI, 1.7%-3.8%) (23 of 893) in the high-risk group and 1.1% (95% CI, 0.5%-2.5%) (5 of 459) in the low-risk group according to the original Geneva risk score, 2.8% (95% CI, 1.8%-4.4%) (18 of 646) in the high-risk group and 1.4% (95% CI, 0.8%-2.6%) (10 of 706) in the low-risk group based on the Padua score, and 2.7% (95% CI, 1.5%-4.8%) (11 of 403) in the high-risk group and 1.8% (95% CI, 1.1%-2.9%) (17 of 949) in the low-risk group based on the IMPROVE score. The 90-day cumulative incidence of VTE did not significantly differ between the low-risk and high-risk groups of the simplified Geneva score or in any risk groups based on the other RAMs ( Figure 1 ).

Patients classified as high risk based on the simplified Geneva score did not have a statistically significantly increased VTE risk compared with those classified as low risk (adjusted subhazard ratio, 2.04 [95% CI, 0.83-5.05]; P  = .12). Results were similar for the other 3 RAMs ( Table 2 ). The simplified Geneva score showed a sensitivity of 78.6% (95% CI, 60.5%-89.8%) and a specificity of 37.2% (95% CI, 34.6%-39.8%) for the prediction of VTE ( Table 3 ). Sensitivity was highest with the original Geneva score (82.1%; 95% CI, 64.4%-92.1%) and lowest with the IMPROVE score (39.3%; 95% CI, 23.6%-57.6%), while specificity was highest with the latter (70.4%; 95% CI, 67.9%-72.8%). The positive predictive value of the simplified Geneva score was 2.6% (95% CI, 1.7%-3.9%), while the negative predictive value was 98.8% (95% CI, 97.4%-99.4%); the positive likelihood ratio was 1.25 (95% CI, 1.03-1.52), and the negative likelihood ratio was 0.58 (95% CI, 0.28-1.18). Positive predictive values, negative predictive values, and positive and negative likelihood ratios of the other RAMs were similar.
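These accuracy measures follow directly from the reported 2 × 2 counts for the simplified Geneva score (22 of 854 high-risk and 6 of 498 low-risk patients with VTE), as the following check shows:

```python
# Reproducing the reported accuracy measures of the simplified Geneva score from
# the counts above: 22 of 854 high-risk and 6 of 498 low-risk patients had VTE.
tp, fn = 22, 6                  # VTE events in the high- and low-risk groups
fp = 854 - tp                   # high risk, no VTE
tn = 498 - fn                   # low risk, no VTE

sens = tp / (tp + fn)           # 22/28
spec = tn / (tn + fp)           # 492/1324
ppv = tp / (tp + fp)            # 22/854
npv = tn / (tn + fn)            # 492/498
lr_pos = sens / (1 - spec)      # positive likelihood ratio
lr_neg = (1 - sens) / spec      # negative likelihood ratio

print(f"sens={sens:.1%} spec={spec:.1%} PPV={ppv:.1%} NPV={npv:.1%} "
      f"LR+={lr_pos:.2f} LR-={lr_neg:.2f}")
# → sens=78.6% spec=37.2% PPV=2.6% NPV=98.8% LR+=1.25 LR-=0.58
```

Note that the high NPV (98.8%) mostly reflects the low 2.1% baseline VTE incidence rather than the score itself, which is why the likelihood ratios, both close to 1, are the more informative summary here.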

The discriminative performance was highest for the simplified Geneva score, with an AUC of 58.1% (95% CI, 55.4%-60.7%) and lowest for the original Geneva score, with an AUC of 53.8% (95% CI, 51.1%-56.5%), but overall poor for all RAMs ( Figure 2 ; eTable 3 in Supplement 1 ). Calibration was acceptable for all RAMs (eTable 3 in Supplement 1 ).
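As a reminder of what these AUC values mean, an AUC is the probability that a randomly chosen patient with VTE has a higher score than a randomly chosen patient without VTE, so 50% is chance-level discrimination and 100% is perfect separation. The toy rank-based (Mann-Whitney) computation below illustrates this; it ignores censoring and competing events, which the time-dependent ROC analysis used in the study does account for.

```python
# Rank-based (Mann-Whitney) AUC on toy score data: the fraction of
# (event, non-event) pairs in which the event patient scores higher,
# counting ties as half.
def auc(scores_events, scores_nonevents):
    pairs = [(e, n) for e in scores_events for n in scores_nonevents]
    wins = sum(1.0 if e > n else 0.5 if e == n else 0.0 for e, n in pairs)
    return wins / len(pairs)

# Identical score distributions in both groups give chance-level AUC ...
print(auc([3, 2, 4], [3, 2, 4]))  # → 0.5
# ... while complete separation gives a perfect AUC.
print(auc([8, 9, 7], [2, 3, 1]))  # → 1.0
```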

In subgroup analyses of the 510 patients without pharmacologic thromboprophylaxis, VTE within 90 days occurred in 6 patients (1.2%) (eTable 4 in Supplement 1 ). The accuracy of the RAMs did not improve compared with the results in the overall population (eTable 5 in Supplement 1 ).

In an analysis stratified by antiplatelet treatment, VTE occurred in 9 of 420 patients (2.1%) with antiplatelet treatment and 19 of 932 patients (2.0%) without antiplatelet treatment (eTable 6 in Supplement 1 ). The accuracy of the RAMs remained similar irrespective of antiplatelet use (eTable 7 in Supplement 1 ). In a sensitivity analysis investigating different outcome scenarios in the 10 patients lost to follow-up, discriminative performance for all RAMs slightly increased when assuming that VTE occurred in patients at high risk only, with a maximum AUC of 61.9% for the Padua score (eTable 8 in Supplement 1 ).

In this prospective, multicenter cohort of medical inpatients, the simplified Geneva score showed a similarly poor prognostic accuracy and discriminative performance for predicting VTE compared with the original Geneva score, the Padua score, and the IMPROVE score. The cumulative incidence of VTE within 90 days for low-risk and high-risk categories of all 4 RAMs did not significantly differ. Overall, our results suggest that existing RAMs do not perform particularly well in identifying medical inpatients at risk for VTE.

We found no association between risk group and time to a first VTE event for any of the 4 RAMs. Although the overall incidence of VTE within 90 days was similar in our study compared with the derivation cohorts of the Geneva score, Padua score, and IMPROVE score, VTE incidence among those in the low-risk categories of our validation cohort was surprisingly high (1.1%-1.8% vs 0.3%-0.6% in the derivation cohorts of the RAMs) 11 - 13 and above the 1% threshold that has been suggested for provision of thromboprophylaxis. 7 A potential explanation for the comparatively high VTE incidence in the low-risk groups is the differing proportion of patients receiving pharmacologic thromboprophylaxis. 11 , 17 The proportion of patients in the low-risk category receiving thromboprophylaxis was lower in our cohort (37.9% [174 of 459] based on the original Geneva score) 24 than in the derivation cohort (49%) of the original and simplified Geneva score, 17 or other large cohorts. 1 , 10

The sensitivities of the RAMs based on our study were lower than in previous cohorts. 11 , 25 , 26 For example, sensitivity ranged from 73% to 90% (for the original and simplified Geneva scores, Padua score, and IMPROVE score) in a post hoc analysis from a Swiss prospective cohort, and from 74% to 92% (for the Caprini score, IMPROVE score, and Padua score) in a retrospective analysis from the French PREVENU (Prevention of Venous Thromboembolism Disease in Emergency Departments) study. 17 , 26 Sensitivity is critical in RAMs to select patients for whom a preventive intervention (ie, thromboprophylaxis) can be safely forgone. 27 However, specificity should also be considered: use of the simplified and original Geneva scores to target thromboprophylaxis prescription may result in overtreatment due to their low specificity and high sensitivity.

The discriminative performance for 90-day VTE was poor in our study, with an AUC of 53.8% to 58.1%. Although some previous validation studies (based on retrospective data or post hoc analyses of prospectively collected data) showed better discriminative performance (AUC >70%), 16 , 17 poor results had also been reported in an external validation study of RAMs (including the Padua score and the IMPROVE score) using medical record data from Michigan hospitals, 28 as well as the retrospective analysis of the PREVENU study. 26

There are several potential explanations for the different results of our study and derivation or other validation studies of the Geneva score, Padua score, and IMPROVE score. 16 , 17 First, VTE risk in low- and high-risk groups based on RAMs can be overestimated or underestimated by differing thromboprophylaxis use in the risk groups. Second, differences could be due to the definition of immobility, which differs between RAMs. 29 Subjective estimation of mobility is inaccurate, 30 and often surrogates such as the ability to go to the bathroom are used to quantify mobility. 11 , 16 , 17 As mobility is a highly weighted item in all these RAMs, objective mobility measures (eg, from accelerometry) may improve estimation of VTE risk. Third, data of derivation studies and some validation studies have been collected more than 10 years ago, 11 , 13 , 16 , 18 and inpatient care practices have changed within the last decade (eg, with shorter hospital stays, intensified in-hospital mobilization), with a direct association with VTE risk. 14 Fourth, although our cohort is generally comparable with the population of the derivation studies (eTable 9 in Supplement 1 ), there may be unmeasured variations in characteristics associated with VTE risk. For example, approximately one-third of the patients in our cohort received antiplatelet therapy, while these data are not reported for previous derivation and validation studies. 11 - 13 , 16 , 17 However, antiplatelet treatment did not have a relevant association with accuracy measures in our study. In addition, subsequent hospitalizations and subsequent use of thromboprophylaxis may be associated with 90-day VTE risk.

Given the overall limited accuracy and prognostic performance of all analyzed RAMs, our results cast doubt on their reliability for identifying medical inpatients at risk of VTE for whom thromboprophylaxis is warranted. Even though guidelines, including those from the American College of Chest Physicians or the National Institute for Health and Care Excellence, encourage the use of RAMs to identify medical inpatients at high VTE risk, 7 , 14 our results emphasize the need for more accurate risk prediction strategies, as already advocated by others. 8 For example, it is unclear whether the use of objective mobility measures or artificial intelligence–based models could improve VTE risk prediction. 19 , 31 In addition, the clinical benefit associated with applying RAMs is unclear. 8 , 25 Except for a single randomized trial that showed a reduction in VTE rates with a computer-alert program incorporating the Kucher RAM, 32 no prospective comparative study has, to our knowledge, demonstrated improved clinical or economic outcomes with the application of RAMs in clinical practice. 33 The overall necessity of VTE risk stratification to implement targeted thromboprophylaxis may be questioned in light of the uncertain net clinical benefit associated with thromboprophylaxis for medical inpatients. 34 , 35 Randomized clinical trials conducted more than 15 years ago showed up to 63% reductions in VTE with pharmacologic thromboprophylaxis compared with placebo, although the results were mainly due to a reduced risk of asymptomatic VTE of unclear clinical relevance. 6 , 36 , 37 The recently published SYMPTOMS (Systematic Elderly Medical Patients Thromboprophylaxis: Efficacy on Symptomatic Outcomes) trial did not show significant differences in symptomatic VTE at 30 days in more than 2500 older medical inpatients randomized to enoxaparin or placebo, although the trial was underpowered due to premature termination. 38 In addition, thromboprophylaxis does not reduce mortality in medical inpatients, 5 but may be associated with a small increase in bleeding risk based on results of a meta-analysis, 4 although we did not find such an association in data from our cohort. 39

Our study has some limitations. First, results may have been affected by thromboprophylaxis use, and unadjusted accuracy measures are therefore difficult to interpret. To address this limitation, we conducted a subgroup analysis among patients without thromboprophylaxis, but the size of the subpopulation was small and thus the analysis was underpowered. Thromboprophylaxis was not assigned at random, which may have had a negative association with measures of accuracy and discrimination due to lower actual VTE rates among patients at high risk for VTE. However, the potential for this bias is reduced by the relevant proportion of underuse of thromboprophylaxis for patients at high risk and overuse for patients at low risk, as previously demonstrated in our cohort. 11 , 24 , 40 In addition, all 4 RAMs were derived in populations of patients with or without thromboprophylaxis, and withholding thromboprophylaxis to perform a derivation or validation study would be unethical. Second, the number of VTE events was low, with large 95% CIs around the estimates. Even though differences in VTE risk between low-risk and high-risk groups were not statistically significant, they may still be clinically relevant. Third, given that patients were recruited from Swiss university hospitals, our results may not be generalizable to health care settings outside of high-income countries or White populations. Fourth, patients at high risk may have been underrepresented in our cohort, given that patients screened but excluded were older than those included, although this may be mostly explained by exclusion of populations for whom RAMs are irrelevant (eg, those receiving therapeutic anticoagulation or with a life expectancy <30 days). Fifth, we did not use specific criteria to define recurrent deep vein thrombosis. 41 However, only 1 deep vein thrombosis event occurred in a patient with prior VTE.
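The imprecision introduced by a low event count can be illustrated with a Wilson score confidence interval for an event proportion; the counts below are hypothetical and not taken from the RISE cohort:

```python
# Illustrative only: 95% Wilson score interval for a VTE event proportion.
# Event count and denominator are hypothetical, not from this study.
import math

def wilson_ci(events, n, z=1.96):
    """95% Wilson score confidence interval for a proportion; remains
    well-behaved even when the number of events is small."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical example: 8 VTE events among 500 high-risk inpatients
lo, hi = wilson_ci(8, 500)
print(f"risk = {8/500:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```

Even with 500 patients, 8 events yield an interval spanning roughly 0.8% to 3.1% around an observed risk of 1.6%, which illustrates why differences between risk groups can be clinically relevant yet fail to reach statistical significance.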

To our knowledge, this cohort study provides the first prospective head-to-head comparison of validated RAMs. The easy-to-use simplified Geneva score showed similarly poor performance in predicting the risk for hospital-acquired VTE among medical inpatients compared with other validated RAMs. Overall, accuracy and prognostic performance of all analyzed RAMs were limited, questioning their clinical usefulness. More accurate strategies to predict VTE risk among medical inpatients as well as randomized studies evaluating the effect of risk assessment strategies are needed.

Accepted for Publication: March 4, 2024.

Published: May 10, 2024. doi:10.1001/jamanetworkopen.2024.9980

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2024 Häfliger E et al. JAMA Network Open.

Corresponding Author: Christine Baumgartner, MD, MAS, Department of General Internal Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse, CH-3010 Bern, Switzerland ( [email protected] ).

Author Contributions: Dr Rossel had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Drs Méan and Baumgartner are co–last authors.

Concept and design: Häfliger, Aujesky, Méan, Baumgartner.

Acquisition, analysis, or interpretation of data: Häfliger, Kopp, Darbellay Farhoumand, Choffat, Rossel, Reny, Méan, Baumgartner.

Drafting of the manuscript: Häfliger, Kopp, Rossel, Méan, Baumgartner.

Critical review of the manuscript for important intellectual content: Häfliger, Darbellay Farhoumand, Choffat, Rossel, Reny, Aujesky, Méan, Baumgartner.

Statistical analysis: Häfliger, Rossel.

Obtained funding: Méan, Baumgartner.

Administrative, technical, or material support: Kopp, Darbellay Farhoumand, Choffat, Reny, Aujesky, Méan, Baumgartner.

Supervision: Darbellay Farhoumand, Reny, Méan, Baumgartner.

Conflict of Interest Disclosures: None reported.

Funding/Support: The RISE cohort was funded by the Swiss Society of General Internal Medicine (SGAIM) Foundation, Novartis Biomedical Research Foundation, Swiss Heart Foundation, Chuard Schmid Foundation, and Gottfried und Julia Bangerter-Rhyner Stiftung.

Role of the Funder/Sponsor: The funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Data Sharing Statement: See Supplement 2 .

Additional Contributions: We thank the following persons for their help with data collection: Barbara Kocher, MD, and Damiana Pulver, Department of General Internal Medicine, Inselspital, Bern University Hospital, University of Bern; Sarah Bonjour, Pauline Julliard, and Sophie Marclay, Division of Internal Medicine, Department of Medicine, Lausanne University Hospital; and Pauline Gosselin, PhD, and Karolina Luczkowska, PhD, Division of General Internal Medicine, Department of Medicine, Geneva University Hospitals. The compensation for their work was supported by the funding sources.


Review article: Modernizing and harmonizing regulatory data requirements for genetically modified crops—perspectives from a workshop.


  • 1 Corteva™ Agriscience, Indianapolis, IN, United States
  • 2 CropLife International, Arlington, VA, United States
  • 3 BASF Corporation, Research Triangle Park, NC, United States
  • 4 Corteva™ Agriscience, Johnston, IA, United States
  • 5 Syngenta Seeds LLC, Research Triangle Park, NC, United States
  • 6 Bayer Crop Science, Chesterfield, MO, United States

Genetically modified (GM) crops that have been engineered to express transgenes have been in commercial use since 1995 and are annually grown on 200 million hectares globally. These crops have provided documented benefits to food security, rural economies, and the environment, with no substantiated case of food, feed, or environmental harm attributable to cultivation or consumption. Despite this extensive history of advantages and safety, the level of regulatory scrutiny has continually increased, placing undue burdens on regulators, developers, and society, while reinforcing consumer distrust of the technology. CropLife International held a workshop at the 16th International Society of Biosafety Research (ISBR) Symposium to examine the scientific basis for modernizing global regulatory frameworks for GM crops. Participants represented a spectrum of global stakeholders, including academic researchers, GM crop developers, regulatory consultants, and regulators. Concurrently examining the considerations of food and feed safety, along with environmental safety, for GM crops, the workshop presented recommendations for a core set of data that should always be considered, and supplementary (i.e., conditional) data that would be warranted only on a case-by-case basis to address specific plausible hypotheses of harm. Then, using a case study involving a hypothetical GM maize event expressing two familiar traits (insect protection and herbicide tolerance), participants were asked to consider these recommendations and discuss if any additional data might be warranted to support a science-based risk assessment or for regulatory decision-making. The discussions during the workshop highlighted that the set of data to address the food, feed, and environmental safety of the hypothetical GM maize, in relation to a conventional comparator, could be modernized compared to current global regulatory requirements. If these scientific approaches to modernize data packages for GM crop regulation were adopted globally, GM crops could be commercialized in a more timely manner, thereby enabling development of more diverse GM traits to benefit growers, consumers, and the environment.

1 Introduction

Genetically modified (GM) crops that have been engineered to express transgenes have been commercially cultivated since 1995 and are annually grown on 200 million hectares globally. These crops have delivered important societal benefits, such as increased crop yields, resilience to adverse growing conditions, reduced tillage leading to improved soil health, reduction in the need for crop protection inputs, preservation of natural resources, and improved rural economies ( Klümper and Qaim, 2014 ; Dively et al., 2018 ; Zilberman et al., 2018 ; Smyth, 2020 ; Ala-Kokko et al., 2021 ; Macall et al., 2021 ; Peshin et al., 2021 ; Brookes, 2022a ; Brookes, 2022b ; Brookes, 2022c ). These benefits have led to rapid adoption of GM technology for agricultural production, including 80% of global cotton and 73% of global soybean. One-third of global maize production includes GM traits for herbicide tolerance, insect protection, or both ( AgbioInvestor, 2023 ). GM traits have been introduced in other row crops such as oilseed rape, sugar beet, and alfalfa and, at a smaller scale, in specialty crops such as apples, eggplant, squash and potatoes ( ISAAA, 2020 ). Hundreds of studies have been conducted to assess the safety of GM crops, and there have been no substantiated cases of resulting harm to people or livestock that consume GM crops or to the environment in which they are grown ( European Commission, 2010 ; Snell et al., 2012 ; Van Eenennaam and Young, 2014 ; NASEM, 2016 ).

Despite this track record of safety and benefits, regulatory data requirements for approval and commercialization of GM crops have continued to grow globally. GM technology is primarily limited to major global crops, like maize and soybean, and to major input traits, such as insect protection and herbicide tolerance. While there are many efforts underway to use GM technology for other traits and to improve minor crops, especially for small holders in the developing world ( David, 2009 ; Shelton, 2021 ; Woodruff, 2024 ), securing the regulatory approvals to enable cultivation and avoid potential trade disruptions often presents insurmountable challenges to commercialization. Only a few large multinational developers can afford the US$115 million cost and also persist for the 16 years that it currently takes, on average, to bring a new trait to the global market. More than one-third of those costs, and more than one-half of that time, are taken by the regulatory process ( AgbioInvestor, 2022 ). These extensive and complex regulatory systems also mean that governments must invest significant resources in developing and maintaining regulatory bodies staffed with sufficient people and expertise, creating a burden on taxpayers and society. Countries that cannot afford such an investment are missing out on the benefits of GM crops.

CropLife International and its member companies that develop GM crops (BASF, Bayer Crop Science, Corteva™ Agriscience, and Syngenta) have proposed a modernized regulatory framework and streamlining of data requirements for GM crops that is based on scientific rationale and builds on the 25 years of experience with the technology, and the history of its safe use ( Mathesius et al., 2020 ; Anderson et al., 2021 ; Bachman et al., 2021 ; Brune et al., 2021 ; Goodwin et al., 2021 ; McClain et al., 2021 ; Roper et al., 2021 ; Waters et al., 2021 ). The development of the proposed framework was motivated and guided by considering four key questions. 1) Are today’s regulations for GM crop approvals risk-proportionate? 2) Do today’s data requirements act as an unnecessary barrier to beneficial innovation? 3) How can knowledge and experience accumulated over the last 25 years inform modernization of regulations? 4) Can data requirements be streamlined and harmonized across countries and authorities? These questions were used to guide the determination of the types of data that are necessary to ensure GM crops are developed and deployed without increased risks for food and feed safety or the environment compared to conventional crops. Under this framework, core data, which are important for the problem formulation step of the risk assessment of the GM crop, were identified. The core data are used for problem formulation to identify plausible cause-and-effect hypotheses of harm from the GM crop. Depending upon the outcome of the problem formulation for a specific crop by trait combination, additional supplementary (i.e., conditional) studies may be needed, on a case-by-case basis, to analyze any plausible risk identified. Figure 1A outlines proposed core and supplemental studies for a Food and Feed Safety Assessment; Figure 1B outlines proposed core and supplemental studies for an Environmental Risk Assessment. CropLife International took an approach that is consistent with principles of risk assessment such that the proposed data requirements can fully inform decision-making by a regulatory agency, without the extraneous data present in many current regulatory submissions that does not meaningfully contribute to the risk assessment of the GM crop.


Figure 1. (A) proposes a set of data recommended for a science-based food and feed safety assessment for a typical GM crop and considers as core studies: basic molecular characterization, protein characterization and expression, and protein safety (i.e., history of safe use of the protein and source organism and bioinformatics to identify potential toxins and allergens). The outcomes of these core data are used to inform the problem formulation step and decide, on a case-by-case basis, which, if any, supplementary studies are needed to make a conclusion on safety ( Brune et al., 2021 ; Waters et al., 2021 ). (A) is adapted from Brune et al., 2021 and Waters et al., 2021 . (B) proposes a set of data recommended for a science-based environmental risk assessment for a typical GM crop and considers as core data: understanding the receiving environment and the basic biology of the unmodified plant; assessing the agronomic similarity of the GM crop to its conventional counterparts (i.e., agronomic comparative assessment); and understanding the intended trait of the GM plant and assessment of how the intended trait may lead to environmental harm. The core data should be used first to inform the problem formulation. If a conclusion cannot be made about the pathway to harm using the core data, additional case-by-case hypothesis-driven supplementary studies should be considered ( Anderson et al., 2021 ).

To further examine whether CropLife International’s proposed modernized data requirements are sufficient for food and feed safety assessments and for environmental risk assessments, a workshop was held at the 16th International Society of Biosafety Research (ISBR) Symposium (St. Louis, USA) in 2023, using a case study of a hypothetical GM maize event containing two familiar transgenic traits (herbicide resistance and insect protection). The workshop participants were charged with considering whether the proposed data in the case study are scientifically both necessary and sufficient to determine the food, feed, and environmental safety of the hypothetical GM crop. CropLife International member representatives that served as moderators during the workshop authored this publication to report the outcomes and summarize the discussions that took place among the participants. The participants varied in their backgrounds and prior experience with risk assessment and included individuals from regulatory agencies, technology developers, consultant groups, and academia. A wide range of geographical areas was represented.

2 Case study description

For the case study, a hypothetical GM maize event was presented to the workshop participants for evaluation. The hypothetical event was intentionally simple for this exercise (i.e., a familiar crop with traits that are similar to many transgenic events that have already been reviewed and approved by regulatory agencies globally, with several in commercial production for many years), which enabled the participants to analyze in greater depth the need for data that is routinely submitted but may not contribute to the safety assessment. More specifically, a maize ( Zea mays ) event containing a single insertion encoding for two proteins from a single T-DNA introduced using standard disarmed Agrobacterium tumefaciens-based transformation was described. The two hypothetical traits provide protection against lepidopteran pests and tolerance to treatment with glyphosate herbicide, using a hypothetical Cry1 protein from Bacillus thuringiensis ( Bt ) and a hypothetical EPSPS protein variant isolated from maize, respectively. The workshop participants were asked to separately consider a food and feed safety assessment or an environmental risk assessment for this same hypothetical GM maize event. Additional distinctions between the presentation of the case study for the different assessments are outlined below.

2.1 Food and feed safety assessment

For the Food and Feed Safety Assessment, the results from hypothetical evaluations of core data on the characterization and safety assessment of the event were provided (summarized in Table 1 ). Throughout this paper, the term ‘data’ refers to both the results of experiments or studies as well as information gathered from literature reviews, consensus documents and other similar sources. As described in Waters et al. (2021) , the core data for a food and feed safety assessment are: 1) molecular characterization, 2) protein characterization, and 3) protein safety (allergenicity and toxicity). The results of the molecular characterization demonstrated that there was an insertion of a single T-DNA sequence into the maize genomic DNA without any vector backbone sequences. There were no changes in the intended protein coding sequence, and constitutive expression of both proteins was driven by familiar promoter elements (35S from cauliflower mosaic virus and ubiquitin promoter from Zea mays , respectively). Finally, the inserted DNA and the traits were indicated as being stable over three generations. The protein characterization data given to participants indicated that the molecular weight and amino acid sequence were as expected for both proteins. The function of the hypothetical Cry1 protein was established as having activity limited to target lepidopteran pest species, with no activity against other insect orders. Field tolerance to glyphosate from the hypothetical EPSPS protein variant was also as expected. The protein safety data indicated that both proteins are similar to proteins that have a history of safe use for food and feed; neither EPSPS proteins nor Cry proteins have any known toxicity or allergenicity concerns. Bioinformatics analysis comparing the amino acid sequences of both hypothetical proteins to a protein database also demonstrated that neither protein is related to any protein of toxicological concern nor related to any allergens in the qualified allergen database.
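Such allergenicity screens commonly apply the Codex Alimentarius sliding-window criterion, flagging any 80-amino-acid window that shares more than 35% identity with a known allergen. A minimal sketch of the window comparison, using short made-up sequences and a correspondingly short window purely for illustration:

```python
# Illustrative only: sliding-window percent-identity screen of the kind used
# in allergenicity bioinformatics. The Codex criterion uses 80-residue
# windows against a curated allergen database (typically via alignment
# tools such as FASTA); the 8-residue window and toy sequences below are
# made up so the example stays self-contained.

def max_window_identity(query, allergen, window=8):
    """Best percent identity of any ungapped window of `query`
    against any equal-length window of `allergen`."""
    best = 0.0
    for i in range(len(query) - window + 1):
        q = query[i:i + window]
        for j in range(len(allergen) - window + 1):
            a = allergen[j:j + window]
            identity = sum(x == y for x, y in zip(q, a)) / window
            best = max(best, identity)
    return best

query = "MKTAYIAKQRQISFVK"     # hypothetical trait protein fragment
allergen = "GGTAYIAKQSFVKLLN"  # hypothetical allergen fragment
print(f"max window identity: {max_window_identity(query, allergen):.1%}")
```

A real screen would flag the query for follow-up only if some full 80-mer window exceeded the 35% identity threshold against an entry in the qualified allergen database.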


Table 1. Summary of food and feed safety assessment core data of the hypothetical GM maize.

A familiar crop with familiar traits and minimal genetic disruptions was used for the workshop to promote discussion of what data is really needed to establish the food and feed safety of a GM crop event. It was also noted to workshop participants that extensive protein expression data in the plant was not obtained, nor was detailed proximate or nutrient composition data included. Further, while it was established that bioinformatics confirmed no homology to known allergens or toxins, no exposure assessments, no animal feeding studies, or other more direct assessments of potential for harm from the hypothetical event were included. As presented, the case study stated that considering 1) the assessment from the core data, 2) the familiarity of the crop and traits, and 3) the lack of direct interaction with other metabolic pathways of the plant, there was no hypothesis of food and/or feed safety risks for the new GM maize crop, and therefore additional supplementary data are not warranted to establish food and feed safety, in accordance with the approach established in Brune et al. (2021) , McClain et al. (2021) and Roper et al. (2021) .

2.2 Environmental risk assessment

For Environmental Risk Assessment (ERA), the intention of the case study was to model how problem formulation and core data should be leveraged to inform ERA of a GM crop for cultivation safety. Problem formulation is a process used in the ERA to develop plausible pathways to harm resulting from cultivation of the GM crop. Problem formulation first considers core data, then considers other data on a case-by-case basis if it is deemed necessary to inform the risk assessment. For ERA, core data includes information related to the receiving environment, description of basic biology of the unmodified plant, assessment of the agronomic similarity of the GM crop to its conventional counterparts, and characterization of the intended traits of the GM crop (summarized in Table 2 ). For the purpose of the case study, the protection goal was broadly stated as protection of biodiversity, specifically protection of beneficial or charismatic species. For the purposes of the workshop, the core characteristics of the event as described for the food and feed assessment were considered the same (e.g., molecular features), with additional information focused on agronomic and environmental aspects provided to guide the ERA discussion.


Table 2. Summary of environmental risk assessment core data.

The participants were presented with the following set of core data (summarized in Table 2 ) and were asked to consider if a plausible pathway to harm could be developed related to weediness, invasiveness, gene flow to wild relatives or hazard to non-target organisms: 1) assessment of the receiving environment indicating no wild relatives of maize present in the cultivation country and no changes to the standard agronomic practices relative to non-modified maize; 2) assessment of the basic biology of maize, using consensus documents, demonstrating non-modified maize has no weediness characteristics and requires human intervention for propagation and survival; 3) multilocation field trial data demonstrating hypothetical maize was agronomically similar to non-modified maize; and 4) assessment of the intended phenotype (i.e., insect protection and herbicide tolerant traits are not intended to increase fitness or survival in the environment).

Based on the core data assessed, the case study proposed that there are no plausible hypotheses for how cultivation of the hypothetical maize could result in environmental harm related to weediness, invasiveness, and gene flow to wild relatives. Thus, additional data will not further contribute to meaningful assessment of environmental safety. However, the case study proposed that a plausible pathway to harm to non-target organisms could be developed based on the intended insect protection phenotype. The hypothetical Cry1 protein was presented as providing protection against specific lepidopteran insect pests (European corn borer, Asian corn borer, Southwestern corn borer, corn earworm, and fall armyworm).

The mode of action of Cry proteins in GM crops is well-documented ( Bravo et al., 2007 ; OECD, 2007 ). In this case study, additional supplemental protein expression data and non-target organism hazard data were provided to the participants, and they were asked to consider if additional plausible pathways to harm could be developed. The set of supplemental data (summarized in Table 3 ) was as follows: 1) multilocation field trial data measuring the concentration of the hypothetical Cry1 protein in several plant tissues to inform exposure assessment; 2) an exposure assessment for different non-target organisms to consider the likelihood and magnitude of exposure to the hypothetical Cry1 protein; and 3) results of non-target organism Tier I hazard studies for several surrogate species representing different taxonomic orders (e.g., ladybird beetle, a soil dwelling organism, and a non-target predator) conducted with the Cry1 protein in the diet.


Table 3. Summary of environmental risk assessment supplementary data.

The multilocation field trial data showed that the Cry1 protein was only detectable (above the limit of detection) in the leaf and whole plant, with the highest concentration found in R1 leaf. The protein was below the limit of detection of the analytical assay in pollen and root. Based on the tissue expression, the exposure assessment concluded that since there is no expression of the Cry1 protein in pollen, there would be no route of exposure to non-target pollen-feeding organisms (e.g., honeybee). Finally, the Tier I hazard studies indicated that no hazard was observed at concentrations exceeding 10x the expected environmental concentration.

Usually, the assessment of adverse effects in non-target organisms follows a tiered approach that starts with laboratory studies at levels that exceed worst-case exposure conditions ( Romeis et al., 2011 ). Tier I laboratory studies with non-target organisms are typically conducted using at least 10X the worst-case expected environmental concentration. In this case, the results of the hypothetical Tier I dietary studies indicated no hazard (i.e., adverse effects) at concentrations that exceeded 10x the worst-case expected environmental concentration, and thus a conclusion that evidence is sufficient without conducting additional hazard testing was indicated. Based on data from the exposure assessment and non-target hazard assessment studies, the case study proposed that there were no plausible pathways to harm to non-target organisms due to lack of exposure and/or lack of risk because there were no adverse effects at concentrations that exceeded 10X the worst-case expected environmental concentration. Participants were asked to consider whether they agreed with the conclusions proposed by the case study based on core data and additional supplementary data related to protein expression, non-target organism exposure, and non-target organism hazard.
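The Tier I decision rule described above reduces to a simple margin calculation between the highest dietary concentration showing no adverse effects and the worst-case expected environmental concentration (EEC); the values below are hypothetical and not drawn from any real study:

```python
# Illustrative only: Tier I exposure-margin check. NOEC (no-observed-effect
# concentration from the dietary hazard study) and worst-case EEC values
# below are hypothetical, not from any real Cry1 dataset.

def tier1_margin(noec_ug_g, eec_ug_g, threshold=10.0):
    """Return the exposure margin (NOEC / EEC) and whether it meets the
    commonly applied 10x Tier I threshold."""
    margin = noec_ug_g / eec_ug_g
    return margin, margin >= threshold

# Hypothetical Cry1 protein: NOEC 250 µg/g diet, worst-case EEC 18 µg/g
margin, passes = tier1_margin(250.0, 18.0)
print(f"margin = {margin:.1f}x, meets 10x threshold: {passes}")
```

When the margin meets the threshold and no adverse effects are observed, the tiered approach supports concluding the assessment without higher-tier testing; a failed margin or observed effects would instead trigger Tier II studies under more realistic exposure conditions.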

Additional information such as molecular data to confirm that the insert is an intact single copy, stable across generations, and that there is no insertion of DNA from the plasmid backbone were not provided in the ERA case study. These additional data for product characterization have historically been submitted to regulators as part of cultivation applications, but they are not directly relevant to ERA ( Anderson et al., 2021 ).

3 Learnings from breakout group discussions

After participants attended the introductory presentation session of the workshop, they were distributed into smaller discussion groups of approximately 10 people, with CropLife International member representatives serving as moderators. Each participant had the opportunity to choose either the Food and Feed Safety Assessment or the Environmental Risk Assessment, depending on their respective areas of interest.

The goal of the smaller group discussion sessions was to allow participants to engage in deeper conversations about the proposed modernized paradigm for a risk assessment of a GM crop. Discussions were aided by the distribution of a printed booklet that included a description of the hypothetical GM maize event and the data collected, and that outlined the key concepts of using the core data for a Food and Feed Safety Assessment and Environmental Risk Assessment. Moderators provided some time for the participants to review the information and then introduced the case study by giving a brief overview of the information provided in each data section of the case study. Participants were encouraged to provide feedback and to bring up questions and/or comments about topics/elements of the case study that they considered not sufficiently covered by the data provided. They were also asked to complete a worksheet allowing for comments on the specific steps of the assessment process.

Discussions during this small group session were productive and highly informative. Overall, the participants were engaged, willing to discuss, and mostly supportive of the general assessment framework of primarily using core data and only using further assessments on a case-by-case basis.

A summary of key points from the breakout group discussions is shared below. This section is not intended to be a complete summary of the discussion, rather the authors have captured points of interest with an emphasis on points that are worth considering for future workshops and discussions on this topic.

3.1 Food and feed safety assessment

In the small group session, participants were asked to consider 1) the assessment from the core studies (see Table 1 ), 2) the familiarity of the crop and traits, and 3) the lack of direct interaction with other metabolic pathways of the plant, and then decide whether there was a hypothesis of food and/or feed safety risks for the new GM maize crop. Because of these considerations, the position for the case-study was that, for the hypothetical event, additional supplemental studies are not warranted to establish food and feed safety, and the participants discussed whether they agreed with this position.

Below are some key feedback and questions captured during the workshop regarding the proposed approach for the assessment of Food and Feed Safety of the hypothetical GM maize event.

3.1.1 Molecular characterization (transformation method, transformation construct, DNA insert characterization)

Overall, the participants agreed that the proposed molecular characterization core data are aligned with what is currently provided and that the information was sufficient to inform a food and feed safety assessment. One potential exception to the core data package that was discussed is data demonstrating that the insert is stable over at least three generations. The participants suggested that this study could be considered supplemental, and not necessarily required as part of the core data package, if the insert is demonstrated to be inserted into the chromosome without interrupting endogenous genes or regulatory elements, and there is no other reason to expect that the insert might be unstable (e.g., an insertion site near a transposon). There was some discussion that three generations of data may not be considered enough by all regulatory agencies and that additional generations could be required for polyploid crop species. Additionally, participants raised questions about Agrobacterium-mediated transformation not being targeted and discussed providing data on whether any endogenous genes were modified. It was also noted by workshop participants that the use of Next-Generation Sequencing (NGS) to characterize the insert is not yet accepted by all regulatory agencies, but there was also recognition of the utility of NGS in providing a more comprehensive characterization of the insert and the insertion site compared to traditional methods (e.g., Southern blots).
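The utility of NGS noted above can be illustrated with a toy junction-read check: each boundary between the genome and the inserted DNA produces a distinctive junction sequence in sequencing reads, so counting distinct junction flanks gives a rough indication of the number of insertion loci. This is a simplified sketch with synthetic reads and a hypothetical insert edge, not a real characterization pipeline:

```python
def distinct_junctions(reads, insert_edge, flank_len=6):
    """Collect the genomic flank observed immediately upstream of the
    insert edge in each read; one distinct left flank suggests one
    insertion locus (toy model, exact-match only)."""
    flanks = set()
    for read in reads:
        pos = read.find(insert_edge)
        if pos >= flank_len:  # need a full flank before the edge
            flanks.add(read[pos - flank_len:pos])
    return flanks

INSERT_EDGE = "GGTACCCA"  # hypothetical first bases of the inserted construct
reads = [
    "TTGCAAGGTACCCATTT",  # junction read: genomic flank TTGCAA, then insert
    "AATTGCAAGGTACCCAT",  # same junction seen from a different read start
    "CCCCCCCCCCCC",       # read without any insert sequence
]
print(distinct_junctions(reads, INSERT_EDGE))  # {'TTGCAA'} -> one locus
```

A real analysis would use aligned reads and tolerate sequencing errors, but the same logic underlies confirming a single, intact insertion site.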

3.1.2 Protein characterization (molecular weight, protein sequence confirmation, protein function)

Participants agreed that the protein characterization information was sufficient to inform the food and feed safety assessment, with some discussion around whether a registrant would always be able to provide what is required, as some proteins may be more challenging to characterize (e.g., difficulties in isolating the proteins in an active form, generating specific antibodies, or generating SDS-PAGE and Western blot data). A question was also raised about maize codon optimization and whether the protein would still be considered the same as the native version. Future workshops can reinforce that maize codon optimization of the GM trait gene does not alter the trait protein sequence and thus should not change the safety profile of the protein. Discussion also occurred regarding familiarity with promoters and their relationship to expression levels. The participants discussed whether there might be a need to better understand protein expression levels for unfamiliar promoters, and whether increased expression levels might raise a concern of potentially increased allergenicity risk.
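The codon-optimization point can be made concrete with a short sketch: two coding sequences that differ at the DNA level translate to the same amino acid sequence, so the trait protein (and hence its safety profile) is unchanged. The sequences and the truncated codon table below are illustrative only:

```python
# Minimal sketch: codon optimization changes the DNA but not the protein.
# Toy sequences; a real gene would use the full 64-codon table.
CODON_TABLE = {
    "ATG": "M", "AAA": "K", "AAG": "K", "GTT": "V", "GTG": "V",
    "CTA": "L", "CTG": "L", "TAA": "*", "TGA": "*", "TAG": "*",
}

def translate(dna):
    """Translate a coding sequence codon by codon, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

native    = "ATGAAAGTTCTATAA"  # hypothetical native coding sequence
optimized = "ATGAAGGTGCTGTAA"  # same protein, maize-preferred codons (assumed)

assert native != optimized                                   # DNA differs
assert translate(native) == translate(optimized) == "MKVL"   # protein identical
```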

3.1.3 Protein safety/toxicology/allergenicity (background, source, history of safe use, bioinformatics)

Participants agreed that the EPSPS protein information for safety was sufficient to inform the food and feed risk assessments, but questions were raised about the digestibility and heat stability of the Cry proteins. There was also discussion regarding how similar a protein would need to be to a known protein to be considered familiar. Additionally, concerns were raised in the small group discussion about the limited protein expression data provided in the case study as it related to an exposure assessment. In response, the moderators noted that an exposure assessment is not necessary when no hazard is identified from the proteins. However, when a hazard is identified, protein expression levels are needed to enable assessment of potential exposure ( Brune et al., 2021 ).
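Bioinformatics screening for allergenicity (part of this section's scope) commonly includes a search for short identical peptide matches between the newly expressed protein and known allergens; an eight-contiguous-amino-acid identity criterion is often cited in Codex-style guidance. A minimal sketch against a hypothetical two-entry allergen list (toy sequences, not a full FASTA-style alignment):

```python
def contiguous_matches(query, allergens, k=8):
    """Screening-level identity check: report every k-mer of the query
    protein found verbatim in any allergen sequence."""
    # Ordered, de-duplicated k-mers of the query protein.
    kmers = list(dict.fromkeys(query[i:i + k] for i in range(len(query) - k + 1)))
    hits = []
    for name, seq in allergens.items():
        for mer in kmers:
            if mer in seq:
                hits.append((name, mer))
    return hits

# Hypothetical sequences for illustration only.
allergen_db = {
    "allergen_A": "MASTPEPTIDEQLLKR",
    "allergen_B": "GGVVNQPLRSTK",
}
nep = "MKVLPEPTIDEQLTTS"  # newly expressed protein (toy sequence)

print(contiguous_matches(nep, allergen_db))
# -> [('allergen_A', 'PEPTIDEQ'), ('allergen_A', 'EPTIDEQL')]
```

In practice such exact-match screens are complemented by full sequence alignments; a hit flags a protein for closer evaluation rather than establishing a hazard by itself.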

3.1.4 Additional information needed to determine event safety

It was stated by one participant that if there was a disruption of a native gene, then composition data could be requested. Discussion also occurred regarding the concept of History of Safe Use (HOSU) and the amount of data, time, and degree of similarity (e.g., consideration of minor protein sequence differences) needed to establish sufficient familiarity for something to be considered safe without additional data. One participant suggested that protein sequence data would be needed to demonstrate a HOSU and could be useful in determining the activity of the protein.

3.1.5 General feedback for food and feed safety assessment

Although participants generally agreed that the case study with a familiar crop and familiar traits is a good starting point for the discussions, several suggestions were made for further discussions: provide a case study on an unfamiliar event or protein, lay out how each study informs the safety assessment, provide more detail on the problem formulation process, and use more graphics and examples. Discussion also occurred around the challenges of communicating and making changes to the data currently provided in regulatory applications. On this topic, participants proposed emphasizing the end goal of getting needed products on the market sooner with less regulatory burden for all stakeholders, publishing more data in the scientific literature prior to submission of the application, and being ready to provide additional data upon request.

3.2 Environmental risk assessment

After introducing the case study, the CropLife International moderator described a list (provided with the case study) of the specific potential pathways to harm that are relevant to the cultivation of the hypothetical maize event, along with an explanation of how the core data can be used to sufficiently assess environmental risk. For plausible pathways to harm that may not be sufficiently addressed by the core data (i.e., potential harm to non-target organisms), a second list of potential pathways to harm specific to non-target organism (NTO) exposure was also presented.

Below are some key feedback and questions captured during the workshop regarding the proposed approach for the Environmental Risk Assessment of the hypothetical GM maize event.

3.2.1 Weediness potential

There was an overall consensus among the workshop groups that weediness can be adequately assessed using only core data. Participants agreed that there is not a plausible pathway to harm in the case study since maize is highly domesticated and volunteers will not survive without human intervention and management. One group discussed questions around the potential for dormancy, which may be a weediness trait, and whether it can be assessed in the core data (multilocation field trial; Table 2 ). It was concluded within the groups that the similarity in agronomic characteristics between the GM maize event and the non-GM maize in the case study core data is sufficient to show that the risk of weediness is highly unlikely. This follows the principle of placing risk in the context of current practice (i.e., that the modified maize will have no greater risk than that of cultivation of the non-modified maize) ( Raybould and MacDonald, 2018 ). However, one workshop group had unresolved discussions on whether a difference in agronomic performance between different geographical regions may result in differences in the risk assessment, and which specific agronomic elements are the most relevant to consider. Some participants in this group proposed scenarios in which agronomic data generated in field trials performed outside of the cultivation country may not sufficiently represent the agronomic outcomes of field trials performed within the cultivation country.

3.2.2 Gene flow potential to wild relatives

There was general consensus that there is no environmental safety concern of gene flow in the case study based on the core data because there were no wild relatives present in the hypothetical cultivating environment. There was some interest from participants in further exploring how the risk assessment and data requirements would change if the cultivation environment did contain wild relatives. Also, there was some discussion on the threshold of relatedness between the GM maize and a wild relative species that constitutes a safety concern in terms of gene flow. Ultimately, there was additional consensus that product registrants should demonstrate that there are no wild relative species that are reproductively compatible with GM maize (regardless of species relatedness) to position that there is no gene flow concern. Alternatively, if there are wild relative species in the area of cultivation, an assessment of the likelihood and consequences of trait introgression into the wild relative population may be warranted based on a problem formulation approach ( Anderson et al., 2021 ). Participants generally stressed the importance of citing published literature (e.g., accepted consensus references on crop-specific biology) as part of the core data to support the environmental risk assessment. Although it was acknowledged that gene flow will not likely occur between GM maize and wild relatives in the case study example, there was some discussion around whether gene flow may occur between the GM maize and adjacent local non-GM maize varieties and negatively impact crop integrity and biodiversity. The case study focused on assessing plausible pathways to harm related to gene flow between GM maize and sexually compatible weedy relatives. Future workshops can address concerns that were raised about coexistence of GM and non-GM cropping systems. Such a workshop may have to distinguish between environmental risks and market or socio-political concerns.
For example, countries that have landrace populations for which the genetic make-up per se is a protection goal may have societal concerns about coexistence (e.g., changes to the genetic identity of the landrace).

3.2.3 Plausible pathways to harm for non-target organisms (NTO)

All groups agreed that the only plausible pathway to harm from the case study that could not be sufficiently addressed with core data alone was the potential for harm to NTOs from exposure to the hypothetical Cry1 protein ( Table 2 ). Participants discussed the plausible pathways to harm that are specific to NTOs. There was general agreement that no additional data were needed to assess the potential for the EPSPS protein conferring the herbicide tolerance trait to cause harm to NTOs. However, participants acknowledged that public perception of herbicide tolerance traits could influence regulatory decisions and may need to be considered when determining the registrability of a GM crop. Such perceptions are not reflective of an actual risk, and the additional data generated do not inform the science-based risk assessment. For other pathways to harm, there was consensus that if there was either no hazard or no detectable exposure, then there is low risk to NTOs. For example, honeybees that may directly consume maize pollen and NTO lepidopterans that may indirectly consume maize pollen that drifts onto their host plants should have low risk in the ERA case study, since expression of the insecticidal protein in the GM maize event is below the limit of detection (LOD) in pollen tissue ( Table 2 ). It was generally accepted by workshop participants that if expression of the insecticidal protein is <LOD in tissues that might be consumed by an NTO, further toxicity testing to determine hazard is not warranted.

Participants also mostly agreed that aquatic environments generally experience minimal exposure to GM crop tissue, so additional toxicity testing is not needed for aquatic NTO species in most situations. However, some participants expressed uncertainty about whether this may be an issue if GM crops are cultivated very close to aquatic environments, which may affect exposure levels for aquatic NTO species. For NTO species where there is a plausible pathway to harm, all groups agreed that further data (an exposure assessment or NTO Tier I laboratory testing) might be needed. Discussions among participants regarding appropriate surrogate species to use for NTO testing, and to what extent test species need to match those found in the cultivation regions, were not resolved in the workshop. There was some additional discussion around the large body of scientific literature describing the surrogate species concept for testing Cry proteins and other types of plant-incorporated protectants (e.g., Romeis et al., 2011 ; Romeis et al., 2013 ; Bachman et al., 2021 ). While the terms “focal species” and “indicator species” were not discussed directly as part of the workshop, understanding protection goals and selecting appropriate surrogate or indicator species to inform the science-based assessment of risk is an important consideration ( Rose, 2007 ; Roberts et al., 2020 ). Despite the lack of consensus on species selection, there was clear alignment among participants that NTO species representatives should only be tested if there is a valid hypothesis that there is a plausible pathway to harm for that specific organism type. For this reason, NTO studies should only be conducted when hypothesis-driven ( Figure 1B ).
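The hypothesis-driven logic described above (no detectable exposure or no identified hazard means low risk and no further NTO testing) can be sketched as a small decision function. The tissue names, units, and LOD handling here are illustrative assumptions, not criteria from the workshop:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TissueExpression:
    tissue: str
    level_ng_per_g: Optional[float]  # None means below the limit of detection (LOD)

def nto_testing_warranted(consumed_tissues: List[str],
                          expression: List[TissueExpression],
                          hazard_identified: bool) -> bool:
    """A testable pathway to harm requires both plausible exposure
    (detectable protein in a tissue the NTO may consume) and an
    identified hazard; otherwise no further NTO testing is triggered."""
    detectable = {e.tissue for e in expression
                  if e.level_ng_per_g is not None and e.level_ng_per_g > 0}
    plausible_exposure = any(t in detectable for t in consumed_tissues)
    return plausible_exposure and hazard_identified

expr = [TissueExpression("leaf", 120.0), TissueExpression("pollen", None)]

# Pollinators consuming only pollen: expression < LOD, so no testing hypothesis.
assert not nto_testing_warranted(["pollen"], expr, hazard_identified=True)
# Leaf-feeding NTOs with an identified hazard: Tier I testing may be warranted.
assert nto_testing_warranted(["leaf"], expr, hazard_identified=True)
```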

3.2.4 General feedback and future considerations for ERA

Although participants agreed that a generic ERA case study is a good starting place, participants indicated that future workshops using a modified case study tailored for specific geographical regions will be even more helpful. As different countries have different sets of questions and concerns from local regulatory agencies, using more country-specific scenarios and less familiar pest-control traits in a case study may be more directly relevant in that region.

Related to gene flow, there was not a consensus about the potential for harm in the small team discussions. Future workshops would benefit from guided discussion to help develop problem formulation for gene flow. For example, it could be established as a baseline that gene flow occurs naturally in the environment, and that the potential for harm from gene flow between GM maize and local maize varieties should be compared to the potential for harm from gene flow between non-GM maize and local maize varieties ( OECD, 2023 ). Furthermore, future workshops can reinforce that if gene flow to local maize varieties is a relevant concern for a specific cultivation country, then there is a large body of literature to leverage in assessing whether additional data are needed to inform the risk assessment (see OECD, 2023 , Annex B for a recent review). Such a workshop would need to distinguish between true environmental impacts and concerns related to trade or economic issues.

Also, there were productive discussions on the topic of data transportability. Participants generally accepted the concept of transportability for lab study data. However, due to a lack of time for discussion, some unresolved questions remained regarding the transportability of field study data. Future workshops will benefit from guided discussion to help explain the principle of data transportability. An underlying principle of data transportability is that if no biologically relevant differences between a GM crop and its conventional counterparts are observed in one country or region, data from these studies can be used to inform the risk assessment in another country, regardless of agroclimatic zone ( Bachman et al., 2021 ). Following the recommendations for modernizing global regulatory frameworks for GM crops, additional agronomic data should only be collected in the local environment if there are plausible pathways for harm that cannot be fully informed by the core data.
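The transportability principle rests on showing no biologically relevant differences between the GM crop and its conventional counterpart across locations. As a simplified stand-in for the formal statistical equivalence testing used in practice, here is a sketch with hypothetical yield data and an assumed ±20% margin (both are illustrative choices, not regulatory standards):

```python
from statistics import mean

def within_equivalence_margin(gm, conv, margin=0.20):
    """Treat the GM and conventional means as comparable if the GM mean
    falls within +/- margin of the conventional mean (a simplified
    stand-in for formal statistical equivalence tests)."""
    gm_mean, conv_mean = mean(gm), mean(conv)
    return abs(gm_mean - conv_mean) <= margin * conv_mean

# Hypothetical multilocation yields (t/ha) pooled across regions.
gm_yield   = [9.8, 10.2, 9.9, 10.4]
conv_yield = [10.0, 10.1, 9.7, 10.3]

print(within_equivalence_margin(gm_yield, conv_yield))  # True for these data
```

If no biologically relevant difference is found across the tested sites, the transportability argument is that repeating the same field studies in every importing or cultivating country adds little information to the risk assessment.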

Furthermore, there was some interest from participants in discussing how the proposed risk assessment paradigm might apply to combined GM products (i.e., breeding stacks), yield and stress traits (e.g., drought resistance), and streamlining of import registrations.

One topic that generated discussion across groups was the value of product characterization data in an environmental risk assessment. In the proposed modernized regulatory framework ( Anderson et al., 2021 ), underlying characterization data for the GM event are not regarded as core to the regulatory assessments (such as molecular data to confirm that the insert is an intact single copy, stable across generations, and that there is no plasmid backbone DNA). Although these data do not directly inform the ERA ( Anderson et al., 2021 ), it was discussed that an understanding of the characteristics of the GM product provides foundational information that enables the regulatory assessments to focus on the intended introduced trait during the problem formulation stage. Therefore, it was proposed to consider including, as part of the modernized ERA framework, a set of foundational information and data from the characterization of the GM event that confirms that (1) the intended gene sequence was inserted and functions as intended, as well as the number of such insertions; (2) the plants produce the intended newly expressed protein (NEP); (3) the intended phenotype is achieved.

4 Key considerations and takeaways from the workshop

The case study for the workshop considered a single event, albeit one that contained genetic material encoding for two proteins leading to two distinct traits (herbicide tolerance and insect protection). However, the majority of commercialized products contain multiple GM events that are combined through conventional breeding (also known as stacked trait products). The typical regulatory process first assesses all single events, before applying regulatory processes, if any, to the stacked trait products. In this sense, the case study used for the workshop reflected a realistic scenario in which regulators assess a single event regardless of whether the event will be commercialized as a single event or as a stacked trait product.

Regulatory processes for stacked trait products vary globally, with many countries recognizing the long, safe history of conventional breeding and not requiring additional assessment once all the single events are approved. It is the position of CropLife International that additional safety assessment of a stacked trait product produced by conventional breeding should not be required unless there is a plausible and testable hypothesis for interaction of the traits ( Goodwin et al., 2021 ). This case study did not address stacked trait products; however, further iterations could include consideration of stacked trait products and how to evaluate possible interactions of traits.

The workshop was convened to explore the proposed modernized data requirements for regulatory assessments of GM crops ( Anderson et al., 2021 ; Waters et al., 2021 ). The participants were charged with considering whether currently implemented regulations for GM crops are risk-proportionate or whether they create an unwarranted barrier to the introduction of new traits. The organizers presented a position that knowledge and experience from 25 years of research and development could inform regulatory modernization and that streamlined data requirements could advance harmonization across countries and authorities.

Overall, considering the case study discussed, the participants at the workshop found the proposed modernized data requirements generally to be necessary and sufficient for decision making to support the safe commercial introduction of a new GM crop. There was a clear consensus that some of the current data requirements are no longer routinely warranted for familiar traits such as those discussed in the case study, given the track record of GM crops not presenting unexpected or unintended effects on food or feed safety or environmental risk relative to their conventional counterparts. Participants appreciated the benefit of harmonized hypothesis-based risk assessments to enable future deployment of GM crops that can address emerging agricultural challenges associated with increasing demand for affordable healthy food and changing agricultural environments. The points discussed in this publication will be used to further clarify recommendations for supplementary case-by-case data and guide the development of future, more targeted workshops and related discussions. In particular, applying the proposed framework to traits and crops with which there is less familiarity and a less established HOSU than those used in the case study may be associated with greater uncertainty in the foundational information of the GM event. Additional case studies involving less familiar traits and different crops should be used to further test the robustness of the modernized regulatory framework.

The workshop focused on what data was scientifically necessary and sufficient to make a conclusion on the food, feed and environmental safety of the GM crop. However, several participants noted that certain data not included in the case study was either required in their jurisdiction or routinely submitted by applicants. While it was beyond the scope of this workshop, future targeted workshops or symposia could address the extent to which regulatory authorities have the flexibility to decide, on a case-by-case basis, what data is necessary to make a conclusion on safety. In some jurisdictions the recommendations of the modernization project could be implemented by applicants by including a scientific rationale in their submission for why a specific study is not necessary. In other cases, changes to laws, regulations, or written guidance would be needed to implement these recommendations.

The case study for the first workshop, as described in this publication, was a valuable tool to foster discussion about science-based data requirements for the assessment of GM crops. If these scientific approaches to modernize data packages for GM crop regulation were adopted globally, delays to the commercialization of GM crops could be reduced, thereby allowing farmers access to new GM traits that will benefit not just growers, but consumers and the environment as well. For more information on the case study used in the workshop, or if there is interest in hosting a similar workshop, please contact the corresponding author.

Author contributions

NS: Conceptualization, Writing–original draft, Writing–review and editing. AS: Conceptualization, Writing–original draft, Writing–review and editing. JS: Writing–original draft, Writing–review and editing. JA: Writing–original draft, Writing–review and editing. MH: Writing–original draft, Writing–review and editing. DM: Writing–original draft, Writing–review and editing. CM: Writing–original draft, Writing–review and editing. MS: Writing–original draft, Writing–review and editing. SS: Writing–original draft, Writing–review and editing. EU-W: Writing–original draft, Writing–review and editing.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. CropLife International supported the open access publication of this work. The funder was not involved in the writing of this article, or the decision to submit it for publication.

Conflict of interest

Authors NS, JA, and CM were employed by Corteva™ Agriscience. Author AS was employed by CropLife International. Authors JS and MS were employed by BASF Corporation. Authors MH and SS were employed by Syngenta Seeds LLC. Authors DM and EU-W were employed by Bayer Crop Science. BASF Corporation, Bayer Crop Science, Corteva™ Agriscience, and Syngenta Seeds LLC are commercial developers of GM crops.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

AgbioInvestor (2022). Time and cost to develop a new GM trait. Available at: https://agbioinvestor.com/wp-content/uploads/2022/05/AgbioInvestor-Time-and-Cost-to-Develop-a-New-GM-Trait.pdf (Accessed November 18, 2023).

AgbioInvestor (2023). Global GM crop area review. Available at: https://gm.agbioinvestor.com/downloads (Accessed November 29, 2023).

Ala-Kokko, K., Lanier Nalley, L., Shew, A. M., Tack, J. B., Chaminuka, P., Matlock, M. D., et al. (2021). Economic and ecosystem impacts of GM maize in South Africa. Glob. Food Secur. 29, 100544. doi:10.1016/j.gfs.2021.100544

Anderson, J., Bachman, P. M., Burns, A., Chakravarthy, S., Goodwin, L., Privalle, L., et al. (2021). Streamlining data requirements for the environmental risk assessment of genetically modified (GM) crops for cultivation approvals. J. Regul. Sci. 9 (1), 26–37. doi:10.21423/jrs-v09i1anderson

Bachman, P. M., Anderson, J., Burns, A., Chakravarthy, S., Goodwin, L., Privalle, L., et al. (2021). Data transportability for studies performed to support an environmental risk assessment for genetically modified (GM) crops. J. Regul. Sci. 9 (1), 38–44. doi:10.21423/jrs-v09i1bachman

Bravo, A., Gill, S. S., and Soberón, M. (2007). Mode of action of Bacillus thuringiensis Cry and Cyt toxins and their potential for insect control. Toxicon 49 (4), 423–435. doi:10.1016/j.toxicon.2006.11.022

Brookes, G. (2022a). Farm income and production impacts from the use of genetically modified (GM) crop technology 1996-2020. Gm. Crops Food 13 (1), 171–195. doi:10.1080/21645698.2022.2105626

Brookes, G. (2022b). Genetically modified (GM) crop use 1996–2020: environmental impacts associated with pesticide use change. Gm. Crops Food 13 (1), 262–289. doi:10.1080/21645698.2022.2118497

Brookes, G. (2022c). Genetically modified (GM) crop use 1996–2020: impacts on carbon emissions. Gm. Crops Food 13 (1), 242–261. doi:10.1080/21645698.2022.2118495

Brune, P., Chakravarthy, S., Graser, G., Mathesius, C. A., McClain, S., Petrick, J. S., et al. (2021). Core and supplementary studies to assess the safety of genetically modified (GM) plants used for food and feed. J. Regul. Sci. 9 (1), 45–60. doi:10.21423/jrs-v09i1brune

Clark, B. W., Phillips, T. A., and Coats, J. R. (2005). Environmental fate and effects of Bacillus thuringiensis (bt) proteins from transgenic crops: a review. J. Agric. Food Chem. 53 (12), 4643–4653. doi:10.1021/jf040442k

David, M. A. (2009). GAIN report: Nigeria agricultural biotechnology annual report. Available at: https://apps.fas.usda.gov/newgainapi/api/Report/DownloadReportByFileName?fileName=AGRICULTURAL%20BIOTECHNOLOGY%20ANNUAL_Lagos_Nigeria_8-3-2009 (Accessed February 12, 2024).

Dively, G. P., Venugopal, P. D., Bean, D., Whalen, J., Holmstrom, K., Kuhar, T. P., et al. (2018). Regional pest suppression associated with widespread Bt maize adoption benefits vegetable growers. Proc. Natl. Acad. Sci. 115 (13), 3320–3325. doi:10.1073/pnas.1720692115

European Commission, Directorate-General for Research and Innovation (2010). A decade of EU-funded GMO research (2001-2010) . Brussels: European Commission . doi:10.2777/97784

Goodwin, L., Hunst, P., Burzio, L. A., Rowe, L., Money, S., and Chakravarthy, S. (2021). Stacked trait products are as safe as non-genetically modified (GM) products developed by conventional breeding practices. J. Regul. Sci. 9 (1), 22–25. doi:10.21423/jrs-v09i1goodwin

Icoz, I., and Stotzky, G. (2008). Fate and effects of insect-resistant Bt crops in soil ecosystems. Soil Biol. Biochem. 40 (3), 559–586. doi:10.1016/j.soilbio.2007.11.002

International Service for the Acquisition of Agri-biotech Applications (ISAAA) (2020). Global status of commercialized biotech/GM crops in 2019 (brief 55). Available at: https://www.isaaa.org/resources/publications/briefs/55/executivesummary/default.asp .

International Service for the Acquisition of Agri-biotech Applications (ISAAA) (2023). ISAAA’s GM approval database. Available at: https://www.isaaa.org/gmapprovaldatabase (Accessed September 14, 2023).

Klümper, W., and Qaim, M. (2014). A meta-analysis of the impacts of genetically modified crops. PLoS ONE 9 (11), e111629. doi:10.1371/journal.pone.0111629

Macall, D. M., Trabanino, C. R., Soto, A. H., and Smyth, S. J. (2020). Genetically modified maize impacts in Honduras: production and social issues. Transgenic Res. 29 (5), 575–586. doi:10.1007/s11248-020-00221-y

Mathesius, C. A., Sauve-Ciencewicki, A., Anderson, J., Cleveland, C., Fleming, C., Frierdich, G. E., et al. (2020). Recommendations for assessing human dietary exposure to newly expressed proteins in genetically modified crops. J. Regul. Sci. 8, 1–12. doi:10.21423/JRS-V08MATHESIUS

McClain, S., Herman, R. A., Islamovic, E., Ranjan, R., Silvanovich, A., Song, P., et al. (2021). Allergy risk assessment for newly expressed proteins (NEPs) in genetically modified (GM) plants. J. Regul. Sci. 9 (1), 67–75. doi:10.21423/jrs-v09i1mcclain

National Academies of Sciences, Engineering, and Medicine (2016). Genetically engineered crops: experiences and prospects . Washington, DC, USA: The National Academies Press US . doi:10.17226/23395

OECD (2003). Consensus document on the biology of Zea mays subsp. mays (maize) . ENV/JM/MONO(2003)11. Paris, France: Organisation for Economic Co-operation and Development . Available at: https://one.oecd.org/document/env/jm/mono(2003)11/en/pdf .

OECD (2007). Consensus document on safety information on transgenic plants expressing Bacillus thuringiensis - derived insect control protein . ENV/JM/MONO(2007)14. Paris, France: Organisation for Economic Co-operation and Development . Available at: https://one.oecd.org/document/env/jm/mono(2007)14/en/pdf .

OECD (2023). Consensus document on environmental considerations for the release of transgenic plants, harmonisation of regulatory oversight in biotechnology . ENV/CBC/MONO(2023)30. Paris, France: Organisation for Economic Co-operation and Development . doi:10.1787/62ed0e04-en

Peshin, R., Hansra, B. S., Singh, K., Nanda, R., Sharma, R., Yangsdon, S., et al. (2021). Long-term impact of Bt cotton: an empirical evidence from North India. J. Clean. Prod. 312, 127575. doi:10.1016/j.jclepro.2021.127575

Raybould, A., and Macdonald, P. (2018). Policy-led comparative environmental risk assessment of genetically modified crops: testing for increased risk rather than profiling phenotypes leads to predictable and transparent decision-making. Front. Bioeng. Biotechnol. 6, 43. doi:10.3389/fbioe.2018.00043

Raybould, A., Stacey, D., Vlachos, D., Graser, G., Li, X., and Joseph, R. (2007). Non-target organism risk assessment of MIR604 maize expressing mCry3A for control of corn rootworm. J. Appl. Entomology 131 (6), 391–399. doi:10.1111/j.1439-0418.2007.01200.x

Roberts, A., Boeckman, C. J., Mühl, M., Romeis, J., Teem, J. L., Valicente, F. H., et al. (2020). Sublethal endpoints in non-target organism testing for insect-active GE crops. Front. Bioeng. Biotechnol. 8, 556. doi:10.3389/fbioe.2020.00556

Romeis, J., Hellmich, R., Candolfi, M., Carstens, K., De Schrijver, A., Gatehouse, A., et al. (2011). Recommendations for the design of laboratory studies on non-target arthropods for risk assessment of genetically engineered plants. Transgenic Res. 20 (1), 1–22. doi:10.1007/s11248-010-9446-x

Romeis, J., McLean, M. A., and Shelton, A. M. (2013). When bad science makes good headlines: Bt maize and regulatory bans. Nat. Biotech. 31 (5), 386–387. doi:10.1038/nbt.2578

Roper, J., Lipscomb, E. A., Petrick, J. S., Ranjan, R., Sauve-Ciencewicki, A., and Goodwin, L. (2021). Toxicological assessment of newly expressed proteins (NEPs) in genetically modified (GM) plants. J. Regul. Sci. 9 (1), 61–66. doi:10.21423/jrs-v09i1roper

Rose, R. I. (2007). White paper on tier-based testing for the effects of proteinaceous insecticidal plant-incorporated protectants on NonTarget arthropods for regulatory risk assessments. Available at: https://www.epa.gov/sites/default/files/2015-09/documents/tier-based-testing.pdf (Accessed April 2, 2024).

Shelton, A. M. (2021). Bt eggplant: a personal account of using biotechnology to improve the lives of resource-poor farmers. Am. Entomologist 67 (3), 52–59. doi:10.1093/ae/tmab036

Smyth, S. J. (2020). The human health benefits from GM crops. Plant Biotechnol. J. 18 (4), 887–888. doi:10.1111/pbi.13261

Snell, C., Bernheim, A., Bergé, J.-B., Kuntz, M., Pascal, G., Paris, A., et al. (2012). Assessment of the health impact of GM plant diets in long-term and multigenerational animal feeding trials: a literature review. Food Chem. Toxicol. 50 (3), 1134–1148. doi:10.1016/j.fct.2011.11.048

Stotzky, G. (2005). Persistence and biological activity in soil of the insecticidal proteins from Bacillus thuringiensis, especially from transgenic plants. Plant Soil 266 (1), 77–89. doi:10.1007/s11104-005-5945-6

U.S. Environmental Protection Agency (US EPA) (2010). Bacillus thurigiensis Cry3Bb1 corn biopesticides registration action document. Available at: https://www3.epa.gov/pesticides/chem_search/reg_actions/pip/cry3bb1-brad.pdf .

Van Eenennaam, A. L., and Young, A. E. (2014). Prevalence and impacts of genetically engineered feedstuffs on livestock populations. J. Animal Sci. 92 (11), 4255–4278. doi:10.2527/jas.2014-8124

van Frankenhuyzen, K. (2009). Insecticidal activity of Bacillus thuringiensis crystal proteins. J. Invertebr. Pathol. 101 (1), 1–16. doi:10.1016/j.jip.2009.02.009

Waters, S., Ramos, A., Henderson Culler, A., Hunst, P., Zeph, L., Gast, R., et al. (2021). Recommendations for science-based safety assessment of genetically modified (GM) plants for food and feed uses. J. Regul. Sci. 9 (1), 16–21. doi:10.21423/JRS-V09I1WATERS

Woodruff, S. (2024). Gardeners can now grow a genetically modified purple tomato made with snapdragon DNA. Available at: https://www.npr.org/sections/health-shots/2024/02/06/1228868005/purple-tomato-gmo-gardeners .

Zilberman, D., Holland, T. G., and Trilnick, I. (2018). Agricultural GMOs—what we know and where scientists disagree. Sustainability 10 (5), 1514. doi:10.3390/su10051514

Keywords: genetically modified (GM), regulation, food and feed, safety assessment, environmental risk assessment (ERA), problem formulation, cultivation, data requirements

Citation: Storer NP, Simmons AR, Sottosanto J, Anderson JA, Huang MH, Mahadeo D, Mathesius CA, Sanches da Rocha M, Song S and Urbanczyk-Wochniak E (2024) Modernizing and harmonizing regulatory data requirements for genetically modified crops—perspectives from a workshop. Front. Bioeng. Biotechnol. 12:1394704. doi: 10.3389/fbioe.2024.1394704

Received: 02 March 2024; Accepted: 12 April 2024; Published: 10 May 2024.

This article is part of the Research Topic: Advancing Science in Support of Sustainable Bio-Innovation: 16th ISBR Symposium.


NIH Research Matters

May 7, 2024

Urine test identifies high-risk prostate cancers

At a Glance

  • Researchers developed a urine-based test that can distinguish between slow-growing prostate cancers that pose little risk and more aggressive cancers that need treatment.
  • The test could help some patients avoid unnecessary biopsies and other tests that carry potential risks.


Prostate cancer is a leading cause of cancer death among men nationwide. Screening for prostate cancer typically includes a blood test to measure levels of a substance called prostate specific antigen (PSA), which is produced by the prostate gland. PSA levels can be elevated in men who have prostate cancer or certain non-cancerous conditions, like inflammation of the prostate.

Elevated PSA levels can lead to additional tests that may include a biopsy. The biopsy involves removing about a dozen small tissue samples from several areas of the prostate gland to look for cancer cells. Although biopsies are generally safe, they can be painful. They can also lead to fever, urinary tract infection, or other side effects. In many cases, the biopsy identifies slow-growing prostate cancers that would benefit from close monitoring but do not need immediate treatment.

Researchers have been searching for ways to avoid unnecessary biopsies by finding noninvasive ways to distinguish between aggressive prostate cancers that need treatment and slow-growing cancers that may never need treatment.

About a decade ago, an NIH-supported research team led by Dr. Arul M. Chinnaiyan of the University of Michigan developed a urine-based test called MyProstateScore (MPS) that is still in use. Based on two genes that are often found at high levels in the urine of men who have prostate cancer, MPS enables early detection of prostate cancer. But it does not distinguish between low-grade and more serious cancers.

In their latest study, a team led by Chinnaiyan and Dr. Jeffrey Tosoian of Vanderbilt University worked to identify a set of urine-based genes that could distinguish aggressive prostate cancers. Their findings appeared on April 18, 2024, in JAMA Oncology.

The researchers first analyzed RNA sequencing data from nearly 59,000 genes to identify a set of 54 candidate markers. All were linked to either prostate cancer overall or uniquely linked to high-grade cancers, and all were detectable in urine. Further analyses and modeling in 761 patients narrowed down the options to a combination of 17 genes that best predicted the presence of high-grade cancers. A reference gene associated with general prostate tissue was also added. The new 18-gene test was dubbed MyProstateScore 2.0 (MPS2).

MPS2 was validated by analyzing urine samples from another group of 743 men. Each received a biopsy because of elevated PSA levels. The biopsies showed that 20% of them had high-grade prostate cancer.

Validation analysis showed that MPS2 could rule out the presence of high-grade cancer with 97% accuracy. The researchers also compared MPS2 to results from other biomarker tests, including the original MPS test. The analysis showed that MPS2 was better able to identify high-grade cancers. The researchers estimated that it could help patients avoid up to 51% of unnecessary biopsies.
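The rule-out figure can be made concrete with a little arithmetic. The sketch below computes a negative predictive value (the fraction of "low risk" results that are correct) from a hypothetical confusion matrix; the counts are invented purely so the calculation is visible and are not the study's actual data.

```python
# Illustrative only: hypothetical counts for a cohort like the one described
# (743 men, 20% with high-grade cancer). These are NOT the study's data.
cohort = 743
high_grade = round(0.20 * cohort)     # ~149 men with high-grade cancer

# Suppose (hypothetically) the test called 350 men "low risk," of whom
# 10 actually had high-grade disease.
negatives = 350
false_negatives = 10
true_negatives = negatives - false_negatives

# Negative predictive value: fraction of "low risk" calls that are correct.
npv = true_negatives / negatives
print(f"NPV = {npv:.1%}")             # ~97.1% with these invented counts
```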

“In nearly 800 patients with an elevated PSA level, the new test was capable of ruling out the presence of clinically significant prostate cancer with remarkable accuracy,” Tosoian says. “This allows patients to avoid more burdensome and invasive tests, like MRI and prostate biopsy, with great confidence that we are not missing something.”

—by Vicki Contie


References:  Development and Validation of an 18-Gene Urine Test for High-Grade Prostate Cancer. Tosoian JJ, Zhang Y, Xiao L, Xie C, Samora NL, Niknafs YS, Chopra Z, Siddiqui J, Zheng H, Herron G, Vaishampayan N, Robinson HS, Arivoli K, Trock BJ, Ross AE, Morgan TM, Palapattu GS, Salami SS, Kunju LP, Tomlins SA, Sokoll LJ, Chan DW, Srivastava S, Feng Z, Sanda MG, Zheng Y, Wei JT, Chinnaiyan AM; EDRN-PCA3 Study Group. JAMA Oncol. 2024 Apr 18. doi: 10.1001/jamaoncol.2024.0455. Online ahead of print. PMID: 38635241.

Funding:  NIH’s National Cancer Institute (NCI); Howard Hughes Medical Institute; Prostate Cancer Foundation; and the American Cancer Society.


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Board on Life Sciences; Division on Earth and Life Studies; Committee on Science, Technology, and Law; Policy and Global Affairs; Board on Health Sciences Policy; National Research Council; Institute of Medicine. Potential Risks and Benefits of Gain-of-Function Research: Summary of a Workshop. Washington (DC): National Academies Press (US); 2015 Apr 13.


2 Assessing Risks and Benefits

Dr. Charles Haas, Drexel University, a member of the symposium planning committee, summarized the standard risk assessment process. The major steps in risk assessment were first articulated in a National Research Council report titled Risk Assessment in the Federal Government: Managing the Process (NRC, 1983), otherwise known as the “Red Book.” This report has been updated several times (see NRC 1994, 1996, and 2009). The basic framework laid out for risk assessment consists of the steps in Box 2-1.

Box 2-1. Basic Steps in the Risk Assessment Process: hazard assessment (determining whether a particular chemical or microbiological agent is or is not causally linked to particular health effects); exposure assessment (determining the extent of human exposure); dose-response assessment; and risk characterization.
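One way to see how the steps fit together is a minimal quantitative sketch, in which risk characterization multiplies an exposure probability by a harm-given-exposure (dose-response) probability. All numbers below are hypothetical placeholders, not values from any actual assessment.

```python
# Minimal sketch of the risk assessment chain: hazard -> exposure ->
# dose-response -> risk characterization. All inputs are hypothetical.

def characterize_risk(p_exposure: float, p_harm_given_exposure: float,
                      population: int) -> float:
    """Expected number of harmed individuals, assuming independence."""
    individual_risk = p_exposure * p_harm_given_exposure
    return individual_risk * population

# Hypothetical inputs: 1-in-10,000 annual exposure probability, a 2% chance
# of harm per exposure, over a population of 1 million.
expected_cases = characterize_risk(1e-4, 0.02, 1_000_000)
print(expected_cases)   # 2.0 expected cases per year with these inputs
```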

However, there are also other considerations besides following these technical steps, and Drs. Baruch Fischhoff (Carnegie Mellon University), Gavin Huntley-Fenner (Huntley-Fenner Advisors, Inc.), and Monica Schoch-Spana (University of Pittsburgh Medical Center [UPMC] Center for Health Security) elaborated on these and provided further details about crucial considerations that need to be taken into account in risk assessments. These comments are summarized later in this chapter.

Haas noted that the major focus of attention with regard to Gain-of-Function (GoF) research has been on hazard assessment. This encompasses occupational health risks, but it needs to go beyond them to risks to members of the public near research sites, as well as global risks from pandemic organisms. A number of questions in this arena need to be addressed in a risk assessment, stated Haas. Do the safety records of high-containment laboratories provide an appropriate basis for quantifying the risks of lab accidents that lead to worker or public exposures, or are there more systematic approaches that need to be incorporated into a risk assessment? Are there finer gradations of lab capabilities that must be considered that go beyond the BSL/ABSL (biosafety level/animal biosafety level) framework, for example, the competence of the laboratory staff and the steps taken by the host institution for community preparedness (see comments by Dr. Rebecca Moritz of the University of Wisconsin's Biosecurity Task Force in Chapter 5)? And how is deliberate misuse of either the pathogens themselves or the information obtained through research on these pathogens to be incorporated into the risk assessment?

Haas noted that the debate on GoF research has paid scant attention to either exposure assessment or dose-response assessment. Both are crucial components of a risk assessment, although it is likely that, at least for dose response, little information is available, particularly for Middle East Respiratory Syndrome (MERS) virus and possibly also for Severe Acute Respiratory Syndrome (SARS) virus. As a consequence, the GoF research debate has jumped directly to the risk characterization stage without the benefit of the missing intermediate analyses and the dissection of the exposure and dose-response issues that may make considerable differences in how the risk characterization is framed. It was Haas's view that the current risk characterization picture contains too many lumped parameters, combining factors dealing with environmental effects, host properties, and infectious agents. All of these need to be taken into account when estimating outcomes, and they require more attention, as does the role of uncertainty. Very often we do not know whether we have incorporated all of the factors that may influence uncertainty. Similarly, the full fundamental basis of risk assessment is missing from the risk management considerations for GoF research, although it is still possible to discuss to what degree biological and methodological modifications can reduce or obviate risk. The risks of not doing the proposed work, highlighted in several talks and comments during the symposium, also should be considered and balanced against the risks of doing the research. Finally, Haas noted that a risk assessment can inform decisions but is not determinative per se. The concept of “acceptable” risk is a trans-scientific issue that is more appropriately addressed in the policy arena.
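Haas's concern about lumped parameters and unacknowledged uncertainty can be illustrated with a small Monte Carlo sketch: each uncertain factor is drawn from its own distribution rather than collapsed into a single point value, and the spread of the resulting estimates makes the uncertainty explicit. The factor names and ranges below are arbitrary placeholders, not values from any real assessment.

```python
import random

random.seed(0)

def sampled_risk() -> float:
    # Each factor is uncertain; draw from hypothetical ranges rather than
    # collapsing everything into one lumped point estimate.
    p_release = random.uniform(1e-6, 1e-4)      # probability of a lab release
    p_transmission = random.uniform(0.01, 0.3)  # host/agent/environment factor
    return p_release * p_transmission

samples = sorted(sampled_risk() for _ in range(10_000))
low, high = samples[250], samples[-250]         # central ~95% interval
print(f"95% of simulated risks fall in [{low:.2e}, {high:.2e}]")
```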

In Session 8 of the symposium, Baruch Fischhoff, another member of the symposium planning committee, gave an overview of what risk/benefit assessment can and cannot do, as well as what has been learned from past attempts to conduct risk/benefit assessments. He recommended a book, Risk: A Very Short Introduction (Fischhoff and Kadvany, 2011), in which the authors use simple conceptual frameworks from decision theory and behavioral research to explain the science and practice of creating measures of risk; how scientists address risks using historical records, scientific theories, probability, and expert judgment; and what cognitive scientists have learned about how people deal with risks. These lessons apply to diverse examples and demonstrate how understanding risk can improve choices in everyday life and public policy.

Fischhoff outlined the key considerations related to the risk assessment paradigm above. These considerations include:

Defining “risk” and “benefit”

Assessing risks and expected benefits

Communicating risks and expected benefits

Organizing to reduce risks and increase expected benefits

For the last item, he noted that for GoF research, the expected benefits are potentially reduced risks. For this reason, the same methodologies apply to assessing risks and expected benefits.

DEFINING “RISK” AND “BENEFIT”

Fischhoff stated that the terms of all analyses embody values that favor some interests above others. Even when made transparent, the underlying assumptions can be controversial; an analytical and deliberative process is therefore required to create socially acceptable definitions. Such analyses use science to inform estimates, but they also depend on subjective value judgments about which metrics to include and how much weight to put on each. One commonly used metric is risk of death, which can be defined as the risk that somebody dies, as the probability that someone exposed to a hazard dies prematurely, or as the number of years of life expected to be lost with each death. A further refinement of this metric may assign higher value to deaths of particular groups, for example, young people. Other bases for evaluating death as an outcome of risk include whether the deaths are equitably distributed, voluntarily assumed, well understood, controllable, or borne by future generations. Echoing Haas, Fischhoff noted that choosing among these and other alternatives requires making value judgments, which is a role for policy makers.
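The alternative metrics Fischhoff lists can genuinely reorder hazards, which is why choosing among them is a value judgment rather than a technical one. A toy comparison, with entirely invented numbers:

```python
# Two hypothetical hazards with invented figures, illustrating how the choice
# of metric (death count vs. expected years of life lost) reorders them.
hazards = {
    # name: (annual deaths, average years of life lost per death)
    "hazard A (affects mostly the elderly)": (1000, 5),
    "hazard B (affects mostly the young)":   (300, 50),
}

by_deaths = max(hazards, key=lambda h: hazards[h][0])
by_yll = max(hazards, key=lambda h: hazards[h][0] * hazards[h][1])

print("Worst by death count:        ", by_deaths)   # hazard A
print("Worst by years of life lost: ", by_yll)      # hazard B
```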

ASSESSING RISKS AND (EXPECTED) BENEFITS

Fischhoff noted these key needs for risk assessments:

  • Socially acceptable outcomes defined
  • Factors that are believed to affect outcomes identified
  • Factors and interdependencies assessed based on observation and expert judgment
  • Quality of the evidence assessed

Fischhoff urged policy makers to have a clear idea of what the purpose of a particular risk/benefit analysis is so that the analysis suits its purpose. He noted that risk analyses can be either for purposes of “design” or to inform decisions. Analyses for purposes of design identify better options to improve understanding of complex systems. Analyses to inform decisions focus on the acceptability of risks (given the expected benefits) by predicting outcomes. As an example of the former, Fischhoff cited a 1975 Reactor Safety Study known as “WASH-1400” (USNRC, 1975) that attempted to assess the risk of accidents at commercial nuclear power plants in the United States. The study was later critiqued by an ad hoc review group that stated the following:

We find that WASH-1400 was a conscientious and honest effort to apply the methods of fault-tree/event-tree analysis to an extremely complex system … in order to determine the overall probability and consequences of an accident… We have found a number of sources of both conservativism and nonconservatism in the probability calculations of WASH-1400…. Among the former are inability to quantify human adaptability during the course of an accident …, while among the latter are nagging issues about completeness, and an inadequate treatment of common cause failure. We are unable to define whether the overall probability of a core melt given in WASH-1400 is high or low, but we are certain that the error bands are understated. We cannot say by how much. (USNRC, 1975)

This example illustrates two notable things. First, risk assessments on low probability/high consequence events are not new. Second, the roles of uncertainty as well as human factors (see more below) are crucial in risk assessment. As also pointed out by Haas, risk assessments generally are forced to deal with considerable uncertainty, which needs to be acknowledged and dealt with. As Huntley-Fenner added later during the discussion, the fact that certain types of accidents, fatalities, and injuries are rare and we do not often see them is interpreted as a sign that things are going well. But the absence of such rare events may not necessarily be a positive sign; it may be that we are just missing the right indicators. If we do not see the data relevant to what accounts for safety, then maybe we are not looking in the right places or in the right way.
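The fault-tree/event-tree arithmetic underlying studies like WASH-1400 can be sketched in a few lines: basic-event probabilities combine through AND gates (all inputs must fail) and OR gates (any input may fail), under an independence assumption that common-cause failures, of the kind the review group flagged, violate. All probabilities below are invented for illustration.

```python
from math import prod

def and_gate(probs):
    """All inputs must fail (independence assumed)."""
    return prod(probs)

def or_gate(probs):
    """At least one input fails (independence assumed)."""
    return 1 - prod(1 - p for p in probs)

# Hypothetical top event: core cooling fails if (pump fails AND backup pump
# fails) OR an operator error occurs. Probabilities are invented.
pump, backup, operator = 1e-2, 5e-2, 1e-3
top = or_gate([and_gate([pump, backup]), operator])
print(f"{top:.2e}")   # ~1.50e-03 with these invented numbers
```

Note how the operator-error term dominates the hardware term here; a common-cause failure linking pump and backup would break the independence assumption and raise the result, which is exactly the "nonconservatism" the review group described.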

HUMAN BEHAVIOR AS A SOURCE OF VULNERABILITY AND RESILIENCE

Fischhoff noted that the contribution of human factors to understanding industrial and other processes has been studied for a very long time, referencing, for example, a study by H.M. Vernon (1921), a member of the English Industrial Fatigue Research Board, on Industrial Fatigue and Efficiency. He also noted that the literature from nuclear power and other sectors makes clear that human behavior must be taken into account as both a source of vulnerability and a source of resilience. Although human error is clearly a problem, human innovation can also rescue difficult situations.

Gavin Huntley-Fenner elaborated on the topic of human factors in his presentation. He defined human factors as “the study of the interrelationships between humans, the tools they use, and the environment in which they live and work.” He provided some data on the role of human error in various accident scenarios: an estimated 80 percent of motor vehicle accidents, 80 percent of medical errors, and 60-80 percent of aviation accidents are attributable to human factors. He stated that studies have shown that physical (e.g., working in personal protective equipment) and cognitive (e.g., working under conditions of fatigue) stresses undermine human reliability. Not only can human error not be eliminated, it has actually increased as a contributor to accidents in some arenas, such as traffic accidents. Analyses of human reliability and error must identify the critical areas that are incompatible with human capabilities and the areas where a system is vulnerable to human error. He cited a Government Accountability Office (GAO, 2009) report that found that the role of human error is underappreciated.
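Huntley-Fenner's reliability point can be made concrete: even a small per-step human error probability compounds across a long procedure, which is why systems must be designed to tolerate error rather than assume it away. The per-step error rate below is an arbitrary illustrative value.

```python
# Probability that at least one error occurs across n independent steps,
# each with per-step human error probability p (illustrative value only).
def p_any_error(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (10, 50, 100):
    print(n, round(p_any_error(0.01, n), 3))
# With p = 0.01: roughly 0.096 at 10 steps, 0.395 at 50, 0.634 at 100.
```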

Huntley-Fenner provided a list of characteristics to guide hazard analysis processes (Box 2-2): include multidisciplinary teams; incorporate qualitative and quantitative data. He added some key questions to be asked:

  • Are task demands compatible with human capabilities and characteristics?
  • Has the system been designed to cope with the inevitability of human error?
  • Does the system take advantage of unique human capabilities?

According to Huntley-Fenner, the benefits of a risk assessment guided by consideration of human factors (Box 2-3) include enhanced preparedness and prevention of significant accidents.

Huntley-Fenner cautioned the audience about our limited capacity to understand and manage risk. He noted that we tend to underestimate risk, are optimistic about our capacity to control local risk, and need to be aware of the potential to accrue benefits (science) while externalizing risks (public health). He also, however, highlighted the fact that establishing simple, consistent routines can yield significant reductions in errors, referencing, for example, a paper by Haynes et al. (2009) reporting that use of a simple surgical safety checklist resulted in a significant decline in errors related to anesthesia in surgical procedures.

Fischhoff further elaborated on the general area of limitations in risk assessment and noted that the limits include variability among observations, the quality of the studies on which the analysis is based (internal validity), whether those studies are generalizable (external validity), and how good the underlying science is (“pedigree”). “These are the standard considerations that a policy maker needs to know in order to make responsible judgments about the risks and benefits of a technology…” said Fischhoff. He stressed the importance of risk communication and of taking account of behavioral research demonstrating how humans tend to make faulty intuitive judgments. He cited two special issues of the Proceedings of the National Academy of Sciences (PNAS), one in 2013 and another in 2014, devoted to “The Science of Science Communication,” as well as a Food and Drug Administration Strategic Plan for Risk Communication (USFDA, 2009), as good sources of additional information on risk communication. The topic was also elaborated on by Schoch-Spana in her presentation in Session 8.

PUBLIC ENGAGEMENT

Monica Schoch-Spana framed her presentation with four questions: Who is the public? What do we mean by engagement? Why is engaging the public valuable? And what are some take-away considerations for the National Science Advisory Board for Biosecurity (NSABB), the National Institutes of Health (NIH), and workshop attendees?

Who Is “the Public”?

Schoch-Spana defined “the public” in the broadest sense as all the people who are interested in or affected by GoF research governance decisions. However, who is in that group depends on political jurisdiction and many other factors that complicate definitions. Global, national, and local publics are all relevant to this particular debate. In the pandemic context the population at risk is global. Anyone in the world, at least in the abstract, can be equally in danger of infection and equally in need of medical countermeasures potentially informed by GoF research.

In a U.S. context, Schoch-Spana referenced a study by Sandra Quinn and colleagues (Quinn et al., 2011) who proposed that U.S. racial and ethnic minorities were at a threefold disadvantage during the 2009 H1N1 influenza pandemic. These subgroups faced enhanced exposure to the H1N1 virus because of social, economic, and behavioral factors. They faced greater susceptibility to influenza because of the high prevalence of chronic disease and immunosuppression, and they had impaired access to timely and trusted health information, vaccination, and treatment. There are also other “national” publics that come to mind in the United States. Ultimately, the U.S. taxpayer underwrites the cost of government-sponsored research and confers authority and operating budgets on federal bodies implicated in the biosafety systems that have been created and continue to be refined to keep researchers and the larger public safe in the context of GoF and other research of concern.

Schoch-Spana also noted that at the local level there also is another potentially relevant public—the communities that actually host the facilities in which GoF research is conducted. In the case of a laboratory release they could be on the front end of an emerging pandemic. As a result, they have a strong and direct interest in the biosecurity and biosafety systems designed to avert any release and, should prevention fail, they also have a direct interest in locally robust systems to treat the sick and interrupt transmission.

What Do We Mean by “Engagement”?

Schoch-Spana stated that “engagement” usually refers to the processes by which citizens influence the policies and programs that affect them. In a democracy people have a variety of means to make their voices heard. They can vote, write letters, lobby, demonstrate, and take other collective actions. Over the past 50 years more direct means of public participation in the decision-making process itself have developed as citizens have become less deferential toward authorities and public policy issues have become more complicated.

The theories of deliberative democracies have flourished and practical experience in participatory approaches has accumulated. Scholars and practitioners usually talk about public engagement as a flow of influence and information between authorities and constituents. Very simplistically, there are three different modes of public engagement: communication, consultation, and collaboration.

In the communication mode, an official or an agency conveys information to members of the public in a one-way fashion, often with the intent of educating and informing the public. Public feedback is not required and not necessarily sought (Schoch-Spana, 2007). In the case of the GoF research debate, this could take the shape of press releases, educational websites, and reports, such as the proceedings of meetings like this one.

The consultation mode is an interaction in which authorities solicit opinions through surveys, polls, and focus groups or during public comment periods. Again this communication is one-way, but it is from the citizens to the authorities. The public's points of view, criticisms, and constructive advice can inform policy options, but this input is just one of many that decision-makers take into consideration.

The third mode, collaboration, is considered to be a two-way flow of information and influence between citizens and authorities; it is about dialogue fostering better understanding of very complex problems from all sides and perspectives. Collaboration allows an opportunity for collective learning as part of honest and respectful interaction among the authorities and diverse constituents (Schoch-Spana, 2007). Such iterative exchanges, as Fischhoff indicated earlier, are necessary to approach policy concerns that are technically and ethically complex.

Why Is Engaging the Public Valuable?

Schoch-Spana noted that there is a valuable summary in the 2008 NRC report Public Participation in Environmental Assessment and Decision Making, which identified three important justifications for deliberative processes: improving product quality, enhancing legitimacy, and building capacity.

  • Improving product quality: Collaboration enhances decision quality by helping to get the science right. People who are not typically considered experts may nonetheless have relevant local knowledge that is sensitive to context. Their input has often been able to correct technical analyses that have been misapplied to local conditions. The public can also bring fresh eyes not encumbered by technical presuppositions that in the end can improve the technical competence of policy decisions.
  • Enhancing legitimacy: Participation can serve as a means to inform and elicit the consent of the governed on complex issues in ways that traditional methods such as elections cannot provide. Participatory forms of engagement, when performed in good faith, can help build trust between officials and the public and enable officials to consider different points of view, including those of otherwise disenfranchised people. They can also provide evidence even to dissenting participants and nonparticipants that officials have indeed acted in a fair and accountable manner.
  • Building capacity: Well-executed public participation builds a foundation of trust and mutual understanding as well as practical experience with dialogue, which can benefit future policy formulation, implementation, and evaluation. The public can develop greater facility with the science and the political process, and scientists and governing officials can develop a better understanding of public concerns. Such an exchange helps scientists, citizens, and governing officials understand the aspects of a problem that go beyond their immediate circumstances and provides the opportunity for refinement and even a change of opinion.

Schoch-Spana reiterated a fourth cross-cutting justification for public engagement—navigating uncertainty—which Fischhoff, Huntley-Fenner, and Haas had also mentioned. Involving the public can strengthen the capacity of civil society, technical experts, industry, and government for analysis and reflection on the uncertain and ambiguous nature of many scientific and technological developments. Judgments informed by scientific fact and social values are necessary in the context of unforeseen consequences that can be good, bad, or in between and can unfold over decades or more. The benefits of public participation are not merely aspirational. The 2008 NRC report that Schoch-Spana referenced provides information on a large number of studies from across the social sciences that demonstrate these benefits.

Considerations for the NIH and the NSABB

Schoch-Spana concluded with two points that she believes merit further attention for broad public engagement in the proposed GoF assessment—nested engagement and enduring structures. Broader publics at local, national, and global levels could participate in public engagement exercises that are national in scope, diversely populated, and involve technically and ethically complex health security matters, for example, how does one distribute scarce medical resources in an influenza pandemic? Policy makers could consider holding deliberations in communities hosting GoF research laboratories and populating the national conversation to address the health disparities aspects of risks and benefits. Federal agencies and partners such as the National Academies and other interested entities could also encourage their counterparts internationally to develop comparable deliberative processes. She noted that in 2009 a citizen consultation on climate policy was conducted simultaneously in 38 countries. Transnational consideration of a transnational public health problem seems to Schoch-Spana a reasonable goal to at least consider.

On enduring structures, public engagement on GoF should not be limited to a “one and done” performance. Engagement mechanisms on this issue could serve as a foundation for the development of deliberative systems to tackle analogous dilemmas that are certain to emerge in the future. Participatory endeavors and the diffusion of well-crafted communication products emanating from them are investments in democratic governance. Such efforts would enhance the scientific literacy of citizens as well as the capacity of scientists, their sponsors, and their regulators to represent their work in broadly meaningful ways. Her final takeaway message was “How a decision is made is just as important for many people as the outcome of that decision.”

SUMMARY OF RISK AND BENEFIT OVERVIEW

Fischhoff summarized as follows the tasks to be accomplished by the risk/benefit assessment that the NIH plans to conduct:

  • Define the risks and benefits;
  • Assess the risks and expected benefits;
  • Communicate the risks and expected benefits; and
  • Organize to reduce the risks and increase the expected benefits.

The bottom line, he stressed, is that a credible risk/benefit analysis must be both technically sound and socially acceptable. There should be a strategic decision on whether to focus on design or on decision. The analysis needs proper disciplinary breadth and proper treatment of uncertainty. Ongoing two-way communication with stakeholders is needed to ensure that the assessment receives the credibility it deserves. The process should be organized for transparency and learning.

For this particular endeavor, it is Fischhoff's view that the NIH would do best to focus on design and on determining how to reduce the risks and increase the expected benefits. He pointed out that the structure of risk analysis is well known and has been used many times. But the benefits side of the equation is more difficult and poses more interesting problems that require an investment in formalizing the benefit arguments as well as formalizing the arguments of those who see alternative paths. It is necessary to know which numbers are really important and whether they are even relevant to an analysis. He noted that the NIH could consult with people who have some experience with these issues, for example, Michael Gorman and Kevin Dunbar at the University of Virginia; both study scientific discovery processes, at the individual and laboratory levels, and know something about this. The NIH could draw on such people, who have already studied the world of scientific innovation in quantitative or qualitative terms.

Fischhoff reiterated that the assessment should seek to inform decisions, not presume to make them. “Anybody who thinks that putting out a contract for a risk/benefit analysis will tell the country what to do on this topic is just deluding themselves.” The subjectivities that inevitably exist in setting the terms of any analysis also need to be acknowledged. When taking a multi-attribute approach, somebody needs to decide which attributes are the salient ones and whether mortality will be measured in terms of probability of premature death or in terms of expected lives saved. In addition, how should the externalized costs and benefits to the rest of the world, i.e., to those who can take advantage of breakthroughs in this country if our sociopolitical and economic systems allow, be weighed? There is a need for some socially acceptable way to resolve the subjectivity in scientific judgment, which must be explicitly acknowledged. As scientists we know that all analyses are incomplete. We can do a better job of quantifying things that are often left out, such as human factors, but certain things, such as the quality of the underlying research, will remain matters of judgment. Similarly, the associated uncertainties in the analysis must be elicited and expressed.
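Fischhoff's point about the subjectivity of attribute choices can be made concrete with a small sketch. The options, attribute scores, and stakeholder weights below are invented purely for illustration and are not drawn from any actual GoF assessment; the sketch only shows that identical multi-attribute scores can rank options differently once someone decides how the attributes are weighted.

```python
# Hypothetical multi-attribute comparison. All numbers are invented.
def weighted_score(scores, weights):
    """Aggregate per-attribute scores (each on a 0-1 scale, higher is
    better) using a stakeholder's weight vector."""
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights))

# Attributes: [biosafety (higher = safer), expected public-health
# benefit, scientific value] -- a deliberately simplified set.
option_a = [0.4, 0.9, 0.7]   # riskier study, larger expected benefit
option_b = [0.8, 0.5, 0.6]   # safer study, smaller expected benefit

# Two stakeholders who weight the same attributes differently.
safety_first = [0.6, 0.2, 0.2]
benefit_first = [0.2, 0.6, 0.2]

# Under safety-first weights, option B scores higher; under
# benefit-first weights, option A does. The data never changed --
# only the weighting judgment did.
print(round(weighted_score(option_a, safety_first), 2))
print(round(weighted_score(option_b, safety_first), 2))
print(round(weighted_score(option_a, benefit_first), 2))
print(round(weighted_score(option_b, benefit_first), 2))
```

The reversal in ranking is exactly the subjectivity Fischhoff describes: the weight vector encodes a value judgment that no amount of additional measurement can settle.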

Fischhoff also reiterated the importance of considering and evaluating human factors in scientifically sound ways and that public engagement should be treated as an opportunity to increase the public's literacy and to build trust in a community. This means reaching out, pulling the community into the process, and taking its opinion seriously. It means accommodating the concerns, which are easier to deal with at the beginning than at the end. A good design process is one that does not need some kind of patchwork at the end. Finally, the appropriate level of aggregation for making decisions needs to be determined while considering the variability in the research and the resolution of the decision-making processes. If it is appropriate to evaluate research proposals on a case-by-case basis, then bodies that are properly staffed and resourced and that have credibility with the public to make those case-by-case decisions will be needed.

In the discussion following the end of the Session 8 presentations, Harvey Fineberg noted that Haas included on one of his slides the question “When should the precautionary principle be invoked?” This slide referenced a 1997 report by the Presidential/Congressional Commission on Risk Assessment and Risk Management. In this report, the following comments were provided on the use of the “precautionary principle”:

Decision-makers must balance the value of obtaining additional information against the need for a decision, however uncertain. Sometimes a decision must be made under the precautionary principle. Every effort should be made to avoid “paralysis by analysis” where the need for additional information is used as an excuse to avoid or postpone decision-making. When sufficient information is available to make a risk management decision or when additional information or analysis would not contribute significantly to the quality of the decision, the decision should not be postponed. (Presidential/Congressional Commission, 1997:39)

Several participants, including Haas himself, noted that there may be a dearth of information for quantifying many aspects of the GoF research risk assessment. Dr. Michael Imperiale of the University of Michigan commented that there is a lot of debate about how one quantifies the risks and benefits and that there are different ways to look at this question. People can come up with different numbers depending on what is fed into the equation. In addition, benefits, as many people noted, may be intangible or difficult to predict. Some outcomes may not be evident until 20 years in the future. He stated that we should not kid ourselves into thinking we can come up with some formula to plug in all the variables and produce something that shows that the risks outweigh the benefits or vice versa. It needs to be acknowledged that it will be difficult to quantify the equation and, in addition, if we were able to determine exact numbers, then different individuals would place different values on different variables. Some may believe that the advancement of knowledge is much more important than whether risky research is going to inform vaccine preparedness. He believes that one of the best things to come out of the risk assessment would be to convince ourselves and the public that we considered the issues in depth and that whatever decision we made was not pulled out of thin air, but rather the result of a careful deliberative process.
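Imperiale's observation that “people can come up with different numbers depending on what is fed into the equation” can be illustrated with a toy expected-value calculation. Every input below is an invented placeholder, not an estimate from any real GoF analysis; the point of the sketch is only that two analysts using the same formula but different defensible inputs can reach opposite conclusions.

```python
# Toy sensitivity sketch: the sign of a net expected-benefit
# calculation depends entirely on the assumed inputs.
def net_expected_benefit(p_benefit, benefit_value, p_harm, harm_cost):
    """Expected benefit minus expected harm, in arbitrary common units."""
    return p_benefit * benefit_value - p_harm * harm_cost

# Analyst 1: optimistic about the research payoff, low assumed
# probability of a serious accident.
a1 = net_expected_benefit(p_benefit=0.30, benefit_value=1000,
                          p_harm=1e-4, harm_cost=1_000_000)

# Analyst 2: same formula, more pessimistic (but also defensible)
# inputs for the payoff probability and accident probability.
a2 = net_expected_benefit(p_benefit=0.05, benefit_value=1000,
                          p_harm=1e-3, harm_cost=1_000_000)

print(round(a1, 2))  # positive: "benefits outweigh risks"
print(round(a2, 2))  # negative: "risks outweigh benefits"
```

Nothing in the arithmetic adjudicates between the two analysts; that is why Imperiale argues the value of the exercise lies in demonstrating a careful deliberative process rather than in producing a single decisive number.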

Fineberg also asked Fischhoff whether there are special issues that should be considered in a situation such as that posed by GoF research where there is a very small likelihood of a catastrophic possible outcome. Fischhoff replied that there are people who say that the public is incapable of understanding small risks or making difficult decisions, but that he does not believe that the evidence supports that. He said he would not give up on the public on the basis of a glib meme about public incompetence. People respond in ways seemingly contrary to the evidence because the evidence has not been presented in a credible way. The precautionary principle is brought into play when the people who are uncomfortable with technologies are analytically outgunned by the officials in charge of those technologies. Proposers of projects, such as new power plants, are often well financed and reluctant to modify their proposals, which are often assembled without listening to other people or incorporating other concerns. This can produce an either/or situation, and the people who object are simply outgunned. The precautionary principle may be the only arrow in their quiver, but it may make objectors appear to be demanding zero risk and unwilling to accept any kind of trade-off.

PNAS 2013, vol. 110, Supplement 3, and PNAS 2014, vol. 111, Supplement 4.

Citation: Board on Life Sciences; Division on Earth and Life Studies; Committee on Science, Technology, and Law; Policy and Global Affairs; Board on Health Sciences Policy; National Research Council; Institute of Medicine. Potential Risks and Benefits of Gain-of-Function Research: Summary of a Workshop. Washington (DC): National Academies Press (US); 2015 Apr 13. Chapter 2, Assessing Risks and Benefits.
