eTable 1. Characteristics of Delphi panellists

eTable 2. Characteristics of Consensus Meeting panellists

eTable 3. Fillable Checklist: CONSORT-Outcomes (for combined completion of CONSORT 2010 and CONSORT-Outcomes 2022 items)

eTable 4. Fillable Checklist: CONSORT-Outcomes 2022 Extension items only (for separate completion of CONSORT 2010 and CONSORT-Outcomes 2022 items)

eTable 5. Key users and groups, proposed actions, and potential benefits for supporting implementation and adherence to CONSORT-Outcomes

eTable 6. CONSORT-Outcomes items and five optional additional items for outcome reporting in trial reports and related documents

eAppendix 1. Development of CONSORT-Outcomes, including patient/public engagement

eAppendix 2. Detailed search strategy for scoping review

eAppendix 3. Acknowledgement of Contributors to the Development of CONSORT-Outcomes (and SPIRIT-Outcomes)

eFigure. Flow of candidate items through development of CONSORT-Outcomes

eReferences

  • Reporting of Patient-Reported Outcomes in Randomized Trials: The CONSORT PRO Extension JAMA Special Communication February 27, 2013 Calvert and colleagues expand the Consolidated Standards of Reporting Trials (CONSORT) to include 5 guidelines essential for adequately reporting patient-reported outcomes as primary or major secondary end points in research articles in its CONSORT PRO guidelines. Melanie Calvert, PhD; Jane Blazeby, MD; Douglas G. Altman, DSc; Dennis A. Revicki, PhD; David Moher, PhD; Michael D. Brundage, MD; for the CONSORT PRO Group
  • CONSORT Extension for Reporting Multi-Arm Parallel Group Randomized Trials JAMA Special Communication April 23, 2019 This extension of the CONSORT 2010 Statement, a guideline and checklist for reporting parallel group randomized trials, provides updates for the reporting of multi-arm trials to assist evaluations of rigor and reproducibility and enhance understanding of the methodology of trials with more than 2 comparison arms. Edmund Juszczak, MSc; Douglas G. Altman, DSc; Sally Hopewell, DPhil; Kenneth Schulz, PhD
  • CONSERVE Guidelines for Trials Modified Due to Extenuating Circumstances JAMA Special Communication July 20, 2021 This Special Communication describes CONSERVE, a guideline developed to improve the transparency, quality, and completeness of reporting of trials and trial protocols that undergo important modifications in response to extenuating circumstances such as the COVID-19 pandemic. Aaron M. Orkin, MD, MSc, MPH; Peter J. Gill, MD, DPhil; Davina Ghersi, MD, PhD; Lisa Campbell, MD, MBBCh, BSc; Jeremy Sugarman, MD, MPH, MA; Richard Emsley, PhD; Philippe Gabriel Steg, MD; Charles Weijer, MD, PhD; John Simes, MBBS, MD; Tanja Rombey, MPH; Hywel C. Williams, DSc; Janet Wittes, PhD; David Moher, PhD; Dawn P. Richards, PhD; Yvette Kasamon, MD; Kenneth Getz, MBA; Sally Hopewell, MSc, DPhil; Kay Dickersin, MA, PhD; Taixiang Wu, MPH; Ana Patricia Ayala, MISt; Kenneth F. Schulz, PhD; Sabine Calleja, MI; Isabelle Boutron, MD, PhD; Joseph S. Ross, MD, MHS; Robert M. Golub, MD; Karim M. Khan, MD, PhD; Cindy Mulrow, MD, MSc; Nandi Siegfried, PhD, MPH; Joerg Heber, PhD; Naomi Lee, MD; Pamela Reed Kearney, MD; Rhoda K. Wanyenze, MBChB, MPH, PhD; Asbjørn Hróbjartsson, MD, PhD, MPhil; Rebecca Williams, PharmD, MPH; Nita Bhandari, PhD; Peter Jüni, MD; An-Wen Chan, MD, DPhil; and the CONSERVE Group (whose members also include Veronique Kiermer, Jacqueline Corrigan-Curay, and John Concato)
  • Guidelines for Reporting Outcomes in Trial Protocols JAMA Special Communication December 20, 2022 This report, the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT)–Outcomes 2022 extension, is a consensus statement on the standards for describing outcome-specific information in clinical trial protocols and contains recommendations to be integrated with the SPIRIT 2013 statement. Nancy J. Butcher, PhD; Andrea Monsour, MPH; Emma J. Mew, MPH, MPhil; An-Wen Chan, MD, DPhil; David Moher, PhD; Evan Mayo-Wilson, DPhil; Caroline B. Terwee, PhD; Alyssandra Chee-A-Tow, MPH; Ami Baba, MRes; Frank Gavin, MA; Jeremy M. Grimshaw, MBCHB, PhD; Lauren E. Kelly, PhD; Leena Saeed, BSc, BEd; Lehana Thabane, PhD; Lisa Askie, PhD; Maureen Smith, MEd; Mufiza Farid-Kapadia, MD, PhD; Paula R. Williamson, PhD; Peter Szatmari, MD; Peter Tugwell, MD; Robert M. Golub, MD; Suneeta Monga, MD; Sunita Vohra, MD; Susan Marlin, MSc; Wendy J. Ungar, PhD; Martin Offringa, MD, PhD

Butcher NJ, Monsour A, Mew EJ, et al. Guidelines for Reporting Outcomes in Trial Reports: The CONSORT-Outcomes 2022 Extension. JAMA. 2022;328(22):2252-2264. doi:10.1001/jama.2022.21022


Guidelines for Reporting Outcomes in Trial Reports: The CONSORT-Outcomes 2022 Extension

  • 1 Child Health Evaluative Sciences, The Hospital for Sick Children Research Institute, Toronto, Ontario, Canada
  • 2 Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
  • 3 Department of Chronic Disease Epidemiology, School of Public Health, Yale University, New Haven, Connecticut
  • 4 Department of Medicine, Women’s College Research Institute, University of Toronto, Toronto, Ontario, Canada
  • 5 Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
  • 6 School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada
  • 7 Department of Epidemiology, Gillings School of Global Public Health, University of North Carolina, Chapel Hill
  • 8 Amsterdam University Medical Centers, Vrije Universiteit, Department of Epidemiology and Data Science, Amsterdam, the Netherlands
  • 9 Department of Methodology, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
  • 10 public panel member, Toronto, Ontario, Canada
  • 11 Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
  • 12 Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
  • 13 Department of Pharmacology and Therapeutics, University of Manitoba, Winnipeg, Canada
  • 14 Children’s Hospital Research Institute of Manitoba, Winnipeg, Canada
  • 15 Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
  • 16 NHMRC Clinical Trials Centre, University of Sydney, Sydney, New South Wales, Australia
  • 17 patient panel member, Ottawa, Ontario, Canada
  • 18 MRC-NIHR Trials Methodology Research Partnership, Department of Health Data Science, University of Liverpool, Liverpool, England
  • 19 Cundill Centre for Child and Youth Depression, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
  • 20 Department of Psychiatry, The Hospital for Sick Children, Toronto, Ontario, Canada
  • 21 Bruyère Research Institute, Ottawa, Ontario, Canada
  • 22 Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
  • 23 Department of Medicine, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
  • 24 Departments of Pediatrics and Psychiatry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Canada
  • 25 Clinical Trials Ontario, Toronto, Canada
  • 26 Department of Public Health Sciences, Queen’s University, Kingston, Ontario, Canada
  • 27 Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, Ontario, Canada
  • 28 Division of Neonatology, The Hospital for Sick Children, Toronto, Ontario, Canada

Question   What outcome-specific information should be included in a published clinical trial report?

Findings   Using an evidence-based and international consensus–based approach that applied methods from the Enhancing the Quality and Transparency of Health Research (EQUATOR) methodological framework, 17 outcome-specific reporting items were identified.

Meaning   Inclusion of these items in clinical trial reports may enhance trial utility, replicability, and transparency and may help limit selective nonreporting of trial results.

Importance   Clinicians, patients, and policy makers rely on published results from clinical trials to help make evidence-informed decisions. To critically evaluate and use trial results, readers require complete and transparent information regarding what was planned, done, and found. Specific and harmonized guidance as to what outcome-specific information should be reported in publications of clinical trials is needed to reduce deficient reporting practices that obscure issues with outcome selection, assessment, and analysis.

Objective   To develop harmonized, evidence- and consensus-based standards for reporting outcomes in clinical trial reports through integration with the Consolidated Standards of Reporting Trials (CONSORT) 2010 statement.

Evidence Review   Using the Enhancing the Quality and Transparency of Health Research (EQUATOR) methodological framework, the CONSORT-Outcomes 2022 extension of the CONSORT 2010 statement was developed by (1) generation and evaluation of candidate outcome reporting items via consultation with experts and a scoping review of existing guidance for reporting trial outcomes (published within the 10 years prior to March 19, 2018) identified through expert solicitation, electronic database searches of MEDLINE and the Cochrane Methodology Register, gray literature searches, and reference list searches; (2) a 3-round international Delphi voting process (November 2018-February 2019) completed by 124 panelists from 22 countries to rate and identify additional items; and (3) an in-person consensus meeting (April 9-10, 2019) attended by 25 panelists to identify essential items for the reporting of outcomes in clinical trial reports.

Findings   The scoping review and consultation with experts identified 128 recommendations relevant to reporting outcomes in trial reports, the majority (83%) of which were not included in the CONSORT 2010 statement. All recommendations were consolidated into 64 items for Delphi voting; after the Delphi survey process, 30 items met criteria for further evaluation at the consensus meeting and possible inclusion in the CONSORT-Outcomes 2022 extension. The discussions during and after the consensus meeting yielded 17 items that elaborate on the CONSORT 2010 statement checklist items and are related to completely defining and justifying the trial outcomes, including how and when they were assessed (CONSORT 2010 statement checklist item 6a), defining and justifying the target difference between treatment groups during sample size calculations (CONSORT 2010 statement checklist item 7a), describing the statistical methods used to compare groups for the primary and secondary outcomes (CONSORT 2010 statement checklist item 12a), and describing the prespecified analyses and any outcome analyses not prespecified (CONSORT 2010 statement checklist item 18).

Conclusions and Relevance   This CONSORT-Outcomes 2022 extension of the CONSORT 2010 statement provides 17 outcome-specific items that should be addressed in all published clinical trial reports and may help increase trial utility, replicability, and transparency and may minimize the risk of selective nonreporting of trial results.

Well designed and properly conducted randomized clinical trials (RCTs) are the gold standard for producing primary evidence that informs evidence-based clinical decision-making. In RCTs, trial outcomes are used to assess the intervention effects on participants.1 The Consolidated Standards of Reporting Trials (CONSORT) 2010 statement provided 25 reporting items for inclusion in published RCT reports.2,3

Fully reporting trial outcomes is important for replicating results, supporting knowledge synthesis efforts, and preventing selective nonreporting of results. A scoping review revealed diverse and inconsistent recommendations from academic, regulatory, and other key sources on how to report trial outcomes in published reports.4 Insufficient outcome reporting remains common across academic journals and disciplines; key information about outcome selection, definition, assessment, analysis, and changes from the prespecified outcomes (ie, from the trial protocol or the trial registry) is often poorly reported.5-9 Such avoidable reporting issues have been shown to affect the conclusions drawn from systematic reviews and meta-analyses,10 contributing to research waste.11

Although calls for improved reporting of trial outcomes have been made,5,12 what constitutes useful, complete reporting of trial outcomes to knowledge users such as trialists, systematic reviewers, journal editors, clinicians, patients, and the public is unclear.4 Two extensions (for harms in 2004 and for patient-reported outcomes in 2013)13,14 of the CONSORT statement relevant to the reporting of specific types of trial outcomes exist; however, no standard reporting guideline for essential outcome-specific information applicable to all outcome types, populations, and trial designs is available.4

The aim of the CONSORT-Outcomes 2022 extension was to develop harmonized, evidence- and consensus-based outcome reporting standards for clinical trial reports.

The CONSORT-Outcomes 2022 extension was developed as part of the Instrument for Reporting Planned Endpoints in Clinical Trials (InsPECT) project15 in accordance with the Enhancing the Quality and Transparency of Health Research (EQUATOR) methodological framework for reporting guideline development.16 Ethics approval was not required, as determined by the research ethics committee at The Hospital for Sick Children. The development15 of the CONSORT-Outcomes 2022 extension occurred in parallel with the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT)–Outcomes 2022 extension for clinical trial protocols.17

First, we created an initial list of recommendations relevant to reporting outcomes for RCTs, synthesized from consultation with experts and a scoping review of existing guidance for reporting trial outcomes (published within the 10 years prior to March 19, 2018) identified through expert solicitation, electronic database searches of MEDLINE and the Cochrane Methodology Register, gray literature searches, and reference list searches, as described.4,18 Second, a 3-round international Delphi voting process, completed by 124 panelists from 22 countries (eTable 1 in the Supplement), took place from November 2018 to February 2019 to identify additional items and assess the importance of each item. Third, an in-person expert consensus meeting was held (April 9-10, 2019), attended by 25 panelists from 4 countries, including a patient partner and a public partner, to identify the set of essential items relevant to reporting outcomes for trial reports and establish dissemination activities. Selection and wording of the items were finalized at a postconsensus meeting by executive panel members and via email with consensus meeting panelists.
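As a toy illustration of the Delphi rating step, the sketch below tallies hypothetical importance ratings for one candidate item. The 1-to-9 scale, the 70% thresholds, and the three-way classification are illustrative assumptions only, not the consensus criteria used by the InsPECT project.

```python
# Hypothetical Delphi tally. The 1-9 scale, 70% cutoffs, and labels are
# illustrative assumptions, not the InsPECT project's actual criteria.
def consensus_status(ratings, include_cutoff=0.70, exclude_cutoff=0.70):
    """Classify a candidate item from panelists' 1-9 importance ratings."""
    n = len(ratings)
    rated_high = sum(1 for r in ratings if 7 <= r <= 9) / n  # rated critical
    rated_low = sum(1 for r in ratings if 1 <= r <= 3) / n   # rated unimportant
    if rated_high >= include_cutoff:
        return "include"   # carry forward to the consensus meeting
    if rated_low >= exclude_cutoff:
        return "exclude"   # drop from further rounds
    return "discuss"       # re-vote or discuss in the next round

# Example: 10 panelists rate one item; 8 of 10 rate it 7-9
print(consensus_status([9, 8, 7, 9, 8, 7, 9, 6, 8, 7]))  # → include
```

In a multi-round process like the one described, items classified as "discuss" would typically be re-presented with summary feedback in the next round.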

The detailed methods describing development of the CONSORT-Outcomes 2022 extension, including the number of items evaluated at each phase and the process toward the final set of included items (eFigure in the Supplement), appear in eAppendix 1 in the Supplement. The scoping review protocol and findings have been published4,18 and appear in eAppendix 1 in the Supplement; the search strategy appears in eAppendix 2 in the Supplement. The self-reported characteristics of the Delphi voting panelists and the consensus meeting panelists appear in eTables 1-2 in the Supplement. Details regarding the patient and public partner involvement appear in eAppendix 1 in the Supplement.

In addition to the inclusion of the CONSORT 2010 statement checklist items, the CONSORT-Outcomes 2022 extension recommends that a minimum of 17 outcome-specific reporting items be included in clinical trial reports, regardless of trial design or population. The scoping review and consultation with experts identified 128 recommendations relevant to reporting outcomes in trial reports, the majority (83%) of which were not included in the CONSORT 2010 statement. All recommendations were consolidated into 64 items for Delphi voting; after the Delphi survey process, 30 items met the criteria for further evaluation at the consensus meeting and possible inclusion in the CONSORT-Outcomes 2022 extension. The CONSORT 2010 statement checklist items and the 17 items added by the CONSORT-Outcomes 2022 extension appear in Table 1.19

A fillable version of the checklist appears in eTables 3-4 in the Supplement and on the CONSORT website.20 When using the updated checklist, users should refer to definitions of key terms in the glossary21-38 (Box) because variations in terminology and definitions exist across disciplines and geographic areas. The 5 core elements of a defined outcome (with examples) appear in Table 2.39,40

Glossary of Terms Used in the CONSORT-Outcomes 2022 Extension

Composite outcome: A composite outcome consists of ≥2 component outcomes (eg, proportion of participants who died or experienced a nonfatal stroke). Participants who have experienced any of the events specified by the components are considered to have experienced the composite outcome.21,22

CONSORT 2010: Consolidated Standards of Reporting Trials (CONSORT) statement that was published in 2010.2,3

CONSORT-Outcomes 2022 extension: Additional essential checklist items describing outcome-related content that are not covered by the CONSORT 2010 statement.

Construct validity: The degree to which the scores reported in a trial are consistent with the hypotheses (eg, with regard to internal relationships, the relationships of the scores to other instruments, or relevant between-group differences) based on the assumption that the instrument validly measures the domain to be measured.30

Content validity: The degree to which the content of the study instrument is an adequate reflection of the domain to be measured.30

Criterion validity: The degree to which the scores of a study instrument are an adequate reflection of a gold standard.30

Cross-cultural validity: The degree to which the performance of the items on a translated or culturally adapted study instrument is an adequate reflection of the performance of the items using the original version of the instrument.30

Minimal important change: The smallest within-patient change that is considered important by patients, clinicians, or relevant others.4,5 The change may be in a score or unit of measure (continuous or ordinal measurements) or in frequency (dichotomous outcomes). This term is often used interchangeably in the health literature with the term minimal important difference. In the CONSORT-Outcomes 2022 extension, the minimal important change conceptually refers to important intrapatient change (item 6a.3) and the minimal important difference refers to the important between-group difference. Minor variants of the term, such as minimum instead of minimal, or the addition of the adjective clinically or clinical, are common (eg, the minimum clinically important change).23

Minimal important difference: The smallest between-group difference that is considered important by patients, clinicians, or relevant others.24-27 The difference may be in a score or unit of measure (continuous or ordinal measurements) or in frequency (dichotomous outcomes). Minor variants of the term, such as minimum instead of minimal, or the addition of the adjective clinically or clinical, are common (eg, the minimum clinically important difference).23

Outcome: Refers to what is being assessed to examine the effect of exposure to a health intervention.1 The 5 core elements of a defined outcome appear in Table 2.

Primary outcome: The planned outcome that is most directly related to the primary objective of the trial.28 It is typically the outcome used in the sample size calculation for trials with the primary objective of assessing efficacy or effectiveness.29 Many trials have 1 primary outcome, but some have >1. The term primary end point is sometimes used in the medical literature when referring to the primary outcome.4

Reliability: The degree to which the measurement is free from error. Specifically, the extent to which scores have not changed for participants and are the same for repeated measures under several conditions (eg, using different sets of items from the same rating scale for internal consistency; over time or test-retest; by different persons on the same occasion or interrater; or by the same persons, such as raters or responders, on different occasions or intrarater).30

Responsiveness: The ability of a study instrument to accurately detect and measure change in the outcome domain over time.31,32 Distinct from an instrument’s construct validity and criterion validity, which refer to the validity of a single score, responsiveness refers to the validity of a change score (ie, longitudinal validity).30

Secondary outcomes: The outcomes prespecified in the trial protocol to assess any additional effects of the intervention.28

Smallest worthwhile effect: The smallest beneficial effect of an intervention that justifies the costs, potential harms, and inconvenience of the intervention as determined by patients.33

SPIRIT 2013: Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) statement that was published in 2013.35,36

SPIRIT-Outcomes 2022 extension: Additional essential checklist items describing outcome-related trial protocol content that are not covered by the SPIRIT 2013 statement.17

Structural validity: The degree to which the scores of a study instrument (eg, a patient questionnaire or a clinical rating scale) are an adequate reflection of the dimensionality of the domain to be measured.30

Study instrument: The scale or tool used to make an assessment. A study instrument may be a questionnaire, a clinical rating scale, a laboratory test, a score obtained through a physical examination or an observation of an image, or a response to a single question.34

Target difference: The value used in sample size calculations as the difference sought to be detected on the primary outcome between intervention groups; it should be considered realistic or important (such as the minimal important difference or the smallest worthwhile effect) by ≥1 key stakeholder group.37,38

Validity: The degree to which a study instrument measures the domain it purports to measure.30
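Because the target difference defined in the glossary drives the sample size calculation (CONSORT 2010 statement checklist item 7a), a minimal sketch of the standard normal-approximation formula for comparing two means may help fix ideas. The target difference, standard deviation, alpha, and power below are invented example values, not figures from any trial.

```python
# Sketch of a two-group sample size calculation from a target difference,
# using the standard normal-approximation formula for comparing two means.
# All numeric inputs here are invented example values.
from math import ceil
from statistics import NormalDist

def n_per_group(target_diff, sd, alpha=0.05, power=0.80):
    """Participants per group to detect `target_diff` on a continuous
    primary outcome with standard deviation `sd`, two-sided test."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z(power)           # ~0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 * (sd / target_diff) ** 2)

# Example: target difference of 5 points (eg, a minimal important
# difference on a 0-100 scale) with an assumed SD of 10
print(n_per_group(5, 10))  # → 63 per group
```

The formula makes visible why the justification of the target difference matters: halving the target difference quadruples the required sample size.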

Application of these new checklist items from the CONSORT-Outcomes 2022 extension, in conjunction with the CONSORT 2010 statement, helps ensure that trial outcomes are comprehensively defined and reported. The list of key users, their proposed actions, and the potential benefits of implementing the 17 CONSORT-Outcomes 2022 extension checklist items appears in eTable 5 in the Supplement and was generated from the consensus meeting’s knowledge translation session. Examination and application of these outcome reporting recommendations may be helpful for trial authors, journal editors, peer reviewers, systematic reviewers, patients, the public, and trial participants (eTable 5 in the Supplement).

This report contains a brief explanation of the 17 checklist items generated from the CONSORT-Outcomes 2022 extension. Guidance on how to report the existing checklist items can be found in the CONSORT 2010 statement,2 in Table 1, and in an explanatory guideline report.41 Additional items that may be useful to include in some trial reports or in associated trial documents (eg, the statistical analysis plan or a clinical study report42) appear in eTable 6 in the Supplement but were not considered essential reporting items for all trial reports.

This item expands on CONSORT 2010 statement checklist item 6a to explicitly ask for reporting on the rationale underlying the selection of the outcome domain for use as the primary outcome. At a broad conceptual level, the outcome’s domain refers to the name or concept used to describe an outcome (eg, pain).10,39 The word domain is closely linked to, and sometimes used interchangeably with, the terms construct and attribute in the literature.40 Even though a complete outcome definition is expected to be provided in the trial report (as recommended by CONSORT 2010 statement checklist item 6a),2,40 the rationale for the choice of the outcome domain for the trial’s primary outcome is also essential to communicate because it underpins the purpose of the proposed trial.

Important aspects of the rationale may include (1) the importance of the outcome domain to the individuals involved in the trial (eg, patients, the public, clinicians, policy makers, funders, or health payers), (2) the expected effect of the intervention on the outcome domain, and (3) the ability to assess it accurately, safely, and feasibly during the trial. It should also be reported whether the selected outcome domain originated from a core outcome set (ie, an agreed standardized set of outcomes that should be measured in all trials for a specific clinical area).43

This item expands on CONSORT 2010 statement checklist item 6a that recommends completely defining prespecified primary and secondary outcomes, and provides specific recommendations that mirror SPIRIT 2013 statement checklist item 12 (and its explanatory text for defining trial outcomes). 35 CONSORT-Outcomes 2022 extension checklist item 6a.2 recommends describing each element of an outcome including its measurement variable, specific analysis metric, method of aggregation, and time point. Registers such as ClinicalTrials.gov already require that trials define their outcomes using this framework. 10 , 35 , 39 Failure to clearly report each element of the outcomes from a trial enables undetectable multiple testing, data cherry-picking, and selective nonreporting of results in the trial report compared with what was planned. 10 , 44

This item expands on CONSORT-Outcomes 2022 extension checklist item 6a.2. In cases in which the participant-level analysis metric for the primary outcome represents intraindividual change from an earlier value (such as a value measured at baseline), a definition and justification of what is considered the minimal important change (MIC) for the relevant study instrument should be provided. In the CONSORT-Outcomes 2022 extension, the MIC was defined as the smallest within-patient change that is considered important by patients, clinicians, or relevant others (common alternative terminologies appear in the Box). 24 , 25 , 31 The MIC is important to report for all trials that use a within-participant change metric, such as those that plan to analyze the proportion of participants showing a change larger than the MIC value in each treatment group (eg, to define the proportion who improved) 45 or in n-of-1 trial designs. 46

Describing the MIC will facilitate understanding of the trial results and their clinical relevance by patients, clinicians, and policy makers. Users with trial knowledge may be interested in the MIC itself as a benchmark or, alternatively, in a value larger than the known MIC. Describing the justification for the selected MIC is important because there can be numerous MICs available for the same study instrument, with varying clinical relevance and methodological quality depending on how and in whom they were determined. 47 - 50 If the MIC is unknown for the study instrument with respect to the trial’s population and setting, this should be reported.
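A responder analysis of the kind described above can be sketched in a few lines. The MIC value and pain scores below are invented for illustration and do not come from any validated instrument.

```python
# Illustrative sketch (not from the guideline): a responder analysis using a
# hypothetical minimal important change (MIC) threshold for a pain scale
# where lower scores indicate less pain.

MIC = 2.0  # hypothetical smallest within-patient change judged important


def responder_proportion(baseline, follow_up, mic=MIC):
    """Proportion of participants whose improvement (baseline minus
    follow-up, for a scale where lower is better) meets or exceeds the MIC."""
    changes = [b - f for b, f in zip(baseline, follow_up)]
    responders = sum(1 for c in changes if c >= mic)
    return responders / len(changes)


baseline = [7, 6, 8, 5, 9, 6]   # invented pain scores (0-10, lower = better)
follow_up = [4, 5, 3, 5, 6, 2]
print(responder_proportion(baseline, follow_up))  # → 0.666…
```

Because the result depends directly on the MIC chosen, the justification for that value (as the item recommends) changes how these proportions should be read.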

This item expands on CONSORT-Outcomes 2022 extension checklist item 6a.2 to prompt authors, if applicable, to describe the prespecified cutoff values used to convert any outcome data collected on a continuous (or ordinal) scale into a categorical variable for their analyses. 10 , 35 Providing an explanation of the rationale for the choice of the cutoff value is recommended; it is not unusual for different trials to apply different cutoff values. The cutoff values selected are most useful when they have clear clinical relevance. 51 Reporting this information will help avoid undisclosed testing of multiple cutoff values (a form of “p-hacking”), data cherry-picking, and selective nonreporting of results in the trial report. 10 , 44 , 52
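As a minimal sketch of prespecified dichotomization, the following example converts continuous systolic blood pressure values into a binary variable. The cutoff of 140 mm Hg is a common clinical threshold used here purely for illustration.

```python
# Illustrative sketch: converting a continuous outcome (systolic blood
# pressure in mm Hg) into a prespecified binary variable. The cutoff and
# data are examples only; a real trial would prespecify and justify its own.

CUTOFF = 140  # prespecified cutoff, ideally with clear clinical relevance


def dichotomize(values, cutoff=CUTOFF):
    """Return 1 for values at or above the cutoff, else 0."""
    return [1 if v >= cutoff else 0 for v in values]


print(dichotomize([128, 151, 140, 133]))  # → [0, 1, 1, 0]
```

Reporting the exact cutoff (and whether the boundary value itself counts as an event) removes any ambiguity about how the categorical variable was derived.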

This item expands on CONSORT-Outcomes 2022 extension checklist item 6a.2 regarding the time point to prompt authors, if applicable, to specify the time point used in the main analysis if outcome assessments were performed at multiple time points after randomization (eg, a trial that assessed blood pressure daily for 12 weeks after randomization). Specifying the preplanned time points of assessment used for the analyses will help limit the possibility of unplanned analyses of multiple assessment time points and the selective nonreporting of time points that did not yield large or significant results. 35 , 39

Providing a rationale for the choice of time point is encouraged (eg, based on the expected clinical trajectory after the intervention or the duration of treatment needed to achieve a clinically meaningful exposure to treatment). The length of follow-up should be appropriate to the management decision the trial is designed to inform. 53

A composite outcome consists of 2 or more component outcomes that may be related. Participants who have experienced any 1 of the defined component outcomes comprising the composite outcome are considered to have experienced the composite outcome. 21 , 22 When used, composite outcomes should be prespecified, justified, and fully defined, 51 which includes a complete definition of each individual component outcome and a description of how those will be combined (eg, what analytic steps define the occurrence of the composite outcome).

However, composite outcomes can be difficult to interpret even when sufficiently reported. For example, a composite outcome can disguise treatment effects when the effects on the component outcomes go in opposite directions or when the component outcomes have different effect levels (eg, combining death and disability), furthering the need for quality reporting for every component. 22 , 54
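The occurrence rule for a composite outcome can be made explicit in a few lines of code. The component outcomes below (death, myocardial infarction, stroke) form a commonly cited cardiovascular composite and are used here only as an example; a trial would fully define each component per the item above.

```python
# Illustrative sketch: deriving a composite outcome from its component
# outcomes. A participant experiences the composite if any single
# prespecified component occurred.

def composite_occurred(death, myocardial_infarction, stroke):
    """True if any prespecified component of the composite occurred."""
    return death or myocardial_infarction or stroke


# One hypothetical participant record per tuple: (death, MI, stroke)
participants = [(False, True, False), (False, False, False), (True, False, False)]
events = [composite_occurred(*p) for p in participants]
print(events)  # → [True, False, True]
```

Note that the composite event count alone cannot reveal which components drove it, which is why the text recommends quality reporting for every component.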

Any outcomes that were not prespecified in the trial protocol or trial registry that were measured during the trial should be clearly identified and labeled. Outcomes that were not prespecified can result from the addition of an entirely new outcome domain that was not initially planned (eg, the unplanned inclusion and analysis of change in frequency of cardiovascular hospital admissions obtained from a hospital database). In addition, outcomes that differ from the prespecified outcomes in measurement variable, analysis metric, method of aggregation, or analysis time point are not prespecified. For example, if the trial reports on treatment success rates at 12 months instead of the prespecified time point of 6 months in the trial protocol, the 12-month rate should be identified as an outcome that was not prespecified, with the specific change explained. Many reasons exist for changes in outcome data (eg, screening, diagnostic, and surveillance procedures may change; coding systems may change; or new adverse effect data may emerge). For fundamental changes to the primary outcome, investigators should report details (eg, the nature and timing of the change, motivation, whether the impetus arose from internal or external data sources, and who proposed and who approved these changes).

The addition of undeclared outcomes in trial reports is a major concern. 7 , 55 Among 67 trials published in 5 high-impact CONSORT-endorsing journals, 365 outcomes were added (a mean of 5 undeclared outcomes per trial). 7 Fewer than 15% of the added outcomes were described as not being prespecified. 7 Determining whether reported outcomes match those in trial protocols or trial registries should not be left for readers to check for themselves, which is an onerous process (estimated review time of 1-7 hours per trial) 7 and is impossible in some cases (such as when the trial protocol is not publicly available). There can be good reasons to change study outcomes while a trial is ongoing, and authors should disclose these changes (CONSORT 2010 statement checklist item 6b) to avoid any potential appearance of reporting bias.

The information provided to describe the study instrument should be sufficient to allow replication of the trial and interpretation of the results (eg, specify the version of the rating scale used, the mode of administration, and the make and model of the relevant laboratory instrument). 35 It is essential to summarize, and provide references to, empirical evidence demonstrating sufficient reliability (eg, test-retest, interrater or intrarater reliability, and internal consistency), validity (eg, content, construct, criterion, cross-cultural, and structural validity), and ability to detect change in the health outcome being assessed (ie, responsiveness), as appropriate for the type of study instrument and in a population comparable with the study sample. Such evidence may be drawn from high-quality primary studies of measurement properties, from systematic reviews of measurement properties of study instruments, and from core outcome sets. Diagnostic test accuracy is relevant to report when the defined outcome relates to the presence or absence of a condition before and after treatment. 56

CONSORT-Outcomes 2022 extension checklist item 6a.8 also recommends describing relevant measurement properties in a population similar to the study sample (or at least not substantively different) because measurement properties of study instruments cannot be presumed to be generalizable between different populations (eg, between different age groups). 57 If measurement properties of the study instrument were unknown for the population used, this can be stated with a rationale for why it was considered appropriate or necessary to use this instrument.

This information is critical to report because the quality and interpretation of the trial data rest on these measurement properties. For example, study instruments with poor content validity would not accurately reflect the domain that was intended to be measured, and study instruments with low interrater reliability would undermine the trial’s statistical power 35 if this imprecision was not accounted for in the planned power calculations. 30 - 32

Substantially different responses, and therefore different trial results, can be obtained for many types of outcomes (eg, behavioral, psychological outcomes), depending on who is assessing the outcome of interest. This variability may result from differences in assessors’ training or experience, different perspectives, or patient recall. 58 , 59 Assessments of a clinical outcome reported by a clinician, a patient, or a nonclinician observer or through a performance-based assessment are correspondingly classified by the US Food and Drug Administration as clinician-reported, patient-reported, observer-reported, and performance outcomes. 60

For outcomes that could be assessed by various people, an explanation for the choice of outcome assessor made in the context of the trial should be provided. For outcomes that are not influenced by the outcome assessor (eg, plasma cholesterol levels), this information is less relevant. Professional qualifications or any trial-specific training necessary for trial personnel to function as outcome assessors is often relevant to describe 35 (eg, when using the second edition of the Wechsler Abbreviated Scale of Intelligence, an assessor with a PhD or PsyD and ≥5 years of experience with the relevant patient population and ≥15 prior administrations using this instrument or similar IQ assessments might be required). Details regarding blinding of an assessor to the patient’s treatment assignment and emerging trial results are covered in CONSORT 2010 statement checklist item 11a.

Providing a description of any of the processes used to promote outcome data quality during and after data collection in a trial provides transparency and facilitates appraisal of the quality of the trial’s data. For example, subjective outcome assessments may have been performed in duplicate (eg, pathology assessments) or a central adjudication committee may have been used to ensure independent and accurate outcome assessments. Other common examples include verifying the data are in the proper format (eg, integer), the data are within an expected range of values, and the data are reviewed with independent source document verification (eg, by an external trial monitor). 35 The trial report should include a full description or a brief summary with reference to where the complete information can be found (eg, an open access trial protocol).
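Automated range and format checks of the kind mentioned above can be sketched minimally. The field name and limits below are hypothetical.

```python
# Illustrative sketch: simple automated format and range checks of the kind
# a trial's data-quality procedures might include. The field name
# ("pain_score") and the 0-10 range are invented for demonstration.

def validate_record(record, low=0, high=10):
    """Return a list of problems found in one outcome record."""
    problems = []
    score = record.get("pain_score")
    if not isinstance(score, int):
        problems.append("pain_score must be an integer")
    elif not (low <= score <= high):
        problems.append(f"pain_score {score} outside expected range {low}-{high}")
    return problems


print(validate_record({"pain_score": 12}))  # one range violation reported
print(validate_record({"pain_score": 7}))   # → []
```

Describing checks like these (or the manual equivalents, such as duplicate assessment or source document verification) lets readers judge how trustworthy the collected outcome data are.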

This item expands on CONSORT 2010 statement checklist item 7a for reporting how sample size was determined to prompt authors to report the target difference used to inform the trial’s sample size calculation. The target difference is the value used in sample size calculations as the difference sought to be detected in the primary outcome between the intervention groups at the specific time point for the analysis that should be considered realistic or important by 1 or more key stakeholder groups. 37 The Difference Elicitation in Trials project has published extensive evidence-based guidance on selecting a target difference for a trial, sample size calculation, and reporting. 37 , 38 The target difference may be the minimal important difference (MID; the smallest difference between patients perceived as important) 24 , 26 , 27 or the smallest worthwhile effect (the smallest beneficial effect of an intervention that justifies the costs, harms, and inconvenience of the interventions as determined by patients). 33 Because there can be different pragmatic or clinical factors informing the selected target difference (eg, the availability of a credible MID for the study instrument used to assess the primary outcome), 47 and numerous different options available (eg, 1 of several MIDs or values based on pilot studies), 47 it is important to explain why the chosen target difference was selected. 23 , 48 , 49
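The role of the target difference in a sample size calculation can be sketched numerically. The following minimal example applies the standard normal-approximation formula for comparing two means; the standard deviation and target difference values are invented for illustration and are not drawn from the guideline.

```python
import math
from statistics import NormalDist

# Illustrative sketch: per-group sample size for a two-arm trial comparing
# means, driven by the chosen target difference (two-sided test, normal
# approximation). All planning values below are hypothetical.

def n_per_group(target_diff, sd, alpha=0.05, power=0.80):
    """Standard formula: n = 2 * (z_alpha + z_beta)^2 * (sd / delta)^2."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # ≈ 1.96 for alpha = 0.05
    z_beta = z(power)           # ≈ 0.84 for 80% power
    n = 2 * (z_alpha + z_beta) ** 2 * (sd / target_diff) ** 2
    return math.ceil(n)


# Hypothetical planning values: SD of 10 points, target difference of 5 points
print(n_per_group(target_diff=5.0, sd=10.0))  # → 63 per group
```

Halving the target difference quadruples the required sample size, which is one reason the justification for the chosen value matters so much.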

This item extends CONSORT 2010 statement checklist item 12a to prompt authors to describe any statistical methods used to account for multiplicity relating to the analysis or interpretation of the results. Outcome multiplicity issues are common in trials and deserve particular attention when there are coprimary outcomes, multiple possible time points resulting from the repeated assessment of a single outcome, multiple planned analyses of a single outcome (eg, interim or subgroup analysis, multigroup trials), or numerous secondary outcomes for analysis. 61

The methods used to account for such forms of multiplicity include statistical methods (eg, family-wise error rate approaches) or descriptive approaches (eg, noting that the analyses are exploratory, placing the results in the context of the expected number of false-positive outcomes). 61 , 62 Such information may be briefly described in the text of the report or described in more detail in the statistical analysis plan. 63 Authors may report if no methods were used to account for multiplicity (eg, not applicable or were not considered necessary).
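As one concrete illustration of a family-wise error rate approach, the Holm-Bonferroni procedure can be sketched in a few lines. The p-values below are invented; a real trial would prespecify the procedure in its statistical analysis plan.

```python
# Illustrative sketch: the Holm-Bonferroni step-down procedure, one
# family-wise error rate approach among those that could be prespecified
# for multiple outcome analyses. The p-values used are invented.

def holm_bonferroni(p_values, alpha=0.05):
    """Return a list of booleans: whether each hypothesis is rejected."""
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    m = len(p_values)
    rejected = [False] * m
    for rank, i in enumerate(order):
        # Compare the rank-th smallest p-value against alpha / (m - rank)
        if p_values[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break  # stop at the first non-rejection (step-down rule)
    return rejected


print(holm_bonferroni([0.01, 0.04, 0.03]))  # → [True, False, False]
```

Holm's procedure is uniformly more powerful than plain Bonferroni while still controlling the family-wise error rate, which is why it is a common default choice.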

This item extends CONSORT 2010 statement checklist item 12a to recommend that authors (1) state and justify any criteria applied for excluding certain outcome data from the analysis or (2) report that no outcome data were excluded. This is in reference to explicitly and intentionally excluded outcome data, such as in the instance of too many missing items from a participant’s completed questionnaire, or through other well-justified exclusion of outliers for a particular outcome. This helps the reader to interpret the reported results. This information may be presented in the CONSORT flow diagram where the reasons for outcome data exclusion are stated for each outcome by treatment group.

The occurrence of missing participant outcome data in trials is common, and in many cases, this missingness is not random, meaning it is related to either allocation to a treatment group, patient-specific (prognostic) factors, or the occurrence of a specific health outcome. 64 , 65 When there are missing data, CONSORT-Outcomes 2022 extension checklist item 12a.3 recommends describing (1) any methods used to assess or identify the pattern of missingness and (2) any methods used to handle missing outcomes or entire assessments (the choice of which is informed by the identified pattern of missingness) in the statistical analysis (eg, multiple imputation, complete case, based on likelihood, and inverse-probability weighting).

It is critical to provide information about the patterns and handling of any missing data because missing data can lead to reduced power of the trial, affect its conclusions, and affect whether trials are at low or high risk of bias, depending on the pattern of missingness. 66 , 67 A lack of clarity about the magnitude of the missingness and about how missing data were handled in the analysis makes it impossible for meta-analysts to accurately extract sample sizes needed to weight studies in their pooled estimates and prevents accurate assessment of any risk of bias arising from missing data in the reported results. 67 , 68 This checklist item is not applicable if there is a complete data set, and it may be unimportant if the amount of missing data can be considered negligible.

Patterns of missingness (also referred to as missing data mechanisms) include missing completely at random, missing at random, and missing not at random, and they require description in trial reports to help readers and meta-analysts determine which patterns are present in the data sets. 69 Some missing data may still be obtainable (eg, via concerted follow-up efforts with a subset of the trial participants with missing data) to help distinguish between missing at random and missing not at random. 70 The pattern of missingness informs the choice of the methods used to handle missing outcomes or entire assessments (eg, multiple imputation and maximum likelihood analyses assume the data are at least missing at random) and is essential to report. Any sensitivity analyses that were conducted to assess the robustness of the trial results (eg, using different methods to handle missing data) should be reported. 35 , 64
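A toy numerical illustration of why the handling method must be reported: naive single mean imputation leaves the point estimate unchanged here but artificially shrinks the apparent variability relative to a complete-case analysis. Real trials would typically use principled methods such as multiple imputation; all values below are invented.

```python
import math

# Illustrative sketch: two naive ways of handling missing outcome data
# (None = missing). Compare the spread of the complete-case data with the
# spread after single mean imputation.

def sample_sd(values):
    m = sum(values) / len(values)
    return math.sqrt(sum((v - m) ** 2 for v in values) / (len(values) - 1))


def complete_case(values):
    """Drop missing observations entirely."""
    return [v for v in values if v is not None]


def mean_imputed(values):
    """Replace each missing observation with the observed mean."""
    observed = complete_case(values)
    fill = sum(observed) / len(observed)
    return [fill if v is None else v for v in values]


data = [4.0, None, 6.0, 8.0, None]  # invented outcome values
print(sample_sd(complete_case(data)))  # → 2.0
print(sample_sd(mean_imputed(data)))   # ≈ 1.41: spread artificially shrinks
```

The inflated apparent precision under naive imputation is exactly the kind of distortion that makes transparent reporting of the chosen method (and of sensitivity analyses) essential.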

Trial outcome data can be analyzed in many ways that can lead to different results. The general reporting principle described in the CONSORT 2010 statement was to “describe statistical methods with enough detail to enable a knowledgeable reader with access to the original data to verify the reported results.” 2 Item 12a.4 extends CONSORT 2010 statement checklist item 12a to recommend that authors report the definition of the outcome analysis population used for each analysis as it relates to nonadherence of the trial protocol. For each analysis, information on whether the investigators included all participants who were randomized to the group to which they were originally allocated (ie, intention-to-treat analysis) has been widely recognized to be particularly important to the critical appraisal and interpretation of trial findings. 2 , 35

Because amounts of missing data may vary among different outcomes and the reasons data are missing may also vary, CONSORT-Outcomes 2022 extension checklist item 12a.4 specifies reporting the definition of the outcome analysis population used in the statistical analyses. For example, a complete data set may be available to analyze the outcome of mortality but not for patient-reported outcomes within the same trial. In another example, analysis of harms might be restricted to participants who received the trial intervention so the absence or occurrence of harm was not attributed to a treatment that was never received. 35
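The distinction between analysis populations can be illustrated with a minimal sketch using invented participant records; the field names are hypothetical.

```python
# Illustrative sketch: why the analysis-population definition matters.
# Invented records flag whether each randomized participant actually
# received the intervention; the two populations below differ in size.

participants = [
    {"id": 1, "randomized_arm": "A", "received_treatment": True},
    {"id": 2, "randomized_arm": "A", "received_treatment": False},
    {"id": 3, "randomized_arm": "B", "received_treatment": True},
]

# Intention-to-treat: everyone analyzed in the arm they were randomized to.
itt = [p["id"] for p in participants]

# A restricted population (eg, for harms): only those who received treatment.
treated = [p["id"] for p in participants if p["received_treatment"]]

print(itt, treated)  # → [1, 2, 3] [1, 3]
```

Stating which population was used for each outcome analysis tells the reader whether participant 2's data contributed to a given estimate.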

This item expands on CONSORT 2010 statement checklist item 17a on outcomes and estimation to remind authors to ensure that they have reported the results for all outcome analyses that were prespecified in the trial protocol or statistical analysis plan. 68 Although this is expected to be standard practice, 2 the information available in the trial report is often insufficient regarding prespecified analyses for the reader to determine whether there was selective nonreporting of any trial results. 71 When it is not feasible to report on all prespecified analyses in a single trial report (eg, trials with a large number of prespecified secondary outcomes), authors should report where the results of any other prespecified outcome analyses can be found (eg, in linked publications or an online repository) or signal their intention to report later in the case of longer-term follow-up.

A recent study showed that adherence to prespecified statistical analyses remains low in published trials, with unexplained discrepancies between the prespecified and reported analyses. 71 This item extends CONSORT 2010 statement checklist item 18 on ancillary analyses to recommend that an explanation be provided for any analyses that were not prespecified (eg, in the trial protocol or statistical analysis plan) but are reported in the trial report. These types of analyses can be called exploratory analyses or analyses that were not prespecified. Communicating the rationale for any analyses that were not prespecified, but which were performed and reported, is important for trial transparency and for correct appraisal of the trial’s credibility. It can be important to state when such additional analyses were performed (eg, before or after seeing any results from comparative analyses for other outcomes). Multiple analyses of the same data create a risk of false-positive findings, and selective reporting of analyses that were not prespecified could lead to bias.

The CONSORT-Outcomes 2022 extension provides evidence- and consensus-based guidance for reporting outcomes in published clinical trial reports, extending the CONSORT 2010 statement checklist with 17 additional reporting items and harmonizing reporting recommendations with guidance from the SPIRIT-Outcomes 2022 extension. 17 Alignment across these 2 extension guidelines creates a cohesive continuum of reporting from the trial protocol to the completed trial that will facilitate both the researcher’s production of the trial protocol and trial report and, importantly, any assessment of the final report’s adherence to the trial protocol. 20 Similar to the CONSORT 2010 statement, 41 the CONSORT-Outcomes 2022 extension applies to the content of the trial report, including the tables and figures and online-only supplementary material. 72 The current recommendations are similarly not prescriptive regarding the structure or location of reporting this information; authors should “address checklist items somewhere in the article, with ample detail and lucidity.” 41

Users of the CONSORT-Outcomes 2022 extension checklist should note that these additional checklist items represent the minimum essential items for outcomes reporting and are being added to the CONSORT 2010 statement 2 , 3 guidelines to maximize trial utility, transparency, and replication and to limit selective nonreporting of results (eTable 5 in the Supplement). In some cases, it may be important to report additional outcome-specific information in trial reports, 4 such as the items in eTable 6 in the Supplement, or to consult the CONSORT-PRO extension for patient-reported outcome (PRO)-specific reporting guidance 14 and the CONSORT extension for reporting harms. 13 Authors adhering to the CONSORT-Outcomes 2022 extension should explain why any item is not relevant to their trial. For example, this extension checklist, which is for reporting systematically assessed outcomes, might not be applicable to outcomes that are not systematically collected or prespecified, such as spontaneously reported adverse events. When constrained by journal word count, authors can refer to open access trial protocols, statistical analysis plans, or trial registry data, or provide online-only supplementary materials.

We anticipate that the key users of the CONSORT-Outcomes 2022 extension will be trial authors, journal editors, peer reviewers, systematic reviewers, meta-analysis researchers, academic institutions, patients (including trial participants), and the broader public (eTable 5 in the Supplement ). Use of this extension by these groups may help improve trial utility, transparency, and replication. Patient and public engagement was successfully embedded into a consensus meeting for a methodologically complex topic, a rarity in reporting guideline development to date. Future reporting guideline development should engage patients and members of the public throughout the process. The CONSORT-Outcomes 2022 extension will be disseminated as outlined previously, 15 including through the EQUATOR Network and the CONSORT website. End users can provide their input on the content, clarity, and usability online, 73 which will inform any future updates.

This study has several limitations. First, the included checklist items are appropriate for systematically collected outcomes, including most potential benefits and some harms; however, other items might be applicable for reporting harms not systematically assessed. 74

Second, because these checklist items are not yet integrated in the main CONSORT checklist, finding and using multiple checklists may be considered burdensome by some authors and editors, which may affect uptake. 75 Future efforts to integrate these additional checklist items in the main CONSORT checklist might promote implementation in practice.

Third, although a large, multinational group of experts and end users was involved in the development of these recommendations with the aim of increasing usability among the broader research community, the Delphi voting results could have been affected by a nonresponse bias because panelists were self-selecting (ie, interested individuals signed up to take part in the Delphi voting process).

Fourth, the consensus meeting panelists were purposively sampled based on their expertise and roles relevant to clinical trial conduct, oversight, and reporting. 15 The views of individuals not well represented by the consensus meeting panelists (eg, trialists outside North America and Europe) may differ. The systematic and evidence-based approach 15 , 16 used to develop this guideline, including a rigorous scoping review of outcome reporting guidance, 4 , 18 will help mitigate the potential effect of these limitations.

This CONSORT-Outcomes 2022 extension of the CONSORT 2010 statement provides 17 outcome-specific items that should be addressed in all published clinical trial reports and may help increase trial utility, replicability, and transparency and may minimize the risk of selective nonreporting of trial results.

Corresponding Author: Nancy J. Butcher, PhD, Peter Gilgan Centre for Research and Learning, The Hospital for Sick Children, 686 Bay St, Toronto, ON M5G 0A4, Canada ([email protected]).

Accepted for Publication: October 25, 2022.

Author Contributions: Dr Butcher had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: All authors.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Butcher.

Critical revision of the manuscript for important intellectual content: All authors.

Statistical analysis: Monsour.

Obtained funding: Offringa.

Administrative, technical, or material support: Monsour, Mew, Baba.

Supervision: Butcher, Offringa.

Conflict of Interest Disclosures: Dr Butcher reported receiving grant funding from CHILD-BRIGHT and the Cundill Centre for Child and Youth Depression at the Centre for Addiction and Mental Health (CAMH) and receiving personal fees from Nobias Therapeutics. Ms Mew reported receiving salary support through a Canadian Institutes of Health Research doctoral foreign study award. Dr Kelly reported receiving funding from the Canadian Cancer Society, Research Manitoba, the Children’s Hospital Research Institute of Manitoba, Mitacs, and the SickKids Foundation. Dr Askie reported being a co-convenor of the Cochrane Prospective Meta-Analysis Methods Group. Dr Farid-Kapadia reported currently being an employee of Hoffmann La-Roche and holding shares in the company. Dr Williamson reported chairing the COMET initiative management group. Dr Szatmari reported receiving funding from the CAMH. Dr Tugwell reported co-chairing the OMERACT executive committee; receiving personal fees from the Reformulary Group, UCB Pharma GmbH, Parexel International, PRA Health Sciences, Amgen, AstraZeneca, Bristol Myers Squibb, Celgene, Eli Lilly, Genentech, Roche, Genzyme, Sanofi, Horizon Therapeutics, Merck, Novartis, Pfizer, PPD Inc, QuintilesIMS (now IQVIA), Regeneron Pharmaceuticals, Savient Pharmaceuticals, Takeda Pharmaceutical Co Ltd, Vertex Pharmaceuticals, Forest Pharmaceuticals, and Bioiberica; serving on data and safety monitoring boards for UCB Pharma GmbH, Parexel International, and PRA Health Sciences; and receiving unrestricted educational grants from the American College of Rheumatology and the European League of Rheumatology. Dr Monga reported receiving funding from the Cundill Centre for Child and Youth Depression at CAMH; receiving royalties from Springer for Assessing and Treating Anxiety Disorders in Young Children ; and receiving personal fees from the TD Bank Financial Group for serving as a chair in child and adolescent psychiatry. 
Dr Ungar reported being supported by the Canada research chair in economic evaluation and technology assessment in child health. No other disclosures were reported.

Funding/Support: This work, project 148953, received financial support from the Canadian Institutes of Health Research (supported the work of Drs Butcher, Kelly, Szatmari, Monga, and Offringa and Mss Saeed and Marlin).

Role of the Funder/Sponsor: The Canadian Institutes of Health Research had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Disclaimers: Dr Golub is Executive Deputy Editor of JAMA , but he was not involved in any of the decisions regarding review of the manuscript or its acceptance. This article reflects the views of the authors, the Delphi panelists, and the consensus meeting panelists and may not represent the views of the broader stakeholder groups, the authors’ institutions, or other affiliations.

Additional Contributions: We gratefully acknowledge the additional contributions made by the project core team, the executive team, the operations team, the Delphi panelists, and the international consensus meeting panelists (eAppendix 3 in the Supplement ). We thank Andrea Chiaramida, BA, for administrative project support and Lisa Stallwood, MSc, for administrative manuscript support (both with The Hospital for Sick Children). We thank Petros Pechlivanoglou, PhD, and Robin Hayeems, PhD, for piloting and providing feedback on the format of the Delphi survey (both with The Hospital for Sick Children). We thank Roger F. Soll, MD (Cochrane Neonatal and the Division of Neonatal-Perinatal Medicine, Larner College of Medicine, University of Vermont), and James Webbe, MB BChir, PhD, and Chris Gale, MBBS, PhD (both with Neonatal Medicine, School of Public Health, Imperial College London), for pilot testing the CONSORT-Outcomes 2022 extension checklist. None of these individuals received compensation for their role in the study.

Additional Information: The project materials and data are publicly available on the Open Science Framework at https://osf.io/arwy8/ .

Guidelines for Reporting Outcomes in Trial Reports: The CONSORT-Outcomes 2022 Extension

Affiliations.

  • 1 Child Health Evaluative Sciences, The Hospital for Sick Children Research Institute, Toronto, Ontario, Canada.
  • 2 Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada.
  • 3 Department of Chronic Disease Epidemiology, School of Public Health, Yale University, New Haven, Connecticut.
  • 4 Department of Medicine, Women's College Research Institute, University of Toronto, Toronto, Ontario, Canada.
  • 5 Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada.
  • 6 School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada.
  • 7 Department of Epidemiology, Gillings School of Global Public Health, University of North Carolina, Chapel Hill.
  • 8 Amsterdam University Medical Centers, Vrije Universiteit, Department of Epidemiology and Data Science, Amsterdam, the Netherlands.
  • 9 Department of Methodology, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands.
  • 10 public panel member, Toronto, Ontario, Canada.
  • 11 Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada.
  • 12 Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada.
  • 13 Department of Pharmacology and Therapeutics, University of Manitoba, Winnipeg, Canada.
  • 14 Children's Hospital Research Institute of Manitoba, Winnipeg, Canada.
  • 15 Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada.
  • 16 NHMRC Clinical Trials Centre, University of Sydney, Sydney, New South Wales, Australia.
  • 17 patient panel member, Ottawa, Ontario, Canada.
  • 18 MRC-NIHR Trials Methodology Research Partnership, Department of Health Data Science, University of Liverpool, Liverpool, England.
  • 19 Cundill Centre for Child and Youth Depression, Centre for Addiction and Mental Health, Toronto, Ontario, Canada.
  • 20 Department of Psychiatry, The Hospital for Sick Children, Toronto, Ontario, Canada.
  • 21 Bruyère Research Institute, Ottawa, Ontario, Canada.
  • 22 Ottawa Hospital Research Institute, Ottawa, Ontario, Canada.
  • 23 Department of Medicine, Feinberg School of Medicine, Northwestern University, Chicago, Illinois.
  • 24 Departments of Pediatrics and Psychiatry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Canada.
  • 25 Clinical Trials Ontario, Toronto, Canada.
  • 26 Department of Public Health Sciences, Queen's University, Kingston, Ontario, Canada.
  • 27 Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, Ontario, Canada.
  • 28 Division of Neonatology, The Hospital for Sick Children, Toronto, Ontario, Canada.
  • PMID: 36511921
  • DOI: 10.1001/jama.2022.21022

Importance: Clinicians, patients, and policy makers rely on published results from clinical trials to help make evidence-informed decisions. To critically evaluate and use trial results, readers require complete and transparent information regarding what was planned, done, and found. Specific and harmonized guidance as to what outcome-specific information should be reported in publications of clinical trials is needed to reduce deficient reporting practices that obscure issues with outcome selection, assessment, and analysis.

Objective: To develop harmonized, evidence- and consensus-based standards for reporting outcomes in clinical trial reports through integration with the Consolidated Standards of Reporting Trials (CONSORT) 2010 statement.

Evidence review: Using the Enhancing the Quality and Transparency of Health Research (EQUATOR) methodological framework, the CONSORT-Outcomes 2022 extension of the CONSORT 2010 statement was developed by (1) generation and evaluation of candidate outcome reporting items via consultation with experts and a scoping review of existing guidance for reporting trial outcomes (published within the 10 years prior to March 19, 2018) identified through expert solicitation, electronic database searches of MEDLINE and the Cochrane Methodology Register, gray literature searches, and reference list searches; (2) a 3-round international Delphi voting process (November 2018-February 2019) completed by 124 panelists from 22 countries to rate and identify additional items; and (3) an in-person consensus meeting (April 9-10, 2019) attended by 25 panelists to identify essential items for the reporting of outcomes in clinical trial reports.

Findings: The scoping review and consultation with experts identified 128 recommendations relevant to reporting outcomes in trial reports, the majority (83%) of which were not included in the CONSORT 2010 statement. All recommendations were consolidated into 64 items for Delphi voting; after the Delphi survey process, 30 items met criteria for further evaluation at the consensus meeting and possible inclusion in the CONSORT-Outcomes 2022 extension. The discussions during and after the consensus meeting yielded 17 items that elaborate on the CONSORT 2010 statement checklist items and are related to completely defining and justifying the trial outcomes, including how and when they were assessed (CONSORT 2010 statement checklist item 6a), defining and justifying the target difference between treatment groups during sample size calculations (CONSORT 2010 statement checklist item 7a), describing the statistical methods used to compare groups for the primary and secondary outcomes (CONSORT 2010 statement checklist item 12a), and describing the prespecified analyses and any outcome analyses not prespecified (CONSORT 2010 statement checklist item 18).

Conclusions and relevance: This CONSORT-Outcomes 2022 extension of the CONSORT 2010 statement provides 17 outcome-specific items that should be addressed in all published clinical trial reports and may help increase trial utility, replicability, and transparency and may minimize the risk of selective nonreporting of trial results.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Checklist / standards
  • Clinical Trials as Topic* / standards
  • Guidelines as Topic*
  • Research Design* / standards

Grants and funding

  • MR/S014357/1/MRC_/Medical Research Council/United Kingdom
  • CIHR/Canada

Aquila Corporation Wheelchair Cushion Systems

Case Reports Vs Clinical Studies

Uncategorized

This post discusses whether a case report's validity is in question when it is authored by an employee of the reporting company, Roho. It will answer the following questions regarding clinical studies and clinical evidence:

  • What is the difference between a clinical study and a case report?
  • Who can observe and document the results of a clinical study?
  • What circumstances would call the validity of a case report into question?

(The information below was taken from ClinicalTrials.gov, a service of the National Institutes of Health.)

Definition of case report and clinical study

In medicine, a case report is a detailed report of the symptoms, signs, diagnosis, treatment, and follow-up of an individual patient. Case reports may contain a demographic profile of the patient, but usually describe an unusual or novel occurrence.

The case report is written on one individual patient.

Clinical Study

A research study using human subjects to evaluate biomedical or health-related outcomes. Two types of clinical studies are Interventional Studies (or clinical trials) and Observational Studies. A clinical study involves multiple patients.

Observational Clinical Studies have a qualified investigator.

In an observational study, investigators assess health outcomes in groups of participants according to a research plan or protocol. Participants may receive interventions (which can include medical products such as drugs or devices) or procedures as part of their routine medical care, but participants are not assigned to specific interventions by the investigator (as in a clinical trial).

The Key Responsibilities of a Clinical Study Investigator:

  • Be qualified to practice medicine or psychiatry and meet the qualifications specified by applicable national regulatory requirement(s).
  • Be qualified by education, training, and experience to assume responsibility for the proper conduct of the study.
  • Be familiar with and compliant with the Good Clinical Practice (GCP) ICH E6 Guideline and applicable ethical and regulatory requirements prior to commencement of work on the study.
  • Provide evidence of his/her qualifications using the abbreviated TransCelerate Curriculum Vitae (CV) form.

The internal validity of a medical device case report is questioned if bias is present. One must consider bias in a case report authored by an employee of the company that makes the device described in the report.

These are the facts on the clinical studies published on the Roho website.

  • There are 15 of what Roho calls clinical studies on the Roho website. Based on the above definitions, these are not clinical studies but rather case reports.
  • Of these 15 case reports, only one pertains to a seat cushion improving a pressure ulcer.

This single case report was written by Cynthia Fleck, an employee of Crown Therapeutics, which is a division of Roho.

After selling 1 million cushions over 45 years in business, Roho has exactly one case report, and that report was written by a Roho employee, which calls its validity into question.



NIH Extramural Nexus


Further Refining Case Studies and FAQs about the NIH Definition of a Clinical Trial in Response to Your Questions

22 comments.

In August and September we released case studies and FAQs to help those of you doing human subjects research determine whether your research study meets the NIH definition of a clinical trial. Correctly making this determination is important to ensure you are following the initiatives we have been implementing to improve the transparency of clinical trials, including the need to pick clinical trial-specific funding opportunity announcements for due dates of January 25, 2018 and beyond.

We have made no changes to the NIH definition of a clinical trial, or how the definition is interpreted. What we have done is revise existing case studies and add a few new ones to help clarify how the definition of a clinical trial does or does not apply to: studies of delivery of standard clinical care, device studies, natural experiments, preliminary studies for study procedures, and studies that are primarily focused on the nature or quality of measurements as opposed to biomedical or behavioral outcomes.

As a reminder, the case studies illustrate how to apply the four questions researchers involved in human studies need to ask, and answer, to determine if their study meets the NIH definition of a clinical trial. These questions are:

  • Does the study involve human participants?
  • Are the participants prospectively assigned to an intervention?
  • Is the study designed to evaluate the effect of the intervention on the participants?
  • Is the effect that will be evaluated a health-related biomedical or behavioral outcome?

If the answer to all four questions is yes, then we consider your research a clinical trial.

Note that if the answers to the 4 questions are yes, your study meets the NIH definition of a clinical trial, even if…

  • You are studying healthy participants
  • Your study does not have a comparison group (e.g., placebo or control)
  • Your study is only designed to assess the pharmacokinetics, safety, and/or maximum tolerated dose of an investigational drug
  • Your study is utilizing a behavioral intervention

Studies intended solely to refine measures are not considered clinical trials.
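The determination described above is a simple conjunction: a study meets the NIH definition only if all four questions are answered yes. A minimal sketch of that decision logic, with illustrative (not official) function and field names:

```python
# Hypothetical sketch of the NIH four-question determination described above.
# The questions come from the post; the function and dictionary keys are
# illustrative assumptions, not an official NIH tool.

def meets_nih_clinical_trial_definition(study: dict) -> bool:
    """Return True only if all four questions are answered 'yes'."""
    answers = [
        study["involves_human_participants"],
        study["prospectively_assigned_to_intervention"],
        study["evaluates_effect_of_intervention"],
        study["effect_is_health_related_outcome"],
    ]
    return all(answers)

# Per the post, a single-arm pharmacokinetics study in healthy volunteers
# with no comparison group still answers yes to all four questions:
pk_study = {
    "involves_human_participants": True,
    "prospectively_assigned_to_intervention": True,
    "evaluates_effect_of_intervention": True,
    "effect_is_health_related_outcome": True,
}
print(meets_nih_clinical_trial_definition(pk_study))  # True

# A study intended solely to refine a measure answers question 3 'no':
measure_refinement = dict(pk_study, evaluates_effect_of_intervention=False)
print(meets_nih_clinical_trial_definition(measure_refinement))  # False
```

Note that the definition gives no partial credit: a single "no" (as in the measure-refinement example) takes the study out of the clinical trial category.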

The adjustments to the case studies include the following:

  • #7a, #8a, #24, #31a: Clarified whether it meets definition of intervention
  • #18c: Replaced with a more illustrative case study
  • #18d, 24, and 33: Clarified whether study was designed to assess the nature or quality of a measurement, as opposed to the effect of an intervention on a behavioral or biomedical outcome.
  • #18g: New case study about testing procedures
  • #36 a-b: New case studies about standard clinical care
  • #37: New case study about Phase 1 device studies
  • #38: New case study about natural experiments.
  • #39: Proposed case study about preliminary tests for study procedures.
  • New case studies specific to select NIH Institutes and Centers

We recognize that sometimes in an attempt to be helpful we end up providing a lot of material to look through. So to help you quickly find the case studies that are most relevant to your research we have added the ability to filter the case studies by keyword.

We also added two new FAQs on standard clinical care and Phase 1 devices.

Thank you for your continuing dialog on this topic. We look forward to continuing to work with you as we move towards higher levels of  trust and transparency with our clinical trials.

Update: Some of these case studies have been revised since this publication.

RELATED NEWS

“We have made no changes to the NIH definition of a clinical trial, or how the definition is interpreted.”

Heaven forbid the NIH responds to investigators’ concerns.

I appreciate the fact that the NIH is working to refine its case studies, which have been largely helpful. However, I would like to point out what I think is not so helpful in your revision of 18c, and compare it to 18a. In the former, the case study states that feedback to subjects of winning or losing in a gambling task would be an intervention qualifying as a clinical trial. But 18a, which could involve a working memory task, is not a clinical trial. However, what happens if, after each memory trial, a subject were to receive correct/incorrect feedback? This would affect some of the same brain areas as the win/lose feedback in a gambling task. The implication is that studying working memory with feedback is a clinical trial, but without feedback, it's not? Why is it that studying reward processes makes a study a clinical trial, but studying memory processes does not? It becomes very hard to discern the rules that you are using as to what constitutes an intervention, and what does not.

Steve Taylor

The distinction between 18A (an fMRI study that is not a clinical trial) and 18C (an fMRI study that is a clinical trial) is not at all clearcut. Please consider:

– In case 18A, fMRI is used (“Participants are administered … brain scans (e.g., fMRI)”), and the determination is made that the research project is not a clinical trial.

– However, in case 18C, fMRI is used (“The investigators will measure the comparative effects of ‘wins’ and ‘losses’ on brain function (fMRI in striatal regions) during the gambling task.”), and the determination is made that the research project is a clinical trial.

This naturally raises the question: Would the findings of case 18C apply if the (unspecified) fMRI task in case 18A involved ‘wins’ and ‘losses’? If so, then how shall fMRI studies be assessed? Do all tasks designed to probe (say) working memory and early visual processing fall under case 18A, while tasks designed to probe the reward system are to be considered clinical trials, per Case 18C?

How did we get here?

The real issue is that such nit-picking arguments should be unnecessary, given the clear divide between basic research and clinical trials.

In biology, the idea of a “manipulation” is fundamental to the definition of experimental (vs. observational) science. Basic experimental science is about seeing how manipulations affect measurements, while clinical trials are about seeing how interventions affect health-related or biomedical outcomes. Historically, there has been little confusion regarding these distinctions, but NIH’s published “Cases” avoid this historical understanding and instead manifest expansive readings of “intervention” and “health outcomes” so as to create a vast enlargement of the category of clinical trials, in a manner that diverges from NIH’s original statement of purpose, which said the 2014 definition “is not intended to expand the scope of the category of clinical trials.”

The (now twice-revised) “Cases” continue to disregard historical understanding, remain inconsistent with NIH’s 2014 statement of purpose, and create unnecessary and harmful confusion.

James J. Pekar, Ph.D. Professor of Radiology, Johns Hopkins University

Amen. I completely agree with Dr. Pekar’s statements. The clinical trials designation should be reserved for long-term interventions (i.e., in the case of a behavioral or drug-administration trial, it should encompass continuous, long-term administration, not a single discrete administration and measurement), at least in my opinion.

In my opinion, to say that anything involving human subjects except observational or epi studies is a clinical trial is a gross change to the definition of clinical trials as well as to accepted experimental procedure. Also, I don’t think all these changes will address the underlying problem of clinical trial results not being published and publicly available. NIH could accomplish that by just making final grant report abstracts available on the RePORTER system.

Jennifer Nasser, PhD, RD Drexel University

The comment by James Pekar above is completely on the mark.

Cases 18a, 18b, and 18c reveal the confusion about what is ‘modifying’ vs. ‘measuring’ behavior. The problem is that *every* experiment (say a brain scanning fMRI experiment) is designed to ‘modify’ the behavior in some sense.

18b says that if you administer standard cognitive tasks then it is not a clinical trial, because ‘The standard cognitive tasks and the fMRI are being performed to measure and describe brain activity, but not to modify it.’ But of course any task modifies the brain activity: that is the whole point of the task. You would not do a task if it had no effect on brain activity or behavior. If you administer a standard sustained attention task, you will induce activation in attention related areas of the brain, which is a modification from the normal state. You then measure and describe it.

18c says that if you assign subjects to ‘win’ and ‘loss’ conditions it is a clinical trial because you are modifying brain activity. Again, every task modifies brain activity. How is win/loss different from say attend/not attend, read words/read nonwords, see faces/see houses?

If you see faces, that includes activity in face-processing areas of brain. If you win, that induces activity in reward related areas. Why exactly is the latter a clinical trial and the former not (I assume)?

Vastly expanding the definition of clinical trial is clearly harmful and burdensome.

What are the requirements for the NIH definition for FDA clinical trials that involve a medical device. For example, evaluation of a rapid HIV test on prospectively collected samples from subjects, as compared with results from a reference test?

It is deeply disappointing to see the NIH callously disregard and dismiss such overwhelming concerns. Personally, I have yet to meet a single practicing researcher who agrees with the NIH’s approach or responses.

The APA and FABBS sent letters last July on this matter, yet their concerns were apparently not seriously considered. Was there a public, point-by-point response? This isn’t an issue of clarifying boundary conditions or edge-cases, but of wholesale, fundamental issues with the structure of this policy.

The NIH leadership needs to do some serious self-reflection about the negative impacts their conflation of basic, translational and clinical science. This goes far beyond administrative bloat and bureaucratic burden.

What will happen if NIH and/or reviewers disagree with an investigator’s assessment of whether their research is a clinical trial or not? That is, what if NIH and/or the reviewers think the application was submitted under the wrong PA?

Four comments: 1) How all this effort will solve the stated problem of non-publication is manifestly unclear. 2) When a policy requires complex explanations illustrated by debatable cases, there is a problem. 3) We all want to do the right thing, but there must be a better way to do it. The FTEs involved in all this effort could be better spent. 4) NIDDK case #3 is classified as not a clinical trial because “a brief bout of exercise” is not regarded as an intervention. What if it was “a bout of exercise” or “a bout of prolonged exercise” or “running 2 miles” or “an exercise program”? Which of these would be interventions? (In the previous NIDDK case #2, fasting is regarded as an intervention.)

Mike. Dude. Please for the love of all holy consult with some more human cognition researchers before continuing to refine these case studies in response to our collective outrage and confusion. As many of us pointed out last round, the case studies are inconsistent with one another. Now, #18c just makes no sense compared to the other examples under #18. There is no way to “assess brain activity under standard laboratory conditions” without having some kind of manipulation involved. The “wins vs. losses” in #18c is just one of the many ways that brain activity is assessed under standard laboratory conditions. These examples continue to fail to capture basic aspects of the experimental logic of fMRI (and related methods). I am still not sure how my next grant will be categorized and who at NIH I will have to argue with when trying to figure out if my “standard laboratory conditions” do or do not count as a “manipulation.”

While I appreciate the efforts that NIH has made to make the case studies more useful for applicants, I believe there are still improvements that can be made and that will minimize the questions that program officers and other staff will need to address under these new policies.

Since the revised (September 2017) version of the case studies was released, NIH has started to divide clinical trials into two categories: mechanistic and clinical efficacy. It would be very useful if the case studies could reflect this distinction, since some FOAs are specifically designated for one or the other, since it would add depth to the definitions and draw distinctions between the two.

More generally, I’d like to see the logic behind some of the new and revised case studies expanded on and clarified. Looking at some of the subcases in Case 18 (particularly 18c and 18e), it appears that the underlying rule for whether a study is a clinical trial is whether there is any brain activity that can be detected as part of the study procedure. This is an interesting standard, as any task (including passively being in the MRI) will result in brain activity, which is considered a biomedical outcome under this standard.

The logic here seems more curious in contrast with Case 24, where asking someone about the legibility of a text is not a clinical trial, but Case 26, which asks about comprehension, rather than legibility, is. Overall, the current standard – that brain activity of any sort, resulting from any experimental manipulation – requires compliance with clinical trial registration and reporting requirements seems to be used somewhat unevenly.

I do appreciate seeing Case 18g and Case 39, which provide guidance on pilot studies, which are essential for developing good experiments, and I appreciate that pilot studies are exempted from the need to register.

I want to end by saying that I believe that requiring registration of studies, as both a matter of diligence in government-funded research, and out of respect for our subjects, is essential. That said, I believe that NIH could achieve this goal with respect to what are now being labeled “mechanistic clinical trials” by moving away from the clinical trials language, and simply calling those experiments “mechanistic trials” to reflect their basic, non-clinical nature.

I’m particularly concerned here that, to people outside the research community, the phrase “clinical trial” has justifiable weight, and that by including these basic science studies under that umbrella, we are collectively opening ourselves to problems down the line. In general, drugs and treatments tested with clinical trials are considered, by the public, to be efficacious, and I am concerned that “mechanistic clinical trials” will be used to misrepresent basic research to the public, and potentially to Congress. I fully support NIH’s push for preregistration of basic science studies, under the designation of “mechanistic trials”, but I believe they should be separated from clinical trials, to both reflect the different goals of clinical and basic research, and to prevent confusion.

Benjamin Wolfe, PhD CSAIL, MIT

I appreciate NIH’s continued efforts to refine the case examples. I concur with the concerns that continue to be expressed about Case 18 and its variants. I believe that Case 9, which is a clinical trial simply because the experimenters assign participants to various sleep durations and measure changes in stress hormone levels, also merits further consideration. For Case 18 variants and Case 9, there seems to be no room for distinguishing between a biomedical consequence that is measured as part of a basic research design aiming to test a theory about mechanisms of action or involved systems, as compared to an outcome measure specifically targeted for change, due to its potential for therapeutic benefit.

Regarding the ever-broadening NIH clinical trial definition case studies: As a behavioral scientist, I’m worried that Cases 24 & 26 contradict each other, and neither should be a clinical trial. The substantive difference appears to be that #24 involves opinions (so not a trial) and #26 involves memory, which is suddenly a health-related outcome (yes, a trial). All memory, really? No opinions, really? What if #24 were about the readability of consent forms; would it suddenly be a clinical trial? What if #26 were about memory for announcements; would it no longer be a clinical trial? These examples confound modality (opinion/memory) with content (announcement/consent), so the examples are ambiguous as to which dimension is decisive.

Quoting from the webpage: Case 24 The study involves evaluating different types of printed announcements to identify the best designs for ensuring comprehension and retention of information in adults. Visitors to public libraries will be selected at random and asked to read one of the two announcements and then to take a short survey to elicit their perspectives about readability.

* Does the study involve human participants? Yes, the visitors to the library are human participants. * Are the participants prospectively assigned to an intervention? Yes, the participants will be prospectively assigned to an intervention, reading printed materials. * Is the study designed to evaluate the effect of the intervention on the participants? No, the study is designed to learn about participants’ opinions. It is not designed to evaluate the effect of the printed materials on the participants’ health-related biomedical or behavioral outcomes. There is no change to the participant as a result of providing an opinion about readability of the printed materials.

Case 26 The study involves randomizing individuals to different processes for informed consent. It is designed to assess the effectiveness of interactive and multimedia components in enhancing participants’ understanding of the study’s purpose and procedures.

* Does the study involve human participants? Yes, the individuals assigned to the different consent processes are human participants. * Are the participants prospectively assigned to an intervention? Yes, the participants are prospectively assigned to an intervention, different consent processes. * Is the study designed to evaluate the effect of the intervention on the participants? Yes, the study is designed to evaluate the effect of different informed consent processes on understanding the study. * Is the effect being evaluated a health-related biomedical or behavioral outcome? Yes, enhanced comprehension of information is a health-related behavioral outcome.

I agree with the other comments made above. It is deeply disappointing and disheartening that NIH is unable to address concerns from the scientific community in order to modify the clinical trial definition with logic and humility.

Regarding case 18 (in particular 18c), we’ve heard your constructive feedback regarding inadequate clarity, engaged in follow-up discussion with outside stakeholders involved in this field of research, and, with their help revised case #18(a-g).

Mike, can you please do a blog that addresses what constitutes a “study” and “study record” in the new PHS HS/CT Form. I’m an administrator, and I’m getting this question quite a bit from faculty. The instructions state to add a separate study record for each protocol involving human subjects, but what constitutes a protocol? Is it each separate hypothesis? Each aim may have several hypotheses, does a study record have to be completed for each one? Is it per aim or per hypothesis? I understand that every study is unique and may require different approaches. Or, the specific FOA may even dictate what to do! Can I direct researchers directly to NIH program staff for further clarification for specific studies? Any clarification would help. Thank you!

Mike addressed a similar question recently in a Nature Human Behaviour interview. In that interview he provides the following guidance: “Your question gets to the tension between lumping and splitting. We would encourage lumping. That is, to the extent possible, describe a series of experiments as a set of variants of a single design. Then one registration can cover all of them. The FORMS-E grant proposal will work in a similar way; you could describe several studies as variations of one basic experiment.”

Thus it is certainly possible, indeed likely, that one protocol will cover multiple hypotheses. It’s certainly conceivable that one study record could cover multiple aims. Think lumping, not splitting.

Do I understand correctly that a single-session study with repeated measures would NOT count as a clinical trial, because it does not meet criterion #2? Thanks.

As we are not familiar with your specific study design, we recommend you speak with program officials at your prospective funding NIH Institute or Center to determine if it would be considered a clinical trial.

Has NIH issued any updates regarding the interpretation of the NIH clinical trial definition and registration and reporting requirements since the omnibus budget passed last month?

“The 2018 Omnibus is referencing the September 21, 2016 (not 2017) NIH policy on registering and reporting clinical trial results in ClinicalTrials.gov. NIH is working with Congressional members and professional societies to address specific concerns about reporting basic science trials to ClinicalTrials.gov as we move forward. We hope to have more to share after these discussions.”

It would be helpful if all the cases answered all 4 questions, not just the cases that are clinical trials. Additionally, it would be helpful if the NEI could provide some example cases to add to this list. Thanks.



Open Access

Peer-reviewed

Research Article

Institutional dashboards on clinical trial transparency for University Medical Centers: A case study


Affiliation Berlin Institute of Health at Charité - Universitätsmedizin Berlin, QUEST Center for Responsible Research, Berlin, Germany


  • Delwen L. Franzen, 
  • Benjamin Gregory Carlisle, 
  • Maia Salholz-Hillel, 
  • Nico Riedel, 
  • Daniel Strech

PLOS

  • Published: March 21, 2023
  • https://doi.org/10.1371/journal.pmed.1004175

University Medical Centers (UMCs) must do their part for clinical trial transparency by fostering practices such as prospective registration, timely results reporting, and open access. However, research institutions are often unaware of their performance on these practices. Baseline assessments of these practices would highlight where there is room for change and empower UMCs to support improvement. We performed a status quo analysis of established clinical trial registration and reporting practices at German UMCs and developed a dashboard to communicate these baseline assessments with UMC leadership and the wider research community.

Methods and findings

We developed and applied a semiautomated approach to assess adherence to established transparency practices in a cohort of interventional trials and associated results publications. Trials were registered in ClinicalTrials.gov or the German Clinical Trials Register (DRKS), led by a German UMC, and reported as complete between 2009 and 2017. To assess adherence to transparency practices, we identified results publications associated with trials and applied automated methods at the level of registry data (e.g., prospective registration) and publications (e.g., open access). We also obtained summary results reporting rates of due trials registered in the EU Clinical Trials Register (EUCTR) and conducted at German UMCs from the EU Trials Tracker. We developed an interactive dashboard to display these results across all UMCs and at the level of single UMCs. Our study included and assessed 2,895 interventional trials led by 35 German UMCs. Across all UMCs, prospective registration increased from 33% ( n = 58/178) to 75% ( n = 144/193) for trials registered in ClinicalTrials.gov and from 0% ( n = 0/44) to 79% ( n = 19/24) for trials registered in DRKS over the period considered. Of trials with a results publication, 38% ( n = 714/1,895) reported the trial registration number in the publication abstract. In turn, 58% ( n = 861/1,493) of trials registered in ClinicalTrials.gov and 23% ( n = 111/474) of trials registered in DRKS linked the publication in the registration. In contrast to recent increases in summary results reporting of drug trials in the EUCTR, 8% ( n = 191/2,253) and 3% ( n = 20/642) of due trials registered in ClinicalTrials.gov and DRKS, respectively, had summary results in the registry. Across trial completion years, timely results reporting (within 2 years of trial completion) as a manuscript publication or as summary results was 41% ( n = 1,198/2,892).
The proportion of openly accessible trial publications steadily increased from 42% ( n = 16/38) to 74% ( n = 72/97) over the period considered. A limitation of this study is that some of the methods used to assess the transparency practices in this dashboard rely on registry data being accurate and up-to-date.

Conclusions

In this study, we observed that it is feasible to assess and inform individual UMCs on their performance on clinical trial transparency in a reproducible and publicly accessible way. Beyond helping institutions assess how they perform in relation to mandates or their institutional policy, the dashboard may inform interventions to increase the uptake of clinical trial transparency practices and serve to evaluate the impact of these interventions.

Author summary

Why was this study done?

  • Clinical trials are the foundation of evidence-based medicine and should follow established guidelines for transparency: Their results should be available, findable, and accessible regardless of the outcome.
  • Previous studies have shown that many clinical trials fall short of transparency guidelines, which distorts the medical evidence base, creates research waste, and undermines medical decision-making.
  • University Medical Centers (UMCs) play an important role in increasing clinical trial transparency but are often unaware of their performance on these practices, making it difficult to drive improvement.

What did the researchers do and find?

  • We developed a pipeline to evaluate clinical trials across several established practices for clinical trial transparency and applied it in a cohort of 2,895 clinical trials led by German UMCs.
  • We found that while some practices are gaining adherence (e.g., prospective registration in ClinicalTrials.gov increased from 33% to 75% over the period considered), there is much room for improvement (e.g., 41% of trials reported results within 2 years of trial completion).
  • We developed a dashboard to communicate these transparency assessments to UMCs and support their efforts to improve.

What do these findings mean?

  • Our study demonstrates the feasibility of developing a dashboard to communicate adherence to established practices for clinical trial transparency.
  • By highlighting areas for improvement, the dashboard provides actionable information to UMCs and empowers their efforts to improve.
  • The dashboard may inform interventions to increase clinical trial transparency and be scaled to other countries and stakeholders, such as funders or clinical trial registries.

Citation: Franzen DL, Carlisle BG, Salholz-Hillel M, Riedel N, Strech D (2023) Institutional dashboards on clinical trial transparency for University Medical Centers: A case study. PLoS Med 20(3): e1004175. https://doi.org/10.1371/journal.pmed.1004175

Academic Editor: Florian Naudet, University of Rennes 1, FRANCE

Received: April 28, 2022; Accepted: January 18, 2023; Published: March 21, 2023

Copyright: © 2023 Franzen et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: The authors confirm that all data underlying the findings are fully available without restriction. The dashboard is openly available at: https://quest-cttd.bihealth.org/ . Code to produce the dashboard is openly available in GitHub at: https://github.com/quest-bih/clinical-dashboard . Code to generate the dataset displayed in the dashboard is openly available in GitHub: https://github.com/maia-sh/intovalue-data/releases/tag/v1.1 . Data can be downloaded from the dashboard and are openly available on OSF at: https://osf.io/26dgx/ . Raw data obtained from trial registries are openly available on Zenodo at: https://doi.org/10.5281/zenodo.7590083 . Data for summary results reporting in the EUCTR are available via the EU Trials Tracker.

Funding: This work was funded by the Federal Ministry of Education and Research of Germany (BMBF 01PW18012, https://www.bmbf.de ). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: I have read the journal’s policy and the authors of this manuscript have the following competing interests: The authors are affiliated to the Charité – Universitätsmedizin Berlin, one of the institutions included in this evaluation and in the dashboard.

Abbreviations: CONSORT, Consolidated Standards of Reporting Trials; CTIMP, Clinical Trial of an Investigational Medicinal Product; DOI, Digital Object Identifier; DORA, Declaration on Research Assessment; DRKS, Deutsches Register Klinischer Studien (German Clinical Trials Register); EUCTR, EU Clinical Trials Register; FDAAA, Food and Drug Administration Amendments Act; ICMJE, International Committee of Medical Journal Editors; OA, Open Access; OSF, Open Science Framework; STROBE, Strengthening the Reporting of Observational Studies in Epidemiology; TRN, trial registration number; UMC, University Medical Center; WHO, World Health Organization

Introduction

Valid medical decision-making depends on an evidence base composed of clinical trials that were prospectively registered and reported in an unbiased and timely manner. The registration of clinical trials in publicly accessible registries informs clinicians, patients, and other relevant stakeholders about what trials are planned, in progress or completed, and aggregates key information relating to those trials. Trial registration thus reduces bias in our understanding of the existing medical evidence and disincentivizes outcome-switching and selective reporting [ 1 ]. For clinical trials to generate useful and generalizable medical knowledge gain, trial results should also be reported in a timely manner after trial completion per the World Health Organization (WHO) Joint Statement on Public Disclosure of Results from Clinical Trials [ 2 ]. Disclosure is a necessary but not sufficient component of transparency: Trial results should also be openly accessible and findable, in line with established guidelines [ 2 – 6 ]. However, several studies have shown that clinical trials are often not registered and reported according to these standards [ 7 – 11 ].

Audits of research practices can build understanding of the status quo, inform new policies, and evaluate the impact of interventions to support improvement. Examples include the European Commission’s Open Science monitor [ 12 ], the German Open Access monitor [ 13 ], the French Open Science Monitor in health [ 14 ], and institution-specific dashboards of select research practices [ 15 ]. Focusing on trial transparency, the EU Trials Tracker and the Food and Drug Administration Amendments Act 2007 (FDAAA) TrialsTracker [ 16 , 17 ] display up-to-date summary results reporting rates of public and private trial sponsors in a transparent and accessible way. The EU Trials Tracker served as a key resource for initiatives aiming to increase reporting rates of drug trials in the EU Clinical Trials Register (EUCTR) [ 18 , 19 ]. Based on the EU Trials Tracker, results reporting in the EUCTR increased from 50% in 2018 to 84% by late 2022.

Research institutions such as University Medical Centers (UMCs) can incentivize practices for research transparency through their reward and promotion systems [ 20 , 21 ] and by providing education, infrastructure, and services [ 22 , 23 ]. However, internal and external assessments of research conducted at UMCs rarely acknowledge these practices [ 24 , 25 ]. Rather, traditional indicators of research performance such as the number of clinical trials, the extent of third-party funding, and the impact factor of published papers emphasize quantity over quality, which can entrench problematic research practices [ 26 ]. Initiatives such as the Declaration on Research Assessment (DORA) and the Hong Kong Principles have called for a change in the way researchers are assessed, and for more recognition of behaviors that strengthen research integrity [ 20 , 27 ]. The establishment of the Coalition on Advancing Research Assessment (CoARA) and the 2022 Agreement on Reforming Research Assessment emphasize this shift towards rewarding responsible research practices to maximize research quality and impact [ 28 ]. In turn, the UNESCO Recommendation on Open Science adopted in 2021 affirmed the need to establish monitoring and evaluation mechanisms relating to open science [ 29 ]. Audits of transparency practices could empower UMCs to support their uptake by highlighting where there is room for improvement and where to allocate resources. Comparative assessments between institutions could also provide examples of successes and stimulate knowledge transfer.

Audits that are based on open and scalable methods facilitate repeated evaluation and uptake at other organizations. Such an evaluation of transparency practices at the level of clinical trials led by UMCs requires reproducible and efficient procedures for (a) sampling all clinical trials and associated results publications affiliated to UMCs and (b) measuring select registration and reporting practices. We previously established procedures for identifying all clinical trials associated with a specific UMC and their earliest results publications [ 9 , 11 ]. In turn, an increasing number of open-source publication and registry screening tools have been developed in the context of meta-research projects aiming to increase research transparency and reproducibility [ 10 , 30 – 32 ].

The objective of this study was to perform a status quo analysis of a set of established practices for clinical trial transparency at the level of UMCs and present these assessments in the form of an interactive dashboard to support efforts to improve performance. While the general approach of our study is applicable for UMCs worldwide, this study focused on German UMCs.

Methods

Producing a dashboard for clinical trial transparency required the development of a pipeline consisting of 3 main steps: first, the identification of registered clinical trials led by German UMCs; second, the evaluation of select registration and reporting practices, including (a) the partly automated and partly manual identification of earliest results publications of these trials and (b) the application of automated tools at the registry and publication level; third, the presentation of these baseline assessments in the form of an interactive dashboard. An overview of the dependence of these steps on automated versus manual approaches is provided in S1 Supplement . The development of the dashboard was iterative and did not have a prospective protocol. The methods to develop the underlying dataset of clinical trials and associated results publications, however, were preregistered in Open Science Framework (OSF) for trials completed 2009 to 2013 [ 33 ] and 2014 to 2017 [ 34 ].

Data sources and inclusion and exclusion criteria

The data displayed in the dashboard relate exclusively to registered (either prospectively or retrospectively) clinical trials obtained from 3 data sources with the following inclusion and exclusion criteria:

  • The IntoValue cohort of registered clinical trials and associated results [ 35 ]. This dataset consists of interventional clinical trials registered in ClinicalTrials.gov or DRKS, considered as complete between 2009 and 2017 per the registry, and led by a German UMC (i.e., led either as sponsor, responsible party, or as host of the principal investigator). Trials were searched for 38 German UMCs based on their inclusion as members on the website of the association of medical faculties of German universities [ 36 ] at the time of data collection. In line with WHO and International Committee of Medical Journal Editors (ICMJE) definitions [ 4 , 37 ], trials in this cohort include all interventional studies and are not limited to Clinical Trials of an Investigational Medicinal Product (CTIMP) regulated by the EU’s Clinical Trials Regulation or Germany’s drug or medical device laws. The dataset includes data from partly automated and partly manual searches to identify the earliest reported results associated with these trials (as summary results in the registry and as publication). The methods for sampling UMC-specific sets of registered clinical trials and tracking associated results are described in detail elsewhere [ 9 , 11 ]. Briefly, we used automated methods to search registries for clinical trials associated with German UMCs and manually validated the affiliations of all trials. We deduplicated trials in this cohort that were cross-registered in ClinicalTrials.gov and DRKS (see more information in S2 Supplement ). Results publications associated with these trials were identified by means of a manual search across several search engines. This was complemented by automated methods to identify linked publications in the registry [ 10 ]. 
To reflect the most up-to-date status of trials, we downloaded updated registry data for the trials in this cohort on 1 November 2022 and reapplied the original IntoValue exclusion criteria: study completion date before 2009 or after 2017, not considered as complete based on study status, and not interventional. More detailed information on the inclusion and exclusion criteria can be found in S2 Supplement .
  • For assessing prospective registration in ClinicalTrials.gov , we used a more recent cohort of interventional trials registered in ClinicalTrials.gov , started between 2006 and 2018, led by a German UMC, and considered as complete per study status in the registry. We downloaded updated registry data for the trials in this cohort on 1 November 2022 and reapplied the same exclusion criteria as above except for completion date ( S2 Supplement ).
  • For assessing results reporting in the EUCTR, we retrieved data from the EU Trials Tracker on 4 November 2022 [ 16 ]. We found a sponsor name for 34 of the UMCs included in this study as of August 2021 (sponsor names in the EU Trials Tracker are subject to change). If more than one corresponding sponsor name was found for a given UMC (Bochum, Giessen, Heidelberg, Kiel, Marburg, and Tübingen), we selected the sponsor with the most trials. More detailed information can be found in S3 Supplement .

Analysis of registration and reporting practices

The dashboard displays the performance of UMCs on 7 recommended transparency practices for trial registration and reporting. In this study, we focused on adherence to ethical principles and reporting guidelines that apply to all trials. Compliance with a legal regulation was only assessed for summary results reporting in the EUCTR. For an overview of these practices, relevant guidelines and laws, the sample considered, and the measured outcome, see Fig 1 (sources in S4 Supplement ) and Table 1 . The data for these metrics were obtained through a combination of automated approaches and manual searches, several of which have been described previously [ 8 – 11 ]. In the following, we outline the methods used to generate the data for each metric. More detailed information can be found in the Methods page of the dashboard and in S5 Supplement .


Fig 1. Relevant guidelines and/or laws are provided for each practice (as of November 2022). A list of references can be found in S4 Supplement . An adaptation of this overview is included in the “Why these practices?” page of the dashboard. *DFG: According to the DFG guidelines at the time of writing, summary results should be posted in the registry at the latest 2 years after trial completion, or earlier if required by applicable legal regulations. BMBF, Bundesministerium für Bildung und Forschung; CIOMS, Council for International Organizations of Medical Sciences; CONSORT, Consolidated Standards of Reporting Trials; CTIMP, Clinical Trial of an Investigational Medicinal Product; DFG, Deutsche Forschungsgemeinschaft; ICMJE, International Committee of Medical Journal Editors; ICTRP, International Clinical Trials Registry Platform; WHO, World Health Organization; WMA, World Medical Association.

https://doi.org/10.1371/journal.pmed.1004175.g001


https://doi.org/10.1371/journal.pmed.1004175.t001

Prospective registration.

Raw registry data downloaded from ClinicalTrials.gov and DRKS were further processed to determine the registration status of trials. We defined a trial as prospectively registered if it was registered in the same month as, or an earlier month than, the trial start date.
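The month-level comparison described above can be sketched as follows (a minimal illustration; the function name and exact date handling are ours, not taken from the published pipeline):

```python
from datetime import date

def is_prospective(registration_date: date, start_date: date) -> bool:
    """A trial counts as prospectively registered if it was registered
    in the same calendar month as the trial start, or earlier.
    Day-of-month is deliberately ignored (month-level comparison)."""
    return (registration_date.year, registration_date.month) <= (start_date.year, start_date.month)

# Registered one month before the trial started: prospective.
print(is_prospective(date(2015, 3, 20), date(2015, 4, 1)))  # True
# Registered two months after the trial started: retrospective.
print(is_prospective(date(2015, 6, 2), date(2015, 4, 1)))   # False
```

Comparing (year, month) tuples sidesteps day-level noise in registry start dates, which are often only precise to the month.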

Bidirectional links between registry entries and associated results publications.

We extracted links to publications from the registry data and obtained the full text of publications. We then applied regular expressions to detect publication identifiers in registrations, and trial registration numbers (TRNs) in publications. The application of these methods on the IntoValue cohort was reported previously [ 10 ].
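A minimal sketch of TRN detection with a regular expression, assuming the standard registry identifier formats (NCT followed by 8 digits for ClinicalTrials.gov, DRKS followed by 8 digits for DRKS); the patterns actually used in the published pipeline may be broader:

```python
import re

# Registry-specific TRN formats: "NCT" + 8 digits (ClinicalTrials.gov),
# "DRKS" + 8 digits (German Clinical Trials Register).
TRN_PATTERN = re.compile(r"\b(NCT\d{8}|DRKS\d{8})\b")

abstract = ("This randomised trial (ClinicalTrials.gov NCT01234567; "
            "also registered as DRKS00001234) evaluated ...")
print(TRN_PATTERN.findall(abstract))  # ['NCT01234567', 'DRKS00001234']
```

The same idea runs in the other direction as well: publication identifiers (e.g., DOIs or PubMed IDs) can be matched against the free-text reference fields of a registration.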

Summary results reporting in the registry.

For ClinicalTrials.gov , we extracted the relevant information from the structured summary results field. For DRKS, we detected summary results based on the presence of keywords (e.g., Ergebnisbericht or Abschlussbericht) in the reference title. The summary results date in DRKS was extracted manually from the registry’s change history. We obtained summary results reporting rates in the EUCTR from the EU Trials Tracker. We retrieved historical data (percent reported, total number of due trials, and total number of trials that reported results) from the associated code repository [ 16 ].
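The keyword-based detection for DRKS might look like the following sketch (function name and exact keyword handling are illustrative, not the published implementation):

```python
# Keywords signalling a summary-results report in a DRKS reference title
# ("Ergebnisbericht" = results report, "Abschlussbericht" = final report).
RESULT_KEYWORDS = ("ergebnisbericht", "abschlussbericht")

def has_summary_results(reference_titles):
    """True if any reference title attached to a DRKS registration
    contains a summary-results keyword (case-insensitive)."""
    return any(kw in title.lower()
               for title in reference_titles
               for kw in RESULT_KEYWORDS)

print(has_summary_results(["Abschlussbericht der Studie XYZ"]))  # True
print(has_summary_results(["Study protocol (version 2)"]))       # False
```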

Reporting as a manuscript publication.

The earliest publication found for each trial and its publication date was derived from the original IntoValue dataset [ 35 ]. Dissertations were excluded from publication-based metrics.

Open Access (OA) status.

To determine the OA status of trial results publications, we queried the Unpaywall database via its API on 1 November 2022 using UnpaywallR and assigned one of the following statuses: gold (openly available in an OA journal), hybrid (openly available in a subscription-based journal), green (openly available in a repository), bronze (openly available on the journal page but without a clear open license), or closed. As publications can have several OA versions, we applied a hierarchy such that only one OA status was assigned to each publication, in descending order: gold, hybrid, green, bronze, and closed.
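The descending hierarchy can be illustrated as follows (a sketch; in the actual pipeline the candidate statuses come from Unpaywall's records for each publication, and the function name is ours):

```python
# Descending precedence used to collapse multiple OA versions of a
# publication into a single status.
OA_HIERARCHY = ["gold", "hybrid", "green", "bronze", "closed"]

def best_oa_status(statuses):
    """Return the highest-ranked OA status among the versions found.
    Statuses outside OA_HIERARCHY would raise a ValueError."""
    return min(statuses, key=OA_HIERARCHY.index, default="closed")

# A publication with both a repository (green) and a journal (hybrid)
# version is counted as hybrid.
print(best_oa_status(["green", "hybrid"]))  # 'hybrid'
# No OA version found at all: closed.
print(best_oa_status([]))                   # 'closed'
```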

Interactive dashboard

We developed an interactive dashboard to present the outcome of these assessments at the institutional level in an accessible way to the UMC leadership and the wider research community. The dashboard was developed with the Shiny R package (version 1.6.0) [ 38 ] based on an initial version developed by NR for the Charité –Universitätsmedizin Berlin [ 15 ]. The dashboard was shaped by interviews with UMC leadership, support staff, funders, and experts in responsible research who provided feedback on a prototype version [ 39 ]. This feedback led to the inclusion of several features to facilitate the interpretation of the data and contextualize the assessed transparency practices. The code underlying the dashboard developed in this study is openly available in GitHub under an AGPL license ( https://github.com/quest-bih/clinical-dashboard ) and may be adapted for further use.

We generated descriptive statistics on the characteristics of the trials and the transparency practices, all of which are displayed in the dashboard. We report proportions across UMCs (e.g., “Start” page) and per UMC broken down by start year (prospective registration only), completion year, publication year (open access), and registry (publication link in the registry, summary results reporting). We did not test specific hypotheses.

Software, code, and data

Data processing was performed in R (version 4.0.5) [ 40 ] and Python 3.9 (Python Software Foundation, Wilmington, Delaware, USA). With the exception of summary results reporting in the EUCTR (data available via the EU Trials Tracker ), all the data processing steps involved in generating the dataset displayed in this dashboard are openly available in GitHub: https://github.com/maia-sh/intovalue-data/releases/tag/v1.1 . The data displayed in the dashboard are available in OSF [ 41 ] and in the dashboard Datasets page. Raw data obtained from trial registries are openly available in Zenodo [ 42 ]. This study is reported as per the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guideline for cross-sectional studies ( S6 Supplement ).

Results

Characteristics of trials

The IntoValue dataset that this study is based on includes interventional trials registered in ClinicalTrials.gov or DRKS, led by a German UMC, and reported as complete between 2009 and 2017 ( n = 3,113). Trials were found for 35 out of 38 UMCs searched. After downloading updated registry data for trials in this cohort, we excluded 91 trials based on our exclusion criteria (study completion date before 2009 or after 2017, n = 73; not considered as complete per study status, n = 16; not interventional, n = 2). After removal of duplicates, this led to 2,895 trials that served as the basis for most metrics ( Fig 2 ). For prospective registration in ClinicalTrials.gov , we used a more recent cohort of interventional trials registered in ClinicalTrials.gov , led by a German UMC, started between 2006 and 2018, and considered as complete per study status in the registry ( n = 4,058). After applying our inclusion criteria, this sample included 3,618 trials. S7 Supplement provides an overview of the characteristics of included trials stratified by registry. S8 Supplement provides flow diagrams of the trial and publication screening for each metric.


Fig 2. Flowchart of the trial screening (IntoValue). The box with the thicker contour highlights the starting point of the trial screening for other registry-based metrics (see Flowcharts 1–3 in S8 Supplement ). CT.gov, ClinicalTrials.gov ; DRKS, German Clinical Trials Register; IV, IntoValue; UMC, University Medical Center.

https://doi.org/10.1371/journal.pmed.1004175.g002

Evaluation of trial registration and reporting practices

We developed an interactive dashboard ( https://quest-cttd.bihealth.org /) to display the results of the evaluation of trial registration and reporting across UMCs. In the following, we highlight some of these results. More extensive evaluations of some of these practices are reported in separate publications, such as results reporting of trials [ 9 , 11 ] and links between trial registration and results publications [ 10 ].

Trial registration

Prospective registration.

The proportion of trials led by German UMCs that were prospectively registered increased in both ClinicalTrials.gov and DRKS over the period considered. Of 178 trials registered in ClinicalTrials.gov and started in 2006, 58 (33%, 95% confidence interval 26% to 40%) were registered prospectively. A little more than a decade later, 144 of 193 (75%, 95% confidence interval 68% to 80%) trials started in 2018 were registered prospectively. Trials registered in DRKS followed a similar trend: While none of the 44 (0%, 95% confidence interval 0% to 10%) trials started between 2006 and 2008 were prospectively registered, this increased to 19 of 24 (79%, 95% confidence interval 57% to 92%) for trials started in 2017 ( S9 Supplement ). Among clinical trials registered in ClinicalTrials.gov , per-UMC rates of prospective registration ranged from 30% ( n = 17/56) to 68% ( n = 127/186) with a median of 55% and a standard deviation of 8%. Per-UMC rates of prospective registration in DRKS ranged from 0% ( n = 0/1) to 75% ( n = 15/20) with a median of 44% and a standard deviation of 15%.
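The paper does not state which interval method was used for these proportions; as one hedged illustration, the Wilson score interval reproduces the reported 26% to 40% for the 2006 ClinicalTrials.gov cohort (58/178):

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# 58 of 178 trials started in 2006 were prospectively registered (33%).
lo, hi = wilson_ci(58, 178)
print(f"{lo:.0%} to {hi:.0%}")  # prints: 26% to 40%
```

Unlike the simple Wald interval, the Wilson interval behaves sensibly near 0% and 100%, which matters for the extreme per-UMC rates reported here.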

Reporting of a TRN in publications.

Of the 1,895 registered trials with a publication indexed in PubMed, 714 (38%, 95% confidence interval 35% to 40%) reported a TRN in the publication abstract. In turn, 1,136 of 1,893 registered trials for which the full text was available reported a TRN in the publication full text (60%, 95% confidence interval 58% to 62%) ( S9 Supplement ). Only 476 of 1,893 (25%, 95% confidence interval 23% to 27%) of trials reported a TRN in both the abstract and full text of the publication as per the ICMJE and Consolidated Standards of Reporting Trials (CONSORT) guidelines. The per-UMC rate at which clinical trial publications reported a TRN in the abstract ranged from 17% ( n = 13/75) to 56% ( n = 23/41) with a median of 38% and a standard deviation of 8%. The per-UMC rate at which clinical trial publications reported a TRN in the full text was higher, ranging from 43% ( n = 41/95) to 76% ( n = 32/42) with a median of 61% and a standard deviation of 7%.

Publication links in the registry.

Of 1,493 trials registered in ClinicalTrials.gov with a publication, 861 (58%, 95% confidence interval 55% to 60%) had a link to the publication in the registration. In turn, only 111 of 474 trials registered in DRKS with a publication (23%, 95% confidence interval 20% to 28%) had a link to the publication in the registration. Among trials registered in ClinicalTrials.gov with a publication, the per-UMC rate of publication links in the registration ranged from 32% ( n = 12/37) to 88% ( n = 28/32) with a median of 56% and a standard deviation of 12%. Among trials registered in DRKS with a publication, the per-UMC rate of publication links in the registration ranged from 0% ( n = 0/7) to 45% ( n = 5/11) with a median of 23% and a standard deviation of 13%.

Trial reporting

Summary results reporting.

We first assessed how many of the trials registered in ClinicalTrials.gov or DRKS had summary results in the registry. The cumulative proportion of trials that reported summary results has stagnated at low levels between 2009 and 2017. Only 191 of all 2,253 (8%, 95% confidence interval 7% to 10%) trials registered in ClinicalTrials.gov , and 20 of all 642 (3%, 95% confidence interval 2% to 5%) trials registered in DRKS had summary results in the registry ( S9 Supplement ). Per-UMC summary results reporting rates for all trials ranged between 0% ( n = 0/42) and 32% ( n = 8/25) (median of 7% and a standard deviation of 7%) for ClinicalTrials.gov , and between 0% ( n = 0/23) and 50% ( n = 7/14) (median of 0% and a standard deviation of 9%) for DRKS. In contrast, reporting of summary results in the EUCTR was higher and increased over time: In almost 2 years, results reporting for due trials almost doubled from 41% ( n = 223/541, 95% confidence interval 37% to 46%) in December 2020 to 79% ( n = 647/813, 95% confidence interval 77% to 82%) in October 2022 (EU Trials Tracker) ( S9 Supplement ). At the time of data collection (November 2022), per-UMC summary results reporting rates in the EUCTR ranged between 0% ( n = 0/1) and 100% ( n = 14/14) across all included UMCs with a median of 82% and a standard deviation of 30%.

Timely reporting of results (2- and 5-year reporting rates).

Next, we assessed how many trials registered in ClinicalTrials.gov or DRKS reported results in a timely manner. Reporting guidelines and German research funders have called on clinical trials to report (a) summary results in the registry within 12 and 24 months of trial completion and (b) results in a manuscript publication within 24 months of trial completion [ 2 , 43 – 45 ]. We therefore considered 2 years as timely reporting for both reporting routes. Of 2,892 trials registered in ClinicalTrials.gov or DRKS with a 2-year follow-up period for reporting results as either summary results or a manuscript publication, 1,198 (41%, 95% confidence interval 40% to 43%) had done so within 2 years of trial completion.
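A minimal sketch of the 2-year timeliness check (function name and date arithmetic are illustrative, not the published implementation):

```python
from datetime import date
from typing import Optional

def reported_within(completion: date, reported: Optional[date], years: int = 2) -> bool:
    """True if results (summary results or a manuscript publication)
    appeared within `years` of trial completion; trials with no
    reported results count as not timely.
    (Leap-day completion dates would need special handling.)"""
    if reported is None:
        return False
    deadline = completion.replace(year=completion.year + years)
    return reported <= deadline

# Results published about 20 months after completion: timely.
print(reported_within(date(2015, 6, 30), date(2017, 3, 1)))  # True
# No results found for the trial: not timely.
print(reported_within(date(2015, 6, 30), None))              # False
```

Applied per trial, the earliest of the summary-results date and the publication date would be passed as `reported`.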

While the 5-year reporting rate was unsurprisingly higher than the 2-year rate, 505 of 1,619 trials (31%, 95% confidence interval 29% to 34%) registered in ClinicalTrials.gov or DRKS with a 5-year follow-up between trial completion and the manual publication search had still not reported results as a journal publication within 5 years of trial completion. Publication in a journal was the dominant route of reporting results, with summary results reporting rates below 10% across all completion years and follow-up periods. Per-UMC reporting rates as a manuscript publication ranged between 15% ( n = 7/46) and 58% ( n = 19/33) (2-year rate, median 39%, standard deviation 9%) and between 50% ( n = 24/48) and 87% ( n = 13/15) (5-year rate, median 70%, standard deviation 8%). Per-UMC reporting rates as summary results ranged between 0% ( n = 0/76) and 14% ( n = 6/43) (2-year rate, median 4%, standard deviation 4%) and between 0% ( n = 0/72) and 21% ( n = 9/42) (5-year rate, median 5%, standard deviation 5%).

Open Access (OA).

The proportion of trial results publications that were openly accessible (gold, hybrid, green, or bronze) increased from 42% in 2010 ( n = 16/38, 95% confidence interval 27% to 59%) to 74% in 2020 ( n = 72/97, 95% confidence interval 64% to 82%) ( S9 Supplement ). Across all publication years, 891 of 1,920 (46%, 95% confidence interval 44% to 49%) trial publications were neither openly accessible via a journal nor via an OA repository based on Unpaywall. Per-UMC rates of trial results publications that were OA ranged from 26% ( n = 10/38) to 72% ( n = 23/32) with a median of 55% and a standard deviation of 10%.

The key outcome of this paper is an interactive and openly accessible dashboard to visualize adherence to the aforementioned best practices for trial registration and reporting across German UMCs: https://quest-cttd.bihealth.org/ . The dashboard displays the data in 3 ways: (a) assessment across all UMCs (national dashboard; see a screenshot in Fig 3 ); (b) comparative assessment between UMCs; and (c) UMC-specific assessment (see a screenshot for one UMC in S10 Supplement ).


Fig 3. Assessment of 7 registration and reporting practices across all included German UMCs (8 November 2022).

https://doi.org/10.1371/journal.pmed.1004175.g003

To allow for better interpretation of the data displayed in the dashboard, absolute numbers are displayed in all plots as mouse-overs. A description of the methods and limitations of each metric is also provided next to each plot, with more detailed information on the Methods page. An FAQ page addresses general considerations raised in interviews with relevant stakeholders [ 39 ]. These interviews highlighted the importance of an overall narrative justifying the choice of metrics included. We therefore designed an infographic of relevant laws and guidelines to contextualize the clinical trial transparency metrics included in the dashboard (adapted from Fig 1 ).

Concerns about delayed and incomplete results reporting in clinical research and other sources of research waste have triggered debate on incentivizing individual researchers and UMCs to adopt more responsible research practices [ 20 , 22 , 23 ]. Here, we introduced the methods and results underlying a dashboard for clinical trial transparency, which provides actionable information on UMCs’ performance in relation to established registration and reporting practices and thereby empowers their efforts to support improvement. This dashboard approach for clinical trial transparency at the level of individual UMCs serves to (a) inform institutions about their performance and set this in relation to national and international transparency guidelines and funder mandates, (b) highlight where there is room for improvement, (c) trigger discussions across relevant stakeholder groups on responsible research practices and their role in assessing research performance, (d) point to success stories and facilitate knowledge sharing between UMCs, and (e) inform the development and evaluation of interventions that aim to increase trial transparency.

Trends in trial transparency

The dashboard displays progress over time and allows the data to be explored in different ways. While the upward trend for several practices (e.g., prospective registration, OA) is encouraging, there is much room for improvement with respect to established guidelines for clinical trial transparency. For example, less than half (45%) of trials registered in ClinicalTrials.gov or DRKS and completed in 2017 reported results in a manuscript publication within 2 years of trial completion as per WHO and funder recommendations [ 2 , 43 , 44 ]. We observed a striking difference in the cumulative proportion of summary results reporting of drug trials registered in the EUCTR compared with trials registered in ClinicalTrials.gov and DRKS. The uptake of summary results reporting in the EUCTR likely reflects the combined impact of the EU legal requirement for drug trials to report summary results within 12 months [ 45 ], the launch of the EU Trials Tracker and subsequent academic initiatives to increase reporting rates [ 8 , 18 ], as well as media attention [ 46 ]. This suggests that audits of compliance with respect to established guidelines and further awareness raising may also have the potential to increase results reporting rates of other types of trials.

Actionable areas for stakeholders

Some of the practices included in this dashboard can still be addressed retroactively, such as linking publications to the trial registration (realized for 49% of trials with a publication). These constitute actionable areas for improvement that UMCs can contribute to by providing education, support, and incentives. One important way to incentivize UMCs in this regard is to make responsible research practices part of internal and external quality assessment procedures. Other stakeholders such as funders, journals and publishers, registries, and bibliographic databases should complement these activities by reviewing compliance with their policies as well as applicable guidelines and/or laws. Salholz-Hillel and colleagues, for example, outlined specific recommendations for each stakeholder to improve links between trial registrations and publications [ 10 ]. UMCs and their core facilities for clinical research can also use the data linked to the dashboard to inform principal investigators about the transparency of their specific trials. We are currently finalizing such a “report card” approach at the Charité - Universitätsmedizin Berlin [ 47 ].

Scalability beyond German UMCs

The datasets and methods used in this study can be scaled: this has been demonstrated in another European country (Poland) [ 48 ] and is currently underway in California, USA [ 49 ]. While the generation of the underlying dataset of clinical trials and associated results publications involves manual checks (approximately 10 person-hours per 100 trials), the assessment of transparency practices is largely automated. Institutions that already hold an in-house cohort of clinical trial registry numbers and persistent identifiers (e.g., Digital Object Identifiers (DOIs)) for matched journal publications could achieve results more quickly. The code to create the dashboard is openly available and can be adapted to other cohorts.

Stakeholder and community engagement

The uptake of this dashboard approach by UMCs and other stakeholders depends on their respective attitudes and readiness. We previously solicited stakeholders’ views on an institutional dashboard with metrics for responsible research. While interviewees considered the dashboard helpful to see where an institution stands and to initiate change, some pointed to the challenge that making such a dashboard public might risk incorrect interpretation of the metrics and harm UMCs’ reputation [ 39 ]. While similar challenges with interpretation and reputation apply to current metrics for research assessment (e.g., impact factors and third-party funding), this stakeholder feedback demonstrates the need for community engagement when introducing novel strategies for research assessment. In this regard, a Delphi study was performed to reach consensus on a core outcome set of open science practices within biomedicine to support audits at the institutional level [ 50 ]. A detailed comparative assessment of existing monitoring initiatives and lessons learned could further support these efforts.

Updates and further development of the dashboard

We are planning regular updates of the registry data for trials already in the dashboard, as well as the inclusion of more recent cohorts of trials with at least 2 years follow-up (e.g., trials completed 2018 to 2021 assessed in 2023). Besides these updates, further transparency practices may be integrated into the dashboard in the future, e.g., dissemination of results as preprints, the use of self-archiving to broaden access to results [ 51 ], adherence to reporting guidelines [ 3 ], or data sharing [ 52 ]. Beyond transparency, other potential metrics could reflect the number of discontinued trials [ 53 ] or the proportion of trials that inform clinical practice [ 54 ]. The development of such metrics should acknowledge the availability of standards and infrastructure pertaining to the underlying practices [ 23 ] and differences between study types and disciplines [ 27 ]. Future versions of the dashboard may also display additional subpopulation comparisons, such as different clinical trial registries or UMC particularities [ 55 ].

Limitations

A limitation of this study is that inaccurate or outdated registry data (e.g., incorrect completion dates or trial status) may have impacted the assessment of transparency practices described in this study. To mitigate this limitation, we updated the registry data with the most recent data we could obtain. The update-related changes suggest no systematic bias in the comparison across UMCs. Another limitation is that the trial dataset may contain more cross-registrations than we identified. For the aforementioned “report card” project, we manually verified 168 trials and found only 2 missed cross-registrations (1%). We therefore believe that missed cross-registrations represent only a small portion of our sample. Moreover, the assessment of each practice in the dashboard applies to a specific subset of trials or publications and comes with unique limitations, largely resulting from challenges associated with manual or automated methods (outlined in more detail in S5 Supplement ). More generally, the dashboard focuses on interventional trials registered in ClinicalTrials.gov or DRKS and does not display how German UMC drug trials only registered in the EUCTR perform on established transparency practices (except for summary results reporting in the registry). We are considering including all drug trials in the EUCTR conducted by German UMCs in future developments of the dashboard.

UMCs play an important role in fostering clinical trial transparency but face challenges doing so in the absence of baseline assessments of current practice. We assessed adherence to established practices for clinical trial registration and reporting at German UMCs and communicated the results in the form of an interactive dashboard. We observed room for improvement across all assessed practices, some of which can still be addressed retroactively. The dashboard provides actionable information to drive improvement, facilitates knowledge sharing between UMCs, and informs the development of interventions to increase research transparency.

Supporting information

S1 Supplement. Use of automated vs. manual approaches across methods.

https://doi.org/10.1371/journal.pmed.1004175.s001

S2 Supplement. Inclusion and exclusion criteria.

https://doi.org/10.1371/journal.pmed.1004175.s002

S3 Supplement. Selected sponsor names in the EU Trials Tracker.

https://doi.org/10.1371/journal.pmed.1004175.s003

S4 Supplement. Sources for Fig 1 .

https://doi.org/10.1371/journal.pmed.1004175.s004

S5 Supplement. Detailed methods and limitations of registration and reporting metrics.

https://doi.org/10.1371/journal.pmed.1004175.s005

S6 Supplement. STROBE checklist for cross-sectional studies.

https://doi.org/10.1371/journal.pmed.1004175.s006

S7 Supplement. Characteristics of included trials.

https://doi.org/10.1371/journal.pmed.1004175.s007

S8 Supplement. Flow diagrams of the trial and publication screening.

https://doi.org/10.1371/journal.pmed.1004175.s008

S9 Supplement. Screenshots of the “Start” page of the dashboard.

https://doi.org/10.1371/journal.pmed.1004175.s009

S10 Supplement. Screenshot of the “One UMC” page of the dashboard.

https://doi.org/10.1371/journal.pmed.1004175.s010

Acknowledgments

We would like to acknowledge Tamarinde Haven and Martin Holst for their valuable input that shaped the dashboard. We acknowledge financial support from the Open Access Publication Fund of Charité – Universitätsmedizin Berlin and the German Research Foundation (DFG).

  • 2. World Health Organization. Joint statement on public disclosure of results from clinical trials. World Health Organization [Internet]. 2017 [cited 2021 Jun 15]. Available from: https://www.who.int/news/item/18-05-2017-joint-statement-on-registration .
  • 4. International Committee of Medical Journal Editors (ICMJE). Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. ICMJE [Internet]. 2021 [cited 2021 Dec 21]. Available from: http://www.icmje.org/icmje-recommendations.pdf .
  • 12. European Commission. Open science monitor. European Commission | Research and innovation [Internet]. [cited 2021 Nov 12]. Available from: https://ec.europa.eu/info/research-and-innovation/strategy/strategy-2020-2024/our-digital-future/open-science/open-science-monitor_en .
  • 13. Open Access Monitor. Open access monitor [Internet]. [cited 2021 Mar 22]. Available from: https://open-access-monitor.de/ .
  • 14. French Open Science Monitor in health. French Open Science Monitor [Internet]. [cited 2022 Feb 4]. Available from: https://frenchopensciencemonitor.esr.gouv.fr/health .
  • 15. BIH-QUEST Center for Responsible Research at Charité - Universitätsmedizin Berlin. Charité Dashboard on Responsible Research. Charité Metrics Dashboard [Internet]. 2021. Available from: https://quest-dashboard.charite.de .
  • 16. EBM DataLab. EU Trials Tracker. 2018 [cited 2021 Jul 26]. Available from: https://eu.trialstracker.net .
  • 17. EBM DataLab. FDAAA TrialsTracker. 2018. Available from: https://fdaaa.trialstracker.net/ .
  • 19. Cochrane Sweden. Cochrane Sweden collaborates on trial transparency report. Cochrane [Internet]. 2020 [cited 2021 Nov 2]. Available from: https://www.cochrane.org/news/cochrane-sweden-collaborates-trial-transparency-report .
  • 27. San Francisco Declaration on Research Assessment. DORA [Internet]. 2012 [cited 2020 Dec 16]. Available from: https://sfdora.org/read/ .
  • 28. Coalition for Advancing Research Assessment. Agreement on reforming research assessment. CoARA [Internet]. 2022 Jul 20. Available from: https://coara.eu/app/uploads/2022/09/2022_07_19_rra_agreement_final.pdf .
  • 29. UNESCO. UNESCO Recommendation on Open Science. UNESCO | Open Science [Internet]. 2021 [cited 2022 Apr 29]. Available from: https://en.unesco.org/science-sustainable-future/open-science/recommendation .
  • 33. Wieschowski S, Kahrass H, Schürmann C, Strech D, Riedel N, Siegerink B, et al. IntoValue. OSF. 2017[cited 2022 Nov 9]. https://doi.org/10.17605/OSF.IO/FH426
  • 34. Wieschowski S, Strech D, Riedel N, Kahrass H, Bruckner T, Holst M, et al. IntoValue 2. OSF. 2020 Jun. Available from: https://osf.io/98j7u/ .
  • 36. Mitglieder | Medizinischer Fakultätentag. Medizinischer Fakultätentag [Internet]. [cited 2020 Aug 31]. Available from: https://medizinische-fakultaeten.de/verband/mitglieder/ .
  • 37. World Health Organization (WHO). Glossary. World Health Organization [Internet]. [cited 2021 Dec 21]. Available from: https://www.who.int/clinical-trials-registry-platform/about/glossary .
  • 38. Chang W, Cheng J, Allaire J, Sievert C, Schloerke B, Xie Y, et al. shiny: Web Application Framework for R. 2021. Available from: https://CRAN.R-project.org/package=shiny .
  • 40. R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2021. Available from: https://www.R-project.org/ .
  • 43. Deutsche Forschungsgemeinschaft. Klinische Studien: Stellungnahme der Arbeitsgruppe „Klinische Studien”der DFG-Senatskommission für Grundsatzfragen in der Klinischen Forschung. 2018. Available from: https://www.dfg.de/download/pdf/dfg_im_profil/geschaeftsstelle/publikationen/stellungnahmen_papiere/2018/181025_stellungnahme_ag_klinische_studien.pdf .
  • 44. BMBF. Grundsätze und Verantwortlichkeiten bei der Durchführung klinischer Studien. 2019. Available from: https://projekttraeger.dlr.de/media/gesundheit/GF/Grundsaetze_Verantwortlichkeiten_Klinische_Studien.pdf .
  • 46. Berndt C, Grill M. In Deutschland erforscht, im Nirwana versunken. Süddeutsche.de [Internet]. [cited 2022 Jul 14]. Available from: https://www.sueddeutsche.de/gesundheit/veroeffentlichung-studien-1.4737316 .
  • 47. Salholz-Hillel M, Franzen D, Müller-Ohlraun S, Strech D. Protocol: Clinical trial “report cards” to improve transparency at Charité and BIH: Survey and Intervention. OSF. [cited 2022 Jul 14]. Available from: https://osf.io/dk6gm .
  • 49. Wieschowski S, Strech D, Franzen D, Salholz-Hillel M, Carlisle BG, Malički M, et al. CONTRAST–CalifOrNia TRiAlS Transparency. OSF. 2022 May. Available from: https://osf.io/u9d5c/ .



April 16, 2024

Your guide to understanding phases of cancer clinical trials


Clinical trials can be an important resource for patients at every stage of their cancer diagnosis, but understanding the scientific terms, study protocols and process can be intimidating to many patients.

That’s why UW Health | Carbone Cancer Center prioritizes educational resources, including a team of Clinical Trial Nurse Navigators and the new uwhealth.org/cancertrials webpage, to ensure patients have accurate information about how clinical trials work and how their care and safety will be prioritized at every step.

“Having a good, educational source of information about clinical trials is so important because there are a lot of myths and misconceptions out there,” said Sarah Kotila, Clinical Trials Navigation Team Manager at Carbone Cancer Center.

One of the most common questions patients have is what the different phases of a clinical trial mean. Read more about each step in the important process of approving new cancer treatments.

Phase 1

In this initial step of a clinical trial, the research staff looks at the safety and appropriate dosage of a new treatment. They also watch for side effects. The number of patients enrolled in phase 1 trials is small; typically, fewer than 50 patients are involved.

Kotila often hears patients ask if phase 1 trials are safe. She explains that any cancer treatment comes with potential risks and benefits, whether it’s established clinical care or treatments being tested in clinical trials. With clinical trials, a team of experts is closely monitoring the patient’s care and frequently checking in to see how the patient is doing and feeling.

“Patients should know there can be risks and benefits to any cancer treatment they receive, and this is the same with clinical trials,” she said.

Kotila adds that before the Food and Drug Administration approves the start of a clinical trial, considerable laboratory and preclinical research has already been done to prepare for this important next step.

Phase 2

Once a study has cleared its phase 1 benchmarks, it can move into phase 2. Researchers at this stage continue to monitor safety and focus on whether the new treatment is effective for certain diagnoses.

“In phase II, they’re having more people enroll to see if the treatment is effective in specific types of cancer,” she said. “It’s still a smaller number, usually less than 100 people.”

Because they are measuring whether the approach is effective, phase 2 typically lasts several months to two years to measure changes over time. Researchers also continue to monitor for side effects that were not seen with the smaller group in phase 1.

If the treatment proves to be effective for certain types of cancer, it can advance to phase 3 status.

Phase 3

This is the final step of testing before a treatment can be approved by the FDA for standard clinical use. The new treatment is compared directly with existing standard-of-care treatments to determine whether it is as good as or better than current treatments. The patient pool is at least several hundred people, to get a widespread view of patient effects and validate the findings.

Placebos, which are inactive substances designed to look like the study medication, can be used when randomizing patients to help preserve the integrity of results when evaluating a new treatment. Placebos are rarely used in cancer treatment clinical trials. Kotila reassures patients who are afraid of placebos that they will still get treatment when needed.

“It would be unethical to not treat a cancer patient that needs treatment. Patients in these phase III randomized drug trials may get standard of care plus a placebo or standard of care plus the new treatment being tested, but they will always be treated,” she said.

FDA approval

So what happens if you’re part of a clinical trial and the study medication receives FDA approval? Dr. Mark Burkard, a physician-scientist who leads several clinical trials at Carbone, said the study’s sponsor can choose to stop the trial or continue studying long-term effects.

“In most cases, the sponsor will continue the trial to collect additional information about how the drug works, allowing the patient to choose (if they continue),” Burkard said. “Most patients choose to continue the trial. Some choose to stop the trial, if for example, they live far away and can get the same medicine from an oncologist close to home.”

Learning more

Kotila said patients who are considering clinical trials and have more questions can contact the Clinical Trials Navigation Team at (608) 262-0439 or [email protected]. If a patient would like to schedule an appointment or be seen for a clinical trial at UW Carbone, please contact our intake team at (608) 262-5223.


The Costs of Anonymization: Case Study Using Clinical Data

Published on 24.4.2024 in Vol 26 (2024)


Randomized trials of estrogen-alone and breast cancer incidence: a meta-analysis

  • Published: 23 April 2024


  • Rowan T. Chlebowski   ORCID: orcid.org/0000-0002-4212-6184 1 ,
  • Aaron K. Aragaki 2 ,
  • Kathy Pan 3 ,
  • Joanne E. Mortimer 4 ,
  • Karen C. Johnson 5 ,
  • Jean Wactawski-Wende 6 ,
  • Meryl S. LeBoff 8 ,
  • Sayeh Lavasani 4 ,
  • Dorothy Lane 7 ,
  • Rebecca A. Nelson 4 &
  • JoAnn E. Manson 8  

In the Women’s Health Initiative (WHI) randomized clinical trial, conjugated equine estrogen (CEE)-alone significantly reduced breast cancer incidence ( P  = 0.005). As cohort studies had opposite findings, other randomized clinical trials were identified to conduct a meta-analysis of the influence of estrogen-alone on breast cancer incidence.

We conducted literature searches for randomized trials combining the terms estrogen, hormone therapy, and breast cancer, supplemented by searches from a prior meta-analysis and reviews. In the meta-analysis, for trials with published relative risks (RR) and 95% confidence intervals (CI), each log-RR was multiplied by weight = 1/V, where V is the variance of the log-RR, derived from the corresponding 95% CI. For smaller trials with only breast cancer case numbers, the corresponding log-RR = (O − E)/weight, where O is the observed case number in the estrogen-alone group and E the corresponding expected case number, E = nP.
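The inverse-variance step described above (recovering V from a published 95% CI and weighting each log-RR by 1/V) can be sketched as follows; the two (RR, CI) inputs are illustrative placeholders, not values extracted from the trials:

```python
import math

def log_rr_and_weight(rr: float, ci_low: float, ci_high: float):
    """Recover log-RR and inverse-variance weight from a published RR and 95% CI."""
    log_rr = math.log(rr)
    # SE of the log-RR from the CI width: (ln(high) - ln(low)) / (2 * 1.96)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    return log_rr, 1 / se**2  # weight = 1/V, with V = SE^2

# Illustrative (rr, ci_low, ci_high) inputs only
trials = [(0.79, 0.65, 0.97), (0.65, 0.38, 1.11)]

pairs = [log_rr_and_weight(*t) for t in trials]
total_w = sum(w for _, w in pairs)
pooled_log_rr = sum(lr * w for lr, w in pairs) / total_w
pooled_se = math.sqrt(1 / total_w)

pooled_rr = math.exp(pooled_log_rr)
ci = (math.exp(pooled_log_rr - 1.96 * pooled_se),
      math.exp(pooled_log_rr + 1.96 * pooled_se))
print(f"pooled RR {pooled_rr:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```

This is a fixed-effect pooling sketch; the pooled estimate always lies between the individual log-RRs, pulled toward the trial with the tighter interval.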

Findings from 10 randomized trials included 14,282 participants and 591 incident breast cancers. In the 9 smaller trials, breast cancer was diagnosed in 1.2% (24 of 2,029) randomized to estrogen-alone vs 2.2% (33 of 1,514) randomized to placebo (one trial open label) (RR 0.65, 95% CI 0.38–1.11, P  = 0.12). For the 5 trials evaluating estradiol formulations, RR = 0.63, 95% CI 0.34–1.16, P  = 0.15. Combining all 10 trials, breast cancer was diagnosed in 3.6% (262 of 7,339) randomized to estrogen-alone vs 4.7% (329 of 6,943) randomized to placebo (overall RR 0.77, 95% CI 0.65–0.91, P  = 0.002).

The totality of randomized clinical trial evidence supports a conclusion that estrogen-alone use significantly reduces breast cancer incidence.


Data availability

All the data are available from the sources cited in the manuscript.


Acknowledgements

We acknowledge the commitment of the WHI investigators, staff, and the trial participants. Program Office: Jacques Rossouw, Shari Ludlam, Dale Burwen, Joan McGowan, Leslie Ford, and Nancy Geller (National Heart, Lung, and Blood Institute, Bethesda, MD). Clinical Coordinating Center: Garnet Anderson, Ross Prentice, Andrea LaCroix, and Charles Kooperberg (Fred Hutchinson Cancer Research Center, Seattle, WA). Investigators and Academic Centers: JoAnn E. Manson (Brigham and Women’s Hospital, Harvard Medical School, Boston, MA); Barbara V. Howard (MedStar Health Research Institute/Howard University, Washington, DC); Marcia L. Stefanick (Stanford Prevention Research Center, Stanford, CA); Rebecca Jackson (The Ohio State University, Columbus, OH); Cynthia A. Thomson (University of Arizona, Tucson/Phoenix, AZ); Jean Wactawski-Wende (University at Buffalo, Buffalo, NY); Marian Limacher (University of Florida, Gainesville/Jacksonville, FL); Robert Wallace (University of Iowa, Iowa City/Davenport, IA); Lewis Kuller (University of Pittsburgh, Pittsburgh, PA); Rowan T. Chlebowski (The Lundquist Institute, Torrance, CA); and Sally Shumaker (Wake Forest University School of Medicine, Winston-Salem, NC).

A full list of all the investigators who have contributed to WHI science can be retrieved at: https://www.whi.org/researchers/Documents%20%20Write%20a%20Paper/WHI%20Investigator%20Long%20List.pdf.

The development of this paper was partially supported by National Cancer Institute grants R01 CA119171 and R01 CA10921. The WHI program is funded by the National Heart, Lung, and Blood Institute, National Institutes of Health, U.S. Department of Health and Human Services, through contracts HHSN268201600018C and HHSN268201600001C.

Author information

Authors and Affiliations

The Lundquist Institute, 1124 W. Carson Street, Torrance, CA, USA

Rowan T. Chlebowski

Fred Hutchinson Cancer Center, Seattle, WA, USA

Aaron K. Aragaki

Kaiser Permanente Southern California, Downey, CA, USA

Kathy Pan

City of Hope National Medical Center, Duarte, CA, USA

Joanne E. Mortimer, Sayeh Lavasani & Rebecca A. Nelson

University of Tennessee Health Science Center, Memphis, TN, USA

Karen C. Johnson

University at Buffalo, Buffalo, NY, USA

Jean Wactawski-Wende

Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, USA

Dorothy Lane

Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA

Meryl S. LeBoff & JoAnn E. Manson


Contributions

Rowan Chlebowski and Aaron Aragaki contributed to the study concept and design. Aaron Aragaki performed data collection. Both Rowan Chlebowski and Aaron Aragaki contributed to the first draft of the manuscript and provided the statistical analysis. All authors provided critical manuscript review and revision for important intellectual content. Rowan Chlebowski, Karen Johnson, Jean Wactawski-Wende, Dorothy Lane, and JoAnn Manson obtained funding. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Rowan T. Chlebowski .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Ethical approval

Not applicable, as all data used were from published sources.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

Chlebowski, R.T., Aragaki, A.K., Pan, K. et al. Randomized trials of estrogen-alone and breast cancer incidence: a meta-analysis. Breast Cancer Res Treat (2024). https://doi.org/10.1007/s10549-024-07307-9


Received : 19 February 2024

Accepted : 17 March 2024

Published : 23 April 2024

DOI : https://doi.org/10.1007/s10549-024-07307-9


Keywords

  • Breast cancer incidence
  • Estrogen-alone
  • Randomized trials
  • Meta-analysis

BMJ, v.370; 2020

Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI Extension

Xiaoxuan Liu

1 Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, UK

2 Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK

3 Moorfields Eye Hospital NHS Foundation Trust, London, UK

4 Health Data Research UK, London, UK

5 Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK

Samantha Cruz Rivera

6 Centre for Patient Reported Outcome Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK

David Moher

7 Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada

8 School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada

Melanie J Calvert

9 National Institute of Health Research Surgical Reconstruction and Microbiology Centre, University of Birmingham and University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK

10 National Institute of Health Research Birmingham Biomedical Research Centre, University of Birmingham and University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK

11 National Institute of Health Research Applied Research Collaborative West Midlands, Birmingham, UK

Alastair K Denniston

12 National Institute of Health Research Biomedical Research Centre for Ophthalmology, Moorfields Hospital London NHS Foundation Trust and University College London, Institute of Ophthalmology, London, UK


The CONSORT 2010 (Consolidated Standards of Reporting Trials) statement provides minimum guidelines for reporting randomised trials. Its widespread use has been instrumental in ensuring transparency when evaluating new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate impact on health outcomes.

The CONSORT-AI extension is a new reporting guideline for clinical trials evaluating interventions with an AI component. It was developed in parallel with its companion statement for clinical trial protocols: SPIRIT-AI. Both guidelines were developed through a staged consensus process, involving a literature review and expert consultation to generate 29 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed on in a two-day consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants).

The CONSORT-AI extension includes 14 new items that were considered sufficiently important for AI interventions that they should be routinely reported in addition to the core CONSORT 2010 items. CONSORT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention is integrated, the handling of the AI intervention's inputs and outputs, the human-AI interaction, and the analysis of error cases.

CONSORT-AI will help promote transparency and completeness in reporting clinical trials for AI interventions. It will assist editors and peer reviewers, as well as the general readership, to understand, interpret, and critically appraise the quality of clinical trial design and risk of bias in the reported outcomes.

Introduction

Randomised controlled trials (RCTs) are considered the gold-standard experimental design to provide evidence of the safety and efficacy of an intervention. 1 2 Trial results, if adequately reported, have the potential to inform regulatory decisions, clinical guidelines and health policy. It is therefore crucial that RCTs are reported with transparency and completeness, so that readers can critically appraise the trial methods and findings and assess for the presence of bias in the results. 3 4 5

The CONSORT (Consolidated Standards of Reporting Trials) statement provides evidence-based recommendations to improve the completeness of reporting of RCTs. The statement was first introduced in 1996 and has since been widely endorsed by medical journals internationally. 5 Over the last two decades, it has undergone two updates and has demonstrated a significant positive impact on the quality of RCT reports. 6 7 The most recent CONSORT 2010 statement provides a 25 item checklist of the minimum reporting content applicable to all RCTs, but recognises that certain interventions may require extension or elaboration of these items. Several such extensions exist. 8 9 10 11 12 13

Artificial intelligence (AI) is an area of enormous interest with strong drivers to accelerate new interventions through to publication, implementation and market. 14 While AI systems have been researched for some time, recent advances in deep learning and neural networks have gained significant interest for their potential in health applications. Examples of such applications are wide-ranging and include AI systems for screening and triage, 15 16 diagnosis, 17 18 19 20 prognostication, 21 22 decision-support 23 and treatment recommendation. 24 However, in most recent cases, published evidence consists of in silico, early-phase validation. It has been recognised that most recent AI studies are inadequately reported and existing reporting guidelines do not fully cover potential sources of bias specific to AI systems. 25 The welcome emergence of randomised controlled trials seeking to evaluate newer interventions based on, or including, an AI component (hereafter "AI interventions") 23 26 27 28 29 30 31 has similarly been met with concerns about their design and reporting. 25 32 33 34 This has highlighted the need to provide reporting guidance that is "fit for purpose" in this domain.

CONSORT-AI (as part of the SPIRIT-AI & CONSORT-AI initiative) is an international initiative supported by CONSORT and the EQUATOR Network to evaluate the existing CONSORT 2010 statement and extend or elaborate this guidance where necessary to support reporting of clinical trials for AI interventions. 35 36 It is complementary to the SPIRIT-AI statement (SPIRIT: Standard Protocol Items: Recommendations for Interventional Trials), which aims to promote high quality protocol reporting for AI trials. This article describes the methods used to identify and evaluate candidate items and gain consensus. It also provides the CONSORT-AI checklist, which includes the new extension items and their accompanying explanations.

The SPIRIT-AI and CONSORT-AI extensions were developed simultaneously for clinical trial protocols and trial reports. An announcement of the SPIRIT-AI and CONSORT-AI initiative was published in October 2019, 35 and the two guidelines were registered as reporting guidelines under development in the EQUATOR library of reporting guidelines in May 2019. Both guidelines were developed in accordance with the EQUATOR Network's methodological framework. 37 The SPIRIT-AI and CONSORT-AI steering group, consisting of 15 international experts, was formed to oversee the conduct and methodology of the study. Definitions of key terms are given in the glossary (box 1).

  • Artificial intelligence (AI) —The science of developing computer systems which can perform tasks normally requiring human intelligence.
  • AI intervention —A health intervention which relies on an artificial intelligence/machine learning component to serve its purpose.
  • CONSORT —Consolidated Standards of Reporting Trials.
  • CONSORT-AI extension item —An additional checklist item to address AI-specific content that is not adequately covered by CONSORT 2010.
  • Class activation map —Class activation maps are particularly relevant to image classification AI interventions. Class activation maps are visualizations of the pixels that had the greatest influence on predicted class, by displaying the gradient of the predicted outcome from the model with respect to the input. They are also referred to as saliency maps or heatmaps.
  • Health outcome —Measured variables in the trial which are used to assess the effects of an intervention.
  • Human-AI interaction —The process of how users/humans interact with the AI intervention, for the AI intervention to function as intended.
  • Clinical outcome —Measured variables in the trial which are used to assess the effects of an intervention.
  • Delphi study —A research method which derives the collective opinions of a group through a staged consultation of surveys, questionnaires, or interviews, with an aim to reach consensus at the end.
  • Development environment —The clinical and operational settings from which the data used for training the model is generated. This includes all aspects of the physical setting (such as geographical location, physical environment), operational setting (such as integration with an electronic record system, installation on a physical device) and clinical setting (such as primary/secondary/tertiary care, patient disease spectrum).
  • Fine-tuning —Modifications or additional training performed on the AI intervention model, done with the intention of improving its performance.
  • Input data —The data that need to be presented to the AI intervention to allow it to serve its purpose.
  • Machine learning (ML) —A field of computer science concerned with the development of models/algorithms which can solve specific tasks by learning patterns from data, rather than by following explicit rules. It is seen as an approach within the field of artificial intelligence.
  • Operational environment —The environment in which the AI intervention will be deployed, including the infrastructure required to enable the AI intervention to function.
  • Output data —The predicted outcome given by the AI intervention based on modelling of the input data. The output data can be presented in different forms, including a classification (including diagnosis, disease severity or stage, or recommendation such as referability), a probability, a class activation map, etc. The output data typically provides additional clinical information and/or triggers a clinical decision.
  • Performance error —Instances where the AI intervention fails to perform as expected. This term can describe different types of failures and it is up to the investigator to specify what should be considered a performance error, preferably based on prior evidence. This can range from small decreases in accuracy (compared to expected accuracy), to erroneous predictions, or the inability to produce an output in certain cases.
  • SPIRIT —Standard Protocol Items: Recommendations for Interventional Trials.
  • SPIRIT-AI extension item —An additional checklist item to address AI-specific content that is not adequately covered by SPIRIT 2013.
  • SPIRIT-AI elaboration item —Additional considerations to an existing SPIRIT 2013 item when applied to AI interventions.
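The class activation map entry above can be illustrated with a minimal sketch: for a differentiable model, a pixel's saliency is the gradient of the predicted output with respect to that pixel. The toy linear-sigmoid "model" and the finite-difference gradient below are illustrative assumptions only, not part of the guideline or of any real AI intervention:

```python
import numpy as np

def predict(x, w):
    # toy differentiable "model": sigmoid of a weighted sum of pixels
    return 1.0 / (1.0 + np.exp(-np.dot(w.ravel(), x.ravel())))

def saliency_map(x, w, eps=1e-4):
    # gradient of the predicted output with respect to each input pixel,
    # estimated by central finite differences
    grad = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[idx] += eps
        x_minus[idx] -= eps
        grad[idx] = (predict(x_plus, w) - predict(x_minus, w)) / (2 * eps)
    return np.abs(grad)  # magnitude of each pixel's influence

w = np.array([[0.1, 2.0], [0.5, -3.0]])  # invented weights
x = np.ones((2, 2))                      # invented 2x2 "image"
heatmap = saliency_map(x, w)
```

For this toy model the gradient magnitude is proportional to the absolute weight on each pixel, so the "heatmap" is brightest where the model's decision depends most on the input, which is exactly what a class activation map visualises for an image classifier.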

Ethical approval

This study was approved by the ethical review committee at the University of Birmingham, UK (ERN_19-1100). Participant information was provided to Delphi participants electronically prior to survey completion and prior to the consensus meeting. Delphi participants provided electronic informed consent, and written consent was obtained from consensus meeting participants.

Literature review and candidate item generation

An initial list of candidate items for the SPIRIT-AI and CONSORT-AI checklists was generated through review of the published literature and consultation with the steering group and known international experts. A search was performed on 13th May 2019 using the terms “artificial intelligence,” “machine learning” and “deep learning” to identify existing clinical trials for AI interventions listed within the US National Library of Medicine’s clinical trial registry, ClinicalTrials.gov. There were 316 registered trials on ClinicalTrials.gov, of which 62 were completed and seven had published results. 30 38 39 40 41 42 43 Two studies were reported with reference to the CONSORT statement 30 42 and one study provided an unpublished trial protocol. 42 The Operations Team (XL, SCR, MJC and AKD) identified AI-specific considerations from these studies and reframed them as candidate reporting items. The candidate items were also informed by findings from a previous systematic review which evaluated the diagnostic accuracy of deep learning systems for medical imaging. 25 After consultation with the steering group and additional international experts (n=19), 29 candidate items were generated: 26 of which were relevant for both SPIRIT-AI and CONSORT-AI and three of which were relevant only for CONSORT-AI. The Operations Team mapped these items to the corresponding SPIRIT and CONSORT items, revising the wording and providing explanatory text as required to contextualise the items. These items were included in subsequent Delphi surveys.

Delphi consensus process

In September 2019, 169 key international experts were invited to participate in the online Delphi survey to vote on the candidate items and suggest additional items. Experts were identified and contacted via the steering group and were allowed one round of snowball recruitment, where contacted experts could suggest additional experts. In addition, individuals who made contact following publication of the announcement were included. 35 The steering group agreed that individuals with expertise in clinical trials and AI/ML, as well as key users of the technology, should be well represented in the consultation. Stakeholders included healthcare professionals, methodologists, statisticians, computer scientists, industry representatives, journal editors, policy makers, health informaticists, lawyers and ethicists, regulators, patients and funders. Participant characteristics are described in the appendix (page 1: supplementary table 1). Two online Delphi surveys were conducted. DelphiManager software (version 4.0), developed and maintained by the COMET (Core Outcome Measures in Effectiveness Trials) initiative, was used to undertake the e-Delphi survey. Participants were given written information about the study and asked to provide their level of expertise within the fields of (i) AI/ML and (ii) clinical trials. Each item was presented for consideration (26 for SPIRIT-AI and 29 for CONSORT-AI). Participants were asked to vote on each item using a 9-point scale: (1-3) not important, (4-6) important but not critical, and (7-9) important and critical. Respondents provided separate ratings for SPIRIT-AI and CONSORT-AI. There was an option to opt out of voting for each item, and each item included space for free-text comments. At the end of the Delphi survey, participants had the opportunity to suggest new items. One hundred and three responses were received for the first Delphi round, and 91 (88% of round one participants) were received for the second round.
The results of the Delphi survey informed the subsequent international consensus meeting. Twelve new items were proposed by the Delphi study participants and were added for discussion at the consensus meeting. Data collected during the Delphi survey were anonymised and item-level results were presented at the consensus meeting for discussion and voting.
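Item-level Delphi results of this kind are typically summarised as a median, interquartile range, and the share of respondents rating the item 7-9. A minimal sketch, assuming ratings are collected as integers on the 9-point scale described above; the ratings themselves are invented for illustration:

```python
from statistics import median, quantiles

def summarise_item(ratings):
    # summarise one item's 9-point Delphi ratings: median, interquartile
    # range, and the share of respondents rating it 7-9 ("important and
    # critical"), matching the survey's three bands
    q1, _, q3 = quantiles(ratings, n=4)
    critical = sum(1 for r in ratings if 7 <= r <= 9)
    return {
        "median": median(ratings),
        "iqr": (q1, q3),
        "pct_critical": 100.0 * critical / len(ratings),
    }

ratings = [9, 8, 7, 7, 6, 8, 9, 5, 7, 8]  # invented ratings for one item
summary = summarise_item(ratings)
```

Presenting the median and interquartile range alongside the percentage rating an item critical is what allows a consensus meeting to see both the central tendency and the spread of opinion for each candidate item.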

The two-day consensus meeting took place in January 2020 and was hosted by the University of Birmingham, UK, to seek consensus on the content of SPIRIT-AI and CONSORT-AI. Thirty-one international stakeholders were invited from the Delphi survey participants to discuss the items and vote for their inclusion. Participants were selected to achieve adequate representation from all the stakeholder groups. Forty-one items were discussed in turn, comprising the 29 items generated in the initial literature review and item generation phase (26 items relevant to both SPIRIT-AI and CONSORT-AI; three items relevant to CONSORT-AI only) and the 12 new items proposed by participants during the Delphi surveys. Each item was presented to the consensus group, alongside its score from the Delphi exercise (median and interquartile range) and any comments made by Delphi participants related to that item. Consensus meeting participants were invited to comment on the importance of each item and whether the item should be included in the AI extension. In addition, participants were invited to comment on the wording of the explanatory text accompanying each item and the position of each item relative to the SPIRIT 2013 and CONSORT 2010 checklists. After open discussion of each item and the option to adjust wording, an electronic vote took place with the option to include or exclude the item. An 80% threshold for inclusion was pre-specified and deemed reasonable by the steering group to demonstrate majority consensus. Each stakeholder voted anonymously using Turning Point voting pads (Turning Technologies LLC, Ohio, USA; version 8.7.2.14).
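The pre-specified 80% inclusion rule reduces to a simple tally; the vote counts below are hypothetical, chosen only to show an item falling just under and one comfortably over the threshold:

```python
def include_item(votes_to_include, total_votes, threshold=0.80):
    # pre-specified consensus rule: include the item only if at least
    # 80% of the voting panel supports inclusion
    return votes_to_include / total_votes >= threshold

# hypothetical tallies from a 31-member panel
first_vote = include_item(24, 31)   # ~77%: below the threshold
second_vote = include_item(30, 31)  # ~97%: above the threshold
```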

Checklist pilot

Following the consensus meeting, attendees were given the opportunity to make final comments on the wording and agree that the updated SPIRIT-AI and CONSORT-AI items reflected discussions from the meeting. The Operations Team assigned each item as extension or elaboration based on a decision tree and produced a penultimate draft of the SPIRIT-AI and CONSORT-AI checklist (supplementary fig 1 on bmj.com). A pilot of the penultimate checklist was conducted with 34 participants to ensure clarity of wording. Experts participating in the pilot included: a) Delphi participants who did not attend the consensus meeting and b) external experts, who had not taken part in the development process, but who had reached out to the steering committee after the Delphi study commenced. Final changes were made on wording only to improve clarity for readers, by the Operations Team (supplementary fig 2).

CONSORT-AI checklist items and explanations

The CONSORT-AI Extension recommends that 14 new checklist items are added to the existing CONSORT 2010 statement (11 extensions and three elaborations). These items were considered sufficiently important for clinical trial reports for AI interventions that they should be routinely reported in addition to the core CONSORT 2010 checklist items. Table 1 lists the CONSORT-AI items.

CONSORT-AI checklist

The 14 items below passed the threshold of 80% for inclusion at the consensus meeting. CONSORT-AI 2a, CONSORT-AI 5 (ii), and CONSORT-AI 19 each resulted from the merging of two items after discussion with the consensus group. CONSORT-AI 4a (i) and (ii) was split into two items for clarity and voted on separately. CONSORT-AI 5 (iii) did not fulfil the criteria for inclusion based on its initial wording (77% vote to include); however, after extensive discussion and rewording, the consensus group unanimously supported a re-vote, at which point it passed the inclusion threshold (97% to include). The Delphi and voting results for each included and excluded item are described in the appendix (page 2: supplementary table 2).

Title and abstract

CONSORT-AI 1a,b (i) Elaboration: Indicate that the intervention involves artificial intelligence/machine learning in the title and/or abstract and specify the type of model.

Explanation: Indicating in the title and/or abstract of the trial report that the intervention involves a form of AI is encouraged, as it immediately identifies the intervention as an artificial intelligence/machine learning intervention and also facilitates indexing and searching of the trial report. The title should be understandable by a wide audience; therefore, a broader umbrella term such as artificial intelligence or machine learning is encouraged. More precise terms should be used in the abstract, rather than the title, unless broadly recognised as being a form of artificial intelligence/machine learning. Specific terminology relating to the model type and architecture should be detailed in the abstract.

CONSORT-AI 1a,b (ii) Elaboration: State the intended use of the AI intervention within the trial in the title and/or abstract.

Explanation: Describe the intended use of the AI intervention in the trial report title and/or abstract. This should describe the purpose of the AI intervention and the disease context. 26 44 Some AI interventions may have multiple intended uses or the intended use may evolve over time. Therefore, documenting this allows readers to understand the intended use of the algorithm at the time of the trial.

CONSORT-AI 2a (i) Extension: Explain the intended use for the AI intervention in the context of the clinical pathway, including its purpose and its intended users (such as healthcare professionals, patients, or the public).

Explanation: In order to understand how the AI intervention is intended to fit into a clinical pathway, a detailed description of its role should be included in the background of the trial report. AI interventions may be designed to interact with different users, including healthcare professionals, patients and the public, and their role can be wide-ranging (for example, the same AI intervention could theoretically be replacing, augmenting, or adjudicating components of clinical decision-making). Clarifying the intended use of the AI intervention and its intended user helps readers understand the purpose for which the AI intervention was evaluated in the trial.

CONSORT-AI 4a (i) Elaboration: State the inclusion and exclusion criteria at the level of participants.

Explanation: The inclusion and exclusion criteria should be defined at the participant level as per usual practice in non-AI interventional trial reports. This is distinct from the inclusion and exclusion criteria made at the input data level, which is addressed in item 4a (ii).

CONSORT-AI 4a (ii) Extension: State the inclusion and exclusion criteria at the level of the input data.

Explanation: Input data refer to the data required by the AI intervention to serve its purpose (for example, for a breast cancer diagnostic system, the input data could be the unprocessed or vendor-specific post-processing mammography scan on which a diagnosis is being made; for an early warning system, the input data could be physiological measurements or laboratory results from the electronic health record). The trial report should pre-specify if there were minimum requirements for the input data (such as image resolution, quality metrics or data format) which determined pre-randomisation eligibility. It should specify when, how, and by whom this was assessed. For example, if a participant met the eligibility criteria for lying flat for a CT scan as per item 4a (i), but the scan quality was compromised (for any given reason) to such a level that it was deemed unfit for use by the AI system, this should be reported as an exclusion criterion at the input data level. Note that where input data are acquired after randomisation, any exclusion is considered to be from the analysis, not from enrolment (see CONSORT item 13b and fig 1 ).


Fig 1: CONSORT 2010 flow diagram—adapted for AI clinical trials
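As a sketch of how an input-data-level eligibility check (item 4a (ii)) might be operationalised and logged separately from participant-level criteria, consider the function below. The field names, minimum resolution, and quality-metric cut-off are all hypothetical, not drawn from the guideline or any real trial:

```python
MIN_HEIGHT, MIN_WIDTH = 512, 512   # hypothetical minimum image size
MIN_QUALITY = 0.5                  # hypothetical quality-metric cut-off

def input_data_eligible(scan):
    # input-data-level check, distinct from participant-level criteria;
    # returns (eligible, reason) so that exclusions can be reported
    # explicitly at the input-data level, as item 4a (ii) asks
    height, width = scan["resolution"]
    if height < MIN_HEIGHT or width < MIN_WIDTH:
        return False, "resolution below minimum"
    if scan["quality_score"] < MIN_QUALITY:
        return False, "quality metric below minimum"
    return True, None
```

Recording the reason alongside the decision is what lets a trial report state when, how, and why input data failed the pre-specified minimum requirements.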

CONSORT-AI 4b Extension: Describe how the AI intervention was integrated into the trial setting, including any onsite or offsite requirements.

Explanation: There are limitations to the generalisability of AI algorithms, one of which is when they are used outside of their development environment. 45 46 AI systems are dependent on their operational environment and the report should provide details of the hardware and software requirements to allow technical integration of the AI intervention at each study site. For example, it should be stated if the AI intervention required vendor-specific devices, if there was specialised computing hardware at each site, or if the site had to support cloud integration, particularly if this was vendor-specific. If any changes to the algorithm were required at each study site as part of the implementation procedure (such as fine-tuning the algorithm on local data), then this process should also be clearly described.

CONSORT-AI 5 (i) Extension: State which version of the AI algorithm was used.

Explanation: Similar to other forms of software as a medical device, AI systems are likely to undergo multiple iterations and updates in their lifespan. It is therefore important to specify which version of the AI system was used in the clinical trial, whether this is the same as the version evaluated in previous studies that have been used to justify the study rationale, and whether the version changed during the conduct of the trial. If applicable, the report should describe what has changed between the relevant versions and the rationales for the changes. Where available, the report should include a regulatory marking reference, such as a unique device identifier (UDI) which requires a new identifier for updated versions of the device. 47

CONSORT-AI 5 (ii) Extension: Describe how the input data were acquired and selected for the AI intervention.

Explanation: The measured performance of any AI system may be critically dependent on the nature and quality of the input data. 48 A description of the input data handling, including acquisition, selection, and pre-processing prior to analysis by the AI system should be provided. Completeness and transparency of this description is integral to the replicability of the intervention beyond the clinical trial in real-world settings. It also helps readers identify whether input data handling procedures were standardised across trial sites.

CONSORT-AI 5 (iii) Extension: Describe how poor quality or unavailable input data were assessed and handled.

Explanation: As discussed in CONSORT-AI 4a (ii), input data refer to the data required by the AI intervention to serve its purpose, and the performance of AI systems may be compromised as a result of poor quality or missing input data 49 (for example, excessive movement artefact on an electrocardiogram). The trial report should state the amount of missing data, as well as how this was identified and handled. The report should also specify if there was a minimum standard required for the input data and, where this standard was not achieved, how this was handled (including the impact on, or any changes to, the participant care pathway).

Poor quality or unavailable data can also affect non-AI interventions. For example, suboptimal quality of a scan could impact a radiologist’s ability to interpret it and make a diagnosis. It is therefore important that this information is reported equally in the control intervention, where relevant. If this minimum quality standard was different from the inclusion criteria for input data used to assess eligibility pre-randomisation, this should be stated.

CONSORT-AI 5 (iv) Extension: Specify whether there was human-AI interaction in the handling of the input data, and what level of expertise was required of users.

Explanation: The human-AI interface and the requirements for successful interaction when handling input data should be described. For example, clinician-led selection of regions of interest from a histology slide, which are then interpreted by an AI diagnostic system, 50 or endoscopist selection of colonoscopy video clips as input data for an algorithm designed to detect polyps. 28 A description of any user training provided and instructions for how users should handle the input data provides transparency and replicability of trial procedures. Poor clarity on the human-AI interface may lead to the lack of a standard approach and carry ethical implications, particularly in the event of harm. 51 52 For example, it may become unclear whether an error case occurred due to human deviation from the instructed procedure, or whether it was an error made by the AI system.

CONSORT-AI 5 (v) Extension: Specify the output of the AI intervention.

Explanation: The output of the AI intervention should be clearly specified in the trial report. For example, an AI system may output a diagnostic classification or probability, a recommended action, an alarm alerting to an event, an instigated action in a closed-loop system (such as titration of drug infusions), or another form of output. The nature of the AI intervention's output has direct implications for its usability and how it may lead to downstream actions and outcomes.

CONSORT-AI 5 (vi) Extension: Explain how the AI intervention’s outputs contributed to decision-making or other elements of clinical practice.

Explanation: Since health outcomes may also critically depend on how humans interact with the AI intervention, the report should explain how the outputs of the AI system were used to contribute to decision-making or other elements of clinical practice. This should include adequate description of downstream interventions which can impact outcomes. As with CONSORT-AI 5 (iv), any elements of human-AI interaction on the outputs should be described in detail, including the level of expertise required to understand the outputs and any training/instructions provided for this purpose. For example, a skin cancer detection system that produced a percentage likelihood as output should be accompanied by an explanation of how this output was interpreted and acted on by the user, specifying both the intended pathways (such as skin lesion excision if the diagnosis is positive) and the thresholds for entry to these pathways (such as skin excision if the diagnosis is positive and the probability is >80%). The information produced by comparator interventions should be similarly described, alongside an explanation of how such information was used to arrive at clinical decisions on patient management, where relevant. Any discrepancy in how decision-making occurred versus how it was intended to occur (that is, as specified in the trial protocol), should be reported.
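The skin cancer example above can be sketched as a pre-specified decision rule. This is an illustrative sketch only: the function name and pathway labels are invented, and the 0.8 threshold simply mirrors the ">80%" example in the text:

```python
# Hypothetical sketch: a pre-specified rule mapping an AI system's output
# (a positive/negative diagnosis plus a probability) to a care pathway.
def care_pathway(diagnosis_positive: bool, probability: float) -> str:
    """Return the invented pathway triggered by the AI output."""
    if diagnosis_positive and probability > 0.8:
        return "refer for skin lesion excision"
    if diagnosis_positive:
        return "clinical review and monitoring"
    return "routine follow-up"

print(care_pathway(True, 0.93))
```

Writing the rule down this explicitly, with its thresholds, is exactly what allows a reader to check whether decision-making in the trial occurred as specified in the protocol.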

CONSORT-AI 19 Extension: Describe results of any analysis of performance errors and how errors were identified, where applicable. If no such analysis was planned or done, explain why not.

Explanation: Reporting performance errors and failure case analysis is especially important for AI interventions. AI systems can make errors that may be hard to foresee, but which, if allowed to be deployed at scale, could have catastrophic consequences. 53 Therefore, reporting cases of error and defining risk mitigation strategies are important for informing when, and for which populations, the intervention can be safely implemented. The results of any performance error analysis should be reported and the implications of the results discussed.
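As a hedged illustration of the kind of performance error analysis this item envisages (the subgroup labels and case data below are invented), error rates could be tabulated per subgroup or site along these lines:

```python
# Hypothetical sketch: tabulating error rates by subgroup, the kind of
# summary a performance error analysis under item 19 might report.
from collections import defaultdict

def error_rates_by_subgroup(cases):
    """cases: iterable of (subgroup, prediction_correct) pairs."""
    totals, errors = defaultdict(int), defaultdict(int)
    for subgroup, correct in cases:
        totals[subgroup] += 1
        if not correct:
            errors[subgroup] += 1
    return {g: errors[g] / totals[g] for g in totals}

cases = [("site_A", True), ("site_A", False), ("site_B", True), ("site_B", True)]
print(error_rates_by_subgroup(cases))  # e.g. {'site_A': 0.5, 'site_B': 0.0}
```

A summary of this shape also supports the subgroup bias checks discussed later in this article, since markedly different error rates across sites or populations are themselves a safety finding worth reporting.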

Other information

CONSORT-AI 25 Extension: State whether and how the AI intervention and/or its code can be accessed, including any restrictions to access or re-use.

Explanation: The trial report should make it clear whether and how the AI intervention and/or its code can be accessed or re-used. This should include details regarding the license and any restrictions to access.

CONSORT-AI is a new reporting guideline extension developed through international multi-stakeholder consensus. It aims to promote transparent reporting of AI intervention trials and is intended to facilitate critical appraisal and evidence synthesis. The extension items added in CONSORT-AI address a number of issues specific to the implementation and evaluation of AI interventions, which should be considered alongside the core CONSORT 2010 checklist and other CONSORT extensions. 54 It is important to note that these are minimum requirements and there may be value in including additional items not included in the checklists (see appendix, page 2: supplementary table 2) in the report or in supplementary materials.

In both CONSORT-AI and its companion project SPIRIT-AI, a major emphasis was the addition of several new items relating to the intervention itself and its application in the clinical context. Items 5 (i) to 5 (vi) were added to address AI-specific considerations when describing the intervention. Specific recommendations pertinent to AI systems were made relating to algorithm version, input and output data, integration into trial settings, expertise of the users, and the protocol for acting on the AI system's recommendations. It was agreed that these details are critical for independent evaluation or replication of the trial. Journal editors reported that, despite the importance of these items, they are currently often missing from trial reports at the time of submission for publication, providing further weight to their inclusion as specifically listed extension items.

A recurrent focus of the Delphi comments and consensus group discussion was the safety of AI systems. This was in recognition that AI systems, unlike other health interventions, can unpredictably yield errors which are not easily detectable or explainable by human judgment. For example, changes to medical imaging that are invisible or appear random to the human eye may change the likelihood of the diagnostic output entirely. 55 56 The concern is that, given the theoretical ease with which AI systems could be deployed at scale, any unintended harmful consequences could be catastrophic. CONSORT-AI item 19, which requires specification of any plans to analyse performance errors, was added to emphasise the importance of anticipating systematic errors made by the algorithm and their consequences. Beyond this, investigators should also be encouraged to explore differences in performance and error rates across population subgroups. It has been shown that AI systems may be systematically biased towards different outputs, which may lead to different or even unfair treatment on the basis of extant features. 53 57 58 59

The topic of “continuously evolving” AI systems (also known as “continuously adapting” or “continuously learning”) was discussed at length during the consensus meeting, but the group agreed to exclude it from CONSORT-AI. These are AI systems with the ability to continuously train on new data, which may cause changes in performance over time. The group noted that, while of interest, this field is relatively early in its development without tangible examples in healthcare applications, and that it would not be appropriate to include it in CONSORT-AI at this stage. 60 This topic will be monitored and revisited in future iterations of CONSORT-AI. It is worth noting that incremental software changes, whether continuous or iterative, intentional or unintentional, could have serious consequences for safety and performance after deployment. It is therefore of vital importance that such changes are documented and identified by software version, and that a robust post-deployment surveillance plan is in place.

This study is set in the current context of AI in healthcare; therefore, several limitations should be noted. First, there are relatively few published interventional trials in the field of AI for healthcare, so the discussion and decisions made during this study were not always supported by existing examples of completed trials. This arises from our stated aim to address the issues of poor reporting in this field as early as possible, recognising the strong drivers in the field and the specific challenges of study design and reporting for AI. As the science and study of AI evolves, we welcome collaboration with investigators to co-evolve these reporting standards to ensure their continued relevance. Second, the literature search of AI RCTs used terminology such as “artificial intelligence,” “machine learning,” and “deep learning,” but not terms such as “clinical decision support systems” and “expert systems,” which were more commonly used in the 1990s for technologies underpinned by AI systems and share similar risks with recent examples. 61 It is likely that such systems, if published today, would be indexed under “AI” or “machine learning”; however, clinical decision support systems were not actively discussed during this consensus process. Third, the initial candidate items list was generated by a relatively small group of experts consisting of steering group members and additional international experts; however, additional items from the wider Delphi group were taken forward for consideration by the consensus group, and no new items were suggested during the consensus meeting or post-meeting evaluation.

As with the CONSORT statement, the CONSORT-AI extension is intended as minimum reporting guidance, and there are additional AI-specific considerations for trial reports which may warrant consideration (see appendix, page 2: supplementary table 2). This extension is particularly aimed at investigators and readers reporting or appraising clinical trials; however, it may also serve as useful guidance for developers of AI interventions in earlier validation stages of an AI system. Investigators seeking to report studies developing and validating the diagnostic and predictive properties of AI models should refer to TRIPOD-ML (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis - Machine Learning) and STARD-AI (Standards for Reporting Diagnostic accuracy studies - Artificial Intelligence), both of which are currently under development. 32 62 Other potentially relevant guidelines, which are agnostic to study design, are registered with the EQUATOR network. 63 The CONSORT-AI extension is expected to encourage careful early planning of AI interventions for clinical trials and this, in conjunction with SPIRIT-AI, should help to improve the quality of trials for AI interventions. CONSORT-AI does not add items to the discussion section of trial reports; the guidance provided by CONSORT 2010 on trial limitations, generalisability, and interpretation was deemed to be translatable to trials for AI interventions.

There is also recognition that AI is a rapidly evolving field and there will be the need to update CONSORT-AI as the technology and newer applications for it develop. Currently most applications of AI involve disease detection, diagnosis, and triage, and this is likely to have influenced the nature and prioritisation of items within CONSORT-AI. As wider applications that use “AI as therapy” emerge, it will be important to continue to evaluate CONSORT-AI in the light of such studies. Additionally, advances in computational techniques and the ability to integrate them into clinical workflows will bring new opportunities for innovation that benefits patients. However, they may be accompanied by new challenges around study design and reporting. In order to ensure transparency, minimise potential biases, and promote the trustworthiness of the results and the extent to which they may be generalisable, the SPIRIT-AI and CONSORT-AI Steering Group will continue to monitor the need for updates.

Acknowledgments

The SPIRIT-AI and CONSORT-AI Working Group gratefully acknowledges the contributions of the Delphi study participants and of those who provided feedback through final piloting of the checklist.

Delphi study participants: Aaron Y. Lee (Department of Ophthalmology, University of Washington, Seattle, WA, USA), Adrian Jonas (The National Institute for Health and Care Excellence (NICE), London, UK), Alastair K. Denniston (Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK; University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Health Data Research UK, London, UK; Centre for Patient Reported Outcomes Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK; National Institute of Health Research Biomedical Research Centre for Ophthalmology, Moorfields Hospital London NHS Foundation Trust and University College London, Institute of Ophthalmology, London, UK; Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK), Andre Esteva (Salesforce Research, San Francisco, CA, USA), Andrew Beam (Harvard T.H. Chan School of Public Health, Boston, MA, USA), Andrew Goddard (Royal College of Physicians, London, UK), Anna Koroleva (Universite Paris-Saclay, Orsay, France and Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands), Annabelle Cumyn (Department of Medicine, Université de Sherbrooke, Quebec, Canada), Anuj Pareek (Center for Artificial Intelligence in Medicine & Imaging, Stanford University, CA, USA), An-Wen Chan (Department of Medicine, Women’s College Research Institute, Women’s College Hospital, University of Toronto, Ontario, Canada), Ari Ercole (University of Cambridge, Cambridge, UK), Balaraman Ravindran (Indian Institute of Technology Madras, Chennai, India), Bu’Hassain Hayee (King’s College Hospital NHS Foundation Trust, London, UK), Camilla Fleetcroft (Medicines and Healthcare products Regulatory Agency, London, UK), Cecilia Lee (Department of Ophthalmology, University of Washington, Seattle, WA, USA), Charles Onu (Mila - the Québec AI Institute, McGill University and 
Ubenwa Health, Montreal, Canada), Christopher Holmes (Alan Turing Institute, London, UK), Christopher Kelly (Google Health, London, UK), Christopher Yau (University of Manchester, Manchester, UK; Alan Turing Institute, London, UK), Cynthia D. Mulrow (Annals of Internal Medicine, Philadelphia, PA, USA), Constantine Gatsonis (Brown University, Providence, RI, USA), Cyrus Espinoza (Patient Partner, Birmingham, UK), Daniela Ferrara (Tufts University, Medford, MA, USA), David Moher (Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada), David Watson (Green Templeton College, University of Oxford, Oxford, UK), David Westhead (School of Molecular and Cellular Biology, University of Leeds, Leeds, UK), Deborah Morrison (National Institute for Health and Care Excellence (NICE), London, UK), Dominic Danks (Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham, UK and The Alan Turing Institute, London, UK), Dun Jack Fu (Moorfields Hospital London NHS Foundation Trust, London, UK), Elaine Manna (Patient Partner, London, UK), Eric Rubin (New England Journal of Medicine, Boston, MA, USA), Ewout Steyerberg (Leiden University Medical Centre and Erasmus MC, Rotterdam, the Netherlands), Fiona Gilbert (University of Cambridge and Addenbrooke’s Hospital, Cambridge, Cambridge, UK), Frank E Harrell Jr, (Department of Biostatistics, Vanderbilt University School of Medicine, Nashville, TN, USA), Gary Collins (Centre for Statistics in Medicine, University of Oxford, Oxford, UK), Gary Price (Patient Partner, Centre for Patient Reported Outcome Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK), Giovanni Montesano (City, University of London - Optometry and Visual Sciences, London, UK; NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK), Hannah Murfet (Microsoft Research Ltd, Cambridge, UK), Heather 
Mattie (Harvard T.H. Chan School of Public Health, Harvard University, Boston, MA, USA), Henry Hoffman (Ada Health GmbH, Berlin, Germany), Hugh Harvey (Hardian Health, London, UK), Ibrahim Habli (Department of Computer Science, University of York, York, UK), Immaculate Motsi-Omoijiade (Business School, University of Birmingham, Birmingham, UK), Indra Joshi (Artificial Intelligence Unit, National Health Service X (NHSX), UK), Issac S. Kohane (Harvard University, Boston, MA, USA), Jeremie F. Cohen (Necker Hospital for Sick Children, Université de Paris, CRESS, INSERM, Paris, France), Javier Carmona (Nature Research, New York, NY, USA), Jeffrey Drazen (New England Journal of Medicine, MA, USA), Jessica Morley (Digital Ethics Laboratory, University of Oxford, Oxford, UK), Joanne Holden (National Institute for Health and Care Excellence (NICE), Manchester, UK), Joao Monteiro (Nature Research, New York, NY, USA), Joseph R. Ledsam (DeepMind Technologies, London, UK), Karen Yeung (Birmingham Law School, University of Birmingham, Birmingham, UK), Karla Diaz Ordaz (London School of Hygiene and Tropical Medicine and Alan Turing Institute, London, UK), Katherine McAllister (Health and Social Care Data and Analytics, National Institute for Health and Care Excellence (NICE), London, UK), Lavinia Ferrante di Ruffano (Institute of Applied Health Research,University of Birmingham, Birmingham, UK), Les Irwing (Sydney School of Public Health, University of Sydney, Sydney, Australia), Livia Faes (Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK and Eye Clinic, Cantonal Hospital of Lucerne, Lucerne, Switzerland), Luke Oakden-Rayner (Australian Institute for Machine Learning, North Terrace, Adelaide, Australia), Marcus Ong (Spectra Analytics, London, UK), Mark Kelson (The Alan Turing Institute, London, UK and University of Exeter, Exeter, UK), Mark Ratnarajah (C2-AI, Cambridge, UK), Martin Landray (Nuffield Department of Population Health, University 
of Oxford, Oxford, UK), Masashi Misawa (Digestive Disease Center, Showa University, Northern Yokohama Hospital, Yokohama, Japan), Matthew Fenech (Ada Health GmbH, Berlin, Germany), Maurizio Vecchione (Intellectual Ventures, Bellevue, WA, USA), Megan Wilson (Google Health, London, UK), Melanie J. Calvert (Centre for Patient Reported Outcomes Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK; National Institute of Health Research Surgical Reconstruction and Microbiology Centre, University of Birmingham and University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; National Institute of Health Research Applied Research Collaborative West Midlands; Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK), Michel Vaillant (Luxembourg Institute of Health, Luxembourg), Nico Riedel (Berlin Institute of Health, Berlin, Germany), Niel Ebenezer (Fight for Sight, London, UK), Omer F Ahmad (Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK), Patrick M. 
Bossuyt (Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Amsterdam University Medical Centers, the Netherlands), Pep Pamies (Nature Research, London, UK), Philip Hines (European Medicines Agency (EMA), Amsterdam, the Netherlands), Po-Hsuan Cameron Chen (Google Health, Palo Alto, CA, USA), Robert Golub (Journal of the American Medical Association, The JAMA Network, Chicago, IL, USA), Robert Willans (National Institute for Health and Care Excellence (NICE), Manchester, UK), Roberto Salgado (Department of Pathology, GZA-ZNA Hospitals, Antwerp, Belgium and Division of Research, Peter Mac Callum Cancer Center, Melbourne, Australia), Ruby Bains (Gastrointestinal Diseases Department, Medtronic, UK), Rupa Sarkar (Lancet Digital Health, London, UK), Samuel Rowley (Medical Research Council (UKRI), London, UK), Sebastian Zeki (Department of Gastroenterology, Guy's and St Thomas' NHS Foundation Trust, London, UK), Siegfried Wagner (NIHR Biomedical Research Centre at Moorfields Eye Hospital and UCL Institute of Ophthalmology, London, UK), Steve Harries (Institutional Research Information Service, University College London, London, UK), Tessa Cook (Hospital of University of Pennsylvania, Pennsylvania, PA, USA), Trishan Panch (Wellframe, Boston, MA, USA), Will Navaie (Health Research Authority (HRA), London, UK), Wim Weber (British Medical Journal, London, UK), Xiaoxuan Liu (Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK; University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Health Data Research UK, London, UK; Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK), Yemisi Takwoingi (Institute of Applied Health Research, University of Birmingham, Birmingham, UK), Yuichi Mori (Digestive Disease Center, Showa University, Northern Yokohama Hospital, Yokohama, 
Japan), Yun Liu (Google Health, Palo Alto, CA, USA).

Pilot study participants: Andrew Marshall (Nature Research, New York, NY, USA), Anna Koroleva (Universite Paris-Saclay, Orsay, France and Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands), Annabelle Cumyn (Department of Medicine, Université de Sherbrooke, Quebec, Canada), Anna Goldenberg (SickKids Research Institute, Toronto, ON, Canada), Anuj Pareek (Center for Artificial Intelligence in Medicine & Imaging, Stanford University, CA, USA), Ari Ercole (University of Cambridge, Cambridge, UK), Ben Glocker (BioMedIA, Imperial College London, London, UK), Camilla Fleetcroft (Medicines and Healthcare products Regulatory Agency, London, UK), David Westhead (School of Molecular and Cellular Biology, University of Leeds, Leeds, UK), Eric Topol (Scripps Research Translational Institute, La Jolla, CA, USA), Frank E. Harrell Jr, (Department of Biostatistics, Vanderbilt University School of Medicine, Nashville, TN, USA), Hannah Murfet (Microsoft Research Ltd, Cambridge, UK), Ibarahim Habli (Department of Computer Science, University of York, York, UK), Jeremie F. Cohen (Necker Hospital for Sick Children, Université de Paris, CRESS, INSERM, Paris, France), Joanne Holden (National Institute for Health and Care Excellence (NICE), Manchester, UK), John Fletcher (British Medical Journal, London, UK), Joao Monteiro (Nature Research, New York, NY, USA), Joseph R. Ledsam (DeepMind Technologies, London, UK), Mark Ratnarajah (C2-AI, London, UK), Matthew Fenech (Ada Health GmbH, Berlin, Germany), Michel Vaillant (Luxembourg Institute of Health, Luxembourg), Omer F. 
Ahmad (Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK), Pep Pamies (Nature Research, London, UK), Po-Hsuan Cameron Chen (Google Health, Palo Alto, CA, USA), Robert Golub (Journal of the American Medical Association, The JAMA Network, Chicago, IL, USA), Roberto Salgado (Department of Pathology, GZA-ZNA Hospitals, Antwerp, Belgium and Division of Research, Peter Mac Callum Cancer Center, Melbourne, Australia), Rupa Sarkar (Lancet Digital Health, London, UK), Siegfried Wagner (NIHR Biomedical Research Centre at Moorfields Eye Hospital and UCL Institute of Ophthalmology, London, UK), Suchi Saria (Johns Hopkins University, Baltimore, MD, USA), Tessa Cook (Hospital of University of Pennsylvania, Pennsylvania, PA, USA), Thomas Debray (University Medical Center Utrecht, Utrecht, the Netherlands), Tyler Berzin (Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA, USA), Wanda Layman (Nature Research, New York, NY, USA), Wim Weber (British Medical Journal, London, UK), Yun Liu (Google Health, Palo Alto, CA, USA).

Additional contributions: Eliot Marston (University of Birmingham, Birmingham, UK) for providing strategic support. Charlotte Radovanovic (University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK) and Anita Walker (University of Birmingham, Birmingham, UK) for administrative support.

Web Extra. 

Extra material supplied by the author

Appendix: Supplementary table 1 (details of Delphi survey and consensus meeting participants) and table 2 (details of Delphi survey and consensus meeting decisions)

Supplementary fig 1: Decision tree for inclusion/exclusion and extension/elaboration

Supplementary fig 2: Checklist development process

Contributors: Concept and design: all authors. Acquisition, analysis, and interpretation of data: all authors. Drafting of the manuscript: XL, SCR, AWC, DM, MJC, and AKD. Obtained funding: AKD, MJC, CY, and CH.

The SPIRIT-AI and CONSORT-AI Working Group: Xiaoxuan Liu, 1,2,3,4,5 Samantha Cruz Rivera, 5,6 David Moher, 7,8 Melanie J Calvert, 4,5,6,9,10,11 Alastair K Denniston, 1,2,4,5,6,12 Hutan Ashrafian, 13,14 Andrew L Beam, 15 An-Wen Chan, 16 Gary S Collins, 17 Ara Darzi, 13,14 Jonathan J Deeks, 10,18 M Khair ElZarrad, 19 Cyrus Espinoza, 20 Andre Esteva, 21 Livia Faes, 3,22 Lavinia Ferrante di Ruffano, 18 John Fletcher, 23 Robert Golub, 24 Hugh Harvey, 25 Charlotte Haug, 26 Christopher Holmes, 27,28 Adrian Jonas, 29 Pearse A Keane, 12 Christopher J Kelly, 30 Aaron Y Lee, 31 Cecilia S Lee, 31 Elaine Manna, 20 James Matcham, 32 Melissa McCradden, 33 Joao Monteiro, 34 Cynthia Mulrow, 35 Luke Oakden-Rayner, 36 Dina Paltoo, 37 Maria Beatrice Panico, 38 Gary Price, 20 Samuel Rowley, 39 Richard Savage, 40 Rupa Sarkar, 41 Sebastian J Vollmer, 28,42 Christopher Yau, 28,43

1. Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, UK

2. Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK

3. Moorfields Eye Hospital NHS Foundation Trust, London, UK

4. Health Data Research UK, London, UK

5. Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK

6. Centre for Patient Reported Outcome Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK

7. Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada

8. School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada

9. National Institute of Health Research Surgical Reconstruction and Microbiology Centre, University of Birmingham and University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK

10. National Institute of Health Research Birmingham Biomedical Research Centre, University of Birmingham and University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK

11. National Institute of Health Research Applied Research Collaborative West Midlands, Birmingham, UK

12. National Institute of Health Research Biomedical Research Centre for Ophthalmology, Moorfields Hospital London NHS Foundation Trust and University College London, Institute of Ophthalmology, London, UK

13. Institute of Global Health Innovation, Imperial College London, London, UK

14. Patient Safety Translational Research Centre, Imperial College London, London, UK

15. Harvard T.H. Chan School of Public Health, Boston, MA, USA

16. Department of Medicine, Women’s College Research Institute, Women’s College Hospital, University of Toronto, Ontario, Canada

17. Centre for Statistics in Medicine, University of Oxford, Oxford, UK

18. Institute of Applied Health Research, University of Birmingham, Birmingham, UK

19. Food and Drug Administration, Maryland, USA

20. Patient Representative

21. Salesforce Research, San Francisco, CA, USA

22. Department of Ophthalmology, Cantonal Hospital Lucerne, Lucerne, Switzerland

23. The BMJ, London, UK

24. JAMA (Journal of the American Medical Association), Chicago, IL, USA

25. Hardian Health, London, UK

26. New England Journal of Medicine, Massachusetts, USA

27. Department of Statistics and Nuffield Department of Medicine, University of Oxford, Oxford, UK

28. Alan Turing Institute, London, UK

29. The National Institute for Health and Care Excellence (NICE), London, UK

30. Google Health, London, UK

31. Department of Ophthalmology, University of Washington, Seattle, Washington, USA

32. AstraZeneca, Cambridge, UK

33. The Hospital for Sick Children, Toronto, Canada

34. Nature Research, New York, NY, USA

35. Annals of Internal Medicine, Philadelphia, PA, USA

36. Australian Institute for Machine Learning, North Terrace, Adelaide, Australia

37. National Institutes of Health, Maryland, USA

38. Medicines and Healthcare products Regulatory Agency, London, UK

39. Medical Research Council, London, UK

40. PinPoint Data Science, Leeds, UK

41. The Lancet Group, London, UK

42. University of Warwick, Coventry, UK

43. University of Manchester, Manchester, UK

Support: MJC is a National Institute for Health Research (NIHR) Senior Investigator and receives funding from the NIHR Birmingham Biomedical Research Centre, the NIHR Surgical Reconstruction and Microbiology Research Centre and NIHR ARC West Midlands at the University of Birmingham and University Hospitals Birmingham NHS Foundation Trust, Health Data Research UK, Innovate UK (part of UK Research and Innovation), the Health Foundation, Macmillan Cancer Support, UCB Pharma. MK ElZarrad is supported by the US Food and Drug Administration (FDA). D Paltoo is supported in part by the Office of the Director at the National Library of Medicine (NLM), National Institutes of Health (NIH). MJC, AD, and JJD are NIHR Senior Investigators. The views expressed in this article are those of the authors, Delphi participants, and stakeholder participants and may not represent the views of the broader stakeholder group or host institution, NIHR or the Department of Health and Social Care, or the NIH or FDA. DM is supported by a University of Ottawa Research Chair. AL Beam is supported by a National Institutes of Health (NIH) award 7K01HL141771-02. SJV receives funding from the Engineering and Physical Sciences Research Council, UK Research and Innovation (UKRI), Accenture, Warwick Impact Fund, Health Data Research UK and European Regional Development Fund. S Rowley is an employee for the Medical Research Council (UKRI).

Competing interests: MJC has received personal fees from Astellas, Takeda, Merck, Daiichi Sankyo, Glaukos, GlaxoSmithKline, and the Patient-Centered Outcomes Research Institute (PCORI) outside the submitted work. PA Keane is a consultant for DeepMind Technologies, Roche, Novartis, and Apellis, and has received speaker fees or travel support from Bayer, Allergan, Topcon, and Heidelberg Engineering. CJ Kelly is an employee of Google LLC and owns Alphabet stock. A Esteva is an employee of Salesforce. R Savage is an employee of Pinpoint Science. JM was an employee of AstraZeneca PLC at the time of this study.

Funding: This work was funded by a Wellcome Trust Institutional Strategic Support Fund: Digital Health Pilot Grant, Research England (part of UK Research and Innovation), Health Data Research UK and the Alan Turing Institute. The study was sponsored by the University of Birmingham, UK. The study funders and sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication.

Data availability: Data requests should be made to the corresponding author and release will be subject to consideration by the SPIRIT-AI and CONSORT-AI Steering Group.

‘Other Duties as Assigned’: The Wide-Ranging Role of the CRC Professional

Blog April 24, 2024

case study clinical trial reporting

Dee Tilley, RN, CGRN, CCRC, ACRP-MDP, ACRP-PM, FACRP, Clinical Research Nurse, Mercy Health St. Vincent Medical Center

If “recruiting and screening patients who try new treatments and monitoring and reporting on patient progress” sounds like a reasonable, if skimpy, definition of the duties of a clinical research coordinator (CRC) at a clinical trial site, imagine the surprise of a newly minted CRC who finds themself tasked with exploring a shuttered hospital in search of old research records, or visiting the local jail in hopes of finding participants who have gone missing mid-study.  

Dee Tilley, RN, CGRN, CCRC, ACRP-MDP, ACRP-PM, FACRP, a 35-year Clinical Research Nurse at Mercy Health St. Vincent Medical Center in Toledo, Ohio, has experienced both of those situations, and so many more “other duties as assigned” that she never thought would be part of her work on trials when she first earned her Certified Clinical Research Coordinator (CCRC®) designation from ACRP 18 years ago.

Clarifying that she has never had to do anything illegal in pursuit of keeping patients safe and data flowing in studies at her site, Tilley, who is an ACRP Fellow and member of The Academy Board of Trustees, nevertheless recalls how she once went “to a subject’s home with a plainclothes, armed guard to retrieve [the study drug being tested] and do a study visit.” She has also more than once made long road trips with her principal investigator and study supplies “to complete study visits when subjects were unable to return to our office,” and has helped clean out a self-storage unit filled with study records (and all the accumulated grime that came with them) going back 10 years.

Not exactly possibilities that the typical job description for CRCs is likely to include at most sites, you say? Nor, perhaps, are forays into such arenas as site process improvements, quality assurance, safety management, or medical procedure assistance, all of which another CCRC holder told ACRP she has unexpectedly experienced in her work for a trial site. Which isn’t to say that “unexpected” equals “unwelcome” or “irresponsible” where CRC duties are concerned, as some new challenges may go on to become the best parts of a CRC’s routine.  

“When I first became a CRC, I never thought that I would be assisting with medical procedures and/or devices,” the anonymous CCRC noted. “Another thing that I never expected to be doing was quality assurance work, which has quickly become my favorite part of the job. I love coming up with innovative ways to improve our department and participant safety, as well.” For example, when participants were signing onto studies in high volumes in the midst of short staffing conditions during the COVID-19 pandemic, she “helped develop a system in which we were able to operate quickly while still ensuring that participant safety was the top priority.”  

In their 2021 Clinical Researcher article, “Are Clinical Research Coordinators Recognized as Professionals?,” authors Erika Stevens, MA, FACRP, and Liz Wool, RN, BSN, FACRP, CCRA, CMT, wrote that, “[a]s the number of global clinical trials continues to rise, so does the need and demand for qualified research support personnel, which further drive expectations for clearly established job functions. Variability in the assigned roles and responsibilities among [CRCs] creates opportunity to provide clarity in defining the profession. …The ability to establish a clearly defined career roadmap for the CRC—one based on a thorough understanding of the role’s salient competencies—better enables job performance and provides opportunities for career advancement and credentials to those in the profession.”

The authors may not have meant those “opportunities” to involve things like a CRC setting their alarm for 2 a.m. to call and remind a bedside nurse to draw pharmacokinetic samples from a study participant, or stopping for doughnuts and juice on their way in to work to feed a subject’s six kids so she would come in for her visit, both of which Tilley has done, but a “whatever it takes” attitude will often send one down the right path in this line of work.  

Resources:  

ACRP CCRC Certification  

ACRP Hiring Guidelines for Entry Level Clinical Research Coordinators™  

Study: Credentialed Principal Investigators and CRCs Perform Better  

9 Reasons Sponsors Should Pay for CRCs to Attend Conferences  

Author: Gary Cramer  



Open access. Published: 23 April 2024

A first-in-human clinical study of laparoscopic autologous myoblast sheet transplantation to prevent delayed perforation after duodenal endoscopic mucosal dissection

  • Kengo Kanetaka   ORCID: orcid.org/0000-0002-1620-7044 1 ,
  • Yasuhiro Maruya 1 ,
  • Miki Higashi 1 ,
  • Shun Yamaguchi 2 ,
  • Ryo Matsumoto 2 ,
  • Shinichiro Kobayashi 2 ,
  • Keiichi Hashiguchi 3 ,
  • Fumiya Oohashi 4 ,
  • Masaki Matsumura 4 ,
  • Takahiro Naka 4 ,
  • Yusuke Sakai 5 ,
  • Kazuhiko Nakao 3 ,
  • Shigeru Miyagawa 6 &
  • Susumu Eguchi 2  

Stem Cell Research & Therapy, volume 15, Article number 117 (2024)


The detection rate of superficial non-ampullary duodenal epithelial tumors (SNADETs) has recently been increasing. Large tumors may contain malignant lesions, and early therapeutic intervention is recommended. Endoscopic submucosal dissection (ESD) is considered a feasible treatment modality; however, the anatomical and physiological characteristics of the duodenum create a risk of postoperative perforation after ESD.

To explore whether myoblast sheet transplantation could prevent delayed perforation after ESD, a first-in-human (FIH) clinical trial of laparoscopic autologous myoblast sheet transplantation after duodenal ESD was launched. Autologous myoblast sheets, fabricated from muscle tissue obtained seven weeks before ESD, were transplanted laparoscopically onto the serosal side of the ESD site. The primary endpoints were the onset of peritonitis due to delayed perforation within three days after surgery and all adverse events during the follow-up period.

Three patients with SNADETs ≥ 20 mm in size underwent transplantation of a myoblast sheet onto the serosal side of the duodenum after ESD. In case 1, the patient’s postoperative course was uneventful, and endoscopy and abdominal computed tomography revealed no signs of delayed perforation. Despite incomplete mucosal closure in case 2 and multiple micro perforations during ESD in case 3, cell sheet transplantation prevented massive postoperative perforation, and endoscopy on day 49 after transplantation revealed no stenosis.

Conclusions

This clinical trial showed the safety, efficacy, and procedural operability of this novel regenerative medicine approach involving transplanting an autologous myoblast sheet laparoscopically onto the serosa after ESD in cases with a high risk of delayed perforation. This result indicates the potential application of cell sheet medicine in treating various abdominal organs and conditions with minimal invasiveness in the future.

Trial registration

jRCT, jRCT2073210094. Registered November 8, 2021, https://jrct.niph.go.jp/latest-detail/jRCT2073210094

Introduction

Owing to its rarity, the duodenum has not received much attention as a site of tumor development. The incidence of superficial non-ampullary duodenal epithelial tumors (SNADETs), in particular, is reported to be 0.02–0.5% in autopsy series [1]. However, the rate of detection of these tumors has recently been increasing owing to advances in endoscopic technology and growing awareness of the disease [2]. Duodenal cancer is estimated to account for 0.5% of all gastrointestinal cancers. A recent Japanese study using a large-scale national database indicated that the incidence of duodenal cancer registered in 2016 was 23.7 per 1,000,000 person-years, and the authors described the incidence as increasing worldwide [3, 4, 5].

Accumulating evidence indicates that even if a tumor is localized to the mucosal layer, large tumors may contain malignant lesions [6], so early therapeutic intervention is recommended for these tumors [4]. The duodenum is located near vital organs, such as the pancreas and bile duct, and invasive procedures, such as the Whipple operation, are needed to treat duodenal cancer. In Japan, because esophagogastroduodenoscopy is widely performed to screen for gastric cancer, more than 50% of duodenal cancer cases are diagnosed at a localized stage, and endoscopic resection of these superficial tumors is recommended. According to an analysis of a national cancer registry by Yoshida et al., approximately 48% of these cancers were treated by endoscopic resection, with favorable short- and long-term outcomes [3].

Endoscopic resection techniques, such as endoscopic mucosal resection (EMR) and endoscopic submucosal dissection (ESD), are considered feasible treatment modalities for these tumors. In contrast to the widespread application of ESD for early gastric and colorectal cancer, ESD for duodenal tumors has been hampered by the difficulty of maneuvering the scope in the narrow, curving lumen and by the very thin wall of the duodenum. Furthermore, mucosal defects after ESD are exposed to irritant digestive contents, such as bile and pancreatic juice, which impair the integrity of the remnant duodenal wall. These anatomical and physiological characteristics create a risk of postoperative perforation after ESD, reported in 0–14% of cases [7]; if delayed perforation develops, emergency surgery is often required to treat potentially lethal peritonitis [8, 9, 10]. Clip closure or coverage with artificial materials has been attempted to prevent perforation of post-ESD ulcers [4], with a favorable decrease in the incidence of delayed complications [11]. However, Kato et al. reported that mucosal closure may be difficult for some tumors, such as those located in the proximal portion or those occupying a large circumference [12].

Mizutani et al. analyzed the risk factors for incomplete closure and found that a lesion location at the medial/anterior wall and a large lesion size were independent predictors of incomplete closure [13]. Shielding the wound with a polyglycolic acid (PGA) sheet is also reportedly effective in avoiding delayed adverse events when complete closure of the mucosal defect is not possible; however, Fukuhara et al. reported that the short-term outcomes of these patients were worse than those of patients who achieved complete closure [14].

Laparoscopic endoscopy cooperative surgery (LECS) is an alternative treatment approach that can achieve an appropriate resection margin and prevent duodenal leakage by reinforcing the ESD site. Kanaji et al. reported the safety and feasibility of duodenal LECS in a single-center prospective study [ 15 ], and a retrospective multicenter study confirmed its safety and feasibility [ 16 ]. However, closure of the thin wall after duodenal ESD remains challenging and requires highly advanced laparoscopic suturing techniques. Furthermore, a large mucosal defect encompassing the posterior wall or the pancreatic side can be difficult to suture, and great care should be taken to avoid causing stenosis or involving the papilla.

Although the precise mechanism underlying delayed perforation is unclear, irritating substances, such as bile and pancreatic juice, may play a crucial role [ 14 , 17 , 18 ]. In addition, Hanaoka et al. reported that delayed perforation after ESD occurred because of ischemic changes at the ESD site [ 19 ]. In a large animal model, we revealed that ischemia is one of the main causes of delayed perforation after duodenal ESD [ 20 ].

We previously reported the efficacy of myoblast sheet transplantation in preventing gastric perforation and pancreatic fistula in a rat model [ 21 , 22 ]. In addition, we demonstrated that transplantation of autologous myoblast sheets onto the serosal site after duodenal ESD prevented delayed perforation in a porcine model [ 23 ]. This result encouraged us to conduct a clinical trial using autologous myoblast sheets in patients with SNADETs.

We therefore established this first-in-human (FIH) clinical trial, the objective of which was to evaluate the safety, efficacy, and procedural operability of this novel regenerative medicine approach involving transplanting an autologous myoblast sheet laparoscopically onto the serosa after ESD in cases with a high risk of delayed perforation.

Study design

This study was a phase 1 FIH clinical trial to assess the safety and efficacy of laparoscopic myoblast sheet transplantation after duodenal ESD. It was conducted at a single center (Nagasaki University Hospital). No sample size calculation was performed; the planned enrollment was six adult patients with SNADETs. The study was supported by the Japan Agency for Medical Research and Development (AMED) under the approved title “An exploratory clinical trial of TERGS0001 in laparoscopic and endoscopic cooperative surgery for superficial non-ampullary duodenal epithelial tumor”, registered with the Japan Registry of Clinical Trials as jRCT2073210094, and approved by the Institutional Review Board of Nagasaki University on January 20, 2021.

This study was conducted according to the principles of the Declaration of Helsinki and the Japan Good Clinical Practice guidelines, and in compliance with the ethical guidelines for medical studies in human subjects. Written informed consent was obtained from all patients.

The inclusion and exclusion criteria are listed in Table  1 . Patients ≥ 20 years old with SNADETs eligible for ESD were included in this study.

Outcome measurements

These assessments are presented in Table  2 . Abdominal computed tomography (CT) was performed at baseline and three days after transplantation. At baseline, the tumor size was measured using an endoscopic scale. On days 1 and 7 after transplantation, an endoscopic examination was performed to assess the size of the ulcer.

Primary endpoints

Efficacy outcome measures.

The onset of peritonitis due to delayed perforation within three days after surgery.

Delayed perforation was defined as a perforation that manifested clinically, such as with fever and abdominal pain. Fluid collection and free air outside the duodenum on abdominal CT were taken to suggest perforation. We clarified the definition in detail to exclude the effects of the laparoscopic procedure itself (Fig. 1).

figure 1

Flowchart for defining the occurrence of peritonitis after laparoscopic cell sheet transplantation. “Severe abdominal pain” means 1) pain that could not be relieved with a strong analgesic, such as pentazocine, or 2) pain accompanied by rebound tenderness and muscular guarding, as judged by two surgeons
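For illustration only, the decision flow of Fig. 1 can be sketched as a small classification routine. The field names and the exact combination rule below are our paraphrase of the flowchart, not the trial's formal definition, and the thresholds behind "severe abdominal pain" are summarized in the caption rather than encoded here.

```python
from dataclasses import dataclass

@dataclass
class PostopFindings:
    """Observations within three days of surgery (illustrative fields)."""
    severe_abdominal_pain: bool    # per Fig. 1: not relieved by a strong analgesic,
                                   # or rebound tenderness with muscular guarding
    fever: bool                    # clinically significant fever
    ct_suggests_perforation: bool  # fluid collection / free air outside the duodenum on CT

def meets_primary_endpoint(f: PostopFindings) -> bool:
    """Hypothetical reading of Fig. 1: clinical signs of peritonitis AND
    supporting CT findings are both required, so that residual gas from the
    laparoscopic procedure alone does not trigger the endpoint."""
    clinical_signs = f.severe_abdominal_pain or f.fever
    return clinical_signs and f.ct_suggests_perforation
```

Requiring both arms of the conjunction mirrors the stated purpose of the detailed definition: excluding pneumoperitoneum left over from laparoscopy as a false-positive sign of delayed perforation.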

Safety outcome measures.

All adverse events during the follow-up period.

Any defect in the clinical trial product.

Any adverse event caused by a defect in the clinical trial product.

Secondary endpoints

Intraabdominal abscess during the follow-up period.

Postoperative drainage fluid examinations (amylase, bilirubin).

Development of epithelization or stricture on postoperative days 7 and 29.

Intraoperative procedural accidents.

Perforation requiring intraoperative surgical closure.

Intraoperative development of micro perforation.

The onset of bleeding.

Other procedural accidents.

Time spent on laparoscopic implantation of a myoblast sheet.

Success in the placement of a myoblast sheet.

Body temperature on the day after ESD and postoperative days 3 and 7.

Peripheral white blood cell counts on the day after ESD and on postoperative days 3 and 7.

CRP level on the day after ESD and postoperative days 3 and 7.

Presence or absence of abdominal pain during the follow-up period.

Presence or absence of the need for secondary emergency surgery owing to the onset of peritonitis after implantation of a skeletal muscle-derived cell sheet.

Presence or absence of bleeding requiring emergency care.

Curative resection rate in all tumors subjected to ESD.

Endoscopic mucosal resection size.

Evaluation of resected tumor specimen.

Mucosal resection size.

Histopathology data.

Relationship between success or failure of mucosal closure and the presence or absence of peritonitis after delayed perforation.

Serious adverse events.

Adverse events caused by harvesting skeletal muscle tissues (adverse events with an undeniable causal relationship with harvesting skeletal muscle tissues).

Changes in vital signs, complete blood count, and serum chemistry.

Autologous myoblast cell cultures and preparation of myoblast sheets

Seven weeks before duodenal ESD, approximately 2–5 g of skeletal muscle was obtained from the quadriceps muscle of each patient and transferred by air to the cell-processing facility (CPF) of the Terumo Corporation (Kanagawa, Japan). In the CPF, connective tissue was carefully removed from the retrieved specimen, and the remaining muscle tissue was minced into small pieces. The minced muscle was digested at 37 °C in an aluminum block bath with TrypLE Select (Thermo Fisher, MA, USA) containing collagenase, gentamicin sulfate, and amphotericin B. The fluid was discarded, and culture medium (MCDB131; Thermo Fisher) supplemented with 20% fetal bovine serum was added to halt the enzymatic digestion. Isolated cells were collected by centrifugation and then seeded onto flasks (Thermo Fisher) with MCDB131 medium supplemented with 20% fetal bovine serum.

After cultivation, the cells were harvested by trypsinization. After sufficient expansion for cell sheet fabrication, the cells were dissociated from the flasks with TrypLE Select, and the cell suspension was cryopreserved and transferred to Nagasaki University Hospital two days before transplantation. In the CPF of Nagasaki University, the cells were reincubated on 60-mm temperature-responsive culture dishes (CellSeed, Tokyo, Japan) at 37 °C, with the cell number adjusted to 2.2 × 10⁷ per dish. On the day of transplantation, the cells were washed with ice-cold HBSS(+) and incubated at room temperature for 10 min. After this reduction of the culture temperature, the myoblast sheet could be detached without any enzymatic treatment, preserving the important membrane proteins and extracellular matrix and allowing the cell sheet to integrate successfully with the tissue at the implanted site. The diameter of each detached cell sheet was expected to be approximately 2.5 cm. To increase its strength during handling, fibrin was sprayed onto the surface of the cell sheet.
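As a rough back-of-the-envelope check (our arithmetic, not the authors'), seeding 2.2 × 10⁷ cells on a 60-mm dish corresponds to a density on the order of 8 × 10⁵ cells/cm², under the simplifying assumption that the growth area spans the full nominal dish diameter (real 60-mm dishes have a somewhat smaller inner growth area):

```python
import math

cells_per_dish = 2.2e7    # cells seeded per temperature-responsive dish (from the text)
dish_diameter_cm = 6.0    # nominal 60-mm dish; assumed equal to the growth-area diameter

area_cm2 = math.pi * (dish_diameter_cm / 2) ** 2  # ~28.3 cm^2
density = cells_per_dish / area_cm2               # seeding density in cells/cm^2
print(f"{density:.2e} cells/cm^2")
```

This is near-confluent seeding for myoblasts, consistent with the goal of harvesting a contiguous sheet rather than expanding the culture further.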

The evaluation of the myoblast sheet

The harvested myoblasts were assessed for viable cell number and viability at each passage. Quality testing of the myoblast sheet also included assessments for bacterial, viral, mycoplasma, and endotoxin contamination. Cell purity was measured by flow cytometry (Beckman Coulter, Miami, FL, USA) after staining with an anti-cluster of differentiation 56 (CD56) antibody (BD Bioscience, San Diego, CA, USA).

The initial plan was to enroll six patients in our clinical study, but the coronavirus pandemic reduced the chance of detecting SNADETs through medical examinations with gastroduodenoscopy, so the trial was limited to a total of three cases.

Table 3 summarizes the perioperative characteristics of the three enrolled patients. Laparoscopic transplantation of the two cell sheets was performed without any adverse events. Although the postoperative course was uneventful in all enrolled patients, under our strict criteria for the postoperative amylase level in drainage fluid, peritonitis was judged to be present in two of the three cases.

A woman in her mid-50s was referred to our hospital because a SNADET was detected during a routine medical checkup. Upper gastroduodenal endoscopy revealed a type 0-IIa tumor in the descending duodenum. As the tumor was 25 mm in diameter, endoscopic resection was indicated because of the potential for malignancy (Fig. 2A).

figure 2

Preparation of the myoblast sheet and laparoscopic transplantation of the sheet after ESD in case 1. A: Superficial duodenal tumor located in the second portion of the duodenum. B: Muscle tissue was surgically excised from the patient’s thigh. C: A transplantable myoblast sheet was harvested after seven weeks of culture. D: A mucosal defect around 40 mm in diameter after duodenal ESD. E: Cell sheets were transplanted onto the serosal side of the duodenal wall after ESD. F: Abdominal CT three days after transplantation revealed no obvious signs of perforation

Written informed consent for inclusion in our clinical study was obtained, and muscle specimens were harvested from the vastus medialis muscle under local anesthesia (Fig. 2B). The resected specimen (3.2 g) was immediately transported by air under sterile conditions to the CPF of Terumo Corporation in Kanagawa, where all of the culture and cell fabrication steps described above were performed.

After seven weeks of cell expansion, the cell suspensions were transported back to the CPF of Nagasaki University Hospital on the day before transplantation and placed in temperature-responsive cell culture dishes (UPCell; CellSeed) to fabricate a myoblast sheet (Fig.  2 C).

Under general anesthesia, duodenal LECS was performed: five trocars were initially inserted into the abdomen for laparoscopic transplantation of the cell sheets. Intraoperative endoscopy revealed a type 0-IIa tumor on the oral side of the papilla of Vater, in the descending duodenum. After clamping the jejunum, duodenal ESD was performed by an endoscopist (Fig. 2D). Although no immediate perforation was evident, the wall after ESD was so thin that the endoscopic light could be seen laparoscopically.

After closure of the mucosal defect with endoscopic clipping, two myoblast sheets were transplanted onto the serosal side of the duodenal ESD site using a silicone membranous carrier device. A myoblast sheet on the carrier was placed onto a polyester mesh, and the carrier and mesh were pinched with conventional laparoscopic forceps and rolled up for placement into the abdominal cavity through a 12-mm laparoscopic port. The carrier and mesh were deployed, and the carrier attached to the cell sheet was placed on the surface of the duodenum, cell-sheet side down. After confirming the attachment of the cell sheet, the carrier and mesh were gently removed and retrieved from the abdominal cavity (Fig. 2E).

On postoperative day 1, the patient did not show any signs of peritonitis, such as abdominal pain or a fever. An endoscopic examination revealed no perforation, but dropout of several clips was observed. The amylase value in the fluid of the drain placed in Morrison’s pouch was 68 U/L on postoperative day 1.

Abdominal CT on postoperative day 3 showed neither air bubbles nor fluid around the duodenum (Fig.  2 F), and there were no signs of peritonitis or retroperitonitis on a physical examination. The postoperative course was uneventful, and the patient was discharged from the hospital nine days after the operation. Follow-up examinations were performed approximately seven weeks after cell sheet transplantation, including endoscopic and clinical examinations. No tests revealed any abnormalities, confirming the safety of the transplantation procedure.

A man in his mid-70s was referred to our hospital because a SNADET 25 mm in diameter was found in the second portion of the duodenum. Seven weeks after harvesting the skeletal muscle tissue, duodenal LECS was performed. Although no intraoperative perforation was observed, mucosal closure with endoscopic clips was insufficient because of the large mucosal defect after ESD (Fig. 3A). Incomplete closure of the mucosal defect by clipping caused the ulcer base to bulge outward, leaving a diverticulum-like space at the ESD site.

figure 3

Myoblast sheet transplantation after ESD in case 2. A : The closure of the mucosal defect with clipping was incomplete due to its large diameter. A protruding, thinned ulcerative base after incomplete clipping was seen. B : Two cell sheets were transplanted to fully cover the thinned duodenal wall after duodenal mobilization. C : Abdominal CT three days after transplantation showed a small air bubble at the dorsal side of the duodenum. Neither massive free air nor fluid collection was evident. D : The air bubble on abdominal CT had diminished by seven days after transplantation

As the thinned area after ESD was semicircular from the dorsal side to the contralateral side of the pancreas, full mobilization of the duodenum was performed to the extent of the pancreatic head. Two cell sheets were applied to the protruding area of the ESD, and the omentum was placed onto the transplanted sheets (Fig.  3 B).

Although there was no postoperative abdominal pain or a fever, an elevated intra-abdominal drain amylase level of 15,623 U/ml was observed the following day, and abdominal CT revealed a small amount of free air along the dorsal duodenum of the ESD site.

With close follow-up, the patient’s condition did not deteriorate, and the volume of drainage fluid and amylase value promptly decreased to 551 U/ml on the third day (Fig.  4 A). Abdominal CT still showed free air bubbles on the dorsal side of the duodenum (Fig.  3 C). There was no worsening of clinical symptoms, such as abdominal pain, and free air had completely diminished on abdominal CT (Fig.  3 D).

figure 4

Postoperative changes in the excretion volume obtained from the drainage tube placed near the transplanted site in cases 2 ( A ) and 3 ( B )

On the 49th day after transplantation, endoscopic observation revealed no stenosis in the transplanted area.

A man in his early 40s with a SNADET located in the descending portion of the duodenum was included in our study. During duodenal LECS, at least four micro perforations were observed intraoperatively; however, each perforation was small and not as severe as one accompanied by mucosal exfoliation (Fig. 5A). Air and bile leakage diminished after complete mucosal closure with endoscopic clips, although the thin ulcer base bulged outward like a diverticulum due to the mucosal clips (Fig. 5B). Two cell sheets were applied to the serosal side after ESD to cover each perforated site, and the omentum was placed onto the sheets without fixation (Fig. 5C). No postoperative abdominal pain or fever was observed; however, a blood test revealed a CRP level of 13 mg/dl.

figure 5

Myoblast sheet transplantation after ESD in case 3. A: At least four micro perforations were evident after duodenal ESD. B: Laparoscopy also revealed micro perforations with bile leakage. The thinned duodenal wall protruded like a diverticulum due to mucosal clipping. C: All micro perforations were completely sealed by the transplanted myoblast sheets. D: Abdominal CT three days after transplantation showed a small air bubble with diverticulum-like protrusion of the duodenal wall. Fluid collection around the duodenum was not evident. E: Endoscopy seven days after transplantation revealed an opening of the mucosa with dropout of the intraoperative clips. F: Abdominal CT seven days after transplantation showed air bubbles along the duodenal wall, but these had not expanded even after endoscopic observation

The patient did not show any symptoms of peritonitis; however, an elevated drain amylase level of 16,109 U/ml was observed on postoperative day 3 (Fig. 4B). Abdominal CT showed no marked increase in fluid collection or free air, but a small air bubble was detected on the lateral side of the ESD site (Fig. 5D). Two possible sources of this solitary air bubble were considered: localized free air due to perforation, or a diverticulum-like space in the duodenum. Since there were no worsening clinical symptoms, such as fever or increased abdominal pain, protease inhibitors, a somatostatin analog, and antibiotics were started for pancreatitis, resulting in a prompt reduction of the drain amylase level to 1,973 U/ml (Fig. 4B). On day 7, an endoscopic examination revealed an opening of the mucosa that had been closed intraoperatively with clips (Fig. 5E). A robust structure backing the bottom of the duodenal wall defect was observed, and abdominal CT after endoscopy showed no expansion of free air despite endoscopic insufflation (Fig. 5F).

On the 49th (50th) day after transplantation, endoscopic observation revealed no stenosis in the transplanted cell sheet area.

This clinical trial evaluated the efficacy, safety, and feasibility of laparoscopic transplantation of autologous myoblast sheets to prevent delayed perforation after duodenal ESD in patients with SNADETs. We showed that cell sheet transplantation could prevent massive postoperative perforation after duodenal ESD, even in patients with an increased risk of delayed perforation, such as those with incomplete mucosal closure or intraoperative micro perforations due to a larger tumor size than expected. The novelty lies in the fact that cell sheet transplantation was performed on the serosal side using a laparoscope, not on the inner lumen using an endoscope. These results are notable because they may expand the applications of cell sheet medicine from the endoscopic field to the laparoscopic surgical field.

Recent advances in tissue engineering have enabled the use of cells as a promising modality for treating patients with intractable diseases. Cell therapy has recently been introduced into clinical practice for the functional repair of deficiencies in various fields, including hepatology and gastroenterology [24,25,26,27,28,29,30,31]. Cell sheet medicine has also been implemented in clinical practice in the field of cardiac surgery. Yoshikawa et al. reported that the advantages of cell sheet implantation include the potential to increase the number of implanted cells and the maintenance of cell-to-cell contact, which leads to greater cell survival and a better paracrine effect than local injection [32, 33]; this paracrine effect is thought to be the main mechanism underlying cell sheet transplantation. The secretion of various growth factors, such as hepatocyte growth factor (HGF) and vascular endothelial growth factor (VEGF), can locally promote angiogenesis at the transplanted site. We also recently showed that myoblast sheet transplantation accelerates healing of the transplanted site through the chronological expression of various growth factors [34]. Based on these findings, one mechanism of cell sheet transplantation may be direct promotion of regeneration of the thinned duodenal wall, thereby strengthening the adhesion around the ESD site. In addition to these paracrine effects, the barrier effect of the myoblast sheet might also play a key role in preventing delayed perforation.

Cell sheet technology has also been applied clinically in the gastrointestinal field. Ohki et al. conducted a clinical study of 10 patients in whom autologous buccal cell sheets were transplanted to cover mucosal defects after ESD for superficial esophageal cancer [35]. They demonstrated remarkable results in which cell sheet transplantation successfully prevented esophageal stricture, even after semi-circumferential ESD with a high risk of stenosis. In collaboration with Ohki et al., Yamaguchi et al. investigated the efficacy of cell sheet transplantation after airplane transportation of a fabricated cell sheet. Although their study involved a wider resection area and lower cell sheet coverage of the ESD site than those reported by Ohki et al., they also showed impressive results, with a luminal stenosis rate of 40% and a median of 0 mandatory endoscopic dilatations [36]. Recently, the effect of cell sheet transplantation on post-anastomotic stenosis in congenital esophageal disease has also been reported: among three patients who received autologous oral mucosal cell sheets after endoscopic balloon dilatation for anastomotic stenosis, two were free from dilatation for at least one year after transplantation [37]. Given these reports of transplantation onto the esophageal lumen, endoscopic transplantation of the myoblast sheet directly onto the mucosa seems an attractive option; however, no feasible device is yet available that can transport a fragile cell sheet into the duodenum across the narrow esophagus, the esophagogastric junction, and the pylorus. Moreover, bile, pancreatic juice, and the active peristalsis of the duodenal lumen are well known to make engraftment of such a fragile cell sheet onto the ESD ulcer very difficult.

Various routes of cell transplantation, such as intravascular and local injection, can be considered depending on the site where the implanted cells are expected to function. Our study is important in that we successfully transported fragile myoblast sheets into a high-pressure pneumoperitonized abdominal cavity through a thin laparoscopic trocar. For endoscopic transplantation of cell sheets onto the inner lumen of the esophagus, no surgical intervention is required to create a dedicated transplantation route, and endoscopically applying a sheet-like structure is relatively easy with a simple procedure [37,38,39]. However, transplantation into internal body cavities, such as the thoracic and abdominal cavities, requires a surgical route for cell sheet transplantation.

In a clinical study of autologous myoblast sheet transplantation in 15 patients with ischemic cardiomyopathy, the cell sheets were transplanted onto the left ventricular wall of the heart through a thoracotomy at the left fifth intercostal space [40]. Kanzaki et al. reported thoracoscopic transplantation of an autologous dermal fibroblast sheet after lung resection to treat air leakage [41, 42]. They used a CellShifter with thoracoscopic equipment for transplantation through a trocar inserted into the thoracic wall. Various thoracoscopic devices have recently been developed for transplanting sheets into the thoracic cavity, which is supported by the bony rib cage; however, no reports have described the transplantation of sheets into a high-pressure pneumoperitonized abdominal cavity [43, 44]. We established a feasible procedure for laparoscopic transplantation of myoblast sheets in our porcine model [45] and confirmed in the present study that this procedure is also practical in the clinical setting. This approach may expand the potential utility of cell sheet medicine to intra-abdominal organs other than the duodenum.

Despite inadequate endoscopic clip closure of a large mucosal defect (> 40 mm) in case 2 and multiple intraoperative microperforations in case 3, cell sheet transplantation in both cases prevented clinical problems, such as increased drainage or worsening peritonitis symptoms due to rupture of the ulcer base. In case 2, we were unable to exclude the possibility that the elevated drain amylase was due to pancreatitis caused by Kocher’s maneuver extending to the dorsal side of the pancreas, rather than to delayed perforation. It is also possible that the microperforations in case 3 were not completely covered by the cell sheet, as the protocol stipulated that only two cell sheets could be used. Because the patient’s postoperative condition did not deteriorate, reoperation was not necessary. Although, according to our criteria, we had to judge that “peritonitis” occurred in two of the three patients, our findings indicate the effectiveness of cell sheet transplantation for reinforcing defects of the intestinal wall, even with exposure to irritating digestive juices.

An endoscopic examination in case 3 revealed a robust structure backing the mucosal defect seven days after ESD. This finding was similar to that obtained in our animal model, in which immunohistochemical examination of the ESD site showed abundant fibroblasts and collagen fibers among the implanted myoblasts backing the healing ulcer [23]. In all cases, neither stenosis nor recurrent tumor was found on endoscopy at 49 days after transplantation, indicating that the paracrine effect of the cell sheet did not negatively affect tissue regeneration.

With its recent coverage by the national health insurance system, duodenal LECS with suture reinforcement of the thinned ESD site has rapidly gained popularity. In the future, proper candidates for cell sheet transplantation may be patients in whom laparoscopic suturing after ESD would be difficult because the tumor is located on the pancreatic side, and patients with large lesions in whom suturing raises concern about duodenal stricture. A randomized controlled trial is needed to validate the effectiveness of cell sheet transplantation in patients who would truly benefit from this treatment.

However, we are concerned that there would be hurdles in accumulating and registering suitable patients for such a study. First, the subjects of a future study might be patients with SNADETs without submucosal extension; however, the preoperative diagnosis of the depth of invasion of SNADETs, as well as their histological diagnosis, is extremely difficult. In the present study, three patients were excluded from enrollment before review by the eligibility committee because preoperative endoscopy judged their tumors to have invaded beyond the mucosa (data not shown). However, postoperative pathological examination after duodenal LECS revealed that one was an adenoma and the other two were duodenal carcinomas confined to the mucosa. Second, a large discrepancy was found between the preoperative endoscopic measurement of the tumor diameter and the actual diameter of the resected specimen, which made the difficulty of ESD and of mucosal clipping after ESD unpredictable. This uncertainty in the ESD procedure might interfere with the equal distribution of patients in a randomized controlled trial.

Several reports have described the application of cell sheet technology in the abdominal cavity. Maruya et al. reported that adipose-derived stem cell (ADSC) sheets enhanced anastomotic strength in a miniature pig model [46]. Hara et al. similarly reported the efficacy of ADSC sheet transplantation around the anastomotic site of the bile duct to prevent biliary anastomotic stricture [47]. In addition, we previously reported that cell sheets fabricated with islets and fibroblasts or ADSCs had a cytoprotective effect on islet function compared with sheets fabricated with islets alone [48]. Such sheets may be transplanted onto the liver surface, as described by Inagaki et al. [49]. Miyamoto et al. also reported that hepatocyte sheets transplanted onto the liver surface exerted sustained stimulation of liver function compared with sheets transplanted subcutaneously [50].

Several limitations associated with the present study warrant mention. Because of the COVID-19 pandemic, this trial was limited to a total of only three cases, and because the procedure was performed for prevention, statistical assessment of efficacy is difficult. The influence of the small sample size on the study outcome could not be ruled out, and power calculations could therefore not be performed to support the usefulness of this study. In addition, our clinical study was designed as a single-arm, single-center study, which may have affected the outcome of therapy. As described above, a randomized controlled trial seemed difficult to perform, so future studies with more patients are needed.

Our ongoing FIH clinical trial demonstrated successful transplantation of autologous myoblast sheets into pneumoperitonized abdominal cavities via the laparoscopic approach. These results suggest the potential application of cell sheet medicine in the treatment of various abdominal organs with minimal invasiveness. The efficacy of cell sheet transplantation for preventing delayed perforation after duodenal ESD must be further validated in subsequent clinical trials.

Data availability

The data that support the findings of this study are available from the corresponding author on reasonable request.

Kakushima N, Kanemoto H, Tanaka M, Takizawa K, Ono H. Treatment for superficial non-ampullary duodenal epithelial tumors. World J Gastroenterol. 2014;20:12501–8.

Goda K, Kikuchi D, Yamamoto Y, Takimoto K, Kakushima N, Morita Y, et al. Endoscopic diagnosis of superficial non-ampullary duodenal epithelial tumors in Japan: Multicenter case series. Dig Endosc. 2014;26(Suppl 2):23–9.

Yoshida M, Yabuuchi Y, Kakushima N, Kato M, Iguchi M, Yamamoto Y, et al. The incidence of non-ampullary duodenal cancer in Japan: the first analysis of a national cancer registry. J Gastroenterol Hepatol. 2021;36:1216–21.

Nakagawa K, Sho M, Fujishiro M, Kakushima N, Horimatsu T, Okada KI et al. Clinical practice guidelines for duodenal cancer 2021. J Gastroenterol. 2022.

Yabuuchi Y, Yoshida M, Kakushima N, Kato M, Iguchi M, Yamamoto Y, et al. Risk factors for non-ampullary duodenal adenocarcinoma: a systematic review. Dig Dis. 2022;40:147–55.

Okada K, Fujisaki J, Kasuga A, Omae M, Kubota M, Hirasawa T, et al. Sporadic nonampullary duodenal adenoma in the natural history of duodenal cancer: a study of follow-up surveillance. Am J Gastroenterol. 2011;106:357–64.

Fujihara S, Mori H, Kobara H, Nishiyama N, Matsunaga T, Ayaki M, et al. Management of a large mucosal defect after duodenal endoscopic resection. World J Gastroenterol. 2016;22:6595–609.

Inoue T, Uedo N, Yamashina T, Yamamoto S, Hanaoka N, Takeuchi Y, et al. Delayed perforation: a hazardous complication of endoscopic resection for non-ampullary duodenal neoplasm. Dig Endosc. 2014;26:220–7.

Nonaka S, Oda I, Tada K, Mori G, Sato Y, Abe S, et al. Clinical outcome of endoscopic resection for nonampullary duodenal tumors. Endoscopy. 2015;47:129–35.

Hara Y, Goda K, Dobashi A, Ohya TR, Kato M, Sumiyama K, et al. Short- and long-term outcomes of endoscopically treated superficial non-ampullary duodenal epithelial tumors. World J Gastroenterol. 2019;25:707–18.

Tsutsumi K, Kato M, Kakushima N, Iguchi M, Yamamoto Y, Kanetaka K, et al. Efficacy of endoscopic preventive procedures to reduce delayed adverse events after endoscopic resection of superficial nonampullary duodenal epithelial tumors: a meta-analysis of observational comparative trials. Gastrointest Endosc. 2021;93:367–74. e3.

Kato M, Ochiai Y, Fukuhara S, Maehata T, Sasaki M, Kiguchi Y, et al. Clinical impact of closure of the mucosal defect after duodenal endoscopic submucosal dissection. Gastrointest Endosc. 2019;89:87–93.

Mizutani M, Kato M, Sasaki M, Masunaga T, Kubosawa Y, Hayashi Y, et al. Predictors of technical difficulty for complete closure of mucosal defects after duodenal endoscopic resection. Gastrointest Endosc. 2021;94:786–94.

Fukuhara S, Kato M, Iwasaki E, Sasaki M, Tsutsumi K, Kiguchi Y, et al. Management of perforation related to endoscopic submucosal dissection for superficial duodenal epithelial tumors. Gastrointest Endosc. 2020;91:1129–37.

Kanaji S, Morita Y, Yamazaki Y, Otowa Y, Takao T, Tanaka S, et al. Feasibility of laparoscopic endoscopic cooperative surgery for non-ampullary superficial duodenal neoplasms: single-arm confirmatory trial. Dig Endosc. 2021;33:373–80.

Nunobe S, Ri M, Yamazaki K, Uraoka M, Ohata K, Kitazono I, et al. Safety and feasibility of laparoscopic and endoscopic cooperative surgery for duodenal neoplasm: a retrospective multicenter study. Endoscopy. 2021;53:1065–8.

Hoteya S, Kaise M, Iizuka T, Ogawa O, Mitani T, Matsui A, et al. Delayed bleeding after endoscopic submucosal dissection for non-ampullary superficial duodenal neoplasias might be prevented by prophylactic endoscopic closure: analysis of risk factors. Dig Endosc. 2015;27:323–30.

Yahagi N, Kato M, Ochiai Y, Maehata T, Sasaki M, Kiguchi Y, et al. Outcomes of endoscopic resection for superficial duodenal epithelial neoplasia. Gastrointest Endosc. 2018;88:676–82.

Hanaoka N, Uedo N, Ishihara R, Higashino K, Takeuchi Y, Inoue T, et al. Clinical features and outcomes of delayed perforation after endoscopic submucosal dissection for early gastric cancer. Endoscopy. 2010;42:1112–5.

Hashiguchi K, Maruya Y, Matsumoto R, Yamaguchi S, Ogihara K, Ohnita K, et al. Establishment of an in-vivo porcine delayed perforation model after duodenal endoscopic submucosal dissection. Dig Endosc. 2021;33:381–9.

Tanaka T, Kuroki T, Adachi T, Ono S, Kitasato A, Hirabaru M, et al. Development of a novel rat model with pancreatic fistula and the prevention of this complication using tissue-engineered myoblast sheets. J Gastroenterol. 2013;48:1081–9.

Tanaka S, Kanetaka K, Fujii M, Ito S, Sakai Y, Kobayashi S, et al. Cell sheet technology for the regeneration of gastrointestinal tissue using a novel gastric perforation rat model. Surg Today. 2017;47:114–21.

Matsumoto R, Kanetaka K, Maruya Y, Yamaguchi S, Kobayashi S, Miyamoto D, et al. The efficacy of autologous myoblast sheet transplantation to prevent perforation after duodenal endoscopic submucosal dissection in porcine model. Cell Transpl. 2020;29:963689720963882.

Terai S, Ishikawa T, Omori K, Aoyama K, Marumoto Y, Urata Y, et al. Improved liver function in patients with liver cirrhosis after autologous bone marrow cell infusion therapy. Stem Cells. 2006;24:2292–8.

Voltarelli JC, Couri CE, Stracieri AB, Oliveira MC, Moraes DA, Pieroni F, et al. Autologous nonmyeloablative hematopoietic stem cell transplantation in newly diagnosed type 1 diabetes mellitus. JAMA. 2007;297:1568–76.

Cassinotti A, Annaloro C, Ardizzone S, Onida F, Della Volpe A, Clerici M, et al. Autologous haematopoietic stem cell transplantation without CD34 + cell selection in refractory Crohn’s disease. Gut. 2008;57:211–7.

Hawkey CJ, Allez M, Clark MM, Labopin M, Lindsay JO, Ricart E, et al. Autologous hematopoietic stem cell transplantation for refractory Crohn disease: a randomized clinical trial. JAMA. 2015;314:2524–34.

Bhansali S, Dutta P, Kumar V, Yadav MK, Jain A, Mudaliar S, et al. Efficacy of autologous bone marrow-derived mesenchymal stem cell and mononuclear cell transplantation in type 2 diabetes mellitus: a randomized, placebo-controlled comparative study. Stem Cells Dev. 2017;26:471–81.

Newsome PN, Fox R, King AL, Barton D, Than NN, Moore J, et al. Granulocyte colony-stimulating factor and autologous CD133-positive stem-cell therapy in liver cirrhosis (REALISTIC): an open-label, randomised, controlled phase 2 trial. Lancet Gastroenterol Hepatol. 2018;3:25–36.

Panes J, Garcia-Olmo D, Van Assche G, Colombel JF, Reinisch W, Baumgart DC, et al. Long-term efficacy and safety of stem cell therapy (Cx601) for complex perianal fistulas in patients with Crohn’s disease. Gastroenterology. 2018;154:1334–42. e4.

Moroni F, Dwyer BJ, Graham C, Pass C, Bailey L, Ritchie L, et al. Safety profile of autologous macrophage therapy for liver cirrhosis. Nat Med. 2019;25:1560–5.

Menasche P, Alfieri O, Janssens S, McKenna W, Reichenspurner H, Trinquart L, et al. The myoblast autologous grafting in ischemic cardiomyopathy (MAGIC) trial: first randomized placebo-controlled study of myoblast transplantation. Circulation. 2008;117:1189–200.

Yoshikawa Y, Miyagawa S, Toda K, Saito A, Sakata Y, Sawa Y. Myocardial regenerative therapy using a scaffold-free skeletal-muscle-derived cell sheet in patients with dilated cardiomyopathy even under a left ventricular assist device: a safety and feasibility study. Surg Today. 2018;48:200–10.

Yamaguchi S, Higashi M, Kanetaka K, Maruya Y, Kobayashi S, Hashiguchi K, et al. Rapid and chronological expression of angiogenetic genes is a major mechanism involved in cell sheet transplantation in a rat gastric ulcer model. Regen Ther. 2022;21:372–9.

Ohki T, Yamato M, Ota M, Takagi R, Murakami D, Kondo M, et al. Prevention of esophageal stricture after endoscopic submucosal dissection using tissue-engineered cell sheets. Gastroenterology. 2012;143:582–8. e2.

Yamaguchi N, Isomoto H, Kobayashi S, Kanai N, Kanetaka K, Sakai Y, et al. Oral epithelial cell sheets engraftment for esophageal strictures after endoscopic submucosal dissection of squamous cell carcinoma and airplane transportation. Sci Rep. 2017;7:17460.

Fujino A, Fuchimoto Y, Baba Y, Isogawa N, Iwata T, Arai K, et al. First-in-human autologous oral mucosal epithelial sheet transplantation to prevent anastomotic re-stenosis in congenital esophageal atresia. Stem Cell Res Ther. 2022;13:35.

Maeda M, Kanai N, Kobayashi S, Hosoi T, Takagi R, Ohki T, et al. Endoscopic cell sheet transplantation device developed by using a 3-dimensional printer and its feasibility evaluation in a porcine model. Gastrointest Endosc. 2015;82:147–52.

Kamao H, Mandai M, Ohashi W, Hirami Y, Kurimoto Y, Kiryu J, et al. Evaluation of the surgical device and procedure for extracellular matrix-scaffold-supported human iPSC-derived retinal pigment epithelium cell sheet transplantation. Invest Ophthalmol Vis Sci. 2017;58:211–20.

Miyagawa S, Domae K, Yoshikawa Y, Fukushima S, Nakamura T, Saito A et al. Phase I clinical trial of autologous stem cell-sheet transplantation therapy for treating cardiomyopathy. J Am Heart Assoc. 2017;6.

Kanzaki M, Takagi R, Washio K, Kokubo M, Yamato M. Bio-artificial pleura using an autologous dermal fibroblast sheet. NPJ Regen Med. 2017;2:26.

Kanzaki M, Takagi R, Washio K, Kokubo M, Mitsuboshi S, Isaka T, et al. Bio-artificial pleura using autologous dermal fibroblast sheets to mitigate air leaks during thoracoscopic lung resection. NPJ Regen Med. 2021;6:2.

Maeda M, Yamato M, Kanzaki M, Iseki H, Okano T. Thoracoscopic cell sheet transplantation with a novel device. J Tissue Eng Regen Med. 2009;3:255–9.

Osada H, Ho WJ, Yamashita H, Yamazaki K, Ikeda T, Minatoya K, et al. Novel device prototyping for endoscopic cell sheet transplantation using a three-dimensional printed simulator. Regen Ther. 2020;15:258–64.

Yamaguchi S, Kanetaka K, Maruya Y, Higashi M, Kobayashi S, Hashiguchi K, et al. Highly feasible procedure for laparoscopic transplantation of cell sheets under pneumoperitoneum in porcine model. Surg Endosc. 2022;36:3911–9.

Maruya Y, Kanai N, Kobayashi S, Koshino K, Okano T, Eguchi S, et al. Autologous adipose-derived stem cell sheets enhance the strength of intestinal anastomosis. Regen Ther. 2017;7:24–33.

Hara T, Soyama A, Adachi T, Kobayashi S, Sakai Y, Maruya Y, et al. Ameliorated healing of biliary anastomosis by autologous adipose-derived stem cell sheets. Regen Ther. 2020;14:79–86.

Yamashita M, Adachi T, Adachi T, Ono S, Matsumura N, Maekawa K, et al. Subcutaneous transplantation of engineered islet/adipose-derived mesenchymal stem cell sheets in diabetic pigs with total pancreatectomy. Regen Ther. 2021;16:42–52.

Inagaki A, Imura T, Nakamura Y, Ohashi K, Goto M. The liver surface is an attractive transplant site for pancreatic islet transplantation. J Clin Med. 2021;10.

Miyamoto D, Sakai Y, Huang Y, Yamasaki C, Tateno C, Hasegawa H, et al. Functional changes of cocultured hepatocyte sheets subjected to continuous liver regeneration stimulation in cDNA-uPA/SCID mouse: differences in transplantation sites. Regen Ther. 2021;18:7–11.

Acknowledgements

We thank Ms. Tomomi Murai, Ms. Hideko Hasegawa and Dr. Daisuke Miyamoto for helpful discussions and technical support.

This research was supported by AMED under Grant Number JP20bk0104112 (Kengo Kanetaka).

Author information

Authors and Affiliations

Tissue Engineering and Regenerative Therapeutics in Gastrointestinal Surgery, Nagasaki University Graduate School of Biomedical Sciences, Sakamoto 1-7-1, 8528102, Nagasaki, Japan

Kengo Kanetaka, Yasuhiro Maruya & Miki Higashi

Department of Surgery, Nagasaki University Graduate School of Biomedical Sciences, Nagasaki, Japan

Shun Yamaguchi, Ryo Matsumoto, Shinichiro Kobayashi & Susumu Eguchi

Department of Gastroenterology and Hepatology, Nagasaki University Graduate School of Biomedical Sciences, Nagasaki, Japan

Keiichi Hashiguchi & Kazuhiko Nakao

Terumo Corporation, Shibuya, Japan

Fumiya Oohashi, Masaki Matsumura & Takahiro Naka

Department of Chemical Engineering, Faculty of Engineering, Graduate School, Kyushu University, Fukuoka, Japan

Yusuke Sakai

Department of Cardiovascular Surgery, Osaka University Graduate School of Medicine, Osaka, Japan

Shigeru Miyagawa

Contributions

Conceptualization: Kengo Kanetaka, Shigeru Miyagawa, Susumu Eguchi. Operative procedure: Kengo Kanetaka, Yasuhiro Maruya, Shinichiro Kobayashi, Keiichi Hashiguchi. Cell processing: Miki Higashi, Fumiya Oohashi, Masaki Matsumura, Takahiro Naka. Device development: Yusuke Sakai. Writing - original draft preparation: Ryo Matsumoto, Shun Yamaguchi. Writing - review and editing: Kengo Kanetaka. Supervision: Susumu Eguchi, Kazuhiko Nakao.

Corresponding author

Correspondence to Kengo Kanetaka .

Ethics declarations

Ethics approval and consent to participate

The title of the approved study is “An exploratory clinical trial of TERGS0001 in laparoscopic and endoscopic cooperative surgery for superficial non-ampullary duodenal epithelial tumor”, registered with the Japan Registry of Clinical Trials as jRCT2073210094 and approved by the Institutional Review Board of Nagasaki University on January 20, 2021. Human cells in this study were used in full compliance with the Ethical Guidelines for Medical and Health Research Involving Human Subjects (Ministry of Health, Labour and Welfare (MHLW), Japan; Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan). Informed consent for publication was obtained from all patients in Japanese.

Competing interests

Drs. Matsumoto, Yamaguchi, Kobayashi, Hashiguchi, Nakao, and Eguchi have no conflicts of interest or financial ties to declare. Drs. Higashi, Kanetaka, and Maruya received funding for cooperative research on cell sheets from the Terumo Corporation.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Kanetaka, K., Maruya, Y., Higashi, M. et al. A first-in-human clinical study of laparoscopic autologous myoblast sheet transplantation to prevent delayed perforation after duodenal endoscopic mucosal dissection. Stem Cell Res Ther 15, 117 (2024). https://doi.org/10.1186/s13287-024-03730-3

Received : 12 January 2024

Accepted : 10 April 2024

Published : 23 April 2024

DOI : https://doi.org/10.1186/s13287-024-03730-3

  • Endoscopic submucosal resection
  • Superficial non-ampullary duodenal epithelial tumor
  • Laparoscopy
  • Cell-sheet transplantation
  • Clinical trial

Stem Cell Research & Therapy

ISSN: 1757-6512

  • Submission enquiries: Access here and click Contact Us
  • General enquiries: [email protected]

case study clinical trial reporting

Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.

  • View all journals
  • Explore content
  • About the journal
  • Publish with us
  • Sign up for alerts

Latest science news, discoveries and analysis

case study clinical trial reporting

The Maldives is racing to create new land. Why are so many people concerned?

case study clinical trial reporting

Mini-colon and brain 'organoids' shed light on cancer and other diseases

case study clinical trial reporting

Retractions are part of science, but misconduct isn’t — lessons from a superconductivity lab

case study clinical trial reporting

Monkeypox virus: dangerous strain gains ability to spread through sex, new data suggest

Dna from ancient graves reveals the culture of a mysterious nomadic people, atomic clock keeps ultra-precise time aboard a rocking naval ship, who redefines airborne transmission: what does that mean for future pandemics, ecologists: don’t lose touch with the joy of fieldwork chris mantegna, european ruling linking climate change to human rights could be a game changer — here’s how charlotte e. blattner.

case study clinical trial reporting

Lethal AI weapons are here: how can we control them?

case study clinical trial reporting

Living on Mars would probably suck — here's why

case study clinical trial reporting

Dozens of genes are linked to post-traumatic stress disorder

case study clinical trial reporting

What toilets can reveal about COVID, cancer and other health threats

Nato is boosting ai and climate research as scientific diplomacy remains on ice, how gliding marsupials got their ‘wings’, plastic pollution: three numbers that support a crackdown, first glowing animals lit up the oceans half a billion years ago.

case study clinical trial reporting

Any plan to make smoking obsolete is the right step

case study clinical trial reporting

Will AI accelerate or delay the race to net-zero emissions?

case study clinical trial reporting

Citizenship privilege harms science

We must protect the global plastics treaty from corporate interference martin wagner, un plastics treaty: don’t let lobbyists drown out researchers, current issue.

Issue Cover

Surprise hybrid origins of a butterfly species

Stripped-envelope supernova light curves argue for central engine activity, optical clocks at sea, research analysis.

case study clinical trial reporting

A chemical method for selective labelling of the key amino acid tryptophan

case study clinical trial reporting

Charles Darwin investigates: the curious case of primrose punishment

case study clinical trial reporting

Nanoparticle fix opens up tricky technique to forensic applications

case study clinical trial reporting

Coupled neural activity controls working memory in humans

Robust optical clocks promise stable timing in a portable package, targeting rna opens therapeutic avenues for timothy syndrome, bioengineered ‘mini-colons’ shed light on cancer progression, ancient dna traces family lines and political shifts in the avar empire.

case study clinical trial reporting

Breaking ice, and helicopter drops: winning photos of working scientists

case study clinical trial reporting

Shrouded in secrecy: how science is harmed by the bullying and harassment rumour mill

case study clinical trial reporting

Londoners see what a scientist looks like up close in 50 photographs

How ground glass might save crops from drought on a caribbean island, deadly diseases and inflatable suits: how i found my niche in virology research, books & culture.

case study clinical trial reporting

How volcanoes shaped our planet — and why we need to be ready for the next big eruption

case study clinical trial reporting

Dogwhistles, drilling and the roots of Western civilization: Books in brief

case study clinical trial reporting

Cosmic rentals

Las boriqueñas remembers the forgotten puerto rican women who tested the first pill, dad always mows on summer saturday mornings, nature podcast.

Nature Podcast

Latest videos

Nature briefing.

An essential round-up of science news, opinion and analysis, delivered to your inbox every weekday.

case study clinical trial reporting

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

Trump trial highlights: DA alleges Trump led 'cook the books' scheme to hide 'porn star payoff'

What to know about trump's trial today.

  • Opening statements were delivered today in former President Donald Trump's historic New York criminal trial.
  • Lawyer Matthew Colangelo from Manhattan District Attorney Alvin Bragg's office detailed an alleged "catch and kill" scheme with Trump's blessing. Trump's attorney Todd Blanche will deliver the opening statement for the defense.
  • A source with direct knowledge of the situation tells NBC News that former National Enquirer publisher David Pecker will be the first witness for the prosecution. Prosecutors have described Pecker as a central figure in the alleged scheme to bury claims from women who said they had affairs with Trump.
  • Judge Juan Merchan said that court will end at 12:30 p.m. ET today and at 2 p.m. tomorrow for Passover.
  • Trump faces 34 counts of falsifying business records related to the hush money payment to adult film actress Stormy Daniels. He has pleaded not guilty and denied a relationship with Daniels.

After trial tomorrow, Trump to meet with former Japanese prime minister

case study clinical trial reporting

Dasha Burns

Rebecca Shabad is in Washington, D.C.

In addition to being in court on Tuesday, Trump is expected to meet with former Japanese Prime Minister Taro Aso. The trial is scheduled to conclude by 2 p.m. tomorrow to allow Passover observations.

Trump's campaign painted the meeting as evidence of his fitness to return to the White House.

“When President Trump is sworn in as the 47th President of the United States, the world will be more secure and America will be more prosperous,” spokesman Brian Hughes said.

Trump rails against judge in New York civil fraud case after agreement was reached in $175 million bond hearing

Summer Concepcion

Hours after an agreement was reached at a hearing this morning on the $175 million bond in Trump’s New York civil fraud trial, the former president went on a rant about Judge Arthur Engoron, who is presiding over the civil case.

"He had no idea what he did in the trial. He charged hundreds of millions of dollars on something where I’m totally innocent," Trump told reporters after exiting the courtroom.

"But if you look at what happened today, Judge Engoron should not have done that charge, he should have gone to the business division where they have complex business trials. But actually it should have never been brought because I didn’t overestimate it," he added.

Trump attacks Cohen after leaving courtroom

Speaking to cameras outside the courtroom, Trump railed against the prosecutors for indicting him over a "legal expense" and he attacked Michael Cohen, which he's barred from doing by the judge's gag order.

"It’s a case as to bookkeeping which is a very minor thing in terms of the law, in terms of all the violent crime," he said. "This is a case in which you pay a lawyer and they call it a legal expense in the books."

"I got indicted for that," Trump said.

Trump said that the things Cohen got in trouble for "had nothing to do with me."

"He represented a lot of people over the years but they take this payment and they call it a legal expense... and this is what I got indicted over," he said.

Trump said that instead of being at the trial, he should be campaigning in states like Georgia and Florida.

"It's very unfair," he said. "I should be allowed to campaign."

Trial wraps for the day

Court concluded at 12:42 p.m., ending early to allow an alternate juror to make an emergency dental appointment.

Former President Donald Trump leaves court on April 22, 2024.

Trump's defense team makes objection to part of David Pecker's testimony

Trump lawyer Emil Bove said the defense objected to testimony from David Pecker about Dylan Howard and asked that it be struck from the record.

“We objected to some testimony about the whereabouts of Mr. Howard,” Bove said.

Prosecutor Joshua Steinglass said the testimony is admissible because it could be foundational and goes to witness availability.

Jurors adjourned for the day

Jonathan Allen

Gary Grumbach

Merchan has excused jurors for the rest of the day.

Jurors departed the courtroom at 12:25 p.m.

David Pecker describes relationship with Dylan Howard, former editor-in-chief at National Enquirer

David Pecker said that he was familiar with Dylan Howard, the former editor-in-chief of the National Enquirer and chief content officer at AMI.

Pecker said Howard reported directly to him and said his job was "to make sure we got the most exclusive and current content.”

This line of questioning suggested that Howard will not testify himself.

Pecker says National Enquirer engaged in 'checkbook journalism'

Corky Siemaszko

Trump's longtime pal David Pecker admitted that the National Enquirer paid for some of its scoops.

“We used checkbook journalism and we paid for stories,” Pecker, former publisher of the supermarket tabloid, testified. “I gave a number to the editors that they could not spend more than $10,000 to investigate, produce or publish a story.” 

Trump is more alert as Pecker testifies

Katie S. Phang

Trump is now more alert, paying attention and is leaning into the defense table. He's speaking with his lawyer Emil Bove in an animated way.

His eyes are wide open and he's looking in Pecker's direction.

Who is David Pecker?

David Pecker, a Trump ally who is expected to testify during the trial, was the CEO of the National Enquirer’s parent company, American Media Inc. (AMI). He played a key role in the alleged scheme behind the hush money payment to Stormy Daniels in an effort to cover up the affair she claims she had with Trump before the 2016 election (Trump has repeatedly denied her allegations).

Pecker, a longtime friend of Trump, helped cover up potentially damaging stories about him. Prosecutors said Pecker and Michael Cohen had met with Trump at the Trump Tower in 2015 to discuss how Pecker could help suppress negative stories about Trump’s relationships with women. They allegedly discussed an instance involving Daniels, who was paid $130,000 by Cohen to not speak to media outlets about her alleged affair with Trump.

In 2018, Pecker was granted immunity by federal prosecutors in their investigation into Cohen after he spoke with them about Cohen’s payment to Daniels. Cohen pleaded guilty to federal charges in connection with hush money payments to women, which he said he made at Trump’s direction.

In 2018, AMI admitted to paying $150,000 to former Playboy model Karen McDougal to silence her over an alleged affair she had with Trump before the 2016 election. Trump has denied having an affair with McDougal.

Court resumes; prosecution calls David Pecker

The prosecution has called David Pecker to the stand.

Pecker, wearing a yellow tie, with gray-and-white hair combed back, enters the courtroom from a side door.

Pecker says he is 72 and has been married for 36 years.

Trump lawyer plays New Yorker card

Blanche wrapped up his opening statement by trying to appeal to the jury as New Yorkers.

“Listen, use your common sense," he said. "We’re New Yorkers, it’s why we’re here.” 

Blanche reminded the panel members that they assured the court they could put aside the fact that Trump was once president and is now running again.

"We trust you will base it on what you hear in this courtroom and it will be a very swift not guilty verdict," he said.

Agreement reached in the Trump New York civil fraud case

Chloe Atkins

In other Trump legal news, an agreement was reached this morning at the $175 million bond hearing in the Trump New York civil fraud case.

Chris Kise, Trump’s attorney, said they agreed with New York Attorney General Letitia James’ office to maintain the Schwab account in cash. Knight will have exclusive control of the account and shall not trade or withdraw from the account for any purpose other than to satisfy the condition of the bond.

Kise said that they will provide a monthly account statement to the attorney general’s office and that they will revise the pledge and control agreement so that it cannot be amended without court approval.

Kise said that the parties will submit a stipulation that will memorialize this by Thursday.

Blanche concludes, trial takes 10-minute recess

Blanche concluded after 35 minutes and 10 seconds.

At the conclusion, the court took a 10-minute recess.

Blanche details Trump's relationship to Daniels

Blanche said that Daniels, whom he identified by her legal name, Stephanie Clifford, is "biased against President Trump."

Blanche said that Trump met her in 2006 when he was running the TV show "The Apprentice" and looking for contestants. He said that in 2016 she saw her chance to make a lot of money, $130,000, by alleging she had a sexual encounter with Trump.

“I’m going to say something else about her testimony, and this is important: It doesn’t matter," he told the jury. “Her testimony, while salacious, does not matter."

Trump's lawyer tests Merchan

Blanche said Trump believed the catch-and-kill agreements were lawful because they were made with the involvement of lawyers.

But through an earlier court ruling, Merchan expressly barred Trump from using this diluted “advice of counsel” defense, holding Trump could not protect certain communications from discovery under the attorney-client privilege while, at the same time, telling jurors that Trump believed his actions were lawful because lawyers were involved on both sides.

Trump watches jury as his lawyer argues

As Blanche moves through his opening statement, Trump is watching the jurors — occasionally moving his eyes to Blanche.

It’s hard to detect from the closed-circuit camera trained on his table, but from behind, the angle of his profile shows he has been focused on the jury box for portions of the statement.

Trump’s mouth is drawn in a serious expression that betrays no emotion.

Merchan sustains objection about Cohen

Blanche accused Cohen of previously lying in a courtroom, presumably referencing Cohen’s plea to tax evasion charges, which Cohen has since recanted, saying he did it only to spare his family.

Merchan summoned the lawyers back to the bench after Blanche said that Cohen has “testified under oath and lied.” 

Merchan sustained an objection from Colangelo.

Blanche rails against Cohen for attacking Trump

Blanche leaned next into calling Cohen's credibility into question.

“He has talked extensively about his desire to see President Trump go to prison," Blanche said. “Last night, 12 hours ago, Mr. Cohen on a public forum said that he had a mental excitement about this trial and his testimony.”

He said that Cohen's goal is "getting President Trump."

Blanche added that Cohen has testified under oath and lied.

Defense trying to poke holes

Laura Jarrett

Blanche’s job as a defense lawyer here isn’t to tell a neat story in the same way as the prosecution — it’s to raise doubt, poke holes and plant questions in the jury’s mind.

We see this on full display today: the defense does not dispute the payoff Daniels received; it simply says Trump did nothing wrong. But Blanche doesn’t (yet) tackle how the alleged scheme was first hatched in 2015. He jumps to 2017 and the Cohen reimbursement checks, an easier part of the timeline for the defense.

Judge calls lawyers to the bench

Merchan has asked the lawyers to approach the bench after prosecutors raised a fourth objection to a portion of Blanche’s opening statement.

There were about a dozen lawyers, between the two sides, huddled around Merchan.

Trump lawyer argues 'there's nothing wrong with trying to influence an election'

“I have a spoiler alert: There’s nothing wrong with trying to influence an election," Blanche said in his opening statement. "It’s called democracy."

Prosecution objects during defense opening statement

Prosecutors objected to Blanche saying that a nondisclosure agreement is “not illegal.” Merchan sustained.

Blanche rephrased and said it is “perfectly legal.” Prosecution objected again. Merchan overruled and let Blanche continue.

Blanche argues Trump had nothing to do with the whole series of events

Blanche argued that Trump wasn't involved in covering up the payments.

"President Trump had nothing to do, had nothing to do with the invoice, with the check being generated, or with the entry on the ledger," he said.

Blanche argues the events in the case were 'years and years ago'

Ginger Gibson, Senior Washington Editor

Blanche is trying to find any hole he can poke in the prosecution's case and starts by pointing out the dates when the events occurred.

Calling the events "pre-Covid," Blanche emphasized that some of the discussions dated back to 2015.

Prosecutors would not have been able to bring the case until after 2021, because Trump was largely shielded from prosecution while he was president.

Trump lawyer argues 'frugal' Trump wouldn't have paid Cohen that much

Blanche moved away from his lectern and the microphone to give the jury a clearer look at him as he noted that Trump paid Cohen $420,000, rather than $130,000, trying to cast doubt on the prosecution’s argument that it was a repayment for the Daniels nondisclosure agreement.

“Would a frugal businessman … would a man who pinches pennies” repay a $130,000 debt to the tune of $420,000, Blanche asked the jury.

“This was not a payback.”

Trump lawyer paints his client as a man 'just like me'

While making the case for his client’s innocence, Blanche attempted to humanize the former president in the defense’s opening statement.

Trump is a husband, a father, "a man just like me,” he said.

Trump lawyer begins opening statement by declaring Trump didn't commit any crimes

The former president's lead lawyer began his opening statement by saying that Trump did not commit any crimes.

Blanche said that the DA's office should have never brought the case. He said that Trump is presumed innocent and tells the jury that they will find him not guilty.

Trump's lawyer said that the jury has seen Trump for years. "He’s in some ways larger than life. But he’s also here in this courtroom, doing what any of us would do. Defending himself.”

He added that they will refer to Trump as President Trump because he earned the title as the 45th president.

"We will call him President Trump out of respect," Blanche said. “It’s the office he’s running for right now, as the Republican nominee ... he’s also a man, he’s a husband, he’s a father and just like me.”

'Penny-pincher' Trump was willing to pay extra for catch-and-kill stories, prosecutor says

Colangelo said Trump was a "frugal businessman" but didn't count coins when it came to covering up his alleged affairs.

Prosecutors will produce evidence to show that Trump “was a very frugal businessman, believed in pinching pennies," he said. "He believed in watching every dollar. He believed in negotiating every bill. It’s all over all of the books he’s written.”

But, Colangelo said, "When it came time to pay Michael Cohen back for the catch and kill deal, you’ll see he didn’t negotiate it down. He doubled it.”

This, the prosecutor said, shows "just how important it was to him to hide the true nature” of the payments.

Prosecutor says Cohen's testimony will be backed up with emails, texts, phone logs, business documents

Colangelo told the jury that Cohen's testimony during the trial will be backed up by emails, text messages, phone logs and business documents.

“And it will be backed up by Donald Trump’s own words on tape, in social media posts, in his own books, and in videos of his own speeches," he said.

Colangelo finishes opening statement

Colangelo finished his opening statement, speaking for 45 minutes and 30 seconds.

The jury watched; he seemed to hold their attention.

Trump lawyer Blanche is up next.

Prosecutor says jurors will learn Cohen 'has made mistakes in his past'

Colangelo said jurors can expect to hear a lot about Cohen’s backstory as Trump’s fixer.

“You will learn, and we will be very up front about it, the fact that Michael Cohen like other witnesses in this trial, has made mistakes in his past,” he said.

Prosecution uses Trump's words

The prosecution is going to great lengths to echo Trump’s own language in accusing him of committing crimes to steal an election — election integrity, fraud and the like.

Toward the conclusion of his statement, Colangelo called the alleged scheme “an illegal conspiracy to undermine the integrity of a presidential election” and pointed to “the steps that Donald Trump took to conceal that illegal election fraud.”

Prosecutor says it was a 'double lie' how Trump and his team covered up payments

Colangelo said that the Trump Organization was not in the business of paying people twice.

He said the scheme showed how important it was to hide the payment and the overall election conspiracy. He said Trump agreed to pay Cohen back in 12 monthly installments of $35,000 over the course of 2017, and that Cohen would send bogus invoices to make the payments look like they were for legal services.

“That was a double lie,” Colangelo said. “There was no retainer agreement.”

“It was instead what they thought was a clever way to pay Cohen back without being too obvious about it,” he said.

Analysis of prosecution's opening statement

Listening to the prosecution’s story this morning, it’s striking to think how differently things might have turned out if federal prosecutors had originally charged Trump in connection with a campaign-related violation. They couldn’t at the time — they charged Cohen because Trump was president and the Justice Department has a policy of not charging a sitting president.

Instead, prosecutors here in New York have charged him with falsifying business records after the fact. The hurdle for prosecutors now is the timeline in their story. How would “cooking the books” in 2017, as they say, after nearly all of the damning facts had already been exposed by tons of reporting, hide anything from voters?

Prosecutor introduces former Playboy model Karen McDougal

Colangelo says a second catch-and-kill scheme was hatched to cover up Trump's alleged affair with former Playboy Playmate Karen McDougal.

"The defendant desperately did not want this information about Karen McDougal to become public because he was concerned about the election,” Colangelo said of Trump.

Pecker will testify that Trump met with him after the election to thank him, prosecution says

Pecker, the former publisher of the National Enquirer, will testify that Trump met with him after the election to thank him for dealing with the stories about women claiming to have had an affair with him, Colangelo said.

He then noted that Trump brought Pecker to the White House the following year to further show his appreciation.

Prosecutor explains Stormy Daniels situation to jury

Prosecutor Matthew Colangelo said that another woman, adult film actress Stormy Daniels, came forward before the election with an account of an encounter she said she had with Trump while he was married.

He said that Cohen learned about the allegations and discussed it with Trump, who didn't want the story to come out, saying that it would be devastating to the campaign.

Colangelo said that Cohen came up with a deal to buy her story with a nondisclosure agreement and she agreed not to disclose her story for $130,000. Trump wanted to delay payment for as long as possible but ultimately he agreed to the payoff.

They eventually agreed that Cohen would create a shell company to transfer the money and Cohen confirmed that Trump would reimburse him, the prosecutor said. Colangelo said that on Oct. 27, 2016, Cohen wired $130,000 to Daniels' lawyers.

Prosecutor says Trump 'cooked the books'

Colangelo is trying to make the case for how the jury should get from the hush money payment to document fraud.

Trump Org. couldn't write a check with "Reimbursement for porn star payoff" on the memo line, Colangelo says.

“So they agreed to cook the books” and make it look like the repayment was actually income, he said.

Judge watches prosecution's opening statement closely

Merchan is watching the prosecution’s opening statement closely, but his eyes are going back and forth — pingpong style — between Colangelo and the jurors. He’s rocking gently in his chair with his chin between his thumb and forefinger.

Prosecutor vows to jurors 'you’ll hear defendant’s own voice on a tape'

Colangelo promised that jurors will hear the defendant's "own voice on a tape" in the alleged scheme to silence women who claimed to have had affairs with Trump.

Prosecutor is quoting Trump in the 'Access Hollywood' tape

Colangelo just quoted Trump from the infamous "Access Hollywood" tape that came out in October 2016, just weeks before the election, to the jury.

Colangelo quoted Trump saying that he could grab women by the "p----."

He said that those were Trump’s words one month before Election Day and that “the impact of that video on the campaign was immediate and explosive." Merchan ruled that prosecutors can’t play the tape.

Prosecutor explains $30,000 payment to former Trump Tower doorman

Colangelo explained that Pecker and Cohen learned about a former Trump Tower doorman who was trying to sell information about Trump having a child out of wedlock.

He said Pecker contacted Cohen immediately, and Cohen told Trump, who told him to take care of it. They then negotiated a $30,000 agreement to buy the story, he said.

Colangelo argued that Pecker was not acting as a publisher, but as a co-conspirator.

Trump lawyers listen intently to prosecution's opening statement

Trump lawyers Blanche and Susan Necheles have turned their seats toward Colangelo as he delivers the prosecution’s opening statement. While Trump continues to face forward with hooded eyes, his lawyer Emil Bove is seen taking notes, looking down in his lap.

Blanche, who does not appear to be taking notes, is also watching the jury as Colangelo continues to deliver his opening statement.

Prosecutor explains alleged roles of Cohen and Pecker in scheme

Colangelo explained Cohen and Pecker’s alleged roles in the hush money scheme.

“Cohen’s job really was to take care of problems for the defendant," he said. “He was Trump’s fixer.”

Colangelo said that together, the two conspired to influence the outcome of the 2016 election and that Pecker would act as eyes and ears for Trump. Pecker's job was to gather information that could be harmful and report that to Cohen, he said.

Prosecutor says Trump began reimbursing Cohen after election

Colangelo, in his opening statement, said Trump started paying back Cohen for making the hush money payments after winning the White House.

"After the election, the defendant then reimbursed Cohen for that payment through a series of monthly checks all of which were processed through the defendant’s company, the Trump Organization," he said.

Merchan advises jurors against reading about or researching the case online or listening on the radio

Merchan urged jurors not to read or listen to any accounts of the hush money case on the radio or the internet. He also instructed jurors to not conduct research on the case at the library, via Google or any other news source.

Merchan stressed that decisions made by jurors must be based solely on evidence presented in the courtroom.

Prosecutor says 'this case is about criminal conspiracy'

Prosecutor Matthew Colangelo says in his opening statement, “This case is about criminal conspiracy.”

Laying out the prosecution's case in the courtroom for the first time, he described a conspiracy between Trump and Cohen.

He argued that Trump tried to corrupt the 2016 election.

“Then, he covered up that criminal conspiracy by lying in his New York business records over and over and over again," Colangelo said.

Opening statements are beginning

The opening statements are beginning.

Trump's eyes are shut

Trump's eyes are shut and across the aisle, Bragg is catching a glimpse of the former president from his seat in the front row of the gallery.

Merchan reads out jury instructions

Merchan read the jury instructions aloud and explained the stages of the trial. 

He reminded jurors of the basic principles of the law and said that, at the conclusion of the case, he will explain how the law applies to the charged crime and that prosecutors must prove their case beyond a reasonable doubt.

Merchan also explained the role of a court reporter, before going on to tell jurors, “What I say is not evidence.”

“You must decide this case on the evidence,” he said.

“What the lawyers say at any time is not evidence,” the judge added.

Merchan says there are six prior court decisions that are admissible on cross-examination for Trump

Merchan said that if Trump takes the stand, prosecutors can bring up six determinations in four separate proceedings:

  • Feb. 16: The N.Y. fraud case, in which a judge found Trump violated the law in stating the value of his assets.
  • Oct. 28, 2022: Trump failed to remove an untrue, personally identifying post about a law clerk from DonaldJTrump.com and was fined $5,000.
  • Oct. 21, 2023: Trump intentionally violated a court order by continually attacking a court clerk and was fined $10,000.
  • The court will allow prosecutors to bring up how the defendant defamed E. Jean Carroll by making a false statement.
  • Carroll v. Trump II: The court will allow prosecutors to bring up how a jury found Trump defamed E. Jean Carroll by making false statements with actual malice.
  • People by James v. Trump: The Donald J. Trump Foundation engaged in repeated and willful self-dealing transactions.

Jury being seated

The jury is being brought into the room and seated, for the first time, as a group.

No Trump family members appear to be in the courtroom

It does not appear that there are any of Trump’s family members present in the courtroom this morning.

Bragg has entered the courtroom

Manhattan DA Alvin Bragg is in the courtroom.

Juror 9 was concerned about media attention but will remain on jury

Merchan said that the court received a call from juror 9 who expressed concern about media attention. After a meeting with the juror and lawyers for both sides, the judge announced, however, that the juror will remain on the jury.

Merchan says court will conclude at 12:30 p.m. today

Merchan said alternate juror 6 would be able to make an emergency dentist appointment at 3 p.m. for a toothache. But the appointment was moved up to 1:20 p.m., prompting the judge to tell her that the court would conclude at 12:30 p.m. today.

Lawyers estimate length of opening statements

Prosecutor Joshua Steinglass said the prosecution’s opening statement would be about 40 minutes and Blanche said the defense's would be about 25 minutes.

A key source of money for Trump's legal fees is drying up

Ben Kamisar

Trump has covered tens of millions of dollars in legal fees from his leadership PAC, Save America. But a new fundraising report filed over the weekend shows that the revenue stream might be drying up.

Save America started April with just $4.1 million in the bank as the group has paid almost $60 million in legal fees since the start of last year (the majority to firms related to his various trials). But there's a bigger warning sign in the filings for Trump.

Shortly before announcing his presidential bid in 2022, Save America sent the top pro-Trump super PAC, MAGA Inc., $60 million to be used to boost his candidacy from the outside. But amid the former president's legal crunch, MAGA Inc. has been slowly refunding that donation, providing an important injection of funds into Save America as it pays Trump's legal fees. (Note: Virtually all of the money Save America raised last month came from a refund.)

The new filings show that MAGA Inc. has refunded all but $2.8 million of that $60 million donation. So, Trump will need to find new ways to fund his legal defense, as there appears to be no sign those expenses are going away anytime soon.

Court is in session

The judge is on the bench and trial has begun for the day.

Former President Donald Trump at Manhattan Criminal Court on April 22, 2024.

Trump's lawyers will work to try to undermine Michael Cohen's credibility.

All the players in Trump’s hush money trial

The charges against Trump stem from an investigation by the Manhattan District Attorney’s Office into an alleged “catch and kill” scheme to bury negative stories about Trump before the 2016 presidential election in a bid to influence the outcome.

According to prosecutors, several people participated in the scheme, which involved paying people off to buy their silence and covering up the payments in Trump’s business records.

Here are the key people in the case who will come up during the trial, potentially as witnesses.

Protesters outside the courthouse

Anti-Trump protesters demonstrate Monday outside the Manhattan courthouse where the former president is on trial.

A group of protesters is demonstrating outside the courthouse. Some are holding signs. One says, "Election interference is a crime."

"Slept with a porn star. Screwed the voters," another says, with a photo of Trump's face.

Another has images of dictators and then Trump's face saying that they all believe they're above the law.

Trump arrives at the courthouse

Trump arrived at the courthouse at 8:52 a.m.

Former President Donald Trump arrives at Manhattan Criminal Court on April 22, 2024.

Trump criticizes hush money case in overnight post

In an overnight post on his Truth Social platform, Trump blasted Bragg while complaining about the case.

"The Corrupt Soros Funded District Attorney, Alvin Bragg, who has totally lost control of Violent Crime in New York, says that the payment of money to a lawyer, for legal services rendered, should not be referred to in a Ledger as LEGAL EXPENSE," he wrote. "What other term would be more appropriate??? Believe it or not, this is the pretext under which I was Indicted, and that Legal Scholars and Experts CANNOT BELIEVE."

Trump also repeated his claims of the hush money trial being part of an effort to interfere with his presidential campaign.

"It is also the perfect Crooked Joe Biden NARRATIVE — To be STUCK in a courtroom, and not be allowed to campaign for President of the United States!" he wrote.

Here's what you missed last week

Katherine Doyle

  • Day 1, April 15: On the first day of the New York hush money trial, Trump argued that the criminal justice system is being weaponized against him and repeatedly claimed that the prosecution is engaging in “election interference” amid his re-election campaign. Trump sat at the defense table as the court worked to eliminate jurors who said they could not be fair and impartial in the case — at least 50 out of 96 of the first batch of prospective jurors were excused for that reason.
  • Day 2, April 16 : The challenge of finding 12 impartial jurors in Democratic-leaning Manhattan continued as lawyers reviewed old social media posts, pressed jurors on where they get their news and sought to nix candidates they thought could potentially taint the case. Merchan had warned Trump against attempting to intimidate potential jurors.
  • Trial off day, April 17 : A day after the first seven jurors were selected out of a pool of nearly 100 people, Trump slammed the jury selection process on the trial’s scheduled off-day. The presumptive GOP presidential nominee erroneously insinuated that he should be entitled to unlimited strikes of potential jurors in the hush money case.
  • Day 3, April 18: Jury selection continued and Trump paid closer attention to potential jurors who brought up certain topics that piqued his interest, such as experience in law enforcement, real estate and the media they consume. Two jurors were dismissed after having been seated, with one juror doubting her ability to be fair or impartial and another after prosecutors raised concerns about a potential criminal history he did not disclose. At the end of the day, Merchan swore in the 12-person jury, plus an alternate.
  • Day 4, April 19 : The five remaining alternates were chosen and sworn in. In a dramatic moment outside the courthouse, a man set himself on fire and later died of his injuries.

Meet the 12 jury members of Trump’s hush money trial

All 12 jurors, plus an alternate, were selected to serve on the jury last week after they made it clear to both sides that they could render a fair and impartial verdict.

Prosecutors and the defense team whittled down a pool of nearly 200 people to 12 jurors and an alternate after grilling them on their personal history, political views, social media posts and ability to remain impartial despite any opinions they might have about the polarizing former president.

Read the full story here.

Pecker expected to be first witness

A source with direct knowledge of the situation tells NBC News that David Pecker will be the first witness for the prosecution beginning today. This source says that due to the Sandoval hearing, opening statements and the gag order hearing tomorrow, they don’t expect the cross-examination of Pecker to happen until Thursday.

Prosecutors have said that Pecker, the longtime former publisher of the National Enquirer, is a central figure in the alleged coverup scheme and the architect of the “catch and kill” plots.

Opening statements and first witness on tap for Trump hush money trial

Dareh Gregorian

Opening statements are set to begin this morning at 9:30 a.m. ET in the case of the People of the State of New York versus Donald Trump, the first criminal trial of a former president.

Attorneys on both sides will present their opening statements after the judge delivers instructions to the 12-person jury and six alternates.
