  • Open access
  • Published: 11 March 2021

Evaluating cancer research impact: lessons and examples from existing reviews on approaches to research impact assessment

  • Catherine R. Hanna (ORCID: orcid.org/0000-0002-0907-7747) 1
  • Kathleen A. Boyd 2
  • Robert J. Jones 1

Health Research Policy and Systems, volume 19, Article number: 36 (2021)


Abstract

Background

Performing cancer research relies on substantial financial investment and on contributions of time and effort from patients. It is therefore important that this research has real-life impacts which are properly evaluated. However, the optimal approach to cancer research impact evaluation is not clear. The aim of this study was to undertake a systematic review of review articles describing approaches to impact assessment, and to identify examples of cancer research impact evaluation within these reviews.

Methods

In total, 11 publication databases and the grey literature were searched to identify review articles addressing approaches to research impact assessment. Information was extracted on methods for data collection and analysis, categories of impact, and frameworks used for the purposes of evaluation. Empirical examples of impact assessments of cancer research were then identified from these literature reviews. The approaches used in these examples were appraised, with reflection on which methods would suit cancer research impact evaluation going forward.

Results

In total, 40 literature reviews were identified. The methods most often used to collect and analyse data for impact assessment were surveys, interviews and documentary analysis. The key categories of impact spanning the reviews were summarised, and a list of frameworks commonly used for impact assessment was generated; the Payback Framework was described most often. Fourteen examples of impact evaluation for cancer research were identified, ranging from an assessment of a national, charity-funded portfolio of cancer research to the clinical practice impact of a single trial. A set of recommendations for approaching cancer research impact assessment was generated.

Conclusions

Impact evaluation can demonstrate whether, and why, conducting cancer research is worthwhile. A mixed-methods, multi-category assessment organised within a framework will provide a robust evaluation, but the ability to perform this type of assessment may be constrained by time and resources. Whichever approach is used, easily measured but inappropriate metrics should be avoided. Going forward, dissemination of the results of cancer research impact assessments will allow the cancer research community to learn how best to conduct these evaluations.


Background

Cancer research attracts substantial public funding globally. For example, the National Cancer Institute (NCI) in the United States of America (USA) had a 2020 budget of over US$6 billion. In addition to public funds, there is huge monetary investment from private pharmaceutical companies, as well as altruistic investment of time and effort from the patients and families who participate in cancer research. In the United Kingdom (UK), over 25,000 patients were recruited to cancer trials funded by one charity (Cancer Research UK (CRUK)) alone in 2018 [1]. Research within the field of oncology remains an ongoing priority because cancer is highly prevalent, with up to one in two people now receiving a diagnosis of cancer in their lifetime [2, 3], and because, despite current treatments, mortality and morbidity from cancer remain high [2].

In the current era of increasing austerity, there is a desire to ensure that the money and effort spent conducting any type of research deliver tangible downstream benefits for society with minimal waste [4, 5, 6]. These wider, real-life benefits from research are often referred to as research impact. Given the significant resources required to conduct cancer research in particular, it is reasonable to ask whether this investment is leading to the longer-term benefits expected, and to query the opportunity cost of not spending the same money directly within other public sectors such as health and social care, the environment or education.

Interest in evaluating research impact has been rising, driven partly by the actions of national bodies and governments. For example, in 2014 the UK government allocated its £2 billion annual research funding to higher education institutions based in part on the impact of each institution's research, judged through an exercise known as the Research Excellence Framework (REF). The proportion of funding dependent on impact assessment will increase from 20% in 2014 to 25% in 2021 [7].

Despite the clear rationale for, and contemporary interest in, research impact evaluation, assessing the impact of research comes with challenges. First, there is no single definition of what research impact encompasses, and the evaluation approach may differ depending on the definition chosen. Second, despite the recent surge of interest, knowledge of how best to perform assessments, and the infrastructure and experience for doing so, are lagging [6, 8, 9]. For the purposes of this review, the definition of research impact given by the UK Research Councils is used (see Additional file 1 for the full definition). This definition was chosen because it takes a broad perspective, encompassing academic, economic and societal views of research impact [10].

There is a lack of clarity on how to perform research impact evaluation, and this extends to cancer research. Although there is substantial interest from cancer funders and researchers [11], this interest is not accompanied by instruction or reflection on which approaches would be suited to assessing the impact of cancer research specifically. In a survey of Australian cancer researchers, respondents indicated that they felt a responsibility to deliver impactful research, but that evaluating and communicating this impact to stakeholders was difficult. Respondents also suggested that the types of impact expected from research, and the approaches used to evaluate them, should be discipline specific [12]. Being cognisant of the discipline-specific nature of impact assessment, and understanding the uniqueness of cancer research when approaching such evaluations, underpins the rationale for this study.

The aim of this study was to explore approaches to research impact assessment, identify those approaches that have been used previously for cancer research, and use this information to make recommendations for future evaluations. For the purposes of this study, cancer research included basic science and applied research, research into any malignant disease in paediatric or adult patients, and studies spanning the nursing, medical and public health elements of cancer research.

The study objectives were to:

1. Identify existing literature reviews that report approaches to research impact assessment, and summarise these approaches.

2. Use these literature reviews to identify examples of cancer research impact evaluations, describe the approaches to evaluation used within these studies, and compare them to those described in the broader literature.

This approach was taken because of the anticipated difficulty of conducting a primary review of empirical examples of cancer research impact evaluation, and to allow a critique of empirical studies in the context of lessons learnt from the wider literature. A primary review would have been difficult because examples of cancer research impact evaluation, such as assessments of research impact on clinical guidelines [13] or on clinical practice [14, 15, 16], are often not categorised in publication databases under the umbrella term of research impact. Reasons for this include the lack of a medical subject heading (MeSH) term relating to research impact assessment and the differing definitions of research impact. In addition, many authors do not recognise their evaluations as sitting within the discipline of research impact assessment, which is a novel and emerging field of study.

General approach

A systematic search of the literature was performed to identify existing reviews of approaches to assess the impact of research. No restrictions were placed on the discipline, field, or scope (national/global) of research for this part of the study. In the second part of this study, the reference lists of the literature reviews identified were searched to find empirical examples of the evaluation of the impact of cancer research specifically.

Data sources and searches

For the first part of the study, 11 publication databases and the grey literature from January 1998 to May 2019 were searched. The electronic databases were Medline, Embase, Health Management and Policy Database, Education Resources Information Centre, Cochrane, Cumulative Index of Nursing and Allied Health Literature, Applied Social Sciences Index and Abstract, Social Services Abstracts, Sociological Abstracts, Health Business Elite and Emerald. The search strategy specified that article titles must contain the word “impact”, as well as a second term indicating that the article described the evaluation of impact, such as “model” or “measurement” or “method”. Additional file 1 provides a full list of search terms. The grey literature was searched using a proforma. Keywords were inserted into the search function of websites listed on the proforma and the first 50 results were screened. Title searches were performed by either a specialist librarian or the primary researcher (Dr. C Hanna). All further screening of records was performed by the primary researcher.
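As an illustration only, the title-screening rule described above can be sketched as a simple boolean filter. The second-term list shown here is a hypothetical, abbreviated subset; the full list of search terms is in Additional file 1.

```python
# Illustrative sketch of the title-screening rule: keep a title only if it
# contains "impact" AND a second term indicating that the article describes
# the evaluation of impact. EVALUATION_TERMS is an assumed, abbreviated list.
EVALUATION_TERMS = ["model", "measurement", "method", "framework", "assessment"]

def keep_title(title: str) -> bool:
    t = title.lower()
    return "impact" in t and any(term in t for term in EVALUATION_TERMS)

titles = [
    "A framework for research impact assessment",      # kept
    "The impact of smoking on lung cancer incidence",  # dropped: no second term
]
kept = [t for t in titles if keep_title(t)]
```

In practice this filtering was performed within the databases' own search interfaces; the sketch simply makes the boolean logic of the strategy explicit.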

Following an initial title screen, 800 abstracts were reviewed and 140 selected for full review. Articles were retained for final inclusion by assessing each against specific inclusion criteria (Additional file 1). There was no assessment of the quality of the included reviews other than describing the search strategy used. If two articles drew primarily on the same review but contributed a different critique of the literature or different methods to evaluate impact, both were kept. If a review article was part of a grey literature report, for example a thesis, but was also later published in a journal, only the journal article was kept. Of the 140 articles read in full, 27 met the inclusion criteria, and a further 13 relevant articles were found through reference-list searching from the included reviews [17].

For the second part of the study, the reference lists from the literature reviews (n = 4479 titles) were manually screened by the primary researcher [17] to identify empirical examples of assessment of the impact of cancer research. Summary tables and diagrams from the reviews were also searched using the words “cancer” and “oncology” to identify relevant articles that may have been missed by reference-list searching. After removal of duplicates, 57 full articles were read and assessed against inclusion criteria (Additional file 1). Figure 1 shows the search strategy for both parts of the study, reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [18].

Figure 1. Search strategies for this study

Data extraction and analysis

A data extraction form produced in Microsoft® Word 2016 was used to collect details of each literature review. These included year of publication, location of the primary author, research discipline, aims of the review as described by the authors, and the search strategy (if any) used. Information on approaches to impact assessment was extracted under three themes, identified from a prior scoping review as important factors when planning and conducting research impact evaluation: (i) categorisation of impact into different types, depending on who or what is affected by the research (individuals, institutions, parts of society, or the environment) and how they are affected (for example, health, monetary gain or sustainability); (ii) methods of data collection and analysis for the purposes of evaluation; and (iii) frameworks to organise and communicate research impact. There was space to document any other key findings the researcher deemed important. After data extraction, lists of commonly described categories, methods of data collection and analysis, and frameworks were compiled. These lists were tabulated or presented graphically, and narrative analysis was used to describe and discuss the approaches listed.
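The extraction described above was performed with a Word form, not software; purely as an illustration, the per-review record and the subsequent compilation of summary lists could be modelled as follows (all field and function names are hypothetical):

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical record mirroring the extraction form described above:
# basic bibliographic fields plus the three pre-specified themes.
@dataclass
class ReviewExtraction:
    year: int
    author_location: str
    discipline: str
    aims: str
    search_strategy: Optional[str]                               # None if not reported
    impact_categories: List[str] = field(default_factory=list)   # theme (i)
    data_methods: List[str] = field(default_factory=list)        # theme (ii)
    frameworks: List[str] = field(default_factory=list)          # theme (iii)
    other_findings: str = ""

def framework_frequency(records: List[ReviewExtraction]) -> Counter:
    """Compile the list of commonly described frameworks by counting mentions."""
    return Counter(f for r in records for f in r.frameworks)
```

Compiling the summary lists of categories, methods and frameworks then reduces to counting values across records.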

For the second part of the study, a separate data extraction form produced in Microsoft® Excel 2016 was used. Basic information on each study was collected, such as year of publication, location of the primary authors, research discipline, aims of the evaluation as described by the authors, and the type of research under assessment. Data were also extracted from these empirical examples using the same three themes outlined above, and the approaches used in these studies were compared with those identified from the literature reviews. Finally, a set of recommendations for future evaluations of cancer research impact was developed by identifying the strengths of the empirical examples and using the lists generated in the first part of the study to identify potential improvements.

Part I: Identification and analysis of literature reviews describing approaches to research impact assessment

Characteristics of included literature reviews

Forty literature reviews met the pre-specified inclusion criteria, and the characteristics of each review are outlined in Table 1. A large proportion (20/40; 50%) were written by primary authors based in the UK, followed by the USA (5/40; 13%) and Australia (5/40; 13%), with the remainder from Germany (3/40; 8%), Italy (3/40; 8%), the Netherlands (1/40; 3%), Canada (1/40; 3%), France (1/40; 3%) and Iran (1/40; 3%). All reviews were published from 2003 onwards, despite the search strategy dating from 1998. Raftery et al. 2016 [19] was an update of Hanney et al. 2007 [20], and both were reviews of studies assessing research impact relevant to a programme of health technology assessment research. The narrative review article by Greenhalgh et al. [21] was based on the same search strategy used by Raftery et al. [19].
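As a quick arithmetic check (all figures taken directly from the text), the full-text inclusions plus reference-list additions reconcile with the per-country breakdown:

```python
# 27 reviews met the inclusion criteria at full-text review, and 13 more
# were found by reference-list searching; both totals should equal the
# 40 included reviews broken down by primary-author country above.
included = 27 + 13
by_country = {"UK": 20, "USA": 5, "Australia": 5, "Germany": 3,
              "Italy": 3, "Netherlands": 1, "Canada": 1, "France": 1, "Iran": 1}

assert included == 40
assert sum(by_country.values()) == 40
```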

Approximately half of the reviews (19/40; 48%) described approaches to evaluating research impact without focusing on a specific discipline, and nearly as many (16/40; 40%) focused on evaluating the impact of health or biomedical research. Two reviews looked at approaches to impact evaluation for environmental research, and one focused on social sciences and humanities research. Finally, two reviews provided a critique of the impact evaluation methods used by different countries at a national level [22, 23]. None of these reviews focused specifically on cancer research.

Twenty-five reviews (25/40; 63%) specified search criteria, and 11 of these included a PRISMA diagram. The articles that did not outline a search strategy were often expert reviews of impact assessment approaches, whose authors stated that they had chosen the included articles based on their prior knowledge of the topic. Most reviews were found by searching traditional publication databases; however, seven (7/40; 18%) were found in the grey literature. These comprised four reports written by an independent, not-for-profit research institution (Research and Development (RAND) Europe) [23, 24, 25, 26], one literature review that was part of a Doctor of Philosophy (PhD) thesis [27], one literature review informing a quantitative study [28] and one review providing background information for a report to the UK government on the best use of impact metrics [29].

Key findings from the reviews: approaches to research impact evaluation

Categorisation of impact for the purpose of impact assessment

Nine reviews attempted to categorise the type of research impact being assessed according to who or what is affected by research, and how they are affected. In Fig. 2, colour coding is used to identify overlap between the impact types identified in these reviews, producing a summary list of seven main impact categories.

The first two categories of impact refer to the immediate knowledge produced from research and to the contribution research makes to driving innovation and building capacity for future activities within research institutions. The former is often referred to as the academic impact of research; for cancer research, it may include the knowledge gained from conducting experiments or performing clinical trials that is subsequently disseminated via journal publications. The latter may refer to securing future funding for cancer research, providing knowledge that allows the development of later-phase clinical trials, or training the cancer researchers of the future.

The third category identified was the impact of research on policy. Three of the review articles included in this overview focused specifically on policy impact evaluation [30, 31, 32]. In their review, Hanney et al. [30] suggested that the policy impact of health research falls into one of three sub-categories: impact on national health policies from the government, impact on clinical guidelines from professional bodies, and impact on local health service policies. Cruz Rivera and colleagues [33] specifically distinguished impact on policy making from impact on clinical guidelines, which they described under health impact. This shows that the lines between categories will often blur.

Impact on health was the next category identified, and several of the reviews differentiated impact on the health sector from impact on health gains. For cancer research, both types of health impact will be important, given that cancer is a major burden both for healthcare systems and for the patients they treat. The economic impact of research was the fifth category. For cancer research, there is likely to be close overlap between healthcare system and economic impacts because of the high cost of cancer care for healthcare services globally.

In their 2004 article, Buxton et al. [34] searched the literature for examples of the evaluation of economic returns on investment in health research and found four main approaches, which were referenced in several later reviews [19, 25, 35, 36]. These were: (i) measuring direct cost savings to the healthcare system; (ii) estimating benefits to the economy from a healthy workforce; (iii) evaluating benefits to the economy from commercial development; and (iv) measuring the intrinsic value to society of the health gains from research. In a later review [25], they added an additional approach: estimating the spillover contribution of research to the gross domestic product (GDP) of a nation.

The final category was social impact. This term was commonly used in a specific sense, referring to research that improves human rights, well-being, employment, education and social inclusion [33, 37]. Two of the reviews that included this category focused on the impact of non-health research (social sciences and agriculture), indicating that this type of impact may be less relevant, or less obvious, for health-related disciplines such as oncology. Social impact is distinct from the term societal impact, which was used in a wider sense to describe impact external to traditional academic benefits [38, 39]. Other categories of impact that did not show significant overlap between the reviews included cultural and technological impact. In two of the literature reviews [33, 40], the authors provided a list of indicators of impact within each of their categories; in the review by Thonon et al. [40], only one (1%) of these indicators was specific to evaluating the impact of cancer research.

Methods for data collection and analysis

In total, 36 (90%) reviews discussed methods to collect or analyse the data required to conduct an impact evaluation. The methods commonly described, and the strengths and weaknesses of each approach, are shown in Additional file 2: Table S1. Many authors advocated using a mixture of methods, in particular the triangulation of surveys, interviews (of researchers or research users) and documentary analysis [20, 30, 31, 32]. A large number of reviews cautioned against using quantitative metrics, such as bibliometrics, alone [29, 30, 41, 42, 43, 44, 45, 46, 47, 48]. Concerns included that these metrics were often not designed to be comparable between research programmes [49], that their use may incentivise researchers to focus on quantity rather than quality [42], and that they could be gamed or used in the wrong context to make decisions about researcher funding, employment and promotion [41, 43, 45].

Several reviews explained that the methods of data collection and analysis chosen for an impact evaluation depend on both the unit of research under analysis and the rationale for the analysis [23, 24, 26, 31, 36, 50, 51]. Specific to cancer research, the unit of analysis may be a single clinical trial or a programme of trials, research performed at a cancer centre, or research funded by a specific institution or charity. The rationale for research impact assessment was categorised in multiple reviews under four headings (“the 4 As”): advocacy, accountability, analysis and allocation [19, 20, 23, 24, 30, 31, 32, 33, 36, 46, 52, 53]. Finally, Boaz and colleagues found a lack of information on the cost-effectiveness of research impact evaluation methods, but suggested that pragmatic and often cheaper approaches to evaluation, such as surveys, were the least likely to give in-depth insights into the processes through which research impact occurs [31].

Using a framework within a research impact evaluation

Applied to research impact evaluation, a framework provides a way of organising collected data, encouraging a more objective and structured evaluation than would be possible with an ad hoc analysis. In total, 27 (68%) reviews discussed the use of a framework in this context. Additional file 2: Table S2 lists the frameworks mentioned in three or more of the included reviews. The most frequently described framework was the Payback Framework, developed by Buxton and Hanney in 1996 [54], and many of the other frameworks identified were developed by adapting key elements of the Payback Framework. None of the frameworks identified was developed specifically to assess the impact of cancer research, although several were specific to health research. The unit of cancer research being evaluated will dictate the most suitable framework for any evaluation; the unit of research most suited to each framework is outlined in Additional file 2: Table S2.

Figure 2. Categories of impact identified in the included literature reviews

Additional findings from the included reviews

The challenges of research impact evaluation were commonly discussed in these reviews. Several mentioned that the time lag [24, 25, 33, 35, 38, 46, 50, 53, 55] between research completion and impact occurring should influence when an impact evaluation is carried out: too early and impact will not have occurred; too late and it is difficult to link impact to the research in question. This overlapped with the challenge of attributing impact to a particular piece of research [24, 26, 33, 34, 35, 37, 38, 39, 46, 50, 56]. Many authors argued that the ability to show attribution was inversely related to the time since the research was carried out [24, 25, 31, 46, 53].

Part II: Empirical examples of cancer research impact evaluation

Study characteristics

In total, 14 empirical impact evaluations relevant to cancer research were identified from the reference lists of the literature reviews included in the first part of this study. These empirical studies were published between 1994 and 2015 by primary authors located in the UK (7/14; 50%), the USA (2/14; 14%), Italy (2/14; 14%), Canada (2/14; 14%) and Brazil (1/14; 7%). Table 2 lists these studies with the rationale for each assessment (defined using the "4 As"), the unit of analysis of cancer research evaluated and the main findings from each evaluation. The categories of impact evaluated, methods of data collection and analysis, and impact frameworks utilised are also summarised in Table 2 and discussed in more detail below.

Approaches to cancer research impact evaluation used in empirical studies

Categories of impact evaluated in cancer research impact assessments

Several of the empirical studies focused on academic impact. For example, Ugolini and colleagues evaluated scholarly outputs from one cancer research centre in Italy [57] and, in a second study, looked at the academic impact of cancer research from European countries [58]. Saed et al. [59] used submissions to an international cancer conference (the American Society of Clinical Oncology (ASCO)) to evaluate the dissemination of cancer research to the academic community, and Lewison and colleagues [60, 61, 62, 63] assessed academic as well as policy impact and the dissemination of cancer research findings to the lay media.

The category of health impact was also commonly evaluated, with a particular focus on the assessment of survival gains. Life years gained or deaths averted [64], life expectancy gains [65] and years of extra survival [66] were all used as indicators of the health impact attributable to cancer research. Glover and colleagues [67] used a measure of health utility, the quality-adjusted life year (QALY), which combines survival and quality of life assessments. Lakdawalla and colleagues [66] considered the impact of research on both cancer screening and treatments, and concluded that 80% of survival gains were attributable to treatment improvements. In contrast, Glover and colleagues [67] acknowledged the importance of improved cancer therapies due to research but also highlighted the major impacts of research on smoking cessation, as well as cervical and bowel cancer screening. Several of the studies that assessed health impact also used the information on health gains to assess the economic impact of the same research [64, 65, 66, 67].
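To illustrate how a QALY combines survival and quality of life, the sketch below uses hypothetical durations and utility weights; the figures are illustrative only, not taken from Glover et al.

```python
# A QALY weights each period of survival by a utility score
# (0 = dead, 1 = full health) and sums the results.
def qalys(periods):
    """periods: iterable of (duration_in_years, utility_weight) tuples."""
    return sum(duration * utility for duration, utility in periods)

# Hypothetical patient: 2 years in good health (utility 0.9), then
# 3 years with treatment side effects (utility 0.7).
gain = qalys([(2, 0.9), (3, 0.7)])  # 2*0.9 + 3*0.7 = 3.9 QALYs
print(round(gain, 2))  # 3.9
```

In evaluations such as Glover et al.'s, QALY gains estimated this way are then assigned a monetary value to express health impact in economic terms.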

Finally, two studies [68, 69] performed multi-dimensional research impact assessments, which incorporated nearly all of the seven categories of impact identified from the previous literature (Fig. 2). In their assessment of the impact of research funded by one breast cancer charity in Australia, Donovan and colleagues [69] evaluated academic, capacity building, policy, health, and wider economic impacts. Montague and Valentim [68] assessed the impact of one randomised clinical trial (MA17), which investigated the use of a hormonal medication as an adjuvant treatment for patients with breast cancer. In their study, they assessed the dissemination of research findings, academic impact, capacity building for future trials and international collaborations, policy citation, and the health impact of decreased breast cancer recurrence attributable to the clinical trial.

Methods of data collection and analysis for cancer research impact evaluation

Methods for data collection and analysis used in these studies aligned with the categories of impact assessed. For example, studies assessing academic impact used traditional bibliometric searching of publication databases and associated metrics. Ugolini et al. [57] applied a normalised journal impact factor to publications from a cancer research centre as an indicator of the research quality and productivity of that centre. This analysis was adjusted for the number of employees within each department, and the scores were used to apportion 20% of future research funding. The same bibliometric method of analysis was used in a second study by the same authors to compare national-level cancer research efforts across Europe [58]. They assessed the quantity and the mean impact factor of the journals for publications from each country and compared this to the location-specific population and GDP. A similar approach was used for the manual assessment of 10% of cancer research abstracts submitted to an international conference (ASCO) between 2001–2003 and 2006–2008 [59]. These authors examined whether the location of authors affected the likelihood of an abstract being presented orally, as a face-to-face poster or online only.
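A minimal sketch of this kind of scoring, under an assumed construction (sum each department's journal impact factors, normalise per employee, then share out the earmarked funding pot in proportion to the scores); this is not Ugolini et al.'s exact formula, and all department names and numbers are hypothetical:

```python
# Score each department by summed journal impact factors per employee.
def department_scores(publications_jif, employees):
    """publications_jif: dept -> list of JIFs; employees: dept -> headcount."""
    return {d: sum(jifs) / employees[d] for d, jifs in publications_jif.items()}

# Split a fixed funding pot (e.g. the 20% earmarked for performance)
# in proportion to the department scores.
def apportion_funding(scores, pot):
    total = sum(scores.values())
    return {d: pot * s / total for d, s in scores.items()}

pubs = {"oncology": [4.0, 6.0], "pathology": [5.0]}   # hypothetical JIFs
staff = {"oncology": 4, "pathology": 2}               # hypothetical headcounts
scores = department_scores(pubs, staff)               # both score 2.5 per head
shares = apportion_funding(scores, pot=100_000)       # hypothetical pot size
print(shares)  # {'oncology': 50000.0, 'pathology': 50000.0}
```

Note how the per-employee normalisation means a small department with fewer but well-placed publications can match a larger one, which is the stated rationale for adjusting by headcount.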

Lewison and colleagues, who performed four of the studies identified [60, 61, 62, 63], used a different bibliometric method, publication citation counts, to analyse the dissemination, academic, and policy impact of cancer research. The authors also assigned a research level to publications to differentiate whether the research was a basic science or a clinical cancer study, by coding the words in the title of each article or the journal in which the paper was published. The cancer research types assessed by these authors included cancer research at a national level for two different countries (the UK and Russia) and research performed by cancer centres in the UK.

To assess policy impact, these authors extracted journal publications from cancer clinical guidelines, and for media impact they looked at publications cited in articles stored within an online repository of a well-known UK media organisation (the British Broadcasting Corporation). Interestingly, most of the cancer research publications contained in guidelines and cited in the UK media were clinical studies, whereas a much higher proportion of those published by UK cancer centres were basic science studies. These authors also identified that funders of cancer research played a critical role as commentators explaining the importance of the research in the lay media. The top ten most frequent commentators (each commenting on more than 19 of 725 media articles) were all representatives of the UK charity CRUK.

A combination of clinical trial findings and documentary analysis of large data repositories was used to estimate health system or health impact. In their study, Montague and Valentim [68] cited the effect size for a decrease in cancer recurrence from a clinical trial and assumed the same health gains would be achieved in real life for patients with breast cancer living in Canada. In their study of the impact of charitable and publicly funded cancer research in the UK, Glover et al. [67] used CRUK and Office for National Statistics (ONS) cancer incidence data, as well as national hospital databases listing episodes of radiotherapy delivered, numbers of cancer surgeries performed and systemic anti-cancer treatments prescribed, to evaluate changes in practice attributable to cancer research. In their US-based study, Lakdawalla et al. [66] used the population-based Surveillance, Epidemiology and End Results Program (SEER) database to evaluate the number of patients likely to be affected by the implementation of cancer research findings. Survival calculations from clinical trials were also applied to population incidence estimates to predict the scale of survival gain attributable to cancer research [64, 66].

The methods of data collection and analysis used for economic evaluations aligned with the categories of assessment identified by Buxton in a 2004 literature review [34]. For example, three studies [65, 66, 67] estimated direct healthcare cost savings from the implementation of cancer research. This was particularly relevant in one ex-ante assessment of the potential impact of a clinical trial testing the equivalence of less intensive follow-up for patients after cancer surgery [65]. These authors assessed the number of years ("years to payback") it would take for the savings from implementing the hypothetical clinical trial findings to outweigh the money spent developing and running the trial. The return on investment calculation was performed by estimating the direct cost savings to the healthcare system of using less intensive follow-up without any detriment to survival.
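The "years to payback" idea reduces to a simple ratio of trial cost to annual savings. The sketch below uses hypothetical figures, not those of Coyle et al.:

```python
# "Years to payback": time for cumulative healthcare savings from
# implementing trial findings to equal the cost of the trial itself.
def years_to_payback(trial_cost, annual_saving):
    if annual_saving <= 0:
        raise ValueError("implementation must yield a positive annual saving")
    return trial_cost / annual_saving

# Hypothetical figures: a $2.5M trial whose less intensive follow-up
# schedule saves the health system $0.5M per year.
print(years_to_payback(2_500_000, 500_000))  # 5.0
```

In an ex-ante assessment of this kind, both inputs are projections, so the payback period is best read as a sensitivity range rather than a single number.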

The second of Buxton's categories was an estimation of productivity loss using the human capital approach. In this method, the economic value of survival gains from cancer research is calculated by measuring the monetary contribution of patients of working age who survive longer. This approach was used in two studies [64, 66], and in both, estimates of average income (USA) were utilised. Buxton's fourth category, an estimation of an individual's willingness to pay for a statistical life, was used in two assessments [65, 66], and Glover and colleagues [67] adapted this method, placing a monetary value on the opportunity cost of QALYs forgone in the UK health service within a fixed budget [70]. One of the studies that used this method identified that patients diagnosed with distinct cancer types may value the impact of research on cancer-specific survival differently [66]. In particular, individuals with pancreatic cancer seemed willing to spend up to 80% of their annual income on the extra survival attributable to the implementation of cancer research findings, whereas this fell to below 50% for breast and colorectal cancer. Only one of the studies considered Buxton's third category of benefits to the economy from commercial development [66]. These authors calculated the gain to commercial companies from sales of on-patent pharmaceuticals and concluded that economic gains to commercial producers were small relative to the gains from research experienced by cancer patients.
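The human capital approach described above can be sketched as life-years gained, restricted to the working-age share, multiplied by average income. All inputs below are hypothetical, not the US figures used in the cited studies [64, 66]:

```python
# Human capital approach: value survival gains by the earnings of
# patients of working age who survive longer as a result of research.
def human_capital_value(life_years_gained, share_working_age, avg_annual_income):
    """Monetary value of the productivity preserved by extra survival."""
    return life_years_gained * share_working_age * avg_annual_income

# Hypothetical inputs: 10,000 extra life-years, 40% of them among
# working-age patients, average annual income of $35,000.
value = human_capital_value(10_000, 0.40, 35_000)
print(f"${value:,.0f}")  # $140,000,000
```

A noted limitation of this approach is that it assigns no value to survival gains among retired patients, which is one reason willingness-to-pay methods are used alongside it.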

The cost estimates used in these impact evaluations came from documentary analysis, clinical trial publications, real-life data repositories, surveys, and population average income estimates. For example, in one study, cost information from NCI trials was supplemented with a telephone survey of pharmacies, historical Medicare documents and estimates of average income from the 1986 US Bureau of the Census Consumer Income [64]. In their study, Coyle et al. [65] costed annual follow-up and treatment for cancer recurrence based on the Ontario Health Insurance Plan, a cost model relevant to an Ottawa hospital and cost estimates from Statistics Canada [71]. The data used to calculate the cost of performing cancer research usually came from funding bodies and research institutions. For example, charity reports and Canadian research institution documents were used to estimate that it cost the National Cancer Institute of Canada $1500 per patient accrued to a clinical trial [65]. Government research investment outgoings were used to calculate that $300 billion was spent on cancer research in the USA from 1971 to 2000, 25% of which was contributed by the NCI [66], and that the NCI spent over $10 million USD in the 1980s to generate the knowledge that adjuvant chemotherapy was beneficial to colorectal cancer patients [64]. Charity and research institution spending reports, along with an estimation of the proportion of funds spent specifically on cancer research, were used to demonstrate that £15 billion of UK charity and public money was spent on cancer research between 1970 and 2009 [67].

Lastly, the two studies [68, 69] which adopted a multi-category approach to impact assessment used the highest number and broadest range of methods identified from the previous literature (Additional file 2: Table S1). The methods utilised included surveys and semi-structured telephone interviews with clinicians, documentary analysis of funding and project reports, case studies, content analysis of media releases, peer review, bibliometrics, budget analysis, large data repository review, and observations of meetings.

Frameworks for cancer research impact evaluation

Only two of the empirical studies identified used an impact framework. Unsurprisingly, these were also the studies that performed a multi-category assessment and used the broadest range of methods within their analyses. Donovan et al. [69] used the Payback Framework (Additional file 2: Table S2) to guide the categories of impact assessed and the questions in their researcher surveys and interviews. They also reported the results of their evaluation using the same categories: from knowledge production, through capacity building, to health and wider economic impacts. Montague and Valentim [68] used the Canadian Academy of Health Sciences (CAHS) Framework (Additional file 2: Table S2). Rather than using the framework in its original form, they arranged impact indicators from the CAHS Framework within a hierarchy to illustrate impacts occurring over time. The authors distinguished short-term, intermediate and longer-term changes resulting from one clinical cancer trial, aligning with the concept of categorising impacts based on when they occur, which was described in one of the literature reviews identified in the first part of this study [33].

Lastly, the challenges of time lags and attribution of impact were identified and addressed by several of these empirical studies. Lewison and colleagues tracked the citation of over 3000 cancer publications in UK cancer clinical guidelines over time [61], and in their analysis Donovan et al. [69] explicitly acknowledged that the short time frame between the funding of the research projects under evaluation and their analysis was likely to underestimate the impact achieved. Glover et al. [67] used bibliometric analysis of citations in clinical cancer guidelines to estimate the average time from publication to clinical practice change (8 years). They added 7 years to account for the time between funding allocation and publication of research results, giving an overall time lag from funding cancer research to impact of 15 years. The challenge of attribution was addressed in one study by using a timeline to describe impacts occurring at different time points but linking back to the original research in question [68]. The difficulty of estimating time lags and attributing impact to cancer research was specifically addressed in a companion study [72] to the one conducted by Glover and colleagues. In this study, instead of quantifying the return on cancer research investment, qualitative methods of assessment were used. This approach identified factors that enhanced and accelerated the process of impact occurring and helped to provide a narrative linking impacts to research.

This study has identified several examples of the evaluation of the impact of cancer research. These evaluations were performed over three decades and mostly assessed research performed in high-income countries. The approach used to search the literature in this study is justified by the titles of the articles identified: only 14% (2/14) included the word "impact", suggesting that a search for empirical examples of cancer research impact evaluation using traditional publication databases would have been challenging. Furthermore, all the studies identified were included within reviews of approaches to research impact evaluation, which removed the subjective decision of whether the studies complied with a particular definition of research impact.

Characteristics of research specifically relevant to cancer studies can be identified from these impact assessments. First, many of these evaluations acknowledged the contribution of both basic and applied studies to the body of cancer research, and several studies categorised research publications based on this distinction. Second, the strong focus on health impact and the expectation that cancer research will improve health was not surprising. The focus on survival in particular, especially in economic studies looking at the value of health gains, reflects the high mortality of cancer as a disease entity. This contrasts with similar evaluations of musculoskeletal or mental health research, which have focused on improvements in morbidity [73, 74]. Third, several studies highlighted the distinction between research looking at different aspects of the cancer care continuum, from screening, prevention and diagnosis to treatment and end-of-life care. The division of cancer as a disease entity by site of disease was also recognised: studies that analysed the number of patients diagnosed with cancer, or population-level survival gains, often used site-specific cancer incidence, and other studies evaluated research relating to only one type of cancer [64, 65, 68, 69]. Lastly, the empirical examples of cancer research impact identified in this study confirm the huge investment in cancer research and the desire of many research organisations and funders to quantify a rate of return on that investment. Most of these studies concluded that the return on cancer research investment far exceeded expectations. Even using the simple measure of future research grants attracted by researchers funded by one cancer charity, the monetary value of these grants outweighed the initial investment [69].

There were limitations in the approaches to impact evaluation used in these studies, which were recognised by reflecting on the findings from the broader literature. Several studies assessed academic impact in isolation, and studies using the journal impact factor or the location of authors on publications were limited in the information they provided. In particular, using the journal impact factor (JIF) to allocate research funding, as was done in one study, is now outdated and controversial. The policy impact of cancer research was commonly evaluated using clinical practice guidelines, but other policy types that could be used in impact assessment [30], such as national government reports or local guidelines, were rarely used. In addition, using cancer guidelines as a surrogate for clinical practice change and health service impact could have drawbacks. For example, guidelines can be outdated, irrelevant or simply not used by cancer clinicians; in addition, local hospitals often have their own clinical guidelines, which may take precedence over national documents. Furthermore, the other aspects of policy impact described in the broader literature [30], such as impact on policy agenda setting and implementation, were rarely assessed. There were also no specific examples of social, environmental or cultural impacts, and very few of the studies mentioned wider economic benefits from cancer research, such as spin-out companies and patents. It may be that these types of impact were less relevant to the cancer research being assessed; however, unexpected impacts might have been identified had they been considered at the time of impact evaluation.

Reflecting on how the methods of data collection and analysis used in these studies aligned with those listed in Additional file 2: Table S1, bibliometrics, alternative metrics (media citations), documentary analysis, surveys and economic approaches were often used. Methods less commonly adopted were interviews, scales and focus groups. This may have been due to the time and resource implications of using qualitative techniques and more in-depth analysis, or a lack of awareness among authors of the types of scales that could be used. An example of a scale that could be used to assess the impact of research on policy is provided in one of the literature reviews identified [30]. The method of collecting expert testimony from researchers was utilised in the studies identified, but there were no obvious examples of testimony about the impact of cancer research from stakeholders such as cancer patients or their families.

Lastly, despite the large number of examples identified from the previous literature, only a minority of the empirical assessments used an impact framework. The Payback Framework and an iteration of the CAHS Framework were used with success, and these studies are excellent examples of how frameworks can be used for cancer research impact evaluation in future. Other frameworks identified from the literature (Additional file 2: Table S2) that may be appropriate for the assessment of cancer research impact in future include Anthony Weiss's logic model [75], the Research Impact Framework [76] and the research utilisation ladder [77]. Weiss's model is specific to medical research and encourages evaluation of how published clinical trial results are implemented in practice and lead to health gain. He describes an efficacy-efficiency gap [75] between clinical decision makers becoming aware of research findings, changing their practice and this having an impact on health. The Research Impact Framework, developed by the Department of Public Health and Policy at the London School of Hygiene and Tropical Medicine in the UK [76], is an aid for researchers to self-evaluate their research impact and offers an extensive list of categories and indicators of research impact which could be applied to evaluating the impact of cancer research. Finally, Landry's research utilisation ladder [77] has similarities to the hierarchy used in the empirical study by Montague and Valentim [68], and focuses on the role of the individual researcher in determining how research is utilised and its subsequent impact.

Reflecting on the strengths and limitations of the empirical approaches to cancer research impact evaluation identified in this study, Fig. 3 outlines recommendations for the future. One of these recommendations refers to improving the use of real-life data to assess the actual impact of research on incidence, treatment, and outcomes, rather than predicting these impacts from clinical trial results. Databases for cancer incidence, such as SEER (USA) and the Office for National Statistics (UK), are relatively well established. However, those that collect data on treatments delivered and patient outcomes are less so; where they do exist, they have been difficult to establish and maintain and often have large quantities of missing data [78, 79]. In their study, Glover et al. [67] specifically identified the lack of good-quality data documenting radiotherapy use in the UK in 2012.

Figure 3. Suggestions for approaching cancer research impact evaluation

Thonon F, Boulkedid R, Teixeira M, Gottot S, Saghatchian M, Alberti C. Identifying potential indicators to measure the outcome of translational cancer research: a mixed methods approach. Health Res Policy Syst. 2015;13:72.

The recommendations also suggest that impact assessment for cancer and other health research could be made more robust by giving researchers access to cost data linked to administrative datasets. This type of data was used in empirical impact assessments performed in the USA [64, 66] because the existing Medicare and Medicaid health service infrastructure collects and provides access to such data. In the UK, hospital cost data are collected for accounting purposes but could be unlocked as a resource for future research impact assessments. A good example of attempts to link resource use to cost data for cancer care in the UK is the UK Colorectal Cancer Intelligence Hub [80].

Lastly, several empirical examples highlighted that impact from cancer research can be increased when researchers or research organisations advocate, publicise and help to interpret research findings for a wider audience [60, 72]. In addition, it is clear from these studies that organisations wanting to evaluate the impact of their cancer research must also appreciate that research impact evaluation is a multi-disciplinary effort, requiring input from individuals with different skill sets, such as basic scientists, clinicians, social scientists, health economists, statisticians, and information technology analysts. Furthermore, the users and beneficiaries of cancer research, such as patients and their families, should not be forgotten, and asking them which impacts from cancer research are important will help direct and improve future evaluations.

The strengths of this study are the broad yet systematic approach used to identify existing reviews within the research impact literature. This allowed a more informed assessment of cancer research evaluations than would have been possible if a primary review of these empirical examples had been undertaken. Limitations of the study include the fact that the review protocol was not registered in advance and that one researcher screened the full articles for review. The latter was partly mitigated by using pre-defined inclusion criteria.

Impact assessment is a way of communicating to funders and patients the merits of undertaking cancer research and of learning from previous research to develop better studies that will have positive impacts on society in the future. To the best of our knowledge, this is the first review to consider how to approach evaluation of the impact of cancer research. At the policy level, a lesson learned from this study for institutions, governments, and funders of cancer research is that an exact prescription for how to conduct cancer research impact evaluation cannot be provided, but a multi-disciplinary approach and sufficient resources are required if a meaningful assessment is to be achieved. The approach to impact evaluation used to assess cancer research will depend on the type of research being assessed, the unit of analysis, the rationale for the assessment and the resources available. This study has added to an important dialogue for cancer researchers, funders and patients about how cancer research can be evaluated and ultimately how future cancer research impact can be improved.

Availability of data and materials

Additional files are included. No primary research data were analysed.

Abbreviations

NCI: National Cancer Institute
USA: United States of America
US: United States
UK: United Kingdom
CRUK: Cancer Research UK
MeSH: Medical subject heading
PRISMA: Preferred reporting items for systematic reviews and meta-analyses
GDP: Gross domestic product
ASCO: American Society of Clinical Oncology
SEER: Surveillance, Epidemiology and End Results Program
JIF: Journal impact factor
REF: Research evaluation framework
HTA: Health Technology Assessment
PhD: Doctor of Philosophy
R&D: Research and development
QALY: Quality-adjusted life year
CAHS: Canadian Academy of Health Sciences
ONS: Office for National Statistics

Cancer Research UK. Current clinical trial research. https://www.cancerresearchuk.org/our-research-by-cancer-topic/our-clinical-trial-research/current-clinical-trial-research . Accessed 20 May 2019.

Cancer Research UK. Cancer statistics for the UK. https://www.cancerresearchuk.org/health-professional/cancer-statistics-for-the-uk . Accessed 20 May 2019.

Cancer Research UK. Cancer risk statistics 2019. https://www.cancerresearchuk.org/health-professional/cancer-statistics/risk . Accessed 20th Dec 2019.

Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gulmezoglu AM, et al. How to increase value and reduce waste when research priorities are set. Lancet. 2014;383(9912):156–65.


Ioannidis JPA, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, et al. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.


Smith R. BMJ blogs. In: Godlee F, editor. 2018. https://blogs.bmj.com/bmj/2018/07/30/richard-smith-measuring-research-impact-rage-hard-get-right/ . Accessed 31 July 2019.

Department for the Economy, Higher Education Funding Council for Wales, Research England, Scottish Funding Council. Draft guidance on submissions (REF 2018/01). July 2018.

Gunn A, Mintrom M. Social Science Space blog. 2017. https://www.socialsciencespace.com/2017/01/five-considerations-policy-measure-research-impact/ . Accessed 2019.

Callaway E. Beat it, impact factor! Publishing elite turns against controversial metric. Nature. 2016;535(7611):210–1.


Research Councils UK. Excellence with impact 2018. https://webarchive.nationalarchives.gov.uk/20180322123208/http://www.rcuk.ac.uk/innovation/impact/ . Accessed 24th Aug 2020.

Cancer Research UK. Measuring the impact of research 2017. https://www.cancerresearchuk.org/funding-for-researchers/research-features/2017-06-20-measuring-the-impact-of-research . Accessed 24th Aug 2020.

Gordon LG, Bartley N. Views from senior Australian cancer researchers on evaluating the impact of their research: results from a brief survey. Health Res Policy Syst. 2016;14:2.


Thompson MK, Poortmans P, Chalmers AJ, Faivre-Finn C, Hall E, Huddart RA, et al. Practice-changing radiation therapy trials for the treatment of cancer: where are we 150 years after the birth of Marie Curie? Br J Cancer. 2018;119(4):389–407.

Downing A, Morris EJ, Aravani A, Finan PJ, Lawton S, Thomas JD, et al. The effect of the UK coordinating centre for cancer research anal cancer trial (ACT1) on population-based treatment and survival for squamous cell cancer of the anus. Clin Oncol. 2015;27(12):708–12.


Tsang Y, Ciurlionis L, Kirby AM, Locke I, Venables K, Yarnold JR, et al. Clinical impact of IMPORT HIGH trial (CRUK/06/003) on breast radiotherapy practices in the United Kingdom. Br J Radiol. 2015;88(1056):20150453.

South A, Parulekar WR, Sydes MR, Chen BE, Parmar MK, Clarke N, et al. Estimating the impact of randomised control trial results on clinical practice: results from a survey and modelling study of androgen deprivation therapy plus radiotherapy for locally advanced prostate cancer. Eur Urol Focus. 2016;2(3):276–83.

Horsley T, Dingwall O, Sampson M. Checking reference lists to find additional studies for systematic reviews. Cochrane Database Syst Rev. 2011;(8).

Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Open Med. 2009;3(3):e123–30.


Raftery J, Hanney S, Greenhalgh T, Glover M, Blatch-Jones A. Models and applications for measuring the impact of health research: update of a systematic review for the Health Technology Assessment programme. Health Technol Assess. 2016;20(76):1–254.

Hanney S, Buxton M, Green C, Coulson D, Raftery J. An assessment of the impact of the NHS Health Technology Assessment Programme. Health Technol Assess. 2007;11(53).


Greenhalgh T, Raftery J, Hanney S, Glover M. Research impact: a narrative review. BMC Med. 2016;14:78.

Coryn CLS, Hattie JA, Scriven M, Hartmann DJ. Models and mechanisms for evaluating government-funded research—an international comparison. Am J Eval. 2007;28(4):437–57.

Philipp-Bastian Brutscher SW, Grant J. Health research evaluation frameworks: an international comparison. RAND Europe. 2008.

Guthrie S, Wamae W, Diepeveen S, Wooding S, Grant J. Measuring Research. A guide to research evaluation frameworks and tools. RAND Europe Cooperation 2013.

Health Economics Research Group OoHE, RAND Europe. Medical research: what’s it worth? Estimating the economic benefits from medical research in the UK. London: UK Evaluation Forum; 2008.

Marjanovic SHS, Wooding Steven. A historical reflection on research evaluation studies, their recurrent themes and challenges. Technical report. 2009.

Chikoore L. Perceptions, motivations and behaviours towards 'research impact': a cross-disciplinary perspective. University of Loughborough; 2016.

Pollitt A, Potoglou D, Patil S, Burge P, Guthrie S, King S, et al. Understanding the relative valuation of research impact: a best-worst scaling experiment of the general public and biomedical and health researchers. BMJ Open. 2016;6(8).

Wouters P, Thelwall M, Kousha K, Waltman L, de Rijcke S, Rushforth A, Franssen T. The metric tide literature review (Supplementary report I to the independent review of the role of metrics in research assessment and management). 2015.

Hanney SR, Gonzalez-Block MA, Buxton MJ, Kogan M. The utilisation of health research in policy-making: concepts, examples and methods of assessment. Health Res Policy Syst. 2003;1(1):2.

Boaz A, Fitzpatrick S, Shaw B. Assessing the impact of research on policy: a literature review. Sci Public Policy. 2009;36(4):255–70.

Newson R, King L, Rychetnik L, Milat A, Bauman A. Looking both ways: a review of methods for assessing research impacts on policy and the policy utilisation of research. Health Res Policy Syst. 2018;16(1):54.

Cruz Rivera S, Kyte DG, Aiyegbusi OL, Keeley TJ, Calvert MJ. Assessing the impact of healthcare research: a systematic review of methodological frameworks. PLoS Med. 2017;14(8):e1002370.

Buxton M, Hanney S, Jones T. Estimating the economic value to societies of the impact of health research: a critical review. Bull World Health Organ. 2004;82(10):733–9.

PubMed   PubMed Central   Google Scholar  

Yazdizadeh B, Majdzadeh R, Salmasian H. Systematic review of methods for evaluating healthcare research economic impact. Health Res Policy Syst. 2010;8:6.

Hanney S. Ways of assessing the economic value or impact of research: is it a step too far for nursing research? J Res Nurs. 2011;16(2):151–66.

Banzi R, Moja L, Pistotti V, Facchini A, Liberati A. Conceptual frameworks and empirical approaches used to assess the impact of health research: an overview of reviews. Health Res Policy Syst. 2011;9:26.

Bornmann L. What is societal impact of research and how can it be assessed? A literature survey. J Am Soc Inform Sci Technol. 2013;64(2):217–33.

Pedrini M, Langella V, Battaglia MA, Zaratin P. Assessing the health research’s social impact: a systematic review. Scientometrics. 2018;114(3):1227–50.

Thonon F, Boulkedid R, Delory T, Rousseau S, Saghatchian M, van Harten W, et al. Measuring the outcome of biomedical research: a systematic literature review. PLoS ONE. 2015;10(4):e0122239.

Article   PubMed   PubMed Central   CAS   Google Scholar  

Raftery J, Hanley S, Greenhalgh T, Glover M, Blatch-Jones A. Models and applications for measuring the impact of health research: Update of a systematic review for the Health Technology Assessment Programme. Health Technol Assess. 2016;20(76).

Ruscio J, Seaman F, D'Oriano C, Stremlo E, Mahalchik K. Measuring scholarly impact using modern citation-based indices. Meas Interdiscip Res Perspect. 2012;10(3):123–46; 24.

Carpenter CR, Cone DC, Sarli CC. Using publication metrics to highlight academic productivity and research impact. Acad Emerg Med. 2014;21(10):1160–72.

Patel VM, Ashrafian H, Ahmed K, Arora S, Jiwan S, Nicholson JK, et al. How has healthcare research performance been assessed? A systematic review. J R Soc Med. 2011;104(6):251–61.

Smith KM, Crookes E, Crookes PA. Measuring research "Impact" for academic promotion: issues from the literature. J High Educ Policy Manag. 2013;35(4):410–20; 11.

Penfield T, Baker M, Scoble R, Wykes M. Assessment, evaluations and definitions of research impact: a review. Res Eval. 2014;23(1):21–32.

Agarwal A, Durairajanayagam D, Tatagari S, Esteves SC, Harlev A, Henkel R, et al. Bibliometrics: tracking research impact by selecting the appropriate metrics. Asian J Androl. 2016;18(2):296–309.

Braithwaite J, Herkes J, Churruca K, Long J, Pomare C, Boyling C, et al. Comprehensive researcher achievement model (CRAM): a framework for measuring researcher achievement, impact and influence derived from a systematic literature review of metrics and models. BMJ Open. 2019;9(3):e025320.

Moed HF, Halevi G. Multidimensional assessment of scholarly research impact. J Assoc Inf Sci Technol. 2015;66(10):1988–2002.

Milat AJ, Bauman AE, Redman S. A narrative review of research impact assessment models and methods. Health Res Policy Syst. 2015;13:18.

Weißhuhn P, Helming K, Ferretti J. Research impact assessment in agriculture—a review of approaches and impact areas. Res Eval. 2018;27(1):36-42; 7.

Deeming S, Searles A, Reeves P, Nilsson M. Measuring research impact in Australia’s medical research institutes: a scoping literature review of the objectives for and an assessment of the capabilities of research impact assessment frameworks. Health Res Policy Syst. 2017;15(1):22.

Peter N, Kothari A, Masood S. Identifying and understanding research impact: a review for occupational scientists. J Occup Sci. 2017;24(3):377–92.

Buxton M, Hanney S. How can payback from health services research be assessed? J Health Serv Res Policy. 1996;1(1):35–43.

Bornmann L. Measuring impact in research evaluations: a thorough discussion of methods for, effects of and problems with impact measurements. High Educ Int J High Educ Res. 2017;73(5):775–87; 13.

Reale E, Avramov D, Canhial K, Donovan C, Flecha R, Holm P, et al. A review of literature on evaluating the scientific, social and political impact of social sciences and humanities research. Res Eval. 2017;27(4):298–308.

Ugolini D, Bogliolo A, Parodi S, Casilli C, Santi L. Assessing research productivity in an oncology research institute: the role of the documentation center. Bull Med Libr Assoc. 1997;85(1):33–8.

CAS   PubMed   PubMed Central   Google Scholar  

Ugolini D, Casilli C, Mela GS. Assessing oncological productivity: is one method sufficient? Eur J Cancer. 2002;38(8):1121–5.

Saad ED, Mangabeira A, Masson AL, Prisco FE. The geography of clinical cancer research: analysis of abstracts presented at the American Society of Clinical Oncology Annual Meetings. Ann Oncol. 2010;21(3):627–32.

Lewison G, Tootell S, Roe P, Sullivan R. How do the media report cancer research? A study of the UK’s BBC website. Br J Cancer. 2008;99(4):569–76.

Lewison GS, Sullivan R. The impact of cancer research: how publications influence UK cancer clinical guidelines. Br J Cancer. 2008;98(12):1944–50.

Lewison G, Markusova V. The evaluation of Russian cancer research. Res Eval. 2010;19(2):129–44.

Sullivan R, Lewison G, Purushotham AD. An analysis of research activity in major UK cancer centres. Eur J Cancer. 2011;47(4):536–44.

Brown ML, Nayfield SG, Shibley LM. Adjuvant therapy for stage III colon cancer: economics returns to research and cost-effectiveness of treatment. J Natl Cancer Inst. 1994;86(6):424–30.

Coyle D, Grunfeld E, Wells G. The assessment of the economic return from controlled clinical trials. A framework applied to clinical trials of colorectal cancer follow-up. Eur J Health Econ. 2003;4(1):6–11.

Lakdawalla DN, Sun EC, Jena AB, Reyes CM, Goldman DP, Philipson TJ. An economic evaluation of the war on cancer. J Health Econ. 2010;29(3):333–46.

Glover M, Buxton M, Guthrie S, Hanney S, Pollitt A, Grant J. Estimating the returns to UK publicly funded cancer-related research in terms of the net value of improved health outcomes. BMC Med. 2014;12:99.

Montague S, Valentim R. Evaluation of RT&D: from ‘prescriptions for justifying’ to ‘user-oriented guidance for learning.’ Res Eval. 2010;19(4):251–61.

Donovan C, Butler L, Butt AJ, Jones TH, Hanney SR. Evaluation of the impact of National Breast Cancer Foundation-funded research. Med J Aust. 2014;200(4):214–8.

Excellence NIfHaC. Guide to the methods of technology appraisal: 2013. 2013.

Government of Canada. Statistics Canada 2020. https://www.statcan.gc.ca/eng/start . Accessed 31st Aug 2020.

Guthrie S, Pollitt A, Hanney S, Grant J. Investigating time lags and attribution in the translation of cancer research: a case study approach. Rand Health Q. 2014;4(2):16.

Glover M, Montague E, Pollitt A, Guthrie S, Hanney S, Buxton M, et al. Estimating the returns to United Kingdom publicly funded musculoskeletal disease research in terms of net value of improved health outcomes. Health Res Policy Syst. 2018;16(1):1.

Wooding S, Pollitt A, Castle-Clarke S, Cochrane G, Diepeveen S, Guthrie S, et al. Mental health retrosight: understanding the returns from research (lessons from schizophrenia): policy report. Rand Health Q. 2014;4(1):8.

Weiss AP. Measuring the impact of medical research: moving from outputs to outcomes. Am J Psychiatry. 2007;164(2):206–14.

Kuruvilla S, Mays N, Pleasant A, Walt G. Describing the impact of health research: a research impact framework. BMC Health Serv Res. 2006;6:134.

Landry RA, Lamari NM. Climbing the ladder of research utilization. Evidence from social science research. Sci Commun. 2001;22(4):396–422.

Eisemann N, Waldmann A, Katalinic A. Imputation of missing values of tumour stage in population-based cancer registration. BMC Med Res Methodol. 2011;11(1):129.

Morris EJ, Taylor EF, Thomas JD, Quirke P, Finan PJ, Coleman MP, et al. Thirty-day postoperative mortality after colorectal cancer surgery in England. Gut. 2011;60(6):806–13.

University of Leeds. CORECT-R 2020. https://bci.leeds.ac.uk/applications/ . Accessed 24th Aug 2020.

Download references

Acknowledgements

We would like to acknowledge the help of Ms Lorraine MacLeod, specialist librarian at the Beatson West of Scotland Cancer Network, NHS Greater Glasgow and Clyde, for her assistance in formulating the search strategy. We also thank Professor Stephen Hanney for providing feedback on an earlier version of this review.

Dr Catherine Hanna is supported by a CRUK and University of Glasgow grant (grant ID: C61974/A2429).

Author information

Authors and affiliations

CRUK Clinical Trials Unit, Institute of Cancer Sciences, University of Glasgow, Glasgow, United Kingdom

Catherine R. Hanna & Robert J. Jones

Health Economics and Health Technology Assessment, Institute of Health and Wellbeing, University of Glasgow, Glasgow, United Kingdom

Kathleen A. Boyd


Contributions

All authors contributed to the concept and design of the study. CH was responsible for the main data analysis and for writing the manuscript. KAB and RJJ were responsible for writing, editing, and final approval of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Catherine R. Hanna.

Ethics declarations

Ethics approval and consent to participate

No ethical approval was required.

Consent for publication

No patient-level data or third-party copyright material was used.

Competing interests

Nil declared.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Research Council UK Impact definition, summary of search terms for part one, and inclusion criteria for both parts of the study.

Additional file 2: Table S1

(List of methods for research impact evaluation) and Table S2 (List of frameworks for research impact evaluation).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Hanna, C.R., Boyd, K.A. & Jones, R.J. Evaluating cancer research impact: lessons and examples from existing reviews on approaches to research impact assessment. Health Res Policy Syst 19, 36 (2021). https://doi.org/10.1186/s12961-020-00658-x


Received: 05 March 2020

Accepted: 09 November 2020

Published: 11 March 2021

DOI: https://doi.org/10.1186/s12961-020-00658-x



Impact Ratings Show Cancer Journal Continues to Outperform


The annual scientific and clinical Journal Impact Factors were released on June 30, and the American Cancer Society's CA: A Cancer Journal for Clinicians outperformed them all. The CA impact factor climbed from 292.3 last year to a staggering 508.7, and CA remains the highest-rated oncology journal in the world.

CA is not only the leading journal in the oncology subject category but also ranks highest among all 254 categories included in the Web of Science (clarivate.com).

CA's score is due mainly to the global cancer statistics article and the annual report on cancer statistics, two of the most cited cancer articles in the world. These citations, factored alongside a small number of publications, account for CA's high score. Other highly cited manuscripts include the ACS screening guidelines, cancer statistics for special populations, and comprehensive reviews.

Our journal Cancer Cytopathology ranked No. 1 in the field of cytopathology with a score of 5.284, and our journal Cancer scored 6.860. Each journal falls within the top quartile of its subject category.

Bill Cance, MD, our chief medical and scientific officer, is editor in chief of CA: A Cancer Journal for Clinicians, and Ted Gansler, MD, MBA, MPH, is its editor.


  • Open access
  • Published: 02 March 2018

Systematic reviews and cancer research: a suggested stepwise approach

  • George A. Kelley &
  • Kristi S. Kelley

BMC Cancer volume 18, Article number: 246 (2018)


Systematic reviews, with or without meta-analysis, play an important role today in synthesizing cancer research and are frequently used to guide decision-making. However, there is now an increase in the number of systematic reviews on the same topic, thereby necessitating a systematic review of previous systematic reviews. With a focus on cancer, the purpose of this article is to provide a practical, stepwise approach for systematically reviewing the literature and publishing the results. This starts with the registration of a protocol for a systematic review of previous systematic reviews and ends with the publication of an original or updated systematic review, with or without meta-analysis, in a peer-reviewed journal. Future directions as well as potential limitations of the approach are also discussed. It is hoped that the stepwise approach presented in this article will be helpful to both producers and consumers of cancer-related systematic reviews and will contribute to the ultimate goal of preventing and treating cancer.


Given the proliferation of original studies on the same topic, often with varying results, systematic reviews, with or without meta-analysis, play an important role today in evidence synthesis. As an example of the sheer volume of information involved, it has been reported that in 1992 a primary care physician would have needed to read 17 research articles per day, 365 days per year, to stay current in her/his field [ 1 ], something that is obviously not realistic. Using a rigorous and systematic approach to reviewing the literature provides a more valid and reliable consolidation of information on the topic of interest. If a meta-analysis is included as part of a systematic review, such analyses can (1) increase statistical power for primary outcomes as well as subgroups, (2) address uncertainty when study results differ, (3) improve estimates of effect size, and (4) answer questions not established at the start of individual trials [ 2 ]. In the field of cancer, the number of systematic reviews, with or without meta-analysis, has increased dramatically over the past 30 years. For example, a PubMed search conducted by the authors on November 7, 2017, limited to systematic reviews, with or without meta-analysis, in the field of cancer yielded two citations for the five-year period 1978 through 1982 compared to 25,591 citations from 2013 through 2017 (Fig. 1). Specific to BMC Cancer, the number of indexed citations has increased from 24 for the five-year period 2001 through 2005 to 378 for the years 2013 through 2017 (Fig. 2). Given this dramatic increase, we are now faced with multiple systematic reviews, with or without meta-analysis, on the same topic.
As a result, there is now a need to conduct systematic reviews of previous systematic reviews (SRPSR), with or without meta-analysis, in order to provide decision-makers with the "state of the evidence" and to give researchers of original studies and systematic reviews direction for both the conduct and reporting of future research, including information on whether an original or updated systematic review on the topic of interest is needed [ 3 , 4 ]. This latter factor may be especially relevant given recent criticism regarding the production of redundant and unnecessary systematic reviews on the same topic [ 5 ]. As an example of a SRPSR limited to studies that conducted meta-analyses, the authors recently published a study in BMC Cancer on exercise and cancer-related fatigue in adult cancer patients and survivors [ 6 ]. From the 16 systematic reviews with meta-analyses, which included up to 3245 participants, it was concluded that a lack of certainty currently exists regarding the benefits of exercise on cancer-related fatigue in adult cancer patients and survivors [ 6 ]. It was suggested that while additional research is needed, exercise did not appear to increase cancer-related fatigue. Unfortunately, and to the best of the authors' knowledge, there is a lack of consolidated and practical guidance on the entire systematic review process, starting with the idea of conducting a SRPSR and possibly ending with an updated or new systematic review, with or without meta-analysis. The purpose of this paper is to help fill that gap.

Fig. 1 Results of PubMed search for systematic reviews, with or without meta-analysis, in the field of cancer up to November 7, 2017

Fig. 2 Results of PubMed search for systematic reviews, with or without meta-analysis, limited to the journal BMC Cancer up to November 8, 2017

Suggested guidelines for the systematic review process

Given the proliferation of original studies as well as systematic reviews, with or without meta-analysis, on the same topic, it seems sensible to first conduct a SRPSR, with or without meta-analysis, before making any decision regarding the conduct of an original or updated systematic review on that topic. While previous guidelines for conducting a SRPSR, also known as 'umbrella reviews', 'overviews of reviews', 'reviews of reviews', 'summaries of systematic reviews', 'syntheses of reviews', and 'meta-reviews', have been suggested [ 4 , 7 , 8 ], there appears to be a lack of detail, as well as of stepwise guidance in one document, regarding (1) the process for conducting and publishing a SRPSR, (2) the decision-making process for whether an original or updated systematic review, with or without meta-analysis, is needed, and (3) the process for conducting and publishing one's own systematic review, with or without meta-analysis. In this article, the authors draw upon their own experiences as well as previous research to address this gap, providing a practical, stepwise approach for systematically reviewing the literature and publishing the results. Broadly, this process tentatively starts with the registration of a protocol for a SRPSR and possibly ends with the publication of an original systematic review, with or without meta-analysis, in a peer-reviewed journal. The proposed stepwise approach is shown in Fig. 3, with a more detailed, but not exhaustive, description following. The focus of the current article is on the systematic review process as applied to studies relevant to cancer in health care settings, although much of this information can be applied to other health conditions and settings.
References and additional files that provide more detailed information on current methods for reporting and conducting systematic reviews, with or without meta-analysis, are included, as well as a revised checklist specific to SRPSR. We hope that this stepwise document will serve as a guide for producers (authors and journal editors), consumers (researchers, health care practitioners, guideline developers), and funding agencies with respect to producing high-quality systematic reviews that have a meaningful impact on the field of cancer.

Fig. 3 Suggested stepwise approach for the systematic review process

Step 1. Register protocol for systematic review of previous systematic reviews, with or without meta-analysis, in trial registry

Similar to clinical trials as well as original systematic reviews, with or without meta-analysis, prospectively registering a protocol for a SRPSR in a systematic review registry such as the International Prospective Register of Systematic Reviews (PROSPERO, https://www.crd.york.ac.uk/prospero/ ) is an important first step when conducting a SRPSR, with or without meta-analysis. The assumption here is that no previous SRPSR exists on the topic of interest but that previous, original systematic reviews do exist. This determination should be made based on a preliminary search of PROSPERO as well as other databases, for example PubMed, since not all SRPSR, or original systematic reviews, are registered in PROSPERO. If no previous systematic reviews exist, one could then move on to developing and registering the protocol for one's own systematic review, with or without meta-analysis, assuming sufficient rationale is provided for it. A search of the PROSPERO registry on November 11, 2017 using the keyword "cancer" yielded 3615 registered protocols. While PROSPERO was originally intended for original systematic reviews, with or without meta-analysis, it also allows for registration of SRPSR [ 6 ]. Generally, registration for a SRPSR should take approximately 30 min to complete and includes specific items to address, some of which may not be applicable to one's own SRPSR [ 9 ]. Ideally, registration should occur prior to screening studies against one's inclusion criteria for a SRPSR [ 9 ]. When reporting the proposed electronic database search strategies, one should make sure to include the terms, or some form(s) of the terms, 'systematic review' and 'meta-analysis'. This will help reduce the number of false positives when searching for previous systematic reviews.
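As a sketch of the search-strategy advice above, review-type terms can be appended to any topic search before it is run. The `[Title/Abstract]` field tag is standard PubMed syntax, but the helper name and the topic terms below are illustrative assumptions, not part of the authors' method:

```python
def build_review_search(topic_terms,
                        review_terms=("systematic review", "meta-analysis")):
    """Combine topic terms with review-type terms so a database search
    for previous systematic reviews returns fewer false positives."""
    topic = " OR ".join(f'"{t}"[Title/Abstract]' for t in topic_terms)
    review = " OR ".join(f'"{r}"[Title/Abstract]' for r in review_terms)
    return f"({topic}) AND ({review})"

# Example: searching for prior reviews of exercise and cancer-related fatigue
query = build_review_search(["cancer-related fatigue", "exercise"])
print(query)
```

The same string can be pasted into the PubMed search box or passed to a programmatic search client; restricting the review terms to titles and abstracts keeps incidental mentions in full text from inflating the results.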
Similar to original systematic reviews, the major reasons for registration, described in detail elsewhere [ 9 ], include avoiding bias when conducting a review as well as avoiding unintended duplication of effort [ 5 ]. The registration of a SRPSR can benefit a number of entities. These include (1) researchers by making others aware of their work, (2) commissioning and funding organizations by avoiding unplanned duplication of effort, (3) guideline developers in the planning and development of guidelines, (4) journal editors as a protective measure against reporting biases as well as a means to enhance the peer review process, and (5) methodologists by providing information about how researchers design, conduct and report their SRPSR [ 9 ].

Step 2. Publish protocol for SRPSR, with or without meta-analysis, in journal

The next step would be to submit one's SRPSR protocol for publication consideration in a peer-reviewed journal. For example, both BMJ Open and Systematic Reviews currently allow the submission of protocols for publication consideration, and it is suggested that others, including BMC Cancer, allow the same. Submitting a protocol for publication consideration can increase awareness of the planned study above and beyond PROSPERO registration, as well as improve the protocol based on feedback from reviewers and editors. In addition, this allows one to provide greater detail than is afforded in a registry such as PROSPERO, although at least a brief description of all planned activities should still be reported in the registry. If the review is modified based on feedback from reviewers, the authors should go back and amend their protocol in PROSPERO, or whichever registry the review is registered in. When submitting one's SRPSR protocol for publication consideration in a journal, the registry number should be included, along with a protocol checklist for the SRPSR. Unfortunately, while previous guidelines for the conduct and reporting of a SRPSR have been suggested [ 4 , 7 ], the authors are not aware of any protocol-based checklists for a SRPSR similar to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol checklist (PRISMA-P) [ 10 , 11 ]. Given this, a suggested PRISMA-based protocol checklist for a SRPSR, developed by the authors and termed PRISMA-SRPSR-P, can be found in Additional file 1. The inclusion of a protocol checklist can help reviewers, editors, and authors ensure that all relevant items of a SRPSR are adequately addressed. For authors, the checklist also leads back to both the appropriate conduct and reporting of a SRPSR.

Broadly, the authors believe that the three primary aims of a SRPSR, with or without meta-analysis, are to (1) provide a summary of the overall results from each included systematic review, with or without meta-analysis, possibly alongside additional statistical analyses, (2) determine the quality and strength of evidence of the prior reviews, and (3) determine whether or not an original or updated systematic review, with or without meta-analysis, is needed. We describe the first two aims below, with a separate detailed description of aim three in step 4.

For aim 1, summarizing the overall results from previous systematic reviews, with or without meta-analysis, may be done at two levels: (1) a description of the characteristics of the SRPSR itself, and (2) a description of the characteristics of each included systematic review. For the former, items to report include the number of previous systematic reviews that met the inclusion criteria, as well as the number excluded. For each included systematic review, the presence or absence of a registration number should be recorded and reported, as well as the methods used to assess risk of bias in the included original studies. In addition, a study characteristics table, along with a more detailed description in the text, should be planned. Information to provide should include, but not necessarily be limited to, the following: (1) the reference for each previous systematic review, including the year in which the review was published, (2) the country in which the review was conducted, (3) the number of studies and participants included, including the sex of the participants, (4) the types of participants included, for example, breast cancer, colorectal cancer, etc., (5) the types of interventions, if any, that were included, and (6) the method(s) of assessment for the primary outcome(s) of interest in the original systematic review.
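To make the two-level description concrete, the characteristics table can be sketched as a list of records, one per included review, with the SRPSR-level counts derived from it. The field names and values below are hypothetical placeholders, not data from any actual review:

```python
# One record per included systematic review; values are illustrative only.
included_reviews = [
    {"reference": "Review A (2015)", "country": "United Kingdom",
     "registered": True, "n_studies": 12, "n_participants": 1450,
     "population": "breast cancer", "intervention": "exercise",
     "outcome_assessment": "self-reported fatigue questionnaire"},
    {"reference": "Review B (2017)", "country": "Canada",
     "registered": False, "n_studies": 8, "n_participants": 900,
     "population": "colorectal cancer", "intervention": "exercise",
     "outcome_assessment": "self-reported fatigue questionnaire"},
]

def srpsr_summary(reviews):
    """SRPSR-level characteristics derived from the review-level records."""
    return {
        "reviews_included": len(reviews),
        "reviews_registered": sum(r["registered"] for r in reviews),
        "total_studies": sum(r["n_studies"] for r in reviews),
        "total_participants": sum(r["n_participants"] for r in reviews),
    }
```

Keeping the extraction in a structured form like this makes it trivial to regenerate the characteristics table, and the summary counts, whenever a review is added or dropped during screening.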
If a meta-analysis was included, data to report should include, but not necessarily be limited to, (1) the statistical methods used in the original systematic review to calculate effect sizes (original metric, standardized mean difference, odds ratios, relative risks, etc.), (2) statistical methods used to pool results (random-effects, inverse heterogeneity, confidence intervals, prediction intervals, etc.), (3) methods used for the assessment of heterogeneity and inconsistency (Q statistic, I-squared, meta-regression, influence analysis), (4) methods used to examine for small-study effects (quantitative tests, funnel plots, etc.), and (5) any cumulative meta-analysis. While previous guidelines for conducting a SRPSR do not recommend this [ 4 ], one might also be interested in describing, a priori, any additional planned analyses that might enhance the robustness and applicability of findings not reported in the original meta-analyses. These may include prediction intervals [ 12 , 13 ], influence analysis, cumulative meta-analysis [ 14 ], percentile improvement [ 15 , 16 ], and number needed to treat (NNT) [ 7 , 17 , 18 , 19 , 20 ]. In addition, the authors of a SRPSR might be interested in pooling results separately for each individual meta-analysis using pooling models that they consider more robust than those used in the original meta-analyses. Furthermore, it is suggested that producers of a SRPSR recalculate the results of all included meta-analyses to ensure that no errors were made in the originals. Finally, one might also be interested in pooling the results from each study nested within each meta-analysis into one 'mega' meta-analysis [ 21 ]. If this is done, however, care should be taken to ensure that the same original study is not included more than once, since the included meta-analyses most likely comprised some of the same studies.
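The pooling quantities named above (random-effects weights, the Q statistic, I-squared, tau-squared, and a prediction interval) can be illustrated with a minimal DerSimonian-Laird sketch. A real SRPSR would use an established package (e.g. metafor in R or statsmodels in Python), and the normal-approximation prediction interval below is a simplification of the t-based formula usually recommended:

```python
import math

def dersimonian_laird(effects, variances):
    """Minimal random-effects pooling: returns the pooled effect with a
    95% confidence interval, the Q statistic, I-squared (%), the
    between-study variance tau^2, and a 95% prediction interval."""
    w = [1.0 / v for v in variances]          # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)             # DerSimonian-Laird estimator
    i2 = max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    # Prediction interval for the effect expected in a new study
    # (normal approximation of the usual t-based formula).
    pi_se = math.sqrt(se ** 2 + tau2)
    pi = (pooled - 1.96 * pi_se, pooled + 1.96 * pi_se)
    return {"pooled": pooled, "Q": q, "I2": i2, "tau2": tau2,
            "ci": ci, "prediction_interval": pi}
```

Running this per included meta-analysis is one way to perform the recalculation check suggested above: if the recomputed pooled estimate and heterogeneity statistics disagree with the published values, the discrepancy can be flagged and investigated.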

A second suggested aim is to assess the quality and/or risk of bias of each systematic review, with or without meta-analysis, as well as the strength of evidence of each review. One commonly used instrument for assessing the quality of systematic reviews, with or without meta-analysis, is A MeaSurement Tool to Assess systematic Reviews (AMSTAR), an 11-item instrument in which responses are coded as ‘yes’, ‘no’, ‘can’t answer’ or ‘not applicable’ [22, 23, 24]. To assess risk of bias, a more recent instrument, the Risk of Bias in Systematic Reviews (ROBIS) tool, has been developed [25]. This instrument is completed in three phases: (1) assessing relevance (optional), (2) identifying concerns with the review process, and (3) judging the risk of bias [25].
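As an illustration, AMSTAR ratings for each included review can be tallied programmatically. The responses below are hypothetical, and the low/moderate/high bands shown are a commonly used convention, not part of the instrument itself:

```python
# Hypothetical AMSTAR responses for one included systematic review
# (11 items, each coded 'yes', 'no', "can't answer" or 'not applicable').
responses = ["yes", "yes", "no", "yes", "can't answer", "yes",
             "yes", "not applicable", "yes", "no", "yes"]
assert len(responses) == 11

# Only 'yes' responses count toward the total score (0-11).
score = sum(r == "yes" for r in responses)

# A commonly used (but unofficial) banding of total scores.
if score >= 8:
    quality = "high"
elif score >= 4:
    quality = "moderate"
else:
    quality = "low"

print(score, quality)  # 7 'yes' responses -> moderate
```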

In addition to assessing the quality and/or risk of bias of each included systematic review, one may also want to examine the strength of evidence from each systematic review. One commonly used instrument is the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) instrument [26]. This tool evaluates the certainty of evidence for each pre-specified outcome [26]. Levels of evidence are rated as high, moderate, low or very low [27]. A study’s initial ranking is established by the study design but may be increased or decreased depending on other factors, including (1) risk of bias, (2) inconsistency, (3) indirectness, (4) imprecision, (5) publication bias, (6) size of the effect, (7) dose-response, and (8) residual confounding [27]. For SRPSR that include or focus on network meta-analyses [28], an alternative instrument based on the GRADE methodology has also been developed [29]. Based on the currently available evidence, it is suggested that the GRADE instrument be used to adjudicate between previous systematic reviews with meta-analyses that reach different results and/or conclusions regarding, for example, the effects of an intervention on the outcome of interest. Details regarding GRADE are described elsewhere [27].
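The up- and down-rating logic can be sketched as a simple tally. This is an illustrative simplification, the actual GRADE judgments are qualitative, and the function and parameter names here are assumptions, not part of the instrument:

```python
# Illustrative sketch of GRADE-style certainty rating: randomized designs
# start at 'high', observational designs at 'low'; each serious concern
# (risk of bias, inconsistency, indirectness, imprecision, publication
# bias) moves the rating down one level, and each strengthening factor
# (large effect, dose-response, residual confounding working against the
# effect) moves it up one, bounded by the scale.
LEVELS = ["very low", "low", "moderate", "high"]

def certainty(design: str, downgrades: int = 0, upgrades: int = 0) -> str:
    start = 3 if design == "randomized" else 1  # index into LEVELS
    idx = max(0, min(len(LEVELS) - 1, start - downgrades + upgrades))
    return LEVELS[idx]

# e.g., randomized trials with serious risk of bias plus imprecision:
print(certainty("randomized", downgrades=2))  # low
```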

Step 3. Conduct and publish SRPSR, with or without meta-analysis, in a journal

After registering the protocol for a SRPSR in a systematic review registry such as PROSPERO and publishing the protocol in a peer-reviewed journal, the next step would be to conduct the SRPSR, with or without meta-analysis, and submit it for publication consideration in a peer-reviewed journal. As with the protocol for a SRPSR, no PRISMA guidelines or checklists currently exist for reporting the results of a SRPSR. Therefore, a suggested PRISMA-based checklist for a SRPSR, developed by the authors and termed PRISMA-SRPSR, is shown in Additional file 2. Assuming that the protocol for a SRPSR was published in a peer-reviewed journal, another potential issue is self-plagiarism, given that the rationale and methods will already have been published. However, this should only be an issue if authors fail to disclose to editors the existence of previous, related publications and do not explain the reasons for any overlap, and/or editors fail to read the plagiarism software’s similarity report properly. Because plagiarism software calculates a percentage of similarity, reports also need to be checked manually to determine the reason for any potentially high scores. To minimize the perception of self-plagiarism, a cover letter to the editor should include a statement and citation for the previously published protocol, along with a description of the extent of the overlap between the protocol and the submitted SRPSR. In addition, the protocol should be referenced in the Methods section of the submitted paper.

Step 4. Decide whether another systematic review, with or without meta-analysis, is needed

The third aim of a SRPSR, with or without meta-analysis, is a decision about whether another systematic review, with or without meta-analysis, is needed. Given the potential confusion and lack of a true consensus on what constitutes an updated versus a new systematic review, the term ‘another’ may be preferred. If no previous systematic review, with or without meta-analysis, has been identified, one may then conduct an original one (see steps 5-7). While there is currently no consensus on the single best approach for determining when another systematic review should be conducted, at least three different groups have provided guidelines, all with an organizational rather than individual-author focus [30, 31, 32]. The Cochrane Collaboration currently recommends that another systematic review be based on needs and priority [30]. This decision rests on three primary factors: (1) strategic importance, (2) practical aspects of organizing the review, and (3) the impact of another review [30]. A decision is then made to prioritize the review, postpone it, or no longer require it. In the United States, the Agency for Healthcare Research and Quality takes a needs-based approach that focuses on stakeholder impact as well as the currency of, and need for, the review [31]. Based on these general criteria, a decision is made to create, archive, or continue surveillance [31]. More recently, the Panel for Updating Guidance for Systematic Reviews (PUGS) developed a consensus and checklist for when and how to conduct another systematic review [32]. This process includes (1) assessing the currency of the review, assuming one exists, (2) identifying relevant new methods, studies, or other information that may warrant another review, and (3) assessing the potential effect of another review [32].
From the authors’ perspective, the PUGS guidelines and checklist may be the most appropriate approach for researchers interested in conducting another systematic review, with or without meta-analysis.

Step 5. Register protocol for own systematic review, with or without meta-analysis in trial registry

If a decision is made that another systematic review, with or without meta-analysis, is needed, the next suggested step would be to develop and submit one’s protocol to a systematic review registry such as PROSPERO. The reasons for this are similar to those described for registering the protocol for a SRPSR, with or without meta-analysis. In addition, one should reference and describe the previous SRPSR to help justify the need for another systematic review, with or without meta-analysis. Furthermore, authors should determine, a priori, whether they plan to include a meta-analysis in their systematic review. A meta-analysis should usually be planned, with the protocol amended if one cannot be conducted.

Step 6. Publish protocol for own systematic review, with or without meta-analysis, in journal

The next logical step would be to submit the completed protocol, including the registration number, to a peer-reviewed journal for publication consideration. Steps 6 and 7 are similar to steps 2 and 3, except that the focus is now on retrieving individual studies to include in a systematic review rather than reviewing multiple systematic reviews. In addition to the protocol, it is suggested that authors include, and editors require, the previously developed PRISMA protocol checklist (Additional file 3). A detailed description of the items in the checklist can be found elsewhere [10, 11]. The inclusion of a protocol will aid authors, reviewers and editors with respect to the work proposed. When writing the protocol, it is also important to provide a rationale for the methods one plans to use.

Step 7. Conduct and publish own systematic review, with or without meta-analysis, in journal

The next step in the process is to conduct and publish one’s own systematic review, with or without meta-analysis. When submitting the manuscript for publication consideration, authors should include, and journals should require, the appropriate PRISMA checklist for the type of review conducted. This will also aid in the conduct of the review itself. Additional files 4, 5, and 6 include the PRISMA checklists for an aggregate data meta-analysis, network meta-analysis, and individual participant data meta-analysis, respectively, with elaboration on these items described elsewhere [33, 34, 35, 36, 37, 38]. Any previously published work (the protocol for the SRPSR, the SRPSR itself, and the protocol for one’s own systematic review) should also be cited, along with all relevant protocol registration numbers. In addition, the issue of potential self-plagiarism should be handled similarly to that described in step 3. If the protocol for one’s own systematic review has been published, both the Introduction and Methods sections should be similar to what was published in the original protocol. However, these sections may warrant updating based on new information that has emerged since the protocol was published, and any changes to the Methods may require updating the registered protocol. What will be new are the Results, Discussion and Conclusions sections of the paper. The information reported in the Results section should be based on the planned analyses. If any analysis intended to explain heterogeneity is reported, for example meta-regression, the Discussion section should clearly point out that because studies are not randomly assigned to covariates in a meta-analysis, such analyses are observational in nature. Consequently, the results do not support causal inferences; rather, they should be considered hypothesis-generating and thus tested in original studies.
In addition, because most meta-analytic studies use aggregate data and conduct a large number of statistical tests, it should be pointed out that some statistically significant findings could be nothing more than the play of chance, assuming that no adjustments for multiple testing were made. Finally, items to include in the Discussion section could include (1) a description of the overall findings of the review, (2) implications for research, (3) implications for practice, (4) implications for policy, and (5) strengths and limitations of the review. The Conclusions section could briefly state the main findings of the review as well as any need for future research on the topic examined.
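The scale of this multiplicity problem is easy to quantify. The sketch below (illustrative, with an assumed alpha of 0.05) shows the familywise chance of at least one false-positive result across independent tests, together with the corresponding Bonferroni-adjusted threshold:

```python
# Probability of at least one false-positive 'significant' result when
# m independent true-null hypotheses are each tested at level alpha.
def familywise_error(m: int, alpha: float = 0.05) -> float:
    return 1.0 - (1.0 - alpha) ** m

# Bonferroni correction: test each hypothesis at alpha / m instead.
def bonferroni_threshold(m: int, alpha: float = 0.05) -> float:
    return alpha / m

# With 20 tests, the chance of at least one spurious finding is about 64%.
print(round(familywise_error(20), 2))  # 0.64
print(bonferroni_threshold(20))        # 0.0025
```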

Future directions

The information provided in this document offers suggested guidance, starting with the potential conduct of a SRPSR and possibly ending with one’s own systematic review, with or without meta-analysis. In the future, it is expected that researchers will be faced with multiple SRPSRs, creating another level of synthesis in cancer research as well as other areas. From a synthesis perspective, aggregate data meta-analyses with pairwise comparisons will probably continue to be the most common type of meta-analysis in the field of cancer as well as other fields. However, while the number of individual participant data (IPD) meta-analyses is anticipated to continue to increase, the authors do not expect it to ever exceed the number of aggregate data meta-analyses, because of the increased resources required as well as the challenges in obtaining individual participant data from investigators. For example, the costs associated with an individual participant data meta-analysis of 11 studies on oral contraceptive use and the risk for ovarian cancer were reported to be $259,300, approximately five times the cost of an aggregate (i.e., summary data) meta-analysis [39]. With respect to the availability of individual participant data, Nevitt et al. recently reported that only 25% of published IPD meta-analyses had access to all IPD [40]. Given the potential benefits with respect to treatment decisions, it is also expected that future meta-analyses will rely more heavily on network meta-analysis. This approach allows for the integration of multiple treatments based on direct and indirect evidence for the outcome(s) of interest [41, 42, 43, 44]. For meta-analyses that include two or more correlated dependent variables, for example, sleep and fatigue in cancer patients [45], an increase in the use of multivariate meta-analyses is also expected [42, 46].
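The core idea behind network meta-analysis, combining direct and indirect evidence, can be illustrated with a Bucher-style adjusted indirect comparison. The effect estimates below are hypothetical, and the inverse-variance combination shown is one standard approach, not the only one:

```python
import math

# Hypothetical direct estimates (log odds ratios) with standard errors:
# A vs B from one set of trials, B vs C from another.
d_ab, se_ab = -0.40, 0.15
d_bc, se_bc = -0.25, 0.20

# Indirect estimate of A vs C via the common comparator B:
# effects add, and (assuming independence) variances add.
d_ac_indirect = d_ab + d_bc
se_ac_indirect = math.sqrt(se_ab ** 2 + se_bc ** 2)

# If direct A-vs-C evidence also exists, combine the direct and
# indirect estimates by inverse-variance weighting.
d_ac_direct, se_ac_direct = -0.55, 0.30
w_dir = 1.0 / se_ac_direct ** 2
w_ind = 1.0 / se_ac_indirect ** 2
d_ac_mixed = (w_dir * d_ac_direct + w_ind * d_ac_indirect) / (w_dir + w_ind)
se_ac_mixed = math.sqrt(1.0 / (w_dir + w_ind))
```

The mixed estimate lands between the direct and indirect estimates and has a smaller standard error than either, which is the efficiency gain that motivates borrowing indirect evidence.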
Furthermore, it is expected that future meta-analytic research will combine network and multivariate meta-analysis so that multiple treatments and multiple outcomes can be examined in the same analysis [42, 47, 48, 49, 50, 51]. Finally, an area of increasing meta-analytic research is genetic association studies. A PubMed search of meta-analytic genetic association citations limited to cancer, conducted up to November 8, 2017, demonstrated an increase from 12 citations for the five-year period 1992 to 1996 to 2684 for the period 2013 to 2017. Issues and methods related to the conduct of meta-analyses of genetic association studies have been described in detail elsewhere [52, 53, 54, 55, 56, 57].

Limitations of suggested approach

One potential limitation of this stepwise approach is the time lag involved. For example, while obtaining a registration number for a protocol in PROSPERO occurs rather quickly, usually within a week based on the authors’ experience, the submission, review, and eventual publication of the protocol and article for both a SRPSR and an actual systematic review, with or without meta-analysis, would most likely take several months each. As a result, and despite registering one’s protocol(s) in a registry such as PROSPERO, the research may already have been undertaken by another research group. Ideally, all journals should require that any submitted work of this nature include a registration number, and editors should follow up with the registry to ensure that similar work has not been previously planned and/or conducted. However, that is probably not realistic, would be difficult to enforce, and may stifle work that appears similar on the surface but is in fact quite different and/or unique. A second potential limitation, not unique to this approach, is the difficulty of determining when an updated systematic review is needed. While suggestions have been provided in this article, a large degree of subjectivity is still involved. To a lesser extent, the same is true for a systematic review that has never been conducted on the topic of interest.

Conclusions

The increasing number of systematic reviews, with or without meta-analysis, on the same topic suggests that there is now a need for a SRPSR, as well as a decision regarding whether an original or updated systematic review, with or without meta-analysis, is appropriate. The suggested stepwise approach presented in this article provides a realistic way of addressing this need and should be useful to anyone who produces or consumes cancer-related systematic reviews. Ultimately, it is hoped that this work will contribute to improvements in evidence-based information aimed at the prevention and treatment of cancer.

Abbreviations

AMSTAR: A MeaSurement Tool to Assess systematic Reviews
GRADE: Grading of Recommendations, Assessment, Development and Evaluations
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analysis
ROBIS: Risk of Bias in Systematic Reviews
SRPSR: Systematic review of previous systematic reviews

References

Davidoff F, Haynes B, Sackett D, Smith R. Evidence based medicine. BMJ. 1995;310(6987):1085–6.

Sacks HS, Berrier J, Reitman D, Ancona-Berk VA, Chalmers TC. Meta-analysis of randomized controlled trials. N Engl J Med. 1987;316:450–5.

Smith V, Devane D, Begley CM, Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Med Res Methodol. 2011;11(1):15.

Aromataris E, Fernandez R, Godfrey CM, Holly C, Khalil H, Tungpunkom P. Summarizing systematic reviews: methodological development, conduct and reporting of an umbrella review approach. Int J Evid Based Healthc. 2015;13(3):132–40.

Ioannidis JPA. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q. 2016;94(5):485–514.

Kelley GA, Kelley KS. Exercise and cancer-related fatigue in adults: a systematic review of previous systematic reviews with meta-analyses. BMC Cancer. 2017;17(1):693.

Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration; 2011. www.cochrane-handbook.org.

Sarrami-Foroushani P, Travaglia J, Debono D, Clay-Williams R, Braithwaite J. Scoping meta-review: introducing a new methodology. Clin Transl Sci. 2015;8(1):77–81.

Stewart L, Moher D, Shekelle P. Why prospective registration of systematic reviews makes sense. Syst Rev. 2012;1:7.

Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, Shekelle P, Stewart LA. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1.

Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, Shekelle P, Stewart LA. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. Br Med J. 2015;349:g7647.

Borenstein M, Higgins JP, Hedges LV, Rothstein HR. Basics of meta-analysis: I² is not an absolute measure of heterogeneity. Res Synth Methods. 2017;8(1):5–18.

Higgins JP, Thompson SG, Spiegelhalter DJ. A re-evaluation of random-effects meta-analysis. J R Stat Soc Series A. 2009;172(1):137–59.

Lau J, Schmid CH, Chalmers TC. Cumulative meta-analysis of clinical trials builds evidence for exemplary medical care: the Potsdam international consultation on meta-analysis. J Clin Epidemiol. 1995;48(1):45–57.

Cohen J. Statistical power analysis for the behavioral sciences. New York: Academic Press; 1988.

Durlak JA. How to select, calculate, and interpret effect sizes. J Pediatr Psychol. 2009;34(9):917–28.

Kraemer HC, Kupfer DJ. Size of treatment effects and their importance to clinical research and practice. Biol Psychiatry. 2006;59(11):990–6.

da Costa BR, Rutjes AW, Johnston BC, Reichenbach S, Nuesch E, Tonia T, Gemperli A, Guyatt GH, Juni P. Methods to convert continuous outcomes into odds ratios of treatment response and numbers needed to treat: meta-epidemiological study. Int J Epidemiol. 2012;41(5):1445–59.

Froud R, Eldridge S, Lall R, Underwood M. Estimating the number needed to treat from continuous outcomes in randomised controlled trials: methodological challenges and worked example using data from the UK back pain exercise and manipulation (BEAM) trial. BMC Med Res Methodol. 2009;9:35.

Furukawa TA, Guyatt GH, Griffith LE. Can we individualize the 'number needed to treat'? An empirical study of summary effect measures in meta-analyses. Int J Epidemiol. 2002;31(1):72–6.

Kelley GA, Kelley KS. Exercise and sleep: a systematic review of previous meta-analyses. J Evid Based Med. 2017;10:11.

Shea BJ, Bouter LM, Peterson J, Boers M, Andersson N, Ortiz Z, Ramsay T, Bai A, Shukla VK, Grimshaw JM. External validation of a measurement tool to assess systematic reviews (AMSTAR). PLoS One. 2007;2(12):e1350.

Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, Porter AC, Tugwell P, Moher D, Bouter LM. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7:10.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

Whiting P, Savovic J, Higgins JP, Caldwell DM, Reeves BC, Shea B, Davies P, Kleijnen J, Churchill R. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69:225–34.

Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, Norris S, Falck-Ytter Y, Glasziou P, DeBeer H, et al. GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. J Clin Epidemiol. 2011;64(4):383–94.

Balshem H, Helfand M, Schunemann HJ, Oxman AD, Kunz R, Brozek J, Vist GE, Falck-Ytter Y, Meerpohl J, Norris S, et al. GRADE guidelines: 3. Rating the quality of evidence. J Clin Epidemiol. 2011;64(4):401–6.

Catala-Lopez F, Tobias A, Cameron C, Moher D, Hutton B. Network meta-analysis for comparing treatment effects of multiple interventions: an introduction. Rheumatol Int. 2014;34(11):1489–96.

Salanti G, Del Giovane C, Chaimani A, Caldwell DM, Higgins JPT. Evaluating the quality of evidence from a network meta-analysis. PLoS One. 2014;9(7).

Cochrane Collaboration. Editorial and publishing policy resource. 2016. http://community.cochrane.org/editorial-and-publishing-policy-resource . Accessed 15 Nov 2017.

Shojania KG, Sampson M, Ansari MT, Ji J, Garritty C, Rader T, Moher D. Updating systematic reviews. Technical Review No. 16. Rockville: Agency for Healthcare Research and Quality; 2007.

Garner P, Hopewell S, Chandler J, MacLehose H, Schunemann HJ, Akl EA, Beyene J, Chang S, Churchill R, Dearness K, et al. When and how to update systematic reviews: consensus and checklist. BMJ. 2016;354:i3507.

Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.

Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–9. W264

Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. 2009;339:b2535.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol. 2009;62(10):e1–34.

Hutton B, Salanti G, Caldwell DM, Chaimani A, Schmid CH, Cameron C, Ioannidis JP, Straus S, Thorlund K, Jansen JP, et al. The PRISMA extension statement for reporting of systematic reviews incorporating network meta-analyses of health care interventions: checklist and explanations. Ann Intern Med. 2015;162(11):777–84.

Stewart LA, Clarke M, Rovers M, Riley RD, Simmonds M, Stewart G, Tierney JF. Preferred reporting items for systematic review and meta-analyses of individual participant data: the PRISMA-IPD statement. JAMA. 2015;313(16):1657–65.

Steinberg KK, Smith SJ, Stroup DF, Olkin I, Lee NC, Williamson GD, Thacker SB. Comparison of effect size estimates from a meta-analysis of summary data from published studies and from a meta-analysis using individual patient data for ovarian cancer studies. Am J Epidemiol. 1997;145:917–25.

Nevitt SJ, Marson AG, Davie B, Reynolds S, Williams L, Smith CT. Exploring changes over time and characteristics associated with data retrieval across individual participant data meta-analyses: systematic review. BMJ. 2017;357:j1390.

Rouse B, Chaimani A, Li TJ. Network meta-analysis: an introduction for clinicians. Intern Emerg Med. 2017;12(1):103–11.

Riley RD, Jackson D, Salanti G, Burke DL, Price M, Kirkham J, White IR. Multivariate and network meta-analysis of multiple outcomes and multiple treatments: rationale, concepts, and examples. Br Med J. 2017;358:j3932.

Madden LV, Piepho HP, Paul PA. Statistical models and methods for network meta-analysis. Phytopathology. 2016;106(8):792–806.

Zhang J, Carlin BP, Neaton JD, Soon GG, Nie L, Kane R, Virnig BA, Chu HT. Network meta-analysis of randomized clinical trials: reporting the proper summaries. Clin Trials. 2014;11(2):246–62.

Otte JL, Carpenter JS, Manchanda S, Rand KL, Skaar TC, Weaver M, Chernyak Y, Zhong X, Igega C, Landis C. Systematic review of sleep disorders in cancer patients: can the prevalence of sleep disorders be ascertained? Cancer Med. 2015;4(2):183–200.

Mavridis D, Salanti G. A practical introduction to multivariate meta-analysis. Stat Methods Med Res. 2013;22(2):133–58.

Efthimiou O, Mavridis D, Riley RD, Cipriani A, Salanti G. Joint synthesis of multiple correlated outcomes in networks of interventions. Biostatistics. 2015;16(1):84–97.

Efthimiou O, Mavridis D, Cipriani A, Leucht S, Bagos P, Salanti G. An approach for modelling multiple correlated outcomes in a network of interventions using odds ratios. Stat Med. 2014;33(13):2275–87.

Hong H, Carlin BP, Shamliyan TA, Wyman JF, Ramakrishnan R, Fo S, Kane RL. Comparing bayesian and frequentist approaches for multiple outcome mixed treatment comparisons. Med Decis Mak. 2013;33(5):702–14.

Hong H, Chu H, Zhang J, Carlin BP. A Bayesian missing data framework for generalized multiple outcome mixed treatment comparisons. Res Synth Methods. 2016;7(1):6–22.

Jackson D, Bujkiewicz S, Law M, Riley RD, White IR. A matrix-based method of moments for fitting multivariate network meta-analysis models with multiple outcomes and random inconsistency effects. Biometrics. 2017. epub ahead of print.

Kavvoura FK, Ioannidis JP. Methods for meta-analysis in genetic association studies: a review of their potential and pitfalls. Hum Genet. 2008;123(1):1–14.

Lee YH. Meta-analysis of genetic association studies. Ann Lab Med. 2015;35(3):283–7.

Munafo MR, Flint J. Meta-analysis of genetic association studies. Trends Genet. 2004;20(9):439–44.

Evangelou E, Ioannidis JPA. Meta-analysis methods for genome-wide association studies and beyond. Nat Rev Genet. 2013;14(6):379–89.

Nakaoka H, Inoue I. Meta-analysis of genetic association studies: methodologies, between-study heterogeneity and winner's curse. J Hum Genet. 2009;54:615.

Salanti G, Sanderson S, Higgins J. Obstacles and opportunities in meta-analysis of genetic association studies. Genet Med. 2005;7(1):13–20.

Acknowledgements

Not applicable.

Funding

No funding was received for this work.

Availability of data and materials

Not applicable. This is not a data-based article. It is a review and guideline document.

Author information

Authors and affiliations

School of Public Health, Department of Biostatistics, Robert C. Byrd Health Sciences Center, West Virginia University, PO Box 9190, Morgantown, WV, 26506-9190, USA

George A. Kelley & Kristi S. Kelley

Contributions

GAK was responsible for the conception and design, acquisition of data, analysis and interpretation of data, drafting the initial manuscript and revising it critically for important intellectual content. KSK was responsible for the conception and design, acquisition of data, and reviewing all drafts of the manuscript. Both authors have read and approved the final manuscript.

Corresponding author

Correspondence to George A. Kelley .

Ethics declarations

Authors’ information

GAK has more than 20 years of successful experience in the design and conduct of all aspects of meta-analysis. With a unique background in applied biostatistics and meta-analysis, he has been an NIH-R01 funded principal investigator for approximately 20 years, with all funding aimed at conducting meta-analytic research. KSK has approximately 19 years of successful experience in conducting meta-analytic research in collaboration with GAK.

Ethics approval and consent to participate

Not applicable. This is a review and guideline document.

Consent for publication

Not applicable.

Competing interests

The first author (GAK) serves as a Statistical Advisor for BMC Cancer . Both authors declare that they have no other competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:.

PRISMA protocol checklist for SRPRS (PRISMA-SRPSR-P). This file includes a modified PRISMA protocol checklist specific to systematic reviews of previous systematic reviews. (DOC 82 kb)

Additional file 2:

PRISMA checklist for SRPRS. This file includes a modified PRISMA checklist specific to systematic reviews of previous systematic reviews (PRISMA-SRPSR). (DOC 60 kb)

Additional file 3:

PRISMA protocol checklist. This file includes the PRISMA protocol checklist for systematic review protocols. (DOC 81 kb)

Additional file 4:

PRISMA checklist for systematic reviews and meta-analyses. This file includes the PRISMA checklist for systematic reviews and meta-analyses, exclusive of network meta-analyses and IPD meta-analyses. (DOC 62 kb)

Additional file 5:

PRISMA checklist for systematic reviews and network meta-analyses. This file includes the PRISMA checklist for systematic reviews and network meta-analysis. (DOCX 156 kb)

Additional file 6:

PRISMA checklist for systematic reviews and IPD meta-analyses. This file includes the PRISMA checklist for systematic reviews and IPD meta-analyses. (DOCX 19 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Kelley, G.A., Kelley, K.S. Systematic reviews and cancer research: a suggested stepwise approach. BMC Cancer 18 , 246 (2018). https://doi.org/10.1186/s12885-018-4163-6

Received : 06 December 2017

Accepted : 22 February 2018

Published : 02 March 2018


Keywords

  • Systematic review
  • Meta-analysis

ISSN: 1471-2407

Cancer Research Catalyst The Official Blog of the American Association for Cancer Research

Two Years of Cancer Research Communications: A Conversation with the Journal’s Editors-in-Chief

In 2021, Lillian L. Siu, MD , and Elaine R. Mardis, PhD, FAACR , each received a call from AACR Chief Executive Officer Margaret Foti, PhD, MD (hc), asking if they’d be interested in serving as the inaugural co-editors-in-chief of Cancer Research Communications , the organization’s first open-access journal.

“I was, of course, flattered,” said Siu, who is an oncologist at the Princess Margaret Cancer Centre and a clinician-scientist at the University of Toronto. “I thought it made perfect sense for AACR to have an open access journal. Knowing that it would be alongside Elaine Mardis made it even more compelling, so I gladly agreed.”

Mardis also jumped at the opportunity.

“I’ve always had a strong commitment to open-access publishing, so I was really delighted to get the call asking about my interest and hearing that Lillian Siu, whom I’ve always held in very, very high regard, would also be invited to be a co-editor-in-chief,” said Mardis, who is co-executive director of the Institute for Genomic Medicine at Nationwide Children’s Hospital, a professor of pediatrics at The Ohio State University, and an AACR Past President.

Together, the pair have led Cancer Research Communications through its first two years of submissions, reviews, and publications, helping define the journal’s editorial ethos and place in the AACR portfolio of peer-reviewed journals. Under their leadership, the journal has published over 300 articles to date.

As Cancer Research Communications wrapped up its second year, Mardis and Siu offered their perspectives on the journal’s mission and top publications to date.

Photos of Dr. Elaine Mardis and Dr. Lillian Siu

How does Cancer Research Communications differ from other AACR journals and other cancer research journals in general?

Mardis: One clear area of distinction that we planned from the very beginning was the breadth of the journal. Cancer research encompasses multiple areas of expertise. It’s very interdisciplinary, so we wanted to capture that breadth in the content we publish. Beyond diverse research topics, we also have breadth in terms of the types of cancer research—everything from basic to clinical science plus correlative science coming out of clinical trials. It’s truly bench to bedside and back.

The other thing that distinguishes Cancer Research Communications from other journals of its type is that we have within our guidelines some unique aspects that I haven’t seen in other journals. For example, we’re interested in manuscripts that reproduce others’ results with different materials and experimental approaches.

Siu: I would say that in addition to the breadth of research we publish, another feature that sets us apart is that we’re interested in the kind of papers that do not necessarily go from beginning to end and address everything in a nice package. Perhaps they leave some questions for others to help fill in over time, but they have the scientific quality that is worthy of an AACR journal. We understand that it is not always possible to deliver a complete story in one manuscript.

Obviously, if a researcher thinks that additional experiments are achievable in a reasonable time and they’re ready to tackle them, then by all means, they should finish everything that they can and submit their study to a journal such as Cancer Discovery.

But if they think it would take a lot more time, resources, or infrastructure to tie up all the loose ends, and the results as they stand are interesting enough for others to be aware of and build upon, I think Cancer Research Communications would be a very good fit for it, assuming, of course, that the analyses and methods are sound.

Similarly, for negative studies or studies with small sample sizes, if the results answer a question that people have been asking repeatedly, we would consider publishing them. For these studies, the discussion section will be key for researchers to state the limitations and to put their findings into context for readers who are outside their field.

Can you highlight a few studies from the past two years that exemplify the journal’s niche?

Mardis: There are three studies that I think illustrate the breadth of inquiry that Cancer Research Communications supports, with the first uncovering basic features of tumor biology, the second having implications for clinical research, and the third using data science to understand clinical outcomes.

In the first study, researchers employed cutting-edge technologies, such as single-cell RNA-seq, CITE-seq, and mass cytometry, to examine the immune microenvironment in multiple myeloma. The novelty of their approach is that they looked at the intersection between all three types of data. This approach may be especially significant in multiple myeloma, where we don’t fully understand how the immune microenvironment contributes to disease progression. This paper defined some of the markers in the immune microenvironment that were associated with rapid progression in multiple myeloma patients and also highlighted key differences between these three cutting-edge technologies in terms of how they “perceive” differences in gene expression between cells in a tumor.

Another study examined genomic heterogeneity across 42 different tissue and blood samples from a single patient with a metastatic pulmonary atypical carcinoid. The main finding was that genetic variants that were shared across different metastatic sites could be detected in circulating tumor DNA, but emerging variants were not always detectable. Even though this study examined samples from just one patient, the finding has important implications. If we want to utilize circulating tumor DNA to study cancer evolution or monitor patient outcomes, we must recognize that there’s a level of sensitivity that we will have to transcend before we can detect emerging variants, which are often the ones that we most want to identify.

The last paper I want to mention examined sex-specific differences in brain tumors. This was a paper looking at these very specific differences between male and female patients with TP53-mutated brain cancer. The researchers found differential gain of function activity across groups and illustrated with a nice set of data the importance of sex as a biological variable, which I think often gets overlooked.

Siu: I’ll also highlight the study Dr. Mardis mentioned that sequenced samples from a single patient with a metastatic pulmonary atypical carcinoid. While this study only examined one patient, the authors went above and beyond a simple case report to conduct a very in-depth examination with multiregional sampling to provide valuable insights into how tumors evolve and metastasize. I think that’s something that sets this study apart.

In another paper that I would say exemplifies our niche, researchers performed a comprehensive immune profiling of localized leukoplakia (precancer changes in the oral cavity) with the goal of understanding the immune landscape of these lesions. We don’t have a lot of data in this area because patients with these precancer lesions are typically seen by dentists, rather than in cancer centers.

These types of data may help us understand how these lesions progress to cancer, uncover how to prevent this progression, or determine if the resulting tumors might be susceptible to immune checkpoint inhibition, for example. Moreover, this study sets the stage for others to perform similar types of analyses for precancer lesions at other sites, not just in the oral cavity.

Another study I would highlight examined the effect of combining an XPO1 inhibitor, which is a nuclear export inhibitor, with a KRAS G12C inhibitor in KRAS G12C-mutated tumor models. Responses to clinically approved KRAS G12C inhibitors tend to be fairly short-lived, and there’s a lot of interest in learning how to deepen responses. Having preclinical models to interrogate different combinations will help researchers discover ways to prolong the efficacy of these drugs in the clinic. Manuscripts reporting on preclinical models that explore and inform clinical evaluations fit the types of articles we are looking for, especially if new and improved models are used to address a research question that is not addressable with existing models.

What is your vision for the next two years—and beyond—of Cancer Research Communications?

Mardis: I would like to see us review and publish more research in the field of data science and how it impacts cancer research and clinical trials. Another area in which I’d love to see us grow is novel experimental models. There’s been a lot of skepticism of late around mouse models in particular and whether they’re reflective of human cancers. In response to that, there have been a number of efforts—some of which I’m aware of, many more of which I’m probably not—around patient-derived model systems.

We need to think outside the box to create model systems that don’t require years of careful breeding and genetic engineering to produce. I think organoid systems and tumoroid systems have started to fill that void, but we need to keep pushing the envelope. There has also been a lot of interest around tissue slice cultures to examine tumor growth kinetics and genetics in tandem with therapeutic responses. I would like to see us review and publish more of these studies.

Siu: Well, first and foremost, we want more people to submit their papers to us, and we want more researchers to serve as reviewers. We have published over 300 papers in two years, which I think is impressive for a new journal, but we would like to grow even more.

While we are proud to cover a range of article types, I think it would also be good for us to be remembered for something that sets us apart from other journals out there. In particular, I’d like us to be known as the journal that publishes studies that make perhaps incremental changes, but changes that are important nonetheless because they lead to more questions and stimulate other areas of research.

I think that is an important goal for us to achieve—to be a catalyst for the next big discovery.


NEWS FEATURE
30 April 2024

Do cutting-edge CAR-T-cell therapies cause cancer? What the data say

Cassandra Willyard

Cassandra Willyard is a science journalist in Madison, Wisconsin.

A composite microscope image of engineered T cells (grey) attacking a breast cancer cell. Credit: Steve Gschmeissner/Science Photo Library

US drug regulators dropped a bombshell in November 2023 when they announced an investigation into one of the most celebrated cancer treatments to emerge in decades. The US Food and Drug Administration (FDA) said it was looking at whether a strategy that involves engineering a person’s immune cells to kill cancer was leading to new malignancies in people who had been treated with it.

Bruce Levine, an immunologist at the University of Pennsylvania Perelman School of Medicine in Philadelphia who helped to pioneer the approach known as chimeric antigen receptor (CAR) T-cell therapy, says he didn’t hear the news until a reporter asked him for comments on the FDA’s announcement.

“Better get smart about it quick,” he remembers thinking.

Although the information provided by the FDA was thin at the time, the agency told reporters that it had observed 20 cases in which immune-cell cancers known as lymphomas had developed in people treated with CAR T cells. Levine, who is a co-inventor of Kymriah, the first CAR-T-cell therapy to be approved, started jotting down questions. Who were these patients? How many were there? And what other drugs had they received before having CAR-T-cell therapy?

The FDA has since documented more cases. As of 25 March, the agency had received 33 reports of such lymphomas among some 30,000 people who had been treated. It now requires all CAR-T therapies to carry a boxed warning on the drug’s packaging, which states that such cancers have occurred. And the European Medicines Agency has launched its own investigation. But many of the questions that Levine had in November remain unanswered. It is unclear how many, if any, of the observed cancers came directly from the manipulations made to the CAR T cells. A lot of cancer therapies carry a risk of causing secondary malignancies, and the treated individuals had received other therapies. As Crystal Mackall, a paediatric oncologist who heads the cancer immunotherapy programme at Stanford University in California, puts it: “Do you have a smoking gun?”

Scientists are now racing to determine whether the cellular therapy is driving these cancers or contributing in some way to their development. From the data available so far, the secondary cancers seem to be a rare phenomenon, and the benefits of CAR T cells still outweigh the risks for most prospective recipients. But it’s an important puzzle to solve so that researchers can improve and expand the use of these engineered cells in medicine. CAR-T-cell treatments were once reserved for people who had few other options for therapy. But the FDA has approved several of these treatments as a relatively early, second-line option for lymphoma and multiple myeloma. And some companies are working to expand the therapy’s repertoire to solid tumours, autoimmune diseases, ageing, HIV and more.

Aric Hall, a haematologist at the University of Wisconsin–Madison, says that despite the enthusiasm for CAR-T therapy, the technology is still new. “I used to joke that for the first ten years there were more review articles about CAR T than there were patients who had been treated by CAR T products,” he says. He adds that the risks might be rare, but as CAR-T therapy moves into a bigger pool of patients who aren’t desperately ill, the calculus could change. “The problem is rare risks become a bigger deal when patients have better options.”

Vector safety

Throughout the development of these blockbuster therapies, researchers had reason to think that CAR T cells could become cancerous. CAR-T therapies are personalized — created from a person’s own immune cells. Their T cells are extracted and then genetically modified in the laboratory to express a chimeric antigen receptor — or CAR — a protein that directs the T cell to the specific cells it is meant to kill. T cells sporting these receptors are made to multiply and grow in the lab, and physicians then infuse them back into the individual, where they start battling cancer cells. The six CAR-T-cell therapies currently approved in the United States and Europe target antigens on the immune system’s B cells, so they work only against B-cell malignancies — leukaemias, lymphomas and multiple myeloma. But researchers are aiming to develop CAR-T therapies that work on other kinds of cancer, and for other conditions.

The genetic engineering is the step that creates a risk of malignancy. All six FDA-approved CAR-T therapies rely on a retrovirus — typically a lentivirus, such as HIV, or a gammaretrovirus — to ferry the genetic information into the cell. Scientists remove the parts of the viral genome that allow the virus to replicate, making room for the gene they want the virus vector to carry. Once inside a cell, the virus inserts the gene for the CAR into the cell’s genome. But there isn’t a good way to control exactly where the gene goes. If it slips in near a gene that can promote cancer development and activates it, or if it deactivates a tumour-suppressing gene, that boosts the risk of causing a T-cell cancer (see ‘CAR-T concerns’).

CAR-T concerns: graphic that shows how CAR T cells are engineered for treatment, and how they could become cancerous themselves.

This phenomenon, known as insertional mutagenesis, is a risk with most gene therapies. About 20 years ago, for example, groups in London and Paris treated 20 infants who had severe combined immunodeficiency syndrome (SCID) with a gene therapy that used a retrovirus. The therapy worked for most participants, but the retrovirus switched on cancer genes in some. That activation led to leukaemia in five of the participants; four recovered and one died.

As a result, scientists have reworked the vectors to make them safer, ensuring that their genes don’t recombine, for example. The FDA recommends that CAR-T products undergo testing to prove that the vectors cannot replicate. “The scrutiny we’ve been under has been tremendous,” says Hans-Peter Kiem, an oncologist at the Fred Hutchinson Cancer Center in Seattle, Washington, who has studied viral vectors for decades. Many felt confident about using viral vectors in CAR-T therapies, because T cells are difficult to prod towards malignancy, says Marco Ruella, a haematologist at the Perelman School of Medicine. “Truly the general feeling was that, in T cells, lenti- and retroviruses are extremely safe.”

Search for the smoking gun

When the FDA issued its warning in November, it wasn’t clear what specific reports had prompted the agency to act, or whether the link was causal. Levine recruited some of the biggest names in CAR-T therapies to co-write a commentary on the matter and discuss some of the questions he still had 1. “I felt — we felt — that it was important to say, ‘Well, let’s take a step back for a minute and see what we really know,’” he says.

In January, the FDA released more information. In an article in the New England Journal of Medicine, Peter Marks and Nicole Verdun at the FDA’s Center for Biologics Evaluation and Research in Silver Spring, Maryland, revealed that the agency had received 22 reports of leukaemia out of more than 27,000 people treated with various CAR-T therapies 2. In three secondary cancers that were sequenced, the agency found that the cancerous T cells contained the CAR gene, “which indicates that the CAR-T product was most likely involved in the development of the T-cell cancer”, the authors wrote.

According to Paul Richards, a spokesperson for the FDA, 11 further reports of secondary cancer have since come in, as of 25 March. None of the extra cases has been confirmed as having the CAR gene, but neither are any of the cases so far definitively CAR-negative, Richards said in an e-mail. In many instances, the agency didn’t have a sample of the secondary cancer to analyse; in others, the genomic analysis isn’t yet complete. He adds that certain reports, specifically those positive for the CAR gene, “strongly suggest” that T-cell cancer should be considered a risk of the therapy.

But even when the CAR gene is present, proving causality can be tricky. In one case study, researchers in Australia described 3 a 51-year-old man who had been treated for multiple myeloma with a CAR-T therapy. The treatment was part of a clinical trial of Carvykti, made by Legend Biotech in Somerset, New Jersey, in partnership with the drug giant Johnson & Johnson. The treatment worked to clear his cancer, but five months later he developed an unusual, fast-growing bump on his nose. A biopsy revealed that it was T-cell lymphoma. When the team examined the cancerous cells, they found the gene for the CAR wedged into the regulatory region of a gene called PBX2.

The finding is provocative, Mackall says, but still not a smoking gun, in her opinion. The researchers found that the cancer cells also carried a mutation often seen in lymphomas, and the person had a genetic variant that put him at increased risk of developing cancer, even without the CAR insertion. It’s likely that the cells harvested to create the therapy contained some pre-cancerous T cells, says Piers Blombery, a haematologist at the Peter MacCallum Cancer Centre in Melbourne, Australia, who leads the diagnostic lab that assessed the tumour samples. Now, the team is looking at samples taken before the therapy to determine whether that’s the case.

Other people who have received Carvykti have developed secondary cancers, too. The FDA’s initial warning focused on T-cell malignancies. But long-term follow-up of participants who’d been in an early trial of Carvykti revealed that 10 out of 97 people developed either myelodysplastic syndrome (a kind of pre-leukaemia) or acute myeloid leukaemia (see go.nature.com/3q8vrym). Nine of them died. As a result, in December 2023, Legend Biotech added language to its boxed warning for Carvykti about the risk of secondary blood cancers.

Craig Tendler, head of oncology clinical development and global medical affairs at Johnson & Johnson Innovative Medicine in Raritan, New Jersey, says that the company looked for the CAR gene in cancer cells from these individuals, but didn’t find it. When the researchers looked at samples taken before the trial participants received treatment, they found pre-malignant cells with the same genetic make-up as the cancer cells. “So, it is likely that, in many of these cases, the prior therapies for multiple myeloma may have already predisposed these patients to secondary malignancy,” Tendler says. Then, it’s possible that the prolonged immune suppression related to the CAR-T treatment process nudged the cells to become cancerous.

When Ruella first saw the FDA warning, he immediately thought back to a 64-year-old man he had treated who, in 2020, developed a T-cell lymphoma 3 months after receiving CAR-T therapy for a B-cell lymphoma. Ruella and his colleagues identified the CAR gene in the biopsy taken from the man’s lymph node 4. But it was at such low levels that it seemed unlikely it had integrated into the cancer cells, Ruella says. Instead, the genes could have come from CAR T cells that just happened to be circulating through that lymph node at the time the biopsy was taken. “We thought this was just an accidental finding,” Ruella says.

But after Ruella saw the FDA’s warning, he decided to revisit the case. He and his colleagues went back to a blood sample taken before the person received CAR-T therapy. The team assessed whether T cells with the same T-cell receptor as the lymphoma cells were present before treatment. They were, suggesting that the seeds of the lymphoma pre-dated the therapy. (The low number of cells made further analysis difficult.) Ruella adds that it’s possible the CAR-T treatment produced an inflammatory environment that allowed such seeds to become cancerous. “So this is not something that appears magically out of nowhere,” Ruella says.

Rare outcome

The good news is that these secondary cancers — CAR-driven or not — seem to be rare. After the FDA warning, Ruella and his colleagues also looked back at the files of people who had been treated with commercial CAR-T products at the University of Pennsylvania. Between January 2018 and November 2023, the centre treated 449 individuals who had leukaemias, lymphomas or multiple myeloma with CAR-T therapies 4. Sixteen patients (3.6%) went on to develop a secondary cancer. But most of those were solid tumours, not the kind of cancer one would expect to come directly from the treatment. Only five of the treated patients developed blood cancers, and only one of those developed a T-cell cancer.

At the Mayo Clinic in Phoenix, Arizona, haematologist Rafael Fonseca and his colleagues also wondered whether the incidence of secondary cancers in people who had received CAR-T therapy differed from the incidence in those with the same cancers who had not. They combed through a data set containing medical records from 330 million people to find individuals who had been newly diagnosed with multiple myeloma or diffuse large B-cell lymphoma between 2018 and 2022. They then looked at how many of them developed T-cell lymphomas. The prevalence didn’t differ drastically from the 22 cases out of 27,000 people that the FDA had reported. The researchers published their findings on the online newsletter platform Substack (see go.nature.com/3u97s38). “We wanted to get it out as soon as possible because of the timeliness of what was going on,” Fonseca says.
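The rarity argument in these paragraphs rests on simple proportions. As a back-of-envelope sketch using only the counts quoted in this article (not the underlying registries or the researchers' actual analyses), the two headline rates work out as follows:

```python
# Crude proportions from the counts quoted in the article.
# These are illustrative only; the cited studies used controlled
# comparisons, not raw division.

fda_t_cell_cases = 22     # T-cell cancer reports to the FDA (January 2024 figure)
fda_treated = 27_000      # approximate number of people treated with CAR-T therapies

penn_secondary = 16       # UPenn cohort: patients with any secondary cancer
penn_treated = 449        # UPenn cohort size, January 2018 - November 2023

fda_rate = fda_t_cell_cases / fda_treated
penn_rate = penn_secondary / penn_treated

print(f"FDA-reported T-cell cancer rate: {fda_rate:.3%}")    # roughly 0.08%
print(f"UPenn any-secondary-cancer rate: {penn_rate:.1%}")   # roughly 3.6%
```

The two numbers are not directly comparable: the FDA figure counts only T-cell cancers among all reports, while the UPenn figure counts secondary cancers of any kind (mostly solid tumours) in one centre's cohort.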

Since the FDA’s warning, Hall has started talking about the possibility of secondary cancers to individuals who are contemplating CAR-T therapy. He presents it as a real risk, but a rare one — and explains that it is dwarfed by the risk posed by their current cancer. “For my late-stage myeloma patient, the main risk is that the CAR T doesn’t work and they die of their myeloma,” he says. Mackall and others agree. “I don’t think anyone believes that this will change practice in any way at the current time,” she adds. “Most cancer therapies can cause cancer. This is one of the paradoxes of our business.”

But what about other diseases? Researchers have already tested CAR T cells as a therapy for the autoimmune condition lupus, with impressive results 5. And more clinical trials of these therapies for other autoimmune diseases are likely to follow. If most of the secondary cancers seen in people treated with CAR T cells are related to the litany of treatments they received beforehand, people with these conditions might not all have the same risk. But even if the therapy is driving some cancers, many say the benefits might still be worth the risk. “Autoimmune diseases are not benign diseases,” said Marks in response to an audience question at an industry briefing in January (see go.nature.com/3jpk6qj). “Anyone who’s ever known somebody who’s had lupus cerebritis or lupus nephritis will know that those are potentially lethal diseases.”

CAR T cells also hold promise as a treatment for HIV infection, and a trial to test this idea kicked off in 2022. Researchers are also studying how CAR T cells could be used as a way to curb rejection of transplanted kidneys, or to clear out zombie-like senescent cells that have been implicated in ageing. The possibilities are continuously expanding.

As for whether the benefit of CAR-T therapy outweighs the risk of secondary cancers for these other indications, only time will tell.

Nature 629, 22–24 (2024)

doi: https://doi.org/10.1038/d41586-024-01215-0

References

1. Levine, B. L. et al. Nature Med. 30, 338–341 (2024).
2. Verdun, N. & Marks, P. N. Engl. J. Med. 390, 584–586 (2024).
3. Harrison, S. J. et al. Blood 142 (Suppl. 1), 6939 (2023).
4. Ghilardi, G. et al. Nature Med. https://doi.org/10.1038/s41591-024-02826-w (2024).
5. Müller, F. et al. N. Engl. J. Med. 390, 687–700 (2024).


Clinical trials: A significant part of cancer care

Editor's note: May is National Cancer Research Month.

By Mayo Clinic staff

A cancer diagnosis is an emotional experience. Learning that you have cancer can create feelings of hopelessness, fear and sadness. This is especially true if your cancer is advanced or available treatments are unable to stop or slow its growth.

"Often, when patients are diagnosed with cancer, they feel hopeless and scared. Clinical trials are one way patients can be proactive. They can make a choice in how their care is going to be," says Matthew Block, M.D., Ph.D., a Mayo Clinic medical oncologist.

Cancer clinical trials help physician-scientists test new and better ways to control and treat cancer. During a clinical trial, participants receive specific interventions, and researchers determine if those interventions are safe and effective. Interventions studied in clinical trials might be new cancer drugs or new combinations of drugs, new medical procedures, new surgical techniques or devices, new ways to use existing treatments, and lifestyle or behavior changes.

Clinical trials provide access to potential treatments under investigation, giving options to people who otherwise may face limited choices. "Clinical trials open the door to a new hope that maybe we can fight their cancer back and give them a better quality of life," says Geoffrey Johnson, M.D., Ph.D., a Mayo Clinic radiologist, nuclear medicine specialist and co-chair of the Mayo Clinic Comprehensive Cancer Center Experimental and Novel Therapeutics Disease Group.

You will receive cancer treatment if you participate in a clinical trial. "I think one common misperception about clinical trials is that if you enter a clinical trial, you may not get treatment (receive a placebo). And that's actually very much not true. Most clinical trials are looking at one treatment compared to another treatment," says Judy C. Boughey, M.D., a Mayo Clinic surgical oncologist, chair of Breast and Melanoma Surgical Oncology at Mayo Clinic in Rochester, Minnesota, and chair of the Mayo Clinic Comprehensive Cancer Center Breast Cancer Disease Group.

"I think one common misperception about clinical trials is that if you enter a clinical trial, you may not get treatment (receive a placebo). And that's actually very much not true. Most clinical trials are looking at one treatment compared to another treatment." Judy C. Boughey, M.D.

Watch this video to hear the experiences of people who have participated in cancer clinical trials and to hear Drs. Block, Johnson and Boughey discuss the importance of clinical trials in cancer care:

Clinical trials are a significant part of cancer care at Mayo Clinic Comprehensive Cancer Center. Cancer care teams work together across specialties to make sure the right clinical trials are available to serve the needs of people with cancer who come to Mayo Clinic.

"We are very particular in how we select the clinical trials that we have available for patients," says Dr. Boughey. "We want to have the best trials available for our patients. Some of the clinical trials are evaluating drugs — we are so excited about those drugs, but we can't prescribe those drugs for patients without having that trial. And so we will actually fight to try to get that trial open here to have it available as an opportunity for our patients."

If you choose to participate in a clinical trial, you will continue to receive cancer care. "For most patients that we evaluate, there's always the standard of care treatment option for those patients. And then, in many situations, there's also a clinical trial that the patient can participate in," says Dr. Boughey.

People who participate in clinical trials help make new and better cancer care available for future patients. The treatments available for cancer patients today exist because of the clinical trial participants of yesterday. "We couldn't advance medicine if it wasn't for people volunteering for trials. And the promise from our side is to say we're not going to put patients on trials or offer trials for them to consider unless we think there's a good chance that they'll get a benefit or that society at large will get a benefit," says Dr. Johnson.

"We couldn't advance medicine if it wasn't for people volunteering for trials. And the promise from our side is to say we're not going to put patients on trials or offer trials for them to consider unless we think there's a good chance that they'll get a benefit or that society at large will get a benefit." Geoffrey Johnson, M.D., Ph.D.

Participating in a clinical trial may give you access to cutting-edge treatment, improve your quality of life and extend your time with loved ones.

"It's definitely worth reaching out to your healthcare provider and asking, 'What clinical trials could I be a potential candidate for?'" says Dr. Boughey. "And remember, you can ask this of your surgical oncologist, your medical oncologist, your radiation oncologist, or any of the physicians you're seeing because there are trials in all disciplines. There are also ongoing trials that require the collection of tissue or the donation of blood. They can also be important in trying to help future generations as we continue to work to end cancer."

Participating in a clinical trial is an important decision with potential risks and benefits. Explore these FAQ about cancer clinical trials, and ask your care team if a clinical trial might be right for you.

Learn more about cancer clinical trials and find a clinical trial at Mayo Clinic.

Join the Cancer Support Group on Mayo Clinic Connect, an online community moderated by Mayo Clinic for patients and caregivers.

Read these articles about people who have participated in clinical trials at Mayo Clinic:

  • A silent tumor, precancerous polyps and the power of genetic screening
  • Mayo Clinic’s DNA study reveals BRCA1 mutations in 3 sisters, prompts life-changing decisions

Read more articles about Mayo Clinic cancer research made possible by people participating in clinical trials.

Related Posts

  • Dr. Sujay Vora is studying a new approach to glioblastoma treatment that is improving health outcomes and quality of life for elderly people like Richard Casper.
  • Dr. S. John Weroha discusses new treatments and research that are helping more people survive ovarian cancer.
  • Hypothesis-driven AI offers an innovative way to use massive datasets to help discover the complex causes of diseases such as cancer and improve treatment strategies.

A novel preclinical model of the normal human breast

  • Open access
  • Published: 02 May 2024
  • Volume 29, article number 9 (2024)


  • Anthony J. Wilby, ORCID: orcid.org/0009-0004-3165-9606 (1, 2),
  • Sara Cabral, ORCID: orcid.org/0009-0007-5164-6935 (1, 2, 3),
  • Nastaran Zoghi, ORCID: orcid.org/0000-0003-0987-6677 (4),
  • Sacha J. Howell, ORCID: orcid.org/0000-0001-8141-6515 (1, 2, 5, 6),
  • Gillian Farnie, ORCID: orcid.org/0000-0002-1407-2529 (7) &
  • Hannah Harrison, ORCID: orcid.org/0000-0002-0054-8047 (1, 2)


Abstract

Improved screening and treatment have decreased breast cancer mortality, although incidence continues to rise. Women at increased risk of breast cancer can be offered risk-reducing treatments, such as tamoxifen, but these have not been shown to reduce breast cancer mortality. New, more efficacious risk-reducing agents are needed. The identification of novel candidates for prevention is hampered by a lack of good preclinical models. Current patient-derived in vitro and in vivo models cannot fully recapitulate the complexities of the human tissue, lacking human extracellular matrix, stroma and immune cells, all of which are known to influence therapy response. Here we describe a normal breast explant model utilising a tuneable hydrogel which maintains epithelial proliferation, hormone receptor expression, and residency of T cells and macrophages over 7 days. Unlike other organotypic tissue cultures, which are often limited by hyper-proliferation, loss of hormone signalling and short treatment windows (<48 h), our model shows that tissue remains viable over 7 days with none of these early changes. This offers a powerful and unique opportunity to model the normal breast and study changes in response to various risk factors, such as breast density and hormone exposure. Further validation of the model, using samples from patients undergoing preventive therapies, will hopefully confirm it to be a valuable tool, allowing us to test novel agents for breast cancer risk reduction preclinically.


Introduction

Over the last 40 years, improved screening and treatment have significantly decreased breast cancer mortality in the UK [1, 2], with a combined 41% decrease since the 1970s in both females and males [3]. Despite this, the incidence of breast cancer continues to rise [4, 5], with an 18% increase in the UK between 1993 and 2016 [6]. This increase in incidence can, in part, be attributed to improved screening techniques; nevertheless, it highlights the importance of prevention and risk-reduction interventions. Women at high risk of breast cancer can be offered risk-reducing agents, including the selective oestrogen receptor modulators tamoxifen and raloxifene and the aromatase inhibitor anastrozole [7]. While these agents reduce the risk of primary breast cancer by 30–50%, they have not been shown to decrease mortality [8, 9]. New preventative agents that reduce the risk of potentially fatal breast cancers are required [10]. Such agents require extensive preclinical testing before they can be used in the clinic. Current in vitro and in vivo models of the normal human breast do not fully recapitulate the human breast extracellular matrix (ECM) and its complex cellular environment. In the cancer treatment setting, this is thought to contribute to a poor rate of translation from preclinical studies to human trials, with around 90–95% of drugs failing before reaching the clinic [11]. To discover new preventative agents capable of successful translation to the clinic, a model which overcomes these limitations is required.

Normal breast tissues xenografted into immunocompromised mice allow human epithelial ducts to persist, but the ECM and stromal cells are murine and most host immune cells are absent [12, 13]. Each of these components is required for normal tissue homeostasis and plays a role in cancer development and progression [14, 15, 16, 17]. Their absence may result in differential responses to therapies compared with the intact human gland in vivo.

In vitro models, such as patient-derived organoids, use tissue which is enzymatically digested prior to culture and then grown in rodent-derived ECM supports, such as Matrigel. This setup lacks many of the normal human ECM components and cell–cell interactions. Organotypic tissue slice and explant models retain the complexity of the normal breast whilst supporting tissue on gelatin sponges. They are predominantly used for cancer research but bring their own challenges, such as abnormal proliferation, loss of hormone signalling and loss of viability after 96 h [18], and employ unphysiological levels of glucocorticoids, which may interfere with signalling by other steroids [19].

Tissue stiffness is a key factor in breast carcinogenesis [12, 13, 20] and in the maintenance of hormone receptor expression in vitro [21], so selecting the correct matrix, with the correct elastic modulus, is of utmost importance.

Intact breast organoid culture has delivered an improved in vitro model for hormone investigations, but few models accurately recapitulate the structure of the ECM. These models also have a time-limited treatment window of between 24 and 72 h. Additionally, whilst models have been produced that incorporate other cell types, for example stromal cells [22] and fibroblasts [23], no current model accurately reproduces the entire repertoire of cell types, limiting our ability to model normal breast physiology [24, 25, 26].

We describe here a tissue explant model utilising a tuneable hydrogel which preserves cellular heterogeneity and hormone signalling for 7 days. This model will be used to identify novel agents to translate into the clinical prevention setting and to study how risk factors, such as breast density and exposure to hormones or chemicals, impact cancer development.

Materials and methods

Explant Culture

Figure  1 shows the culture procedure we have developed.

figure 1

Explant model schematic: individual steps are highlighted from tissue collection to fixing for immunohistochemistry. (i) An example of tissue processing; (ii) tissue encased in hydrogel within the Boyden chamber; (iii) an example of the tissue microarray (TMA) produced for each sample

Tissue Collection and Dissection

Normal non-cancerous breast tissue was collected following risk-reduction surgery. Research samples were obtained from the Manchester Cancer Research Centre (MCRC) Biobank with fully informed consent. Ethical approval for the study was granted by the MCRC Biobank under authorisation number 18/NW/0092. Sample details are displayed in Table  1 .

Tissue was placed immediately into collection medium consisting of DMEM High Glucose (SIGMA, D6546) with 100 U/mL Penicillin/100 µg/mL Streptomycin (SIGMA, P0781) and stored for up to 24 h at 4°C. Excess adipose tissue was removed, and tissue was cut into 2–4 mm³ pieces before culture.

An animal-free hydrogel (VitroGel RGD, TebuBio, TWG003) was used to provide support to explants. The hydrogel was mixed with 0.5 × PBS to achieve the desired elastic modulus (which is representative of stiffness), according to the manufacturer’s instructions [ 27 ]. This was then mixed with medium to initiate hydrogel gelation and 100 µL was immediately pipetted into Boyden chambers suspended over a well containing 700 µL of medium. Chambers were incubated for 2 h at 37°C to allow the hydrogels to set. The same procedure was followed to overlay 150 µL of hydrogel on top of the explant and, once the hydrogel was set, 200 µL of medium was added to the top. During culture, 50% of the medium below and above the explant was refreshed every 2–3 days.
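
The 1:1:1 mixing described above scales linearly when preparing gels for different chamber formats. As a minimal illustrative sketch (the function name and output keys are ours; the default ratio is the "moderate" mix from the text):

```python
def hydrogel_volumes(final_volume_ul, ratio=(1, 1, 1)):
    """Split a target volume into VitroGel RGD : 0.5x PBS : medium parts.

    `ratio` follows the (gel, PBS, medium) order used in the protocol;
    the default 1:1:1 is the "moderate" stiffness mix described above.
    """
    total = sum(ratio)
    gel, pbs, medium = (final_volume_ul * r / total for r in ratio)
    return {"vitrogel_ul": gel, "pbs_ul": pbs, "medium_ul": medium}

# For the 100 uL base layer pipetted into each Boyden chamber:
print(hydrogel_volumes(100))
```

Changing `ratio` would model the low and high stiffness mixes, whose exact proportions follow the manufacturer's instructions rather than a fixed rule.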

Culture Media

Explant medium (ExM): DMEM/F12 (Thermo, 11330032) containing B27 supplement (no vitamin A; Invitrogen, Paisley, UK, 12587010), 2 mM L-glutamine (SIGMA, G7513) and 100 U/mL Penicillin/100 µg/mL Streptomycin (SIGMA, P0781).

Clevers’ medium (CM [28]): DMEM/F12 containing 5% R-spondin conditioned medium, 5 nM neuregulin (Peprotech, 100–03), 5 ng/mL epidermal growth factor (Peprotech, AF-100–15), 100 ng/mL noggin (Peprotech, 120-10C), 500 nM A83-01 (Tocris, 2939), 5 µM Y27632 (Abmole, S1049), 500 nM SB202190 (Sigma, S7067), 1 × B27 (with vitamin A, Gibco, 1750444), 1.25 nM N-acetylcysteine (Sigma, A9165), 5 mM nicotinamide (Sigma, N0636), 1 × Glutamax (Invitrogen, 12634–034), 10 mM HEPES (Invitrogen, 15630-056), 100 U/mL Penicillin/100 µg/mL Streptomycin and 50 ng/mL FGF2 (Thermo, 100-18B).

FCS medium: DMEM/F12 and 10% foetal calf serum (FCS, Thermo, 10270106), 100 U/mL Penicillin/100 µg/mL Streptomycin.

Hormone Responsiveness Assays

For activation and inhibition studies, 10 nM 17β-oestradiol (SIGMA, E2758) and/or 100 nM fulvestrant (SIGMA, I4409) was added to the medium following explant encapsulation in its hydrogel support and was refreshed with each medium change.

Immunohistochemistry: Staining

Tissue was removed from the hydrogel, formalin-fixed and paraffin-embedded, with 6 explants per block, and 4 µm slices were prepared for immunohistochemistry. Staining was performed using the Bond Max autostainer (Leica) and Ventana Discovery autostainer (Roche). TMAs were scanned using the Olympus VS120.

Staining on the Leica Bond Max was performed with 20 min of antigen retrieval at pH 6 (Ki67, progesterone receptor (PR) and cleaved caspase 3) or pH 9 (oestrogen receptor α (ERα)). Primary antibodies: mouse α-ERα (6F11, Life Technologies, MA513304) 1:200, mouse α-Ki67 (MIB-1, DAKO, M7240) 1:100, mouse α-PR (636, DAKO, M3569) 1:500 and rabbit α-cleaved caspase 3 (5A1E, New England Biolabs, 9664S) 1:200, with EnVision+ Single Reagents (HRP, Mouse, Agilent, K400111-2) used as secondary, following the manufacturer's instructions.

Staining on the Ventana was performed for CD4 and CD8 using an ultraView Universal DAB Detection Kit (Roche, 760–500) and for CD68 using an OptiView DAB IHC Detection Kit (Roche, 76–700). Slides were deparaffinised, antigens were retrieved using standard cell conditioning (CC1), primary antibody incubations were performed for 16 min, and bluing with haematoxylin II was performed for 4 min. The following primary antibodies were used according to the manufacturer's instructions: α-CD4 (SP35, Roche, 790–4423), α-CD8 (SP57, Roche, 790–4460) and α-CD68 (KP-1, Roche, 790–2931).

Immunohistochemistry: Scoring

Scoring was performed, and the percentage of positive cells calculated, in ImageJ (https://imagej.net/ij/) at 10× magnification. All epithelial cells in the explant were counted for ERα, PR, Ki67 and cleaved caspase 3, and immune cells (CD4, CD8, CD68) were treated as a single population, whether inter- or intraductal. Fold change from control was calculated to demonstrate changes following culture.
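
The scoring arithmetic above (percentage of positive cells, then fold change from the day 0 control) can be sketched as follows; the helper names and the Ki67 counts in the example are hypothetical:

```python
def percent_positive(positive, total):
    """Percentage of marker-positive cells among all counted cells."""
    if total == 0:
        raise ValueError("no cells counted")
    return 100.0 * positive / total

def fold_change(treated_pct, control_pct):
    """Fold change in percent-positive relative to the day 0 control."""
    return treated_pct / control_pct

# Hypothetical Ki67 counts from one explant core:
day0 = percent_positive(12, 400)   # 3.0 % positive at day 0
day7 = percent_positive(18, 400)   # 4.5 % positive at day 7
print(fold_change(day7, day0))     # prints 1.5
```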

Rheology

To test the elastic modulus of the low, moderate and high stiffness gels, 500 µL samples were prepared using ExM in Boyden chamber hanging inserts, as described above. The inserts were incubated at 37°C for 24 h prior to rheological testing. Hydrogels were removed from the hanging insert and transferred to the rheometer. The 25 mm upper parallel plate of the rheometer was lowered to the desired trim gap size of 500 µm, and the gels were allowed to equilibrate for 3 min at room temperature. Single-frequency (1 Hz) amplitude sweeps were performed between 0.001% and 100% shear strain using an Anton Paar MCR 302e rheometer.
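
The elastic moduli reported in the Results correspond to the storage modulus averaged over the linear viscoelastic region of these amplitude sweeps. A minimal sketch of that reduction, assuming a fixed strain cutoff for the linear region (the cutoff and the sweep values below are hypothetical, not measured data):

```python
def plateau_modulus(strains, moduli, max_strain=1.0):
    """Mean storage modulus G' (Pa) over the low-strain linear region.

    strains: % shear strain values from the amplitude sweep
    moduli:  storage modulus measured at each strain
    max_strain: upper strain bound treated as linear (an assumption here)
    """
    linear = [g for s, g in zip(strains, moduli) if s <= max_strain]
    return sum(linear) / len(linear)

# Hypothetical sweep: G' plateaus near 414 Pa, then softens at high strain
strains = [0.01, 0.1, 1.0, 10.0, 100.0]
gprime = [415.0, 414.0, 413.0, 300.0, 90.0]
print(plateau_modulus(strains, gprime))  # prints 414.0
```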

Statistical Analysis

One-way ANOVA tests were performed, with pairing of samples, and comparisons made to the day 0 or untreated sample as appropriate, with Dunnett's correction for multiple testing. For rheology, measurements were repeated four times, and a one-way ANOVA was performed on the linear viscoelastic regions of the gels. Significance is highlighted in each figure: *P < 0.05, **P < 0.01, ***P < 0.001.
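
The omnibus one-way ANOVA underlying these comparisons reduces to between- and within-group sums of squares. The pure-Python sketch below ignores the pairing of samples used in the paper and omits Dunnett's correction (in practice a statistics package such as SciPy provides `scipy.stats.f_oneway` and, from SciPy 1.11, `scipy.stats.dunnett` for the comparisons to control); the fold-change values in the example are hypothetical:

```python
def one_way_anova_F(groups):
    """F statistic for an (unpaired) one-way ANOVA over lists of values.

    Illustrative only: the paper's analysis pairs samples and applies
    Dunnett's correction, which this sketch does not implement.
    """
    k = len(groups)                          # number of conditions
    n = sum(len(g) for g in groups)          # total observations
    grand = sum(sum(g) for g in groups) / n  # grand mean
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical Ki67 fold changes at day 0, day 3 and day 7:
print(one_way_anova_F([[1.0, 1.1, 0.9], [1.2, 1.3, 1.1], [1.6, 1.8, 1.7]]))
```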

Results

Model Development: Culture Medium Selection

We began model development by altering the media within our model whilst maintaining a consistent hydrogel elastic modulus (VitroGel RGD : 0.5× PBS : medium at 1:1:1, described as "moderate" according to the manufacturer's instructions; see Table 2). We compared 3 culture media: explant medium (ExM); Clevers' medium (CM [28]), which is commonly used for organoid culture; and FCS medium, which is commonly used in our lab for 3D Matrigel cell culture.

Proliferation in the different media was assessed by staining tissue for Ki67; representative images are shown in Fig. 2A. Proliferation was significantly increased in both CM and FCS medium at day 7, whilst in ExM proliferation was unchanged at both time points (Fig. 2B, n = 4). In the remaining studies reported here, we selected ExM as our standard explant medium, as it maintained proliferation in cultured explants at a rate similar to that measured in matched non-cultured breast tissue.

figure 2

The effect of medium on proliferation. Following 3 and 7 days of culture in each of the media tested, tissue was fixed and assessed for proliferative rate using Ki67. (a) Representative images from each medium at each time point (TMA108). (b) Fold change in proliferation from day 0 in multiple cores from 4 patient samples. Significant increases in proliferation were seen in both Clevers' medium (CM) and FCS medium at day 7. *P < 0.05, **P < 0.01. Scale bar, 50 µm

Model Development: 3D Matrix

To ensure we selected the best hydrogel elasticity for our model, we tested three preparations, termed low, moderate and high, against tissue cultured in no matrix (Table 2). The moderate support was clearly superior at preventing hyper-proliferation when compared with standard membrane-supported tissue (no hydrogel) or the low and high hydrogels, which showed significant changes in Ki67 expression over 7 days (Fig. 3, n = 3). Rheology was performed on each hydrogel mix (Suppl. Figure 1) and showed that the moderate hydrogel had an elastic modulus of 413.78 ± 9.73 Pa, and that even small differences can significantly affect proliferation, with increased proliferation in the low (244.43 ± 6.99 Pa) and high (1472.11 ± 26.65 Pa) gels.

figure 3

The effect of hydrogel support on proliferation. Following 3 and 7 days of culture with no hydrogel or in each of the 3 hydrogel densities tested, tissue was fixed and assessed for proliferative rate using Ki67. (a) Representative images from each hydrogel support at each time point (TMA110). (b) Fold change in proliferation from day 0 in multiple cores from 3 patient samples. Significant increases in proliferation were seen when tissue was cultured with no support at day 7, within low-support hydrogel at days 3 and 7, and in high-support hydrogels at day 3. No change was seen using our moderate hydrogel (413.78 Pa). *P < 0.05, **P < 0.01, ***P < 0.001. Scale bar, 50 µm

Culture Validation: Proliferation and Viability are Maintained Over 7 days

To validate our selected culture conditions, using ExM and the moderate hydrogel, we first performed staining on our full tissue panel (n = 13) with H&E (Fig. 4A) and Ki67 at 0, 3 and 7 days (Fig. 4B). By day 7, there is evidence of vacuolation within the myoepithelial cells, which occurs during the luteal phase in normal tissue [29, 30] and may suggest the tissue is responding to progesterone, a component of B27, in the medium. No significant change in proliferation was observed over 7 days (Fig. 4C).

figure 4

Assessment of structure, proliferation and apoptosis in all samples. (a) Representative images of H&E staining (TMA134) and (b) Ki67 staining following 3 and 7 days of culture (TMA134) in optimised conditions. (c) No significant change in proliferation was seen (n = 13). (d) Representative images of caspase staining following 3 and 7 days of culture (TMA115); the red arrow highlights a single positive cell. Tissue cultured in FCS medium on day 3 is used as an example of positive staining. Scale bar, 50 µm

Cleaved caspase 3 was used to assess cell viability. The number of cells staining positive for caspase was low, and zero in many slices, precluding statistical testing; we therefore concluded that viability did not change over 7 days. Figure 4D shows representative images of explants cultured in our selected conditions, as well as an example of positive staining when explants were cultured in FCS medium.

Culture Validation: Expression of Hormone Receptors is Maintained During Explant Culture

Next, we stained our TMAs for ERα and PR to confirm whether their expression remained unchanged during culture. Figure 5 shows representative images of ERα and PR staining. Expression of both receptors was unchanged at day 3, with a small but significant increase in both ERα (Fig. 5B) and PR (Fig. 5D) seen after 7 days (n = 13).

figure 5

Assessment of hormone receptor expression in all samples. (a) Representative images of ERα staining following 3 and 7 days of culture (TMA138). (b) No significant change in ERα was seen at day 3, but there was a small but significant increase at day 7 (n = 13). (c) Representative images of PR staining following 3 and 7 days of culture (TMA134). (d) No significant change in progesterone receptor was seen at day 3, but a small but significant increase was seen at day 7 (n = 13). *P < 0.05, **P < 0.01. Scale bar, 50 µm

Culture Validation: Explants Remain Responsive to Oestrogen

To assess whether the tissue explants remained responsive to oestrogen, 10 nM 17β-oestradiol (E2) was added to the cultures for 7 days in the presence or absence of 100 nM fulvestrant, an anti-oestrogen. The proliferative response was assessed by Ki67 staining, and the transcriptional response was assessed by staining for PR, a transcriptional target of ERα.

Following the addition of E2, proliferation was significantly increased at days 3 and 7, showing a response to hormone stimulation (Fig. 6A). This increase in proliferation was blocked by the addition of fulvestrant at day 7, confirming the effect was a direct influence of E2 acting through ERα. Similar changes were seen when measuring the expression of PR: the number of positive cells increased significantly with the addition of E2 at days 3 and 7 (Fig. 6B). This increase was lost in the presence of fulvestrant, with expression levels significantly reduced compared with controls.

figure 6

Assessment of oestrogen responsiveness. (a) Proliferation was significantly increased at days 3 and 7 following the addition of 10 nM 17β-oestradiol. At day 3, proliferation remained significantly increased following the addition of 100 nM fulvestrant, but by day 7 proliferation had fallen below control levels (day 7, no 17β-oestradiol). (b) Progesterone receptor (PR) expression was significantly increased in the presence of 10 nM 17β-oestradiol at days 3 and 7, and this effect was blocked by 100 nM fulvestrant. *P < 0.05, **P < 0.01, ***P < 0.001

Culture Validation: Immune Cells Can Be Seen Within Tissue Explants

The TMAs were stained for recognised immune cell markers: CD4 (T helper cells; n = 4), CD8 (cytotoxic T cells; n = 7) and CD68 (macrophages; n = 4) (Fig. 7A). CD4 and CD8 T cells persisted throughout 7 days of culture, but there was a significant decrease in these cells by day 7 (Fig. 7B). Macrophages (CD68) were also seen throughout culture, with a small but significant decrease in their numbers (Fig. 7B).

figure 7

Assessment of immune cell infiltration. (a) Representative images of CD4, CD8 and CD68 staining following 3 and 7 days of culture (TMA156). (b) A significant decrease was seen in CD4 (n = 4) and CD8 (n = 7) cells after 7 days of culture, and in CD68 (n = 4) at 3 and 7 days. *P < 0.05, **P < 0.01. Scale bar, 50 µm

Discussion

We have developed a robust in vitro model of human breast tissue that maintains cell viability and tissue morphology like that of uncultured tissue over 7 days. Typically, proliferation, measured using Ki67 staining, is increased in the early days (24–72 h) of culture; this has been attributed to growth-promoting factors within the medium or to release from systemic control upon removal from the patient [31]. Our data show that, using our defined medium and hydrogel support, cellular proliferation in normal breast explants remains unchanged throughout 7 days of culture. This, and the maintenance of PR expression, suggests that the low level of oestrogenic compounds in the phenol red-containing medium is sufficient to maintain oestrogen receptor activation and signalling representative of that seen in uncultured tissue. By day 7 there appears to be vacuolation of the myoepithelial cells, which is typical of the normal breast as it enters the luteal phase of the menstrual cycle. This could suggest that the tissue is responding to progesterone, a component of the B27 additive, but this requires further investigation to confirm.

The ability of a normal breast preclinical model to maintain unperturbed proliferation and hormone status is essential for investigating endocrine risk-reducing agents, such as tamoxifen, in vitro. Proliferation is a typical primary pathological endpoint in prevention trials and is robustly maintained in our model. Cell death is another common endpoint in preclinical drug testing, and our established culture conditions maintained close to 100% viability over 7 days, allowing any changes in viability after treatment to be measured. Our culture model is currently limited to 7 days: although some of our explants were able to survive for longer periods, levels of ERα and PR began to fall. We hypothesise that changes in the hydrogel structure over time may be implicated [32], and passage of explants to a fresh hydrogel may overcome this issue, although this remains to be tested.

Our data show that an elastic modulus of 413.78 Pa was best suited to maintaining normal breast characteristics during culture, compared with stiffer hydrogels of 1472.11 Pa and softer hydrogels of 244.43 Pa. The variation seen between the different hydrogel elastic moduli shows the importance of tight control during culture of normal breast tissue in vitro, with even small changes having profound effects. This model also offers the opportunity to experimentally adjust elastic modulus to mimic the changes resulting from the higher mammographic density associated with increased risk. This will allow the model to be used to answer questions about the impact of increased tissue density on the biology of the cellular and stromal components within these normal tissues. Currently our model is based upon a hydrogel enriched with arginylglycylaspartic acid (RGD), the most abundant integrin ligand found within the extracellular matrix [33]. Whilst this support has proven to be a good model, it will be interesting in future to include other components, such as collagen and hyaluronan, as these molecules are known to be highly influential in the interactions between epithelial and stromal cells and the extracellular matrix [34, 35].

Our normal breast explant model not only shows excellent viability and maintenance of proliferation and hormone receptor expression over 7 days, but its explants also remain responsive to hormone stimulation and antagonism. Typically, oestrogen receptor expression is highly variable in normal tissue and rarely remains stable during in vitro culture [19], making our model appealing for investigations into hormone receptor signalling. The response to the anti-oestrogen fulvestrant also suggests this model may be useful for testing the efficacy of endocrine and other drug classes in the preclinical setting.

It would also be of great interest to assess the cellular hierarchy within our explants, asking whether luminal progenitors/stem cells persist, and to investigate changes in gene expression in each subpopulation with and without treatment [36]. Assessment of the stromal compartment would also be of interest, for example to examine changes in, or maintenance of, fibroblasts and adipocytes. Immune cells persist in our explant culture over at least 7 days but have not yet been fully explored. Such cells play a vital role in tissue maintenance and in the response to certain therapies [37, 38]. Their persistence and role in the maintenance of tissue homeostasis is an active avenue of research, and we will continue to refine the medium in this model in the hope of maintaining these cells for longer periods. Investigation of the short-term inhibition or stimulation of immune cells is feasible in the explant model and may shed light on their relative importance in determining the efficacy of preventative therapies.

We have established a foundation of characteristics (viability, proliferation and hormone responsiveness) that are crucial for a preclinical normal breast model to aid translational investigations. To build further confidence in, and proof of concept for, our explant model, we will need to assess gene expression changes in normal breast tissue pre- and post-explant culture. Preliminary unpublished data suggest that there is minimal change in gene expression through culture alone, but more patient samples are needed to confirm this. Ultimately, we will assess the ability of our explant model to truly recapitulate the clinical situation in vitro by assessing cellular characteristics and gene expression changes pre- and post-tamoxifen treatment in culture, and directly comparing them to the gene expression changes in biopsies from patients before and during treatment with preventative tamoxifen (Biomarkers of Breast Cancer Prevention, BBCP; funded by the Biomedical Research Centre, IS-BRC-1215-20007). If we can show that short-term responses in our explant model mimic those in patients, we will have a useful tool for preclinical testing of novel agents that could be explored in the next generation of clinical prevention trials [36, 37, 38].

Competing interests

The authors declare no competing interests.

Availability of data and materials

No datasets were generated or analysed during the current study.

Abbreviations

BBCP: Biomarkers of Breast Cancer Prevention
BRC: Biomedical Research Centre
CM: Clevers’ medium
E2: 17β-Oestradiol
ECM: Extracellular matrix
ERα: Oestrogen receptor α
ExM: Explant medium
FCS: Foetal calf serum
MCRC: Manchester Cancer Research Centre
NR: Not recorded
PR: Progesterone receptor
RGD: Arginylglycylaspartic acid
TMA: Tissue microarray

Autier P, et al. Disparities in breast cancer mortality trends between 30 European countries: retrospective trend analysis of WHO mortality database. BMJ. 2010;341:c3620.

Article   PubMed   PubMed Central   Google Scholar  

Kohler BA, et al. Annual Report to the Nation on the Status of Cancer, 1975–2011, Featuring Incidence of Breast Cancer Subtypes by Race/Ethnicity, Poverty, and State. J Natl Cancer Inst. 2015;107(6):djv048.

CRUK. Breast Cancer Statistics. 2021. Available from: https://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/breast-cancer#:~:text=Breast%20cancer%20mortality,all%20cancer%20deaths%20(2018). Accessed Mar 2024.

Arnold M, et al. Current and future burden of breast cancer: Global statistics for 2020 and 2040. Breast. 2022;66:15–23.

Weir HK, et al. The past, present, and future of cancer incidence in the United States: 1975 through 2020. Cancer. 2015;121(11):1827–37.

CRUK. Breast cancer statistics. 2022. Available from: https://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/breast-cancer#:~:text=Breast%20cancer%20risk&text=1%20in%207%20UK%20females,caused%20by%20post%2Dmenopausal%20hormones. Accessed Mar 2024.

Evans DG, Howell A. Can the breast screening appointment be used to provide risk assessment and prevention advice? Breast Cancer Res. 2015;17(1):84.

Cuzick J, et al. Selective oestrogen receptor modulators in prevention of breast cancer: an updated meta-analysis of individual participant data. Lancet. 2013;381(9880):1827–34.

Cuzick J, et al. Tamoxifen for prevention of breast cancer: extended long-term follow-up of the IBIS-I breast cancer prevention trial. Lancet Oncol. 2015;16(1):67–75.

Donnelly LS, et al. Uptake of tamoxifen in consecutive premenopausal women under surveillance in a high-risk breast cancer clinic. Br J Cancer. 2014;110(7):1681–7.

Fernandez-Moure JS. Lost in Translation: The Gap in Scientific Advancements and Clinical Application. Front Bioeng Biotechnol. 2016;4:43.

Habel LA, et al. Mammographic density and risk of second breast cancer after ductal carcinoma in situ. Cancer Epidemiol Biomarkers Prev. 2010;19(10):2488–95.

Hansen KC, et al. An in-solution ultrasonication-assisted digestion method for improved extracellular matrix proteome coverage. Mol Cell Proteomics. 2009;8(7):1648–57.

Gentile P. Breast Cancer Therapy: The Potential Role of Mesenchymal Stem Cells in Translational Biomedical Research. Biomedicines. 2022;10(5):1179.

Novoseletskaya E, et al. Mesenchymal Stromal Cell-Produced Components of Extracellular Matrix Potentiate Multipotent Stem Cell Response to Differentiation Stimuli. Front Cell Dev Biol. 2020;8:555378.

Zhao Y, et al. Extracellular Matrix: Emerging Roles and Potential Therapeutic Targets for Breast Cancer. Front Oncol. 2021;11:650453.

Amens JN, Bahçecioglu G, Zorlutuna P. Immune System Effects on Breast Cancer. Cell Mol Bioeng. 2021;14(4):279–92.

Kenerson HL, et al. Tumor slice culture as a biologic surrogate of human cancer. Ann Transl Med. 2020;8(4):114.

Dunphy KA, et al. Inter-Individual Variation in Response to Estrogen in Human Breast Explants. J Mammary Gland Biol Neoplasia. 2020;25(1):51–68.

Yaghjyan L, et al. Mammographic breast density and subsequent risk of breast cancer in postmenopausal women according to tumor characteristics. J Natl Cancer Inst. 2011;103(15):1179–89.

Munne PM, et al. Compressive stress-mediated p38 activation required for ERα+ phenotype in breast cancer. Nat Commun. 2021;12(1):6967.

Rosenbluth JM, et al. Organoid cultures from normal and cancer-prone human breast tissues preserve complex epithelial lineages. Nat Commun. 2020;11(1):1711.

Davaadelger B, et al. BRCA1 mutation influences progesterone response in human benign mammary organoids. Breast Cancer Res. 2019;21(1):124.

Mohan SC, et al. Current Status of Breast Organoid Models. Front Bioeng Biotechnol. 2021;9:745943.

Sflomos G, Shamsheddin M, Brisken C. An ex vivo model to study hormone action in the human breast. J Vis Exp. 2015;95:e52436.

Zhao Z, et al. Organoids. Nat Rev Methods Primers. 2022;2(1):94.

The Well Bioscience. What is the elastic modulus of the "high-concentration" VitroGels after dilution? Available from: https://www.thewellbio.com/docs/what-is-the-elastic-modulus-of-the-high-concentration-different-dilution-vitrogels/. Accessed Mar 2024.

Sachs N, et al. A Living Biobank of Breast Cancer Organoids Captures Disease Heterogeneity. Cell. 2018;172(1–2):373–386.e10.

Ramakrishnan R, Khan SA, Badve S. Morphological changes in breast tissue with menstrual cycle. Mod Pathol. 2002;15(12):1348–56.

Vogel PM, et al. The correlation of histologic changes in the human breast with the menstrual cycle. Am J Pathol. 1981;104(1):23–34.

Centenera MM, et al. A patient-derived explant (PDE) model of hormone-dependent cancer. Mol Oncol. 2018;12(9):1608–22.

Ahearne M. Introduction to cell-hydrogel mechanosensing. Interface Focus. 2014;4(2):20130038.

Plow EF, et al. Ligand binding to integrins. J Biol Chem. 2000;275(29):21785–8.

Bellis SL. Advantages of RGD peptides for directing cell association with biomaterials. Biomaterials. 2011;32(18):4205–10.

Rijns L, et al. The Importance of Effective Ligand Concentration to Direct Epithelial Cell Polarity in Dynamic Hydrogels. Adv Mater. 2023:2300873.

Bhat-Nakshatri P, et al. A single-cell atlas of the healthy breast tissues reveals clinically relevant clusters of breast epithelial cells. Cell Rep Med. 2021;2(3):100219.

Khan S, et al. Ex vivo explant model of adenoma and colorectal cancer to explore mechanisms of action and patient response to cancer prevention therapies. Mutagenesis. 2022;37(5–6):227–37.

Turpin R, et al. 123P - Patient-derived explant cultures (PDECs) as a model system for immuno-oncology studies. Ann Oncol. 2019;30:xi45.


Acknowledgments

Grant support was received from Prevent Breast Cancer: GA18-002 (HH) and GA23-04 (AJW). SJH is supported by the Manchester National Institute for Health Research (NIHR) Biomedical Research Centre (IS-BRC-1215-20007). We thank Jianhua Tang in the Visualisation, Irradiation & Analysis Facility for slide scanning support, and Caron Abbey in the Histology core facility for endless help and advice throughout the project. We also thank the MCRC Biobank for consent and sample collection, as well as the patients of The Christie NHS Foundation Trust and the University Hospitals of South Manchester who donated samples for this research. All figures were produced using BioRender.com.

Author information

Authors and affiliations

Division of Cancer Sciences, Manchester Cancer Research Centre, University of Manchester, Oglesby Cancer Research Building, Wilmslow Road, Manchester, M20 4GJ, United Kingdom

Anthony J. Wilby, Sara Cabral, Sacha J. Howell & Hannah Harrison

Manchester Breast Centre, University of Manchester, Wilmslow Road, Manchester, M20 4GJ, United Kingdom

Henry Royce Institute, University of Manchester, Oxford Road, Manchester, M13 9PL, United Kingdom

Sara Cabral

Department of Materials & Institute of Biotechnology, University of Manchester, Manchester, M1 7DN, United Kingdom

Nastaran Zoghi

NIHR Manchester Biomedical Research Centre, Manchester Academic Health Science Centre, Central Manchester University Hospitals NHS Foundation Trust, 29 Grafton St, Manchester, M13 9WU, United Kingdom

Sacha J. Howell

The Nightingale and Prevent Breast Cancer Centre, Manchester University NHS Foundation Trust, Manchester, M23 9LT, United Kingdom

Cancer Research Horizons, The Francis Crick Institute, 1 Midland Road, London, NW1 1AT, United Kingdom

Gillian Farnie

Contributions

HH: performed experimental work, wrote the manuscript and prepared figures. AJW: assisted with data analysis, edited the manuscript and figures. SC: rheology and editing. NZ: rheology. SJH: intellectual input and guidance. GF: intellectual input and guidance. All authors reviewed the manuscript.

Corresponding author

Correspondence to Hannah Harrison .

Ethics declarations

Competing interests

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1. Rheometry was performed and the elastic modulus calculated for low, moderate and high hydrogels (n = 4). ***P < 0.001.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Wilby, A.J., Cabral, S., Zoghi, N. et al. A novel preclinical model of the normal human breast. J Mammary Gland Biol Neoplasia 29, 9 (2024). https://doi.org/10.1007/s10911-024-09562-4

Received: 24 November 2023

Accepted: 01 April 2024

Published: 02 May 2024

DOI: https://doi.org/10.1007/s10911-024-09562-4


  • Normal breast
  • Risk-reduction
  • In vitro modelling
