
Evidence-Based Research: Levels of Evidence Pyramid

Introduction

One way to organize the different types of evidence involved in evidence-based practice research is the levels of evidence pyramid. The pyramid includes a variety of evidence types and levels.

  • systematic reviews
  • critically-appraised topics
  • critically-appraised individual articles
  • randomized controlled trials
  • cohort studies
  • case-controlled studies, case series, and case reports
  • background information, expert opinion

Levels of evidence pyramid

The levels of evidence pyramid provides a way to visualize both the quality of evidence and the amount of evidence available. For example, systematic reviews are at the top of the pyramid, meaning they are both the highest level of evidence and the least common. As you go down the pyramid, the amount of evidence will increase as the quality of the evidence decreases.

[Figure: Levels of Evidence Pyramid]

EBM Pyramid and EBM Page Generator, copyright 2006 Trustees of Dartmouth College and Yale University. All Rights Reserved. Produced by Jan Glover, David Izzo, Karen Odato and Lei Wang.

Filtered Resources

Filtered resources appraise the quality of studies and often make recommendations for practice. The main types of filtered resources in evidence-based practice are:

  • systematic reviews
  • critically-appraised topics
  • critically-appraised individual articles

Scroll down the page to the Systematic reviews, Critically-appraised topics, and Critically-appraised individual articles sections for links to resources where you can find each of these types of filtered information.

Systematic reviews

Authors of a systematic review ask a specific clinical question, perform a comprehensive literature review, eliminate the poorly done studies, and attempt to make practice recommendations based on the well-done studies. Systematic reviews include only experimental, or quantitative, studies, and often include only randomized controlled trials.

You can find systematic reviews in these filtered databases:

  • Cochrane Database of Systematic Reviews – Cochrane systematic reviews are considered the gold standard for systematic reviews. This database contains both systematic reviews and review protocols. To find only systematic reviews, select Cochrane Reviews in the Document Type box.
  • JBI EBP Database (formerly Joanna Briggs Institute EBP Database) – This database includes systematic reviews, evidence summaries, and best practice information sheets. To find only systematic reviews, click on Limits and then select Systematic Reviews in the Publication Types box. To see how to use the limit and find full text, please see our Joanna Briggs Institute Search Help page.

Open Access databases provide unrestricted access to and use of peer-reviewed and non-peer-reviewed journal articles, books, dissertations, and more.

You can also find systematic reviews in this unfiltered database:


To learn more about finding systematic reviews, please see our guide:

  • Filtered Resources: Systematic Reviews

Critically-appraised topics

Authors of critically-appraised topics evaluate and synthesize multiple research studies. Critically-appraised topics are like short systematic reviews focused on a particular topic.

You can find critically-appraised topics in these resources:

  • Annual Reviews – This collection offers comprehensive, timely collections of critical reviews written by leading scientists. To find reviews on your topic, use the search box in the upper-right corner.
  • Guideline Central – This free database offers quick-reference guideline summaries organized by a non-profit initiative that aims to fill the gap left by the sudden closure of AHRQ's National Guideline Clearinghouse (NGC).
  • JBI EBP Database (formerly Joanna Briggs Institute EBP Database) – To find critically-appraised topics in JBI, click on Limits and then select Evidence Summaries from the Publication Types box. To see how to use the limit and find full text, please see our Joanna Briggs Institute Search Help page.
  • National Institute for Health and Care Excellence (NICE) – Evidence-based recommendations for health and care in England.
To learn more about finding critically-appraised topics, please see our guide:

  • Filtered Resources: Critically-Appraised Topics

Critically-appraised individual articles

Authors of critically-appraised individual articles evaluate and synopsize individual research studies.

You can find critically-appraised individual articles in these resources:

  • EvidenceAlerts Quality articles from over 120 clinical journals are selected by research staff and then rated for clinical relevance and interest by an international group of physicians. Note: You must create a free account to search EvidenceAlerts.
  • ACP Journal Club This journal publishes reviews of research on the care of adults and adolescents. You can either browse this journal or use the Search within this publication feature.
  • Evidence-Based Nursing This journal reviews research studies that are relevant to best nursing practice. You can either browse individual issues or use the search box in the upper-right corner.

To learn more about finding critically-appraised individual articles, please see our guide:

  • Filtered Resources: Critically-Appraised Individual Articles

Unfiltered resources

You may not always be able to find information on your topic in the filtered literature. When this happens, you'll need to search the primary or unfiltered literature. Keep in mind that with unfiltered resources, you take on the role of reviewing what you find to make sure it is valid and reliable.

Note: You can also find systematic reviews and other filtered resources in these unfiltered databases.

The Levels of Evidence Pyramid includes unfiltered study types in this order of evidence, from higher to lower:

  • randomized controlled trials
  • cohort studies
  • case-controlled studies, case series, and case reports

You can search for each of these types of evidence in the following databases:

  • TRIP database

Background information & expert opinion

Background information and expert opinions are not necessarily backed by research studies. They include point-of-care resources, textbooks, conference proceedings, etc.

  • Family Physicians Inquiries Network: Clinical Inquiries – Provides answers to clinical questions using a structured search, critical appraisal, authoritative recommendations, clinical perspective, and rigorous peer review. Clinical Inquiries deliver best evidence for point-of-care use.
  • Harrison, T. R., & Fauci, A. S. (2009). Harrison's Manual of Medicine. New York: McGraw-Hill Professional. Contains the clinical portions of Harrison's Principles of Internal Medicine.
  • Lippincott manual of nursing practice (8th ed.). (2006). Philadelphia, PA: Lippincott Williams & Wilkins. Provides background information on clinical nursing practice.
  • Medscape: Drugs & Diseases – An open-access, point-of-care medical reference that includes clinical information from top physicians and pharmacists in the United States and worldwide.
  • Virginia Henderson Global Nursing e-Repository – An open-access repository that contains works by nurses and is sponsored by Sigma Theta Tau International, the Honor Society of Nursing. Note: This resource contains both expert opinion and evidence-based practice articles.


Systematic Reviews


The evidence pyramid is often used to illustrate the development of evidence. At the base of the pyramid is animal research and laboratory studies – this is where ideas are first developed. As you progress up the pyramid the amount of information available decreases in volume, but increases in relevance to the clinical setting.

Meta-Analysis – A systematic review that uses quantitative methods to synthesize and summarize the results.

Systematic Review – A summary of the medical literature that uses explicit methods to perform a comprehensive literature search and critical appraisal of individual studies, and that uses appropriate statistical techniques to combine these valid studies.

Randomized Controlled Trial – Participants are randomly allocated into an experimental group or a control group and followed over time for the variables/outcomes of interest.

Cohort Study – Involves identifying two groups (cohorts) of patients, one which received the exposure of interest and one which did not, and following these cohorts forward for the outcome of interest.

Case Control Study – A study that involves identifying patients who have the outcome of interest (cases) and patients without the same outcome (controls), and looking back to see if they had the exposure of interest.

Case Series – A report on a series of patients with an outcome of interest. No control group is involved.

  • Levels of Evidence from The Centre for Evidence-Based Medicine
  • The JBI Model of Evidence Based Healthcare
  • How to Use the Evidence: Assessment and Application of Scientific Evidence From the National Health and Medical Research Council (NHMRC) of Australia. Book must be downloaded; not available to read online.

When searching for evidence to answer clinical questions, aim to identify the highest level of available evidence. Evidence hierarchies can help you strategically identify which resources to use for finding evidence, as well as which search results are most likely to be "best".                                             

[Figure: Hierarchy of Evidence pyramid; a text-based description appears below.]

Image source: Evidence-Based Practice: Study Design from Duke University Medical Center Library & Archives. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

The hierarchy of evidence (also known as the evidence-based pyramid) is depicted as a triangular representation of the levels of evidence with the strongest evidence at the top which progresses down through evidence with decreasing strength. At the top of the pyramid are research syntheses, such as Meta-Analyses and Systematic Reviews, the strongest forms of evidence. Below research syntheses are primary research studies progressing from experimental studies, such as Randomized Controlled Trials, to observational studies, such as Cohort Studies, Case-Control Studies, Cross-Sectional Studies, Case Series, and Case Reports. Non-Human Animal Studies and Laboratory Studies occupy the lowest level of evidence at the base of the pyramid.

  • Finding Evidence-Based Answers to Clinical Questions – Quickly & Effectively A tip sheet from the health sciences librarians at UC Davis Libraries to help you get started with selecting resources for finding evidence, based on type of question.


Levels of evidence in research


Level of evidence hierarchy

When carrying out a project, you might have noticed that, while searching for information, different levels of credibility are given to different types of scientific results. For example, it is not the same to use a systematic review as the basis for an argument as it is to use an expert opinion. It's almost common sense that the former will demonstrate more accurate results than the latter, which ultimately derives from a personal opinion.

In the medical and health care area, for example, it is very important that professionals not only have access to information but also have instruments to determine which evidence is stronger and more trustworthy, building up the confidence to diagnose and treat their patients.

5 levels of evidence

With the increasing need for physicians, as well as scientists in different fields of study, to know from which kind of research they can expect the best clinical evidence, experts decided to rank this evidence to help them identify the best sources of information to answer their questions. The criteria for ranking evidence are based on the design, methodology, validity, and applicability of the different types of studies. The outcome is called "levels of evidence" or the "levels of evidence hierarchy". By organizing a well-defined hierarchy of evidence, academic experts were aiming to help scientists feel confident in using findings from high-ranked evidence in their own work or practice. For physicians, whose daily activity depends on available clinical evidence to support decision-making, this really helps them to know which evidence to trust the most.

So, by now you know that research can be graded according to the evidential strength determined by different study designs. But how many grades are there? Which evidence should be high-ranked and low-ranked?

There are five levels of evidence in the hierarchy of evidence – being 1 (or in some cases A) for strong and high-quality evidence and 5 (or E) for evidence with effectiveness not established, as you can see in the pyramidal scheme below:

Level 1: (higher quality of evidence) – High-quality randomized trial or prospective study; testing of previously developed diagnostic criteria on consecutive patients; sensible costs and alternatives; values obtained from many studies with multiway sensitivity analyses; systematic review of Level I RCTs and Level I studies.

Level 2: Lesser-quality RCT; prospective comparative study; retrospective study; untreated controls from an RCT; lesser-quality prospective study; development of diagnostic criteria on consecutive patients; sensible costs and alternatives; values obtained from limited studies with multiway sensitivity analyses; systematic review of Level II studies or Level I studies with inconsistent results.

Level 3: Case-control study (therapeutic and prognostic studies); retrospective comparative study; study of nonconsecutive patients without consistently applied reference “gold” standard; analyses based on limited alternatives and costs and poor estimates; systematic review of Level III studies.

Level 4: Case series; case-control study (diagnostic studies); poor reference standard; analyses with no sensitivity analyses.

Level 5: (lower quality of evidence) – Expert opinion.

[Figure: Levels of evidence in research hierarchy]

By looking at the pyramid, you can roughly distinguish what type of research gives you the highest quality of evidence and which gives you the lowest. Basically, level 1 and level 2 are filtered information – that means an author has gathered evidence from well-designed studies, with credible results, and has produced findings and conclusions appraised by renowned experts, who consider them valid and strong enough to serve researchers and scientists. Levels 3, 4 and 5 include evidence coming from unfiltered information. Because this evidence hasn’t been appraised by experts, it might be questionable, but not necessarily false or wrong.
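
As a compact illustration (an editor's sketch, not part of the original article), the five levels above can be encoded as a simple Python lookup table, with lower numbers meaning stronger evidence:

    # A minimal sketch: the five levels above as a lookup table.
    # The descriptions are abbreviated from the list in this article.
    EVIDENCE_LEVELS = {
        1: "high-quality RCT/prospective study; systematic review of Level I studies",
        2: "lesser-quality RCT; prospective comparative study; retrospective study",
        3: "case-control study; retrospective comparative study",
        4: "case series; case-control study (diagnostic); poor reference standard",
        5: "expert opinion",
    }

    def stronger(level_a, level_b):
        """Return whichever level carries stronger evidence (lower = stronger)."""
        return min(level_a, level_b)

    print(EVIDENCE_LEVELS[stronger(2, 4)])  # Level 2 outranks Level 4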

Examples of levels of evidence

As you move up the pyramid, you will surely find higher-quality evidence. However, you will notice there is also less research available. So, if there are no resources for you available at the top, you may have to start moving down in order to find the answers you are looking for.

  • Systematic Reviews: Exhaustive summaries of all the existent literature about a certain topic. When drafting a systematic review, authors are expected to deliver a critical assessment and evaluation of all this literature rather than a simple list. Researchers who produce systematic reviews have their own criteria to locate, assemble, and evaluate a body of literature.
  • Meta-Analysis: Uses quantitative methods to synthesize a combination of results from independent studies. Normally, they function as an overview of clinical trials. Read more: Systematic review vs meta-analysis.
  • Critically Appraised Topic: Evaluation of several research studies.
  • Critically Appraised Article: Evaluation of individual research studies.
  • Randomized Controlled Trial: A clinical trial in which participants or subjects (people who agree to participate in the trial) are randomly divided into groups. A placebo (control) is given to one of the groups, whereas the other is treated with the medication being tested. This kind of research is key to learning about a treatment's effectiveness.
  • Cohort Studies: A longitudinal study design in which one or more samples called cohorts (individuals sharing a defining characteristic, like a disease) are exposed to an event and monitored prospectively and evaluated at predefined time intervals. They are commonly used to correlate diseases with risk factors and health outcomes.
  • Case-Control Study: Selects patients with an outcome of interest (cases) and looks back for an exposure factor of interest.
  • Background Information/Expert Opinion: Information you can find in encyclopedias, textbooks, and handbooks. This kind of evidence serves only as a foundation for further research, or clinical practice, because it is usually too generalized.

Of course, it is recommended to use level A and/or level 1 evidence for more accurate results, but that doesn't mean that all other study designs are unhelpful or useless. It all depends on your research question. Focusing once more on the healthcare and medical field, see how different study designs fit particular questions that are not necessarily located at the tip of the pyramid:

  • Questions concerning therapy: "Which is the most efficient treatment for my patient?" >> RCT | Cohort studies | Case-Control | Case Studies
  • Questions concerning diagnosis: "Which diagnostic method should I use?" >> Prospective blind comparison
  • Questions concerning prognosis: "How will the patient's disease develop over time?" >> Cohort Studies | Case Studies
  • Questions concerning etiology: "What are the causes of this disease?" >> RCT | Cohort Studies | Case Studies
  • Questions concerning costs: "What is the most cost-effective but safe option for my patient?" >> Economic evaluation
  • Questions concerning meaning/quality of life: "What's the quality of life of my patient going to be like?" >> Qualitative study


Evidence-Based Practice: Types of Evidence


Types of Research

Once you have your focused question, it's time to decide on the type of evidence you need to answer it. Understanding the types of research will help guide you to proper evidence that will support your question.

[Figure: Evidence-based pyramid showing the hierarchy of evidence and research designs]

As you move up the pyramid, the study designs are more rigorous and are less biased.

What type of study should you use?

Question definitions:

Intervention/Therapy: Questions addressing the treatment of an illness or disability.

Etiology: Questions addressing the causes or origins of disease (i.e., factors that produce or predispose toward a certain disease or disorder).

Diagnosis: Questions addressing the act or process of identifying or determining the nature and cause of a disease or injury through evaluation.

Prognosis/Prediction: Questions addressing the prediction of the course of a disease.

The type of question you have will often lead you to the type of research that will best answer the question:

Intervention/Prevention: RCT > Cohort Study > Case Control > Case Series

Therapy: RCT > Cohort Study > Case Control > Case Series

Prognosis/Prediction: Cohort Study > Case Control > Case Series

Diagnosis/Diagnostic: Prospective, blind comparison to Gold Standard

Etiology: RCT > Cohort Study > Case Control > Case Series
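
The rankings above can be read as a preference-ordered lookup. Below is a minimal Python sketch (an editor's illustration with hypothetical names, not part of the guide) that returns the strongest design actually available for a given question type:

    # Encodes the question-type -> study-design rankings above,
    # listed from most to least preferred for each question type.
    PREFERRED_DESIGNS = {
        "intervention/prevention": ["RCT", "cohort study", "case control", "case series"],
        "therapy": ["RCT", "cohort study", "case control", "case series"],
        "prognosis/prediction": ["cohort study", "case control", "case series"],
        "diagnosis": ["prospective, blind comparison to gold standard"],
        "etiology": ["RCT", "cohort study", "case control", "case series"],
    }

    def best_available(question_type, available):
        """Return the highest-ranked design for the question found in `available`."""
        for design in PREFERRED_DESIGNS[question_type]:
            if design in available:
                return design
        return None  # none of the preferred designs is available

    # Example: no RCT exists for the question, so a cohort study is the best choice.
    print(best_available("therapy", {"cohort study", "case series"}))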

Definitions

CEBM Study Design Tree

[Figure: Flow chart depicting study design types]

The type of study can generally be worked out by looking at three questions:

Q1. What was the aim of the study?

  • To simply describe a population (PO questions): descriptive study
  • To quantify the relationship between factors (PICO questions): analytic study

Q2. If analytic, was the intervention randomly allocated?

  • Yes? RCT
  • No? Observational study

For an observational study, the main types will then depend on the timing of the measurement of the outcome, so our third question is:

Q3. When were the outcomes determined?

  • Some time after the exposure or intervention? cohort study (‘prospective study’)
  • At the same time as the exposure or intervention? cross sectional study or survey
  • Before the exposure was determined? case-control study (‘retrospective study’ based on recall of the exposure)

from Centre for Evidence-Based Medicine https://www.cebm.net/2014/04/study-designs/
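
The three questions above amount to a small decision procedure. As a minimal sketch (an editor's illustration of the CEBM tree, not an official tool), in Python:

    # Encodes the three CEBM questions above as a simple classifier.
    def classify_study_design(aim, randomized=None, outcome_timing=None):
        """Classify a study design from the three questions above.

        aim: "describe" (PO question) or "quantify" (PICO question)
        randomized: for analytic studies, was the intervention randomly allocated?
        outcome_timing: for observational studies, when were outcomes determined,
            relative to the exposure: "after", "same_time", or "before"
        """
        if aim == "describe":
            return "descriptive study"
        if randomized:
            return "randomized controlled trial (RCT)"
        timing = {
            "after": "cohort study ('prospective study')",
            "same_time": "cross-sectional study or survey",
            "before": "case-control study ('retrospective study')",
        }
        return timing.get(outcome_timing, "observational study (unspecified)")

    # Outcomes measured some time after the exposure -> cohort study
    print(classify_study_design("quantify", randomized=False, outcome_timing="after"))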


What is the best evidence and how to find it

Why is research evidence better than expert opinion alone?

In a broad sense, research evidence can be any systematic observation in order to establish facts and reach conclusions. Anything not fulfilling this definition is typically classified as “expert opinion”, the basis of which includes experience with patients, an understanding of biology, knowledge of pre-clinical research, as well as of the results of studies. Using expert opinion as the only basis to make decisions has proved problematic because in practice doctors often introduce new treatments too quickly before they have been shown to work, or they are too slow to introduce proven treatments.

However, clinical experience is key to interpret and apply research evidence into practice, and to formulate recommendations, for instance in the context of clinical guidelines. In other words, research evidence is necessary but not sufficient to make good health decisions.

Which studies are more reliable?

Not all evidence is equally reliable.

Any study design, qualitative or quantitative, where data is collected from individuals or groups of people is usually called a primary study. There are many types of primary study designs, but for each type of health question there is one that provides more reliable information.

For treatment decisions, there is consensus that the most reliable primary study is the randomised controlled trial (RCT). In this type of study, patients are randomly assigned to have either the treatment being tested or a comparison treatment (sometimes called the control treatment). Random really means random. The decision to put someone into one group or another is made like tossing a coin: heads they go into one group, tails they go into the other.

The control treatment might be a different type of treatment or a dummy treatment that shouldn't have any effect (a placebo). Researchers then compare the effects of the different treatments.
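
To make the coin-toss idea concrete, here is a minimal Python sketch (an editor's illustration; real trials use prepared randomisation schedules rather than ad hoc code):

    import random

    def allocate(participants, seed=None):
        """Randomly assign each participant to 'treatment' or 'control',
        like tossing a coin for each person."""
        rng = random.Random(seed)  # a seed makes the example reproducible
        return {p: rng.choice(["treatment", "control"]) for p in participants}

    print(allocate(["P1", "P2", "P3", "P4"], seed=42))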

Large randomised trials are expensive and take time. In addition, sometimes it may be unethical to undertake a study in which some people are randomly assigned not to have a treatment. For example, it wouldn't be right to give oxygen to some children having an asthma attack and not give it to others. In cases like this, other primary study designs may be the best choice.

Laboratory studies are another type of study. Newspapers often have stories of studies showing how a drug cured cancer in mice. But just because a treatment works for animals in laboratory experiments, this doesn't mean it will work for humans. In fact, most drugs that have been shown to cure cancer in mice do not work for people.

Very rarely we cannot base our health decisions on the results of studies. Sometimes the research hasn't been done because doctors are used to treating a condition in a way that seems to work. This is often true of treatments for broken bones and operations. But just because there's no research for a treatment doesn't mean it doesn't work. It just means that no one can say for sure.

Why we shouldn’t read studies

An enormous amount of effort is required to identify and summarise everything we know about any given health intervention. The amount of data has soared dramatically. A conservative estimate is that there are more than 35,000 medical journals and almost 20 million research articles published every year. On the other hand, up to half of existing data might be unpublished.

How can anyone keep up with all this? And how can you tell if the research is good or not? Each primary study is only one piece of a jigsaw that may take years to finish. Rarely does any one piece of research answer a doctor's or a patient's questions.

Even though reading large numbers of studies is impractical, high-quality primary studies, especially RCTs, constitute the foundations of what we know, and they are the best way of advancing knowledge. Any effort to support or promote the conduct of sound, transparent, and independent trials that are fully and clearly published is worth endorsing. A prominent project in this regard is the AllTrials initiative.

Why we should read systematic reviews

Most of the time a single study doesn't tell us enough. The best answers are found by combining the results of many studies.

A systematic review is a type of research that looks at the results from all of the good-quality studies. It puts together the results of these individual studies into one summary. This gives an estimate of a treatment's risks and benefits. Sometimes these reviews include a statistical analysis, called a meta-analysis, which combines the results of several studies to give a treatment effect.

Systematic reviews are increasingly being used for decision making because they reduce the probability of being misled by looking at one piece of the jigsaw. By being systematic they are also more transparent, and have become the gold standard approach to synthesise the ever-expanding and conflicting biomedical literature.

Systematic reviews are not foolproof. Their findings are only as good as the studies that they include and the methods they employ. But the best reviews clearly state whether the studies they include are good quality or not.

Three reasons why we shouldn’t read (most) systematic reviews

Firstly, systematic reviews have proliferated over time. From 11 per day in 2010, they skyrocketed to 40 per day or more in 2015.[1][2] Some have described this production as having reached epidemic proportions, where the large majority of produced systematic reviews and meta-analyses are unnecessary, misleading, and/or conflicted.[3][4] So, finding more than one systematic review for a question is the rule rather than the exception, and it is not unusual to find several dozen for the hottest questions.

Second, most systematic reviews address a narrow question. It is difficult to put them in the context of all of the available alternatives for an individual case. Reading multiple reviews to assess all of the alternatives is impractical, even more so when we consider that they are typically difficult to read for the average clinician, who needs to answer several questions each day.[5]

Third, systematic reviews do not tell you what to do, or what is advisable for a given patient or situation. Indeed, good systematic reviews explicitly avoid making recommendations.

So, even though systematic reviews play a key role in any evidence-based decision-making process, most of them are low-quality or outdated, and they rarely provide all the information needed to make decisions in the real world.

How to find the best available evidence?

Considering the massive amount of information available, we can quickly discard periodically reviewing our favourite journals as a means of sourcing the best available evidence.

The traditional approach to searching for evidence has been to use major databases, such as PubMed or EMBASE. These are comprehensive sources containing millions of relevant, but also irrelevant, articles. Even though in the past they were the preferred approach to searching for evidence, information overload has made them impractical, and most clinicians would fail to find the best available evidence in this way, however hard they tried.

Another popular approach is simply searching in Google. Unfortunately, because of its lack of transparency, Google is not a reliable way to filter current best evidence from unsubstantiated or non-scientifically supervised sources.[6]

Three alternatives to access the best evidence

Alternative 1 - Pick the best systematic review

Mastering the art of identifying, appraising, and applying high-quality systematic reviews into practice can be very rewarding. It is not easy, but once mastered it gives a view of the bigger picture: of what is known, and what is not known.

The best single source of highest-quality systematic reviews is produced by an international organisation called the Cochrane Collaboration, named after a well-known researcher.[4] They can be accessed at The Cochrane Library .

Unfortunately, Cochrane reviews do not cover all of the existing questions and they are not always up to date. Also, there might be non-Cochrane reviews out-performing Cochrane reviews.

There are many resources that facilitate access to systematic reviews (and other resources), such as Trip database, PubMed Health, ACCESSSS, or Epistemonikos (the Cochrane Collaboration maintains a comprehensive list of these resources).

Epistemonikos database is innovative both in simultaneously searching multiple resources and in indexing and interlinking relevant evidence. For example, Epistemonikos connects systematic reviews and their included studies, and thus allows clustering of systematic reviews based on the primary studies they have in common. Epistemonikos is also unique in offering an appreciable multilingual user interface, multilingual search, and translation of abstracts in more than nine languages.[6] This database includes several tools to compare systematic reviews, including the matrix of evidence, a dynamic table showing all of the systematic reviews, and the primary studies included in those reviews.

Additionally, Epistemonikos partnered with Cochrane, and during 2017 a combined search in both the Cochrane Library and Epistemonikos was released.

Alternative 2 - Read trustworthy guidelines

Although systematic reviews can provide a synthesis of the benefits and harms of the interventions, they do not integrate these factors with patients' values and preferences or resource considerations to provide a suggested course of action. Also, to fully address the questions, clinicians would need to integrate the information of several systematic reviews covering all the relevant alternatives and outcomes. Most clinicians will likely prefer guidance rather than interpreting systematic reviews themselves.

Trustworthy guidelines, especially if developed with high standards, such as the Grading of Recommendations, Assessment, Development, and Evaluation ( GRADE ) approach, offer systematic and transparent guidance in moving from evidence to recommendations.[7]

Many online guideline websites promote themselves as "evidence based", but few have explicit links to research findings.[8] If they don't have in-line references to relevant research findings, dismiss them. If they do, you can judge the strength of their commitment to using evidence to support inference by checking whether statements are based on high-quality versus low-quality evidence, using alternative 1 explained above.

Unfortunately, most guidelines have serious limitations or are outdated.[9][10] The exercise of locating and appraising the best guideline is time consuming. This is particularly challenging for generalists addressing questions from different conditions or diseases.

Alternative 3 - Use point-of-care tools

Point-of-care tools, such as BMJ Best Practice, have been developed as a response to the genuine need to summarise the ever-expanding biomedical literature on an ever-increasing number of alternatives in order to make evidence-based decisions. In this competitive market, the more successful products have been those delivering innovative, user-friendly interfaces that improve the retrieval, synthesis, organisation, and application of evidence-based content in many different areas of clinical practice.

However, the same impossibility in catching up with new evidence without compromising quality that affects guidelines also affects point-of-care tools. Clinicians should become familiar with the point-of-care information resource they want or can access, and examine the in-line references to relevant research findings. Clinicians can easily judge the strength of the commitment to evidence checking whether statements are based on high-quality versus low-quality evidence using alternative 1 explained above. Comprehensiveness, use of GRADE approach, and independence are other characteristics to bear in mind when selecting among point-of-care information summaries.

A comprehensive list of these resources can be found in a study by Kwag et al .

Finding the best available evidence is more challenging than it was in the dawn of the evidence-based movement, and the main cause is the exponential growth of evidence-based information, in any of the flavours described above.

However, with a little bit of patience and practice, the busy clinician will discover evidence-based practice is far easier than it was 5 or 10 years ago. We are entering a stage where information is flowing between the different systems, technology is being harnessed for good, and the different players are starting to generate alliances.

The early adopters will surely enjoy the first experiments with living systematic reviews (high-quality, up-to-date online summaries of health research that are updated as new research becomes available), living guidelines, and rapid reviews tied to rapid recommendations, just to mention a few.[13][14][15]

It is unlikely that the picture of countless low-quality studies and reviews will change in the foreseeable future. However, it would not be a surprise if, in 3 to 5 years, separating the wheat from the chaff becomes trivial. Maybe the promise of evidence-based medicine of more effective, safer medical intervention resulting in better health outcomes for patients could be fulfilled.

Author: Gabriel Rada

Competing interests: Gabriel Rada is the co-founder and chairman of Epistemonikos database, part of the team that founded and maintains PDQ-Evidence, and an editor of the Cochrane Collaboration.


References

  1. Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010 Sep 21;7(9):e1000326. doi: 10.1371/journal.pmed.1000326
  2. Epistemonikos database [filter = systematic review; year = 2015]. A Free, Relational, Collaborative, Multilingual Database of Health Evidence. https://www.epistemonikos.org/en/search?&q=*&classification=systematic-review&year_start=2015&year_end=2015&fl=14542 Accessed 5 Jan 2017.
  3. Ioannidis JP. The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses. Milbank Q. 2016 Sep;94(3):485-514. doi: 10.1111/1468-0009.12210
  4. Page MJ, Shamseer L, Altman DG, et al. Epidemiology and reporting characteristics of systematic reviews of biomedical research: a cross-sectional study. PLoS Med. 2016;13(5):e1002028.
  5. Del Fiol G, Workman TE, Gorman PN. Clinical questions raised by clinicians at the point of care: a systematic review. JAMA Intern Med. 2014 May;174(5):710-8. doi: 10.1001/jamainternmed.2014.368
  6. Agoritsas T, Vandvik P, Neumann I, Rochwerg B, Jaeschke R, Hayward R, et al. Chapter 5: finding current best evidence. In: Users' guides to the medical literature: a manual for evidence-based clinical practice. Chicago: McGraw-Hill, 2014.
  7. Guyatt GH, Oxman AD, Vist GE, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650):924-926. doi: 10.1136/bmj.39489.470347
  8. Neumann I, Santesso N, Akl EA, et al. A guide for health professionals to interpret and use recommendations in guidelines developed with the GRADE approach. J Clin Epidemiol. 2016 Apr;72:45-55. doi: 10.1016/j.jclinepi.2015.11.017
  9. Alonso-Coello P, Irfan A, Solà I, et al. The quality of clinical practice guidelines over the last two decades: a systematic review of guideline appraisal studies. Qual Saf Health Care. 2010 Dec;19(6):e58. doi: 10.1136/qshc.2010.042077
  10. Martínez García L, Sanabria AJ, García Alvarez E, et al.; Updating Guidelines Working Group. The validity of recommendations from clinical guidelines: a survival analysis. CMAJ. 2014 Nov 4;186(16):1211-9. doi: 10.1503/cmaj.140547
  11. Kwag KH, González-Lorenzo M, Banzi R, Bonovas S, Moja L. Providing Doctors With High-Quality Information: An Updated Evaluation of Web-Based Point-of-Care Information Summaries. J Med Internet Res. 2016 Jan 19;18(1):e15. doi: 10.2196/jmir.5234
  12. Banzi R, Cinquini M, Liberati A, et al. Speed of updating online evidence based point of care summaries: prospective cohort analysis. BMJ. 2011 Sep 23;343:d5856. doi: 10.1136/bmj.d5856
  13. Elliott JH, Turner T, Clavisi O, et al. Living systematic reviews: an emerging opportunity to narrow the evidence-practice gap. PLoS Med. 2014 Feb 18;11(2):e1001603. doi: 10.1371/journal.pmed.1001603
  14. Vandvik PO, Brandt L, Alonso-Coello P, et al. Creating clinical practice guidelines we can trust, use, and share: a new era is imminent. Chest. 2013 Aug;144(2):381-9. doi: 10.1378/chest.13-0746
  15. Vandvik PO, Otto CM, Siemieniuk RA, et al. Transcatheter or surgical aortic valve replacement for patients with severe, symptomatic, aortic stenosis at low to intermediate surgical risk: a clinical practice guideline. BMJ. 2016 Sep 28;354:i5085. doi: 10.1136/bmj.i5085


Research Guide for Masters and Doctoral Nursing Students


Types of Evidence-Based Articles

In order to reduce bias in research, health sciences professionals have developed standards for determining which types of articles and research demonstrate strong evidence in their methodology. 

Typically these types of studies build upon each other: what starts off as a case-control study may move through other phases of research and eventually feed into a systematic review with meta-analysis. At other times, however, factors such as divergent research methods prevent certain problems in the health sciences from being properly examined with a meta-analysis.

How do I know?

Depending on the search tool you use, there are a few ways to tell...

Firstly, many high-evidence article types will explicitly state what type of article it is in the title.

Secondly, if you cannot figure out the article type, sometimes it will be revealed in the abstract of the article. Look for words like "systematic analysis" to indicate high levels of evidence.

In PubMed, you can see it by clicking on "Publication Type/MeSH Terms".

In CINAHL, you can see it by looking at the Publication Type.
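
For readers who prefer to script such searches, the sketch below (an editor's illustration; it assumes network access to NCBI's public E-utilities API and uses PubMed's [pt] publication-type search tag) retrieves PubMed IDs for a topic limited to a given publication type:

    import json
    import urllib.parse
    import urllib.request

    def pubmed_search(topic, pub_type, retmax=10):
        """Return PubMed IDs for `topic`, limited to a publication type."""
        term = f"{topic} AND {pub_type}[pt]"  # e.g. asthma AND randomized controlled trial[pt]
        query = urllib.parse.urlencode(
            {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"})
        url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + query
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["esearchresult"]["idlist"]

    print(pubmed_search("asthma", "randomized controlled trial"))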

Levels of Evidence

  • The Pyramid
  • Non-Research Information
  • Observational Studies
  • Experimental Studies
  • Critical Analysis

[Figure: Levels of evidence, shown in pyramid form]

  • Non-Evidence-based Expert Opinion - Commentary statements, speeches, or editorials written by prominent experts asserting ideas that are reached by conjecture, casual observation, emotion, religious belief, or ego.
  • Non-EBP Guidelines - Practice guidelines that exist because of eminence-, authority-, eloquence-, providence-, or diffidence-based approaches to healthcare.
  • News Articles - Brief summaries of research or medical opinions written by journalists for the general public.
  • Editorials - Opinions asserted by experts, lay-people, non-experts, or anyone else in a news outlet, magazine, or academic journal.
  • Commentary - Similar to an editorial, but it may be identified as a commentary, which can be an invited, informal, and non-reviewed short article pertaining to a particular concept or idea.

Let's Talk about Review Articles

Review articles are common in the health literature. They are typically overviews of the literature found on a topic, but do not go so far as to meet the methodological requirements of a systematic review.

These articles may contain some critical analysis, but will not follow the rigorous criteria that a systematic review does. They can be used to demonstrate evidence, although they do not make a very strong case, as they are secondary articles rather than originally conducted observational or experimental research.

  • Example (case report): A patient enters the ER with some of the key symptoms of mononucleosis, but complains of nausea and stomach pain. Upon further testing, the doctor concludes that the mono infection has affected the liver.
  • Example (case series): A pediatrician notices that children of a specific city jurisdiction are being diagnosed with lead poisoning. Upon further examination for the cause, she finds out that the city's dated and crumbling lead pipe plumbing infrastructure is affecting the water quality, leading to this high incidence.
  • Example (cohort study): A group of doctors studied the long-term health effects of people who smoked with particular frequencies alongside people who did not smoke at all. From this study, doctors concluded that smoking poses significant health hazards, such as increased risks of heart disease, cancers, and lung diseases.
  • Example (non-randomized controlled trial): Researchers investigate the difference between yoga and acupuncture as relief for lower back pain. Since the activity involved in the trial cannot be disguised (as one might with a placebo), randomization cannot be a part of the research study.
  • Example (randomized controlled trial): Researchers need to test the effects of a new drug for Parkinson's, so they recruit subjects for the study. A control group would be given the drugs standard for Parkinson's treatment, and the experimental group would have the trial drug. Participants would not know which treatment they receive during the trial.
  • Example (evidence applied to practice): After thorough testing and experimentation, researchers, doctors, and product developers created and started using less-invasive oxygen monitoring devices to improve recovery times after surgeries. These are now standard equipment.
  • Example (systematic review): Researchers want to look at the literature on mammograms for breast cancer screening to figure out, based on the literature, when someone should start getting regular mammograms as a preventive measure. They conduct a thorough literature search that consults multiple research databases and sources of literature, documenting every step and sorting through and selecting articles for inclusion based on their criteria. They then qualitatively analyze the results of the selected articles to determine what type of recommendation for routine mammogram screening for breast cancer they should provide.

Many systematic reviews contain a meta-analysis and will specify so, usually in the title.

Example (systematic review with meta-analysis): Researchers want to know what the rate of depression is in overweight women of Latin American heritage, and examine self-reported sociocultural factors involved in their mental health. They conduct a literature search exactly as researchers might do for a systematic review (see above) and do a quantitative analysis of the data using advanced statistical methods to synthesize conclusions from the numbers aggregated from a variety of studies.

Qualitative vs Quantitative Methods

Sources: Qualitative vs Quantitative, from Maricopa Community Colleges; Helpful Definitions, from Simmons College Libraries.


Evidence-Based Nursing Research Guide: Evidence Levels & Types


Evidence Pyramid

Depending on their purpose, design, and mode of reporting or dissemination, health-related research studies can be ranked according to the strength of evidence they provide, with the sources of strongest evidence at the top, and the weakest at the bottom:

[Figure: Pyramid of levels of evidence]

Secondary Sources: studies of studies

Systematic Review

  • Identifies, appraises, and synthesizes all empirical evidence that meets pre-specified eligibility criteria
  • Methods section outlines a detailed search strategy used to identify and appraise articles
  • May include a meta-analysis, but not required (see Meta-Analysis below)

Meta-Analysis

  • A subset of systematic reviews: uses quantitative methods to combine the results of independent studies and synthesize the summaries and conclusions
  • Methods section outlines a detailed search strategy used to identify and appraise articles; often surveys clinical trials
  • Can be conducted independently, or as a part of a systematic review
  • All meta-analyses are systematic reviews, but not all systematic reviews are meta-analyses
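
To illustrate what "combining the results of independent studies" can mean in practice, here is a minimal Python sketch of one common method, fixed-effect inverse-variance weighting (the method choice and the numbers are the editor's illustration, not part of this guide):

    import math

    def pooled_effect(effects, std_errors):
        """Pool per-study effect estimates; each study is weighted by 1/SE^2,
        so more precise studies contribute more. Returns (estimate, SE)."""
        weights = [1.0 / se ** 2 for se in std_errors]
        total = sum(weights)
        estimate = sum(w * e for w, e in zip(weights, effects)) / total
        return estimate, math.sqrt(1.0 / total)

    # Three hypothetical studies reporting the same outcome (e.g. a mean difference):
    est, se = pooled_effect([0.30, 0.10, 0.25], [0.10, 0.05, 0.20])
    print(f"pooled effect = {est:.3f}, standard error = {se:.3f}")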

Evidence-Based Guideline

  • Provides a brief summary of evidence for a general clinical question or condition
  • Produced by professional health care organizations, practices, and agencies that systematically gather, appraise, and combine the evidence
  • Click on the 'Evidence-Based Care Sheets' link located at the top of the CINAHL screen to find short overviews of evidence-based care recommendations covering 140 or more health care topics.


Meta-Synthesis or Qualitative Synthesis (Systematic Review of Qualitative or Descriptive Studies)

  • A systematic review of qualitative or descriptive studies; considered a lower strength of evidence

Primary Sources: original studies

Randomized Controlled Trial

  • Experiment where individuals are randomly assigned to an experimental or control group to test the value or efficiency of a treatment or intervention

Non-Randomized Controlled Clinical Trial (Quasi-Experimental)

  • Involves one or more test treatments, at least one control treatment, specified outcome measures for evaluating the studied intervention, and a bias-free method for assigning patients to the test treatment

Case-Control or Case-Comparison Study (Non-Experimental)

  • Individuals with a particular condition or disease (the cases) are selected for comparison with individuals who do not have the condition or disease (the controls)

Cohort Study (Non-Experimental)

  • Identifies subsets (cohorts) of a defined population
  • Cohorts may or may not be exposed to factors that researchers hypothesize will influence the probability that participants will have a particular disease or other outcome
  • Researchers follow cohorts in an attempt to determine distinguishing subgroup characteristics

Further Reading

  • Levels of Evidence - EBP Toolkit Winona State University
  • Levels of Evidence Northern Virginia Community College
  • Types of Evidence University of Missouri - St Louis

Finding Types of Research


Throughout your schooling, you may need to find different types of evidence and research to support your course work. This guide provides a high-level overview of evidence-based practice as well as the different types of research and study designs. Each page of this guide offers an overview and search tips for finding articles that fit that study design.

Note! If you need help finding a specific type of study, visit the Get Research Help guide to contact the librarians.

What is Evidence-Based Practice?

One of the requirements for your coursework is to find articles that support evidence-based practice. But what exactly is evidence-based practice? Evidence-based practice is a method that uses relevant and current evidence to plan, implement and evaluate patient care. This definition is included in the video below, which explains all the steps of evidence-based practice in greater detail.

  • Video - Evidence-based practice: What it is and what it is not. Medcom (Producer), & Cobb, D. (Director). (2017). Evidence-based practice: What it is and what it is not [Streaming Video]. United States of America: Producer. Retrieved from Alexander Street Press Nursing Education Collection

Quantitative and Qualitative Studies

Research is broken down into two different types: quantitative and qualitative. Quantitative studies are all about measurement. They will report statistics of things that can be physically measured like blood pressure, weight and oxygen saturation. Qualitative studies, on the other hand, are about people's experiences and how they feel about something. This type of information cannot be measured using statistics. Both of these types of studies report original research and are considered single studies. Watch the video below for more information.

Watch the Identifying Quantitative and Qualitative video

Study Designs

Some research study types that you will encounter include:

  • Case-Control Studies
  • Cohort Studies
  • Cross-Sectional Studies

Studies that Synthesize Other Studies

Sometimes, a research study will examine the results of many other studies, looking for trends and drawing conclusions. These types of studies include:

  • Meta-Analyses

Tip! How do you determine the research article's study type or level of evidence? First, look at the article abstract. Most of the time the abstract will have a methodology section, which should tell you what type of study design the researchers are using. If it is not in the abstract, look for the methodology section of the article. It should tell you all about what type of study the researcher is doing and the steps they used to carry out the study.

Read the book below to learn how to read a clinical paper, including the types of study designs you will encounter.

Understanding Clinical Papers


New evidence pyramid

Volume 21, Issue 4

M Hassan Murad, Mouaz Alsawas, Fares Alahdab (Rochester, Minnesota, USA)

Correspondence to: Dr M Hassan Murad, Evidence-based Practice Center, Mayo Clinic, Rochester, MN 55905, USA; murad.mohammad{at}mayo.edu

https://doi.org/10.1136/ebmed-2016-110401



The first principle of evidence-based medicine is that a hierarchy of evidence exists: not all evidence is the same. This principle became well known in the early 1990s as practising physicians learnt basic clinical epidemiology skills and started to appraise and apply evidence to their practice. Since evidence was described as a hierarchy, a compelling rationale for a pyramid was made. Evidence-based healthcare practitioners became familiar with this pyramid when reading the literature, applying evidence or teaching students.

Various versions of the evidence pyramid have been described, but all of them focused on showing weaker study designs at the bottom (basic science and case series), followed by case–control and cohort studies in the middle, then randomised controlled trials (RCTs), and at the very top, systematic reviews and meta-analysis. This description is intuitive and likely correct in many instances. The placement of systematic reviews at the top had undergone several alterations in interpretation, but they were still thought of as an item in a hierarchy. 1 Most versions of the pyramid clearly represented a hierarchy of internal validity (risk of bias). Some versions incorporated external validity (applicability) in the pyramid, either by placing N-of-1 trials above RCTs (because their results are most applicable to individual patients 2 ) or by separating internal and external validity. 3

Another version (the 6S pyramid) was also developed to describe the sources of evidence that can be used by evidence-based medicine (EBM) practitioners for answering foreground questions, showing a hierarchy ranging from studies, synopses of studies, syntheses, synopses of syntheses, summaries and systems. 4 This hierarchy may imply some sort of increasing validity and applicability, although its main purpose is to emphasise that the lower sources of evidence in the hierarchy are least preferred in practice because they require more expertise and time to identify, appraise and apply.

The traditional pyramid was deemed too simplistic at times, and thus the importance of leaving room for argument and counterargument about the methodological merit of different designs has been emphasised. 5 Other challenges faced the placement of systematic reviews and meta-analyses at the top of the pyramid. For instance, heterogeneity (clinical, methodological or statistical) is an inherent limitation of meta-analyses that can be minimised or explained but never eliminated. 6 The methodological intricacies and dilemmas of systematic reviews could potentially result in uncertainty and error. 7 One evaluation of 163 meta-analyses demonstrated that the estimation of treatment outcomes differed substantially depending on the analytical strategy being used. 7 Therefore, we suggest, in this perspective, two visual modifications to the pyramid to illustrate two contemporary methodological principles (figure 1). We provide the rationale and an example for each modification.


Figure 1: The proposed new evidence-based medicine pyramid. (A) The traditional pyramid. (B) Revising the pyramid: (1) lines separating the study designs become wavy (Grading of Recommendations Assessment, Development and Evaluation), (2) systematic reviews are ‘chopped off’ the pyramid. (C) The revised pyramid: systematic reviews are a lens through which evidence is viewed (applied).

Rationale for modification 1

In the early 2000s, the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group developed a framework in which certainty in evidence is based on numerous factors, not solely on study design, which challenges the pyramid concept. 8 Study design alone is an insufficient surrogate for risk of bias. Methodological limitations of a study, imprecision, inconsistency and indirectness are factors independent of study design that can affect the quality of evidence derived from any design. For example, a meta-analysis of RCTs evaluating intensive glycaemic control in non-critically ill hospitalised patients showed a non-significant reduction in mortality (relative risk 0.95, 95% CI 0.72 to 1.25). 9 Allocation concealment and blinding were not adequate in most trials. The quality of this evidence is rated down owing to the methodological limitations of the trials and imprecision (a wide CI that includes both substantial benefit and harm). Hence, despite having five RCTs, such evidence should not be rated high in any pyramid. The quality of evidence can also be rated up. For example, we are quite certain about the benefits of hip replacement in a patient with disabling hip osteoarthritis; although not tested in RCTs, the quality of this evidence is rated up despite the study design (non-randomised observational studies). 10
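
To make the imprecision point concrete, here is a minimal sketch of the relative-risk arithmetic in Python. The 2×2 counts are hypothetical, chosen only so the output matches the RR and CI quoted above; they are not the trial data from the cited meta-analysis.

```python
import math

# Hypothetical event counts and arm sizes (NOT the actual trial data).
events_tx, n_tx = 90, 1000    # intensive glycaemic control arm
events_ctl, n_ctl = 95, 1000  # conventional control arm

rr = (events_tx / n_tx) / (events_ctl / n_ctl)
# Standard error of log(RR) for a two-arm binary outcome.
se = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctl - 1/n_ctl)
low = math.exp(math.log(rr) - 1.96 * se)
high = math.exp(math.log(rr) + 1.96 * se)

print(f"RR = {rr:.2f}, 95% CI {low:.2f} to {high:.2f}")
# -> RR = 0.95, 95% CI 0.72 to 1.25
# The interval spans 1.0, so both meaningful benefit and harm remain
# plausible; this is the imprecision that GRADE rates down.
```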

Rationale for modification 2

Another challenge to the notion of having systematic reviews at the top of the evidence pyramid relates to the framework presented in the Journal of the American Medical Association User's Guide on systematic reviews and meta-analysis. The Guide presented a two-step approach in which the credibility of the process of a systematic review is evaluated first (comprehensive literature search, rigorous study selection process, etc). If the systematic review is deemed sufficiently credible, a second step takes place in which certainty in the evidence is evaluated using the GRADE approach. 11 In other words, a meta-analysis of well-conducted RCTs at low risk of bias cannot be equated with a meta-analysis of observational studies at higher risk of bias. For example, a meta-analysis of 112 surgical case series showed that in patients with thoracic aortic transection, the mortality rate was significantly lower in patients who underwent endovascular repair, followed by open repair and non-operative management (9%, 19% and 46%, respectively, p<0.01). Clearly, this meta-analysis should not sit at the top of the pyramid in the way a meta-analysis of RCTs would. After all, the evidence still consists of non-randomised studies and is likely subject to numerous confounders.

Therefore, the second modification to the pyramid is to remove systematic reviews from the top and to use them instead as a lens through which other types of studies are seen (ie, appraised and applied). The systematic review (the process of selecting the studies) and meta-analysis (the statistical aggregation that produces a single effect size) are tools with which stakeholders consume and apply the evidence.

Implications and limitations

Changing how systematic reviews and meta-analyses are perceived by stakeholders such as patients and clinicians has important implications. For example, the American Heart Association considers evidence derived from meta-analyses to have level ‘A’ status (ie, warranting the most confidence). Re-evaluation of evidence using GRADE shows that level ‘A’ evidence could have been high, moderate, low or very low quality. 12 The quality of evidence drives the strength of recommendation, which is one of the last translational steps of research, most proximal to patient care.

One of the limitations of all ‘pyramids’ and depictions of evidence hierarchy relates to the underpinnings of such schemas. The construct of internal validity may have varying definitions, or be understood differently, among evidence consumers. A limitation of considering systematic reviews and meta-analyses as tools to consume evidence is that this may undermine their role in new discovery (eg, identifying a new side effect that was not demonstrated in individual studies 13 ).

This pyramid can be also used as a teaching tool. EBM teachers can compare it to the existing pyramids to explain how certainty in the evidence (also called quality of evidence) is evaluated. It can be used to teach how evidence-based practitioners can appraise and apply systematic reviews in practice, and to demonstrate the evolution in EBM thinking and the modern understanding of certainty in evidence.

Contributors MHM conceived the idea and drafted the manuscript. FA helped draft the manuscript and designed the new pyramid. MA and NA helped draft the manuscript.

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer reviewed.

Linked Articles

  • Editorial: Pyramids are guides not rules: the evolution of the evidence pyramid. Terrence Shaneyfelt. BMJ Evidence-Based Medicine 2016;21:121-122. Published Online First: 12 Jul 2016. doi: 10.1136/ebmed-2016-110498
  • Perspective: EBHC pyramid 5.0 for accessing preappraised evidence and guidance. Brian S Alper, R Brian Haynes. BMJ Evidence-Based Medicine 2016;21:123-125. Published Online First: 20 Jun 2016. doi: 10.1136/ebmed-2016-110447


Evidence – Definition, Types and Example

Definition

Evidence is any information or data that supports or refutes a claim, hypothesis, or argument. It is the basis for making decisions, drawing conclusions, and establishing the truth or validity of a statement.

Types of Evidence

Types of Evidence are as follows:

Empirical evidence

This type of evidence comes from direct observation or measurement, and is usually based on data collected through scientific or other systematic methods.

Expert Testimony

This is evidence provided by individuals who have specialized knowledge or expertise in a particular area, and can provide insight into the validity or reliability of a claim.

Personal Experience

This type of evidence comes from firsthand accounts of events or situations, and can be useful in providing context or a sense of perspective.

Statistical Evidence

This type of evidence involves the use of numbers and data to support a claim, and can include things like surveys, polls, and other types of quantitative analysis.

Analogical Evidence

This involves making comparisons between similar situations or cases, and can be used to draw conclusions about the validity or applicability of a claim.

Documentary Evidence

This includes written or recorded materials, such as contracts, emails, or other types of documents, that can provide support for a claim.

Circumstantial Evidence

This type of evidence involves drawing inferences from indirect indicators rather than direct observation, and can be used to support a claim when direct evidence is not available.

Examples of Evidence

Here are some examples of different types of evidence that could be used to support a claim or argument:

  • A study conducted on a new drug, showing its effectiveness in treating a particular disease, based on clinical trials and medical data.
  • A doctor providing testimony in court about a patient’s medical condition or injuries.
  • A patient sharing their personal experience with a particular medical treatment or therapy.
  • A study showing that a particular type of cancer is more common in certain demographics or geographic areas.
  • Comparing the benefits of a healthy diet and exercise to maintaining a car with regular oil changes and maintenance.
  • A contract showing that two parties agreed to a particular set of terms and conditions.
  • The presence of a suspect’s DNA at the crime scene can be used as circumstantial evidence to suggest their involvement in the crime.

Applications of Evidence

Here are some applications of evidence:

  • Law : In the legal system, evidence is used to establish facts and to prove or disprove a case. Lawyers use different types of evidence, such as witness testimony, physical evidence, and documentary evidence, to present their arguments and persuade judges and juries.
  • Science : Evidence is the foundation of scientific inquiry. Scientists use evidence to support or refute hypotheses and theories, and to advance knowledge in their fields. The scientific method relies on evidence-based observations, experiments, and data analysis.
  • Medicine : Evidence-based medicine (EBM) is a medical approach that emphasizes the use of scientific evidence to inform clinical decision-making. EBM relies on clinical trials, systematic reviews, and meta-analyses to determine the best treatments for patients.
  • Public policy : Evidence is crucial in informing public policy decisions. Policymakers rely on research studies, evaluations, and other forms of evidence to develop and implement policies that are effective, efficient, and equitable.
  • Business : Evidence-based decision-making is becoming increasingly important in the business world. Companies use data analytics, market research, and other forms of evidence to make strategic decisions, evaluate performance, and optimize operations.

Purpose of Evidence

The purpose of evidence is to support or prove a claim or argument. Evidence can take many forms, including statistics, examples, anecdotes, expert opinions, and research studies. The use of evidence is important in fields such as science, law, and journalism to ensure that claims are backed up by factual information and to make decisions based on reliable information. Evidence can also be used to challenge or question existing beliefs and assumptions, and to uncover new knowledge and insights. Overall, the purpose of evidence is to provide a foundation for understanding and decision-making that is grounded in empirical facts and data.

Characteristics of Evidence

Some Characteristics of Evidence are as follows:

  • Relevance : Evidence must be relevant to the claim or argument it is intended to support. It should directly address the issue at hand and not be tangential or unrelated.
  • Reliability : Evidence should come from a trustworthy and reliable source. The credibility of the source should be established, and the information should be accurate and free from bias.
  • Sufficiency : Evidence should be sufficient to support the claim or argument. It should provide enough information to make a strong case, but not be overly repetitive or redundant.
  • Validity : Evidence should be based on sound reasoning and logic. It should be based on established principles or theories, and should be consistent with other evidence and observations.
  • Timeliness : Evidence should be current and up-to-date. It should reflect the most recent developments or research in the field.
  • Accessibility : Evidence should be easily accessible to others who may want to review or evaluate it. It should be clear and easy to understand, and should be presented in a way that is appropriate for the intended audience.

Advantages of Evidence

The use of evidence has several advantages, including:

  • Supports informed decision-making: Evidence-based decision-making enables individuals or organizations to make informed choices based on reliable information rather than assumptions or opinions.
  • Enhances credibility: The use of evidence can enhance the credibility of claims or arguments by providing factual support.
  • Promotes transparency: The use of evidence promotes transparency in decision-making processes by providing a clear and objective basis for decisions.
  • Facilitates evaluation : Evidence-based decision-making enables the evaluation of the effectiveness of policies, programs, and interventions.
  • Provides insights: The use of evidence can provide new insights and perspectives on complex issues, enabling individuals or organizations to approach problems from different angles.
  • Enhances problem-solving : Evidence-based decision-making can help individuals or organizations to identify the root causes of problems and develop more effective solutions.

Limitations of Evidence

Some Limitations of Evidence are as follows:

  • Limited availability : Evidence may not always be available or accessible, particularly in areas where research is limited or where data collection is difficult.
  • Interpretation challenges: Evidence can be open to interpretation, and individuals may interpret the same evidence differently based on their biases, experiences, or values.
  • Time-consuming: Gathering and evaluating evidence can be time-consuming and require significant resources, which may not always be feasible in certain contexts.
  • May not apply universally : Evidence may be context-specific and may not apply universally to other situations or populations.
  • Potential for bias: Even well-designed studies or research can be influenced by biases, such as selection bias, measurement bias, or publication bias.
  • Ethical concerns : Evidence may raise ethical concerns, such as the use of personal data or the potential harm to research participants.


The Writing Center • University of North Carolina at Chapel Hill

What this handout is about

This handout will provide a broad overview of gathering and using evidence. It will help you decide what counts as evidence, put evidence to work in your writing, and determine whether you have enough evidence. It will also offer links to additional resources.

Introduction

Many papers that you write in college will require you to make an argument ; this means that you must take a position on the subject you are discussing and support that position with evidence. It’s important that you use the right kind of evidence, that you use it effectively, and that you have an appropriate amount of it. If, for example, your philosophy professor didn’t like it that you used a survey of public opinion as your primary evidence in your ethics paper, you need to find out more about what philosophers count as good evidence. If your instructor has told you that you need more analysis, suggested that you’re “just listing” points or giving a “laundry list,” or asked you how certain points are related to your argument, it may mean that you can do more to fully incorporate your evidence into your argument. Comments like “for example?,” “proof?,” “go deeper,” or “expand” in the margins of your graded paper suggest that you may need more evidence. Let’s take a look at each of these issues—understanding what counts as evidence, using evidence in your argument, and deciding whether you need more evidence.

What counts as evidence?

Before you begin gathering information for possible use as evidence in your argument, you need to be sure that you understand the purpose of your assignment. If you are working on a project for a class, look carefully at the assignment prompt. It may give you clues about what sorts of evidence you will need. Does the instructor mention any particular books you should use in writing your paper or the names of any authors who have written about your topic? How long should your paper be (longer works may require more, or more varied, evidence)? What themes or topics come up in the text of the prompt? Our handout on understanding writing assignments can help you interpret your assignment. It’s also a good idea to think over what has been said about the assignment in class and to talk with your instructor if you need clarification or guidance.

What matters to instructors?

Instructors in different academic fields expect different kinds of arguments and evidence—your chemistry paper might include graphs, charts, statistics, and other quantitative data as evidence, whereas your English paper might include passages from a novel, examples of recurring symbols, or discussions of characterization in the novel. Consider what kinds of sources and evidence you have seen in course readings and lectures. You may wish to see whether the Writing Center has a handout regarding the specific academic field you’re working in—for example, literature , sociology , or history .

What are primary and secondary sources?

A note on terminology: many researchers distinguish between primary and secondary sources of evidence (in this case, “primary” means “first” or “original,” not “most important”). Primary sources include original documents, photographs, interviews, and so forth. Secondary sources present information that has already been processed or interpreted by someone else. For example, if you are writing a paper about the movie “The Matrix,” the movie itself, an interview with the director, and production photos could serve as primary sources of evidence. A movie review from a magazine or a collection of essays about the film would be secondary sources. Depending on the context, the same item could be either a primary or a secondary source: if I am writing about people’s relationships with animals, a collection of stories about animals might be a secondary source; if I am writing about how editors gather diverse stories into collections, the same book might now function as a primary source.

Where can I find evidence?

Here are some examples of sources of information and tips about how to use them in gathering evidence. Ask your instructor if you aren’t sure whether a certain source would be appropriate for your paper.

Print and electronic sources

Books, journals, websites, newspapers, magazines, and documentary films are some of the most common sources of evidence for academic writing. Our handout on evaluating print sources will help you choose your print sources wisely, and the library has a tutorial on evaluating both print sources and websites. A librarian can help you find sources that are appropriate for the type of assignment you are completing. Just visit the reference desk at Davis or the Undergraduate Library or chat with a librarian online (the library’s IM screen name is undergradref).

Observation

Sometimes you can directly observe the thing you are interested in, by watching, listening to, touching, tasting, or smelling it. For example, if you were asked to write about Mozart’s music, you could listen to it; if your topic was how businesses attract traffic, you might go and look at window displays at the mall.

Interviews

An interview is a good way to collect information that you can’t find through any other type of research. An interview can provide an expert’s opinion, biographical or first-hand experiences, and suggestions for further research.

Surveys

Surveys allow you to find out some of what a group of people thinks about a topic. Designing an effective survey and interpreting the data you get can be challenging, so it’s a good idea to check with your instructor before creating or administering a survey.

Experiments

Experimental data serve as the primary form of scientific evidence. For scientific experiments, you should follow the specific guidelines of the discipline you are studying. For writing in other fields, more informal experiments might be acceptable as evidence. For example, if you want to prove that food choices in a cafeteria are affected by gender norms, you might ask classmates to undermine those norms on purpose and observe how others react. What would happen if a football player were eating dinner with his teammates and he brought a small salad and diet drink to the table, all the while murmuring about his waistline and wondering how many fat grams the salad dressing contained?

Personal experience

Using your own experiences can be a powerful way to appeal to your readers. You should, however, use personal experience only when it is appropriate to your topic, your writing goals, and your audience. Personal experience should not be your only form of evidence in most papers, and some disciplines frown on using personal experience at all. For example, a story about the microscope you received as a Christmas gift when you were nine years old is probably not applicable to your biology lab report.

Using evidence in an argument

Does evidence speak for itself?

Absolutely not. After you introduce evidence into your writing, you must say why and how this evidence supports your argument. In other words, you have to explain the significance of the evidence and its function in your paper. What turns a fact or piece of information into evidence is the connection it has with a larger claim or argument: evidence is always evidence for or against something, and you have to make that link clear.

As writers, we sometimes assume that our readers already know what we are talking about; we may be wary of elaborating too much because we think the point is obvious. But readers can’t read our minds: although they may be familiar with many of the ideas we are discussing, they don’t know what we are trying to do with those ideas unless we indicate it through explanations, organization, transitions, and so forth. Try to spell out the connections that you were making in your mind when you chose your evidence, decided where to place it in your paper, and drew conclusions based on it. Remember, you can always cut prose from your paper later if you decide that you are stating the obvious.

Here are some questions you can ask yourself about a particular bit of evidence:

  • OK, I’ve just stated this point, but so what? Why is it interesting? Why should anyone care?
  • What does this information imply?
  • What are the consequences of thinking this way or looking at a problem this way?
  • I’ve just described what something is like or how I see it, but why is it like that?
  • I’ve just said that something happens—so how does it happen? How does it come to be the way it is?
  • Why is this information important? Why does it matter?
  • How is this idea related to my thesis? What connections exist between them? Does it support my thesis? If so, how does it do that?
  • Can I give an example to illustrate this point?

Answering these questions may help you explain how your evidence is related to your overall argument.

How can I incorporate evidence into my paper?

There are many ways to present your evidence. Often, your evidence will be included as text in the body of your paper, as a quotation, paraphrase, or summary. Sometimes you might include graphs, charts, or tables; excerpts from an interview; or photographs or illustrations with accompanying captions.

Quotations

When you quote, you are reproducing another writer’s words exactly as they appear on the page. Here are some tips to help you decide when to use quotations:

  • Quote if you can’t say it any better and the author’s words are particularly brilliant, witty, edgy, distinctive, a good illustration of a point you’re making, or otherwise interesting.
  • Quote if you are using a particularly authoritative source and you need the author’s expertise to back up your point.
  • Quote if you are analyzing diction, tone, or a writer’s use of a specific word or phrase.
  • Quote if you are taking a position that relies on the reader’s understanding exactly what another writer says about the topic.

Be sure to introduce each quotation you use, and always cite your sources. See our handout on quotations for more details on when to quote and how to format quotations.

Like all pieces of evidence, a quotation can’t speak for itself. If you end a paragraph with a quotation, that may be a sign that you have neglected to discuss the importance of the quotation in terms of your argument. It’s important to avoid “plop quotations,” that is, quotations that are just dropped into your paper without any introduction, discussion, or follow-up.

Paraphrasing

When you paraphrase, you take a specific section of a text and put it into your own words. Putting it into your own words doesn’t mean just changing or rearranging a few of the author’s words: to paraphrase well and avoid plagiarism, try setting your source aside and restating the sentence or paragraph you have just read, as though you were describing it to another person. Paraphrasing is different from summary because a paraphrase focuses on a particular, fairly short bit of text (like a phrase, sentence, or paragraph). You’ll need to indicate when you are paraphrasing someone else’s text by citing your source correctly, just as you would with a quotation.

When might you want to paraphrase?

  • Paraphrase when you want to introduce a writer’s position, but their original words aren’t special enough to quote.
  • Paraphrase when you are supporting a particular point and need to draw on a certain place in a text that supports your point—for example, when one paragraph in a source is especially relevant.
  • Paraphrase when you want to present a writer’s view on a topic that differs from your position or that of another writer; you can then refute the writer’s specific points in your own words after you paraphrase.
  • Paraphrase when you want to comment on a particular example that another writer uses.
  • Paraphrase when you need to present information that’s unlikely to be questioned.

Summary

When you summarize, you are offering an overview of an entire text, or at least a lengthy section of a text. Summary is useful when you are providing background information, grounding your own argument, or mentioning a source as a counter-argument. A summary is less nuanced than paraphrased material. It can be the most effective way to incorporate a large number of sources when you don’t have a lot of space. When you are summarizing someone else’s argument or ideas, be sure this is clear to the reader and cite your source appropriately.

Statistics, data, charts, graphs, photographs, illustrations

Sometimes the best evidence for your argument is a hard fact or visual representation of a fact. This type of evidence can be a solid backbone for your argument, but you still need to create context for your reader and draw the connections you want them to make. Remember that statistics, data, charts, graphs, photographs, and illustrations are all open to interpretation. Guide the reader through the interpretation process. Again, always cite the origin of your evidence if you didn’t produce the material you are using yourself.

Do I need more evidence?

Let’s say that you’ve identified some appropriate sources, found some evidence, explained to the reader how it fits into your overall argument, incorporated it into your draft effectively, and cited your sources. How do you tell whether you’ve got enough evidence and whether it’s working well in the service of a strong argument or analysis? Here are some techniques you can use to review your draft and assess your use of evidence.

Make a reverse outline

A reverse outline is a great technique for helping you see how each paragraph contributes to proving your thesis. When you make a reverse outline, you record the main ideas in each paragraph in a shorter (outline-like) form so that you can see at a glance what is in your paper. The reverse outline is helpful in at least three ways. First, it lets you see where you have dealt with too many topics in one paragraph (in general, you should have one main idea per paragraph). Second, the reverse outline can help you see where you need more evidence to prove your point or more analysis of that evidence. Third, the reverse outline can help you write your topic sentences: once you have decided what you want each paragraph to be about, you can write topic sentences that explain the topics of the paragraphs and state the relationship of each topic to the overall thesis of the paper.

For tips on making a reverse outline, see our handout on organization .

Color code your paper

You will need three highlighters or colored pencils for this exercise. Use one color to highlight general assertions. These will typically be the topic sentences in your paper. Next, use another color to highlight the specific evidence you provide for each assertion (including quotations, paraphrased or summarized material, statistics, examples, and your own ideas). Lastly, use another color to highlight analysis of your evidence. Which assertions are key to your overall argument? Which ones are especially contestable? How much evidence do you have for each assertion? How much analysis? In general, you should have at least as much analysis as you do evidence, or your paper runs the risk of being more summary than argument. The more controversial an assertion is, the more evidence you may need to provide in order to persuade your reader.

Play devil’s advocate, act like a child, or doubt everything

This technique may be easiest to use with a partner. Ask your friend to take on one of the roles above, then read your paper aloud to them. After each section, pause and let your friend interrogate you. If your friend is playing devil’s advocate, they will always take the opposing viewpoint and force you to keep defending yourself. If your friend is acting like a child, they will question every sentence, even seemingly self-explanatory ones. If your friend is a doubter, they won’t believe anything you say. Justifying your position verbally or explaining yourself will force you to strengthen the evidence in your paper. If you already have enough evidence but haven’t connected it clearly enough to your main argument, explaining to your friend how the evidence is relevant or what it proves may help you to do so.

Common questions and additional resources

  • I have a general topic in mind; how can I develop it so I’ll know what evidence I need? And how can I get ideas for more evidence? See our handout on brainstorming .
  • Who can help me find evidence on my topic? Check out UNC Libraries .
  • I’m writing for a specific purpose; how can I tell what kind of evidence my audience wants? See our handouts on audience , writing for specific disciplines , and particular writing assignments .
  • How should I read materials to gather evidence? See our handout on reading to write .
  • How can I make a good argument? Check out our handouts on argument and thesis statements .
  • How do I tell if my paragraphs and my paper are well-organized? Review our handouts on paragraph development , transitions , and reorganizing drafts .
  • How do I quote my sources and incorporate those quotes into my text? Our handouts on quotations and avoiding plagiarism offer useful tips.
  • How do I cite my evidence? See the UNC Libraries citation tutorial .
  • I think that I’m giving evidence, but my instructor says I’m using too much summary. How can I tell? Check out our handout on using summary wisely.
  • I want to use personal experience as evidence, but can I say “I”? We have a handout on when to use “I.”

Works consulted

We consulted these works while writing this handout. This is not a comprehensive list of resources on the handout’s topic, and we encourage you to do your own research to find additional publications. Please do not use this list as a model for the format of your own reference list, as it may not match the citation style you are using. For guidance on formatting citations, please see the UNC Libraries citation tutorial . We revise these tips periodically and welcome feedback.

Lunsford, Andrea A., and John J. Ruszkiewicz. 2016. Everything’s an Argument , 7th ed. Boston: Bedford/St Martin’s.

Miller, Richard E., and Kurt Spellmeyer. 2016. The New Humanities Reader , 5th ed. Boston: Cengage.

University of Maryland. 2019. “Research Using Primary Sources.” Research Guides. Last updated October 28, 2019. https://lib.guides.umd.edu/researchusingprimarysources .

You may reproduce it for non-commercial use if you use the entire handout and attribute the source: The Writing Center, University of North Carolina at Chapel Hill

University of Texas Libraries
Systematic Reviews & Evidence Synthesis Methods

Types of reviews


Not sure what type of review you want to conduct?

There are many types of reviews (narrative reviews, scoping reviews, systematic reviews, integrative reviews, umbrella reviews, rapid reviews, and others), and it's not always straightforward to choose which type of review to conduct. The Review Navigator tools below ask a series of questions to guide you through the various kinds of reviews and to help you determine the best choice for your research needs.

  • Which review is right for you? (Univ. of Manitoba)
  • What type of review is right for you? (Cornell)
  • Review Ready Reckoner - Assessment Tool (RRRsAT)
  • A typology of reviews: an analysis of 14 review types and associated methodologies, by Grant & Booth
  • Meeting the review family: exploring review types and associated information retrieval requirements | Health Info Libr J, 2019

Reproduced from Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies . Health Info Libr J. 2009 Jun;26(2):91-108. doi: 10.1111/j.1471-1842.2009.00848.x


Purdue Online Writing Lab (Purdue OWL), College of Liberal Arts

Using Research and Evidence


What type of evidence should I use?

There are two types of evidence.

First-hand research is research you have conducted yourself, such as interviews, experiments, surveys, or personal experience and anecdotes.

Second-hand research is research you are getting from various texts that have been supplied and compiled by others, such as books, periodicals, and Web sites.

Regardless of what type of sources you use, they must be credible. In other words, your sources must be reliable, accurate, and trustworthy.

How do I know if a source is credible?

You can ask the following questions to determine if a source is credible.

Who is the author? Credible sources are written by authors respected in their fields of study. Responsible, credible authors will cite their sources so that you can check the accuracy of and support for what they've written. (This is also a good way to find more sources for your own research.)

How recent is the source? The choice to seek recent sources depends on your topic. While sources on the American Civil War may be decades old and still contain accurate information, sources on information technologies, or other areas that are experiencing rapid changes, need to be much more current.

What is the author's purpose? When deciding which sources to use, you should take the purpose or point of view of the author into consideration. Is the author presenting a neutral, objective view of a topic? Or is the author advocating one specific view of a topic? Who is funding the research or writing of this source? A source written from a particular point of view may be credible; however, you need to be careful that your sources don't limit your coverage of a topic to one side of a debate.

What type of sources does your audience value? If you are writing for a professional or academic audience, they may value peer-reviewed journals as the most credible sources of information. If you are writing for a group of residents in your hometown, they might be more comfortable with mainstream sources, such as Time or Newsweek . A younger audience may be more accepting of information found on the Internet than an older audience might be.

Be especially careful when evaluating Internet sources! Never use Web sites where an author cannot be determined, unless the site is associated with a reputable institution such as a respected university, a credible media outlet, government program or department, or well-known non-governmental organizations. Beware of using sites like Wikipedia , which are collaboratively developed by users. Because anyone can add or change content, the validity of information on such sites may not meet the standards for academic research.



Types of Evidence

Recognizing the common types of evidence an argument uses helps in two ways:

  • It can help you avoid getting “lost” in the words; if you’re reading actively and recognizing what type of evidence you’re looking at, then you’re more likely to stay focused.
  • Different types of evidence are often associated with specific types of assumptions or flaws, so if a question presents a classic evidence structure, you may be able to find the answer more quickly.

Common Evidence Types

Examples as evidence

  • [Paola is the best athlete in the state.] After all, Paola has won medals in 8 different Olympic sports.
  • Paola beat last year's decathlon state champion on Saturday, so [she is the best athlete in the state].

What others say

  • [Paola is the best athlete in the state.] We know this because the most highly-acclaimed sports magazine has named her as such.
  • Because the population voted Paola the Best Athlete in the state in a landslide, [it would be absurd to claim that anyone else is the best athlete in the state].

Using the past

  • [Paola is the best athlete in the state.] She must be, since she won the state championships last year, two years ago, three years ago, and four years ago.
  • [Paola is the best athlete in the state], because she won the most athletic awards. Look at Jude, who's currently the Best Chef in the State because he won the most cooking awards.

Generalizing from a Sample

  • [Paola is the best athlete in the state], because she won every local tournament in every spring sport.

Common Rebuttal Structures

Counterexamples

Alternate possibilities

Other types of argument structures

Conditional

  • Penguins win → Flyers make big mistake
  • Flyers make big mistake → coach tired
  • Friday → coach is not tired
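
To see how such conditionals combine, here is a minimal sketch in Python (the proposition names and rule table are hypothetical stand-ins for the statements above) that chains the rules forward and then applies their contrapositives:

```python
# Toy sketch of the conditional chain above:
#   Penguins win -> big mistake -> coach tired,  and  Friday -> NOT coach tired.
RULES = {
    "penguins_win": "big_mistake",   # Penguins win -> Flyers make big mistake
    "big_mistake": "coach_tired",    # big mistake -> coach tired
}

def forward_chain(fact: str) -> set[str]:
    """Everything entailed by a starting fact via the rules."""
    derived = {fact}
    while RULES.get(fact):
        fact = RULES[fact]
        derived.add(fact)
    return derived

# If the Penguins won, the chain forces "coach_tired":
assert forward_chain("penguins_win") == {"penguins_win", "big_mistake", "coach_tired"}

# On Friday the coach is NOT tired, so by contraposition nothing that
# entails "coach_tired" can hold -- in particular, the Penguins did not win.
excluded = {p for p in RULES if "coach_tired" in forward_chain(p)}
print(excluded)  # {'penguins_win', 'big_mistake'}
```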

Causation based on correlation


Determining the level of evidence: Nonresearch evidence

Affiliation: Amy Glasofer is a nurse scientist at Virtua Center for Learning in Mt. Laurel, N.J., and Ann B. Townsend is an adult NP with The NP Group, LLC.

  • PMID: 33674537
  • DOI: 10.1097/01.NURSE.0000733964.06881.23

To support evidence-based nursing practice, the authors provide guidelines for nonresearch evidence, which includes clinical practice guidelines, consensus or position statements, literature review, expert opinion, organizational experience, case reports, community standards, clinician experience, and consumer preferences. This is the third in a three-part series.

Copyright © 2021 Wolters Kluwer Health, Inc. All rights reserved.

  • Open access
  • Published: 29 April 2024

Evaluating implementation of a community-focused patient navigation intervention at an NCI-designated cancer center using RE-AIM

  • Elizabeth S. Ver Hoeve,
  • Elizabeth Calhoun,
  • Monica Hernandez,
  • Elizabeth High,
  • Julie S. Armin,
  • Leila Ali-Akbarian,
  • Michael Frithsen,
  • Wendy Andrews &
  • Heidi A. Hamann

BMC Health Services Research, volume 24, Article number: 550 (2024)

Background

Patient navigation is an evidence-based intervention that reduces cancer health disparities by directly addressing the barriers to care for underserved patients with cancer. Variability in design and integration of patient navigation programs within cancer care settings has limited this intervention’s utility. The implementation science evaluation framework RE-AIM allows quantitative and qualitative examination of effective implementation of patient navigation programs into cancer care settings.

Methods

The Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework was used to evaluate implementation of a community-focused patient navigation intervention at an NCI-designated cancer center between June 2018 and October 2021. Using a 3-month longitudinal, non-comparative measurement period, univariate and bivariate analyses were conducted to examine associations between participant-level demographics and primary (i.e., barrier reduction) and secondary (i.e., patient-reported outcomes) effectiveness outcomes. Mixed methods analyses were used to examine adoption and delivery of the intervention into the cancer center setting. Process-level analyses were used to evaluate maintenance of the intervention.

Results

Participants (n = 311) represented a largely underserved population, as defined by the National Cancer Institute, with the majority identifying as Hispanic/Latino, having a household income of $35,000 or less, and being enrolled in Medicaid. Participants were diagnosed with a variety of cancer types and most had advanced-stage cancers. Pre-post intervention analyses indicated a significant reduction from pre-intervention assessments in the average number of reported barriers, F(1, 207) = 117.62, p < .001, as well as significant increases in patient-reported physical health, t(205) = −6.004, p < .001, mental health, t(205) = −3.810, p < .001, self-efficacy, t(205) = −5.321, p < .001, and satisfaction with medical team communication, t(206) = −2.03, p = .029. Referral patterns and qualitative data supported increased adoption and integration of the intervention into the target setting, and consistent intervention delivery metrics suggested high fidelity to intervention delivery over time. Process-level data outlined a successful transition from a grant-funded community-focused patient navigation intervention to an institution-funded program.
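
As a rough sketch of the paired pre-post comparisons reported above (the scores below are simulated, not study data; SciPy's ttest_rel is the standard paired t-test of this kind):

```python
# Minimal sketch of a pre/post comparison on hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(45, 8, size=206)        # e.g., baseline physical health scores
post = pre + rng.normal(3, 5, size=206)  # average improvement of ~3 points

# ttest_rel tests mean(pre - post) = 0; improvement yields a negative t,
# matching the sign convention of the t statistics reported in the text.
t, p = stats.ttest_rel(pre, post)
print(f"t({len(pre) - 1}) = {t:.3f}, p = {p:.3g}")
```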

Conclusions

This study utilized the implementation science evaluation framework, RE-AIM, to evaluate implementation of a community-focused patient navigation program. Our analyses indicate successful implementation within a cancer care setting and provide a potential guide for other oncology settings who may be interested in implementing community-focused patient navigation programs.


Contributions to the literature

• This manuscript represents one of only a couple of manuscripts that have applied the Implementation Science Evaluation Framework, RE-AIM, to assess implementation of a patient navigation intervention within a cancer care setting.

• Operationalization of RE-AIM for evaluation of the patient navigation intervention implementation was comprehensive and closely aligned with guidance from www.re-aim.org and the RE-AIM Model Dimension Items Checklist developed by the National Cancer Institute in partnership with RE-AIM authors.

• Patient navigation is an evidence-based healthcare intervention that could benefit from guidance on effective implementation.

• Implementation evaluation metrics provided by RE-AIM support effective implementation of this evidence-based intervention into cancer care and suggest improvement in cancer health equity.

Patient navigation is an evidence-based intervention designed to reduce patients’ barriers to cancer care, strengthen patients’ adherence to cancer screening guidelines and treatment, and improve timeliness to cancer diagnostic resolution [ 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 ]. Accumulating evidence suggests that these explicated outcomes of patient navigation programs reduce cancer health disparities by reaching underserved patients directly to address their barriers to cancer care and coordination [ 13 , 14 , 15 ]. Despite widespread introduction of patient navigation programs into cancer centers throughout the United States over the last 5–10 years, national statistics on cancer health disparities remain stagnant [ 16 , 17 ] and limited data exist on the implementation effectiveness of these programs within cancer care settings [ 18 , 19 , 20 ]. Further, the majority of patient navigation programs have focused on early detection within several common cancer types (e.g., breast, colorectal, cervical), and knowledge about implementation of patient navigation programs inclusive of patients across the cancer care continuum and with diverse cancer types remains limited [ 21 ].

With the goal of eliminating health care disparities by improving cancer care access and coordination, the Commission on Cancer (CoC) required all accredited cancer centers to include patient navigation as part of their cancer care programming [ 22 ]. Although this CoC standard increased adoption of patient navigation programs across the country [ 23 ], these programs have varied substantially in design and scope [ 24 ]. Results from a recent nationwide survey suggest that nurses, social workers, and nonclinical staff currently provide patient navigation services but differ greatly in their roles and responsibilities based on clinical designation [ 25 ]. Additionally, patient navigation services differ by funding type and cancer care continuum stage, with nonclinical (i.e., lay) navigators being more likely to be grant-funded and more likely to be providing navigation services at earlier stages of the cancer care continuum [ 26 ].

Although patient navigation programs should not need to, nor be expected to, subscribe to a one-size-fits-all model because they are designed to accommodate and address the unique unmet needs of specific cancer care settings and patient populations [ 18 , 24 , 27 ], such variability in design, training, and integration of patient navigation programs into cancer care settings has presented challenges for standardized evaluations of patient navigation intervention implementation effectiveness [ 18 , 19 , 28 ]. Patient navigation was originally developed as an intervention to reduce cancer health disparities [ 14 ], and it has been shown to demonstrate strong efficacy among underserved patients with cancer [ 13 , 29 , 30 ]. Therefore, greater attention to implementation and sustainability of both the structural processes and the health equity processes associated with delivery of this evidence-based intervention is warranted [ 31 ].

This paper employs an implementation science evaluation approach [ 32 ] to assess the processes involved in effectively introducing and maintaining a community-focused patient navigation program at an NCI-designated cancer center that has a clinical affiliation with a nonprofit health care system [ 33 ] and is located in the Southwestern United States. Consistent with recent calls to utilize implementation science as a method for better addressing health equity within the context of intervention implementation evaluation [ 34 ], the development and evaluation of this community-focused patient navigation intervention’s implementation was guided by a health equity lens such that we utilized RE-AIM to guide our primary enrollment and evaluation objective to serve a medically underserved patient population. We evaluated our patient navigation program’s implementation during the period of June 2018 to October 2021, based on key domains of Reach (the representativeness of individuals participating in the intervention), Effectiveness (the impact of the intervention on specified outcomes), Adoption (the degree to which individuals within the setting utilize the intervention), Implementation (the consistency with which the intervention is delivered), and Maintenance (the extent to which the intervention becomes sustainable) [ 35 ]. To deliver maximum impact, optimal implementation research must explicitly examine both the individual-level impact of reaching primarily medically underserved patients to address barriers to care and the setting-level factors that promote long-term sustainability of patient navigation interventions. The present work describes the results of a 5-year effort to implement a community-focused patient navigation intervention into one cancer care setting using an operationalized RE-AIM framework to guide program implementation evaluation.

Study design

As part of this intervention, a lay navigator (i.e., an individual with no higher education health care degree) was hired by the research team in 2018 and trained to become a community-focused patient navigator according to best practices [ 36 ]. Consistent with data collected through an earlier needs assessment at the cancer center [ 37 ], the investigator team was interested in hiring an interventionist who was fluent in English and Spanish, had strong connections to and engagement with the local community, was familiar with a local federally qualified health center, and who was exceptionally organized and motivated to help patients with cancer. The hired interventionist (i.e., navigator) was bilingual (English and Spanish) and bicultural (United States and Mexico). The same community-focused patient navigator remained active at 1.0 FTE throughout the duration of this intervention’s implementation. A supplemental interventionist was hired at .5 FTE for approximately one-year (2018–2019) to support intervention initiation.

The patient navigator worked individually with each patient who was referred to the community-focused patient navigation intervention. Over the course of a 3-month longitudinal non-comparative (i.e., continuous enrollment, no control group) measurement period, the patient navigator helped patients identify their barriers to cancer care, provided necessary services including referrals, community resource assistance, insurance-related assistance, and clinical care coordination improvements in an effort to reduce patients’ specific barriers, and at the end of the intervention, encouraged participants to re-assess their barriers to care. All aspects of the intervention, including all participant encounters as well as all efforts made on behalf of each participant, were systematically documented within REDCap by the navigator and study team personnel. Notably, the community-focused patient navigator was able to continue working with participants who expressed ongoing need for assistance following the conclusion of the intervention, although no additional data were collected beyond the 3-month follow-up.

Data collected for the intervention included: 1) demographic and disease characteristics; 2) pre- and post-intervention barrier assessments conducted by the community-focused navigator, along with a post-intervention barrier resolution assessment conducted by two trained research study members; and 3) patient-reported outcome questionnaires administered at intervention enrollment and again 3 months post-enrollment. Although enrolled participants were contacted approximately 2 months after enrollment by the navigator to review barrier resolution efforts (the "two-month barrier assessment"), this assessment was included primarily to support implementation standardization and adherence rather than data collection. The community-focused patient navigation intervention was initially implemented in June 2018 and enrolled its final patient in October 2021.

Participant eligibility and recruitment

Any individual with a diagnosis of cancer who had established cancer care in the clinical care system was eligible for participation in the community-focused patient navigation intervention. Participation was inclusive of any cancer type, cancer stage, or point along the cancer care continuum (diagnosis, treatment, post-treatment survivorship). Although there were no eligibility criteria related to patient age, race, ethnicity, or primary language, the community-focused navigation program focused on enrolling underserved patients, and these patients often received referral priority from one of the referring clinical care teams (e.g., social worker, nurse, doctor, research coordinator, financial advisor). Once a referral was received, the navigator contacted the patient directly to introduce the study and obtain informed consent. This study received ethical approval from the Institutional Review Board at the University of Arizona (#1804483104), and informed consent was obtained from all participants. All methods were carried out in accordance with the Declaration of Helsinki.

Patient demographics and disease history

To characterize the sample of participants in the patient navigation intervention, patient-reported demographic histories were collected, including ethnicity, race, primary language, gender, age, birth country, marital status, zip code, employment, highest level of education, household income, insurance status, insurance type, home ownership, and housing insecurity. Consistent with the definition provided by the NCI, a demographic category labeled "underserved" was developed to represent any enrolled participant who was an ethnic/racial minority and/or insured through Medicaid [38]. Cancer history, including type of cancer, stage at diagnosis, and status on the cancer care continuum, was collected via electronic medical record review by a trained study team member.

Patient-reported barriers to cancer care

At the time of intervention enrollment, the community-focused patient navigator conducted a barriers assessment to identify each patient's specific barriers to cancer care. The barriers assessment was based on the one used in the NCI Patient Navigation Research Program [39] and contained 88 possible barriers to cancer care. Identified patient-reported barriers to cancer care were documented for each patient by the community-focused patient navigator in REDCap.

At the 3-month post-intervention time point, all efforts taken by the patient navigator to resolve each participant's barriers to cancer care (as documented in REDCap) were systematically reviewed and evaluated by two trained research team members. Reviewers used three categories to assess each barrier: Not Addressed (i.e., the navigator either did not attempt to work on a solution to the barrier or attempted but was never able to identify a solution such as a referral or community resource); Addressed (i.e., the navigator was able to provide the patient with a solution such as a referral or a resource for a specific barrier); or Completely Addressed (i.e., the navigator provided such a solution and documentation within REDCap indicated that the patient was no longer experiencing the particular barrier). A 'percent barriers addressed' score was calculated for each patient using the formula: (# Addressed + # Completely Addressed) / (Total # Pre-Intervention Barriers) × 100.
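
For concreteness, the following is a minimal sketch of this calculation, assuming each patient's barriers are stored as a simple mapping from barrier to review category; the example patient and barrier labels are hypothetical:

```python
# A minimal sketch of the 'percent barriers addressed' score described above.
# The example record is hypothetical; the study's data were stored in REDCap.
ADDRESSED = {"Addressed", "Completely Addressed"}

def percent_barriers_addressed(barrier_reviews: dict[str, str]) -> float:
    """barrier_reviews maps each pre-intervention barrier to its review category."""
    if not barrier_reviews:
        raise ValueError("patient must have at least one pre-intervention barrier")
    resolved = sum(1 for status in barrier_reviews.values() if status in ADDRESSED)
    return 100 * resolved / len(barrier_reviews)

# Hypothetical patient with four pre-intervention barriers:
example = {
    "no health insurance": "Completely Addressed",
    "can't afford utilities": "Addressed",
    "needs vision care": "Addressed",
    "feels depressed": "Not Addressed",
}
print(percent_barriers_addressed(example))  # 75.0
```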

Patient-reported outcomes

Global health

PROMIS Global Health (v1.2) (PROMIS Health Organization (PHO), 2018) is a health-related quality of life assessment that is part of the larger set of PROMIS (Patient-Reported Outcomes Measurement Information System) instruments [40, 41] funded by the National Institutes of Health (NIH) and normalized to the U.S. adult population. PROMIS Global Health assesses an individual's physical, mental, and social health, and consists of two primary subscales: Global Physical Health (4 items: Global03, Global06, Global07rc, and Global08r) and Global Mental Health (4 items: Global02, Global04, Global05, and Global10r). Raw scores were calculated based on the PROMIS Scoring Manual and then converted to T-scores using the PROMIS T-Score Tables [42]. Each item was rated on a 5-point Likert scale, with higher scores indicating a greater amount of the measured domain.
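
As an illustration of this scoring flow only (not the published PROMIS procedure), the sketch below sums four item responses into a raw subscale score and maps it to a T-score through a lookup table; every table value here is a placeholder, not a value from the PROMIS T-Score Tables:

```python
# Sketch of raw-score-to-T-score conversion; placeholder table values only.
# Real conversions must use the published PROMIS T-Score Tables [42].
PLACEHOLDER_T_TABLE = {raw: 20.0 + 2.5 * (raw - 4) for raw in range(4, 21)}  # illustrative

def score_subscale(item_responses: list[int], table: dict[int, float]) -> float:
    """Sum 4 item responses (each 1-5, after any required recoding) and look up the T-score."""
    raw = sum(item_responses)
    if raw not in table:
        raise ValueError(f"raw score {raw} outside table range")
    return table[raw]

print(score_subscale([4, 3, 4, 4], PLACEHOLDER_T_TABLE))  # 47.5 with these placeholder values
```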

Self-efficacy

PROMIS General Self-Efficacy-Short Form 4a v1.0 [ 43 ] is a patient-reported assessment of one’s ability to successfully perform specific tasks or behaviors. This assessment contains four items that ask participants to rate, on a 5-point Likert scale, their levels of self-confidence in performing various tasks, where higher scores indicate a higher level of self-efficacy.

Patient satisfaction with medical services

Patient-reported satisfaction with medical services was evaluated using the Patient Satisfaction Questionnaire-18 (PSQ-18) [44], which contains 18 items organized into seven subscale domains: 1) general satisfaction, 2) technical quality, 3) interpersonal manner, 4) communication, 5) financial aspects, 6) time spent with doctor, and 7) accessibility and convenience. Each item was rated on a 5-point Likert scale (strongly agree, agree, uncertain, disagree, strongly disagree), where strongly agree (i.e., '5') indicated higher patient-reported satisfaction with medical services.

Patient satisfaction with patient navigation program

Patient-reported satisfaction with the patient navigator was evaluated using the Patient Satisfaction with Navigator Interpersonal Relationship (PSN-I) questionnaire [ 45 ]. Nine items assessed the extent to which the navigator spent sufficient time with the patient, made the patient feel comfortable, was dependable, was respectful of the patient, listened to the patient, was easy to communicate with, cared about the patient’s well-being, worked to problem-solve patient’s barriers to care, and was readily accessible. Each item was assessed with a 5-point Likert scale in which higher PSN-I total scores indicated higher patient-reported satisfaction with their interpersonal relationship with the patient navigator.

Data analyses using RE-AIM framework

The RE-AIM framework dimensions, definitions, operationalizations, and data sources used in the evaluation of the implementation of this patient navigation intervention are outlined in Table  1 and are briefly described below. RE-AIM measurement is closely aligned with the RE-AIM Model Dimension Items Checklist [ 46 ], and data are presented in a manner consistent with the recommendations designed to systematically evaluate each dimension of the intervention.

RE-AIM: reach (R)

The ‘Reach’ metric assessed the extent to which patients who participated in the community-focused patient navigation intervention were representative of the population of medically underserved patients within the designated catchment area of the cancer center [47, 48]. Descriptive statistics were used to assess representativeness in terms of (a) the number of individuals who agreed to participate in the intervention relative to the total number of patients with cancer seen for clinical care, and (b) the proportion of the catchment area's underserved patient population that the intervention enrolled. Demographic and disease characteristics of enrolled participants were further analyzed to ensure overall comparability among participants who (a) consented to the intervention (n = 311), (b) completed the 3-month intervention (n = 255), and (c) completed the 3-month intervention as well as the post-intervention survey (n = 207). Consistent with the definition provided by the NCI, "underserved" patients were identified as individuals representing an ethnic/racial minority and/or insured through Medicaid [38].

RE-AIM: effectiveness (E)

Univariate and bivariate analyses were conducted to assess the impact of the intervention on specified outcomes; specifically, the extent to which the community-focused patient navigation intervention produced the positive results obtained in previous patient navigation efficacy studies. Analyses of primary outcomes compared counts of patients' pre-intervention and post-intervention barriers, with a focus on the overall robustness of barrier reduction at the participant level, using repeated-measures ANOVA. Analyses of secondary outcomes compared pre-post patient-reported outcomes of global physical health and global mental health (PROMIS), self-efficacy (PROMIS), patient satisfaction with medical services (PSQ-18), and patient satisfaction with the navigator (PSN-I) using paired t-tests, and examined the robustness of these comparisons across participant subgroups. All analyses were conducted using the sample of patients who completed the intervention as well as the post-intervention questionnaires (n = 207).

RE-AIM: adoption (A)

The Adoption component of the evaluation assessed the extent to which the community-focused patient navigation program was utilized among individuals and clinical teams within the cancer center: (a) descriptive analyses were used to evaluate the number of clinical referrals made by cancer center staff over the course of the intervention. The 39-month period of active enrollment was split evenly into three time periods (Year 1, Year 2, and Year 3). Percent change in the total number of referrals was evaluated over time and organized by staff specialty to evaluate levels of utilization, and the cumulative referral count was graphed to descriptively assess adoption over time; and (b) a qualitative mixed-methods survey designed to assess staff perspectives on intervention uptake and acceptability was distributed to staff who had been invited to make referrals to the intervention.

RE-AIM: implementation (I)

Program implementation was evaluated through a multidimensional assessment that included (a) univariate analyses to assess the fidelity of implementation, specifically the timeliness between referral and first patient contact, calculated as the percentage of intervention deliveries that adhered to this program's expectation of three or fewer days; (b) process-level analyses of standardized documentation and technology systems (e.g., REDCap and manual EMR data checks), including necessary adaptations made during the course of the 3-year implementation, to address consistency of implementation across time; and (c) retrospective quantification of intervention costs.

RE-AIM: maintenance (M)

The extent to which the program was able to become successfully maintained within the cancer care setting was evaluated through (a) direct assessment of whether the intervention was still ongoing 6 months post-study funding [ 46 ], and (b) process-level analyses of efforts taken, at the research team and setting levels, to identify adaptations made following completion of the intervention to ensure sustainability of the community-focused patient navigation intervention.

Statistical analyses

Data stored in REDCap were exported in a de-identified format and imported into SPSS and R, where all statistical analyses were conducted. Patient-reported demographic and disease characteristics were analyzed using descriptive statistics associated with the Reach aim. Pre- and post-intervention barriers data and patient-reported outcomes were analyzed using univariate, bivariate, and repeated-measures analyses in SPSS to address the Effectiveness aim. Clinical referral data were descriptively evaluated based on percent increase/decrease across the 3 years of the intervention, and qualitative data were analyzed by content analysis to determine the extent to which the Adoption aim was met. Analyses of the fidelity of the intervention, including timeliness, consistency, and costs, were documented as part of the Implementation aim. Finally, process-level descriptions, including summaries of weekly team meetings and direct communications with the Principal Investigator (Hamann) of the community-focused patient navigation intervention, were used to assess the Maintenance aim. All graphics were produced in R.

Reach: participant demographic and disease characteristics

Patients (n = 311) were enrolled regardless of cancer type or stage and were excluded only if they did not have a definitive diagnosis of cancer or if they died prior to first contact with the study team (n = 3; see CONSORT; Fig. 1). Descriptive analyses of the demographic characteristics of enrolled participants reflected a largely underserved population. The majority of participants self-identified as Hispanic/Latino, reported being enrolled in Medicaid, and reported household incomes of less than $35,000 per year (Table 2). A substantial minority of patients indicated Spanish to be their primary language (41.2%) and reported experiencing housing insecurity (i.e., "worry or concern about not having stable housing") within the past 6 months (41.2%). Based on zip code analysis, approximately 12% of participants lived in areas designated by HRSA as 'rural' [49]. Examination of disease characteristics indicated that enrolled patients were most commonly diagnosed with gastrointestinal cancer and with late-stage (Stage III or Stage IV) disease; approximately half had only recently been diagnosed with cancer or recently initiated cancer treatment (Table 2). Analyses of variance comparing demographic and disease characteristics among participants who enrolled in the patient navigation intervention (n = 311), participants who completed the intervention (n = 255), and participants who completed both the intervention and the post-intervention survey (n = 207) indicated overall comparability, with no statistically significant differences (all p > .05) (Table 2). Across the intervention, there was a 10% attrition rate due to mortality (see CONSORT; Fig. 1).

Figure 1. Community-Focused Patient Navigation CONSORT Diagram

Reach: participant representativeness

For each of the 3 years of intervention implementation, cancer registry data were summarized to identify: (1) the size of the total patient population seen at the cancer center (n = 1943 patients in Year 1; n = 1937 in Year 2; and n = 2225 in Year 3); (2) the number of patients seen at the cancer center who met criteria for being 'underserved' (i.e., uninsured, on Medicaid, and/or Hispanic, Black, American Indian/Alaska Native, Native Hawaiian, or Multiracial) (n = 375 patients in Year 1; n = 314 in Year 2; and n = 395 in Year 3); and (3) the percentage of the population of interest (i.e., underserved patients at the cancer center) that the community-focused patient navigation intervention was able to reach (19.7% of patients in Year 1; 30.6% in Year 2; and 21.3% in Year 3) (Table 3). These comparative ratios indicate that 82% of enrolled participants in the community-focused patient navigation intervention were 'underserved,' and that the intervention reached approximately 23% of the total population of interest (i.e., the total number of underserved patients seen at the cancer center) over the 3 years of enrollment.
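
For illustration, the overall reach figure can be reproduced from the yearly registry counts and reach percentages reported above; a short worked computation, using only the numbers in this paragraph:

```python
# Worked arithmetic for overall reach, using the registry counts and yearly
# reach percentages reported above.
underserved_seen = {"Year 1": 375, "Year 2": 314, "Year 3": 395}
yearly_reach_pct = {"Year 1": 19.7, "Year 2": 30.6, "Year 3": 21.3}

total_underserved = sum(underserved_seen.values())  # 1084 underserved patients over 3 years
reached = sum(underserved_seen[y] * yearly_reach_pct[y] / 100 for y in underserved_seen)
print(round(reached))                               # 254 underserved patients enrolled
print(round(100 * reached / total_underserved, 1))  # 23.4, i.e., the ~23% reported above
```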

Effectiveness: reducing barriers to cancer care

The primary effectiveness outcome of the community-focused patient navigation intervention was operationalized as the percentage of pre-intervention barriers that were adequately addressed, per patient, by the community-focused navigator over the course of the 3-month intervention. The number of barriers reported at pre-intervention did not statistically differ between those who completed the intervention (n = 207) and those who did not (n = 104); t(310) = −1.33, p = .159. The average number of pre-intervention patient-reported barriers for the 207 participants who completed the 3-month intervention and the post-intervention survey was 3.54 (range: 1–10). Examination of endorsement frequency indicated that the 10 most prevalent patient-reported barriers to cancer care (in descending order of frequency) were: (1) can't afford utilities, (2) needs vision care, (3) can't afford housing, (4) public transportation not readily available, (5) no health insurance, (6) can't afford co-pay/deductible, (7) no primary care provider, (8) needs hearing test, (9) feels depressed, and (10) feels overwhelmed by paperwork. At the time of the post-intervention assessment, the average number of unresolved barriers was 0.94 (range: 0–7), with an average of 74.7% of each patient's pre-intervention barriers being either adequately addressed (i.e., a resource was provided, although the barrier may not have been completely resolved) or fully resolved (i.e., a resource was provided, and the barrier was resolved) (Fig. 2). The number of reported pre-intervention barriers did not differ by participant age, cancer stage, or status along the cancer care continuum, but did differ based on intervention year, r(207) = −.227, p < .001. Therefore, a repeated-measures ANOVA accounting for intervention year (Year 1, Year 2, or Year 3) as a covariate was conducted; it indicated that the number of barriers significantly decreased between initiation (i.e., pre-intervention) and completion (i.e., post-intervention) of the community-focused patient navigation intervention, F(1,207) = 117.62, p < .001, with a large effect size (partial η² = .365). The interaction between intervention year and barrier count was not significant (p = .061). The two most common actions taken by the navigator to address a patient-reported barrier were providing a resource to the patient and contacting a resource on behalf of the patient.
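
The repeated-measures ANOVA above was conducted in SPSS. As a rough, hedged equivalent only (not the study's exact model), a mixed-effects regression with a random intercept per participant can estimate the pre-to-post change in barrier counts while adjusting for intervention year; the file and column names below are assumptions:

```python
# A sketch, not the study's SPSS model: pre/post barrier counts in long format
# with assumed columns pid (participant id), time ('pre' or 'post'),
# year (1, 2, or 3), and barriers (count). A random intercept per participant
# accounts for the repeated measurement.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("barriers_long.csv")  # hypothetical export from REDCap
model = smf.mixedlm("barriers ~ C(time, Treatment('pre')) + year",
                    data=df, groups="pid").fit()
print(model.summary())  # the time coefficient estimates the pre-to-post change
```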

Figure 2. Primary Effectiveness Outcome: Average Barrier Count Per Participant at Pre-Intervention and Post-Intervention. At pre-intervention, participants reported, on average, 3.44 barriers to cancer care (dark blue). At post-intervention, participants had, on average, 0.94 unresolved or unaddressed barriers to cancer care (light blue)

Effectiveness: improvement in patient-reported outcomes

To assess the impact of the community-focused patient navigation intervention on patient-reported outcomes, paired-samples t-tests were conducted on patients' pre-intervention and post-intervention questionnaires. As indicated in Fig. 3, participants (n = 207) exhibited significant improvement in their global mental health after completing the intervention (M = 45.93, SD = 5.5) compared to before the intervention (M = 42.64, SD = 9.4); this improvement, −3.3, 95% CI [−5.0, −1.6], was statistically significant, t(205) = −3.810, p < .001; d = −.265. Similarly, participants exhibited significant improvement in their global physical health after completing the intervention (M = 44.3, SD = 6.1) compared to before the intervention (M = 40.7, SD = 9.0); this improvement, −3.7, 95% CI [−4.9, −2.5], was statistically significant, t(205) = −6.004, p < .001; d = −.418. There was also significant improvement in patient-reported self-efficacy, such that participants' scores after completing the intervention (M = 50.55, SD = 12.23) were higher than their scores before the intervention (M = 45.38, SD = 13.19); this improvement, −5.16, 95% CI [−7.1, −3.3], was statistically significant, t(205) = −5.321, p < .001; d = −.371.
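
For illustration, a minimal sketch of these paired-samples comparisons, using made-up score vectors in place of the study data; note that for paired data, Cohen's d equals the t statistic divided by the square root of the sample size (e.g., 3.810/√206 ≈ .265 for global mental health):

```python
# Sketch of the paired t-tests and effect sizes reported above; the score
# vectors are hypothetical stand-ins for participants' pre/post T-scores.
import numpy as np
from scipy import stats

pre = np.array([41.0, 39.5, 48.2, 44.1, 37.6])   # hypothetical pre-intervention scores
post = np.array([45.2, 42.0, 50.1, 47.3, 41.8])  # hypothetical post-intervention scores

t, p = stats.ttest_rel(pre, post)
diff = pre - post
d = diff.mean() / diff.std(ddof=1)  # Cohen's d for paired data (= t / sqrt(n))
print(f"t = {t:.3f}, p = {p:.4f}, d = {d:.3f}")
```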

Figure 3. Secondary Effectiveness Outcomes: Patient-Reported Outcomes at Pre-Intervention and Post-Intervention. Participants demonstrated significant improvement in global mental health between pre- and post-intervention, t(205) = −3.810, p < .001; d = −.265; a significant increase in global physical health, t(205) = −6.004, p < .001; d = −.418; and a significant increase in self-efficacy, t(205) = −5.321, p < .001; d = −.371

Following the intervention, participants (n = 207) also demonstrated overall increases in their satisfaction with their medical services, including significantly greater general satisfaction, greater satisfaction with communication with their medical teams, improvement in financial aspects of their medical care, more time spent with their doctors, and greater accessibility and convenience related to their medical care (Table 4). Following completion of the intervention, patients also reported strong satisfaction with the community-focused patient navigator, the navigator's efforts to resolve their barriers, and the interpersonal relationships they established over the course of the intervention, as evidenced by an average score of 40.03 (SD = 6.2) on the Patient Satisfaction with Navigator Interpersonal Relationship scale, a score consistent with previous interventions that included effective navigators [45].

Adoption: quantitative assessment of staff-level engagement in intervention

Prior to implementing the community-focused patient navigation intervention, the research team evaluated the existing clinical flow of supportive care referrals at the cancer center. Informal communication with the social work team, a palliative care physician, and a manager of the nurse navigators revealed that nurse navigators were typically the first members of the clinical care team to identify patients' barriers to cancer care, in advance of their initial appointments with their oncologists. Following treatment initiation, and at further points along the cancer care continuum, social workers tended to receive the majority of referrals for assistance with patients' barriers to care and requests for supportive care services. Based on this preliminary assessment, our research team invited members of all clinical care teams to participate in the patient navigation intervention (i.e., social workers, nurse navigators, financial counselors, physicians, and clinical research coordinators), but placed primary focus for staff engagement efforts on the social worker and nurse navigator teams.

Adoption of the intervention by staff at the cancer center was assessed quantitatively in terms of how referral rates across provider specialties (e.g., social worker, nurse navigator, financial specialist) changed over the course of the 3-year intervention. The community-focused patient navigation intervention received a total of 360 referrals across the 3-year period: 189 referrals from social workers, 108 from nurse navigators, and 63 from members of other clinical specialty teams (3 from physicians, 10 from financial specialists, and 50 from clinical research team members). Percent change in the number of referrals made to the patient navigation intervention was operationalized as a metric reflecting the strength of adoption across the course of intervention implementation. Results indicated a 42.9% increase in the number of referrals between Year 1 (105 referrals) and Year 2 (150 referrals); a 1% decrease between Year 1 and Year 3 (104 referrals); and a 30.7% decrease between Year 2 and Year 3 (Fig. 4). The decrease in referrals in Year 3 was likely due to both the onset of the COVID-19 pandemic in March 2020 and changes in intervention enrollment capacity associated with the anticipated end of the intervention in Fall 2021 (Fig. 5). Within clinical provider specialties, referrals from the social work team increased 27.8% between Year 1 (54 referrals) and Year 3 (69 referrals), and referrals from the nurse navigation team increased 68.4% between Year 1 (19 referrals) and Year 3 (32 referrals), suggesting increasingly wide adoption of the community-focused patient navigation intervention into the clinical flow for the two clinical teams primarily responsible for managing supportive care referrals.
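
Each percent-change figure above is computed relative to the earlier year in the comparison; a short worked computation using the referral counts reported in this paragraph:

```python
# Worked arithmetic for the year-over-year referral changes reported above.
yearly_referrals = {"Year 1": 105, "Year 2": 150, "Year 3": 104}

def pct_change(old: int, new: int) -> float:
    """Percent change relative to the earlier period."""
    return 100 * (new - old) / old

print(round(pct_change(yearly_referrals["Year 1"], yearly_referrals["Year 2"]), 1))  # 42.9
print(round(pct_change(yearly_referrals["Year 1"], yearly_referrals["Year 3"]), 1))  # -1.0
print(round(pct_change(yearly_referrals["Year 2"], yearly_referrals["Year 3"]), 1))  # -30.7
```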

Figure 4. Adoption: Referrals Over Time by Specialty Provider Type. The number of referrals to the community-focused patient navigation intervention was organized across the 3 years of the intervention between June 2018 and October 2021 (Year 1 = first 13 months; Year 2 = second 13 months; Year 3 = third 13 months). Referrals are organized by provider type (social worker, nurse navigator, or other provider)

Figure 5. Cumulative Referral Count by Referral Date. The number of referrals to the community-focused patient navigation intervention is represented as a cumulative count across the duration of the intervention between June 2018 and October 2021. The solid line begins with the first patient referral (June 15, 2018) and concludes with the final patient referral (September 30, 2021). The dashed line indicates the start of the COVID-19 pandemic (March 1, 2020)

Adoption: qualitative assessment of staff-level engagement in intervention

Qualitative methods were used to assess staff perceptions of the community-focused patient navigation intervention and the utility of the community-focused patient navigator's care coordination efforts within the cancer center setting. Sixteen key cancer center clinical staff members were asked to complete an anonymous feedback survey at the conclusion of the study. Eight staff members responded, including three nurse navigators, three social workers, one nurse navigator manager, and one care coordinator. Overall, clinical staff reported being "very satisfied" with the community-focused patient navigation program. The majority of respondents (n = 7) indicated that they had submitted a minimum of 20 referrals to the program, with two respondents reporting that they had submitted over 50 referrals. In free-text responses supporting their satisfaction ratings, respondents identified several adoption-related themes that facilitated the navigator's integration with cancer center staff, including the importance of bilingual delivery of services, a strong connection to and knowledge of community resources, and the ability to take initiative quickly. Barriers to intervention adoption included a mismatch between research-related goals (e.g., study enrollment requirements such as use of a patient consent form and baseline patient-reported questionnaires) and clinical expectations (i.e., being able to receive a referral and quickly start working with a patient to address barriers to care). In addition, the separation of the setting's clinical delivery organization (a nonprofit health system) from the research enterprise (a public university) [33] was identified as a barrier to intervention delivery (e.g., regulatory challenges and EMR accessibility).

Implementation: timeliness of intervention delivery

Assessing the fidelity of an intervention includes assessing the timeliness with which the intervention was delivered. The community-focused patient navigator was instructed to act quickly following receipt of a referral (modeling as close to a "warm handoff" as possible): to connect with the patient (in person or by phone), explain the intervention, answer any questions the patient might have, and invite the patient to participate. We calculated the number of days between patient referral and date of first contact and found the average to be 2.66 days (range: 1–35; SD = 5.39). Setting a maximum threshold of 3 days, we found that the community-focused patient navigator met the threshold criterion 76.4% (n = 275) of the time, suggesting that the majority of intervention initiations were delivered in a timely manner.
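
A brief sketch of how this timeliness metric can be computed (the file and column names are assumptions, not the study's actual data structure):

```python
# Sketch of the timeliness metric: mean/SD of days from referral to first
# contact, and the share of cases meeting the 3-day threshold. The file and
# column names are hypothetical.
import pandas as pd

df = pd.read_csv("referrals.csv")
days = df["days_to_first_contact"]
print(f"mean = {days.mean():.2f}, sd = {days.std():.2f}")     # study values: 2.66 and 5.39
print(f"within threshold = {100 * (days <= 3).mean():.1f}%")  # study value: 76.4%
```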

Implementation: intervention consistency

Aligned with the objective of intervention fidelity, all intervention activities were tracked through REDCap, a secure web application for managing electronic databases. These data provided an opportunity to assess the extent to which intervention procedures were adhered to consistently and in the expected manner. Consistent with the longitudinal project design, REDCap automatically delivered email reminders to the community-focused navigator and to the primary study coordinator to ensure fidelity of intervention components, including completion of baseline surveys, two-month barrier assessment check-in phone calls, and 3-month post-intervention phone calls. Although an email reminder does not guarantee follow-through, this feature contributed to consistency of implementation across participants over the course of the 3-year intervention. For example, the email reminder for the two-month barrier assessment check-in phone call was automated to be sent to the navigator and study project coordinator exactly 62 days after the patient's consent date. Of the 255 participants who completed the intervention (i.e., remained in the intervention for a total of 3 months), 237 (93%) received a two-month barrier assessment check-in phone call and/or had a documented reason for not being contacted (e.g., the participant had passed away). The average number of days between the consent date and the date the two-month barrier assessment check-in phone call was actually completed was 62.2 (range: 28–139 days; SD: 14.8 days), suggesting close adherence to the automated schedule.
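
The scheduling rule behind this automated reminder is simple date arithmetic; a minimal sketch (the function name is illustrative, not part of REDCap):

```python
# Sketch of the two-month check-in rule: a reminder is generated exactly
# 62 days after the participant's consent date.
from datetime import date, timedelta

def two_month_checkin_date(consent_date: date, offset_days: int = 62) -> date:
    """Return the scheduled date of the two-month barrier assessment call."""
    return consent_date + timedelta(days=offset_days)

print(two_month_checkin_date(date(2019, 3, 1)))  # 2019-05-02
```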

Throughout the 3-year intervention, no significant adaptations were made to the structure of intervention delivery. Minor adaptations to the protocol were considered by the study team at weekly meetings and were sometimes implemented. For example, within the first 6 months of intervention implementation, the navigator indicated that she had greater success communicating with patients outside of standard 9-to-5 work hours, so adjustments were made to her work schedule to accommodate after-hours contact.

The study’s research-clinical partnerships (See Fig.  6 ) promoted consistency of intervention implementation by maintaining regularly scheduled meetings and building consistently open lines of communication for intervention delivery. Specifically, the study team (i.e., investigators, project coordinator, and community-focused navigator) met weekly to review participant accrual and to discuss any database-related challenges. The study team and the primary clinical liaison (i.e., manager of nurse navigation program at the cancer center) met bi-weekly to discuss any clinically relevant questions (e.g., clarifying the categorization of a particular cancer type) and to review patient referral processes. The primary clinical liaison also provided direct supervision and clinical support for the community-focused navigator throughout the intervention. Biannually, the study investigators met with cancer center administrators to review intervention-related outcomes (e.g., patient-reported satisfaction with their medical care) and cost-effectiveness aspects of the intervention. These research-clinical partnerships remained strong and consistent across the course of the intervention’s implementation.

Figure 6. Clinical Research Partnership of the Community-Focused Patient Navigation Intervention. This diagram depicts the structure of clinical-research partnerships within the community-focused patient navigation intervention. Clinical-Research Collaboration refers to members of the research or clinical team who were directly involved in day-to-day operations of the intervention. Clinical Collaboration/Oversight refers to members of the clinical or health care administrative teams who were indirectly involved in the intervention

Implementation: intervention costs

Evaluation of intervention costs was modeled on an existing Patient Navigation Cost Framework [50] and summarized within the overarching categories of 1) Clinical Service Delivery Cost, 2) Maintenance of Research Infrastructure Cost, and 3) Clinical Partnership Cost (Table 5). Personnel represented the primary cost of delivering the intervention (i.e., Clinical Service Delivery Cost); space and supply costs were kept relatively minimal due to in-kind contributions made by the intervention's Principal Investigator within the cancer center.

Maintenance: status of intervention 6 months post-study funding

At the setting level, the clinical service delivery component (i.e., the 1.0 FTE lay patient navigator) of the community-focused patient navigation intervention has been maintained following the conclusion of the intervention study. Specifically, the study's community-focused patient navigator was hired by the cancer center as a full-time employee in 2021 and has remained present and integrated within the cancer center's nurse navigation program, receiving approximately 25 referrals per week. As part of this transition, the clinical service provider took over full salary support for the study's community-focused patient navigator. Notably, the clinical service provider's support did not include protected time for the community-focused patient navigator to continue maintaining the study's research infrastructure (e.g., database maintenance) for ongoing data collection. Thus, while maintenance of this intervention was achieved at the setting level, maintenance at the individual patient level cannot be adequately assessed because long-term patient-level follow-up was not conducted.

Maintenance: process-level description of program sustainability

The transition from grant funding to institutional funding by the clinical partner in December 2021 represented overall maintenance success. Planning for this objective had been initiated by the Principal Investigator (Hamann) and Co-Investigators (Calhoun, Armin, and Ali-Akbarian) in 2018 and included frequent routine contacts as well as scheduled biannual meetings between study investigators and clinical care administrators. The primary objectives of the biannual meetings were to 1) increase care coordination within the network of community clinics, and 2) establish the value of the community-focused patient navigation intervention for the clinical care system. Specifically, the study team aimed to provide administrators with evidence of the value of having a bilingual and bicultural navigator embedded within the supportive care team to support patients who speak only Spanish and to provide culturally appropriate community resources that address underserved patients' barriers to cancer care.

Once the community-focused patient navigation program was incorporated into the nurse navigator program within the cancer center clinical care team, the navigator worked with administrators and supportive care staff to distinguish her role and responsibilities from those of other established team members. For example, it was necessary to differentiate the navigator’s role in finding financial resources for patients from the role of the financial counselor, who specialized in insurance and financial payment plans. The community-focused patient navigator continued to receive referrals from clinical teams (primarily social work and nurse navigation) and continued to utilize a comprehensive barriers assessment. However, necessary adaptations to the program were also made: The patient navigator was no longer involved in consenting patients, collecting patient-reported outcome data, or tracking barrier reduction efforts and participant communications on REDCap.

Following conclusion of the intervention, the PI (Hamann) and research assistant (Ver Hoeve) conducted an informal, semi-structured interview with two lead cancer center administrators who were directly involved in the hiring of the community-focused patient navigator into the cancer center’s supportive care team. The administrators identified facilitating factors that supported the hiring process including the existence of a previously established FTE for a lay navigator at the cancer center institution, recognition that the community-focused patient navigation grant program “checked a lot of boxes” for what the cancer center was looking to improve upon, and perception that the continued presence of the patient navigator was fully supported by current supportive care staff.

Discussion

The RE-AIM model was utilized to guide the implementation evaluation of an evidence-based, community-focused patient navigation intervention, with a focus on health equity, in a new setting at an NCI-designated cancer center in the Southwestern United States. The implementation effectiveness of this three-year effort was demonstrated by the intervention's ability to reach ('R') the population of underserved patients with cancer, effectively ('E') reduce barriers to cancer care while enhancing patient-reported outcomes, gain adoption ('A') among cancer center staff, be implemented ('I') with fidelity and consistency while containing costs, and ultimately be maintained ('M') by successfully transitioning from a grant-funded intervention into an institution-funded community-focused patient navigation program.

To deliver on its maximum potential for reducing cancer health disparities, a patient navigation program must focus its efforts primarily on reaching historically medically underserved patients, who carry the greatest cancer care burden. Approximately 40% of the population within the catchment area of this NCI-designated cancer center identifies as Hispanic [51], over 25% lives in poverty, and 15% is uninsured [52]. The results of our implementation study indicate that 82% of the patients enrolled in our community-focused patient navigation intervention met the defined criteria for being 'underserved' and that, over the three-year intervention period, the intervention reached (i.e., enrolled) 23% of the total number of 'underserved' patients seen at the cancer center. Taken together, the utilization of an implementation science framework, particularly RE-AIM's 'reach' metric, facilitated an enhanced health equity focus by documenting the navigation intervention's successful efforts at reaching and enrolling a representative sample of the medically underserved population of interest.

This implementation science study achieved effectiveness outcomes similar to those reported in prior patient navigation interventions [53, 54, 55, 56]. Patients experienced significant reductions in their reported barriers to cancer care as well as significant improvements in their patient-reported outcomes, including physical health, mental health, self-efficacy, and satisfaction with their medical care. Quality of life improvements among a primarily Hispanic participant sample represent a particularly meaningful outcome in light of a recent review suggesting that Hispanic patients with cancer often experience lower quality of life (QoL) within the domains of psychological, physical, and social well-being [57]. In addition, robust improvements in patient satisfaction, including satisfaction with financial aspects of their medical care, suggest that the 3-month intervention may have been particularly useful for patients experiencing financial challenges. Notably, although some of these outcomes (e.g., patient satisfaction, quality of life) have been previously demonstrated [21], our program's inclusive approach to enrolling participants with diverse cancer stages, cancer types, and statuses along the cancer care continuum strengthens and expands the existing patient-reported outcomes literature on patient navigation in oncology settings. Taken together, this community-focused patient navigation intervention demonstrated the expected result of barrier reduction and improved patient-reported health outcomes, and also highlighted patient-reported satisfaction with medical care (e.g., communication with the doctor, time spent with the doctor, financial distress) as a potentially valuable metric associated with health care quality [56, 58].

By tracking the number of referrals as a metric for assessing staff-level adoption of the community-focused patient navigation intervention, this study found that approximately 53% of referrals were initiated by the social work team, 30% by the nurse navigation team, and the remaining 17% by other clinical care providers. An increase of 43% in the number of referrals between the first and second year of the intervention indicated substantial adoption of the intervention into the clinical care flow, particularly for the social work and nurse navigation teams, who were primarily responsible for managing supportive care referrals as part of the cancer center's standard procedures. However, a decrease in the number of referrals between Year 2 and Year 3 was also evident. This decrease coincided with the onset of the COVID-19 pandemic, at a time before vaccines were accessible, and represented not only a challenge for longitudinal research but also an indication of shifting clinical priorities, as necessity required immediate responsiveness to the pandemic by all medical personnel, a powerful shift experienced by multiple clinical trials and research teams across the United States [59]. Recruitment declined slightly during the pandemic, and the work of the community-focused patient navigator transitioned from largely in-person (at the cancer center) to entirely remote, complicating the navigator's ability to connect with patients who lacked consistent access to phone or internet services. Despite the pandemic, however, results of the mixed-methods survey confirmed that clinical care providers were clearly motivated to make referrals to this community-focused patient navigator, whom they generally viewed as someone with unique linguistic and cultural abilities who worked effectively with patients to reduce their barriers to care.

Despite previous efforts [23] and strong motivation [19] to formalize a business case for incorporating community-focused patient navigation into clinical cancer care, the sustainability of health equity-focused navigation programs remains a significant challenge [60, 61], and standardized navigation metrics on program implementation and sustainability are needed [62]. A critical process-level component of the present study was our description of intervention maintenance and of how our research-clinical partnerships envisioned, from the outset of the study, a goal of transitioning the community-focused patient navigation intervention from a fully grant-funded project into a fully institutionalized cancer center program. Key strategies that supported this transition included routinely bringing clinic administrators into sustainability discussions throughout the intervention's duration, effectively demonstrating the cultural and community value of the intervention in addressing unmet needs within the cancer center's catchment area, and providing convincing arguments on cost-effectiveness (e.g., showing benefit based on the number of patients navigated to insurance coverage who subsequently initiated care at the cancer center). Our study team was also able to demonstrate the consistency and fidelity of the intervention's delivery through diligent use of REDCap's data entry and reporting features, which also supported our program sustainability goal. Importantly, however, the transition of our fully grant-funded community-focused patient navigation intervention into a fully institutionalized cancer center program did not include protected research time for ongoing data collection, a finding that appears consistent with the results of a recent national survey that identified a need for greater data collection among institutionally funded patient navigation programs [62].

This study is not without limitations. The adoption component of RE-AIM recommends obtaining the total (i.e., absolute) number of staff within a designated setting to fully understand the percentage of staff who actually utilized the newly implemented intervention. We have provided estimated numbers of nurse navigators and social workers but were unable to quantify the exact number of individual staff members who sent referrals beyond their provider designation (e.g., social work, nurse navigation, clinical research coordinator). Further, there is no absolute number associated with other types of staff providers (e.g., doctors) because the research team loosely advertised the intervention to "any" clinical cancer care staff member who interacted with patients experiencing barriers to care. We also did not use a standardized measure to guide our assessment of intervention cost-effectiveness, although doing so might have strengthened our presentation of maintenance results and provided more guidance to other cancer centers looking to implement a community-focused patient navigation intervention. Additionally, our effectiveness outcomes (barrier reduction and patient-reported outcomes) could have been more meaningful if also associated with a clinical outcome (e.g., adherence), but this was not feasible within this study. Finally, although regular feedback was obtained from the navigator throughout the intervention (i.e., through weekly team meetings), the formal qualitative components of this implementation evaluation did not include a direct perspective from the navigator; the navigator's perspective on intervention adoption, delivery, and sustainability was not documented in a way that directly aligned with RE-AIM processes. We also received limited data from the adoption survey, and the semi-structured interviews assessing intervention sustainability involved only two administrators and were conducted only at the post-intervention time point. Taken together, the limited qualitative data within this intervention reduce the scalability and generalizability of these findings.

This study provides an important contribution to the existing patient navigation and implementation science literature. First, to the best of our knowledge, it represents one of only a couple of published reports [63, 64] that use the comprehensive implementation science evaluation framework RE-AIM to assess the implementation of a patient navigation program in a cancer care setting. By mapping our results onto the RE-AIM framework, we explicitly lay the groundwork for establishing the validity of these components of intervention implementation and strengthen the potential for building upon this type of patient navigation intervention to ultimately reduce cancer health disparities. Second, this study utilized patient navigation for patients experiencing more than 15 different types of cancer and at various stages along the cancer care continuum, including a significant proportion of patients with metastatic disease, in contrast to the bulk of the patient navigation literature, which focuses on early detection and diagnostic resolution. Third, this implementation science study included both quantitative and qualitative data to strengthen the depth of evaluation of the intervention's success. Finally, the process-level strategies we identified in discussing the maintenance of our intervention represent an important contribution to the literature as the field works to establish the business case for community-focused patient navigation funded directly by the cancer center institution. Taken together, this community-focused patient navigation intervention achieved successful implementation based on the RE-AIM metrics, demonstrated the use of implementation science to support improved health equity, and provided a description of processes to support the transferability and scalability of patient navigation programs focused on reaching medically underserved patients with cancer.

Conclusions

This research study used the implementation science evaluation framework, RE-AIM, to evaluate the implementation of a community-focused patient navigation program. The implementation effectiveness was demonstrated by the intervention's ability to reach a population of underserved patients with cancer, effectively reduce barriers to cancer care while enhancing patient-reported outcomes, gain adoption among cancer center staff, be implemented with fidelity and consistency across time, and ultimately be maintained through transition from a grant-funded intervention into an institution-funded program. Program sustainability was achieved by routinely bringing clinic administrators into sustainability discussions throughout the intervention's duration, by effectively demonstrating the cultural and community value of the intervention in addressing the unmet needs within the cancer center's catchment area, and by demonstrating cost-effectiveness. These analyses indicate successful program implementation within a cancer care setting and lay the groundwork for establishing a standardized evaluation process for introducing and maintaining patient navigation programs focused on reaching and supporting underserved patients with cancer.

Availability of data and material

The datasets analyzed during the current study are available from the corresponding author upon reasonable request.

Abbreviations

CoC: Commission on Cancer

NCI: National Cancer Institute

RE-AIM: Reach, Effectiveness, Adoption, Implementation, and Maintenance

FTE: Full-time equivalent

NIH: National Institutes of Health

EMR: Electronic medical record

QoL: Quality of life

References

Basu M, Linebarger J, Gabram SGA, Patterson SG, Amin M, Ward KC. The effect of nurse navigation on timeliness of breast cancer care at an academic comprehensive cancer center. Cancer [Internet]. 2013;119(14) [cited 2023 Jan 28]; Available from: https://pubmed.ncbi.nlm.nih.gov/23585059/ .

Battaglia TA, Roloff K, Posner MA, Freund KM. Improving follow-up to abnormal breast cancer screening in an urban population: a patient navigation intervention. Cancer [Internet]. 2007;109:359–67 [cited 2021 Mar 29]. Available from: https://pubmed.ncbi.nlm.nih.gov/17123275/ .

Battaglia TA, Freund KM, Haas JS, Casanova N, Bak S, Cabral H, et al. Translating research into practice: Protocol for a community-engaged, stepped wedge randomized trial to reduce disparities in breast cancer treatment through a regional patient navigation collaborative. Contemp Clin Trials. 2020;93 [cited 2021 Mar 29];Available from: https://pubmed.ncbi.nlm.nih.gov/32305457/ .

Bensink ME, Ramsey SD, Battaglia T, Fiscella K, Hurd TC, McKoy JM, et al. Costs and outcomes evaluation of patient navigation after abnormal cancer screening: evidence from the patient navigation research program. Cancer. 2014;120(4):570–8.

Dobbs RW, Stinson J, Vasavada SR, Caldwell BM, Freeman VL, Garvey DF, et al. Helping men find their way: improving prostate Cancer clinic attendance via patient navigation. J Community Health. 2020;45(3):561–8.

Ell K, Vourlekis B, Muderspach L, Nissly J, Padgett D, Pineda D, et al. Abnormal cervical screen follow-up among low-income latinas: project SAFe. J Women’s Health Gend-Based Med. 2002;11(7):639–51.

Ell K, Katon W, Xie B, Lee PJ, Kapetanovic S, Guterman J, et al. One-year postcollaborative depression care trial outcomes among predominantly Hispanic diabetes safety net patients. Gen Hosp Psychiatry. 2011;33(5):436–42.

Luckett R, Pena N, Vitonis A, Bernstein MR, Feldman S. Effect of patient navigator program on no-show rates at an academic referral colposcopy clinic. J Women’s Health. 2015;24(7):608–15.

Nash D, Azeez S, Vlahov D, Schori M. Evaluation of an intervention to increase screening colonoscopy in an urban public hospital setting. J Urban Health. 2006;83(2):231–43.

Oluwole SF, Ali AO, Adu A, Blane BP, Barlow B, Oropeza R, et al. Impact of a cancer screening program on breast cancer stage at diagnosis in a medically underserved urban community. J Am Coll Surg. 2003;196(2):180–8.


Acknowledgements

We thank the patients who participated in this intervention study.

This work was supported by grants from the Merck Foundation Alliance to Advance Patient-Centered Cancer Care (PI: Heidi Hamann, PhD).

Author information

Authors and affiliations

University of Arizona, Tucson, AZ, USA

Elizabeth S. Ver Hoeve, Julie S. Armin & Heidi A. Hamann

Banner Health, Tucson, AZ, USA

Monica Hernandez, Elizabeth High, Michael Frithsen & Wendy Andrews

University of Illinois Chicago, Chicago, IL, USA

Elizabeth Calhoun

Banyan Integrative Health, Tucson, AZ, USA

Leila Ali-Akbarian

University of Arizona College of Medicine, Tucson, AZ, USA

Michael Frithsen & Wendy Andrews

Contributions

EV participated in conceptualization, data curation, data analysis, and writing the original draft. EC participated in conceptualization, data curation, and project administration. MH participated in data curation and project administration. EH participated in project administration. JA participated in conceptualization and project administration. LA participated in conceptualization. MF and WA participated in project administration. HH participated in conceptualization, data curation, data analysis, funding acquisition, and writing the original draft. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Elizabeth S. Ver Hoeve.

Ethics declarations

Ethics approval and consent to participate

This study received ethical approval from the Institutional Review Board at the University of Arizona (#1804483104), and informed consent was obtained from all participants. All methods were carried out in accordance with the Declaration of Helsinki.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Ver Hoeve, E.S., Calhoun, E., Hernandez, M. et al. Evaluating implementation of a community-focused patient navigation intervention at an NCI-designated cancer center using RE-AIM. BMC Health Serv Res 24, 550 (2024). https://doi.org/10.1186/s12913-024-10919-y

Download citation

Received: 13 July 2023

Accepted: 28 March 2024

Published: 29 April 2024

DOI: https://doi.org/10.1186/s12913-024-10919-y

Keywords

  • Community-focused patient navigation
  • Implementation science
  • Cancer care coordination
  • Sustainability
  • Supportive care interventions


ORIGINAL RESEARCH article

Cross-sectional associations of self-perceived stress and hair cortisol with metabolic outcomes and microvascular complications in type 2 diabetes

Magdalena Buckert

  • 1 IU International University of Applied Sciences, Mannheim, Germany
  • 2 Heidelberg University Hospital, Heidelberg, Baden-Württemberg, Germany

The final, formatted version of the article will be published soon.

Increasing evidence supports chronic psychological stress as a risk factor for the development of type 2 diabetes. Much less is known, however, about the role of chronic stress in established diabetes. The aim of the current study was to comprehensively assess chronic stress in a sample of 73 patients with type 2 diabetes and 48 non-diabetic control participants, and to investigate associations with indicators of glycemic control (HbA1c), insulin resistance (HOMA-IR), β-cell functioning (C-peptide), illness duration, and the presence of microvascular complications.

Chronic stress was measured using questionnaires (the Perceived Stress Scale (PSS), the Screening Scale of the Trier Inventory of Chronic Stress (SSCS), the Perceived Health Questionnaire (PHQ), as well as the Questionnaire on Stress in Patients with Diabetes-Revised (QSD-R)); hair cortisol was used as a biological indicator.

We found that patients with type 2 diabetes had higher levels of hair cortisol in comparison to the control group (F(1,112) = 5.3; p = .023). Within the diabetic group, higher hair cortisol was associated with a longer duration of the illness (r = .25, p = .04). General perceived stress did not show significant associations with metabolic outcomes in type 2 diabetes patients. In contrast, higher diabetes-related distress, as measured with the QSD-R, was associated with lower glycemic control (r = .28, p = .02), higher insulin resistance (r = .26, p = .03), and a longer duration of the illness (r = .30, p = .01).

Our results corroborate the importance of chronic psychological stress in type 2 diabetes. It appears, however, that once type 2 diabetes has developed, diabetes-specific distress gains in importance over general subjective stress. On a biological level, increased cortisol production could be linked to the course of the illness.
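To make the reported statistics concrete, here is a minimal sketch, assuming synthetic placeholder data, of how a between-group F-test and Pearson correlations like those above are typically computed with NumPy and SciPy. This is not the authors' analysis code or dataset; every variable name and value below is hypothetical.

```python
# Illustrative sketch only: synthetic data standing in for the study's
# measurements of hair cortisol, diabetes-related distress, and HbA1c.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical hair cortisol values for the two groups described above
# (73 patients with type 2 diabetes, 48 non-diabetic controls).
cortisol_t2d = rng.lognormal(mean=1.8, sigma=0.5, size=73)
cortisol_ctrl = rng.lognormal(mean=1.6, sigma=0.5, size=48)

# A two-group one-way ANOVA yields an F statistic with (1, n1 + n2 - 2)
# degrees of freedom, analogous to the F(1,112) reported in the abstract
# (the study's smaller denominator df presumably reflects exclusions or
# covariates not modeled here).
f_stat, p_val = stats.f_oneway(cortisol_t2d, cortisol_ctrl)
print(f"F(1, {73 + 48 - 2}) = {f_stat:.2f}, p = {p_val:.3f}")

# Hypothetical distress scores and HbA1c values within the diabetic group,
# analogous to the reported correlation r = .28, p = .02.
distress = rng.normal(size=73)
hba1c = 7.5 + 0.3 * distress + rng.normal(scale=0.8, size=73)
r, p = stats.pearsonr(distress, hba1c)
print(f"r = {r:.2f}, p = {p:.3f}")

# The p-value of a Pearson correlation with n pairs comes from
# t = r * sqrt(n - 2) / sqrt(1 - r^2), tested against a t(n - 2) distribution.
n = 73
t = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)
p_manual = 2 * stats.t.sf(abs(t), df=n - 2)  # matches p from pearsonr
```

With only two groups, the F-test above is equivalent to an independent-samples t-test (F = t²), which is why the abstract's F(1,112) has a single numerator degree of freedom.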

Keywords: psychological stress, hair cortisol, type 2 diabetes, microvascular complications, HbA1c

Received: 21 Sep 2023; Accepted: 29 Apr 2024.

Copyright: © 2024 Buckert, Streibel, Hartmann, Monzer, Kopf, Szendroedi and Wild. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Magdalena Buckert, IU International University of Applied Sciences, Mannheim, Germany; Beate Wild, Heidelberg University Hospital, Heidelberg, 69120, Baden-Württemberg, Germany

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
