
How to Write a Conclusion for Research Papers (with Examples)

The conclusion of a research paper is a crucial section that plays a significant role in the overall impact and effectiveness of your work. However, it typically receives less attention than the introduction and the body of the paper. The conclusion provides a concise summary of the key findings, their significance and implications, and a sense of closure to the study. Discussing how the findings can be applied in real-world scenarios or inform policy, practice, or decision-making is especially valuable to practitioners and policymakers. The research paper conclusion also gives other researchers clear insights and valuable information that they can build on to advance knowledge in the field.

The research paper conclusion should explain the significance of your findings within the broader context of your field. It should state how your results contribute to the existing body of knowledge and whether they confirm or challenge existing theories or hypotheses. Identifying unanswered questions or areas requiring further investigation also demonstrates your awareness of the broader research landscape.

Remember to tailor the research paper conclusion to the specific needs and interests of your intended audience, which may include researchers, practitioners, policymakers, or a combination of these.

Table of Contents

  • What is a conclusion in a research paper?
  • Types of conclusions for research papers
  • Importance of a good research paper conclusion
  • How to write a conclusion for your research paper
  • Research paper conclusion examples
  • How to write a research paper conclusion with Paperpal?
  • Frequently Asked Questions

What is a conclusion in a research paper?

A conclusion in a research paper is the final section where you summarize and wrap up your research, presenting the key findings and insights derived from your study. The research paper conclusion is not the place to introduce new information or data that was not discussed in the main body of the paper. When working on how to conclude a research paper, remember to stick to summarizing and interpreting existing content. The research paper conclusion serves the following purposes: [1]

  • Warn readers of the possible consequences of not attending to the problem.
  • Recommend specific course(s) of action.
  • Restate key ideas to drive home the ultimate point of your research paper.
  • Provide a “take-home” message that you want the readers to remember about your study.


Types of conclusions for research papers

In research papers, the conclusion provides closure to the reader. The type of research paper conclusion you choose depends on the nature of your study, your goals, and your target audience. Here are three common types of conclusions:

A summarizing conclusion is the most common type of conclusion in research papers. It involves summarizing the main points, reiterating the research question, and restating the significance of the findings, and it is used across disciplines.

An editorial conclusion is less common but can be used in research papers that are focused on proposing or advocating for a particular viewpoint or policy. It involves presenting a strong editorial or opinion based on the research findings and offering recommendations or calls to action.

An externalizing conclusion is a type of conclusion that extends the research beyond the scope of the paper by suggesting potential future research directions or discussing the broader implications of the findings. This type of conclusion is often used in more theoretical or exploratory research papers.

Align your conclusion’s tone with the rest of your research paper. Start Writing with Paperpal Now!  

Importance of a good research paper conclusion

The conclusion in a research paper serves several important purposes:

  • Offers Implications and Recommendations: Your research paper conclusion is an excellent place to discuss the broader implications of your research and suggest potential areas for further study. It’s also an opportunity to offer practical recommendations based on your findings.
  • Provides Closure: A good research paper conclusion provides a sense of closure to your paper. It should leave the reader with a feeling that they have reached the end of a well-structured and thought-provoking research project.
  • Leaves a Lasting Impression: Writing a well-crafted research paper conclusion leaves a lasting impression on your readers. It’s your final opportunity to leave them with a new idea, a call to action, or a memorable quote.


How to write a conclusion for your research paper

Writing a strong conclusion for your research paper is essential to leave a lasting impression on your readers. Here’s a step-by-step process to help you decide what to put in the conclusion of a research paper: [2]

  • Research Statement: Begin your research paper conclusion by restating your research statement. This reminds the reader of the main point you’ve been trying to prove throughout your paper. Keep it concise and clear.
  • Key Points: Summarize the main arguments and key points you’ve made in your paper. Avoid introducing new information in the research paper conclusion. Instead, provide a concise overview of what you’ve discussed in the body of your paper.
  • Address the Research Questions: If your research paper is based on specific research questions or hypotheses, briefly address whether you’ve answered them or achieved your research goals. Discuss the significance of your findings in this context.
  • Significance: Highlight the importance of your research and its relevance in the broader context. Explain why your findings matter and how they contribute to the existing knowledge in your field.
  • Implications: Explore the practical or theoretical implications of your research. How might your findings impact future research, policy, or real-world applications? Consider the “so what?” question.
  • Future Research: Offer suggestions for future research in your area. What questions or aspects remain unanswered or warrant further investigation? This shows that your work opens the door for future exploration.
  • Closing Thought: Conclude your research paper conclusion with a thought-provoking or memorable statement. This can leave a lasting impression on your readers and wrap up your paper effectively. Avoid introducing new information or arguments here.
  • Proofread and Revise: Carefully proofread your conclusion for grammar, spelling, and clarity. Ensure that your ideas flow smoothly and that your conclusion is coherent and well-structured.

Write your research paper conclusion 2x faster with Paperpal. Try it now!

Remember that a well-crafted research paper conclusion is a reflection of the strength of your research and your ability to communicate its significance effectively. It should leave a lasting impression on your readers and tie together all the threads of your paper. Now that you know how to start the conclusion of a research paper and what elements to include to make it impactful, let’s look at a research paper conclusion sample.


How to write a research paper conclusion with Paperpal?

A research paper conclusion is not just a summary of your study, but a synthesis of the key findings that ties the research together and places it in a broader context. A research paper conclusion should be concise, typically around one paragraph in length. However, some complex topics may require a longer conclusion to ensure the reader is left with a clear understanding of the study’s significance. Paperpal, an AI writing assistant trusted by over 800,000 academics globally, can help you write a well-structured conclusion for your research paper. 

  • Sign Up or Log In: Create a new Paperpal account or log in with your details.
  • Navigate to Features: Once logged in, head over to the side navigation pane. Click on Templates and you’ll find a suite of generative AI features to help you write better, faster.
  • Generate an outline: Under Templates, select ‘Outlines’. Choose ‘Research article’ as your document type.
  • Select your section: Since you’re focusing on the conclusion, select this section when prompted.
  • Choose your field of study: Identifying your field of study allows Paperpal to provide more targeted suggestions, ensuring the relevance of your conclusion to your specific area of research.
  • Provide a brief description of your study: Enter details about your research topic and findings. This information helps Paperpal generate a tailored outline that aligns with your paper’s content.
  • Generate the conclusion outline: After entering all necessary details, click on ‘Generate’. Paperpal will then create a structured outline for your conclusion to help you start writing and build upon.
  • Write your conclusion: Use the generated outline to build your conclusion. The outline serves as a guide, ensuring you cover all critical aspects of a strong conclusion, from summarizing key findings to highlighting the research’s implications.
  • Refine and enhance: Paperpal’s ‘Make Academic’ feature can be particularly useful in the final stages. Select any paragraph of your conclusion and use this feature to elevate the academic tone, ensuring your writing is aligned with academic journal standards.

By following these steps, Paperpal not only simplifies the process of writing a research paper conclusion but also ensures it is impactful, concise, and aligned with academic standards. Sign up with Paperpal today and write your research paper conclusion 2x faster.

Frequently Asked Questions

The research paper conclusion is a crucial part of your paper, as it provides the final opportunity to leave a strong impression on your readers. In the research paper conclusion, summarize the main points of your research paper by restating your research statement, highlighting the most important findings, addressing the research questions or objectives, explaining the broader context of the study, discussing the significance of your findings, providing recommendations if applicable, and emphasizing the takeaway message. The main purpose of the conclusion is to remind the reader of the main point or argument of your paper and to provide a clear and concise summary of the key findings and their implications. All these elements should feature on your list of what to put in the conclusion of a research paper to create a strong final statement for your work.

A strong conclusion is a critical component of a research paper, as it provides an opportunity to wrap up your arguments, reiterate your main points, and leave a lasting impression on your readers. Here are the key elements of a strong research paper conclusion:

  1. Conciseness: A research paper conclusion should be concise and to the point. It should not introduce new information or ideas that were not discussed in the body of the paper.
  2. Summarization: The research paper conclusion should be comprehensive enough to give the reader a clear understanding of the research’s main contributions.
  3. Relevance: Ensure that the information included in the research paper conclusion is directly relevant to the research paper’s main topic and objectives; avoid unnecessary details.
  4. Connection to the Introduction: A well-structured research paper conclusion often revisits the key points made in the introduction and shows how the research has addressed the initial questions or objectives.
  5. Emphasis: Highlight the significance and implications of your research. Why is your study important? What are the broader implications or applications of your findings?
  6. Call to Action: Include a call to action or a recommendation for future research or action based on your findings.

The length of a research paper conclusion can vary depending on several factors, including the overall length of the paper, the complexity of the research, and the specific journal requirements. While there is no strict rule for the length of a conclusion, it’s generally advisable to keep it relatively short. A typical research paper conclusion might be around 5-10% of the paper’s total length. For example, if your paper is 10 pages long, the conclusion might be roughly half a page to one page in length.

In general, you do not need to include citations in the research paper conclusion. Citations are typically reserved for the body of the paper, where they support your arguments and provide evidence for your claims. However, there are some exceptions to this rule:

  1. If you are drawing a direct quote or paraphrasing a specific source in your research paper conclusion, you should include a citation to give proper credit to the original author.
  2. If your conclusion refers to or discusses specific research, data, or sources that are crucial to the overall argument, citations can be included to reinforce your conclusion’s validity.

The conclusion of a research paper serves several important purposes:

  1. Summarize the key points
  2. Reinforce the main argument
  3. Provide closure
  4. Offer insights or implications
  5. Engage the reader
  6. Reflect on limitations

Remember that the primary purpose of the research paper conclusion is to leave a lasting impression on the reader, reinforcing the key points and providing closure to your research. It’s often the last part of the paper that the reader will see, so it should be strong and well-crafted.

References

1. Makar, G., Foltz, C., Lendner, M., & Vaccaro, A. R. (2018). How to write effective discussion and conclusion sections. Clinical Spine Surgery, 31(8), 345-346.
2. Bunton, D. (2005). The structure of PhD conclusion chapters. Journal of English for Academic Purposes, 4(3), 207-224.




Organizing Your Social Sciences Research Paper

Independent and Dependent Variables

Definitions

Dependent Variable: The variable that depends on other factors that are measured. These variables are expected to change as a result of an experimental manipulation of the independent variable or variables. It is the presumed effect.

Independent Variable: The variable that is stable and unaffected by the other variables you are trying to measure. It refers to the condition of an experiment that is systematically manipulated by the investigator. It is the presumed cause.

Cramer, Duncan and Dennis Howitt. The SAGE Dictionary of Statistics . London: SAGE, 2004; Penslar, Robin Levin and Joan P. Porter. Institutional Review Board Guidebook: Introduction . Washington, DC: United States Department of Health and Human Services, 2010; "What are Dependent and Independent Variables?" Graphic Tutorial.

Identifying Dependent and Independent Variables

Don't feel bad if you are confused about which is the dependent variable and which is the independent variable in social and behavioral sciences research. However, it's important that you learn the difference because framing a study using these variables is a common approach to organizing the elements of a social sciences research study in order to discover relevant and meaningful results. Specifically, it is important for these two reasons:

  • You need to understand and be able to evaluate their application in other people's research.
  • You need to apply them correctly in your own research.

A variable in research simply refers to a person, place, thing, or phenomenon that you are trying to measure in some way. The best way to understand the difference between a dependent and independent variable is that the meaning of each is implied by what the words tell us about the variable you are using. You can test this with a simple exercise from the Graphic Tutorial website. Take the sentence, "The [independent variable] causes a change in [dependent variable] and it is not possible that [dependent variable] could cause a change in [independent variable]." Insert the names of the variables you are using in the sentence in the way that makes the most sense. This will help you identify each type of variable. If you're still not sure, consult with your professor before you begin to write.

Fan, Shihe. "Independent Variable." In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: SAGE, 2010), pp. 592-594; "What are Dependent and Independent Variables?" Graphic Tutorial; Salkind, Neil J. "Dependent Variable." In Encyclopedia of Research Design , Neil J. Salkind, editor. (Thousand Oaks, CA: SAGE, 2010), pp. 348-349;

Structure and Writing Style

The process of examining a research problem in the social and behavioral sciences is often framed around methods of analysis that compare, contrast, correlate, average, or integrate relationships between or among variables. Techniques include associations, sampling, random selection, and blind selection. Designating the dependent and independent variables involves unpacking the research problem in a way that identifies a general cause and effect relationship and classifying these variables as either independent or dependent.

The variables should be outlined in the introduction of your paper and explained in more detail in the methods section. There are no rules about the structure and style for writing about independent or dependent variables but, as with any academic writing, clarity and succinctness are most important.

After you have described the research problem and its significance in relation to prior research, explain why you have chosen to examine the problem using a method of analysis that investigates the relationships between or among independent and dependent variables. State what it is about the research problem that lends itself to this type of analysis. For example, if you are investigating the relationship between corporate environmental sustainability efforts [the independent variable] and dependent variables associated with measuring employee satisfaction at work using a survey instrument, you would first identify each variable and then provide background information about the variables. What is meant by "environmental sustainability"? Are you looking at a particular company [e.g., General Motors] or are you investigating an industry [e.g., the meat packing industry]? Why is employee satisfaction in the workplace important? How does a company make its employees aware of sustainability efforts, and why would a company even care that its employees know about these efforts?

Identify each variable for the reader and define each. In the introduction, this information can be presented in a paragraph or two when you describe how you are going to study the research problem. In the methods section, you build on the literature review of prior studies about the research problem to describe in detail the background of each variable, breaking each down for measurement and analysis. For example, what activities do you examine that reflect a company's commitment to environmental sustainability? Levels of employee satisfaction can be measured by a survey that asks about things like volunteerism or a desire to stay at the company for a long time.

The structure and writing style of describing the variables and their application to analyzing the research problem should be stated and unpacked in such a way that the reader obtains a clear understanding of the relationships between the variables and why they are important. This is also important so that the study can be replicated in the future using the same variables but applied in a different way.

Fan, Shihe. "Independent Variable." In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: SAGE, 2010), pp. 592-594; "What are Dependent and Independent Variables?" Graphic Tutorial; “Case Example for Independent and Dependent Variables.” ORI Curriculum Examples. U.S. Department of Health and Human Services, Office of Research Integrity; Salkind, Neil J. "Dependent Variable." In Encyclopedia of Research Design , Neil J. Salkind, editor. (Thousand Oaks, CA: SAGE, 2010), pp. 348-349; “Independent Variables and Dependent Variables.” Karl L. Wuensch, Department of Psychology, East Carolina University [posted email exchange]; “Variables.” Elements of Research. Dr. Camille Nebeker, San Diego State University.



Chapter 15: Interpreting results and drawing conclusions

Holger J Schünemann, Gunn E Vist, Julian PT Higgins, Nancy Santesso, Jonathan J Deeks, Paul Glasziou, Elie A Akl, Gordon H Guyatt; on behalf of the Cochrane GRADEing Methods Group

Key Points:

  • This chapter provides guidance on interpreting the results of synthesis in order to communicate the conclusions of the review effectively.
  • Methods are presented for computing, presenting and interpreting relative and absolute effects for dichotomous outcome data, including the number needed to treat (NNT).
  • For continuous outcome measures, review authors can present summary results for studies using natural units of measurement or as minimal important differences when all studies use the same scale. When studies measure the same construct but with different scales, review authors will need to find a way to interpret the standardized mean difference, or to use an alternative effect measure for the meta-analysis such as the ratio of means.
  • Review authors should not describe results as ‘statistically significant’, ‘not statistically significant’ or ‘non-significant’ or unduly rely on thresholds for P values, but report the confidence interval together with the exact P value.
  • Review authors should not make recommendations about healthcare decisions, but they can – after describing the certainty of evidence and the balance of benefits and harms – highlight different actions that might be consistent with particular patterns of values and preferences and other factors that determine a decision such as cost.

Cite this chapter as: Schünemann HJ, Vist GE, Higgins JPT, Santesso N, Deeks JJ, Glasziou P, Akl EA, Guyatt GH. Chapter 15: Interpreting results and drawing conclusions. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook.

15.1 Introduction

The purpose of Cochrane Reviews is to facilitate healthcare decisions by patients and the general public, clinicians, guideline developers, administrators and policy makers. They also inform future research. A clear statement of findings, a considered discussion and a clear presentation of the authors’ conclusions are, therefore, important parts of the review. In particular, the following issues can help people make better informed decisions and increase the usability of Cochrane Reviews:

  • information on all important outcomes, including adverse outcomes;
  • the certainty of the evidence for each of these outcomes, as it applies to specific populations and specific interventions; and
  • clarification of the manner in which particular values and preferences may bear on the desirable and undesirable consequences of the intervention.

A ‘Summary of findings’ table, described in Chapter 14, Section 14.1, provides key pieces of information about health benefits and harms in a quick and accessible format. It is highly desirable that review authors include a ‘Summary of findings’ table in Cochrane Reviews alongside a sufficient description of the studies and meta-analyses to support its contents. This description includes the rating of the certainty of evidence, also called the quality of the evidence or confidence in the estimates of the effects, which is expected in all Cochrane Reviews.

‘Summary of findings’ tables are usually supported by full evidence profiles which include the detailed ratings of the evidence (Guyatt et al 2011a, Guyatt et al 2013a, Guyatt et al 2013b, Santesso et al 2016). The Discussion section of the text of the review provides space to reflect and consider the implications of these aspects of the review’s findings. Cochrane Reviews include five standard subheadings to ensure the Discussion section places the review in an appropriate context: ‘Summary of main results (benefits and harms)’; ‘Potential biases in the review process’; ‘Overall completeness and applicability of evidence’; ‘Certainty of the evidence’; and ‘Agreements and disagreements with other studies or reviews’. Following the Discussion, the Authors’ conclusions section is divided into two standard subsections: ‘Implications for practice’ and ‘Implications for research’. The assessment of the certainty of evidence facilitates a structured description of the implications for practice and research.

Because Cochrane Reviews have an international audience, the Discussion and Authors’ conclusions should, so far as possible, assume a broad international perspective and provide guidance for how the results could be applied in different settings, rather than being restricted to specific national or local circumstances. Cultural differences and economic differences may both play an important role in determining the best course of action based on the results of a Cochrane Review. Furthermore, individuals within societies have widely varying values and preferences regarding health states, and use of societal resources to achieve particular health states. For all these reasons, and because information that goes beyond that included in a Cochrane Review is required to make fully informed decisions, different people will often make different decisions based on the same evidence presented in a review.

Thus, review authors should avoid specific recommendations that inevitably depend on assumptions about available resources, values and preferences, and other factors such as equity considerations, feasibility and acceptability of an intervention. The purpose of the review should be to present information and aid interpretation rather than to offer recommendations. The discussion and conclusions should help people understand the implications of the evidence in relation to practical decisions and apply the results to their specific situation. Review authors can aid this understanding of the implications by laying out different scenarios that describe certain value structures.

In this chapter, we address first one of the key aspects of interpreting findings that is also fundamental in completing a ‘Summary of findings’ table: the certainty of evidence related to each of the outcomes. We then provide a more detailed consideration of issues around applicability and around interpretation of numerical results, and provide suggestions for presenting authors’ conclusions.

15.2 Issues of indirectness and applicability

15.2.1 The role of the review author

“A leap of faith is always required when applying any study findings to the population at large” or to a specific person. “In making that jump, one must always strike a balance between making justifiable broad generalizations and being too conservative in one’s conclusions” (Friedman et al 1985). In addition to issues about risk of bias and other domains determining the certainty of evidence, this leap of faith is related to how well the identified body of evidence matches the posed PICO (Population, Intervention, Comparator(s) and Outcome) question. As to the population, no individual can be entirely matched to the population included in research studies. At the time of decision, there will always be differences between the study population and the person or population to whom the evidence is applied; sometimes these differences are slight, sometimes large.

The terms applicability, generalizability, external validity and transferability are related, sometimes used interchangeably and have in common that they lack a clear and consistent definition in the classic epidemiological literature (Schünemann et al 2013). However, all of the terms describe one overarching theme: whether or not available research evidence can be directly used to answer the health and healthcare question at hand, ideally supported by a judgement about the degree of confidence in this use (Schünemann et al 2013). GRADE’s certainty domains include a judgement about ‘indirectness’ to describe all of these aspects including the concept of direct versus indirect comparisons of different interventions (Atkins et al 2004, Guyatt et al 2008, Guyatt et al 2011b).

To address adequately the extent to which a review is relevant for the purpose to which it is being put, there are certain things the review author must do, and certain things the user of the review must do to assess the degree of indirectness. Cochrane and the GRADE Working Group suggest using a very structured framework to address indirectness. We discuss here and in Chapter 14 what the review author can do to help the user. Cochrane Review authors must be extremely clear on the population, intervention and outcomes that they intend to address. Chapter 14, Section 14.1.2, also emphasizes a crucial step: the specification of all patient-important outcomes relevant to the intervention strategies under comparison.

In considering whether the effect of an intervention applies equally to all participants, and whether different variations on the intervention have similar effects, review authors need to make a priori hypotheses about possible effect modifiers, and then examine those hypotheses (see Chapter 10, Section 10.10 and Section 10.11). If they find apparent subgroup effects, they must ultimately decide whether or not these effects are credible (Sun et al 2012). Differences between subgroups, particularly those that correspond to differences between studies, should be interpreted cautiously. Some chance variation between subgroups is inevitable so, unless there is good reason to believe that there is an interaction, review authors should not assume that the subgroup effect exists. If, despite due caution, review authors judge subgroup effects in terms of relative effect estimates as credible (i.e. the effects differ credibly), they should conduct separate meta-analyses for the relevant subgroups, and produce separate ‘Summary of findings’ tables for those subgroups.

The user of the review will be challenged with ‘individualization’ of the findings, whether they seek to apply the findings to an individual patient or a policy decision in a specific context. For example, even if relative effects are similar across subgroups, absolute effects will differ according to baseline risk. Review authors can help provide this information by identifying groups of people with varying baseline risks in the ‘Summary of findings’ tables, as discussed in Chapter 14, Section 14.1.3. Users can then identify their specific case or population as belonging to a particular risk group, if relevant, and assess their likely magnitude of benefit or harm accordingly. A description of the identifying prognostic or baseline risk factors in a brief scenario (e.g. age or gender) will help users of a review further.

Another decision users must make is whether their individual case or population of interest is so different from those included in the studies that they cannot use the results of the systematic review and meta-analysis at all. Rather than rigidly applying the inclusion and exclusion criteria of studies, it is better to ask whether or not there are compelling reasons why the evidence should not be applied to a particular patient. Review authors can sometimes help decision makers by identifying important variation where divergence might limit the applicability of results (Rothwell 2005, Schünemann et al 2006, Guyatt et al 2011b, Schünemann et al 2013), including biologic and cultural variation, and variation in adherence to an intervention.

In addressing these issues, review authors cannot be aware of, or address, the myriad of differences in circumstances around the world. They can, however, address differences of known importance to many people and, importantly, they should avoid assuming that other people’s circumstances are the same as their own in discussing the results and drawing conclusions.

15.2.2 Biological variation

Issues of biological variation that may affect the applicability of a result to a reader or population include divergence in pathophysiology (e.g. biological differences between women and men that may affect responsiveness to an intervention) and divergence in a causative agent (e.g. for infectious diseases such as malaria, which may be caused by several different parasites). The discussion of the results in the review should make clear whether the included studies addressed all or only some of these groups, and whether any important subgroup effects were found.

15.2.3 Variation in context

Some interventions, particularly non-pharmacological interventions, may work in some contexts but not in others; the situation has been described as program by context interaction (Hawe et al 2004). Contextual factors might pertain to the host organization in which an intervention is offered, such as the expertise, experience and morale of the staff expected to carry out the intervention, the competing priorities for the clinician’s or staff’s attention, the local resources such as service and facilities made available to the program and the status or importance given to the program by the host organization. Broader context issues might include aspects of the system within which the host organization operates, such as the fee or payment structure for healthcare providers and the local insurance system. Some interventions, in particular complex interventions (see Chapter 17), can be only partially implemented in some contexts, and this requires judgements about indirectness of the intervention and its components for readers in that context (Schünemann 2013).

Contextual factors may also pertain to the characteristics of the target group or population, such as cultural and linguistic diversity, socio-economic position, rural/urban setting. These factors may mean that a particular style of care or relationship evolves between service providers and consumers that may or may not match the values and technology of the program.

For many years these aspects have been acknowledged when decision makers have argued that results of evidence reviews from other countries do not apply in their own country or setting. Whilst some programmes/interventions have been successfully transferred from one context to another, others have not (Resnicow et al 1993, Lumley et al 2004, Coleman et al 2015). Review authors should be cautious when making generalizations from one context to another. They should report on the presence (or otherwise) of context-related information in intervention studies, where this information is available.

15.2.4 Variation in adherence

Variation in the adherence of the recipients and providers of care can limit the certainty in the applicability of results. Predictable differences in adherence can be due to divergence in how recipients of care perceive the intervention (e.g. the importance of side effects), economic conditions or attitudes that make some forms of care inaccessible in some settings, such as in low-income countries (Dans et al 2007). It should not be assumed that high levels of adherence in closely monitored randomized trials will translate into similar levels of adherence in normal practice.

15.2.5 Variation in values and preferences

Decisions about healthcare management strategies and options involve trading off health benefits and harms. The right choice may differ for people with different values and preferences (i.e. the importance people place on the outcomes and interventions), and it is important that decision makers ensure that decisions are consistent with a patient or population’s values and preferences. The importance placed on outcomes, together with other factors, will influence whether the recipients of care will or will not accept an option that is offered (Alonso-Coello et al 2016) and, thus, can be one factor influencing adherence. In Section 15.6, we describe how the review author can help this process and the limits of supporting decision making based on intervention reviews.

15.3 Interpreting results of statistical analyses

15.3.1 Confidence intervals

Results for both individual studies and meta-analyses are reported with a point estimate together with an associated confidence interval. For example, ‘The odds ratio was 0.75 with a 95% confidence interval of 0.70 to 0.80’. The point estimate (0.75) is the best estimate of the magnitude and direction of the experimental intervention’s effect compared with the comparator intervention. The confidence interval describes the uncertainty inherent in any estimate, and describes a range of values within which we can be reasonably sure that the true effect actually lies. If the confidence interval is relatively narrow (e.g. 0.70 to 0.80), the effect size is known precisely. If the interval is wider (e.g. 0.60 to 0.93), the uncertainty is greater, although there may still be enough precision to make decisions about the utility of the intervention. Intervals that are very wide (e.g. 0.50 to 1.10) indicate that we have little knowledge about the effect; this imprecision affects our certainty in the evidence, and further information would be needed before we could draw a more certain conclusion.
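
For illustration only (this sketch is ours, not part of the Handbook), the following Python snippet computes an odds ratio and its 95% confidence interval from an invented 2×2 table, using the usual normal approximation on the log odds ratio scale:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table.
    a, b: events and non-events in the experimental group;
    c, d: events and non-events in the comparator group."""
    or_est = (a * d) / (b * c)
    # Standard error of the log odds ratio
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_est) - z * se_log_or)
    upper = math.exp(math.log(or_est) + z * se_log_or)
    return or_est, lower, upper

# Invented counts: 120/400 events versus 150/400 events
or_est, lower, upper = odds_ratio_ci(120, 280, 150, 250)
print(f"OR = {or_est:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
# OR = 0.71, 95% CI 0.53 to 0.96
```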

A 95% confidence interval is often interpreted as indicating a range within which we can be 95% certain that the true effect lies. This statement is a loose interpretation, but is useful as a rough guide. The strictly correct interpretation of a confidence interval is based on the hypothetical notion of considering the results that would be obtained if the study were repeated many times. If a study were repeated infinitely often, and on each occasion a 95% confidence interval calculated, then 95% of these intervals would contain the true effect (see Section 15.3.3 for further explanation).
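
This repeated-sampling interpretation can be demonstrated with a small simulation. The sketch below is purely illustrative, with arbitrary population values: it draws many samples from a population with a known mean, builds a 95% confidence interval from each, and counts how often the interval contains the truth.

```python
import random, math

# Arbitrary population values (mean 10, SD 2) and sample size, for illustration
random.seed(1)
true_mean, sd, n, reps = 10.0, 2.0, 50, 10_000

covered = 0
for _ in range(reps):
    sample = [random.gauss(true_mean, sd) for _ in range(n)]
    m = sum(sample) / n
    s = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
    half_width = 1.96 * s / math.sqrt(n)  # normal-approximation 95% CI
    if m - half_width <= true_mean <= m + half_width:
        covered += 1

print(f"{covered / reps:.1%} of intervals contain the true mean")  # close to 95%
```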

The width of the confidence interval for an individual study depends to a large extent on the sample size. Larger studies tend to give more precise estimates of effects (and hence have narrower confidence intervals) than smaller studies. For continuous outcomes, precision depends also on the variability in the outcome measurements (i.e. how widely individual results vary between people in the study, measured as the standard deviation); for dichotomous outcomes it depends on the risk of the event (more frequent events allow more precision, and narrower confidence intervals), and for time-to-event outcomes it also depends on the number of events observed. All these quantities are used in computation of the standard errors of effect estimates from which the confidence interval is derived.

The width of a confidence interval for a meta-analysis depends on the precision of the individual study estimates and on the number of studies combined. In addition, for random-effects models, precision will decrease with increasing heterogeneity and confidence intervals will widen correspondingly (see Chapter 10, Section 10.10.4). As more studies are added to a meta-analysis the width of the confidence interval usually decreases. However, if the additional studies increase the heterogeneity in the meta-analysis and a random-effects model is used, it is possible that the confidence interval width will increase.

Confidence intervals and point estimates have different interpretations in fixed-effect and random-effects models. While the fixed-effect estimate and its confidence interval address the question ‘what is the best (single) estimate of the effect?’, the random-effects estimate assumes there to be a distribution of effects, and the estimate and its confidence interval address the question ‘what is the best estimate of the average effect?’ A confidence interval may be reported for any level of confidence (although they are most commonly reported for 95%, and sometimes 90% or 99%). For example, the odds ratio of 0.80 could be reported with an 80% confidence interval of 0.73 to 0.88; a 90% interval of 0.72 to 0.89; and a 95% interval of 0.70 to 0.92. As the confidence level increases, the confidence interval widens.

There is logical correspondence between the confidence interval and the P value (see Section 15.3.3). The 95% confidence interval for an effect will exclude the null value (such as an odds ratio of 1.0 or a risk difference of 0) if and only if the test of significance yields a P value of less than 0.05. If the P value is exactly 0.05, then either the upper or lower limit of the 95% confidence interval will be at the null value. Similarly, the 99% confidence interval will exclude the null if and only if the test of significance yields a P value of less than 0.01.
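
This correspondence can be checked numerically. The following sketch (with illustrative numbers, not drawn from any review) computes the exact two-sided P value for a Z-test on a log odds ratio and confirms that the 95% interval excludes the null exactly when P < 0.05, up to rounding of the 1.96 quantile:

```python
import math

def z_test_log_or(log_or, se):
    """Two-sided P value for H0: log(OR) = 0, plus a 95% CI for the OR."""
    z = log_or / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided P from the normal distribution
    lower, upper = math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)
    return p, lower, upper

# Illustrative numbers: OR = 0.80 with a standard error of 0.10 on the log scale
p, lower, upper = z_test_log_or(math.log(0.80), 0.10)
print(f"P = {p:.4f}, 95% CI {lower:.2f} to {upper:.2f}")
# The 95% CI excludes the null (OR = 1.0) exactly when P < 0.05
print((upper < 1.0 or lower > 1.0) == (p < 0.05))  # True
```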

Together, the point estimate and confidence interval provide information to assess the effects of the intervention on the outcome. For example, suppose that we are evaluating an intervention that reduces the risk of an event and we decide that it would be useful only if it reduced the risk of an event from 30% by at least 5 percentage points to 25% (these values will depend on the specific clinical scenario and outcomes, including the anticipated harms). If the meta-analysis yielded an effect estimate of a reduction of 10 percentage points with a tight 95% confidence interval, say, from 7% to 13%, we would be able to conclude that the intervention was useful since both the point estimate and the entire range of the interval exceed our criterion of a reduction of 5% for net health benefit. However, if the meta-analysis reported the same risk reduction of 10% but with a wider interval, say, from 2% to 18%, although we would still conclude that our best estimate of the intervention effect is that it provides net benefit, we could not be so confident as we still entertain the possibility that the effect could be between 2% and 5%. If the confidence interval was wider still, and included the null value of a difference of 0%, we would still consider the possibility that the intervention has no effect on the outcome whatsoever, and would need to be even more sceptical in our conclusions.

Review authors may use the same general approach to conclude that an intervention is not useful. Continuing with the above example where the criterion for an important difference that should be achieved to provide more benefit than harm is a 5% risk difference, an effect estimate of 2% with a 95% confidence interval of 1% to 4% suggests that the intervention does not provide net health benefit.
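
The decision logic of the preceding two paragraphs can be made explicit. In this sketch, the 5-percentage-point criterion and most of the example intervals come from the worked example above (the interval spanning the null is invented); the classification wording is ours:

```python
def interpret_arr_ci(lower, upper, criterion=5.0):
    """Classify a 95% CI for an absolute risk reduction (in percentage
    points) against a minimal important difference of 5 points."""
    if lower >= criterion:
        return "useful: the entire interval exceeds the criterion"
    if lower <= 0.0:
        return "compatible with no effect: be even more sceptical"
    if upper < criterion:
        return "suggests no net health benefit: any effect is below the criterion"
    return "best estimate suggests net benefit, but the effect may fall short of the criterion"

# The scenarios discussed in the text (percentage points)
for ci in [(7, 13), (2, 18), (-1, 21), (1, 4)]:
    print(ci, "->", interpret_arr_ci(*ci))
```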

15.3.2 P values and statistical significance

A P value is the standard result of a statistical test, and is the probability of obtaining the observed effect (or larger) under a ‘null hypothesis’. In the context of Cochrane Reviews there are two commonly used statistical tests. The first is a test of overall effect (a Z-test), and its null hypothesis is that there is no overall effect of the experimental intervention compared with the comparator on the outcome of interest. The second is the Chi² test for heterogeneity, and its null hypothesis is that there are no differences in the intervention effects across studies.

A P value that is very small indicates that the observed effect is very unlikely to have arisen purely by chance, and therefore provides evidence against the null hypothesis. It has been common practice to interpret a P value by examining whether it is smaller than particular threshold values. In particular, P values less than 0.05 are often reported as ‘statistically significant’, and interpreted as being small enough to justify rejection of the null hypothesis. However, the 0.05 threshold is an arbitrary one that became commonly used in medical and psychological research largely because P values were determined by comparing the test statistic against tabulations of specific percentage points of statistical distributions. If review authors decide to present a P value with the results of a meta-analysis, they should report a precise P value (as calculated by most statistical software), together with the 95% confidence interval. Review authors should not describe results as ‘statistically significant’, ‘not statistically significant’ or ‘non-significant’ or unduly rely on thresholds for P values, but report the confidence interval together with the exact P value (see MECIR Box 15.3.a).

We discuss interpretation of the test for heterogeneity in Chapter 10, Section 10.10.2; the remainder of this section refers mainly to tests for an overall effect. For tests of an overall effect, the computation of P involves both the effect estimate and precision of the effect estimate (driven largely by sample size). As precision increases, the range of plausible effects that could occur by chance is reduced. Correspondingly, the statistical significance of an effect of a particular magnitude will usually be greater (the P value will be smaller) in a larger study than in a smaller study.

P values are commonly misinterpreted in two ways. First, a moderate or large P value (e.g. greater than 0.05) may be misinterpreted as evidence that the intervention has no effect on the outcome. There is an important difference between this statement and the correct interpretation that there is a high probability that the observed effect on the outcome is due to chance alone. To avoid such a misinterpretation, review authors should always examine the effect estimate and its 95% confidence interval.

The second misinterpretation is to assume that a result with a small P value for the summary effect estimate implies that an experimental intervention has an important benefit. Such a misinterpretation is more likely to occur in large studies and meta-analyses that accumulate data over dozens of studies and thousands of participants. The P value addresses the question of whether the experimental intervention effect is precisely nil; it does not examine whether the effect is of a magnitude of importance to potential recipients of the intervention. In a large study, a small P value may represent the detection of a trivial effect that may not lead to net health benefit when compared with the potential harms (i.e. harmful effects on other important outcomes). Again, inspection of the point estimate and confidence interval helps correct interpretations (see Section 15.3.1).

MECIR Box 15.3.a Relevant expectations for conduct of intervention reviews

15.3.3 Relation between confidence intervals, statistical significance and certainty of evidence

The confidence interval (and imprecision) is only one domain that influences overall uncertainty about effect estimates. Uncertainty resulting from imprecision (i.e. statistical uncertainty) may be no less important than uncertainty from indirectness, or any other GRADE domain, in the context of decision making (Schünemann 2016). Thus, the extent to which interpretations of the confidence interval described in Sections 15.3.1 and 15.3.2 correspond to conclusions about overall certainty of the evidence for the outcome of interest depends on these other domains. If there are no concerns about other domains that determine the certainty of the evidence (i.e. risk of bias, inconsistency, indirectness or publication bias), then the interpretation in Sections 15.3.1 and 15.3.2 about the relation of the confidence interval to the true effect may be carried forward to the overall certainty. However, if there are concerns about the other domains that affect the certainty of the evidence, the interpretation about the true effect needs to be seen in the context of further uncertainty resulting from those concerns.

For example, nine randomized controlled trials in almost 6000 cancer patients indicated that the administration of heparin reduces the risk of venous thromboembolism (VTE), with a relative risk reduction of 43% (95% CI 19% to 60%) (Akl et al 2011a). For patients with a plausible baseline risk of approximately 4.6% per year, this relative effect suggests that heparin leads to an absolute risk reduction of 20 fewer VTEs (95% CI 9 fewer to 27 fewer) per 1000 people per year (Akl et al 2011a). Now consider that the review authors or those applying the evidence in a guideline have lowered the certainty in the evidence as a result of indirectness. While the confidence intervals would remain unchanged, the certainty in that confidence interval and in the point estimate as reflecting the truth for the question of interest will be lowered. In fact, the certainty range will have unknown width so there will be unknown likelihood of a result within that range because of this indirectness. The lower the certainty in the evidence, the less we know about the width of the certainty range, although methods for quantifying risk of bias and understanding potential direction of bias may offer insight when lowered certainty is due to risk of bias. Nevertheless, decision makers must consider this uncertainty, and must do so in relation to the effect measure that is being evaluated (e.g. a relative or absolute measure). We will describe the impact on interpretations for dichotomous outcomes in Section 15.4.
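
The arithmetic linking the relative and absolute effects in this example can be checked directly. A minimal sketch, using only the 4.6% baseline risk and the relative risk reduction quoted above (small discrepancies from the quoted CI limits reflect rounding):

```python
baseline_risk = 0.046  # plausible baseline VTE risk per year, as stated above

def fewer_per_1000(rrr, baseline=baseline_risk):
    """Absolute reduction per 1000 people per year implied by a
    relative risk reduction (RRR) at the given baseline risk."""
    return 1000 * baseline * rrr

for label, rrr in [("point estimate", 0.43), ("CI limit", 0.19), ("CI limit", 0.60)]:
    print(f"RRR {rrr:.0%} ({label}): ~{fewer_per_1000(rrr):.0f} fewer VTEs per 1000 per year")
# ~20 at the point estimate; ~9 and ~28 at the CI limits (the text quotes 9 to 27)
```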

15.4 Interpreting results from dichotomous outcomes (including numbers needed to treat)

15.4.1 Relative and absolute risk reductions

Clinicians may be more inclined to prescribe an intervention that reduces the relative risk of death by 25% than one that reduces the risk of death by 1 percentage point, although both presentations of the evidence may relate to the same benefit (i.e. a reduction in risk from 4% to 3%). The former refers to the relative reduction in risk and the latter to the absolute reduction in risk. As described in Chapter 6, Section 6.4.1, there are several measures for comparing dichotomous outcomes in two groups. Meta-analyses are usually undertaken using risk ratios (RR), odds ratios (OR) or risk differences (RD), but there are several alternative ways of expressing results.

Relative risk reduction (RRR) is a convenient way of re-expressing a risk ratio (RR) as a percentage reduction:

RRR = 100% × (1 − RR)

For example, a risk ratio of 0.75 translates to a relative risk reduction of 25%, as in the example above.
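
A quick sketch of the same re-expression in code (the risk ratios are illustrative):

```python
def rrr(rr):
    """Relative risk reduction, in percent, from a risk ratio (RR)."""
    return round(100 * (1 - rr), 1)

print(rrr(0.75))  # 25.0, matching the example above
print(rrr(0.57))  # 43.0: an RR of 0.57 corresponds to the 43% RRR in the heparin example
```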

The risk difference is often referred to as the absolute risk reduction (ARR) or absolute risk increase (ARI), and may be presented as a percentage (e.g. 1%), as a decimal (e.g. 0.01), or as a count (e.g. 10 out of 1000).

15.4.2 Number needed to treat (NNT)

The number needed to treat (NNT) is a common alternative way of presenting information on the effect of an intervention. The NNT is defined as the expected number of people who need to receive the experimental rather than the comparator intervention for one additional person to either incur or avoid an event (depending on the direction of the result) in a given time frame. Thus, for example, an NNT of 10 can be interpreted as ‘it is expected that one additional (or one fewer) person will incur an event for every 10 participants receiving the experimental intervention rather than the comparator over a given time frame’. It is important to be clear that:

  • since the NNT is derived from the risk difference, it is still a comparative measure of effect (experimental versus a specific comparator) and not a general property of a single intervention; and
  • the NNT gives an ‘expected value’. For example, NNT = 10 does not imply that one additional event will occur in each and every group of 10 people.

NNTs can be computed for both beneficial and detrimental events, and for interventions that cause both improvements and deteriorations in outcomes. In all instances NNTs are expressed as positive whole numbers. Some authors use the term ‘number needed to harm’ (NNH) when an intervention leads to an adverse outcome, or a decrease in a positive outcome, rather than improvement. However, this phrase can be misleading (most notably, it can easily be read to imply the number of people who will experience a harmful outcome if given the intervention), and it is strongly recommended that ‘number needed to harm’ and ‘NNH’ are avoided. The preferred alternative is to use phrases such as ‘number needed to treat for an additional beneficial outcome’ (NNTB) and ‘number needed to treat for an additional harmful outcome’ (NNTH) to indicate direction of effect.
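
A minimal sketch of the NNT computation, using the NNTB/NNTH labels recommended above. The sign convention (a positive risk difference denoting a risk reduction) is our assumption for the example; the convention of rounding up to the next whole number is described in Section 15.4.4:

```python
import math

def nnt(risk_difference):
    """NNT from an absolute risk difference, rounded up to the next
    whole number by convention. Here a positive risk difference is
    taken to mean a risk reduction (benefit), a negative one a harm."""
    if risk_difference == 0:
        raise ValueError("no effect: NNT is undefined")
    value = math.ceil(1 / abs(risk_difference))
    return value, ("NNTB" if risk_difference > 0 else "NNTH")

print(nnt(0.10))    # (10, 'NNTB'): one additional beneficial outcome per 10 treated
print(nnt(-0.024))  # (42, 'NNTH'): one additional harmful outcome per 42 treated
```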

As NNTs refer to events, their interpretation needs to be worded carefully when the binary outcome is a dichotomization of a scale-based outcome. For example, if the outcome is pain measured on a ‘none, mild, moderate or severe’ scale it may have been dichotomized as ‘none or mild’ versus ‘moderate or severe’. It would be inappropriate for an NNT from these data to be referred to as an ‘NNT for pain’. It is an ‘NNT for moderate or severe pain’.

We consider different choices for presenting absolute effects in Section 15.4.3. We then describe computations for obtaining these numbers from the results of individual studies and of meta-analyses in Section 15.4.4.

15.4.3 Expressing risk differences

Users of reviews are liable to be influenced by the choice of statistical presentations of the evidence. Hoffrage and colleagues suggest that physicians’ inferences about statistical outcomes are more appropriate when they deal with ‘natural frequencies’ – whole numbers of people, both treated and untreated (e.g. treatment results in a drop from 20 out of 1000 to 10 out of 1000 women having breast cancer) – than when effects are presented as percentages (e.g. 1% absolute reduction in breast cancer risk) (Hoffrage et al 2000). Probabilities may be more difficult to understand than frequencies, particularly when events are rare. While standardization may be important in improving the presentation of research evidence (and participation in healthcare decisions), current evidence suggests that the presentation of natural frequencies for expressing differences in absolute risk is best understood by consumers of healthcare information (Akl et al 2011b). This evidence provides the rationale for presenting absolute risks in ‘Summary of findings’ tables as numbers of people with events per 1000 people receiving the intervention (see Chapter 14).

RRs and RRRs remain crucial because relative effects tend to be substantially more stable across risk groups than absolute effects (see Chapter 10, Section 10.4.3). Review authors can use their own data to study this consistency (Cates 1999, Smeeth et al 1999). Risk differences from studies are least likely to be consistent across baseline event rates; thus, they are rarely appropriate for computing numbers needed to treat in systematic reviews. If a relative effect measure (OR or RR) is chosen for meta-analysis, then a comparator group risk needs to be specified as part of the calculation of an RD or NNT. In addition, if there are several different groups of participants with different levels of risk, it is crucial to express absolute benefit for each clinically identifiable risk group, clarifying the time period to which this applies. Studies in patients with differing severity of disease, or studies with different lengths of follow-up will almost certainly have different comparator group risks. In these cases, different comparator group risks lead to different RDs and NNTs (except when the intervention has no effect). A recommended approach is to re-express an odds ratio or a risk ratio as a variety of RDs or NNTs across a range of assumed comparator risks (ACRs) (McQuay and Moore 1997, Smeeth et al 1999). Review authors should bear these considerations in mind not only when constructing their ‘Summary of findings’ table, but also in the text of their review.

For example, a review of oral anticoagulants to prevent stroke presented information to users by describing absolute benefits for various baseline risks (Aguilar and Hart 2005, Aguilar et al 2007). They presented their principal findings as “The inherent risk of stroke should be considered in the decision to use oral anticoagulants in atrial fibrillation patients, selecting those who stand to benefit most for this therapy” (Aguilar and Hart 2005). Among high-risk atrial fibrillation patients with prior stroke or transient ischaemic attack who have stroke rates of about 12% (120 per 1000) per year, warfarin prevents about 70 strokes yearly per 1000 patients, whereas for low-risk atrial fibrillation patients (with a stroke rate of about 2% per year or 20 per 1000), warfarin prevents only about 12 strokes yearly per 1000 patients. This presentation helps users to understand the important impact that typical baseline risks have on the absolute benefit that they can expect.

15.4.4 Computations

Direct computation of risk difference (RD) or a number needed to treat (NNT) depends on the summary statistic (odds ratio, risk ratio or risk differences) available from the study or meta-analysis. When expressing results of meta-analyses, review authors should use, in the computations, whatever statistic they determined to be the most appropriate summary for meta-analysis (see Chapter 10, Section 10.4.3 ). Here we present calculations to obtain RD as a reduction in the number of participants per 1000. For example, a risk difference of –0.133 corresponds to 133 fewer participants with the event per 1000.

RDs and NNTs should not be computed from the aggregated total numbers of participants and events across the trials. This approach ignores the randomization within studies, and may produce seriously misleading results if there is unbalanced randomization in any of the studies. Using the pooled result of a meta-analysis is more appropriate. When computing NNTs, the values obtained are by convention always rounded up to the next whole number.

15.4.4.1 Computing NNT from a risk difference (RD)

An NNT may be computed from a risk difference as

$$\text{NNT} = \frac{1}{|\text{RD}|}$$

where the vertical bars (‘absolute value of’) in the denominator indicate that any minus sign should be ignored. It is convention to round the NNT up to the nearest whole number. For example, if the risk difference is –0.12 the NNT is 9; if the risk difference is –0.22 the NNT is 5. Cochrane Review authors should qualify the NNT as referring to benefit (improvement) or harm by denoting the NNT as NNTB or NNTH. Note that this approach, although feasible, should be used only for the results of a meta-analysis of risk differences. In most cases meta-analyses will be undertaken using a relative measure of effect (RR or OR), and those statistics should be used to calculate the NNT (see Section 15.4.4.2 and 15.4.4.3 ).
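As a quick check of this rule, here is a minimal Python sketch (the function name is ours) that reproduces the two examples above:

```python
import math

def nnt_from_rd(rd: float) -> int:
    """NNT from a risk difference: 1/|RD|, rounded up by convention."""
    return math.ceil(1 / abs(rd))

print(nnt_from_rd(-0.12))  # 9
print(nnt_from_rd(-0.22))  # 5
```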

15.4.4.2 Computing risk differences or NNT from a risk ratio

To aid interpretation of the results of a meta-analysis of risk ratios, review authors may compute an absolute risk reduction or NNT. In order to do this, an assumed comparator risk (ACR) (otherwise known as a baseline risk, or risk that the outcome of interest would occur with the comparator intervention) is required. It will usually be appropriate to do this for a range of different ACRs. The computation proceeds as follows:

$$\text{RD} = \text{ACR} \times (\text{RR} - 1), \qquad \text{NNT} = \frac{1}{|\text{ACR} \times (\text{RR} - 1)|}$$

As an example, suppose the risk ratio is RR = 0.92, and an ACR = 0.3 (300 per 1000) is assumed. Then the effect on risk is 24 fewer per 1000:

$$\text{RD} = 0.3 \times (0.92 - 1) = -0.024$$

The NNT is 42:

$$\text{NNT} = \frac{1}{|0.3 \times (0.92 - 1)|} = \frac{1}{0.024} = 41.7 \rightarrow 42$$
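Both steps can be reproduced with a short Python sketch (function names are ours; ACR and RR as defined above):

```python
import math

def rd_from_rr(rr: float, acr: float) -> float:
    """Risk difference implied by a risk ratio at an assumed comparator risk."""
    return acr * (rr - 1)

def nnt_from_rr(rr: float, acr: float) -> int:
    """NNT from a risk ratio and ACR, rounded up by convention."""
    return math.ceil(1 / abs(rd_from_rr(rr, acr)))

print(round(1000 * rd_from_rr(0.92, 0.30)))  # -24, i.e. 24 fewer per 1000
print(nnt_from_rr(0.92, 0.30))               # 42
```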

15.4.4.3 Computing risk differences or NNT from an odds ratio

Review authors may wish to compute a risk difference or NNT from the results of a meta-analysis of odds ratios. In order to do this, an ACR is required. It will usually be appropriate to do this for a range of different ACRs. The computation proceeds as follows:

$$\text{RD} = \frac{\text{OR} \times \text{ACR}}{1 - \text{ACR} + \text{OR} \times \text{ACR}} - \text{ACR}, \qquad \text{NNT} = \frac{1}{|\text{RD}|}$$

As an example, suppose the odds ratio is OR = 0.73, and a comparator risk of ACR = 0.3 is assumed. Then the effect on risk is 62 fewer per 1000:

$$\text{RD} = \frac{0.73 \times 0.3}{1 - 0.3 + 0.73 \times 0.3} - 0.3 = \frac{0.219}{0.919} - 0.3 = -0.0617$$

The NNT is 17:

$$\text{NNT} = \frac{1}{|{-0.0617}|} = 16.2 \rightarrow 17$$
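Again as a sketch in Python (names ours), reproducing the worked example:

```python
import math

def rd_from_or(odds_ratio: float, acr: float) -> float:
    """Risk difference implied by an odds ratio at an assumed comparator risk."""
    risk_with_intervention = odds_ratio * acr / (1 - acr + odds_ratio * acr)
    return risk_with_intervention - acr

def nnt_from_or(odds_ratio: float, acr: float) -> int:
    """NNT from an odds ratio and ACR, rounded up by convention."""
    return math.ceil(1 / abs(rd_from_or(odds_ratio, acr)))

print(round(1000 * rd_from_or(0.73, 0.30)))  # -62, i.e. 62 fewer per 1000
print(nnt_from_or(0.73, 0.30))               # 17
```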

15.4.4.4 Computing risk ratio from an odds ratio

Because risk ratios are easier to interpret than odds ratios, but odds ratios have favourable mathematical properties, a review author may decide to undertake a meta-analysis based on odds ratios, but to express the result as a summary risk ratio (or relative risk reduction). This requires an ACR. Then

$$\text{RR} = \frac{\text{OR}}{1 - \text{ACR} \times (1 - \text{OR})}$$

It will often be reasonable to perform this transformation using the median comparator group risk from the studies in the meta-analysis.
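A one-line version of this conversion in Python (a sketch; the name is ours), using the values from Section 15.4.4.3 as a check:

```python
def rr_from_or(odds_ratio: float, acr: float) -> float:
    """Convert an odds ratio to a risk ratio at an assumed comparator risk."""
    return odds_ratio / (1 - acr * (1 - odds_ratio))

print(round(rr_from_or(0.73, 0.30), 2))  # 0.79
```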

15.4.4.5 Computing confidence limits

Confidence limits for RDs and NNTs may be calculated by applying the above formulae to the upper and lower confidence limits for the summary statistic (RD, RR or OR) (Altman 1998). Note that this confidence interval does not incorporate uncertainty around the ACR.

If the 95% confidence interval of OR or RR includes the value 1, one of the confidence limits will indicate benefit and the other harm. Thus, appropriate use of the words ‘fewer’ and ‘more’ is required for each limit when presenting results in terms of events. For NNTs, the two confidence limits should be labelled as NNTB and NNTH to indicate the direction of effect in each case. The confidence interval for the NNT will include a ‘discontinuity’, because increasingly smaller risk differences that approach zero will lead to NNTs approaching infinity. Thus, the confidence interval will include both an infinitely large NNTB and an infinitely large NNTH.
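To illustrate, the sketch below (our own, with hypothetical numbers) applies the risk ratio formula from Section 15.4.4.2 to each confidence limit and labels the result NNTB or NNTH; note that as a limit approaches RR = 1 the corresponding NNT grows without bound, which is the discontinuity described above.

```python
import math

def nnt_with_direction(rr: float, acr: float) -> str:
    """NNT at a given confidence limit, labelled NNTB/NNTH by direction."""
    rd = acr * (rr - 1)
    label = "NNTB" if rd < 0 else "NNTH"  # fewer events assumed beneficial here
    return f"{label} {math.ceil(1 / abs(rd))}"

# Hypothetical 95% CI for RR that crosses 1, evaluated at ACR = 0.30:
print(nnt_with_direction(0.80, 0.30))  # NNTB 17
print(nnt_with_direction(1.10, 0.30))  # NNTH 34
```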

15.5 Interpreting results from continuous outcomes (including standardized mean differences)

15.5.1 Meta-analyses with continuous outcomes

Review authors should describe in the study protocol how they plan to interpret results for continuous outcomes. When outcomes are continuous, review authors have a number of options to present summary results. These options differ if studies report the same measure that is familiar to the target audiences, studies report the same or very similar measures that are less familiar to the target audiences, or studies report different measures.

15.5.2 Meta-analyses with continuous outcomes using the same measure

If all studies have used the same familiar units, for instance, results are expressed as durations of events, such as symptoms for conditions including diarrhoea, sore throat, otitis media, influenza or duration of hospitalization, a meta-analysis may generate a summary estimate in those units, as a difference in mean response (see, for instance, the row summarizing results for duration of diarrhoea in Chapter 14, Figure 14.1.b and the row summarizing oedema in Chapter 14, Figure 14.1.a). For such outcomes, the ‘Summary of findings’ table should include a difference of means between the two interventions. However, the units of such outcomes may be difficult to interpret, particularly when they relate to rating scales (again, see the oedema row of Chapter 14, Figure 14.1.a). ‘Summary of findings’ tables should therefore include the minimum and maximum of the scale of measurement, and the direction. Knowledge of the smallest change in instrument score that patients perceive is important – the minimal important difference (MID) – and can greatly facilitate the interpretation of results (Guyatt et al 1998, Schünemann and Guyatt 2005). Knowing the MID allows review authors and users to place results in context. Review authors should state the MID – if known – in the Comments column of their ‘Summary of findings’ table. For example, the chronic respiratory questionnaire has possible scores in health-related quality of life ranging from 1 to 7 and 0.5 represents a well-established MID (Jaeschke et al 1989, Schünemann et al 2005).

15.5.3 Meta-analyses with continuous outcomes using different measures

When studies have used different instruments to measure the same construct, a standardized mean difference (SMD) may be used in meta-analysis for combining continuous data. Without guidance, clinicians and patients may have little idea how to interpret results presented as SMDs. Review authors should therefore consider issues of interpretability when planning their analysis at the protocol stage and should consider whether there will be suitable ways to re-express the SMD or whether alternative effect measures, such as a ratio of means or minimal important difference units (Guyatt et al 2013b), should be used. Table 15.5.a and the following sections describe these options.

Table 15.5.a Approaches to presenting results of continuous variables when primary studies have used different instruments to measure the same construct, and their implications. Adapted from Guyatt et al (2013b)

15.5.3.1 Presenting and interpreting SMDs using generic effect size estimates

The SMD expresses the intervention effect in standard units rather than the original units of measurement. The SMD is the difference in mean effects between the experimental and comparator groups divided by the pooled standard deviation of participants’ outcomes, or external SDs when studies are very small (see Chapter 6, Section 6.5.1.2 ). The value of a SMD thus depends on both the size of the effect (the difference between means) and the standard deviation of the outcomes (the inherent variability among participants or based on an external SD).

If review authors use the SMD, they might choose to present the results directly as SMDs (row 1a, Table 15.5.a and Table 15.5.b). However, absolute values of the intervention and comparison groups are typically not useful because studies have used different measurement instruments with different units. Guiding rules for interpreting SMDs (or ‘Cohen’s effect sizes’) exist, and have arisen mainly from researchers in the social sciences (Cohen 1988). One example is as follows: 0.2 represents a small effect, 0.5 a moderate effect and 0.8 a large effect (Cohen 1988). Variations exist (e.g. <0.40 = small, 0.40 to 0.70 = moderate, >0.70 = large). Review authors might consider including such a guiding rule in interpreting the SMD in the text of the review, and in summary versions such as the Comments column of a ‘Summary of findings’ table. However, some methodologists believe that such interpretations are problematic because the importance of a finding to patients is context-dependent and not amenable to generic statements.

15.5.3.2 Re-expressing SMDs using a familiar instrument

The second possibility for interpreting the SMD is to express it in the units of one or more of the specific measurement instruments used by the included studies (row 1b, Table 15.5.a and Table 15.5.b ). The approach is to calculate an absolute difference in means by multiplying the SMD by an estimate of the SD associated with the most familiar instrument. To obtain this SD, a reasonable option is to calculate a weighted average across all intervention groups of all studies that used the selected instrument (preferably a pre-intervention or post-intervention SD as discussed in Chapter 10, Section 10.5.2 ). To better reflect among-person variation in practice, or to use an instrument not represented in the meta-analysis, it may be preferable to use a standard deviation from a representative observational study. The summary effect is thus re-expressed in the original units of that particular instrument and the clinical relevance and impact of the intervention effect can be interpreted using that familiar instrument.
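For instance, a minimal sketch of this re-expression (our own; the SMD and SD values are hypothetical):

```python
def smd_to_familiar_units(smd: float, sd_familiar: float) -> float:
    """Re-express an SMD as a mean difference on a familiar instrument."""
    return smd * sd_familiar

# Hypothetical: SMD = -0.5, weighted-average SD of a familiar 0-100 scale = 20
print(smd_to_familiar_units(-0.5, 20.0))  # -10.0 points on that scale
```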

The same approach of re-expressing the results for a familiar instrument can also be used for other standardized effect measures such as when standardizing by MIDs (Guyatt et al 2013b): see Section 15.5.3.5 .

Table 15.5.b Application of approaches when studies have used different measures: effects of dexamethasone for pain after laparoscopic cholecystectomy (Karanicolas et al 2008). Reproduced with permission of Wolters Kluwer

1. Certainty rated according to GRADE, from very low to high certainty.
2. Substantial unexplained heterogeneity in study results.
3. Imprecision due to wide confidence intervals.
4. The 20% comes from the proportion in the control group requiring rescue analgesia.
5. Crude (arithmetic) means of the post-operative pain mean responses across all five trials when transformed to a 100-point scale.

15.5.3.3 Re-expressing SMDs through dichotomization and transformation to relative and absolute measures

A third approach (row 1c, Table 15.5.a and Table 15.5.b ) relies on converting the continuous measure into a dichotomy and thus allows calculation of relative and absolute effects on a binary scale. A transformation of a SMD to a (log) odds ratio is available, based on the assumption that an underlying continuous variable has a logistic distribution with equal standard deviation in the two intervention groups, as discussed in Chapter 10, Section 10.6  (Furukawa 1999, Guyatt et al 2013b). The assumption is unlikely to hold exactly and the results must be regarded as an approximation. The log odds ratio is estimated as

$$\ln(\text{OR}) = \frac{\pi}{\sqrt{3}} \times \text{SMD}$$

(or approximately 1.81×SMD). The resulting odds ratio can then be presented in the usual way, and in a ‘Summary of findings’ table, combined with an assumed comparator group risk to be expressed as an absolute risk difference. The comparator group risk in this case would refer to the proportion of people who have achieved a specific value of the continuous outcome. In randomized trials this can be interpreted as the proportion who have improved by some (specified) amount (responders), for instance by 5 points on a 0 to 100 scale. Table 15.5.c shows some illustrative results from this method. The risk differences can then be converted to NNTs or to people per thousand using methods described in Section 15.4.4.
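A sketch of the full chain in Python (our own; the SMD and ACR values are hypothetical), combining this transformation with the odds-ratio-to-risk-difference formula from Section 15.4.4.3:

```python
import math

def or_from_smd(smd: float) -> float:
    """Approximate odds ratio from an SMD: ln(OR) = (pi / sqrt(3)) * SMD."""
    return math.exp(math.pi / math.sqrt(3) * smd)

def rd_from_or(odds_ratio: float, acr: float) -> float:
    """Risk difference implied by an odds ratio at an assumed comparator risk."""
    return odds_ratio * acr / (1 - acr + odds_ratio * acr) - acr

odds_ratio = or_from_smd(-0.5)   # hypothetical SMD of -0.5
print(round(odds_ratio, 2))      # ~0.40
print(round(1000 * rd_from_or(odds_ratio, 0.30)))  # -152, i.e. about 152 fewer per 1000 at ACR = 0.30
```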

Table 15.5.c Risk difference derived for specific SMDs for various given ‘proportions improved’ in the comparator group (Furukawa 1999, Guyatt et al 2013b). Reproduced with permission of Elsevier 

15.5.3.4 Ratio of means

A more frequently used approach is based on calculation of a ratio of means between the intervention and comparator groups (Friedrich et al 2008), as discussed in Chapter 6, Section 6.5.1.3. Interpretational advantages of this approach include the ability to pool studies with outcomes expressed in different units directly, the avoidance of the vulnerability to heterogeneous populations that limits approaches relying on SD units, and ease of clinical interpretation (row 2, Table 15.5.a and Table 15.5.b). This method is currently designed for post-intervention scores only. However, it is possible to calculate a ratio of change scores if both intervention and comparator groups change in the same direction in each relevant study, and this ratio may sometimes be informative.

Limitations to this approach include its limited applicability to change scores (since it is unlikely that both intervention and comparator group changes are in the same direction in all studies) and the possibility of misleading results if the comparator group mean is very small, in which case even a modest difference from the intervention group will yield a large and therefore misleading ratio of means. It also requires that separate ratios of means be calculated for each included study, and then entered into a generic inverse variance meta-analysis (see Chapter 10, Section 10.3 ).

The ratio of means approach illustrated in Table 15.5.b suggests a relative reduction in pain of only 13%, meaning that those receiving steroids have a pain severity that is 87% of that in the comparator group, an effect that might be considered modest.

15.5.3.5 Presenting continuous results as minimally important difference units

To express results in MID units, review authors have two options, each illustrated with a short sketch below. First, the mean differences can be combined across studies in the same way as the SMD, but instead of dividing the mean difference of each study by its SD, review authors divide by the MID associated with that outcome (Johnston et al 2010, Guyatt et al 2013b). Instead of SD units, the pooled results represent MID units (row 3, Table 15.5.a and Table 15.5.b), and may be more easily interpretable. This approach avoids the problem of varying SDs across studies that may distort estimates of effect in approaches that rely on the SMD. The approach, however, relies on having well-established MIDs. The approach is also risky in that a difference less than the MID may be interpreted as trivial when a substantial proportion of patients may have achieved an important benefit.
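As an illustration of the first option, dividing a mean difference by the MID of the outcome gives the effect in MID units; a minimal sketch (ours, with values taken from the CRQ example below):

```python
def md_in_mid_units(mean_difference: float, mid: float) -> float:
    """Express a mean difference in MID units by dividing by the outcome's MID."""
    return mean_difference / mid

# A mean difference of 0.71 on an instrument whose MID is 0.5:
print(round(md_in_mid_units(0.71, 0.5), 2))  # 1.42 MID units
```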

The other approach makes a simple conversion (not shown in Table 15.5.b), before undertaking the meta-analysis, of the means and SDs from each study to means and SDs on the scale of a particular familiar instrument whose MID is known. For example, one can rescale the mean and SD of other chronic respiratory disease instruments (e.g. rescaling a 0 to 100 score of an instrument) to the 1 to 7 scale of the Chronic Respiratory Disease Questionnaire (CRQ) (by assuming 0 equals 1 and 100 equals 7 on the CRQ). Given the MID of the CRQ of 0.5, a mean difference in change of 0.71 after rescaling of all studies suggests a substantial effect of the intervention (Guyatt et al 2013b). This approach, presenting in units of the most familiar instrument, may be the most desirable when the target audiences have extensive experience with that instrument, particularly if the MID is well established.
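A sketch of such a rescaling (ours), mapping a hypothetical 0 to 100 instrument onto the CRQ’s 1 to 7 range:

```python
def rescale_to_crq(score: float) -> float:
    """Linearly map a 0-100 score onto the CRQ's 1-7 range (0 -> 1, 100 -> 7)."""
    return 1.0 + (score / 100.0) * 6.0

print(rescale_to_crq(0.0))    # 1.0
print(rescale_to_crq(100.0))  # 7.0
print(rescale_to_crq(50.0))   # 4.0
# Mean differences and SDs rescale by the ratio of ranges: md_crq = md_0_100 * 6 / 100
```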

15.6 Drawing conclusions

15.6.1 Conclusions sections of a Cochrane Review

Authors’ conclusions in a Cochrane Review are divided into implications for practice and implications for research. While Cochrane Reviews about interventions can provide meaningful information and guidance for practice, decisions about the desirable and undesirable consequences of healthcare options require evidence and judgements for criteria that most Cochrane Reviews do not provide (Alonso-Coello et al 2016). In describing the implications for practice and the development of recommendations, however, review authors may consider the certainty of the evidence, the balance of benefits and harms, and assumed values and preferences.

15.6.2 Implications for practice

Drawing conclusions about the practical usefulness of an intervention entails making trade-offs, either implicitly or explicitly, between the estimated benefits, harms and the values and preferences. Making such trade-offs, and thus making specific recommendations for an action in a specific context, goes beyond a Cochrane Review and requires additional evidence and informed judgements that most Cochrane Reviews do not provide (Alonso-Coello et al 2016). Such judgements are typically the domain of clinical practice guideline developers for which Cochrane Reviews will provide crucial information (Graham et al 2011, Schünemann et al 2014, Zhang et al 2018a). Thus, authors of Cochrane Reviews should not make recommendations.

If review authors feel compelled to lay out actions that clinicians and patients could take, they should – after describing the certainty of evidence and the balance of benefits and harms – highlight different actions that might be consistent with particular patterns of values and preferences. Other factors that might influence a decision should also be highlighted, including any known factors that would be expected to modify the effects of the intervention, the baseline risk or status of the patient, costs and who bears those costs, and the availability of resources. Review authors should ensure they consider all patient-important outcomes, including those for which limited data may be available. In the context of public health reviews the focus may be on population-important outcomes as the target may be an entire (non-diseased) population and include outcomes that are not measured in the population receiving an intervention (e.g. a reduction of transmission of infections from those receiving an intervention). This process implies a high level of explicitness in judgements about values or preferences attached to different outcomes and the certainty of the related evidence (Zhang et al 2018b, Zhang et al 2018c); this and a full cost-effectiveness analysis are beyond the scope of most Cochrane Reviews (although they might well be used for such analyses; see Chapter 20).

A review on the use of anticoagulation in cancer patients to increase survival (Akl et al 2011a) provides an example for laying out clinical implications for situations where there are important trade-offs between desirable and undesirable effects of the intervention: “The decision for a patient with cancer to start heparin therapy for survival benefit should balance the benefits and downsides and integrate the patient’s values and preferences. Patients with a high preference for a potential survival prolongation, limited aversion to potential bleeding, and who do not consider heparin (both UFH or LMWH) therapy a burden may opt to use heparin, while those with aversion to bleeding may not.”

15.6.3 Implications for research

The second category for authors’ conclusions in a Cochrane Review is implications for research. To help people make well-informed decisions about future healthcare research, the ‘Implications for research’ section should comment on the need for further research, and the nature of the further research that would be most desirable. It is helpful to consider the population, intervention, comparison and outcomes that could be addressed, or addressed more effectively in the future, in the context of the certainty of the evidence in the current review (Brown et al 2006):

  • P (Population): diagnosis, disease stage, comorbidity, risk factor, sex, age, ethnic group, specific inclusion or exclusion criteria, clinical setting;
  • I (Intervention): type, frequency, dose, duration, prognostic factor;
  • C (Comparison): placebo, routine care, alternative treatment/management;
  • O (Outcome): which clinical or patient-related outcomes will the researcher need to measure, improve, influence or accomplish? Which methods of measurement should be used?

While Cochrane Review authors will find the PICO domains helpful, the domains of the GRADE certainty framework further support understanding and describing what additional research will improve the certainty in the available evidence. Note that as the certainty of the evidence is likely to vary by outcome, these implications will be specific to certain outcomes in the review. Table 15.6.a shows how review authors may be aided in their interpretation of the body of evidence and drawing conclusions about future research and practice.

Table 15.6.a Implications for research and practice suggested by individual GRADE domains

The review of compression stockings for prevention of deep vein thrombosis (DVT) in airline passengers described in Chapter 14 provides an example where there is some convincing evidence of a benefit of the intervention: “This review shows that the question of the effects on symptomless DVT of wearing versus not wearing compression stockings in the types of people studied in these trials should now be regarded as answered. Further research may be justified to investigate the relative effects of different strengths of stockings or of stockings compared to other preventative strategies. Further randomised trials to address the remaining uncertainty about the effects of wearing versus not wearing compression stockings on outcomes such as death, pulmonary embolism and symptomatic DVT would need to be large.” (Clarke et al 2016).

A review of therapeutic touch for anxiety disorder provides an example of the implications for research when no eligible studies had been found: “This review highlights the need for randomized controlled trials to evaluate the effectiveness of therapeutic touch in reducing anxiety symptoms in people diagnosed with anxiety disorders. Future trials need to be rigorous in design and delivery, with subsequent reporting to include high quality descriptions of all aspects of methodology to enable appraisal and interpretation of results.” (Robinson et al 2007).

15.6.4 Reaching conclusions

A common mistake is to confuse ‘no evidence of an effect’ with ‘evidence of no effect’. When the confidence intervals are wide (e.g. including no effect), it is wrong to claim that the experimental intervention has ‘no effect’ or is ‘no different’ from the comparator intervention. Review authors may also incorrectly ‘positively’ frame results for some effects but not others. For example, when the effect estimate is positive for a beneficial outcome but confidence intervals are wide, review authors may describe the effect as promising. However, when the effect estimate is negative for an outcome that is considered harmful but the confidence intervals include no effect, review authors may report ‘no effect’. Another mistake is to frame the conclusion in wishful terms. For example, review authors might write, “there were too few people in the analysis to detect a reduction in mortality” when the included studies showed a reduction or even increase in mortality that was not ‘statistically significant’. One way of avoiding errors such as these is to consider the results blinded; that is, consider how the results would be presented and framed in the conclusions if the direction of the results was reversed. If the confidence interval for the estimate of the difference in the effects of the interventions overlaps with no effect, the analysis is compatible with both a true beneficial effect and a true harmful effect. If one of the possibilities is mentioned in the conclusion, the other possibility should be mentioned as well. Table 15.6.b suggests narrative statements for drawing conclusions based on the effect estimate from the meta-analysis and the certainty of the evidence.

Table 15.6.b Suggested narrative statements for phrasing conclusions

Another common mistake is to reach conclusions that go beyond the evidence. Often this is done implicitly, without referring to the additional information or judgements that are used in reaching conclusions about the implications of a review for practice. Even when additional information and explicit judgements support conclusions about the implications of a review for practice, review authors rarely conduct systematic reviews of the additional information. Furthermore, implications for practice are often dependent on specific circumstances and values that must be taken into consideration. As we have noted, review authors should always be cautious when drawing conclusions about implications for practice and they should not make recommendations.

15.7 Chapter information

Authors: Holger J Schünemann, Gunn E Vist, Julian PT Higgins, Nancy Santesso, Jonathan J Deeks, Paul Glasziou, Elie Akl, Gordon H Guyatt; on behalf of the Cochrane GRADEing Methods Group

Acknowledgements: Andrew Oxman, Jonathan Sterne, Michael Borenstein and Rob Scholten contributed text to earlier versions of this chapter.

Funding: This work was in part supported by funding from the Michael G DeGroote Cochrane Canada Centre and the Ontario Ministry of Health. JJD receives support from the National Institute for Health Research (NIHR) Birmingham Biomedical Research Centre at the University Hospitals Birmingham NHS Foundation Trust and the University of Birmingham. JPTH receives support from the NIHR Biomedical Research Centre at University Hospitals Bristol NHS Foundation Trust and the University of Bristol. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

15.8 References

Aguilar MI, Hart R. Oral anticoagulants for preventing stroke in patients with non-valvular atrial fibrillation and no previous history of stroke or transient ischemic attacks. Cochrane Database of Systematic Reviews 2005; 3 : CD001927.

Aguilar MI, Hart R, Pearce LA. Oral anticoagulants versus antiplatelet therapy for preventing stroke in patients with non-valvular atrial fibrillation and no history of stroke or transient ischemic attacks. Cochrane Database of Systematic Reviews 2007; 3 : CD006186.

Akl EA, Gunukula S, Barba M, Yosuico VE, van Doormaal FF, Kuipers S, Middeldorp S, Dickinson HO, Bryant A, Schünemann H. Parenteral anticoagulation in patients with cancer who have no therapeutic or prophylactic indication for anticoagulation. Cochrane Database of Systematic Reviews 2011a; 1 : CD006652.

Akl EA, Oxman AD, Herrin J, Vist GE, Terrenato I, Sperati F, Costiniuk C, Blank D, Schünemann H. Using alternative statistical formats for presenting risks and risk reductions. Cochrane Database of Systematic Reviews 2011b; 3 : CD006776.

Alonso-Coello P, Schünemann HJ, Moberg J, Brignardello-Petersen R, Akl EA, Davoli M, Treweek S, Mustafa RA, Rada G, Rosenbaum S, Morelli A, Guyatt GH, Oxman AD, Group GW. GRADE Evidence to Decision (EtD) frameworks: a systematic and transparent approach to making well informed healthcare choices. 1: Introduction. BMJ 2016; 353 : i2016.

Altman DG. Confidence intervals for the number needed to treat. BMJ 1998; 317 : 1309-1312.

Atkins D, Best D, Briss PA, Eccles M, Falck-Ytter Y, Flottorp S, Guyatt GH, Harbour RT, Haugh MC, Henry D, Hill S, Jaeschke R, Leng G, Liberati A, Magrini N, Mason J, Middleton P, Mrukowicz J, O'Connell D, Oxman AD, Phillips B, Schünemann HJ, Edejer TT, Varonen H, Vist GE, Williams JW, Jr., Zaza S. Grading quality of evidence and strength of recommendations. BMJ 2004; 328 : 1490.

Brown P, Brunnhuber K, Chalkidou K, Chalmers I, Clarke M, Fenton M, Forbes C, Glanville J, Hicks NJ, Moody J, Twaddle S, Timimi H, Young P. How to formulate research recommendations. BMJ 2006; 333 : 804-806.

Cates C. Confidence intervals for the number needed to treat: Pooling numbers needed to treat may not be reliable. BMJ 1999; 318 : 1764-1765.

Clarke MJ, Broderick C, Hopewell S, Juszczak E, Eisinga A. Compression stockings for preventing deep vein thrombosis in airline passengers. Cochrane Database of Systematic Reviews 2016; 9 : CD004002.

Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale (NJ): Lawrence Erlbaum Associates, Inc.; 1988.

Coleman T, Chamberlain C, Davey MA, Cooper SE, Leonardi-Bee J. Pharmacological interventions for promoting smoking cessation during pregnancy. Cochrane Database of Systematic Reviews 2015; 12 : CD010078.

Dans AM, Dans L, Oxman AD, Robinson V, Acuin J, Tugwell P, Dennis R, Kang D. Assessing equity in clinical practice guidelines. Journal of Clinical Epidemiology 2007; 60 : 540-546.

Friedman LM, Furberg CD, DeMets DL. Fundamentals of Clinical Trials. 2nd ed. Littleton (MA): John Wright PSG, Inc.; 1985.

Friedrich JO, Adhikari NK, Beyene J. The ratio of means method as an alternative to mean differences for analyzing continuous outcome variables in meta-analysis: a simulation study. BMC Medical Research Methodology 2008; 8 : 32.

Furukawa T. From effect size into number needed to treat. Lancet 1999; 353 : 1680.

Graham R, Mancher M, Wolman DM, Greenfield S, Steinberg E. Committee on Standards for Developing Trustworthy Clinical Practice Guidelines, Board on Health Care Services: Clinical Practice Guidelines We Can Trust. Washington, DC: National Academies Press; 2011.

Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, Norris S, Falck-Ytter Y, Glasziou P, DeBeer H, Jaeschke R, Rind D, Meerpohl J, Dahm P, Schünemann HJ. GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. Journal of Clinical Epidemiology 2011a; 64 : 383-394.

Guyatt GH, Juniper EF, Walter SD, Griffith LE, Goldstein RS. Interpreting treatment effects in randomised trials. BMJ 1998; 316 : 690-693.

Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, Schünemann HJ. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008; 336 : 924-926.

Guyatt GH, Oxman AD, Kunz R, Woodcock J, Brozek J, Helfand M, Alonso-Coello P, Falck-Ytter Y, Jaeschke R, Vist G, Akl EA, Post PN, Norris S, Meerpohl J, Shukla VK, Nasser M, Schünemann HJ. GRADE guidelines: 8. Rating the quality of evidence--indirectness. Journal of Clinical Epidemiology 2011b; 64 : 1303-1310.

Guyatt GH, Oxman AD, Santesso N, Helfand M, Vist G, Kunz R, Brozek J, Norris S, Meerpohl J, Djulbegovic B, Alonso-Coello P, Post PN, Busse JW, Glasziou P, Christensen R, Schünemann HJ. GRADE guidelines: 12. Preparing summary of findings tables-binary outcomes. Journal of Clinical Epidemiology 2013a; 66 : 158-172.

Guyatt GH, Thorlund K, Oxman AD, Walter SD, Patrick D, Furukawa TA, Johnston BC, Karanicolas P, Akl EA, Vist G, Kunz R, Brozek J, Kupper LL, Martin SL, Meerpohl JJ, Alonso-Coello P, Christensen R, Schünemann HJ. GRADE guidelines: 13. Preparing summary of findings tables and evidence profiles-continuous outcomes. Journal of Clinical Epidemiology 2013b; 66 : 173-183.

Hawe P, Shiell A, Riley T, Gold L. Methods for exploring implementation variation and local context within a cluster randomised community intervention trial. Journal of Epidemiology and Community Health 2004; 58 : 788-793.

Hoffrage U, Lindsey S, Hertwig R, Gigerenzer G. Medicine. Communicating statistical information. Science 2000; 290 : 2261-2262.

Jaeschke R, Singer J, Guyatt GH. Measurement of health status. Ascertaining the minimal clinically important difference. Controlled Clinical Trials 1989; 10 : 407-415.

Johnston B, Thorlund K, Schünemann H, Xie F, Murad M, Montori V, Guyatt G. Improving the interpretation of health-related quality of life evidence in meta-analysis: the application of minimal important difference units. Health and Quality of Life Outcomes 2010; 11 : 116.

Karanicolas PJ, Smith SE, Kanbur B, Davies E, Guyatt GH. The impact of prophylactic dexamethasone on nausea and vomiting after laparoscopic cholecystectomy: a systematic review and meta-analysis. Annals of Surgery 2008; 248 : 751-762.

Lumley J, Oliver SS, Chamberlain C, Oakley L. Interventions for promoting smoking cessation during pregnancy. Cochrane Database of Systematic Reviews 2004; 4 : CD001055.

McQuay HJ, Moore RA. Using numerical results from systematic reviews in clinical practice. Annals of Internal Medicine 1997; 126 : 712-720.

Resnicow K, Cross D, Wynder E. The Know Your Body program: a review of evaluation studies. Bulletin of the New York Academy of Medicine 1993; 70 : 188-207.

Robinson J, Biley FC, Dolk H. Therapeutic touch for anxiety disorders. Cochrane Database of Systematic Reviews 2007; 3 : CD006240.

Rothwell PM. External validity of randomised controlled trials: "to whom do the results of this trial apply?". Lancet 2005; 365 : 82-93.

Santesso N, Carrasco-Labra A, Langendam M, Brignardello-Petersen R, Mustafa RA, Heus P, Lasserson T, Opiyo N, Kunnamo I, Sinclair D, Garner P, Treweek S, Tovey D, Akl EA, Tugwell P, Brozek JL, Guyatt G, Schünemann HJ. Improving GRADE evidence tables part 3: detailed guidance for explanatory footnotes supports creating and understanding GRADE certainty in the evidence judgments. Journal of Clinical Epidemiology 2016; 74 : 28-39.

Schünemann HJ, Puhan M, Goldstein R, Jaeschke R, Guyatt GH. Measurement properties and interpretability of the Chronic respiratory disease questionnaire (CRQ). COPD: Journal of Chronic Obstructive Pulmonary Disease 2005; 2 : 81-89.

Schünemann HJ, Guyatt GH. Commentary--goodbye M(C)ID! Hello MID, where do you come from? Health Services Research 2005; 40 : 593-597.

Schünemann HJ, Fretheim A, Oxman AD. Improving the use of research evidence in guideline development: 13. Applicability, transferability and adaptation. Health Research Policy and Systems 2006; 4 : 25.

Schünemann HJ. Methodological idiosyncracies, frameworks and challenges of non-pharmaceutical and non-technical treatment interventions. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen 2013; 107 : 214-220.

Schünemann HJ, Tugwell P, Reeves BC, Akl EA, Santesso N, Spencer FA, Shea B, Wells G, Helfand M. Non-randomized studies as a source of complementary, sequential or replacement evidence for randomized controlled trials in systematic reviews on the effects of interventions. Research Synthesis Methods 2013; 4 : 49-62.

Schünemann HJ, Wiercioch W, Etxeandia I, Falavigna M, Santesso N, Mustafa R, Ventresca M, Brignardello-Petersen R, Laisaar KT, Kowalski S, Baldeh T, Zhang Y, Raid U, Neumann I, Norris SL, Thornton J, Harbour R, Treweek S, Guyatt G, Alonso-Coello P, Reinap M, Brozek J, Oxman A, Akl EA. Guidelines 2.0: systematic development of a comprehensive checklist for a successful guideline enterprise. CMAJ: Canadian Medical Association Journal 2014; 186 : E123-142.

Schünemann HJ. Interpreting GRADE's levels of certainty or quality of the evidence: GRADE for statisticians, considering review information size or less emphasis on imprecision? Journal of Clinical Epidemiology 2016; 75 : 6-15.

Smeeth L, Haines A, Ebrahim S. Numbers needed to treat derived from meta-analyses--sometimes informative, usually misleading. BMJ 1999; 318 : 1548-1551.

Sun X, Briel M, Busse JW, You JJ, Akl EA, Mejza F, Bala MM, Bassler D, Mertz D, Diaz-Granados N, Vandvik PO, Malaga G, Srinathan SK, Dahm P, Johnston BC, Alonso-Coello P, Hassouneh B, Walter SD, Heels-Ansdell D, Bhatnagar N, Altman DG, Guyatt GH. Credibility of claims of subgroup effects in randomised controlled trials: systematic review. BMJ 2012; 344 : e1553.

Zhang Y, Akl EA, Schünemann HJ. Using systematic reviews in guideline development: the GRADE approach. Research Synthesis Methods 2018a: doi: 10.1002/jrsm.1313.

Zhang Y, Alonso-Coello P, Guyatt GH, Yepes-Nunez JJ, Akl EA, Hazlewood G, Pardo-Hernandez H, Etxeandia-Ikobaltzeta I, Qaseem A, Williams JW, Jr., Tugwell P, Flottorp S, Chang Y, Zhang Y, Mustafa RA, Rojas MX, Schünemann HJ. GRADE Guidelines: 19. Assessing the certainty of evidence in the importance of outcomes or values and preferences-Risk of bias and indirectness. Journal of Clinical Epidemiology 2018b: doi: 10.1016/j.jclinepi.2018.1001.1013.

Zhang Y, Alonso Coello P, Guyatt G, Yepes-Nunez JJ, Akl EA, Hazlewood G, Pardo-Hernandez H, Etxeandia-Ikobaltzeta I, Qaseem A, Williams JW, Jr., Tugwell P, Flottorp S, Chang Y, Zhang Y, Mustafa RA, Rojas MX, Xie F, Schünemann HJ. GRADE Guidelines: 20. Assessing the certainty of evidence in the importance of outcomes or values and preferences - Inconsistency, Imprecision, and other Domains. Journal of Clinical Epidemiology 2018c: doi: 10.1016/j.jclinepi.2018.1005.1011.



5.15: Drawing Conclusions from Statistics


Learning Objectives

  • Describe the role of random sampling and random assignment in drawing cause-and-effect conclusions

Generalizability


One limitation to the study mentioned previously about the babies choosing the “helper” toy is that the conclusion only applies to the 16 infants in the study. We don’t know much about how those 16 infants were selected. Suppose we want to select a subset of individuals (a sample ) from a much larger group of individuals (the population ) in such a way that conclusions from the sample can be generalized to the larger population. This is the question faced by pollsters every day.

Example 1 : The General Social Survey (GSS) is a survey on societal trends conducted every other year in the United States. Based on a sample of about 2,000 adult Americans, researchers make claims about what percentage of the U.S. population consider themselves to be “liberal,” what percentage consider themselves “happy,” what percentage feel “rushed” in their daily lives, and many other issues. The key to making these claims about the larger population of all American adults lies in how the sample is selected. The goal is to select a sample that is representative of the population, and a common way to achieve this goal is to select a random sample that gives every member of the population an equal chance of being selected for the sample. In its simplest form, random sampling involves numbering every member of the population and then using a computer to randomly select the subset to be surveyed. Most polls don’t operate exactly like this, but they do use probability-based sampling methods to select individuals from nationally representative panels.

In 2004, the GSS reported that 817 of 977 respondents (or 83.6%) indicated that they always or sometimes feel rushed. This is a clear majority, but we again need to consider variation due to random sampling. Fortunately, we can use the same probability model we did in the previous example to investigate the probable size of this error. (Note, we can use the coin-tossing model when the actual population size is much, much larger than the sample size, as then we can still consider the probability to be the same for every individual in the sample.) This probability model predicts that the sample result will be within 3 percentage points of the population value (roughly 1 over the square root of the sample size, the margin of error). A statistician would conclude, with 95% confidence, that between 80.6% and 86.6% of all adult Americans in 2004 would have responded that they sometimes or always feel rushed.
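The margin-of-error arithmetic in this example can be reproduced in a few lines of Python (a sketch using the rough 1/sqrt(n) rule described above):

```python
import math

n, rushed = 977, 817
p_hat = rushed / n       # 0.836
moe = 1 / math.sqrt(n)   # rough 95% margin of error, about 0.032
print(round(p_hat - moe, 3), round(p_hat + moe, 3))  # roughly 0.804 to 0.868
# The text's 80.6%-86.6% interval uses the margin rounded to 3 percentage points.
```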

The key to the margin of error is that when we use a probability sampling method, we can make claims about how often (in the long run, with repeated random sampling) the sample result would fall within a certain distance from the unknown population value by chance (meaning by random sampling variation) alone. Conversely, non-random samples are often susceptible to bias, meaning the sampling method systematically over-represents some segments of the population and under-represents others. We also still need to consider other sources of bias, such as individuals not responding honestly. These sources of error are not measured by the margin of error.


Cause and Effect

In many research studies, the primary question of interest concerns differences between groups. Then the question becomes how the groups were formed (e.g., selecting people who already drink coffee vs. those who don’t). In some studies, the researchers actively form the groups themselves. But then we have a similar question—could any differences we observe in the groups be an artifact of that group-formation process? Or maybe the difference we observe in the groups is so large that we can discount a “fluke” in the group-formation process as a reasonable explanation for what we find?

Example 2 : A psychology study investigated whether people tend to display more creativity when they are thinking about intrinsic (internal) or extrinsic (external) motivations (Ramsey & Schafer, 2002, based on a study by Amabile, 1985). The subjects were 47 people with extensive experience with creative writing. Subjects began by answering survey questions about either intrinsic motivations for writing (such as the pleasure of self-expression) or extrinsic motivations (such as public recognition). Then all subjects were instructed to write a haiku, and those poems were evaluated for creativity by a panel of judges. The researchers conjectured beforehand that subjects who were thinking about intrinsic motivations would display more creativity than subjects who were thinking about extrinsic motivations. The creativity scores from the 47 subjects in this study are displayed in Figure 2, where higher scores indicate more creativity.

[Figure 2: dot plot of creativity scores, ranging from about 5 to 27, for subjects given either extrinsic or intrinsic motivation prompts.]

In this example, the key question is whether the type of motivation affects creativity scores. In particular, do subjects who were asked about intrinsic motivations tend to have higher creativity scores than subjects who were asked about extrinsic motivations?

Figure 2 reveals that both motivation groups saw considerable variability in creativity scores, and these scores have considerable overlap between the groups. In other words, it’s certainly not always the case that those with extrinsic motivations have higher creativity than those with intrinsic motivations, but there may still be a statistical tendency in this direction. (Psychologist Keith Stanovich (2013) refers to people’s difficulties with thinking about such probabilistic tendencies as “the Achilles heel of human cognition.”)

The mean creativity score is 19.88 for the intrinsic group, compared to 15.74 for the extrinsic group, which supports the researchers’ conjecture. Yet comparing only the means of the two groups fails to consider the variability of creativity scores in the groups. We can measure variability with statistics using, for instance, the standard deviation: 5.25 for the extrinsic group and 4.40 for the intrinsic group. The standard deviations tell us that most of the creativity scores are within about 5 points of the mean score in each group. We see that the mean score for the intrinsic group lies within one standard deviation of the mean score for the extrinsic group. So, although there is a tendency for the creativity scores to be higher in the intrinsic group, on average, the difference is not extremely large.

We again want to consider possible explanations for this difference. The study only involved individuals with extensive creative writing experience. Although this limits the population to which we can generalize, it does not explain why the mean creativity score was a bit larger for the intrinsic group than for the extrinsic group. Maybe women tend to receive higher creativity scores? Here is where we need to focus on how the individuals were assigned to the motivation groups. If only women were in the intrinsic motivation group and only men in the extrinsic group, then this would present a problem because we wouldn’t know if the intrinsic group did better because of the different type of motivation or because they were women. However, the researchers guarded against such a problem by randomly assigning the individuals to the motivation groups. Like flipping a coin, each individual was just as likely to be assigned to either type of motivation. Why is this helpful? Because this random assignment tends to balance out all the variables related to creativity we can think of, and even those we don’t think of in advance, between the two groups. So we should have a similar male/female split between the two groups; we should have a similar age distribution between the two groups; we should have a similar distribution of educational background between the two groups; and so on. Random assignment should produce groups that are as similar as possible except for the type of motivation, which presumably eliminates all those other variables as possible explanations for the observed tendency for higher scores in the intrinsic group.

But does this always work? No, so by “luck of the draw” the groups may be a little different prior to answering the motivation survey. So then the question is, is it possible that an unlucky random assignment is responsible for the observed difference in creativity scores between the groups? In other words, suppose each individual’s poem was going to get the same creativity score no matter which group they were assigned to, that the type of motivation in no way impacted their score. Then how often would the random-assignment process alone lead to a difference in mean creativity scores as large (or larger) than 19.88 – 15.74 = 4.14 points?

We again want to apply a probability model to approximate a p-value, but this time the model will be a bit different. Think of writing everyone’s creativity scores on an index card, shuffling up the index cards, and then dealing out 23 to the extrinsic motivation group and 24 to the intrinsic motivation group, and finding the difference in the group means. We (better yet, the computer) can repeat this process over and over to see how often, when the scores don’t change, random assignment leads to a difference in means at least as large as 4.14, as sketched in the code below. Figure 3 shows the results from 1,000 such hypothetical random assignments for these scores.
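The re-randomization procedure just described can be sketched in a few lines of Python (the actual 47 creativity scores are not reproduced here, so this is illustrative only):

```python
import random

def permutation_p_value(group_a, group_b, n_reps=1000, seed=1):
    """Approximate p-value: how often re-shuffled group labels produce a
    difference in means at least as large as the one observed."""
    rng = random.Random(seed)
    observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    pooled = list(group_a) + list(group_b)
    at_least_as_large = 0
    for _ in range(n_reps):
        rng.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if sum(a) / len(a) - sum(b) / len(b) >= observed:
            at_least_as_large += 1
    return at_least_as_large / n_reps

# Usage (hypothetical score lists): permutation_p_value(intrinsic_scores, extrinsic_scores)
```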

[Figure 3: distribution of the 1,000 simulated differences in group means, approximately bell-shaped and centered near 0.]

Only 2 of the 1,000 simulated random assignments produced a difference in group means of 4.14 or larger. In other words, the approximate p-value is 2/1000 = 0.002. This small p-value indicates that it would be very surprising for the random assignment process alone to produce such a large difference in group means. Therefore, we have strong evidence that focusing on intrinsic motivations tends to increase creativity scores, as compared to thinking about extrinsic motivations.

Notice that the previous statement implies a cause-and-effect relationship between motivation and creativity score; is such a strong conclusion justified? Yes, because of the random assignment used in the study. That should have balanced out any other variables between the two groups, so now that the small p-value convinces us that the higher mean in the intrinsic group wasn’t just a coincidence, the only reasonable explanation left is the difference in the type of motivation. Can we generalize this conclusion to everyone? Not necessarily—we could cautiously generalize this conclusion to individuals with extensive experience in creative writing similar to the individuals in this study, but we would still want to know more about how these individuals were selected to participate.


Statistical thinking involves the careful design of a study to collect meaningful data to answer a focused research question, detailed analysis of patterns in the data, and drawing conclusions that go beyond the observed data. Random sampling is paramount to generalizing results from our sample to a larger population, and random assignment is key to drawing cause-and-effect conclusions. With both kinds of randomness, probability models help us assess how much random variation we can expect in our results, in order to determine whether our results could happen by chance alone and to estimate a margin of error.

So where does this leave us with regard to the coffee study mentioned previously? (Freedman, Park, Abnet, Hollenbeck, & Sinha (2012) found that men who drank at least six cups of coffee a day had a 10% lower chance of dying (women 15% lower) than those who drank none.) We can answer many of the questions:

  • This was a 14-year study conducted by researchers at the National Cancer Institute.
  • The results were published in the June issue of the New England Journal of Medicine , a respected, peer-reviewed journal.
  • The study reviewed coffee habits of more than 402,000 people ages 50 to 71 from six states and two metropolitan areas. Those with cancer, heart disease, and stroke were excluded at the start of the study. Coffee consumption was assessed once at the start of the study.
  • About 52,000 people died during the course of the study.
  • People who drank between two and five cups of coffee daily showed a lower risk as well, but the amount of reduction increased for those drinking six or more cups.
  • The sample sizes were fairly large and so the p-values are quite small, even though percent reduction in risk was not extremely large (dropping from a 12% chance to about 10%–11%).
  • Whether coffee was caffeinated or decaffeinated did not appear to affect the results.
  • This was an observational study, so no cause-and-effect conclusions can be drawn between coffee drinking and increased longevity, contrary to the impression conveyed by many news headlines about this study. In particular, it’s possible that those with chronic diseases don’t tend to drink coffee.

This study needs to be reviewed in the larger context of similar studies and consistency of results across studies, with the constant caution that this was not a randomized experiment. Whereas a statistical analysis can still “adjust” for other potential confounding variables, we are not yet convinced that researchers have identified them all or completely isolated why this decrease in death risk is evident. Researchers can now take the findings of this study and develop more focused studies that address new questions.

Explore these outside resources to learn more about applied statistics:

  • Video about p-values:  P-Value Extravaganza
  • Interactive web applets for teaching and learning statistics
  • Inter-university Consortium for Political and Social Research  where you can find and analyze data.
  • The Consortium for the Advancement of Undergraduate Statistics

Think It Over

  • Find a recent research article in your field and answer the following: What was the primary research question? How were individuals selected to participate in the study? Were summary results provided? How strong is the evidence presented in favor or against the research question? Was random assignment used? Summarize the main conclusions from the study, addressing the issues of statistical significance, statistical confidence, generalizability, and cause and effect. Do you agree with the conclusions drawn from this study, based on the study design and the results presented?
  • Is it reasonable to use a random sample of 1,000 individuals to draw conclusions about all U.S. adults? Explain why or why not.

cause-and-effect: related to whether we say one variable is causing changes in the other variable, versus other variables that may be related to these two variables.

generalizability: related to whether the results from the sample can be generalized to a larger population.

margin of error: the expected amount of random variation in a statistic; often defined for a 95% confidence level.

population: a larger collection of individuals that we would like to generalize our results to.

p-value: the probability of observing a particular outcome in a sample, or more extreme, under a conjecture about the larger population or process.

random assignment: using a probability-based method to divide a sample into treatment groups.

random sampling: using a probability-based method to select a subset of individuals for the sample from the population.

sample: the collection of individuals on which we collect data.

Licenses and Attributions

CC licensed content, Original

  • Modification, adaptation, and original content. Authored by : Pat Carroll and Lumen Learning. Provided by : Lumen Learning. License : CC BY: Attribution
  • Statistical Thinking. Authored by : Beth Chance and Allan Rossman, California Polytechnic State University, San Luis Obispo. Provided by : Noba. Located at : http://nobaproject.com/modules/statistical-thinking . License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike
  • The Replication Crisis. Authored by : Colin Thomas William. Provided by : Ivy Tech Community College. License : CC BY: Attribution


Drawing Conclusions and Reporting the Results

Rajiv S. Jhangiani; I-Chant A. Chiang; Carrie Cuttler; and Dana C. Leighton

Learning Objectives

  • Identify the conclusions researchers can make based on the outcome of their studies.
  • Describe why scientists avoid the term “scientific proof.”
  • Explain the different ways that scientists share their findings.

Drawing Conclusions

Since statistics are probabilistic in nature and findings can reflect type I or type II errors, we cannot use the results of a single study to conclude with certainty that a theory is true. Rather, theories are supported, refuted, or modified based on the results of research.

If the results are statistically significant and consistent with the hypothesis and the theory that was used to generate the hypothesis, then researchers can conclude that the theory is supported. Not only did the theory make an accurate prediction, but there is now a new phenomenon that the theory accounts for. If a hypothesis is disconfirmed in a systematic empirical study, then the theory has been weakened. It made an inaccurate prediction, and there is now a new phenomenon that it does not account for.

Although this seems straightforward, there are some complications. First, confirming a hypothesis can strengthen a theory but it can never prove a theory. In fact, scientists tend to avoid the word “prove” when talking and writing about theories. One reason for this avoidance is that the result may reflect a type I error. Another reason is that there may be other plausible theories that imply the same hypothesis, which means that confirming the hypothesis strengthens all those theories equally. A third reason is that it is always possible that another test of the hypothesis, or a test of a new hypothesis derived from the theory, will disconfirm it. This difficulty is a version of the famous philosophical “problem of induction.” One cannot definitively prove a general principle (e.g., “All swans are white.”) just by observing confirming cases (e.g., white swans)—no matter how many. It is always possible that a disconfirming case (e.g., a black swan) will eventually come along. For these reasons, scientists tend to think of theories—even highly successful ones—as subject to revision based on new and unexpected observations.

A second complication has to do with what it means when a hypothesis is disconfirmed. According to the strictest version of the hypothetico-deductive method, disconfirming a hypothesis disproves the theory it was derived from. In formal logic, the premises “if  A  then  B ” and “not  B ” necessarily lead to the conclusion “not  A .” If  A  is the theory and  B  is the hypothesis (“if  A  then  B ”), then disconfirming the hypothesis (“not  B ”) must mean that the theory is incorrect (“not  A ”). In practice, however, scientists do not give up on their theories so easily. One reason is that one disconfirmed hypothesis could be a missed opportunity (the result of a type II error) or it could be the result of a faulty research design. Perhaps the researcher did not successfully manipulate the independent variable or measure the dependent variable.
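
The logical form invoked here is modus tollens, and it can be checked mechanically. The short sketch below (illustrative, not from the original text) enumerates every truth assignment to confirm that the premises "if A then B" and "not B" always force "not A":

```python
from itertools import product

for A, B in product([True, False], repeat=2):
    a_implies_b = (not A) or B        # truth-functional "if A then B"
    if a_implies_b and not B:         # both premises hold...
        assert not A                  # ...so the conclusion "not A" must hold
print("modus tollens is valid under every truth assignment")
```

As the passage notes, though, the clean logic rarely settles matters in practice, because the premise "if A then B" bundles in auxiliary assumptions about design and measurement.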

A disconfirmed hypothesis could also mean that some unstated but relatively minor assumption of the theory was not met. For example, if Zajonc had failed to find social facilitation in cockroaches, he could have concluded that drive theory is still correct but it applies only to animals with sufficiently complex nervous systems. That is, the evidence from a study can be used to modify a theory.  This practice does not mean that researchers are free to ignore disconfirmations of their theories. If they cannot improve their research designs or modify their theories to account for repeated disconfirmations, then they eventually must abandon their theories and replace them with ones that are more successful.

The bottom line here is that because statistics are probabilistic in nature and because all research studies have flaws, there is no such thing as scientific proof; there is only scientific evidence.

Reporting the Results

The final step in the research process involves reporting the results. As described in the section on Reviewing the Research Literature in this chapter, results are typically reported in peer-reviewed journal articles and at conferences.

The most prestigious way to report one’s findings is by writing a manuscript and having it published in a peer-reviewed scientific journal. Manuscripts published in psychology journals typically must adhere to the writing style of the American Psychological Association (APA style). You will likely be learning the major elements of this writing style in this course.

Another way to report findings is by writing a book chapter that is published in an edited book. Preferably, the editor of the book puts the chapter through peer review, but this is not always the case; some scientists are simply invited by editors to contribute chapters.

A fun way to disseminate findings is to give a presentation at a conference. This can be done either as an oral presentation or as a poster presentation. Oral presentations involve getting up in front of an audience of fellow scientists and giving a talk that might last anywhere from 10 minutes to 1 hour (depending on the conference) and then fielding questions from the audience. Alternatively, poster presentations involve summarizing the study on a large poster that provides a brief overview of the purpose, methods, results, and discussion. The presenter stands by their poster for an hour or two and discusses it with people who pass by. Presenting one’s work at a conference is a great way to get feedback from one’s peers before attempting to undergo the more rigorous peer-review process involved in publishing a journal article.

Drawing Conclusions and Reporting the Results Copyright © by Rajiv S. Jhangiani; I-Chant A. Chiang; Carrie Cuttler; and Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


How to Write Discussions and Conclusions

The discussion section contains the results and outcomes of a study. An effective discussion informs readers what can be learned from your experiment and provides context for the results.

What makes an effective discussion?

When you’re ready to write your discussion, you’ve already introduced the purpose of your study and provided an in-depth description of the methodology. The discussion informs readers about the larger implications of your study based on the results. Highlighting these implications while not overstating the findings can be challenging, especially when you’re submitting to a journal that selects articles based on novelty or potential impact. Regardless of what journal you are submitting to, the discussion section always serves the same purpose: concluding what your study results actually mean.

A successful discussion section puts your findings in context. It should include:

  • the results of your research,
  • a discussion of related research, and
  • a comparison between your results and your initial hypothesis.

Tip: Not all journals share the same naming conventions.

You can apply the advice in this article to the conclusion, results or discussion sections of your manuscript.

Our Early Career Researcher community tells us that the conclusion is often considered the most difficult section of a manuscript to write. To help, this guide provides questions to ask yourself, a basic structure to model your discussion on, and examples from published manuscripts.


Questions to ask yourself:

  • Was my hypothesis correct?
  • If my hypothesis is partially correct or entirely different, what can be learned from the results? 
  • How do the conclusions reshape or add onto the existing knowledge in the field? What does previous research say about the topic? 
  • Why are the results important or relevant to your audience? Do they add further evidence to a scientific consensus or disprove prior studies? 
  • How can future research build on these observations? What are the key experiments that must be done? 
  • What is the “take-home” message you want your reader to leave with?

How to structure a discussion

Trying to fit a complete discussion into a single paragraph can add unnecessary stress to the writing process. If possible, allow yourself two or three paragraphs to give the reader a comprehensive understanding of your study as a whole. Here’s one way to structure an effective discussion:

[Figure: one suggested structure for an effective discussion section]

Writing Tips

While the sections above can help you brainstorm and structure your discussion, there are common mistakes that writers fall back on when they run into difficulties with their paper. Writing a discussion is a delicate balance between summarizing your results, providing proper context for your research, and avoiding introducing new information. Remember that your paper should be both confident and honest about the results!

What to do

  • Read the journal’s guidelines on the discussion and conclusion sections. If possible, learn about the guidelines before writing the discussion to ensure you’re writing to meet their expectations. 
  • Begin with a clear statement of the principal findings. This will reinforce the main take-away for the reader and set up the rest of the discussion. 
  • Explain why the outcomes of your study are important to the reader. Discuss the implications of your findings realistically based on previous literature, highlighting both the strengths and limitations of the research. 
  • State whether the results support or refute your hypothesis. If your hypothesis was not supported, what might be the reasons?
  • Introduce new or expanded ways to think about the research question. Indicate what next steps can be taken to further pursue any unresolved questions. 
  • If dealing with a contemporary or ongoing problem, such as climate change, discuss the possible consequences if the problem is left unaddressed.
  • Be concise. Adding unnecessary detail can distract from the main findings. 

What not to do

Don’t

  • Rewrite your abstract. Statements with “we investigated” or “we studied” generally do not belong in the discussion. 
  • Include new arguments or evidence not previously discussed. Necessary information and evidence should be introduced in the main body of the paper. 
  • Apologize. Even if your research contains significant limitations, don’t undermine your authority by including statements that doubt your methodology or execution. 
  • Shy away from speaking on limitations or negative results. Including limitations and negative results will give readers a complete understanding of the presented research. Potential limitations include sources of potential bias, threats to internal or external validity, barriers to implementing an intervention and other issues inherent to the study design. 
  • Overstate the importance of your findings. Making grand statements about how a study will fully resolve large questions can lead readers to doubt the success of the research. 

Snippets of Effective Discussions:

Consumer-based actions to reduce plastic pollution in rivers: A multi-criteria decision analysis approach

Identifying reliable indicators of fitness in polar bears


Variables in Research | Types, Definition & Examples


  • Introduction
  • What is a variable?
  • What are the 5 types of variables in research?
  • Other variables in research

Variables are fundamental components of research that allow for the measurement and analysis of data. They can be defined as characteristics or properties that can take on different values. In research design, understanding the types of variables and their roles is crucial for developing hypotheses, designing methods, and interpreting results.

This article outlines the types of variables in research, including their definitions and examples, to provide a clear understanding of their use and significance in research studies. By categorizing variables into distinct groups based on their roles in research, their types of data, and their relationships with other variables, researchers can more effectively structure their studies and achieve more accurate conclusions.

What is a variable?

A variable represents any characteristic, number, or quantity that can be measured or quantified. The term encompasses anything that can vary or change, ranging from simple concepts like age and height to more complex ones like satisfaction levels or economic status. Variables are essential in research as they are the foundational elements that researchers manipulate, measure, or control to gain insights into relationships, causes, and effects within their studies. They enable the framing of research questions, the formulation of hypotheses, and the interpretation of results.

Variables can be categorized based on their role in the study (such as independent and dependent variables), the type of data they represent (quantitative or categorical), and their relationship to other variables (like confounding or control variables). Understanding what constitutes a variable and the various variable types available is a critical step in designing robust and meaningful research.

What are the 5 types of variables in research?

Variables are crucial components in research, serving as the foundation for data collection, analysis, and interpretation. They are attributes or characteristics that can vary among subjects or over time, and understanding their types is essential for any study. Variables can be broadly classified into five main types, each with its distinct characteristics and roles within research.

This classification helps researchers in designing their studies, choosing appropriate measurement techniques, and analyzing their results accurately. The five types of variables include independent variables, dependent variables, categorical variables, continuous variables, and confounding variables. These categories not only facilitate a clearer understanding of the data but also guide the formulation of hypotheses and research methodologies.

Independent variables

Independent variables are foundational to the structure of research, serving as the factors or conditions that researchers manipulate or vary to observe their effects on dependent variables. These variables are considered "independent" because their variation does not depend on other variables within the study. Instead, they are the cause or stimulus that directly influences the outcomes being measured. For example, in an experiment to assess the effectiveness of a new teaching method on student performance, the teaching method applied (traditional vs. innovative) would be the independent variable.

The selection of an independent variable is a critical step in research design, as it directly correlates with the study's objective to determine causality or association. Researchers must clearly define and control these variables to ensure that observed changes in the dependent variable can be attributed to variations in the independent variable, thereby affirming the reliability of the results. In experimental research, the independent variable is what differentiates the control group from the experimental group, thereby setting the stage for meaningful comparison and analysis.

Dependent variables

Dependent variables are the outcomes or effects that researchers aim to explore and understand in their studies. These variables are called "dependent" because their values depend on the changes or variations of the independent variables.

Essentially, they are the responses or results that are measured to assess the impact of the independent variable's manipulation. For instance, in a study investigating the effect of exercise on weight loss, the amount of weight lost would be considered the dependent variable, as it depends on the exercise regimen (the independent variable).

The identification and measurement of the dependent variable are crucial for testing the hypothesis and drawing conclusions from the research. It allows researchers to quantify the effect of the independent variable, providing evidence for causal relationships or associations. In experimental settings, the dependent variable is what is being tested and measured across different groups or conditions, enabling researchers to assess the efficacy or impact of the independent variable's variation.

To ensure accuracy and reliability, the dependent variable must be defined clearly and measured consistently across all participants or observations. This consistency helps in reducing measurement errors and increases the validity of the research findings. By carefully analyzing the dependent variables, researchers can derive meaningful insights from their studies, contributing to the broader knowledge in their field.

Categorical variables

Categorical variables, also known as qualitative variables, represent types or categories that are used to group observations. These variables divide data into distinct groups or categories that lack a numerical value but hold significant meaning in research. Examples of categorical variables include gender (male, female, other), type of vehicle (car, truck, motorcycle), or marital status (single, married, divorced). These categories help researchers organize data into groups for comparison and analysis.

Categorical variables can be further classified into two subtypes: nominal and ordinal. Nominal variables are categories without any inherent order or ranking among them, such as blood type or ethnicity. Ordinal variables, on the other hand, imply a sort of ranking or order among the categories, like levels of satisfaction (high, medium, low) or education level (high school, bachelor's, master's, doctorate).

Understanding and identifying categorical variables is crucial in research as it influences the choice of statistical analysis methods. Since these variables represent categories without numerical significance, researchers employ specific statistical tests designed for a nominal or ordinal variable to draw meaningful conclusions. Properly classifying and analyzing categorical variables allow for the exploration of relationships between different groups within the study, shedding light on patterns and trends that might not be evident with numerical data alone.

Continuous variables

Continuous variables are quantitative variables that can take an infinite number of values within a given range. These variables are measured along a continuum and can represent very precise measurements. Examples of continuous variables include height, weight, temperature, and time. Because they can assume any value within a range, continuous variables allow for detailed analysis and a high degree of accuracy in research findings.

The ability to measure continuous variables at very fine scales makes them invaluable for many types of research, particularly in the natural and social sciences. For instance, in a study examining the effect of temperature on plant growth, temperature would be considered a continuous variable since it can vary across a wide spectrum and be measured to several decimal places.

When dealing with continuous variables, researchers often use statistical methods suited to a wide range of data points and the potential for infinite divisibility. These include various forms of regression analysis, correlation, and other techniques for modeling and analyzing nuanced relationships between variables. The precision of continuous variables enhances the researcher's ability to detect patterns, trends, and causal relationships within the data, contributing to more robust and detailed conclusions.
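
As a minimal sketch of such an analysis (the numbers are invented, and scipy is assumed to be available), a Pearson correlation quantifies the linear relationship between two continuous variables such as temperature and plant growth:

```python
from scipy import stats

temperature_c = [15.2, 18.1, 20.4, 22.9, 25.3, 27.8, 30.1]  # continuous predictor
growth_cm = [1.1, 1.9, 2.6, 3.2, 3.9, 4.1, 4.8]             # continuous outcome

r, p = stats.pearsonr(temperature_c, growth_cm)
print(f"r = {r:.2f}, p = {p:.4f}")  # r near +1 indicates a strong positive linear association
```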

Confounding variables

Confounding variables are those that can cause a false association between the independent and dependent variables, potentially leading to incorrect conclusions about the relationship being studied. These are extraneous variables that were not considered in the study design but can influence both the supposed cause and effect, creating a misleading correlation.

Identifying and controlling for a confounding variable is crucial in research to ensure the validity of the findings. This can be achieved through various methods, including randomization, stratification, and statistical control. Randomization helps to evenly distribute confounding variables across study groups, reducing their potential impact. Stratification involves analyzing the data within strata or layers that share common characteristics of the confounder. Statistical control allows researchers to adjust for the effects of confounders in the analysis phase.

Properly addressing confounding variables strengthens the credibility of research outcomes by clarifying the direct relationship between the dependent and independent variables, thus providing more accurate and reliable results.
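
One common form of statistical control is to include the suspected confounder as a covariate in a regression model. The sketch below is illustrative only (simulated data; statsmodels is assumed to be available): age influences both coffee drinking and health risk, and adding it to the model adjusts the coffee coefficient for that influence.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(20, 70, n)                    # confounder: affects both variables below
coffee = 5 - 0.05 * age + rng.normal(0, 1, n)   # "independent" variable, influenced by age
risk = 0.02 * age - 0.10 * coffee + rng.normal(0, 1, n)  # outcome, influenced by both

# Naive model (confounded) vs. adjusted model (age included as a covariate).
naive = sm.OLS(risk, sm.add_constant(coffee)).fit()
adjusted = sm.OLS(risk, sm.add_constant(np.column_stack([coffee, age]))).fit()

print(naive.params[1])     # biased estimate of coffee's effect
print(adjusted.params[1])  # estimate adjusted for the confounder (near -0.10)
```

Because age here drives both the exposure and the outcome, the naive slope is distorted; the adjusted model recovers something close to the true coefficient.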

Other variables in research

Beyond the primary categories of variables commonly discussed in research methodology, there exists a diverse range of other variables that play significant roles in the design and analysis of studies. Below is an overview of some of these variables, highlighting their definitions and roles within research studies:

  • Discrete variables: A discrete variable is a quantitative variable that can take on only specific, countable values, such as the number of children in a family or the number of cars in a parking lot.
  • Categorical variables: A categorical variable categorizes subjects or items into groups that do not have a natural numerical order. Categorical data includes nominal variables, like country of origin, and ordinal variables, such as education level.
  • Predictor variables: Often used in statistical models, a predictor variable is used to forecast or predict the outcomes of other variables, not necessarily with a causal implication.
  • Outcome variables: These variables represent the results or outcomes that researchers aim to explain or predict through their studies. An outcome variable is central to understanding the effects of predictor variables.
  • Latent variables: Not directly observable, latent variables are inferred from other, directly measured variables. Examples include psychological constructs like intelligence or socioeconomic status.
  • Composite variables: Created by combining multiple variables, composite variables can measure a concept more reliably or simplify the analysis. An example would be a composite happiness index derived from several survey questions.
  • Preceding variables: These variables come before other variables in time or sequence, potentially influencing subsequent outcomes. A preceding variable is crucial in longitudinal studies for determining causality or sequences of events.



Variables in Research – Definition, Types and Examples


Definition:

In Research, Variables refer to characteristics or attributes that can be measured, manipulated, or controlled. They are the factors that researchers observe or manipulate to understand the relationship between them and the outcomes of interest.

Types of Variables in Research

Types of Variables in Research are as follows:

Independent Variable

This is the variable that is manipulated by the researcher. It is also known as the predictor variable, as it is used to predict changes in the dependent variable. Examples of independent variables include age, gender, dosage, and treatment type.

Dependent Variable

This is the variable that is measured or observed to determine the effects of the independent variable. It is also known as the outcome variable, as it is the variable that is affected by the independent variable. Examples of dependent variables include blood pressure, test scores, and reaction time.

Confounding Variable

This is a variable that can affect the relationship between the independent variable and the dependent variable. It is a variable that is not being studied but could impact the results of the study. For example, in a study on the effects of a new drug on a disease, a confounding variable could be the patient’s age, as older patients may have more severe symptoms.

Mediating Variable

This is a variable that explains the relationship between the independent variable and the dependent variable. It is a variable that comes in between the independent and dependent variables and is affected by the independent variable, which then affects the dependent variable. For example, in a study on the relationship between exercise and weight loss, the mediating variable could be metabolism, as exercise can increase metabolism, which can then lead to weight loss.

Moderator Variable

This is a variable that affects the strength or direction of the relationship between the independent variable and the dependent variable. It is a variable that influences the effect of the independent variable on the dependent variable. For example, in a study on the effects of caffeine on cognitive performance, the moderator variable could be age, as older adults may be more sensitive to the effects of caffeine than younger adults.

Control Variable

This is a variable that is held constant or controlled by the researcher to ensure that it does not affect the relationship between the independent variable and the dependent variable. Control variables are important to ensure that any observed effects are due to the independent variable and not to other factors. For example, in a study on the effects of a new teaching method on student performance, the control variables could include class size, teacher experience, and student demographics.

Continuous Variable

This is a variable that can take on any value within a certain range. Continuous variables can be measured on a scale and are often used in statistical analyses. Examples of continuous variables include height, weight, and temperature.

Categorical Variable

This is a variable that can take on a limited number of values or categories. Categorical variables can be nominal or ordinal. Nominal variables have no inherent order, while ordinal variables have a natural order. Examples of categorical variables include gender, race, and educational level.

Discrete Variable

This is a variable that can only take on specific values. Discrete variables are often used in counting or frequency analyses. Examples of discrete variables include the number of siblings a person has, the number of times a person exercises in a week, and the number of students in a classroom.

Dummy Variable

This is a variable that takes on only two values, typically 0 and 1, and is used to represent categorical variables in statistical analyses. Dummy variables are often used when a categorical variable cannot be used directly in an analysis. For example, in a study on the effects of gender on income, a dummy variable could be created, with 0 representing female and 1 representing male.
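
As a minimal sketch (pandas is assumed to be available, and the column names are invented), dummy coding can be produced directly from a categorical column:

```python
import pandas as pd

df = pd.DataFrame({
    "gender": ["female", "male", "male", "female"],
    "income": [52_000, 61_000, 58_000, 49_000],
})

# drop_first=True keeps a single 0/1 column (here gender_male: 1 = male,
# 0 = female), avoiding redundant columns in regression models.
coded = pd.get_dummies(df, columns=["gender"], drop_first=True)
print(coded)
```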

Extraneous Variable

This is a variable that has no relationship with the independent or dependent variable but can affect the outcome of the study. Extraneous variables can lead to erroneous conclusions and can be controlled through random assignment or statistical techniques.

Latent Variable

This is a variable that cannot be directly observed or measured, but is inferred from other variables. Latent variables are often used in psychological or social research to represent constructs such as personality traits, attitudes, or beliefs.

Moderator-mediator Variable

This is a variable that acts as both a moderator and a mediator: it can moderate the strength or direction of the relationship between the independent and dependent variables while also mediating it. Moderator-mediator variables are often used in complex statistical analyses.

Variables Analysis Methods

There are different methods to analyze variables in research, including:

  • Descriptive statistics: This involves analyzing and summarizing data using measures such as mean, median, mode, range, standard deviation, and frequency distribution. Descriptive statistics are useful for understanding the basic characteristics of a data set.
  • Inferential statistics: This involves making inferences about a population based on sample data. Inferential statistics use techniques such as hypothesis testing, confidence intervals, and regression analysis to draw conclusions from data.
  • Correlation analysis: This involves examining the relationship between two or more variables. Correlation analysis can determine the strength and direction of the relationship between variables, and can be used to make predictions about future outcomes.
  • Regression analysis: This involves examining the relationship between an independent variable and a dependent variable. Regression analysis can be used to predict the value of the dependent variable based on the value of the independent variable, and can also determine the significance of the relationship between the two variables.
  • Factor analysis: This involves identifying patterns and relationships among a large number of variables. Factor analysis can be used to reduce the complexity of a data set and identify underlying factors or dimensions.
  • Cluster analysis: This involves grouping data into clusters based on similarities between variables. Cluster analysis can be used to identify patterns or segments within a data set, and can be useful for market segmentation or customer profiling.
  • Multivariate analysis: This involves analyzing multiple variables simultaneously. Multivariate analysis can be used to understand complex relationships between variables, and can be useful in fields such as social science, finance, and marketing.
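
As a minimal illustration of the first method in the list above (descriptive statistics), here is a short sketch with invented exam scores; pandas is assumed to be available:

```python
import pandas as pd

scores = pd.Series([72, 85, 90, 66, 78, 85, 91, 73])

print(scores.describe())                 # count, mean, std, min, quartiles, max
print("mode:", scores.mode().tolist())   # most frequent value(s)
print("range:", scores.max() - scores.min())
```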

Examples of Variables

  • Age: This is a continuous variable that represents the age of an individual in years.
  • Gender: This is a categorical variable that represents the biological sex of an individual and can take on values such as male and female.
  • Education level: This is a categorical variable that represents the level of education completed by an individual and can take on values such as high school, college, and graduate school.
  • Income: This is a continuous variable that represents the amount of money earned by an individual in a year.
  • Weight: This is a continuous variable that represents the weight of an individual in kilograms or pounds.
  • Ethnicity: This is a categorical variable that represents the ethnic background of an individual and can take on values such as Hispanic, African American, and Asian.
  • Time spent on social media: This is a continuous variable that represents the amount of time an individual spends on social media in minutes or hours per day.
  • Marital status: This is a categorical variable that represents the marital status of an individual and can take on values such as married, divorced, and single.
  • Blood pressure: This is a continuous variable that represents the force of blood against the walls of arteries in millimeters of mercury.
  • Job satisfaction: This is a continuous variable that represents an individual’s level of satisfaction with their job and can be measured using a Likert scale.

Applications of Variables

Variables are used in many different applications across various fields. Here are some examples:

  • Scientific research: Variables are used in scientific research to understand the relationships between different factors and to make predictions about future outcomes. For example, scientists may study the effects of different variables on plant growth or the impact of environmental factors on animal behavior.
  • Business and marketing: Variables are used in business and marketing to understand customer behavior and to make decisions about product development and marketing strategies. For example, businesses may study variables such as consumer preferences, spending habits, and market trends to identify opportunities for growth.
  • Healthcare: Variables are used in healthcare to monitor patient health and to make treatment decisions. For example, doctors may use variables such as blood pressure, heart rate, and cholesterol levels to diagnose and treat cardiovascular disease.
  • Education: Variables are used in education to measure student performance and to evaluate the effectiveness of teaching strategies. For example, teachers may use variables such as test scores, attendance, and class participation to assess student learning.
  • Social sciences: Variables are used in social sciences to study human behavior and to understand the factors that influence social interactions. For example, sociologists may study variables such as income, education level, and family structure to examine patterns of social inequality.

Purpose of Variables

Variables serve several purposes in research, including:

  • To provide a way of measuring and quantifying concepts: Variables help researchers measure and quantify abstract concepts such as attitudes, behaviors, and perceptions. By assigning numerical values to these concepts, researchers can analyze and compare data to draw meaningful conclusions.
  • To help explain relationships between different factors: Variables help researchers identify and explain relationships between different factors. By analyzing how changes in one variable affect another variable, researchers can gain insight into the complex interplay between different factors.
  • To make predictions about future outcomes: Variables help researchers make predictions about future outcomes based on past observations. By analyzing patterns and relationships between different variables, researchers can make informed predictions about how different factors may affect future outcomes.
  • To test hypotheses: Variables help researchers test hypotheses and theories. By collecting and analyzing data on different variables, researchers can test whether their predictions are accurate and whether their hypotheses are supported by the evidence.

Characteristics of Variables

Characteristics of Variables are as follows:

  • Measurement: Variables can be measured using different scales, such as nominal, ordinal, interval, or ratio scales. The scale used to measure a variable can affect the type of statistical analysis that can be applied.
  • Range: Variables have a range of values that they can take on. The range can be finite, such as the number of students in a class, or infinite, such as the range of possible values for a continuous variable like temperature.
  • Variability: Variables can have different levels of variability, which refers to the degree to which the values of the variable differ from each other. Highly variable variables have a wide range of values, while low variability variables have values that are more similar to each other.
  • Validity and reliability: Variables should be both valid and reliable to ensure accurate and consistent measurement. Validity refers to the extent to which a variable measures what it is intended to measure, while reliability refers to the consistency of the measurement over time.
  • Directionality: Some variables have directionality, meaning that the relationship between the variables is not symmetrical. For example, in a study of the relationship between smoking and lung cancer, smoking is the independent variable and lung cancer is the dependent variable.

Advantages of Variables

Here are some of the advantages of using variables in research:

  • Control: Variables allow researchers to control the effects of external factors that could influence the outcome of the study. By manipulating and controlling variables, researchers can isolate the effects of specific factors and measure their impact on the outcome.
  • Replicability: Variables make it possible for other researchers to replicate the study and test its findings. By defining and measuring variables consistently, other researchers can conduct similar studies to validate the original findings.
  • Accuracy: Variables make it possible to measure phenomena accurately and objectively. By defining and measuring variables precisely, researchers can reduce bias and increase the accuracy of their findings.
  • Generalizability: Variables allow researchers to generalize their findings to larger populations. By selecting variables that are representative of the population, researchers can draw conclusions that are applicable to a broader range of individuals.
  • Clarity: Variables help researchers to communicate their findings more clearly and effectively. By defining and categorizing variables, researchers can organize and present their findings in a way that is easily understandable to others.

Disadvantages of Variables

Here are some of the main disadvantages of using variables in research:

  • Simplification: Variables may oversimplify the complexity of real-world phenomena. By breaking down a phenomenon into variables, researchers may lose important information and context, which can affect the accuracy and generalizability of their findings.
  • Measurement error: Variables rely on accurate and precise measurement, and measurement error can affect the reliability and validity of research findings. The use of subjective or poorly defined variables can also introduce measurement error into the study.
  • Confounding variables: Confounding variables are factors that are not measured but that affect the relationship between the variables of interest. If confounding variables are not accounted for, they can distort or obscure the relationship between the variables of interest.
  • Limited scope: Variables are defined by the researcher, and the scope of the study is therefore limited by the researcher’s choice of variables. This can lead to a narrow focus that overlooks important aspects of the phenomenon being studied.
  • Ethical concerns: The selection and measurement of variables may raise ethical concerns, especially in studies involving human subjects. For example, using variables that are related to sensitive topics, such as race or sexuality, may raise concerns about privacy and discrimination.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Independent and Dependent Variables

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


In research, a variable is any characteristic, number, or quantity that can be measured or counted in experimental investigations. Two kinds are central to experiments: the independent variable and the dependent variable.

In research, the independent variable is manipulated to observe its effect, while the dependent variable is the measured outcome. Essentially, the independent variable is the presumed cause, and the dependent variable is the observed effect.

Variables provide the foundation for examining relationships, drawing conclusions, and making predictions in research studies.


Independent Variable

In psychology, the independent variable is the variable the experimenter manipulates or changes and is assumed to directly affect the dependent variable.

It’s considered the cause or factor that drives change, allowing psychologists to observe how it influences behavior, emotions, or other dependent variables in an experimental setting. Essentially, it’s the presumed cause in cause-and-effect relationships being studied.

For example, allocating participants to drug or placebo conditions (independent variable) to measure any changes in the intensity of their anxiety (dependent variable).

In a well-designed experimental study, the independent variable is the only important difference between the experimental (e.g., treatment) and control (e.g., placebo) groups.

By changing the independent variable and holding other factors constant, psychologists aim to determine if it causes a change in another variable, called the dependent variable.

For example, in a study investigating the effects of sleep on memory, the amount of sleep (e.g., 4 hours, 8 hours, 12 hours) would be the independent variable, as the researcher might manipulate or categorize it to see its impact on memory recall, which would be the dependent variable.

Dependent Variable

In psychology, the dependent variable is the variable being tested and measured in an experiment and is “dependent” on the independent variable.

A dependent variable represents the outcome or result and can change based on the manipulations of the independent variable. Essentially, it’s the presumed effect in the cause-and-effect relationship being studied.

An example of a dependent variable is depression symptoms, which depend on the independent variable (type of therapy).

In an experiment, the researcher looks for the possible effect on the dependent variable that might be caused by changing the independent variable.

For instance, in a study examining the effects of a new study technique on exam performance, the technique would be the independent variable (as it is being introduced or manipulated), while the exam scores would be the dependent variable (as they represent the outcome of interest that’s being measured).

Examples in Research Studies

For example, we might change the type of information (e.g., organized or random) given to participants to see how this might affect the amount of information remembered.

In this example, the type of information is the independent variable (because it changes), and the amount of information remembered is the dependent variable (because this is being measured).

Independent and Dependent Variables Examples

For the following hypotheses, name the IV and the DV.

1. Lack of sleep significantly affects learning in 10-year-old boys.

IV……………………………………………………

DV…………………………………………………..

2. Social class has a significant effect on IQ scores.

DV……………………………………………….…

3. Stressful experiences significantly increase the likelihood of headaches.

4. Time of day has a significant effect on alertness.

Operationalizing Variables

To ensure cause and effect are established, it is important that we identify exactly how the independent and dependent variables will be measured; this is known as operationalizing the variables.

Operational variables (or operationalizing definitions) refer to how you will define and measure a specific variable as it is used in your study. This enables another psychologist to replicate your research and is essential in establishing reliability (achieving consistency in the results).

For example, if we are concerned with the effect of media violence on aggression, then we need to be very clear about what we mean by the different terms. In this case, we must state what we mean by the terms “media violence” and “aggression” as we will study them.

Therefore, you could state that “media violence” is operationally defined (in your experiment) as “exposure to a 15-minute film showing scenes of physical assault,” and “aggression” as “levels of electric shocks administered to a second ‘participant’ in another room.”

In another example, the hypothesis “Young participants will have significantly better memories than older participants” is not operationalized. How do we define “young,” “old,” or “memory”? “Participants aged between 16 – 30 will recall significantly more nouns from a list of twenty than participants aged between 55 – 70” is operationalized.

The key point here is that we have clarified what we mean by the terms as they were studied and measured in our experiment.

If we didn’t do this, it would be very difficult (if not impossible) to compare the findings of different studies of the same behavior.

Operationalization has the advantage of generally providing a clear and objective definition of even complex variables. It also makes it easier for other researchers to replicate a study and check for reliability.
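
To make this concrete, here is a minimal sketch (with invented data) of the operationalized memory hypothesis above: “young” and “old” are defined as explicit age ranges, and “memory” as the number of nouns recalled from a list of twenty.

```python
def age_group(age):
    """Operational definition: 'young' = 16-30, 'old' = 55-70."""
    if 16 <= age <= 30:
        return "young"
    if 55 <= age <= 70:
        return "old"
    return None  # outside the operational definition, excluded from analysis

# (age, nouns recalled out of 20) -- invented participants
participants = [(22, 14), (27, 12), (60, 9), (68, 11), (45, 13)]

recall = {"young": [], "old": []}
for age, nouns in participants:
    group = age_group(age)
    if group is not None:
        recall[group].append(nouns)

for group, values in recall.items():
    print(group, sum(values) / len(values))  # mean nouns recalled per group
```

Because the definitions are explicit, another researcher could apply exactly the same grouping and measurement rules, which is what makes replication possible.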

For the following hypotheses, name the IV and the DV and operationalize both variables.

1. Women are more attracted to men without earrings than men with earrings.

I.V._____________________________________________________________

D.V. ____________________________________________________________

Operational definitions:

I.V. ____________________________________________________________

2. People learn more when they study in a quiet versus noisy place.

I.V. _________________________________________________________

D.V. ___________________________________________________________

3. People who exercise regularly sleep better at night.

Can there be more than one independent or dependent variable in a study?

Yes, it is possible to have more than one independent or dependent variable in a study.

In some studies, researchers may want to explore how multiple factors affect the outcome, so they include more than one independent variable.

Similarly, they may measure multiple things to see how they are influenced, resulting in multiple dependent variables. This allows for a more comprehensive understanding of the topic being studied.

What are some ethical considerations related to independent and dependent variables?

Ethical considerations related to independent and dependent variables involve treating participants fairly and protecting their rights.

Researchers must ensure that participants provide informed consent and that their privacy and confidentiality are respected. Additionally, it is important to avoid manipulating independent variables in ways that could cause harm or discomfort to participants.

Researchers should also consider the potential impact of their study on vulnerable populations and ensure that their methods are unbiased and free from discrimination.

Ethical guidelines help ensure that research is conducted responsibly and with respect for the well-being of the participants involved.

Can qualitative data have independent and dependent variables?

Yes, both quantitative and qualitative data can have independent and dependent variables.

In quantitative research, independent variables are usually measured numerically and manipulated to understand their impact on the dependent variable. In qualitative research, independent variables can be qualitative in nature, such as individual experiences, cultural factors, or social contexts, influencing the phenomenon of interest.

The dependent variable, in both cases, is what is being observed or studied to see how it changes in response to the independent variable.

So, regardless of the type of data, researchers analyze the relationship between independent and dependent variables to gain insights into their research questions.

Can the same variable be independent in one study and dependent in another?

Yes, the same variable can be independent in one study and dependent in another.

The classification of a variable as independent or dependent depends on how it is used within a specific study. In one study, a variable might be manipulated or controlled to see its effect on another variable, making it independent.

However, in a different study, that same variable might be the one being measured or observed to understand its relationship with another variable, making it dependent.

The role of a variable as independent or dependent can vary depending on the research question and study design.

Print Friendly, PDF & Email


Writing the Conclusion Chapter: the Good, the Bad and the Missing

Denis Robinson

Related Papers

Dr. Khalil A. Agha


Writing a successful academic thesis

Raphael Akeyo

This article offers brief guidance on the effective writing of an academic research thesis, with a focus on the results/findings chapters. It provides step-by-step highlights on how to present data from the field, interpret the findings, corroborate the findings with existing studies, and use theoretical tenets to discuss the findings. The conclusions and recommendations sections are also highlighted.

Journal of English for Academic Purposes

Nasrin Nejad

This paper considers the generic structure of Conclusion chapters in PhD theses or dissertations. From a corpus of 45 PhD theses covering a range of disciplines, chapters playing a concluding role were identified and analysed for their functional moves and steps. Most Conclusions were found to restate purpose, consolidate research space with a varied array of steps, recommend future research and cover practical applications, implications or recommendations. However, a minority were found to focus more on the field than on the thesis itself. These field-oriented Conclusions tended to adopt a problem–solution text structure or, in one case, an argument structure. Variations in focus and structure between disciplines were also found.

Scientific Research Publishing: Creative Education

Dr. Qais Faryadi

I have already discussed the PhD introduction and literature review in detail. In this paper, I discuss the PhD methodology and results, and how to write a stunning conclusion for your thesis. The main objective of this paper is to help PhD candidates understand what a PhD methodology is and to guide them in writing a systematic and meaningful PhD methodology, results and conclusion. The methodology used in this research is a descriptive method, as it deliberates and defines the various parts of the PhD methodology, results and conclusion writing process and elucidates the “how to do” in a very unpretentious and understandable manner. Thus, this paper summarises the various steps of thesis methodology, results and conclusion writing to guide PhD students. This road map is useful guidance, especially for students of social science studies. Additionally, in this paper, methodology writing techniques, procedures and important strategies are explained in a simple manner. This paper adopts a “how-to approach” when discussing a variety of relevant topics such as introduction, formulation of the methodology, variables, research design process, types of sampling, data collection process, interviews, questionnaires, data analysis techniques and so on. Results and conclusions are also discussed in detail, so that PhD candidates can follow the guide clearly. This paper has five parts: Introduction, Literature Review, Methodology, Results and Conclusion. As such, I discuss Methodology, Results and Conclusion as the final assessment of the PhD thesis writing process.

Pamela Olmos

"The conclusions section of a thesis is the last chapter people read and usually the section that leaves the lasting impression. This study presents a framework for the analysis of thesis conclusions at an undergraduate level in the field of humanities, which –as the literature reveals–, lacks an agenda for its analysis at the undergraduate level. A sevenmove generic organization is proposed as a Framework for Undergraduate Thesis Conclusions (FUTC). This framework sheds a light on the complex construction of the thesis conclusions chapter towards its analysis. Moreover, the FUTC shows potentiality for further research, pedagogic implications and applications for genre and writing studies. Moreover, the FUTC shows potentiality for further research, pedagogic implications and applications for genre and writing studies."

Vernon Trafford, Prof Shosh Leshem

This study investigated how candidates claimed to have made an original contribution to knowledge in the conclusion chapters of 100 PhD theses. Documentary analysis was used to discover how this was explained within theses at selected universities in three countries. No other documents were accessed, and neither were candidates, supervisors or examiners contacted. The evidence showed that the function of Discussion and Conclusion chapters was interpreted differently between disciplines and national academic traditions. The relative size of conclusion chapters to other chapters was consistently small. Explicit claims for originality and contributing to knowledge appeared in 54 per cent of theses, thus meeting their universities' stated criteria for PhD awards, but were not adequately explained in 46 per cent of theses. Introduction: As doctoral supervisors and examiners we have recognised an absence of research-based literature on the chapter of conclusions in doctoral theses. Thus, ...

Kofi Amissah

Chapter by chapter summary of my PhD thesis

Theo Lieven

This chapter summarizes the results of the preceding chapters, briefly discusses their implications, and concludes the book.



Research Variables

Research variables, in any scientific experiment or research process, are factors that can be manipulated and measured.


Any factor that can take on different values is a scientific variable, and such factors influence the outcome of experimental research.

Most scientific experiments measure quantifiable factors, such as time or weight, but this is not essential for a component to be classed as a variable.

As an example, most of us have filled in surveys in which a researcher asks questions and asks us to rate the answers. These responses generally have a numerical range, from ‘1 - Strongly Agree’ through to ‘5 - Strongly Disagree’. This type of measurement allows opinions to be statistically analyzed and evaluated.
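As a minimal sketch of this idea, the snippet below (with made-up responses) shows how coded Likert answers become numbers that can be summarized statistically; the data and the 1-to-5 coding are assumptions for illustration only.

```python
# A minimal sketch with hypothetical survey data: coded Likert-scale
# responses (1 = Strongly Agree ... 5 = Strongly Disagree) can be
# summarized with ordinary descriptive statistics.
from collections import Counter
from statistics import mean, median

responses = [1, 2, 2, 3, 1, 4, 2, 5, 3, 2]  # hypothetical answers

print("mean:", mean(responses))        # central tendency of opinion
print("median:", median(responses))    # robust to extreme answers
print("counts:", Counter(responses))   # full distribution of answers
```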


Dependent and Independent Variables

The key to designing any experiment is to look at what research variables could affect the outcome.

There are many types of variables, but the most important, for the vast majority of research methods, are the independent and dependent variables.

The independent variable is the core of the experiment and is isolated and manipulated by the researcher. The dependent variable is the measurable outcome of this manipulation, the results of the experimental design. For many physical experiments, isolating the independent variable and measuring the dependent one is generally easy.

If you designed an experiment to determine how quickly a cup of coffee cools, the manipulated independent variable is time and the dependent measured variable is temperature.
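To make the coffee example concrete, here is a small simulation sketch: time plays the role of the manipulated independent variable and temperature the measured dependent variable. The cooling constant and temperatures are invented values, and Newton's law of cooling is used as the assumed physical model.

```python
import math

# Coffee-cooling sketch: time is the manipulated independent variable,
# temperature the measured dependent variable. Newton's law of cooling
# is assumed: T(t) = T_env + (T0 - T_env) * exp(-k * t).
T_ENV, T0, K = 20.0, 90.0, 0.05  # assumed ambient temp (C), start temp (C), cooling rate

for minutes in range(0, 31, 5):               # manipulated: when we measure
    temp = T_ENV + (T0 - T_ENV) * math.exp(-K * minutes)
    print(f"t = {minutes:2d} min -> T = {temp:.1f} C")  # measured outcome
```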

In other fields of science, the variables are often more difficult to determine and an experiment needs a robust design. Operationalization is a useful tool to measure fuzzy concepts which do not have one obvious variable.


The Difficulty of Isolating Variables

In biology, social science and geography, for example, isolating a single independent variable is more difficult and any experimental design must consider this.

For example, in a social research setting, you might wish to compare the effect of different foods upon hyperactivity in children. The initial research and inductive reasoning lead you to postulate that certain foods and additives contribute to increased hyperactivity. You decide to create a hypothesis and design an experiment to establish whether there is solid evidence behind the claim.

Figure: Reasoning Cycle - Scientific Research

The type of food is an independent variable, as is the amount eaten, the period of time and the gender and age of the child. All of these factors must be accounted for during the experimental design stage. Randomization and controls are generally used to ensure that only one independent variable is manipulated.

To remove the influence of some of these research variables and isolate the process, it is essential to use controls that nullify or negate them.

For example, if you wanted to isolate the different types of food as the manipulated variable, you should use children of the same age and gender.

The test groups should eat the same amount of the food at the same times, and the children should be randomly assigned to groups. This will minimize the physiological differences between children. A control group, acting as a buffer against unknown research variables, might involve some children eating a food type with no known links to hyperactivity.

In this experiment, the dependent variable is the level of hyperactivity, with the resulting statistical tests easily highlighting any correlation. Depending upon the results, you could try to measure a different variable, such as gender, in a follow-up experiment.
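A short sketch of the randomization step described above is given below; the participant names, group labels, and group count are hypothetical, and round-robin assignment after shuffling is just one simple way to build equal-sized groups.

```python
import random

random.seed(42)  # fixed seed so the assignment is reproducible

# Hypothetical pool of participants, assumed already matched on age and gender
children = [f"child_{i}" for i in range(1, 31)]
groups = {"additive_food": [], "plain_food": [], "control": []}

random.shuffle(children)                        # the randomization step
for i, child in enumerate(children):
    group_name = list(groups)[i % len(groups)]  # round-robin into equal groups
    groups[group_name].append(child)

for name, members in groups.items():
    print(name, "->", len(members), "children")
```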

Converting Research Variables Into Constants

Ensuring that certain research variables are controlled increases the reliability and validity of the experiment by ensuring that other causal effects are eliminated. This safeguard makes it easier for other researchers to repeat the experiment and comprehensively test the results.

What you are trying to do, in your scientific design, is to change most of the variables into constants, isolating the independent variable. Any scientific research does contain an element of compromise and inbuilt error, but eliminating other variables will ensure that the results are robust and valid.


Martyn Shuttleworth (Aug 9, 2008). Research Variables. Retrieved May 05, 2024 from Explorable.com: https://explorable.com/research-variables


PHILO-notes


What are Variables and Why are They Important in Research?

In research, variables are crucial components that help to define and measure the concepts and phenomena under investigation. A variable is any characteristic or attribute that can vary or change in some way. Variables can be measured, manipulated, or controlled to investigate the relationship between different factors and their impact on the research outcomes. In this essay, I will discuss the importance of variables in research, highlighting their role in defining research questions, designing studies, analyzing data, and drawing conclusions.

Defining Research Questions

Variables play a critical role in defining research questions. Research questions are formulated based on the variables that are under investigation. These questions guide the entire research process, including the selection of research methods, data collection procedures, and data analysis techniques. Variables help researchers to identify the key concepts and phenomena that they wish to investigate, and to formulate research questions that are specific, measurable, and relevant to the research objectives.

For example, in a study on the relationship between exercise and stress, the variables would be exercise and stress. The research question might be: “What is the relationship between the frequency of exercise and the level of perceived stress among young adults?”

Designing Studies

Variables also play a crucial role in the design of research studies. The selection of variables determines the type of research design that will be used, as well as the methods and procedures for collecting and analyzing data. Variables can be independent, dependent, or moderator variables, depending on their role in the research design.

Independent variables are the variables that are manipulated or controlled by the researcher. They are used to determine the effect of a particular factor on the dependent variable. Dependent variables are the variables that are measured or observed to determine the impact of the independent variable. Moderator variables are the variables that influence the relationship between the independent and dependent variables.

For example, in a study on the effect of caffeine on athletic performance, the independent variable would be caffeine, and the dependent variable would be athletic performance. The moderator variables could include factors such as age, gender, and fitness level.
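As an illustration of how a moderator can be examined statistically, the sketch below fits a regression with an interaction term on synthetic data. The variable names, effect sizes, and the use of ordinary least squares are all assumptions made for the example, not part of the essay above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Synthetic data: caffeine dose (independent variable), fitness level
# (moderator), athletic performance (dependent variable).
caffeine = rng.uniform(0, 6, n)     # hypothetical dose in mg/kg
fitness = rng.normal(50, 10, n)     # hypothetical fitness score
performance = (
    60 + 2.0 * caffeine + 0.5 * fitness
    + 0.3 * caffeine * (fitness - 50) / 10   # built-in moderation effect
    + rng.normal(0, 5, n)                    # random noise
)
df = pd.DataFrame({"caffeine": caffeine, "fitness": fitness,
                   "performance": performance})

# "caffeine * fitness" expands to both main effects plus their
# interaction; a significant interaction coefficient is the
# statistical signature of moderation.
model = smf.ols("performance ~ caffeine * fitness", data=df).fit()
print(model.params)
```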

Analyzing Data

Variables are also essential in the analysis of research data. Statistical methods are used to analyze the data and determine the relationships between the variables. The type of statistical analysis that is used depends on the nature of the variables, their level of measurement, and the research design.

For example, if the variables are categorical or nominal, chi-square tests or contingency tables can be used to determine the relationships between them. If the variables are continuous, correlation analysis or regression analysis can be used to determine the strength and direction of the relationship between them.
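A brief sketch of both cases follows, using SciPy on invented numbers: a chi-square test for two categorical variables and a Pearson correlation for two continuous ones. The variables and values are hypothetical stand-ins chosen for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency, pearsonr

# Categorical case: a hypothetical 2x2 contingency table, e.g.
# rows = exercised regularly (yes/no), cols = high stress (yes/no).
table = np.array([[30, 20],
                  [15, 35]])
chi2, p_cat, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_cat:.4f}")

# Continuous case: hypothetical weekly exercise hours vs. a
# perceived-stress score; Pearson's r gives strength and direction.
exercise = np.array([0, 1, 2, 3, 4, 5, 6, 7])
stress = np.array([28, 26, 24, 22, 21, 18, 16, 15])
r, p_cont = pearsonr(exercise, stress)
print(f"Pearson r = {r:.2f}, p = {p_cont:.4f}")
```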

Drawing Conclusions

Finally, variables are crucial in drawing conclusions from research studies. The results of the study are based on the relationship between the variables and the conclusions drawn depend on the validity and reliability of the research methods and the accuracy of the statistical analysis. Variables help to establish the cause-and-effect relationships between different factors and to make predictions about the outcomes of future events.

For example, in a study on the effect of smoking on lung cancer, the independent variable would be smoking, and the dependent variable would be lung cancer. The conclusion would be that smoking is a risk factor for lung cancer, based on the strength and direction of the relationship between the variables.
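To illustrate how the strength of such a relationship might be quantified, the sketch below computes an odds ratio from a hypothetical smoking/lung-cancer 2x2 table; the counts are invented, and the odds ratio is only one of several possible effect measures.

```python
# Hypothetical 2x2 table: rows = smoker (yes/no), cols = lung cancer (yes/no)
smoker_cancer, smoker_healthy = 90, 910
nonsmoker_cancer, nonsmoker_healthy = 10, 990

# Odds ratio: odds of cancer among smokers vs. among non-smokers
odds_smoker = smoker_cancer / smoker_healthy
odds_nonsmoker = nonsmoker_cancer / nonsmoker_healthy
odds_ratio = odds_smoker / odds_nonsmoker

print(f"odds ratio = {odds_ratio:.1f}")  # > 1 suggests smoking raises risk
```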

In conclusion, variables play a crucial role in research across different fields and disciplines. They help to define research questions, design studies, analyze data, and draw conclusions. By understanding the importance of variables in research, researchers can design studies that are relevant, accurate, and reliable, and can provide valuable insights into the phenomena under investigation. Therefore, it is essential to consider variables carefully when designing, conducting, and interpreting research studies.


E-scooter-related dental injuries: a two-year retrospective review

Junaid Rashid, Rajeevan Sritharan, Sophie Wu & Kevin McMillan

British Dental Journal (2024)


Introduction: In June 2020, the United Kingdom (UK) published guidance on electric scooter (e-scooter) use to ease transport congestion and reduce pollution. This study aims to examine dental injuries sustained during the two years following initiation of the trial.

Methods: The research was conducted at a UK, Level 1, supra-regional major trauma centre. All eligible patient records were analysed to identify e-scooter-related dental injuries to the following regions: teeth, periodontium, alveolus, palate, tongue, floor of mouth, frenum, buccal mucosa and lips. To assess significant associations between recorded variables, a Pearson's chi-square test was utilised.

Results: Of the 32 patients who experienced a total of 71 dental injuries, 46.5% (n = 33) affected teeth, predominantly upper central incisors (n = 17). ‘Lacerations’ (n = 14) and ‘lip’ (n = 11) were the most common type and site of soft tissue injuries, respectively. Unprovoked falls by riders accounted for 53.1% (n = 17) of the injuries. There was an overall increase in e-scooter-related dental injuries throughout the two-year period.

Conclusion: E-scooters have introduced an additional source of dental trauma. It is imperative that healthcare professionals can also identify signs of head and non-dental injuries when managing such patients. Further studies are warranted, allowing for better informed and optimised dental public health interventions.

E-scooters are a new form of transport that can be the cause of hard tissue and soft tissue dental injuries.

E-scooter-related dental injuries are often related to head injuries, non-dental injuries and intoxication.

The rise in e-scooter-related dental injuries over the two-year period underscores the need for government-instigated e-scooter safety precautions.



Author information

Authors and affiliations

Dental Core Trainee, Oral and Maxillofacial Surgery, Department of Oral and Maxillofacial Surgery, Queen Elizabeth Hospital Birmingham, Mindelsohn Way, Birmingham, B15 2GW, UK

Junaid Rashid & Sophie Wu

Junior Specialist Dentist, Oral and Maxillofacial Surgery, Department of Oral and Maxillofacial Surgery, Queen Elizabeth Hospital Birmingham, Mindelsohn Way, Birmingham, B15 2GW, UK

Rajeevan Sritharan

Consultant in Oral and Maxillofacial Surgery, Department of Oral and Maxillofacial Surgery, Queen Elizabeth Hospital Birmingham, Mindelsohn Way, Birmingham, B15 2GW, UK

Kevin McMillan


Contributions

Junaid Rashid: data extraction, data analysis, data interpretation, drafting of the article and final approval. Rajeevan Sritharan: data extraction, data analysis, data interpretation, drafting of the article and final approval. Sophie Wu: conception of idea, data analysis, data interpretation, illustrations, drafting of the article and final approval. Kevin McMillan: conception of idea, data interpretation, critical revisions, final approval, and guarantor of the manuscript.

Corresponding author

Correspondence to Junaid Rashid.

Ethics declarations

The authors report no conflicts of interest. Formal ethical approval was not required as this was a retrospective cohort study. Approval was granted by the Clinical Audit and Research Management System which allowed a unique ID to be created for each of the two years of the study. This ID allowed a health informatics request to be completed to identify suitable patients from a database which was retrospectively analysed. The collected data was completely anonymised and non-identifiable, thus not requiring patient permissions.

Data availability

The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.


About this article

Cite this article

Rashid, J., Sritharan, R., Wu, S. et al. E-scooter-related dental injuries: a two-year retrospective review. Br Dent J (2024). https://doi.org/10.1038/s41415-024-7345-4

Download citation

Received : 15 September 2023

Revised : 14 December 2023

Accepted : 18 December 2023

Published : 01 May 2024

DOI : https://doi.org/10.1038/s41415-024-7345-4


  • Open access
  • Published: 30 April 2024

Extent, transparency and impact of industry funding for pelvic mesh research: a review of the literature

Angela Coderre-Ball & Susan P. Phillips (ORCID: orcid.org/0000-0003-4798-1742)

Research Integrity and Peer Review volume 9, Article number: 4 (2024)


Conflicts of interest inherent in industry funding can bias medical research methods, outcomes, reporting and clinical applications. This study explored the extent of funding provided to American physician researchers studying surgical mesh used to treat uterine prolapse or stress urinary incontinence, and whether that funding was declared by researchers or influenced the ethical integrity of resulting publications in peer-reviewed journals.

Publications identified via a PubMed search (2014–2021) of the terms mesh and pelvic organ prolapse or stress urinary incontinence, and with at least one US physician author, were reviewed. Using the CMS Open Payments database, industry funding received by those MDs in the year before, of, and after publication was recorded, as were each study’s declarations of funding and 14 quality measures.

Fifty-three of the 56 studies reviewed had at least one American MD author who received industry funding in the year of, or one year before or after, publication. For 47 articles this funding was not declared. Of 247 physician authors, 60% received more than $100, while 13% received between $100,000 and $1,000,000, of which approximately 60% was undeclared. While 57% of the studies reviewed explicitly concluded that mesh was safe, only 39% of outcomes supported this. Neither the quality indicator of follow-up duration nor overall statements as to mesh safety varied with declaration status.

Conclusions

Journal editors’ guidelines on declaring conflicts of interest are not being followed. Financial involvement of industry in mesh research is extensive and often undeclared, and it may shape the quality of, and conclusions drawn from, that research, resulting in overstated benefit and overuse of pelvic mesh in clinical practice.

Peer Review reports

Introduction

When medical research and vested interest collide, objectivity, research integrity, and best clinical practices are sometimes the victims. Compromise to objectivity can arise via ghost management of research [ 1 ], that is by direct involvement of industry personnel, or indirectly through industry transfers of honoraria, gratuities, or speaker payments made to independent researchers [ 2 ]. Circumstances such as these, that “create a risk that judgments or actions regarding a primary interest will be unduly influenced by a secondary interest are defined as conflicts of interest (COI)” [ 3 ]. COI stemming from industry funding can, although do not always [ 4 ], bias design, recruitment, conduct, choice of outcome measures, or reporting, all of which have the potential to distort study findings and undermine medical practice [ 5 , 6 , 7 ]. The United States Centers for Medicare & Medicaid Services Open Payments [ 8 ] database documents any industry payment of at least $10 and annual payments of $100 or more made to American physician researchers since 2013. Its creation has facilitated identifying a portion of corporate support for medical research.

We wished to examine the extent, accuracy and implications of COI reporting among authors studying the effectiveness and safety of one particular medical device, pelvic mesh. The CMS Open Payments database described above enables this examination although only for authors who were or are US physicians. Surgical mesh was first used in hernia surgery in the 1950s [ 9 ] and has become the standard of care for hernia repairs, although controversy remains [ 10 ]. By the late 1990s, surgical mesh was routinely being inserted trans-vaginally to treat pelvic organ prolapse (POP) and stress urinary incontinence (SUI). This repurposing required no approval in the US because the Food and Drug Administration’s (FDA) 510k route grants automatic authorization for products deemed to be equivalent to predicate devices already in use [ 11 , 12 ]. Prior to 1976 the FDA did not require testing of any biomedical devices, meaning surgical mesh had never undergone pre-market testing [ 13 ]. Studies of success, failure and safety of both hernia and pelvic mesh are, therefore, generally retrospective reviews tracking outcomes of use in patients.

The FDA estimates that one in eight women (in the US) undergo surgery to repair uterine prolapse [ 14 ]. Post-market evidence from peer-reviewed journals has generally endorsed pelvic mesh to be a successful treatment for POP and SUI [ 15 ]. At the same time there are reports from an unknown proportion of female mesh recipients questioning that success [ 16 , 17 ]. Commentaries have noted the close links among industry, researchers, surgeons and professional organizations that examine or voice support for pelvic mesh use [ 18 ]. Two studies of mesh used for hernia repairs raise questions about the evidence supporting its success and safety in that setting. First, despite many accounts of the value of mesh for hernia repair, none has reported on women, specifically, or considered that women’s greater immune response to foreign materials might predispose to disproportionate harm from insertion of the product [ 12 , 19 ]. Second, Sekigami and colleagues [ 20 ] determined that the majority of studies of mesh used for hernia repairs did not accurately report COI.

Whether and how industry funding is entwined with published research on pelvic mesh is unknown. As noted above, what is known is that such funding compromises medical research in general [ 21 ]. Our objectives were, therefore, to: (1) examine the scope of industry funding provided to US physician-authors of pelvic mesh research; (2) determine the proportions of that funding that were declared or undeclared and; (3) explore whether the methodologic strength and conclusions of industry funded studies differed from those without industry support.

Study selection/data extraction

We undertook a cross-sectional review of publications identified in a PubMed search in October 2021. All studies related to surgical mesh used in POP and SUI repairs were initially identified. Included were clinical trials and observational studies with at least one American physician author, and that examined post-surgical outcomes for polypropylene mesh inserted for the treatment of POP or SUI. We excluded studies with no original data, no US physician authors, those whose main purpose was to compare surgical techniques (e.g., single incision mesh vs. sacrospinous ligament fixation), studies using only autologous material or non-polypropylene mesh, and studies that only examined peri-operative outcomes.

Search terms included (POP[title/abstract] OR SUI[title/abstract]) AND mesh[title/abstract]. Studies published between January 1, 2014, and September 30, 2021 were included. This time frame matched available entries in the CMS Open Payments database (see below). We chose the year of publication rather than year of acceptance as not all studies documented their acceptance date. Included were studies from any peer-reviewed journal. One author (ACB) screened studies for inclusion/exclusion criteria, and, if questions arose, discussion occurred between the two authors.

For each study, we extracted the authors’ and journal’s names, the date of acceptance where available and of publication, conflict-of-interest statements, funding declarations, the study’s inclusion and exclusion criteria, the outcome scales or measures, outcomes, and follow-up duration. We also determined the journal’s impact factor (April 2022). This information for 10 randomly selected studies was independently abstracted by both authors who then discussed and compared results to ensure accuracy and consistency. One author then extracted data for the remaining 48 studies. These data were then reviewed by both authors, together (see Outcomes, below).

Open Payments

For each physician author in each study, we searched the CMS Open Payments database to collect information on the types of payments (general, research, associated funding, and ownership and investment) made from all drug and device companies, the US dollar amount of each payment, and the companies making the payments. We included all payments authors received during the year before, the year of, and the year following publication to best ensure that all author payments that could be related to a study were captured. Payments totaling less than $100 over the three years were entered as ‘no payment’. Small payments can influence physicians’ research and clinical behavior; however, such amounts were not included, to avoid counting modest sums or gratuities that were likely unrelated to research.
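A minimal sketch of this screening rule, on invented records, is shown below; the record format, years, and values are assumptions, and real Open Payments data would come from the CMS files rather than an in-memory list.

```python
from collections import defaultdict

# Hypothetical (author, year, amount) payment records; real data would
# be drawn from the CMS Open Payments files.
payments = [
    ("author_a", 2018, 45.00), ("author_a", 2019, 30.00),
    ("author_b", 2018, 12000.00), ("author_b", 2020, 250.00),
]
publication_year = 2019
window = {publication_year - 1, publication_year, publication_year + 1}

# Sum each author's payments over the year before, of, and after publication
totals = defaultdict(float)
for author, year, amount in payments:
    if year in window:
        totals[author] += amount

# Totals under $100 across the three years count as 'no payment'
for author, total in sorted(totals.items()):
    print(author, "->", "no payment" if total < 100 else f"${total:,.2f}")
```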

Findings assessed

The key findings examined were the extent of industry funding of research and the dollar difference between declared and actual industry payment received. First we tallied the number of authors and papers with COI, whether declared or not. We then examined the declaration status of each author with a COI. This was recorded as no discrepancy if that COI was declared. We then counted how many authors made no declaration or declared that they did not have a COI and recorded each author’s total payment from all categories over the three years. We did not examine each journal’s declaration of COI requirements and authors’ compliance with these, nor could we determine whether aspects of authors’ declarations were redacted by specific journals.

To assess the strength of each study we examined the following. We determined the duration of patient follow-up post-surgery. This measure was chosen because complications from pelvic mesh continue to arise years after insertion. If studies did not explicitly state a mean or median follow-up in their results, we accepted the follow-up duration as the timeframe indicated in the methods/design. If no measure or statement was present, this was left blank. The use of objective (e.g., POP-Q) and/or subjective (e.g., UDI-6, pain) scales and/or outcomes was tracked for each study. Critical appraisal of each included study was assessed using a purpose-built data extraction and appraisal tool (see Table 1) based on the Joanna Briggs Institute Checklist for Cohort Studies [ 22 ]. Fourteen questions appraised methodology, including, for example, “Are the authors’ conclusions supported by the findings?” and “Did the authors make a statement that mesh was safe to use?” To ensure reliability, both authors critically appraised each study independently and then reviewed and discussed all appraisals together to resolve differences and reach consensus. Evaluation of whether authors’ conclusions were supported by the findings (Table 1, question ‘n’) was decided based on review of all the quality dimensions and discussion between both authors. For example, if a study made a positive conclusion about the effectiveness of mesh but only followed patients for a short time (e.g., less than 12 months) and without a comparison group, it would be given a score of “no” or “unclear” for question ‘n’. Authors were blinded to information about funding when these quality indicators were recorded. Only after appraising and recording the strength of each study was this information merged with funding data.

Statistical analysis

Univariate analyses were used to identify study characteristics that aligned with a discrepancy between declared and undeclared COI. Guided by previous research on COI among authors studying hernia mesh [ 20 ], we included impact factor (continuous), follow-up time (continuous), author’s role (e.g., first author, contributing author, senior author; categorical), and recommendations of mesh safety and effectiveness (categorical, yes/no). We report the difference in payments received between authors who declared and those who did not declare COI. The relationship between categorical variables (e.g., author role) and the presence of undeclared COI was determined using chi-square testing. Logistic regression was used to determine the association of continuous variables (e.g., impact factor, follow-up time) with whether or not there was a discrepancy between reported and discovered COI (from CMS Open Payments).
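For readers who want to see the shape of such an analysis, here is a small sketch on simulated study-level data, using a chi-square test for a categorical predictor and logistic regression (reported as odds ratios) for continuous ones. Column names, values, and model choices are assumptions for illustration, not the authors' actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n = 56  # one simulated row per study, mirroring the review's unit of analysis

df = pd.DataFrame({
    "undeclared_coi": rng.integers(0, 2, n),     # any author with undeclared COI?
    "impact_factor": rng.uniform(1, 8, n),       # journal impact factor
    "followup_years": rng.uniform(0.5, 5, n),    # mean follow-up duration
    "senior_author_coi": rng.integers(0, 2, n),  # a categorical predictor
})

# Chi-square test for a categorical predictor vs. undeclared COI
table = pd.crosstab(df["senior_author_coi"], df["undeclared_coi"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square p = {p:.3f}")

# Logistic regression for continuous predictors; exponentiated
# coefficients are odds ratios, comparable to the ORs reported below.
logit = smf.logit("undeclared_coi ~ impact_factor + followup_years",
                  data=df).fit(disp=False)
print(np.exp(logit.params))
```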

Results

Five hundred and sixty-two studies were retrieved from the PubMed search. After an initial review, 56 of these were found to meet the inclusion criteria (see Fig. 1: Overview of retrieved articles, screening process, and final included studies). The majority of the excluded studies had no author whose data would appear in Open Payments (i.e., no American physician author).

Scope and declaration of industry funding: authors

There were a total of 299 authors of the 56 studies included in the full review. After excluding non-physicians and non-American physician authors as they would not be listed in the Open Payments database, 247 American MD authors remained and were included. For the remainder of the report, we only include these American MD authors in analyses.

Figure 1: Overview of retrieved articles, screening and final included studies

Of the 247 authors across all 56 included studies, 149 (60%) received payments totaling more than $100. Eighty-one authors’ (33%) explicit declarations that they did not have COI aligned with Open Payments documentation of payments of less than $100 over the relevant three-year timeframe examined. An additional 12 authors (5%) made no declaration and did not receive payments totaling more than $100. Twenty-eight authors (11%) explicitly declared COI and did receive more than $100 in payments. One hundred and one authors (40%) explicitly declared that they had no COI but received payments, 20 (8%) did not make any declaration and received payments, and five authors (2%) declared a conflict although no payments were recorded in Open Payments.

Examining the dollar value of payments received, we found that the largest group of undeclared payments (36%, n = 54) was for amounts between $100 and $1,000. The remaining undeclared payments were between $1,000 and $10,000 (24%, n = 36), between $10,000 and $100,000 (13%, n = 20), and over $100,000 (7%, n = 11).

The majority of payments for each of the four dollar amounts were undeclared (see Fig.  2 : Proportions and amounts of declared and undeclared payments received by authors).

Figure 2: Proportions and amounts of declared and undeclared payments received by authors

Scope and declaration of industry funding: studies

Of the 56 studies reviewed, 53 (95%) had at least one American physician author with COI (declared or not). Thirty-nine (70%) included at least two American MD authors with COI, and 28 (54% of the 52 studies with 3 or more authors) had three or more American MD authors with COI.

Considering only non-declared COI, we found that 47 (84%) of studies included at least one American MD author with an undeclared COI, while 34 (61%) had at least two such authors, and 20 studies (38% of articles with more than 2 authors) had three or more authors with COI. Only three (5%) studies had no physician authors with any conflicts of interest (declared or not).

Study characteristics aligned with undeclared COI

We next examined alignment of the dollar amount of industry funding received and any of the following: declaring a COI; the duration of follow-up in a study; or the journal’s impact factor.

The median payment for US authors was $18,678 (IQR ~ $5,000–$99,000) for those with declared COI and $158 (IQR ~ $0–$1,500) among authors who did not declare a COI but had one (Cohen’s d effect size estimate = 0.39, 95% CI: 0.02 to 0.77).

Means and medians of the length of time patients were followed after mesh implant surgery were reported in 48 of 56 studies. Median follow-up was 1.0 year, with a mean of 1.9 years. Follow-up duration was not associated with whether or not a study had at least one author with undeclared COI (OR = 0.82, 95% CI: 0.54 to 1.17). The small number of studies without COI (n = 3) precluded comparing follow-up duration between them and the 53 with COI.

The impact factors of the journals publishing the studies were also examined to see if there was any relationship with the number of undeclared COI. A journal’s impact factor did not predict whether or not a study had at least one author with undeclared COI (OR = 0.98, 95% CI: 0.75 to 1.30).

There was a trend, although no statistically significant association, between being the lead or senior author and the presence of COI (p = 0.18): 65% of first authors had COI (declared or not), as did 56% of middle authors and 69% of senior authors.

Quality appraisal

We assessed the quality of each study using the 14 measures listed in Table 1. Only 26% (n = 14) of articles included a comparison group, partially reflecting the different study designs included in the review, and of those, 40% had comparable patients (e.g., in age) in the intervention and control groups. The majority of studies (80%) did identify at least one patient characteristic, such as age or obesity, that could affect the success of mesh as a treatment. Only 28% (n = 13) of these studies, however, utilized these data in their analyses. The majority of publications explicitly stated that mesh was safe and beneficial (n = 32, 57%), although only 39% (n = 22) of all articles’ methods and outcomes supported these conclusions (Table 1). The small number of studies with no COI (3 of 56) precluded comparisons of quality between groups defined by the presence or absence of COI.

Discussion

Ninety-five percent of the 56 articles reviewed had at least one author, among those who could be assessed using Open Payments, who received industry funding. For the majority of these articles (47 of 53), this funding was undeclared. COI among American MD authors studying pelvic mesh are substantial (60% of authors), and most of these COI (81%) are undeclared. This level of unacknowledged industry support aligns with findings of a meta-analysis of studies of undisclosed industry support to physicians in general [ 7 ] and of clinical practice guideline authors’ COI [ 23 ]. It may also explain why, despite patient reports and legal findings of harm, the scholarly literature tends to endorse pelvic mesh as effective and safe.

In 2009, the International Committee of Medical Journal Editors (ICMJE) introduced requirements for detailed disclosure of all relevant COI by any author [ 24 ]. All articles in this review were published well after this. Observed non-compliance could arise from journal laxity, researchers’ sense of impunity, conviction that they are not swayed by industry largess, or researchers convincing themselves that the funding received was unrelated to the reported research. Thirty-six percent of all authors received undeclared industry support of less than $1,000. Some might consider that smaller levels of funding, which may not have been offered explicitly for research, are unlikely to sway physicians and should, therefore, be exempt from required reporting. In reality, even small gifts and gratuities have repeatedly been found to ‘win over’ physicians’ research and practice [ 7 ]. In our study, industry funding had an equivocal impact on research quality and reported outcomes. The majority of publications explicitly stated that mesh was safe and beneficial (57%, n = 32), although only 10 of those 32 substantiated this with evidence. The median follow-up time of one year post-op would have missed long-term complications. Such complications and failures of pelvic mesh are known to arise years after its insertion. For this reason, follow-up duration was chosen as a key indicator of study validity. As most studies were retrospective chart reviews, longer follow-up duration could have been built into research designs. Indicators of poor research quality did not vary with authors’ declarations of industry support. The near-ubiquitous presence of industry funding, however, precluded assessment of quality differences in articles with and without COI, and left us unable to fully address aim 3 of this study.

Limitations

The ability to track COI of all authors rather than only US physicians would help clarify the full extent and impact of industry funding on study design, findings, and interpretation of results. Open Payments data only include physicians licenced in the US. The database is verified and frequently updated but does not presume to include all payments made [ 25 ]. Accurate tracking of funding is further compromised because device manufacturers are known to violate reporting requirements [ 26 ]. Payments made to researchers’ family members, research or office staff, PhDs, institutions rather than individuals, etc., and any payments originating outside the US cannot, at present, be tracked. By extracting payment information for the year preceding, the year of and the year after publication we have attempted to identify all payments relevant to the articles studied, but may have missed some industry funding for included studies or captured funding for unrelated projects. It is also possible that funding received was not linked to the reviewed publication. Journal non-compliance with ICMJE requirements for declaring COI may have removed the reporting requirement for some authors and some funding. The overall impact of all these limitations may be an underestimation of the extent of undeclared industry funding to researchers.

Although we attempted to standardize our appraisal of articles, quality appraisal, as the name suggests, involves qualitative elements. The authors first rated each article separately then engaged in discussion to reach consensus, but acknowledge that the ‘objectivity’ of this process could be questioned.

Industry funding for medical research is, at present, substantial and can be a source of innovation, but it also needs to be ethical and transparent. During the timeframe studied, the extent of industry involvement in research explicitly justifying the merit of pelvic mesh was high, while findings were at odds with concurrent FDA warnings of risk [ 14 ]. Equally important, self-reporting of financial COI by researchers appears to be unreliable and often contravenes requirements agreed upon by international medical journal editors. Industry funding, both declared and, to a greater extent, undeclared, permeates almost all research on pelvic mesh and almost certainly shapes the quality of and conclusions drawn from those studies. This biased evidence in turn skews the risk-benefit picture and potentially drives overuse of pelvic mesh in clinical practice.

Availability of data and materials

All data used and generated can be made available by the corresponding author upon reasonable request.

Abbreviations

CMS Open Payments: United States Centers for Medicare & Medicaid Services Open Payments

COI: Conflicts of interest

FDA: US Food and Drug Administration

ICMJE: International Committee of Medical Journal Editors

IQR: Interquartile range

MD: Medical doctor

POP: Pelvic organ prolapse

SUI: Stress urinary incontinence

Sismondo S, Doucet M. Publication ethics and the ghost management of medical publication. Bioethics. 2010;24(6):273–283. https://doi.org/10.1111/j.1467-8519.2008.01702.x .

Ioannidis JPA. Why most clinical research is not useful. PLoS Med. 2016;13(6):e1002049. https://doi.org/10.1371/journal.pmed.1002049 .

Institute of Medicine Committee on Standards for Developing Trustworthy Clinical Practice Guidelines. Clinical practice guidelines we can trust. Washington (DC): National Academy of Sciences; 2011.

Via GG, Brueggeman DA, Lyons JG, Frommeyer TC, Froehle AW, Krishnamurthy AB. Funding has no effect on studies evaluating viscosupplementation for knee osteoarthritis: a systematic review of bibliometrics and conflicts of interest. J Orthop. 2023;39:18–29. https://doi.org/10.1016/j.jor.2023.03.015 .

Ahn R, Woodbridge A, Abraham A, et al. Financial ties of principal investigators and randomized controlled trial outcomes: cross sectional study. BMJ (Online). 2017;356:i6770. https://doi.org/10.1136/bmj.i6770 .

Lundh A, Lexchin J, Mintzes B, Schroll JB, Bero L. Industry sponsorship and research outcome. Cochrane Database Syst Reviews. 2017;2017(2):MR000033. https://doi.org/10.1002/14651858.MR000033.pub3 .

Taheri C, Kirubarajan A, Li X, Lam ACL, Taheri S, Olivieri NF. Discrepancies in self-reported financial conflicts of interest disclosures by physicians: a systematic review. BMJ Open. 2021;11(4):e045306. https://doi.org/10.1136/bmjopen-2020-045306 .

United States Centers for Medicare & Medicaid Services. Open Payments. ( https://www.cms.gov/openpayments ).

Sanders DL, Kingsnorth AN. From ancient to contemporary times: a concise history of incisional hernia repair. Hernia: J Hernias Abdom Wall Surg. 2012;16(1):1–7. https://doi.org/10.1007/s10029-011-0870-5 .

Robinson TN, Clarke JH, Schoen J, Walsh MD. Major mesh-related complications following hernia repair: events reported to the Food and Drug Administration. Surg Endosc. 2005;19(12):1556–60. https://doi.org/10.1007/s00464-005-0120-y .

Heneghan C, Aronson JK, Goldacre B, Mahtani KR, Plüddemann A, Onakpoya I. Transvaginal mesh failure: lessons for regulation of implantable devices. BMJ (Online). 2017a;359:j5515. https://doi.org/10.1136/bmj.j5515 .

Phillips SP, Gee K, Wells L. Medical devices, invisible women, harmful consequences. Int J Environ Res Public Health. 2022;19(21):14524. https://www.mdpi.com/1660-4601/19/21/14524 .

Heneghan CJ, Goldacre B, Onakpoya I, et al. Trials of transvaginal mesh devices for pelvic organ prolapse: a systematic database review of the US FDA approval process. BMJ open. 2017;7(12):e017125. https://doi.org/10.1136/bmjopen-2017-017125 .

US Food and Drug Administration. FDA takes action to protect women’s health, orders manufacturers of surgical mesh intended for transvaginal repair of pelvic organ prolapse to stop selling all devices. 2019.

Rubin R. Mesh implants for women: scandal or Standard of Care? JAMA: J Am Med Association. 2019;321(14):1338–40. https://doi.org/10.1001/jama.2019.0940 .

Motamedi M, Carter SM, Degeling C. Women’s experiences of and perspectives on transvaginal mesh surgery for stress urine incontinency and pelvic organ prolapse: a qualitative systematic review. The Patient: Patient-Centered Outcomes Research. 2022;15(2):157–69. https://doi.org/10.1007/s40271-021-00547-7 .

Taylor D. The failure of polypropylene surgical mesh in vivo. J Mech Behav Biomed Mater. 2018;88:370–6. https://doi.org/10.1016/j.jmbbm.2018.08.041 .

Gornall J. Vaginal mesh implants: putting the relations between UK doctors and industry in plain sight. BMJ. 2018;363:k4164. https://doi.org/10.1136/bmj.k4164 .

Klein SL, Flanagan KL. Sex differences in immune responses. Nat Rev Immunol. 2016;16(10):626–38. https://doi.org/10.1038/nri.2016.90 .

Sekigami Y, Tian T, Char S, et al. Conflicts of interest in studies related to mesh use in ventral hernia repair and abdominal wall reconstruction. Ann Surg. 2021;276(5):e571–6. https://doi.org/10.1097/SLA.0000000000004565 .

Chimonas S, Mamoor M, Zimbalist SA, Barrow B, Bach PB, Korenstein D. Mapping conflict of interests: scoping review. BMJ. 2021;375:e066576. https://doi.org/10.1136/bmj-2021-066576 .

Moola S, Munn Z, Tufanaru C, Aromataris E, et al. Chapter 7: Systematic reviews of etiology and risk. In: Aromataris E, Munn Z, eds. JBI Manual for Evidence Synthesis. 2020.

Mooghali M, Glick L, Ramachandran R, Ross JS. Financial conflicts of interest among US physician authors of 2020 clinical practice guidelines: a cross-sectional study. BMJ open. 2023;13(1):e069115–069115. https://doi.org/10.1136/bmjopen-2022-069115 .

Drazen JM, Van Der Weyden MB, Sahni P, et al. Uniform format for disclosure of competing interests in ICMJE journals. JAMA: J Am Med Association. 2010;303(1):75–6. https://doi.org/10.1001/jama.2009.1542 .

Department of Health and Human Services Office of Inspector General. Open Payments Data: Review of Accuracy, Precision, and Consistency in Reporting. 2018. ( https://oig.hhs.gov/oei/reports/oei-03-15-00220.pdf ).

Adashi EY, Cohen IG. Enforcement of the Physician payments Sunshine Act: Trust and verify. JAMA: J Am Med Association. 2021;326(9):807–8. https://doi.org/10.1001/jama.2021.13156 .


Acknowledgements

Not applicable.

Funding

Funding was received from an internal Queen’s University grant (Wicked Ideas).

Author information

Authors and affiliations

Centre for Studies in Primary Care, Queen’s University, Kingston, Canada

Angela Coderre-Ball & Susan P. Phillips

Family Medicine and Public Health Sciences, Queen’s University, Kingston, Canada

Susan P. Phillips


Contributions

Both authors contributed to all aspects of the study.

Corresponding author

Correspondence to Susan P. Phillips.

Ethics declarations

Ethics approval and consent to participate

Neither ethics approval nor consent for publication was applicable, given the study design.

Consent for publication

The authors consent to publication of this paper.

Competing interests

Authors have no financial or non-financial conflicts of interest to declare.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Coderre-Ball, A., Phillips, S.P. Extent, transparency and impact of industry funding for pelvic mesh research: a review of the literature. Res Integr Peer Rev 9 , 4 (2024). https://doi.org/10.1186/s41073-024-00145-9

Download citation

Received : 18 September 2023

Accepted : 09 April 2024

Published : 30 April 2024

DOI : https://doi.org/10.1186/s41073-024-00145-9


Keywords

  • Pelvic mesh
  • Industry funding
  • Research methods
  • Uterine prolapse, stress urinary incontinence, women's health


Patient Safety Network


Technology as a Tool for Improving Patient Safety

Introduction

In the past several decades, technological advances have opened new possibilities for improving patient safety. Using technology to digitize healthcare processes has the potential to increase standardization and efficiency of clinical workflows and to reduce errors and cost across all healthcare settings. 1 However, if technological approaches are designed or implemented poorly, the burden on clinicians can increase; for example, overburdened clinicians can experience alert fatigue and fail to respond to notifications, which can lead to more medical errors. As a testament to the significance of this topic in recent years, several government agencies (e.g., the Agency for Healthcare Research and Quality [AHRQ] and the Centers for Medicare & Medicaid Services [CMS]) have developed resources to help healthcare organizations integrate technology, such as the Safety Assurance Factors for EHR Resilience (SAFER) guides developed by the Office of the National Coordinator for Health Information Technology (ONC). 2,3,4 However, there is some evidence that these resources have not been widely used. 5 Recently, CMS started requiring hospitals to use the SAFER guides as part of the FY 2022 Hospital Inpatient Prospective Payment Systems (IPPS), which should raise awareness and uptake of the guides. 6

During 2022, research into technological approaches was a major theme of articles on PSNet. Researchers reviewed all relevant articles on PSNet and consulted with Dr. A Jay Holmgren, PhD, and Dr. Susan McBride, PhD, subject matter experts in health IT and its role in patient safety. Key topics and themes are highlighted below.  

Clinical Decision Support  

The most prominent focus of the 2022 research on technology, based on the number of articles published on PSNet, was clinical decision support (CDS) tools. CDS provides clinicians, patients, and other individuals with relevant data (e.g., patient-specific information), purposefully filtered and delivered through a variety of formats and channels, to improve care. 7
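
To make that definition concrete, here is a minimal sketch, in Python, of the shape a rule-based CDS check can take: patient-specific data are filtered against a rule and surfaced as an actionable prompt. The drug list, the creatinine threshold, and all field names are illustrative assumptions, not any vendor's actual logic or clinical guidance.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    creatinine_mg_dl: float   # most recent serum creatinine (hypothetical field)
    active_meds: list

def renal_dosing_alerts(patient, new_order):
    """Toy rule-based CDS check: flag renally cleared drugs when the
    creatinine suggests impaired kidney function. The drug list and the
    1.5 mg/dL threshold are illustrative assumptions, not clinical advice."""
    renally_cleared = {"metformin", "gabapentin", "enoxaparin"}
    alerts = []
    if new_order.lower() in renally_cleared and patient.creatinine_mg_dl > 1.5:
        alerts.append(
            f"Renal dosing review suggested for {new_order}: "
            f"creatinine {patient.creatinine_mg_dl} mg/dL"
        )
    return alerts

# Example: ordering metformin for a patient with an elevated creatinine
patient = Patient(age=71, creatinine_mg_dl=2.1, active_meds=["lisinopril"])
print(renal_dosing_alerts(patient, "Metformin"))
```

Production CDS engines encode far richer rules and integrate with the EHR, but the filter-then-prompt pattern is the same.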

Computerized Provider Order Entry

One of the main applications of CDS is computerized provider order entry (CPOE), the process clinicians use to enter and send treatment instructions via a computer application. 8 While the change from paper to electronic order entry can itself reduce errors (e.g., those due to unclear handwriting or manual transcription), research in 2022 showed that there is still room for improvement in order entry systems, as well as some promising novel approaches.

Two studies looked at the frequency of and reasons for medication errors in the absence of CDS and CPOE and demonstrated a clear patient safety need. One study found that most medication errors occurred during the ordering or prescribing stage, and both studies found that the most common medication error was an incorrect dose. Ongoing research, such as the AHRQ Medication Safety Measure Development project, aims to develop and validate measure specifications for wrong-patient, wrong-dose, wrong-medication, wrong-route, and wrong-frequency medication orders within EHR systems, in order to better understand and capture health IT safety events. 9 Errors of this type could be avoided, or at least reduced, through the use of effective CPOE and CDS systems. However, even when CPOE and CDS are in place, errors can still occur and can even be caused by the systems themselves. One study reviewed duplicate medication orders and found that 20% of duplicate orders resulted from technological issues, including alerts being overridden, alerts not firing, and automation issues (e.g., prefilled fields). A case study last year illustrated one such issue, in this case a manual keystroke error, that led to a safety event: a pharmacist mistakenly set the start date for a medication to the following year rather than the following day, which the CPOE system failed to flag. The authors recommended various alerts and coding changes in the system to prevent this particular error in the future.
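
The year-for-day keystroke error described above is the kind of slip a simple plausibility check could catch. The sketch below illustrates one hypothetical guard: flagging start dates implausibly far in the future and suggesting a likely correction. The 30-day window is an assumed threshold for illustration; the cited case study does not specify the exact alert logic its authors recommended.

```python
from datetime import date, timedelta

def check_start_date(start, today=None, max_days_ahead=30):
    """Return a warning if an order's start date looks implausible.
    The 30-day window is an arbitrary illustrative threshold."""
    today = today or date.today()
    if start < today:
        return f"Start date {start} is in the past."
    if (start - today) > timedelta(days=max_days_ahead):
        # A common keystroke slip: the year was bumped instead of the day.
        suggestion = start.replace(year=today.year)
        return (f"Start date {start} is more than {max_days_ahead} days away; "
                f"did you mean {suggestion}?")
    return None

# The slip described above: a next-day start accidentally entered as next year
print(check_start_date(date(2023, 2, 15), today=date(2022, 2, 14)))
```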

There were also studies in 2022 that showed successful outcomes of well-implemented CPOE systems. One in-depth, pre-post, mixed-methods study showed that a fully implemented CPOE system significantly reduced specific serious and commonly occurring prescribing and procedural errors. The authors also presented evidence that the system was cost-effective and detailed implementation lessons drawn from the qualitative data collected for the study. A specific CPOE function that demonstrated statistically significant improvement in 2022 was automatic deprescribing of medication orders and communication of the relevant information to pharmacies. Deprescribing is the planned and supervised process of reducing the dose of, or stopping, a medication that is no longer beneficial or may be causing harm. That study showed an immediate and sustained 78% increase in successful discontinuations after implementation of the software. A second study on the same functionality determined that currently only one third to one half of medications are e-prescribed and proposed that e-prescribing be expanded to increase the impact of the deprescribing software. It should be noted, however, that these systems were not perfect: a small percentage of medications were unintentionally cancelled. Finally, another study developed and implemented an algorithm to detect patients in need of follow-up after test results. The algorithm showed some process improvements, but outcome measures were not reported.

Usability  

Usability of CDS systems was a large focus of research in 2022. Poorly designed systems that do not fit into existing workflows frustrate users and increase the potential for errors. For example, if users are required to enter data in multiple places or are prompted for data not available to them, they may work around the system or stop using it altogether, increasing the potential for patient safety events. The documentation burden on U.S. clinicians is already very high, 10 so it is important that novel technological approaches not add to this burden but, if possible, alleviate it by offering a high level of usability and interoperability.

One study used human-factors design to create a CDS tool for diagnosing pulmonary embolism in the emergency department and then surveyed clinician users about their experiences with the tool. Although respondents gave the tool high usability ratings and reported that it was valuable, actual use was low. Based on user feedback, the authors proposed changes to increase uptake, but both users and authors noted the challenges of altering clinicians' existing workflows without increasing their burden. Another study gathered qualitative feedback from clinicians on a theoretical CDS system for diagnosing neurological issues in the emergency department. In this study, too, many clinicians saw potential value in the CDS tool but had concerns about workflow integration and about whether it would impair their ability to make clinical decisions. Finally, one study developed a dashboard displaying risk factors for multiple hospital-acquired infections and gathered feedback from users, who generally found it useful and easy to learn and who provided valuable input on color scales, location, and the types of data displayed. All of these studies show that attention to end-user needs and preferences is necessary for successful implementation of CDS. However, the recent market consolidation among electronic health record (EHR) vendors may affect how much user feedback is gathered and integrated into CDS systems. Larger vendors may have more resources to devote to improving the usability and design of CDS, or their near-monopoly positions may leave them little incentive to innovate further. 11 More research is needed as this trend continues.

Alerts and Alarms 

Alerts and alarms are an important part of most CDS systems, as they can prompt clinicians with important and timely information during the treatment process. However, alerts and alarms must be accurate and useful to elicit an appropriate response, and striking the right balance between the added safety alerts provide and the alert fatigue they can induce is essential. 12

Many studies in 2022 looked at clinician responses to medication-related alerts, including override and modification rates. Several of the studies found a high alert override rate but questioned the validity of using override rates alone as a marker of CDS effectiveness and usability. For example, one study looked at drug allergy alerts and found that although 44.8% of alerts were overridden, only 9.3% of those were inappropriately overridden, and very few overrides led to an adverse allergic reaction. A study on “do not give” alerts found that clinicians modified their orders to comply with alert recommendations after 78% of alerts but only cancelled orders after 26% of alerts. A scoping review looked at drug-drug interaction alerts and found similar results, including high override rates and the need for more data on why alerts are overridden. These findings are supported by another study that found that the underlying drug value sets triggering drug-drug interaction alerts are often inconsistent, leading to many inappropriate alerts that are then appropriately overridden by clinicians. These studies suggest that while a certain number of overrides should be expected, the underlying criteria for alert systems should be designed and regularly reviewed with specificity and sensitivity in mind. This will increase the frequency of appropriate alerts that foster indicated clinical action and reduce alert fatigue. 
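
A small worked example may clarify why the override rate alone is a weak signal. In the hypothetical log below, the override rate looks alarming (62%), yet only a fifth of those overrides are judged inappropriate on review. The data and schema are invented for illustration; they simply mirror the pattern reported in the studies above.

```python
# Hypothetical alert log entries: (overridden, judged_inappropriate).
# Values are invented to mirror the reported pattern: a high override
# rate but a much lower rate of inappropriate overrides.
alert_log = [
    (True, False), (True, False), (True, True), (True, False),
    (False, False), (False, False), (True, False), (False, False),
]

overrides = [entry for entry in alert_log if entry[0]]
override_rate = len(overrides) / len(alert_log)
inappropriate_rate = sum(1 for entry in overrides if entry[1]) / len(overrides)

print(f"Override rate: {override_rate:.0%}")                        # 62%
print(f"Inappropriate among overrides: {inappropriate_rate:.0%}")   # 20%
```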

There also seems to be variability in the effectiveness of alert systems across sites. One study looked at an alert to add an item to the problem list if a clinician placed an order for a medication that was not indicated based on the patient’s chart. The study found about 90% accuracy in alerts across two sites but a wide difference in the frequency of appropriate action between the sites (83% and 47%). This suggests that contextual factors at each site, such as culture and organizational processes, may impact success as much as the technology itself.  

A different study looked at the psychology of dismissing alerts using log data and found that dismissing alerts becomes habitual and that the habit is self-reinforcing over time. Furthermore, nearly three quarters of alerts were dismissed within 3 seconds. This indicates how challenging it can be to change or disrupt alert habits once they are formed. 
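
As a sketch of how such log analyses work, the snippet below computes the share of alerts dismissed within 3 seconds from per-alert dismissal latencies. The latencies are invented; in practice they would be derived from (fired, dismissed) timestamp pairs in EHR audit logs.

```python
# Hypothetical dismissal latencies in seconds, one per alert, as might be
# derived from (fired_at, dismissed_at) timestamp pairs in audit logs.
latencies_s = [1.2, 0.9, 2.5, 14.0, 2.1, 1.8, 45.3, 2.9, 1.1, 3.4]

quick = sum(1 for t in latencies_s if t < 3.0)
print(f"Dismissed within 3 s: {quick / len(latencies_s):.0%}")  # 70%
```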

Artificial Intelligence and Machine Learning  

In recent years, one of the largest areas of burgeoning technology in healthcare has been artificial intelligence (AI) and machine learning. AI and machine learning use algorithms to absorb large amounts of historical and real-time data and then predict outcomes and recommend treatment options as new data are entered by clinicians. Research in 2022 showed that these techniques are starting to be integrated into EHR and CDS systems, but challenges remain. A full discussion of this topic is beyond the scope of this review. Here we limit the discussion to several patient-safety-focused resources posted on PSNet in 2022.  

One of the promising aspects of AI is its ability to improve CDS processes and clinician workflow overall. For example, one study last year looked at using machine learning to improve and filter CDS alerts and found that the software could reduce alert volume by 54% while maintaining high precision. Reducing alert volume has the potential to alleviate alert fatigue and habitual overriding. Another topic, explored in a scoping review, was the use of AI to reduce adverse drug events. While only a few of the reviewed studies examined implementation in a clinical setting (most evaluated algorithms' technical performance), several promising uses were found for AI systems that predict the risk of an adverse drug event, which would facilitate early detection and mitigate negative effects.
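
A minimal sketch of the alert-filtering idea, assuming scikit-learn and entirely synthetic data, is shown below: a classifier is trained on historical alert outcomes, and alerts scored as very unlikely to prompt clinical action are suppressed. The features, labels, and suppression threshold are assumptions for illustration, not the cited study's model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for historical alerts: features might encode alert
# type, patient context, and a clinician's past override behavior.
X = rng.normal(size=(1000, 5))
# Label 1 = the alert led to clinical action; 0 = it was overridden.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Suppress alerts the model scores as very unlikely to prompt action.
p_action = model.predict_proba(X_te)[:, 1]
keep = p_action >= 0.2            # illustrative suppression threshold

print(f"Alert volume retained: {keep.mean():.0%}")
print(f"Actionable alerts retained: {y_te[keep].sum() / y_te.sum():.0%}")
```

The tension the threshold controls is exactly the one described above: suppress more alerts and fatigue drops, but a few actionable alerts may be lost.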

Despite enthusiasm for and promising applications of AI, implementation is slow. One challenge is the variable quality of the systems; for example, a commonly used sepsis detection model was recently found to have very low sensitivity. 13 Algorithms also drift over time as new data are integrated, which can affect performance, particularly during and after large disturbances like the COVID-19 pandemic. 14 There is also emerging research on the impact of AI algorithms on racial and ethnic biases in healthcare; at the time of publication of this essay, an AHRQ Evidence-based Practice Center (EPC) was conducting a review of the evidence on this topic. 15 These examples highlight the fact that AI is not a "set it and forget it" application; it requires dedicated monitoring and customization to ensure that algorithms perform well over time. A related challenge is the lack of a strong business case for using high-quality AI. As a result, many health systems choose out-of-the-box AI algorithms, which may be of poor quality overall (or unsuited to particular settings) and may also be "black box" algorithms (i.e., not customizable by the health system because the vendor will not allow access to the underlying code). 16 The variable quality and lack of transparency can breed mistrust among clinicians and a general aversion to AI interventions.
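
Because performance can drift, deployed models need ongoing surveillance. The sketch below, again with synthetic data and an assumed degradation threshold, shows one simple monitoring pattern: track a monthly AUC and flag months that fall well below the baseline for human review.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic monitoring data: model scores track outcomes well for 6
# months, then the relationship degrades (a crude stand-in for drift).
baseline_auc, flagged = None, []
for month in range(12):
    noise = 0.5 if month < 6 else 2.0        # drift begins at month 6
    risk = rng.normal(size=500)              # model risk scores
    outcomes = (risk + rng.normal(scale=noise, size=500) > 0).astype(int)
    auc = roc_auc_score(outcomes, risk)      # this month's discrimination
    baseline_auc = baseline_auc or auc       # first month sets the baseline
    if auc < baseline_auc - 0.10:            # illustrative degradation threshold
        flagged.append(month)

print("Months flagged for review:", flagged)
```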

In an attempt to address these concerns, one article in 2022 detailed best practices for AI implementation in health systems, focusing on the business case. These include using AI to address a priority problem for the health system rather than treating it as an end in itself; testing the AI on the health system's own patients and data to demonstrate applicability and accuracy for that setting; confirming that the AI can provide a return on investment; and ensuring that it can be implemented easily and efficiently. Another white paper described a human-factors and ergonomics framework for developing AI so as to improve implementation within healthcare systems, teams, and workflows. The federal government and international organizations have also published AI guidelines, focusing on increasing trustworthiness (National Artificial Intelligence Initiative) 17 and ensuring ethical governance (World Health Organization). 18

Conclusion and Next Steps 

As highlighted in this review, the scope and complexity of technology and its applications in healthcare can be daunting for health systems to approach and implement. Researchers last year therefore created a framework that health systems can use to assess their digital maturity and guide their plans for further integration.

The field would benefit from more research in several areas in upcoming years. First and foremost, high-quality prospective outcome studies are needed to validate the effectiveness of the new technologies. Second, more work is needed on system usability, how the systems are integrated into workflows, and how they affect the documentation burden placed on clinicians. For CDS specifically, more focus is needed on patient-centered CDS (PC CDS), which supports patient-centered care by helping clinicians and patients make the best decisions given each individual’s circumstances and preferences. 19 AHRQ is already leading efforts in this field with their CDS Innovation Collaborative project. 20 Finally, as it becomes more common to incorporate EHR scribes to ease the documentation burden, research on their impact on patient safety will be needed, especially in relation to new technological approaches. For example, when a scribe encounters a CDS alert, do they alert the clinician in all cases? 

In addition to the approaches mentioned in this article, other emerging technologies in early stages of development hold theoretical promise for improving patient safety. One prominent example is “computer vision,” which uses cameras and AI to gather and process data on what physically happens in healthcare settings beyond what is captured in EHR data, 21 including being able to detect immediately that a patient fell in their room. 22  

As technology continues to expand and improve, researchers, clinicians, and health systems must be mindful of potential stumbling blocks that could impede progress and threaten patient safety. However, technology presents a wide array of opportunities to make healthcare more integrated, efficient, and safe.  

  1. Cohen CC, Powell K, Dick AW, et al. The Association Between Nursing Home Information Technology Maturity and Urinary Tract Infection Among Long-Term Residents. J Appl Gerontol. 2022;41(7):1695-1701. doi:10.1177/07334648221082024. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9232878/
  2. https://www.healthit.gov/topic/safety/safer-guides
  3. https://cds.ahrq.gov/cdsconnect/repository
  4. https://www.cms.gov/about-cms/obrhi
  5. McBride S, Makar E, Ross A, et al. Determining awareness of the SAFER guides among nurse informaticists. J Inform Nurs. 2021;6(4). https://library.ania.org/ania/articles/713/view
  6. Sittig DF, Sengstack P, Singh H. Guidelines for US hospitals and clinicians on assessment of electronic health record safety using SAFER guides. JAMA. 2022;327:719-720.
  7. https://library.ahima.org/doc?oid=300027#.Y-6RhXbMKHt
  8. https://www.healthit.gov/faq/what-computerized-provider-order-entry#:~:text=Computerized%20provider%20order%20entry%20(CPOE,paper%2C%20fax%2C%20or%20telephone
  9. https://digital.ahrq.gov/2018-year-review/research-spotlights/leveragin…
  10. Holmgren AJ, Downing NL, Bates DW, et al. Assessment of electronic health record use between US and non-US health systems. JAMA Intern Med. 2021;181:251-259. https://doi.org/10.1001/jamainternmed.2020.7071
  11. Holmgren AJ, Apathy NC. Trends in US hospital electronic health record vendor market concentration, 2012–2021. J Gen Intern Med. 2022. https://link.springer.com/article/10.1007/s11606-022-07917-3#citeas
  12. Co Z, Holmgren AJ, Classen DC, et al. The tradeoffs between safety and alert fatigue: data from a national evaluation of hospital medication-related clinical decision support. J Am Med Inform Assoc. 2020;27:1252-1258. https://pubmed.ncbi.nlm.nih.gov/32620948/
  13. Wong A, Otles E, Donnelly JP, et al. External validation of a widely implemented proprietary sepsis prediction model in hospitalized patients. JAMA Intern Med. 2021;181:1065-1070. https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2781307
  14. Parikh RB, Zhang Y, Kolla L, et al. Performance drift in a mortality prediction algorithm among patients with cancer during the SARS-CoV-2 pandemic. J Am Med Inform Assoc. 2022;30:348-354. https://academic.oup.com/jamia/advance-article/doi/10.1093/jamia/ocac221/6835770?login=false
  15. https://effectivehealthcare.ahrq.gov/products/racial-disparities-health…
  16. https://www.statnews.com/2022/05/24/market-failure-preventing-efficient-diffusion-health-care-ai-software/
  17. https://www.ai.gov/strategic-pillars/advancing-trustworthy-ai/
  18. Ethics and governance of artificial intelligence for health (WHO guidance). Geneva: World Health Organization; 2021. https://www.who.int/publications/i/item/9789240029200
  19. Dullabh P, Sandberg SF, Heaney-Huls K, et al. Challenges and opportunities for advancing patient-centered clinical decision support: findings from a horizon scan. J Am Med Inform Assoc. 2022;29(7):1233-1243. doi:10.1093/jamia/ocac059. PMID: 35534996; PMCID: PMC9196686.
  20. https://cds.ahrq.gov/cdsic
  21. Yeung S, Downing NL, Fei-Fei L, et al. Bedside computer vision: moving artificial intelligence from driver assistance to patient safety. N Engl J Med. 2018;378:1271-1273. https://www.nejm.org/doi/10.1056/NEJMp1716891
  22. Espinosa R, Ponce H, Gutiérrez S, et al. A vision-based approach for fall detection using multiple cameras and convolutional neural networks: a case study using the UP-Fall detection dataset. Comput Biol Med. 2019;115:103520. https://doi.org/10.1016/j.compbiomed.2019.103520

This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report's contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.



    Many methods have been developed to detect and predict the fracture properties of fractured rocks. The standard data sources for fracture evaluations are image logs and core samples. However, many wells do not have these data, especially for old wells. Furthermore, operating both methods can be costly, and, sometimes, the data gathered are of bad quality. Therefore, previous research attempted ...