Critical appraisal of qualitative research: necessity, partialities and the issue of bias
  • Veronika Williams (http://orcid.org/0000-0001-5660-8224),
  • Anne-Marie Boylan,
  • David Nunan (http://orcid.org/0000-0003-4597-1276)
  • Nuffield Department of Primary Care Health Sciences, University of Oxford, Radcliffe Observatory Quarter, Oxford, UK
  • Correspondence to Dr Veronika Williams, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford OX2 6GG, UK; veronika.williams@phc.ox.ac.uk

https://doi.org/10.1136/bmjebm-2018-111132


Keywords: qualitative research

Introduction

Qualitative evidence allows researchers to analyse human experience and provides useful exploratory insights into experiential matters and meaning, often explaining the ‘how’ and ‘why’. As we have argued previously,1 qualitative research has an important place within evidence-based healthcare, contributing to, among other things, policy on patient safety,2 prescribing,3 4 and understanding of chronic illness.5 Equally, it offers additional insight into quantitative studies, explaining contextual factors surrounding a successful intervention, or why an intervention might have ‘failed’ or ‘succeeded’, where effect sizes cannot. It is for these reasons that the MRC strongly recommends including qualitative evaluations when developing and evaluating complex interventions.6

Critical appraisal of qualitative research

Is it necessary?

Although the importance of qualitative research to improving health services and care is now increasingly widely supported (discussed in paper 1), the role of appraising the quality of qualitative health research is still debated.8 10 Despite a large body of literature focusing on appraisal and rigour,9 11–15 often referred to as ‘trustworthiness’16 in qualitative research, there remains debate about how to—and even whether to—critically appraise qualitative research.8–10 17–19 However, if we are to make a case for qualitative research as integral to evidence-based healthcare, then any argument to omit a crucial element of evidence-based practice is difficult to justify. That said, simply applying the standards of rigour used to appraise studies based on the positivist paradigm (positivism depends on quantifiable observations to test hypotheses and assumes that the researcher is independent of the study; research situated within a positivist paradigm is based purely on facts, considers the world to be external and objective, and is concerned with validity, reliability and generalisability as measures of rigour) would be misplaced given the different epistemological underpinnings of the two types of data.

Given its scope and its place within health research, the robust and systematic appraisal of qualitative research to assess its trustworthiness is as paramount to its implementation in clinical practice as it is for any other type of research. It is important to appraise different qualitative studies in relation to the specific methodology used because the methodological approach is linked to the ‘outcome’ of the research (eg, theory development, phenomenological understandings and credibility of findings). Moreover, appraisal needs to go beyond merely describing the specific details of the methods used (eg, how data were collected and analysed), with additional focus needed on the overarching research design and its appropriateness in accordance with the study remit and objectives.

Poorly conducted qualitative research has been described as ‘worthless, becomes fiction and loses its utility’. 20 However, without a deep understanding of concepts of quality in qualitative research or at least an appropriate means to assess its quality, good qualitative research also risks being dismissed, particularly in the context of evidence-based healthcare where end users may not be well versed in this paradigm.

How is appraisal currently performed?

Appraising the quality of qualitative research is not a new concept—there are a number of published appraisal tools, frameworks and checklists.21–23 An important and often overlooked point is the confusion between tools designed for appraising methodological quality and reporting guidelines designed to assess the quality of methods reporting. An example is the Consolidated Criteria for Reporting Qualitative Research (COREQ)24 checklist, which was designed to provide standards for authors when reporting qualitative research but is often mistaken for a methods appraisal tool.10

Broadly speaking, there are two types of critical appraisal approaches for qualitative research: checklists and frameworks. Checklists have often been criticised for confusing quality in qualitative research with ‘technical fixes’,21 25 resulting in the erroneous prioritisation of particular aspects of methodological processes over others (eg, multiple coding and triangulation). It could be argued that a checklist approach adopts the positivist paradigm, where the focus is on objectively assessing ‘quality’ and the assumption is that the researcher is independent of the research conducted. This may result in the application of quantitative understandings of bias in order to judge aspects of recruitment, sampling, data collection and analysis in qualitative research papers. One of the most widely used appraisal tools is the Critical Appraisal Skills Programme (CASP) tool,26 which, along with the JBI QARI (Joanna Briggs Institute Qualitative Assessment and Review Instrument),27 exemplifies this tendency to mimic the quantitative approach to appraisal. The CASP qualitative tool follows that of other CASP appraisal tools for quantitative research designs developed in the 1990s. The similarities are therefore unsurprising given the status of qualitative research at that time.

Frameworks focus on the overarching concepts of quality in qualitative research, including transparency, reflexivity, dependability and transferability (see box 1 ). 11–13 15 16 20 28 However, unless the reader is familiar with these concepts—their meaning and impact, and how to interpret them—they will have difficulty applying them when critically appraising a paper.

The main issue with currently available checklist and framework appraisal methods is that they take a broad-brush approach to ‘qualitative’ research as a whole, with few, if any, sufficiently differentiating between the different methodological approaches (eg, Grounded Theory, Interpretative Phenomenology, Discourse Analysis) or between different methods of data collection (interviewing, focus groups and observations). In this sense, it is akin to taking the entire field of ‘quantitative’ study designs and applying a single method or tool for their quality appraisal. In the case of qualitative research, checklists therefore offer only a blunt and arguably ineffective tool and potentially promote an incomplete understanding of good ‘quality’ in qualitative research. Likewise, current framework methods do not take into account how concepts differ in their application across the variety of qualitative approaches and, like checklists, do not differentiate between different qualitative methodologies.

On the need for specific appraisal tools

Current approaches to the appraisal of the methodological rigour of the differing types of qualitative research converge towards checklists or frameworks. More importantly, the current tools do not explicitly acknowledge the prejudices that may be present in the different types of qualitative research.

Box 1: Concepts of rigour or trustworthiness within qualitative research31

Transferability: the extent to which the presented study allows readers to make connections between the study’s data and wider community settings, ie, transfer conceptual findings to other contexts.

Credibility: extent to which a research account is believable and appropriate, particularly in relation to the stories told by participants and the interpretations made by the researcher.

Reflexivity: refers to the researchers’ continuous examination of, and explanation of, how they have influenced a research project, from choosing a research question to sampling, data collection, analysis and interpretation of data.

Transparency: making explicit the whole research process from sampling strategies, data collection to analysis. The rationale for decisions made is as important as the decisions themselves.

However, we often talk about these concepts in general terms, and it might be helpful to give some explicit examples of how the ‘technical processes’ affect these, for example, partialities related to:

Selection: recruiting participants via gatekeepers, such as healthcare professionals or clinicians, who may select them based on whether they believe them to be ‘good’ participants for interviews/focus groups.

Data collection: a poor interview guide with closed questions that encourage yes/no answers and/or leading questions.

Reflexivity and transparency: where researchers may focus their analysis on preconceived ideas rather than ground their analysis in the data and do not reflect on the impact of this in a transparent way.

The lack of tailored, method-specific appraisal tools has potentially contributed to the poor uptake and use of qualitative research for informing evidence-based decision making. To improve this situation, we propose the need for more robust quality appraisal tools that not only explicitly encompass the core design aspects of all qualitative research (sampling, data collection, analysis) but also consider the specific partialities that can arise with different methodological approaches. Such tools might draw on the strengths of current frameworks and checklists while providing users with sufficient understanding of concepts of rigour in relation to the different types of qualitative methods. We provide an outline of such tools in the third and final paper in this series.

As qualitative research becomes ever more embedded in health science research, and in order for that research to have better impact on healthcare decisions, we need to rethink critical appraisal and develop tools that allow differentiated evaluations of the myriad of qualitative methodological approaches rather than continuing to treat qualitative research as a single unified approach.

  • CASP (Critical Appraisal Skills Programme). Date unknown. http://www.phru.nhs.uk/Pages/PHD/CASP.htm
  • The Joanna Briggs Institute. JBI QARI Critical appraisal checklist for interpretive & critical research. Adelaide: The Joanna Briggs Institute, 2014.

Contributors VW and DN: conceived the idea for this article. VW: wrote the first draft. AMB and DN: contributed to the final draft. All authors approve the submitted article.

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer reviewed.

Correction notice This article has been updated since its original publication to include a new reference (reference 1).


Research Frameworks: Critical Components for Reporting Qualitative Health Care Research

Affiliation.

  • 1 Centre for Health Science Education, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa.
  • PMID: 38596348
  • PMCID: PMC11000705
  • DOI: 10.17294/2330-0698.2068

Qualitative health care research can provide insights into health care practices that quantitative studies cannot. However, the potential of qualitative research to improve health care is undermined by reporting that does not explain or justify the research questions and design. The vital role of research frameworks for designing and conducting quality research is widely accepted, but despite many articles and books on the topic, confusion persists about what constitutes an adequate underpinning framework, what to call it, and how to use one. This editorial clarifies some of the terminology and reinforces why research frameworks are essential for good-quality reporting of all research, especially qualitative research.

Keywords: conceptual frameworks; health care; qualitative research; research frameworks; theoretical frameworks; theory.

© 2024 Advocate Aurora Health, Inc.


Enago Academy

The Importance of Critical Thinking Skills in Research


Why is Critical Thinking Important: A Disruptive Force

Research anxiety seems to be taking an increasingly dominant role in the world of academic research. The pressure to publish or perish can warp your focus into thinking that the only good research is publishable research!

Today, your role as the researcher appears to take a back seat to the perceived value of the topic and the extent to which the results of the study will be cited around the world. Due to financial pressures and a growing tendency toward risk aversion, studies are increasingly going down the path of applied research rather than basic or pure research. The potential for breakthroughs is being deliberately limited to incremental contributions from researchers who are forced to worry more about job security and pleasing their paymasters than about making a significant contribution to their field.

A Slow Decline

So what led researchers to their love of science and scientific research in the first place? The answer is critical thinking skills. The more that academic research becomes governed by policies outside of the research process, the less opportunity there will be for researchers to exercise such skills.

True research demands new ideas , perspectives, and arguments based on willingness and confidence to revisit and directly challenge existing schools of thought and established positions on theories and accepted codes of practice. Success comes from a recursive approach to the research question with an iterative refinement based on constant reflection and revision.

The importance of critical thinking skills in research is therefore huge; without them, researchers may even lack the confidence to challenge their own assumptions.

A Misunderstood Skill

Critical thinking is widely recognized as a core competency and as a precursor to research. Employers value it as a requirement for every position they post, and every survey of potential employers for graduates in local markets rates the skill as their number one concern.


When asked to clarify what critical thinking means to them, employers will use such phrases as “the ability to think independently,” or “the ability to think on their feet,” or “to show some initiative and resolve a problem without direct supervision.” These are all valuable skills, but how do you teach them?

For higher education institutions in particular, when you are being assessed against dropout, graduation, and job placement rates, where does a course in critical thinking skills fit into the mix? Student Success courses as a precursor to your first undergraduate course will help students to navigate the campus and whatever online resources are available to them (including the tutoring center), but that doesn’t equate to raising critical thinking competencies.

The Dependent Generation

As education becomes increasingly commoditized and broken down into components that can be delivered online for maximum productivity and profitability, we run the risk of devaluing academic discourse and independent thought. Larger class sizes preclude substantive debate, and the more that content is broken into sound bites that can be tested in multiple-choice questions, the less requirement there will be for original thought.

Academic journals value citation above all else, and so content is steered towards the type of articles that will achieve high citation volume. As such, students and researchers perpetuate this misuse by ensuring that their papers include only highly cited works, and the objective of high citation volume is achieved.

We expand the body of knowledge in any field by challenging the status quo. Denying the veracity of commonly accepted “facts” or playing devil’s advocate with established rules supports a necessary insurgency that drives future research. If we do not continue to emphasize the need for critical thinking skills to preserve such rebellion, academic research may begin to slowly fade away.


Research: Definition, Characteristics, Goals, Approaches


Research is an original and systematic investigation undertaken to increase existing knowledge and understanding of the unknown to establish facts and principles.

Let’s understand research:

What is Research?

Research is a voyage of discovery of new knowledge. It comprises creating ideas and generating new knowledge that leads to new and improved insights and the development of new materials, devices, products, and processes.

It should have the potential to produce sufficiently relevant results to increase and synthesize existing knowledge or correct and integrate previous knowledge.

Good reflective research produces theories and hypotheses and benefits any intellectual attempt to analyze facts and phenomena.

Where did the word Research Come from?

The word ‘research’ perhaps originates from the old French word “recerchier,” which meant to ‘search again.’ It implicitly assumes that the earlier search was not exhaustive and complete; hence, a repeated search is called for.

In practice, ‘research’ refers to a scientific process of generating an unexplored horizon of knowledge, aiming at discovering or establishing facts, solving a problem, and reaching a decision. Keeping the above points in view, we arrive at the following definition of research:

Research Definition

Research is a scientific approach to answering a research question, solving a research problem, or generating new knowledge through a systematic and orderly collection, organization, and analysis of data to make research findings useful in decision-making.

When do we call research scientific? Any research endeavor is said to be scientific if

  • It is based on empirical and measurable evidence subject to specific principles of reasoning;
  • It consists of systematic observations, measurement, and experimentation;
  • It relies on the application of scientific methods and harnessing of curiosity;
  • It provides scientific information and theories for the explanation of nature;
  • It makes practical applications possible, and
  • It ensures adequate analysis of data employing rigorous statistical techniques.

The chief characteristic that distinguishes the scientific method from other methods of acquiring knowledge is that scientists seek to let reality speak for itself, supporting a theory when a theory’s predictions are confirmed and challenging a theory when its predictions prove false.

Scientific research has multidimensional functions, characteristics, and objectives.

Keeping these issues in view, we assert that research in any field or discipline:

  • Attempts to solve a research problem;
  • Involves gathering new data from primary or first-hand sources or using existing data for a new purpose;
  • Is based upon observable experiences or empirical evidence;
  • Demands accurate observation and description;
  • Employs carefully designed procedures and rigorous analysis;
  • Attempts to find an objective, unbiased solution to the problem and takes great pains to validate the methods employed;
  • Is a deliberate and unhurried activity that is directional but often refines the problem or questions as the research progresses.

Characteristics of Research

Keeping in mind that research in any field of inquiry is undertaken to provide information to support decision-making in its respective area, we summarize some desirable characteristics of research:

  • The research should focus on priority problems.
  • The research should be systematic. It emphasizes that a researcher should employ a structured procedure.
  • The research should be logical. Without manipulating ideas logically, the scientific researcher cannot make much progress in any investigation.
  • The research should be reductive. This means that one researcher’s findings should be made available to other researchers to prevent them from repeating the same research.
  • The research should be replicable. This asserts that there should be scope to confirm previous research findings in a new environment and different settings with a new group of subjects or at a different point in time.
  • The research should be generative. This is one of the valuable characteristics of research because answering one question leads to generating many other new questions.
  • The research should be action-oriented. In other words, it should be aimed at solving problems and implementing its findings.
  • The research should follow an integrated multidisciplinary approach, i.e., research approaches from more than one discipline are needed.
  • The research should be participatory, involving all parties concerned (from policymakers down to community members) at all stages of the study.
  • The research must be relatively simple, timely, and time-bound, employing a comparatively simple design.
  • The research must be as cost-effective as possible.
  • The research results should be presented in formats most useful for administrators, decision-makers, business managers, or community members.

3 Basic Operations of Research

Scientific research in any field of inquiry involves three basic operations:

  • Data collection;
  • Data analysis;
  • Report writing .


  • Data collection refers to observing, measuring, and recording data or information.
  • Data analysis, on the other hand, refers to arranging and organizing the collected data so that we may be able to find out what their significance is and generalize about them.
  • Report writing is the final step of the study. Its purpose is to convey the information contained in it to the readers or audience.

If you note down, for example, the reading habit of newspapers of a group of residents in a community, that would be your data collection.

If you then divide these residents into three categories, ‘regular,’ ‘occasional,’ and ‘never,’ you have performed a simple data analysis. Your findings may now be presented in a report form.

A reader of your report knows what percentage of the community people never read any newspaper and so on.
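To make these three operations concrete, here is a minimal sketch in Python; the ten residents' answers and the three category labels below are invented purely for illustration.

```python
# A minimal, purely hypothetical sketch of the three basic operations of research.
from collections import Counter

# Data collection: record each resident's newspaper-reading habit.
habits = [
    "regular", "never", "occasional", "regular", "never",
    "occasional", "never", "regular", "never", "occasional",
]

# Data analysis: count residents in each category and convert counts to percentages.
counts = Counter(habits)
total = len(habits)
percentages = {category: 100 * n / total for category, n in counts.items()}

# Report writing: present the findings so a reader can see, for example,
# what percentage of the community never reads a newspaper.
for category in ("regular", "occasional", "never"):
    print(f"{category:>10}: {percentages.get(category, 0):.0f}%")
```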

Here are some examples that demonstrate what research is:

  • A farmer is planting two varieties of jute side by side to compare yields;
  • A sociologist examines the causes and consequences of divorce;
  • An economist is looking at the interdependence of inflation and foreign direct investment;
  • A physician is experimenting with the effects of multiple uses of disposable insulin syringes in a hospital;
  • A business enterprise is examining the effects of advertisement of their products on the volume of sales;
  • An economist is doing a cost-benefit analysis of reducing the sales tax on essential commodities;
  • The Bangladesh Bank is closely observing and monitoring the performance of nationalized and private banks;
  • Based on some prior information, Bank Management plans to open new counters for female customers.
  • Supermarket Management is assessing the satisfaction level of the customers with their products.

The above examples all constitute research, whether the instrument used is an electron microscope, hospital records, a microcomputer, a questionnaire, or a checklist.

Research Motivation – What makes one motivated to do research?

A person may be motivated to undertake research activities because

  • He might have genuine interest and curiosity in the existing body of knowledge and understanding of the problem;
  • He is looking for answers to questions that have remained unanswered so far and trying to unfold the truth;
  • Some existing tools and techniques are accessible to him, while others may need modification and change to suit current needs.

One might also undertake research to ensure:

  • Better livelihood;
  • Better career development;
  • Higher position, prestige, and dignity in society;
  • Academic achievement leading to higher degrees;
  • Self-gratification.

At the individual level, the results of the research are used by many:

  • A villager is drinking water from an arsenic-free tube well;
  • A rural woman is giving more green vegetables to her child than before;
  • A cigarette smoker is actively considering quitting smoking;
  • An old man is jogging for cardiovascular fitness;
  • A sociologist is using newly suggested tools and techniques in poverty measurement.

The above activities are all outcomes of the research.

All involved in the above processes will benefit from the research results. There is hardly any action in everyday life that does not depend upon previous research.

Research in any field of inquiry provides us with the knowledge and skills to solve problems and meet the challenges of a fast-paced decision-making environment.

9 Qualities of Research

Good research generates dependable data. It is conducted by professionals and can be used reliably for decision-making. It is thus of crucial importance that research be acceptable to its intended audience; to that end, it should possess some desirable qualities.

The 9 qualities of research are:

  • Purpose clearly defined.
  • Research process detailed.
  • Research design carefully planned.
  • Ethical issues considered.
  • Limitations revealed.
  • Adequate analysis ensured.
  • Findings unambiguously presented.
  • Conclusions and recommendations justified.
  • Researcher’s experience reflected.

We enumerate below a few qualities that good research should possess.

Good research must have its purposes clearly and unambiguously defined.

The problem involved or the decision to be made should be sharply delineated as clearly as possible to demonstrate the credibility of the research.

The research procedures should be described in sufficient detail to permit other researchers to repeat the research later.

Failure to do so makes it difficult or impossible to estimate the validity and reliability of the results. This weakens the confidence of the readers.

Any recommendations from such research justifiably get little attention from policymakers and implementers.

The procedural design of the research should be carefully planned to yield results that are as objective as possible.

In doing so, care must be taken so that the sample’s representativeness is ensured, relevant literature has been thoroughly searched, experimental controls, whenever necessary, have been followed, and the personal bias in selecting and recording data has been minimized.

A research design should always safeguard against causing mental and physical harm not only to the participants but also to those who belong to their organizations.

Careful consideration must also be given to research situations when there is a possibility for exploitation, invasion of privacy, and loss of dignity of all those involved in the study.

The researcher should report with complete honesty and frankness any flaws in the procedural design he followed and provide estimates of their effects on the findings.

This enhances the readers’ confidence and makes the report acceptable to the audience. One can legitimately question the value of research where no limitations are reported.

Adequate analysis reveals the significance of the data and helps the researcher to check the reliability and validity of his estimates.

Data should, therefore, be analyzed with proper statistical rigor to assist the researcher in reaching firm conclusions.

When statistical methods have been employed, the probability of error should be estimated, and criteria of statistical significance applied.
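As an illustration of estimating the probability of error and applying a criterion of significance, here is a minimal sketch, assuming Python with SciPy is available; the two groups of measurements are invented for illustration only.

```python
# A minimal sketch (hypothetical data) of estimating the probability of error
# and applying a pre-specified criterion of statistical significance.
from scipy import stats

group_a = [5.1, 4.9, 5.6, 5.0, 5.4, 5.2]   # invented measurements, group A
group_b = [5.9, 6.1, 5.7, 6.0, 6.3, 5.8]   # invented measurements, group B

result = stats.ttest_ind(group_a, group_b)  # two-sample t-test
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

alpha = 0.05  # significance criterion chosen before the analysis
if result.pvalue < alpha:
    print("The difference is statistically significant at the 5% level.")
else:
    print("The difference is not statistically significant at the 5% level.")
```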

The presentation of the results should be comprehensive, easily understood by the readers, and organized so that the readers can readily locate the critical and central findings.

Proper research always specifies the conditions under which the research conclusions seem valid.

Therefore, it is important that any conclusions drawn and recommendations made should be solely based on the findings of the study.

No inferences or generalizations should be made beyond the data. If this rule were not followed, the objectivity of the research would tend to decrease, resulting in reduced confidence in the findings.

The researcher’s experience is reflected.

The research report should contain information about the qualifications of the researchers .

If the researcher is experienced, has a good reputation in research, and is a person of integrity, his report is likely to be highly valued. The policymakers feel confident in implementing the recommendations made in such reports.

4 Goals of Research


The primary goal or purpose of research in any field of inquiry is to add to what is known about the phenomenon under investigation by applying scientific methods. Though each research project has its own specific goals, we may enumerate the following 4 broad goals of scientific research:

  • Exploration and exploratory research;
  • Description and descriptive research;
  • Causal explanation and causal research;
  • Prediction and predictive research.

The link between the 4 goals of research and the questions raised in reaching these goals.

Let’s try to understand the 4 goals of the research.

Exploration is finding out about some previously unexamined phenomenon. In other words, an explorative study structures and identifies new problems.

The explorative study aims to gain familiarity with a phenomenon or gain new insights into it.

Exploration is particularly useful when researchers lack a clear idea of the problems they meet during their study.

Through exploration, researchers attempt to

  • Develop concepts more clearly;
  • Establish priorities among several alternatives;
  • Develop operational definitions of variables;
  • Formulate research hypotheses and sharpen research objectives;
  • Improve the methodology and modify (if needed) the research design .

Exploration is achieved through what we call exploratory research.

The end of an explorative study comes when the researchers are convinced that they have established the major dimensions of the research task.

Many research activities consist of gathering information on some topic of interest. The description refers to these data-based information-gathering activities. Descriptive studies portray precisely the characteristics of a particular individual, situation, or group.

Here, we attempt to describe situations and events through studies, which we refer to as descriptive research.

Such research is undertaken when much is known about the problem under investigation.

Descriptive studies try to discover answers to the questions of who, what, when, where, and sometimes how.

Such research studies may involve the collection of data and the creation of a distribution of the number of times the researcher observes a single event or characteristic, known as a research variable.

A descriptive study may also involve the interaction of two or more variables and attempts to observe if there is any relationship between the variables under investigation .

Research that examines such a relationship is sometimes called a correlational study. It is correlational because it attempts to relate (i.e., co-relate) two or more variables.
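As an illustration, a correlational analysis can be sketched in Python; the two variables (years of previous experience and initial salary) and all values below are hypothetical and chosen only to show the computation.

```python
# A minimal sketch of a correlational study: relating two hypothetical variables.
experience = [1, 2, 3, 4, 5, 6, 7, 8]       # years of previous experience (invented)
salary = [30, 32, 35, 36, 40, 41, 45, 48]   # initial salary in thousands (invented)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var_x = sum((xi - mean_x) ** 2 for xi in x)
    var_y = sum((yi - mean_y) ** 2 for yi in y)
    return cov / (var_x * var_y) ** 0.5

print(f"r = {pearson_r(experience, salary):.2f}")
```

A coefficient near +1 or −1 indicates a strong association between the two variables; as discussed below, however, such an association alone does not establish that one variable causes the other.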

A descriptive study may be feasible to answer the questions of the following types:

  • What are the characteristics of the people who are involved in city crime? Are they young? Middle-aged? Poor? Muslim? Educated?
  • Who are the potential buyers of the new product? Men or women? Urban people or rural people?
  • Are rural women more likely to marry earlier than their urban counterparts?
  • Does previous experience help an employee to get a higher initial salary?

Although the data description in descriptive research is factual, accurate, and systematic, the research cannot describe what caused a situation.

Thus, descriptive research cannot be used to create a causal relationship where one variable affects another.

In other words, descriptive research can be said to have a low requirement for internal validity. In sum, descriptive research deals with everything that can be counted and studied.

But there are always restrictions on that. All research must impact the lives of the people around us.

For example, finding the most frequent disease that affects the people of a community falls under descriptive research.

But the research readers will have the hunch to know why this has happened and what to do to prevent that disease so that more people will live healthy lives.

This dictates that we need a causal explanation of the situation under reference, and hence a causal study, that is, causal research.

Explanation reveals why and how something happens.

An explanatory study goes beyond description and attempts to establish a cause-and-effect relationship between variables. It explains the reason for the phenomenon that the descriptive study observed.

Thus, if a researcher finds that communities with larger family sizes have higher child deaths or that smoking correlates with lung cancer, he is performing a descriptive study.

If he explains why it is so and tries to establish a cause-and-effect relationship, he is performing explanatory or causal research. The researcher uses theories or at least hypotheses to account for the factors that caused a certain phenomenon.

Look at the following examples that fit causal studies:

  • Why are people involved in crime? Can we explain this as a consequence of the present job market crisis or lack of parental care?
  • Will the buyers be motivated to purchase the new product in a new container ? Can an attractive advertisement motivate them to buy a new product?
  • Why has the share market shown the steepest-ever fall in stock prices? Is it because of the IMF’s warnings and prescriptions on the commercial banks’ exposure to the stock market or because of an abundant increase in the supply of new shares?

Prediction seeks to answer when and in what situations an event will occur, if we can provide a plausible explanation for the event in question.

However, the precise nature of the relationship between explanation and prediction has been a subject of debate.

One view is that explanation and prediction are the same phenomena, except that prediction precedes the event while the explanation takes place after the event has occurred.

Another view is that explanation and prediction are fundamentally different processes.

We need not be concerned with this debate here but can simply state that in addition to being able to explain an event after it has occurred, we would also be able to predict when it will occur.

Research Approaches


There are two main approaches to doing research.

The first is the basic approach, which mostly pertains to academic research. Many people view this as pure research or fundamental research.

The research implemented through the second approach is variously known as applied research, action research, operations research, or contract research.

Also, the third category of research, evaluative research, is important in many applications. All these approaches have different purposes influencing the nature of the respective research.

Lastly, precautions in research are required for thorough research.

So, 4 research approaches are;

  • Basic Research .
  • Applied Research .
  • Evaluative Research .
  • Precautions in Research.

Areas of Research

The most important fields or areas of research, among others, are;

  • Social Research .
  • Health Research .
  • Population Research .
  • Business Research .
  • Marketing Research .
  • Agricultural Research .
  • Biomedical Research.
  • Clinical Research .
  • Outcomes Research.
  • Internet Research.
  • Archival Research.
  • Empirical Research.
  • Legal Research .
  • Education Research .
  • Engineering Research .
  • Historical Research.

Check out our article describing all 16 areas of research .

Precautions in Research

Whether a researcher is doing applied or basic research or research of any other form, he or she must take necessary precautions to ensure that the research he or she is doing is relevant, timely, efficient, accurate, and ethical .

The research is considered relevant if it anticipates the kinds of information that decision-makers, scientists, or policymakers will require.

Timely research is completed in time to influence decisions.

  • Research is efficient when it is of the best quality for the minimum expenditure and the study is appropriate to the research context.
  • Research is considered accurate or valid when the interpretation can account for both consistencies and inconsistencies in the data.
  • Research is ethical when it can promote trust, exercise care, ensure standards, and protect the rights of the participants in the research process.

  • What is the definition of research?
  • What are the characteristics of good research?
  • What are the three basic operations involved in scientific research?
  • What are the four broad goals of scientific research?
  • What distinguishes the scientific method from other methods of acquiring knowledge?
  • What is the origin of the word ‘research’?
  • How is “research methodology” defined?
  • How does research methodology ensure the appropriateness of a research method?

After discussing the research definition and knowing the characteristics, goals, and approaches, it’s time to delve into the research fundamentals. For a comprehensive understanding, refer to our detailed research and methodology concepts guide .

Research should be relevant, timely, efficient, accurate, and ethical. It should anticipate the information required by decision-makers, be completed in time to influence decisions, be of the best quality for the minimum expenditure, and protect the rights of participants in the research process.

The two main approaches to research are the basic approach, often viewed as pure or fundamental research, and the applied approach, which includes action research, operations research, and contract research.


Organizing Your Social Sciences Research Paper

Applying Critical Thinking

Critical thinking refers to deliberately scrutinizing and evaluating theories, concepts, or ideas using reasoned reflection and analysis. The act of thinking critically involves moving beyond simply understanding information, but rather, to question its source, its production, and its presentation in order to expose potential bias or researcher subjectivity [i.e., being influenced by personal opinions and feelings rather than by external determinants ] . Applying critical thinking to investigating a research problem involves actively challenging basic assumptions and questioning the choices and potential motives underpinning how the author designed the study, conducted the research, and arrived at particular conclusions or recommended courses of action.

Mintz, Steven. "How the Word "Critical" Came to Signify the Leading Edge of Cultural Analysis." Higher Ed Gamma Blog , Inside Higher Ed, February 13, 2024; Van Merriënboer, Jeroen JG and Paul A. Kirschner. Ten Steps to Complex Learning: A Systematic Approach to Four-component Instructional Design . New York: Routledge, 2017.

Thinking Critically

Applying Critical Thinking to Research and Writing

Professors like to use the term critical thinking; in fact, the idea of being a critical thinker permeates much of higher education writ large. In the classroom, the idea of thinking critically is often mentioned by professors when students ask how they should approach a research and writing assignment [other approaches your professor might mention include interdisciplinarity, comparative, gendered, global, etc.]. However, critical thinking is more than just an approach to research and writing. It is an acquired skill associated with becoming a complex learner capable of discerning important relationships among the elements of, as well as integrating multiple ways of understanding applied to, the research problem. Critical thinking is a lens through which you holistically interrogate a topic.

Given this, thinking critically encompasses a variety of inter-related connotations applied to writing a college-level research paper:

  • Integrated and Multi-Dimensional . Critical thinking is not focused on any one element of research, but instead, is applied holistically throughout the process of identifying the research problem, reviewing the literature, applying methods of analysis, describing the results, discussing their implications, and, if appropriate, offering recommendations for further research. It permeates the entire research endeavor from contemplating what to write to proofreading the final product.
  • Humanizes the Research . Thinking critically can help humanize what is being studied by extending the scope of your analysis beyond the traditional boundaries of prior research. This prior research could have involved, for example, sampling homogeneous populations, considering only certain factors related to the investigation of a phenomenon, or limiting the way authors framed or contextualized their study. Critical thinking creates opportunities to incorporate the experiences of others into the research process, leading to a more inclusive and representative examination of the topic.
  • Non-Linear . This refers to analyzing a research problem in ways that do not rely on sequential decision-making or rational forms of reasoning. Creative thinking relies on intuitive judgement, flexibility, and unconventional approaches to investigating complex phenomena in order to discover new insights, connections, and potential solutions . This involves going back and modifying your thinking as new evidence emerges , perhaps multiple times throughout the research process, and drawing conclusions from multiple perspectives.
  • Normative . This is the idea that critical thinking can be used to challenge prior assumptions in ways that advocate for social justice, equity, and inclusion and that can lead to research having a more transformative and expansive impact. In this respect, critical thinking can be viewed as a method for breaking away from dominant culture norms so as to produce research outcomes that illuminate previously hidden aspects of exploitation and injustice.
  • Power Dynamics . Research in the social sciences often includes examining aspects of power and influence that shape social relations, organizations, institutions, and the production and maintenance of knowledge. These studies focus on how power operates, how it can be acquired, and how power and influence can be maintained. Critical thinking can reveal how societal structures perpetuate power and influence in ways that marginalizes and oppresses certain groups or communities within the contexts of history , politics, economics, culture, and other factors.
  • Reflection . A key component of critical thinking is practicing reflexivity; the act of turning ideas and concepts back onto yourself in order to reveal and clarify your own beliefs, assumptions, and perspectives. Being critically reflexive is important because it can reveal hidden biases you may have that could unintentionally influence how you interpret and validate information. The more reflexive you are, the better able and more comfortable you are in opening yourself up to new modes of understanding.
  • Rigorous Questioning. Thinking critically is guided by asking questions that lead to addressing complex concepts, principles, theories, or problems more effectively and, in so doing, help distinguish what is known from what is not known [or that may be hidden]. Critical thinking involves deliberately framing inquiries not just as research questions, but as a way to apply systematic, disciplined, in-depth forms of questioning concerning the research problem and your positionality as a researcher.
  • Social Change. An overarching goal of critical thinking applied to research and writing is to seek to identify and challenge sources of inequality, exploitation, oppression, and marginalization that contribute to maintaining the status quo within institutions of society. This can include entities, such as schools, courts, businesses, government agencies, or religious organizations, that have been created and maintained through certain ways of thinking within the dominant culture.

Although critical thinking permeates the entire research and writing process, it applies most directly to the literature review and discussion sections of your paper. In reviewing the literature, it is important to reflect upon specific aspects of a study, such as determining if the research design effectively establishes cause and effect relationships or provides insight into explaining why certain phenomena do or do not occur, assessing whether the method of gathering data or information supports the objectives of the study, and evaluating if the assumptions used to arrive at a specific conclusion are evidence-based and relevant to addressing the research problem. However, an assessment of whether a source is helpful to investigating the research problem also involves critically analyzing how the research challenges conventional approaches to investigations that perpetuate inequalities or hide the voices of others.

Critical thinking applies to the discussion section of your paper because this is where you internalize the results of your study and explain its significance. This involves more than summarizing findings and describing outcomes. It includes reflecting on their importance and providing reasoned explanations why your paper is important in filling a gap in the literature or expanding knowledge and understanding in ways that inform practice. Critical reflection helps you think introspectively about your own beliefs concerning the significance of the findings, but in ways that avoid biased judgment and decision making.

Behar-Horenstein, Linda S., and Lian Niu. “Teaching Critical Thinking Skills in Higher Education: A Review of the Literature.” Journal of College Teaching and Learning 8 (February 2011): 25-41; Bayou, Yemeserach and Tamene Kitila. "Exploring Instructors’ Beliefs about and Practices in Promoting Students’ Critical Thinking Skills in Writing Classes." GIST–Education and Learning Research Journal 26 (2023): 123-154; Butcher, Charity. "Using In-class Writing to Promote Critical Thinking and Application of Course Concepts." Journal of Political Science Education 18 (2022): 3-21; Loseke, Donileen R. Methodological Thinking: Basic Principles of Social Research Design. Thousand Oaks, CA: Sage, 2012; Mintz, Steven. "How the Word "Critical" Came to Signify the Leading Edge of Cultural Analysis." Higher Ed Gamma Blog , Inside Higher Ed, February 13, 2024; Hart, Claire et al. “Exploring Higher Education Students’ Critical Thinking Skills through Content Analysis.” Thinking Skills and Creativity 41 (September 2021): 100877; Lewis, Arthur and David Smith. "Defining Higher Order Thinking." Theory into Practice 32 (Summer 1993): 131-137; Sabrina, R., Emilda Sulasmi, and Mandra Saragih. "Student Critical Thinking Skills and Student Writing Ability: The Role of Teachers' Intellectual Skills and Student Learning." Cypriot Journal of Educational Sciences 17 (2022): 2493-2510. Suter, W. Newton. Introduction to Educational Research: A Critical Thinking Approach. 2nd edition. Thousand Oaks, CA: SAGE Publications, 2012; Van Merriënboer, Jeroen JG and Paul A. Kirschner. Ten Steps to Complex Learning: A Systematic Approach to Four-component Instructional Design. New York: Routledge, 2017; Vance, Charles M., et al. "Understanding and Measuring Linear–Nonlinear Thinking Style for Enhanced Management Education and Professional Practice." Academy of Management Learning and Education 6 (2007): 167-185; Yeh, Hui-Chin, Shih-hsien Yang, Jo Shan Fu, and Yen-Chen Shih. "Developing College Students’ Critical Thinking through Reflective Writing." Higher Education Research & Development 42 (2023): 244-259.


PLoS Comput Biol. 2022 Jun;18(6).


Ten simple rules for good research practice

Simon Schwab

1 Center for Reproducible Science, University of Zurich, Zurich, Switzerland

2 Epidemiology, Biostatistics and Prevention Institute, University of Zurich, Zurich, Switzerland

Perrine Janiaud

3 Department of Clinical Research, University Hospital Basel, University of Basel, Basel, Switzerland

Michael Dayan

4 Human Neuroscience Platform, Fondation Campus Biotech Geneva, Geneva, Switzerland

Valentin Amrhein

5 Department of Environmental Sciences, Zoology, University of Basel, Basel, Switzerland

Radoslaw Panczak

6 Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland

Patricia M. Palagi

7 SIB Training Group, SIB Swiss Institute of Bioinformatics, Lausanne, Switzerland

Lars G. Hemkens

8 Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, California, United States of America

9 Meta-Research Innovation Center Berlin (METRIC-B), Berlin Institute of Health, Berlin, Germany

Meike Ramon

10 Applied Face Cognition Lab, University of Lausanne, Lausanne, Switzerland

Nicolas Rothen

11 Faculty of Psychology, UniDistance Suisse, Brig, Switzerland

Stephen Senn

12 Statistical Consultant, Edinburgh, United Kingdom

Leonhard Held

This is a PLOS Computational Biology Methods paper.

Introduction

The lack of research reproducibility has caused growing concern across various scientific fields [ 1 – 5 ]. Today, there is widespread agreement, within and outside academia, that scientific research is suffering from a reproducibility crisis [ 6 , 7 ]. Researchers reach different conclusions—even when the same data have been processed—simply due to varied analytical procedures [ 8 , 9 ]. As we continue to recognize this problematic situation, some major causes of irreproducible research have been identified. This, in turn, provides the foundation for improvement by identifying and advocating for good research practices (GRPs). Indeed, powerful solutions are available, for example, preregistration of study protocols and statistical analysis plans, sharing of data and analysis code, and adherence to reporting guidelines. Although these and other best practices may facilitate reproducible research and increase trust in science, it remains the responsibility of researchers themselves to actively integrate them into their everyday research practices.

Contrary to ubiquitous specialized training, cross-disciplinary courses focusing on best practices to enhance the quality of research are lacking at universities and are urgently needed. The intersections between disciplines offer a space for peer evaluation, mutual learning, and sharing of best practices. In medical research, interdisciplinary work is inevitable. For example, conducting clinical trials requires experts with diverse backgrounds, including clinical medicine, pharmacology, biostatistics, evidence synthesis, nursing, and implementation science. Bringing researchers with diverse backgrounds and levels of experience together to exchange knowledge and learn about problems and solutions adds value and improves the quality of research.

The present selection of rules was based on our experiences with teaching GRP courses at the University of Zurich, our course participants’ feedback, and the views of a cross-disciplinary group of experts from within the Swiss Reproducibility Network ( www.swissrn.org ). The list is neither exhaustive, nor does it aim to address and systematically summarize the wide spectrum of issues including research ethics and legal aspects (e.g., related to misconduct, conflicts of interests, and scientific integrity). Instead, we focused on practical advice at the different stages of everyday research: from planning and execution to reporting of research. For a more comprehensive overview on GRPs, we point to the United Kingdom’s Medical Research Council’s guidelines [ 10 ] and the Swedish Research Council’s report [ 11 ]. While the discussion of the rules may predominantly focus on clinical research, much applies, in principle, to basic biomedical research and research in other domains as well.

The 10 proposed rules can serve multiple purposes: an introduction for researchers to relevant concepts to improve research quality, a primer for early-career researchers who participate in our GRP courses, or a starting point for lecturers who plan a GRP course at their own institutions. The 10 rules are grouped according to planning (5 rules), execution (3 rules), and reporting of research (2 rules); see Fig 1 . These principles can (and should) be implemented as a habit in everyday research, just like toothbrushing.

[Fig 1. The 10 rules, grouped into research planning, execution, and reporting. GRP, good research practices.]

Research planning

Rule 1: Specify your research question

Coming up with a research question is not always simple and may take time. A successful study requires a narrow and clear research question. In evidence-based research, prior studies are assessed in a systematic and transparent way to identify a research gap for a new study that answers a question that matters [ 12 ]. Papers that provide a comprehensive overview of the current state of research in the field are particularly helpful—for example, systematic reviews. Perspective papers may also be useful; one example is a paper titled “SARS-CoV-2 and COVID-19: The most important research questions.” However, a systematic assessment of research gaps deserves more attention than opinion-based publications.

In the next step, a vague research question should be further developed and refined. In clinical research and evidence-based medicine, the population, intervention, comparator, outcome, and time frame (PICOT) approach provides a set of criteria that can help frame a research question [ 13 ]. From a well-developed research question, subsequent steps will follow, which may include the exact definition of the population, the outcome, the data to be collected, and the sample size that is required. It may be useful to find out whether other researchers find the idea interesting as well and whether it promises a valuable contribution to the field. However, actively involving the public or patients can be a more effective way to determine which research questions matter.

The level of detail in a research question also depends on whether the planned research is confirmatory or exploratory. In contrast to confirmatory research, exploratory research does not require a well-defined hypothesis from the start. Examples of exploratory work include omics and multi-omics experiments (genomics, bulk RNA-Seq, single-cell sequencing, etc.) in systems biology, as well as connectomics and whole-brain analyses in brain imaging. Both exploration and confirmation are needed in science, and it is helpful to understand their strengths and limitations [ 14 , 15 ].

Rule 2: Write and register a study protocol

In clinical research, registration of clinical trials has become standard since the late 1990s and is now a legal requirement in many countries. Such studies require a study protocol to be registered, for example, with ClinicalTrials.gov, the European Clinical Trials Register, or the World Health Organization’s International Clinical Trials Registry Platform. A similar effort has been implemented for the registration of systematic reviews (PROSPERO). Study registration has also been proposed for observational studies [ 16 ] and, more recently, for preclinical animal research [ 17 ], and is now being advocated across disciplines under the term “preregistration” [ 18 , 19 ].

Study protocols typically document at minimum the research question and hypothesis, a description of the population, the targeted sample size, the inclusion/exclusion criteria, the study design, the data collection, the data processing and transformation, and the planned statistical analyses. The registration of study protocols reduces publication bias and hindsight bias and can safeguard honest research and minimize waste of research [ 20 – 22 ]. Registration ensures that studies can be scrutinized by comparing the reported research with what was actually planned and written in the protocol, and any discrepancies may indicate serious problems (e.g., outcome switching).

Note that registration does not mean that researchers have no flexibility to adapt the plan as needed. Indeed, new or more appropriate procedures may become available or known only after registration of a study. Therefore, a more detailed statistical analysis plan can be added to the protocol as an amendment before the data are observed or unblinded [ 23 , 24 ]. Likewise, registration does not exclude the possibility of conducting exploratory data analyses; however, these must be clearly reported as such.

To go even further, registered reports are a novel article type that incentivizes high-quality research—irrespective of the ultimate study outcome [ 25 , 26 ]. With registered reports, peer review and the decision to accept take place before anyone knows the results of the study, and reviewers have a more active role in being able to influence the design and analysis of the study. Journals from various disciplines increasingly support registered reports [ 27 ].

Naturally, preregistration and registered reports also have their limitations and may not be appropriate in a purely hypothesis-generating (exploratory) framework. Reports of exploratory studies should indeed not be molded into a confirmatory framework; appropriate rigorous reporting alternatives have been suggested and are starting to be implemented [ 28 , 29 ].

Rule 3: Justify your sample size

Early-career researchers in our GRP courses often identify sample size as an issue in their research. For example, they say that they work with a low number of samples due to slow growth of cells, or that they have a limited number of patient tumor samples due to a rare disease. But if your sample size is too small, your study has a high risk of providing a false negative result (type II error). In other words, you are unlikely to find an effect even if one truly exists.

Unfortunately, there is more bad news with small studies. When an effect from a small study is selected for drawing conclusions because it is statistically significant, low power increases the probability that the effect size is overestimated [ 30 , 31 ]. The reason is that, with low power, studies that by sampling variation find larger (overestimated) effects are much more likely to be statistically significant than those that happen to find smaller (more realistic) effects [ 30 , 32 , 33 ]. Thus, in such situations, effect sizes are often overestimated. For the phenomenon that small studies often report more extreme results (in meta-analyses), the term “small-study effect” was introduced [ 34 ]. In any case, an underpowered study is a problematic study, no matter the outcome.

In conclusion, small sample sizes can undermine research, but when is a study too small? For one study, a total of 50 patients may be fine, whereas another may require 1,000 patients. Determining how large a study needs to be requires an appropriate sample size calculation, which ensures that enough data are collected to achieve sufficient statistical power (the probability of rejecting the null hypothesis when it is in fact false).

Low-powered studies can be avoided by performing a sample size calculation to find out the required sample size of the study. This requires specifying a primary outcome variable and the magnitude of effect you are interested in (among other factors); in clinical research, this is often the minimal clinically relevant difference. The statistical power is often set at 80% or higher. A comprehensive list of packages for sample size calculation is available [ 35 ], among them the R package “pwr” [ 36 ]. There are also many online calculators available, for example, the University of Zurich’s “SampleSizeR” [ 37 ].
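To make this concrete, here is a minimal sketch using the “pwr” package mentioned above; the effect size (d = 0.5, a medium standardized difference) and the 80% power target are assumed example values rather than recommendations.

```r
# Minimal sketch: required sample size for a two-sample t-test with 'pwr'.
# The standardized effect size d = 0.5 is an assumed example value.
# install.packages("pwr")
library(pwr)

res <- pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.80,
                  type = "two.sample", alternative = "two.sided")
res$n           # required sample size per group (roughly 64)
ceiling(res$n)  # round up, then add a margin for expected missing data
```

The same function can also be run in reverse, fixing the achievable sample size and solving for power, which is useful when recruitment is constrained.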

A worthwhile alternative for planning the sample size, which puts less emphasis on null hypothesis testing, is to base it on the desired precision of the study; for example, one can calculate the sample size necessary to obtain a desired width of a confidence interval for the targeted effect [ 38 – 40 ]. A general framework for sample size justification beyond a calculation-only approach has been proposed [ 41 ]. It is also worth mentioning that some study types have other requirements or need specific methods. In diagnostic testing, one would need to determine the anticipated minimal sensitivity or specificity; in prognostic research, the number of parameters that can be used to fit a prediction model given a fixed sample size should be specified. Designs can also be so complex that a simulation (Monte Carlo method) may be required.
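As a simple illustration of the precision-based approach, the sketch below computes the sample size needed for a 95% confidence interval of a single mean to reach a chosen half-width. It assumes a normal approximation and a guessed standard deviation; both numbers are made-up example values.

```r
# Minimal sketch: precision-based sample size for estimating a mean.
# Assumes a normal approximation; sd = 10 and half_width = 2 are example values.
ci_sample_size <- function(sd, half_width, conf_level = 0.95) {
  z <- qnorm(1 - (1 - conf_level) / 2)  # 1.96 for a 95% interval
  ceiling((z * sd / half_width)^2)      # smallest n with z * sd / sqrt(n) <= half_width
}

ci_sample_size(sd = 10, half_width = 2)  # about 97 participants
```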

Sample size calculations should be done under different assumptions, and the largest estimated sample size is often a safer bet than the best-case scenario. The calculated sample size should further be adjusted upwards to allow for possible missing data. Given the complexity of accurately calculating sample size, researchers should strongly consider consulting a statistician early in the study design process.

Rule 4: Write a data management plan

In 2020, 2 Coronavirus Disease 2019 (COVID-19) papers in leading medical journals were retracted after major concerns about the data were raised [ 42 ]. Today, raw data are increasingly recognized as a key output of research alongside the paper itself. Therefore, it is important to develop a strategy for the life cycle of data, including suitable infrastructure for long-term storage.

The data life cycle is described in a data management plan: a document that describes what data will be collected and how the data will be organized, stored, handled, and protected during and after the research project. Several funders require a data management plan in grant submissions, and publishers like PLOS encourage authors to provide one as well. The Wellcome Trust provides guidance on developing a data management plan, including real examples from neuroimaging, genomics, and the social sciences [ 43 ]. However, projects do not always allocate funding and resources to the actual implementation of the data management plan.

The Findable, Accessible, Interoperable, and Reusable (FAIR) data principles promote maximal use of data and enable machines to access and reuse data with minimal human intervention [ 44 ]. FAIR principles require the data to be retained, preserved, and shared preferably with an immutable unique identifier and a clear usage license. Appropriate metadata will help other researchers (or machines) to discover, process, and understand the data. However, requesting researchers to fully comply with the FAIR data principles in every detail is an ambitious goal.

Multidisciplinary data repositories that support FAIR include, for example, Dryad ( https://datadryad.org/ ), EUDAT ( www.eudat.eu ), OSF ( https://osf.io/ ), and Zenodo ( https://zenodo.org/ ). A number of institutional and field-specific repositories may also be suitable. However, sometimes authors may not be able to make their data publicly available for legal or ethical reasons. In such cases, a data user agreement can indicate the conditions required to access the data. Journals highlight which data access restrictions are acceptable and which are not, and often require a data availability statement.

Organizing the study artifacts in a structured way greatly facilitates the reuse of data and code within and outside the lab, enhancing collaborations and maximizing the research investment. Support and courses for data management plans are sometimes available at universities. A separate “10 simple rules” paper is dedicated to creating a good data management plan [ 45 ].

Rule 5: Reduce bias

Bias is a distorted view in favor of or against a particular idea. In statistics, bias is a systematic deviation of a statistical estimate from the (true) quantity it estimates. Bias can invalidate our conclusions, and the more bias there is, the less valid they are. For example, in clinical studies, bias may mislead us into reaching a causal conclusion that the difference in the outcomes was due to the intervention or the exposure. This is a big concern, and, therefore, the risk of bias is assessed in clinical trials [ 46 ] as well as in observational studies [ 47 , 48 ].

There are many different forms of bias that can occur in a study, and they may overlap (e.g., allocation bias and confounding bias) [ 49 ]. Bias can occur at different stages, for example, immortal time bias in the design of the study, information bias in the execution of the study, and publication bias in the reporting of research. Understanding bias allows researchers to remain vigilant about potential sources of bias when peer reviewing and when designing their own studies. We summarize some common types of bias and some preventive steps in Table 1 , but many other forms of bias exist; for a comprehensive overview, see Oxford University’s Catalogue of Bias [ 50 ].

[Table 1. Common types of bias and preventive steps. For a comprehensive collection, see catalogofbias.org.]

Here are some noteworthy examples of study bias from the literature. An example of information bias was observed when, in 1998, an alleged association between the measles, mumps, and rubella (MMR) vaccine and autism was reported. Recall bias (a subtype of information bias) emerged when parents of autistic children recalled the onset of autism after an MMR vaccination more often than parents of similar children who were diagnosed prior to the media coverage of that controversial and since-retracted study [ 51 ]. A study from 2001 showed better survival for Academy Award-winning actors, but this was due to immortal time bias, which favors the treatment or exposure group [ 52 , 53 ]. Another study systematically investigated self-reports about musculoskeletal symptoms and found information bias: participants who spent little time at the computer overestimated their computer usage, whereas participants who spent a lot of time at the computer underestimated it [ 54 ].

Information bias can be mitigated by using objective rather than subjective measurements. Standardized operating procedures (SOPs) and electronic lab notebooks additionally help researchers to follow well-designed protocols for data collection and handling [ 55 ]. Even when bias cannot be fully mitigated, complete descriptions of data and methods at least allow the risk of bias to be assessed.

Research execution

Rule 6: Avoid questionable research practices

Questionable research practices (QRPs) can lead to exaggerated findings and false conclusions and thus to irreproducible research. Often, QRPs are used with no bad intentions. This becomes evident when methods sections explicitly describe such procedures, for example, increasing the number of samples until statistical significance in support of the hypothesis is reached. Therefore, it is important that researchers know about QRPs in order to recognize and avoid them.

Several QRPs have been named [ 56 , 57 ]. Among them are low statistical power, pseudoreplication, repeated inspection of data, p-hacking [ 58 ], selective reporting, and hypothesizing after the results are known (HARKing).

The first 2 QRPs, low statistical power and pseudoreplication, can be prevented by proper planning and design of studies, including a sample size calculation and appropriate statistical methodology to avoid treating data as independent when in fact they are not. Statistical power is not equal to reproducibility, but it is a precondition of reproducibility, as the lack thereof can result in false negative as well as false positive findings (see Rule 3 ).

In fact, many QRPs can be avoided with a study protocol and statistical analysis plan. Preregistration, as described in Rule 2, is considered best practice for this purpose. However, many of these issues are additionally rooted in institutional incentives and rewards. Both funding and promotion are often tied to the quantity rather than the quality of the research output. At universities, there are still few or no rewards for writing and registering protocols, sharing data, publishing negative findings, and conducting replication studies. Thus, a wider “culture change” is needed.

Rule 7: Be cautious with interpretations of statistical significance

It would help if more researchers were familiar with correct interpretations and possible misinterpretations of statistical tests, p-values, confidence intervals, and statistical power [ 59 , 60 ]. A statistically significant p-value does not necessarily mean that there is a clinically or biologically relevant effect. Specifically, the traditional dichotomization into statistically significant (p < 0.05) versus statistically nonsignificant (p ≥ 0.05) results is seldom appropriate, can lead to cherry-picking of results, and may eventually corrupt science [ 61 ]. We instead recommend reporting exact p-values and interpreting them in a graded way in terms of the compatibility of the null hypothesis with the data [ 62 , 63 ]. Moreover, a p-value around 0.05 (e.g., 0.047 or 0.055) provides little information, as is best illustrated by the associated replication power: the probability that a hypothetical replication study of the same design will lead to a statistically significant result is only 50% [ 64 ] and is even lower in the presence of publication bias and regression to the mean (the phenomenon that effect estimates in replication studies are often smaller than the estimates in the original study) [ 65 ]. Claims of novel discoveries should therefore be based on a smaller p-value threshold (e.g., p < 0.005) [ 66 ], but this really depends on the discipline (genome-wide screenings or studies in particle physics often apply much lower thresholds).
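The 50% figure can be illustrated with a short calculation. Under the simplifying assumptions that the true effect equals the originally observed effect, that the replication uses the same design and sample size, and that a normal approximation holds, the replication power follows directly from the original p-value; the sketch below is illustrative only.

```r
# Illustrative sketch of replication power under the assumptions stated above
# (normal approximation; opposite-direction significance ignored as negligible).
replication_power <- function(p_original, alpha = 0.05) {
  z_obs  <- qnorm(1 - p_original / 2)  # z-value implied by the original two-sided p
  z_crit <- qnorm(1 - alpha / 2)       # threshold the replication must exceed
  pnorm(z_obs - z_crit)                # probability of a significant replication
}

replication_power(0.05)   # 0.50, a coin flip
replication_power(0.005)  # about 0.80, a far more informative original result
```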

Generally, there is often too much emphasis on p-values. A statistical index such as the p-value is just the final product of an analysis, the tip of the iceberg [ 67 ]. Statistical analyses often include many complex stages, from data processing, cleaning, and transformation, through handling of missing data and modeling, to statistical inference. Errors and pitfalls can creep in at any stage, and even a tiny error can have a big impact on the result [ 68 ]. Also, when many hypothesis tests are conducted (multiple testing), false positive rates may need to be controlled to protect against wrong conclusions, although adjustments for multiple testing are debated [ 69 – 71 ].
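Where adjustment for multiple testing is deemed appropriate, it is straightforward in practice. The sketch below uses base R’s p.adjust with made-up p-values to contrast family-wise error control with false discovery rate control.

```r
# Minimal sketch: multiple-testing adjustment with base R (made-up p-values).
p_values <- c(0.001, 0.012, 0.030, 0.047, 0.210, 0.650)

p.adjust(p_values, method = "bonferroni")  # controls the family-wise error rate
p.adjust(p_values, method = "BH")          # controls the false discovery rate
```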

Thus, a p-value alone is not a measure of how credible a scientific finding is [ 72 ]. Instead, the quality of the research must be considered, including the study design, the quality of the measurement, and the validity of the assumptions that underlie the data analysis [ 60 , 73 ]. Frameworks exist that help to systematically and transparently assess the certainty in evidence; the most established and widely used one is Grading of Recommendations, Assessment, Development and Evaluations (GRADE; www.gradeworkinggroup.org ) [ 74 ].

Training in basic statistics, statistical programming, and reproducible analyses, as well as better involvement of data professionals in academia, is necessary. University departments sometimes have statisticians who can support researchers. Importantly, statisticians need to be involved early in the process and on an equal footing, not just at the end of a project to perform the final data analysis.

Rule 8: Make your research open

In reality, science often lacks transparency. Open science makes the process of producing evidence and claims transparent and accessible to others [ 75 ]. Several universities and research funders have already implemented open science roadmaps to advocate free and public science as well as open access to scientific knowledge, with the aim of further strengthening the credibility of research. Open research allows more eyes to see and critique it, a principle similar to “Linus’s law” in software development, which says that if enough people test a piece of software, most bugs will be discovered.

As science often progresses incrementally, writing and sharing a study protocol and making data and methods readily available is crucial to facilitate knowledge building. The Open Science Framework (osf.io) is a free and open-source project management tool that supports researchers throughout the entire project life cycle. OSF enables preregistration of study protocols and sharing of documents, data, analysis code, supplementary materials, and preprints.

To facilitate reproducibility, a research paper can link to data and analysis code deposited on OSF. Computational notebooks are now readily available that unite data processing, data transformations, statistical analyses, figures and tables in a single document (e.g., R Markdown, Jupyter); see also the 10 simple rules for reproducible computational research [ 76 ]. Making both data and code open thus minimizes waste of funding resources and accelerates science.

Open science can also advance researchers’ careers, especially for early-career researchers. The increased visibility, retrievability, and citations of datasets can all help with career building [ 77 ]. Therefore, institutions should provide necessary training, and hiring committees and journals should align their core values with open science, to attract researchers who aim for transparent and credible research [ 78 ].

Research reporting

Rule 9: Report all findings

Publication bias occurs when the outcome of a study influences the decision whether to publish it. Researchers, reviewers, and publishers often do not consider nonsignificant study results interesting or worth publishing. As a consequence, outcomes and analyses are only selectively reported in the literature [ 79 ], a phenomenon also known as the file drawer effect [ 80 ].

The extent of publication bias in the literature is illustrated by the overwhelming frequency of statistically significant findings [ 81 ]. A study that extracted p-values from MEDLINE and PubMed Central showed that 96% of the records reported at least 1 statistically significant p-value [ 82 ], which seems implausible in the real world. Another study plotted the distribution of more than 1 million z-values from MEDLINE, revealing a huge gap from −2 to 2 [ 83 ]. Positive studies (i.e., statistically significant, perceived as striking, or showing a beneficial effect) were 4 times more likely to be published than negative studies [ 84 ].

Often, a statistically nonsignificant result is interpreted as a “null” finding. But a nonsignificant finding does not necessarily mean a null effect; absence of evidence is not evidence of absence [ 85 ]. An individual study may be underpowered, resulting in a nonsignificant finding, yet the cumulative evidence from multiple studies may provide sufficient evidence in a meta-analysis. Another argument is that a confidence interval that contains the null value often also contains non-null values that may be of high practical importance. Only if all the values inside the interval are deemed unimportant from a practical perspective may it be fair to describe a result as a null finding [ 61 ]. We should thus never report “no difference” or “no association” just because a p-value is larger than 0.05 or, equivalently, because a confidence interval includes the “null” [ 61 ].
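A small numerical illustration (with made-up numbers) shows why: the p-value is above 0.05, yet the confidence interval remains compatible with effects that would matter in practice.

```r
# Illustrative sketch: "not significant" is not the same as "no effect".
# The estimate and standard error below are made-up example values.
estimate  <- 0.40                               # observed mean difference
std_error <- 0.25                               # its standard error
2 * pnorm(-abs(estimate / std_error))           # two-sided p-value, about 0.11
estimate + c(-1, 1) * qnorm(0.975) * std_error  # 95% CI, about -0.09 to 0.89
# The interval is compatible with no effect, but also with a large,
# practically important difference.
```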

On the other hand, studies sometimes report statistically nonsignificant results with “spin” to claim that the experimental treatment is beneficial, often by focusing their conclusions on statistically significant differences on secondary outcomes despite a statistically nonsignificant difference for the primary outcome [ 86 , 87 ].

Findings that go unpublished have a tremendous impact on the research ecosystem: they distort our knowledge of the scientific landscape, perpetuate misconceptions, and jeopardize both the judgment of researchers and the public’s trust in science. In clinical research, publication bias can mislead care decisions and harm patients, for example, when treatments appear useful even though studies reporting minimal or absent benefits were never published and thus are unknown to physicians [ 88 ]. Moreover, publication bias also directly affects the formulation and proliferation of scientific theories, which are taught to students and early-career researchers, thereby perpetuating biased research from the core. Modeling studies have shown that unless a sufficient proportion of negative studies are published, a false claim can become an accepted fact [ 89 ], and that false positive rates influence the trustworthiness of a given field [ 90 ].

In sum, negative findings are undervalued. They need to be more consistently reported at the study level or be systematically investigated at the systematic review level. Researchers have their share of responsibilities, but there is clearly a lack of incentives from promotion and tenure committees, journals, and funders.

Rule 10: Follow reporting guidelines

Study reports need to faithfully describe the aim of the study and what was done, including potential deviations from the original protocol, as well as what was found. Yet, there is ample evidence of discrepancies between protocols and research reports, and of insufficient quality of reporting [ 79 , 91 – 95 ]. Reporting deficiencies threaten our ability to clearly communicate findings, replicate studies, make informed decisions, and build on existing evidence, wasting time and resources invested in the research [ 96 ].

Reporting guidelines aim to provide the minimum information needed on key design features and analysis decisions, ensuring that findings can be adequately used and studies replicated. In 2008, the Enhancing the QUAlity and Transparency Of Health Research (EQUATOR) network was initiated to provide reporting guidelines for a variety of study designs along with guidelines for education and training on how to enhance quality and transparency of health research. Currently, there are 468 reporting guidelines listed in the network; see the most prominent guidelines in Table 2 . Furthermore, following the ICMJE recommendations, medical journals are increasingly endorsing reporting guidelines [ 97 ], in some cases making it mandatory to submit the appropriate reporting checklist along with the manuscript.

[Table 2. Prominent reporting guidelines. The EQUATOR Network is a library with more than 400 reporting guidelines in health research ( www.equator-network.org ).]

The use of reporting guidelines and their endorsement by journals have had a positive impact on the quality and transparency of research reporting, but further improvement is still needed to maximize the value of research [ 98 , 99 ].

Conclusions

Originally, this paper targeted early-career researchers; however, throughout the development of the rules, it became clear that the present recommendations can serve all researchers irrespective of their seniority. We focused on practical guidelines for the planning, conduct, and reporting of research. Others have aligned GRP with similar topics [ 100 , 101 ]. Even though we provide 10 simple rules, the word “simple” should not be taken lightly. Putting the rules into practice usually requires effort and time, especially at the beginning of a research project. However, time can also be recouped later, for example, when certain choices can be justified to reviewers by providing a study protocol or when data can be quickly reanalyzed by using computational notebooks and dynamic reports.

Researchers have field-specific research skills, but sometimes are not aware of best practices in other fields that can be useful. Universities should offer cross-disciplinary GRP courses across faculties to train the next generation of scientists. Such courses are an important building block to improve the reproducibility of science.

Acknowledgments

This article was written alongside the Good Research Practice (GRP) courses at the University of Zurich provided by the Center for Reproducible Science ( www.crs.uzh.ch ). All materials from the course are available at https://osf.io/t9rqm/ . We appreciated the discussion, development, and refinement of this article within the working group “training” of the SwissRN ( www.swissrn.org ). We are grateful to Philip Bourne for many valuable comments on earlier versions of the manuscript.

Funding Statement

S.S. received funding from SfwF (Stiftung für wissenschaftliche Forschung an der Universität Zürich; grant no. STWF-19-007). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.


Critical Reading



This guide provides an introduction to critical reading, including what critical reading is, its benefits for students, and examples of instructional methods. The guide also provides resources for further explanation of critical reading and for implementing it into curricula.

Critical Reading (CR) is the process of analyzing a scientific paper with the intent of determining the validity, reliability, and rigor of the work and its contribution to the scholarly conversation. It is fundamentally an expansion of the concept of information literacy, encouraging further insight into the qualifications and significance of scientific work.

Critical Reading instruction has been found to have many benefits for students including:

  • Ability to identify the author’s intent
  • Ability to identify the relevance and significance of results to the initial research proposal
  • Understanding the applicability of the findings to academia as a whole and to other works
  • Improved student perception of and interest in STEM fields
  • Skills in developing research questions
  • Increased student self-reported confidence in reading scientific literature



How patience can help you find your purpose

A two-year study suggests practicing patience may be critical to finding and pursuing purpose.

What am I going to do with my life? What really matters to me? How will I leave my mark?

These questions can fill us with hope, inspiration, and direction when we have some sense of what the answers may be. If we don’t, they can fill us with confusion, frustration, and irritation.

Leading a life of purpose, or making an enduring commitment to contributing to the broader world in personally meaningful ways, is associated with a range of benefits, including better physical health, enhanced psychological well-being, superior academic achievement, and enriched social connections. Despite these advantages, leading a life of purpose is rare, as researcher William Damon describes in his 2009 book, The Path to Purpose : As many as two out of three young adults struggle to articulate a clear purpose for their lives.


Before young people can identify a purpose, they need to engage in a process of self-exploration. Searching for a purpose in life is not often studied, but when it has been, scholars have found it to be a source of stress and anxiety, especially when it feels like everyone else has it all figured out. (Rest assured, others are likely still working it out, too!)

Members of my Adolescent Moral Development Lab and I became interested in how we could help young adults navigate the potentially distressing process of searching for a purpose in life. With the generous support of a grant from the Templeton Religion Trust, we conducted a two-year study, and our emerging findings suggest practicing patience may be a critical and often overlooked element of a productive and fulfilling search for purpose. 

How patience and purpose go hand in hand

Patience is the ability to stay actively engaged in working toward a goal without becoming frustrated. Patiently pursuing purpose does not mean sitting by and waiting for inspiration to strike. Instead, it means engaging in the personal reflection and intentional conversations that help us figure out how we want to contribute to the broader world without feeling rushed or hurried. Accepting that the search is a long-term endeavor can help us cultivate our purpose in a more efficient and growth-supporting way.

Practicing patience may facilitate the search for purpose, and this is important because our research also suggests that searching for purpose is not a one-and-done kind of activity. It is unlikely to be the case that we search for a purpose once and then spend the rest of our lives pursuing that single purpose. Instead, we tend to pursue multiple purposes across our lifetimes. Purposes wax and wane with the other things going on in our lives.

For instance, we may find purpose in parenting, but that purpose may transform when we launch our adult children and reinvest in personally meaningful work-related aims. Others of us may find purpose in work, and upon retirement those purposes may recede as we find new ways of contributing to our communities. For young adults, purposes are likely to evolve as they navigate the many transitions associated with this stage of life (e.g., moving from high school into college and from college into the working world). Moves like these are often accompanied by evolutions in our purposes in life.

The point is that the search for purpose is an ongoing activity. Even when we know how we want to leave our mark, we are still likely to search for new ways of making progress toward our personally meaningful aims or for new ways of contributing to the broader world.

Given that the search for purpose is likely to represent a long-term, possibly even a lifelong, activity, it is worthwhile to understand how we can engage in the self-exploration process in the most productive and rewarding way possible. Emerging findings from our study suggest patience may help optimize the search process in at least five ways.

Practicing patience allows us to stand back and take in the full picture of the aim we are after. We can become so focused on figuring out what it is we want to accomplish that we lose the forest for the trees. Taking a broad perspective on the purpose development process may yield insights into progress made to date, and recognizing and even celebrating this progress can fuel our ongoing efforts. Allowing ourselves time to take in the bigger picture may reveal more efficient routes for making progress toward our purpose.

Patience may bolster resilience. Patient individuals take setbacks in stride; they continue making forward progress despite them. Rather than being derailed by challenges in the pursuit of purpose, patient individuals view hardships as inevitable and surmountable. Practicing patience is an important way of cultivating the resilience required to both search for and pursue a purpose in life, as Anne Colby suggests in her 2020 paper, “Purpose as a Unifying Goal for Higher Education.”

Practicing patience may encourage a more thoughtful approach to pursuing meaningful aims. Rather than moving forward in haste, patient individuals move ahead with intention and deliberation, and this may support more sustainable progress in the search for purpose. Compared to others, patient individuals may be more likely to take time to develop relationships with mentors and like-minded peers who can facilitate their progress toward purpose. Slowing down to connect with others along our path to purpose can help us make progress in figuring out how we want to leave our mark (and these relationships may also support our pursuit of purpose, once we have determined what it entails).

Patience in the pursuit of larger aims may foster personal growth. In addition to encouraging resilience and social connections, practicing patience builds self-regulation, self-discipline, and deferred-gratification skills. Developing these strengths of character is likely to benefit individuals in many life domains, including in future periods of self-exploration and subsequent purpose cultivation efforts.

Finally, patient individuals may be more likely than impatient individuals to enjoy the search. Patience enables us to savor the process of figuring out what matters most and how we want to meaningfully contribute to the broader world. It allows us time to celebrate the small successes and be present in the purpose cultivation process. The mindfulness that can accompany a patient pursuit of purpose is likely to enhance our well-being during the search process and in our lives more generally.

In each of these ways, patience may represent a critical component of a healthy and productive search for purpose.

The bottom line: Whether searching for our own purpose in life or supporting someone in their search, remember to practice patience. When we find ourselves becoming agitated and frustrated by the feeling that everyone else has it all figured out, we should remind ourselves to slow down. Take heart in knowing that the process requires time. Focus on the big picture, recall that setbacks are inevitable and surmountable, connect with others who can support your search, take stock of gains, and find the joy in the process, if you can. Before you know it, you might just have figured out how you want to use your skills and talents to contribute in meaningful ways to the world beyond yourself. To read the published manuscripts from which these findings were drawn, please visit Kendall Cotton Bronk’s website . Upon publication, articles from this study will be posted there.

About the Author

Kendall Cotton Bronk

Kendall Cotton Bronk, Ph.D. , is an associate professor of psychology at the Claremont Graduate University in the Division of Behavioral and Social Sciences, where she studies the things that give young people’s lives purpose. Dr. Bronk teamed up with the Greater Good Science Center and social impact firm ProSocial, with support from the John Templeton Foundation, to translate research on purpose into an online toolkit called The Purpose Challenge , which youth can use to explore their own purpose in life.



U.S. Government Accountability Office

Federal Research: Key Practices for Scientific Program Managers

In fiscal year 2021, the federal government funded over $85 billion in basic research as well as early research directed toward a specific practical aim. Federal research spurs innovation and promotes national economic competitiveness, prosperity, and security.

Scientific program managers at federal agencies that sponsor research play a crucial role in guiding and shaping the research. This report identifies key practices that program managers use to select, monitor, and coordinate research for their agencies. It can also serve as a resource to help program managers, agencies, and others to understand, assess, and improve research management.


What GAO Found

To oversee basic and applied research at federal agencies, scientific program managers are typically responsible for managing award selection, monitoring ongoing awards, and coordinating with awardees and the research community. Program managers GAO interviewed from selected agencies identified key practices they used to carry out these responsibilities. They said these practices helped advance their agencies' goals, further science, and avoid unnecessary duplication. Further, the practices may help program managers, agencies, and others assess and improve management of basic and applied research.

As outlined in the figure below, the key practices fall into three areas.

  • Strengthening and building expertise—Practices that help program managers maintain scientific and management expertise.
  • Developing connections—Practices that help program managers enhance collaboration with the scientific community and the public, as well as within their own agencies and in other agencies.
  • Building a strong research portfolio—Practices that help program managers advance their agencies' research mission and scientific knowledge in general, while ensuring their own accountability and that of federally funded researchers.

[Figure: Key Practices for Federal Program Managers to Select, Coordinate, and Monitor Scientific Research]

Why GAO Did This Study

The federal government invests in basic and applied scientific research to drive innovation, promote economic competitiveness, and enhance national security. The National Science Foundation estimates that 32 federal agencies funded over $85 billion in basic and applied research in fiscal year 2021.

Scientific program managers at federal agencies that sponsor basic and applied research play a critical role in guiding and shaping the research funded by their agencies.

In this report, GAO describes key practices that federal program managers use to manage their research.

GAO held 14 group discussions with 79 program managers from seven selected agencies that funded over 90 percent of basic and applied research obligations in fiscal year 2021. GAO asked the program managers to describe the practices they use when managing projects in their basic and applied research portfolios. GAO conducted qualitative analysis to identify common themes and distilled them into 10 key practices. These key practices were cited by multiple program managers or agencies and could be used by program managers across the federal government when managing projects in their basic and applied research portfolios.

GAO also conducted a literature review to help corroborate the key practices. GAO sought and incorporated feedback on these practices from the selected agencies as well as experts identified by the National Academies of Sciences, Engineering, and Medicine.

For more information, contact Candice N. Wright at (202) 512-6888 or [email protected] .




How to Make a “Good” Presentation “Great”

  • Guy Kawasaki


Remember: Less is more.

A strong presentation is so much more than information pasted onto a series of slides with fancy backgrounds. Whether you’re pitching an idea, reporting market research, or sharing something else, a great presentation can give you a competitive advantage, and be a powerful tool when aiming to persuade, educate, or inspire others. Here are some unique elements that make a presentation stand out.

  • Fonts: Sans Serif fonts such as Helvetica or Arial are preferred for their clean lines, which make them easy to digest at various sizes and distances. Limit the number of font styles to two: one for headings and another for body text, to avoid visual confusion or distractions.
  • Colors: Colors can evoke emotions and highlight critical points, but their overuse can lead to a cluttered and confusing presentation. A limited palette of two to three main colors, complemented by a simple background, can help you draw attention to key elements without overwhelming the audience.
  • Pictures: Pictures can communicate complex ideas quickly and memorably but choosing the right images is key. Images or pictures should be big (perhaps 20-25% of the page), bold, and have a clear purpose that complements the slide’s text.
  • Layout: Don’t overcrowd your slides with too much information. When in doubt, adhere to the principle of simplicity, and aim for a clean and uncluttered layout with plenty of white space around text and images. Think phrases and bullets, not sentences.

As an intern or early career professional, chances are that you’ll be tasked with making or giving a presentation in the near future.


  • Guy Kawasaki is the chief evangelist at Canva and the former chief evangelist at Apple. Guy is the author of 16 books including Think Remarkable: 9 Paths to Transform Your Life and Make a Difference.


Why your students don't do the readings

Dr Sandris Zeivots

Most higher education students do not engage with course readings, but new research suggests that can be fixed by rethinking how and why they are assigned.

Published in the Academy of Management Learning and Education , the research proposes six aspects to consider when seeking meaningful student engagement with readings: usefulness, enjoyment, quantity, access, intent and integration with the course.

Dr Sandris Zeivots said the research was inspired by his work in course co-design at the University of Sydney Business School .

“Existing research suggests 70 to 80 percent of students do not engage with course readings. That’s for a whole host of reasons, from time constraints and language barriers to a lack of understanding of the purpose of readings,” Dr Zeivots said.

“Anecdotally, lecturers know most students don’t comply with the system, but we act as though the fault is with the students. This research started by flipping that assumption and asking: what are we trying to achieve with readings? And, how can we better design them so that students will want to engage?”

Dr Zeivots worked with International Business course coordinator Professor Vikas Kumar to redesign the readings in the popular Master of Commerce subject at the Business School.

After examining the existing literature and developing their framework, Dr Zeivots and his co-author Ms Courtney Shalavin worked with Professor Kumar, tutors and students over the course of three consecutive semesters – a process that led to half the readings being changed.

They introduced one to two ‘must-read’ pages for each reading that contained the key points, and an unmarked online discussion question to guide students’ approach to each reading and to engage their critical thinking.

Surveys revealed just over half (54 percent) of students reported reading the must-read pages, and engagement with readings increased slightly over the semester rather than decreasing with student fatigue.

Professor Kumar said the discussion questions allowed tutors to engage students on the concepts raised by the readings without the pressure of a grade.

“Involving students in the co-design process produced invaluable feedback. For example, we learned students wanted to learn more about fintech, so we introduced more fintech case studies that aligned with our core teaching concepts.”

Declaration

This research was conducted as part of the Connected Learning at Scale project at the University of Sydney Business School. The authors declare no conflicts of interest.




Smithsonian Tropical Research Institute

Harnessing the Wisdom of Indigenous Communities for Marine Conservation

By engaging directly with community members and embracing indigenous knowledge in the Bocas del Toro archipelago, a NatGeo project led by a Smithsonian scientist highlights the necessity of inclusive approaches to safeguard critical marine ecosystems and culture for future generations.

Leila Nilipour


There’s a cold front in the Bocas del Toro archipelago when we arrive in mid-February. It should be peak summertime in Panama’s Caribbean. Instead, we are greeted by cloudy skies, light rain, and choppy seas. With our rain gear and life jackets on, we take off from the Smithsonian’s dock at the Bocas del Toro research station towards Popa island to meet with the Ngäbe indigenous communities living there. This visit is part of the NatGeo project “The Many Faces of Conservation: Impacts and meaning of Bastimentos Island National Marine Park on the Ngäbe in Panama” led by Ana Spalding, the director of the Adrienne Arsht Community Based Resilience Solutions Initiative and Staff Scientist at the Smithsonian Tropical Research Institute. 

In 1988, when Bastimentos Island National Marine Park was established in Bocas del Toro, the indigenous islanders living in its buffer zone were not consulted. As an environmental social scientist, Spalding is interested in listening to their side of the story, considering that the government of Panama has recently explored the possibility of expanding the park. The ecological knowledge of the Ngäbe — the largest indigenous group in Panama— is indispensable for informing policy decisions that may directly affect their ways of living and interacting with their environment in the future.  

Her goal is to understand the community’s relationship to the natural resources of the Bocas del Toro archipelago, whether the creation of the National Park in the eighties had any positive or negative impacts on their livelihoods, and how they feel about a potential expansion. She also seeks to gather their perspectives on best ways to protect their marine resources.  


“Within the existing forms of conservation, very little information exists on local knowledge and local uses, particularly local indigenous uses,” said Spalding. "You cannot do conservation with your back to the people." 

To do this work, she enlisted two colleagues with long-term ties to the region and its people: Felipe Baker, a Ngäbe biologist from Kusapin, a coastal community on the Bocas del Toro mainland, and Cinda Scott, a marine biologist and Center Director at The School for Field Studies (SFS) in Bocas del Toro. When we arrived in Popa, community members gathered in a traditional community house near the dock to meet us. Most were fishermen or housekeepers. It was 10:20 a.m., a bit later than we had anticipated because of the unusual weather conditions. They received us with sweet, freshly brewed coffee and johnny cakes, a traditional coconut-based bread from Bocas del Toro.

Cinda and Felipe led the discussion, in Spanish and, at times, in Ngäbere, while Ana took meticulous notes, making sure to organize each participant’s viewpoint based on their affiliation with specific groups within the community, such as homemakers, artisans, fishermen, or tour operators. They made it clear that the meeting had no political motives, an important clarification as the visit took place in the middle of an electoral year.

They voiced concerns over the disparity in expectations regarding mangrove forest conservation across the archipelago. Despite being prohibited from felling any mangrove trees, which they rely upon for cooking fuel, they witness a stark contrast in treatment, as foreigners acquiring land in Bocas del Toro clear mangroves on their properties apparently with impunity. Additionally, they lamented the seeming decline in biodiversity, citing the disappearance of once-plentiful species such as the sardine from the mangroves and the diminishing size of traditionally relied-upon species like lobster.


Ultimately, they found themselves grappling with a sense of powerlessness when it came to safeguarding their resources. This stemmed from the belief that they lacked the necessary authority to prevent others from over-exploiting them.   

“We could call their attention a thousand times, but their response will always be that the ocean is free,” one of them said. “We could be protecting our resources, but then someone else comes to destroy what we are protecting.”   

Over the next few days, we visited three other island communities—Bahía Honda, Salt Creek, and Isla Tigre—to meet with their residents. They shared a common concern: the exploitation of natural resources by outsiders, resources they depend on for subsistence or economic activities such as tourism. These resources include mangroves, coral reefs, dolphins, endangered marine species, and even their marine protein sources. 

“There are very good laws, but they’re not being enforced,” said a Bahia Honda resident. “We witness it when we see them building a house over the reefs.”  

A recurring pattern emerged in the four communities we visited: middle-aged and elderly men typically exhibited greater ease in expressing their perspectives, while women and younger members tended to listen attentively, except for Isla Tigre where most of the attendees were women. Following each group session, Felipe Baker conducted individual interviews with a few members of each community, gathering insights that might not surface in a collective setting. 


“I am a Ngäbe professional, so I am an instrument for understanding those thoughts, that knowledge, those ideas, a little more,” said Baker. “The indigenous communities have knowledge to share with the international community, the people who want to conserve, because only then can we talk about equity in conservation.” 

As a gesture of gratitude to each island community, we provided ingredients for a communal lunch to conclude our day together. The meal usually consisted of chicken, rice, plantains and cabbage salad. I scribbled an observation about it in my notebook: Would you share a cup of coffee or a meal with someone you didn’t trust? Perhaps this is the natural result of people-centered environmental efforts. 

“Local communities surrounding protected areas possess invaluable insights into what they value, what they seek to preserve, and what benefits their communities,” said Cinda Scott. “As external actors, it's essential for us to pause and genuinely observe the realities on the ground. The most effective way to gain this understanding is by engaging directly with the people who inhabit these areas, listening to their voices, and incorporating their wisdom into conservation efforts.” 

For Spalding, it is a matter of respect; a matter of relationship-building and acknowledging the different ways of knowing and of understanding the environment. 

“If there is only one voice speaking about conservation, we are giving privilege to that voice,” said Spalding. “The idea is to have multiple voices and ideally find a solution to the major environmental problems we are experiencing.” 


Ultimately, The Many Faces of Conservation project strives to foster equity in conversations about conservation. Upon producing a report, Spalding and her teammates intend to circulate this valuable information back to the communities involved, ensuring that they have access to and can benefit from the insights gathered, for their own self-determination or for the improvement of their well-being. 

Our last day in Bocas del Toro met us with sunshine and beautiful blue skies. Setting off on our last boat journey from the Isla Tigre community back to the Smithsonian research station, the mangrove forests dotting the archipelago landscape throughout our ride served as reminders of their crucial role in shaping the identity of all those who call themselves bocatoreños. 

There is no Bocas without the mangroves, I often overheard during our week in the archipelago. For locals, the mere thought of losing these vital ecosystems was inconceivable.


“It is the habitat where humans and the ocean meet,” said Scott. 

If only we could perceive them through the lens of those — human or animal— who have been protected or have flourished amidst their roots and branches, in part by embracing indigenous ways of knowing, we would grasp the shared responsibility we hold to preserve them for the benefit and well-being of future generations.  

Leila Nilipour is a bilingual storyteller based in Panama City. 

IMAGES

  1. RES 5 Characteristics of Good Research / lecture and notes

    critical of good research

  2. essential characteristics of research-requirements of a good research

    critical of good research

  3. How to Develop a Strong Research Question

    critical of good research

  4. Top 10 Qualities of Good Academic Research in 2024

    critical of good research

  5. Characteristics of Good Research Design

    critical of good research

  6. Characteristics and criteria of good research

    critical of good research

VIDEO

  1. Ensuring trustworthiness in qualitative research

  2. Gut Eze: 30 Capsules

  3. Characteristics of a Good Research and Quality of a Good Researcher

  4. Criteria Of Good Research

  5. Critical Thinking

  6. 1.3 Characteristics of Good Research (Business Research Methods)

COMMENTS

  1. Criteria for Good Qualitative Research: A Comprehensive Review

    This review aims to synthesize a published set of evaluative criteria for good qualitative research. The aim is to shed light on existing standards for assessing the rigor of qualitative research encompassing a range of epistemological and ontological standpoints. Using a systematic search strategy, published journal articles that deliberate criteria for rigorous research were identified. Then ...

  2. The critical steps for successful research: The research proposal and

    The four cornerstones of good research are the well-formulated protocol or proposal that is well executed, analyzed, discussed and concluded. This recent workshop educated researchers in the critical steps involved in the development of a scientific idea to its successful execution and eventual publication. ... The objectives of the workshop ...

  3. A Review of the Quality Indicators of Rigor in Qualitative Research

    Developing a FINER research question is critical to study rigor and quality and should not be rushed, as all other aspects of research design depend on the focus and clarity of the research question(s) guiding the study. 15 Agee provides clear and worthwhile additional guidance for developing qualitative research questions. 15.

  4. PDF Criteria for Good Qualitative Research: A Comprehensive Review

    evaluating good research and varieties of research contri-butions that can be made. This review attempts to present a ... qualitative research and to verify the existence of research studies dealing with the critical assessment of qualitative research based on the concept of diverse paradigmatic stances. Contrary to the existing reviews, this ...

  5. What is Good Qualitative Research?

    good qualitative research and one of the most common omissions in qualitative articles. If the sample is representing the themes around an issue using theoretical sampling, cases will be collected until issues are felt to be 'theoretically saturated'; i.e. no new relevant data seem to emerge (Strauss & Corbin, 1990).

  6. Criteria for Good Qualitative Research: A Comprehensive Review

    Drishti Yadav. 1. Accepted: 28 August 2021. The Author (s) 2021. Abstract This review aims to synthesize a published set of. evaluative criteria for good qualitative research. The aim is. to shed ...

  7. Research quality: What it is, and how to achieve it

    2) Initiating research stream: The researcher (s) must be able to assemble a research team that can achieve the identified research potential. The team should be motivated to identify research opportunities and insights, as well as to produce top-quality articles, which can reach the highest-level journals.

  8. Quality in Research: Asking the Right Question

    This column is about research questions, the beginning of the researcher's process. For the reader, the question driving the researcher's inquiry is the first place to start when examining the quality of their work because if the question is flawed, the quality of the methods and soundness of the researchers' thinking does not matter.

  9. Critical appraisal of qualitative research

    Qualitative evidence allows researchers to analyse human experience and provides useful exploratory insights into experiential matters and meaning, often explaining the 'how' and 'why'. As we have argued previously1, qualitative research has an important place within evidence-based healthcare, contributing to among other things policy on patient safety,2 prescribing,3 4 and ...

  10. Good listening: A key element in establishing quality in qualitative

    Listening studies demonstrate that good listening is a major part of the interaction between a listener and a speaker. Numerous findings suggest that the insights of these quantitative studies could also be highly relevant to the interaction between interviewer and interviewee in the context of qualitative research.

  11. A guide to critical appraisal of evidence : Nursing2020 Critical Care

    Critical appraisal is the assessment of research studies' worth to clinical practice. Critical appraisal—the heart of evidence-based practice—involves four phases: rapid critical appraisal, evaluation, synthesis, and recommendation. This article reviews each phase and provides examples, tips, and caveats to help evidence appraisers ...

  12. Full article: Critical appraisal

    Critical appraisal 'The notion of systematic review - looking at the totality of evidence - is quietly one of the most important innovations in medicine over the past 30 years' (Goldacre, Citation 2011, p. xi).These sentiments apply equally to sport and exercise psychology; systematic review or evidence synthesis provides transparent and methodical procedures that assist reviewers in ...

  13. What is good Research and what makes a good research

    Answer: Good quality research is one that provides robust and ethical evidence. A good research must revolve around a novel question and must be based on a feasible study plan. It must make a significant contribution to scientific development by addressing an unanswered question or by solving a problem or difficulty that existed in the real world.

  14. Research Frameworks: Critical Components for Reporting ...

    Research Frameworks: Critical Components for Reporting Qualitative Health Care Research J Patient ... framework, what to call it, and how to use one. This editorial clarifies some of the terminology and reinforces why research frameworks are essential for good-quality reporting of all research, especially qualitative research. ...

  15. The Importance of Critical Thinking Skills in Research

    The answer is critical thinking skills. The more that academic research becomes governed by policies outside of the research process, the less opportunity there will be for researchers to exercise such skills. True research demands new ideas, perspectives, and arguments based on willingness and confidence to revisit and directly challenge ...

  16. PDF Step'by-step guide to critiquing research. Part 1: quantitative research

    research is research, and it is often quite difficult to grasp what others are referring to when they discuss the limitations and or strengths within a research study. Research texts and journals refer to critiquing the literature, critical analysis, reviewing the literature, evaluation and appraisal of the literature which

  17. Research: Definition, Characteristics, Goals, Approaches

    The primary goal or purpose of research in any field of inquiry; is to add to what is known about the phenomenon under investigation by applying scientific methods. Though each research has its own specific goals, we may enumerate the following 4 broad goals of scientific research: Exploration and Explorative Research.

  18. (PDF) Qualities and Characteristics of a Good Scientific Research

    Qualities and Characteristics of a Good Scientific Research. Writing; Step-by-Step Approaches. Val Hyginus Udoka Eze 1, * Chidinma Esther Eze, Asiati Mbabazi, Ugwu. Chinyere N, Ugwu Okechukwu Paul ...

  19. Applying Critical Thinking

    Critical thinking refers to deliberately scrutinizing and evaluating theories, concepts, or ideas using reasoned reflection and analysis. The act of thinking critically implies moving beyond simply understanding information, but questioning its source, its production, and its presentation in order to expose potential bias or researcher subjectivity [i.e., being influenced by personal opinions ...

  20. Ten simple rules for good research practice

    GRP, good research practices. Research planning. Rule 1: Specify your research question ... Rombey T, Wayant C, et al. Reducing bias and improving transparency in medical research: a critical overview of the problems, progress and suggested next steps. J R Soc Med. 2020; 113:433-43. doi: 10.1177/0141076820956799 [PMC free article] [Google ...

  21. (PDF) The Criteria of a Good Research

    this research is to mention the criteria that may assist us to write a. good research. 1- The purpose of research or the problem involved should be. clearly defined and sharply limited in terms as ...

  22. Home

    Critical Reading instruction has been found to have many benefits for students including: Ability to identify the author's intent; Ability to identify relevance and significance of results to the initial research proposal, Understanding the applicability of the findings to academia as a whole and to other works

  23. How Patience Can Help You Find Your Purpose

    Kendall Cotton Bronk. Kendall Cotton Bronk, Ph.D., is an associate professor of psychology at the Claremont Graduate University in the Division of Behavioral and Social Sciences, where she studies the things that give young people's lives purpose.Dr. Bronk teamed up with the Greater Good Science Center and social impact firm ProSocial, with support from the John Templeton Foundation, to ...

  24. Federal Research: Key Practices for Scientific Program Managers

    The National Science Foundation estimates that 32 federal agencies funded over $85 billion in basic and applied research in fiscal year 2021. Scientific program managers at federal agencies that sponsor basic and applied research play a critical role in guiding and shaping the research funded by their agencies.

  25. How to Make a "Good" Presentation "Great"

    A strong presentation is so much more than information pasted onto a series of slides with fancy backgrounds. Whether you're pitching an idea, reporting market research, or sharing something ...

  26. Cheating death: The latest research on aging and immortality from a

    In a new book, Nobel Prize-winning molecular biologist Venki Ramakrishnan raises critical questions about the societal, political and ethical costs of attempts to live forever.

  27. Essential Ingredients of a Good Research Proposal for Undergraduate and

    The chapters on: (a) critical and analytical review of the main literature (an expansion of the mini literature review in the research proposal) including the development of an appropriate theoretical framework (for MPhil and PhD theses); (b) research methodology; (c) data presentation, analysis, and discussion; and (d) summary of research ...

  28. Why your students don't do the readings

    Dr Sandris Zeivots said the research was inspired by his work in course co-design at the University of Sydney Business School. "Existing research suggests 70 to 80 percent of students do not engage with course readings. That's for a whole host of reasons, from time constraints and language barriers to a lack of understanding of the purpose ...

  29. Harnessing the Wisdom of Indigenous Communities for Marine Conservation

    With our rain gear and life jackets on, we take off from the Smithsonian's dock at the Bocas del Toro research station towards Popa island to meet with the Ngäbe indigenous communities living ...