Harvard Catalyst

Creating a Good Research Question

  • Advice & Growth
  • Process in Practice

Successful translation of research begins with a strong question. How do you get started? How do good research questions evolve? And where do you find inspiration to generate good questions in the first place?  It’s helpful to understand existing frameworks, guidelines, and standards, as well as hear from researchers who utilize these strategies in their own work.

In the fall and winter of 2020, Naomi Fisher, MD, conducted 10 interviews with clinical and translational researchers at Harvard University and affiliated academic healthcare centers, with the purpose of capturing their experiences developing good research questions. The researchers featured in this project represent various specialties, drawn from every stage of their careers. Below you will find clips from their interviews and additional resources that highlight how to get started, as well as helpful frameworks and factors to consider. Additionally, visit the Advice & Growth section to hear candid advice and explore the Process in Practice section to hear how researchers have applied these recommendations to their published research.

  • Naomi Fisher, MD, is associate professor of medicine at Harvard Medical School (HMS) and clinical staff at Brigham and Women’s Hospital (BWH). Fisher is founder and director of Hypertension Services and the Hypertension Specialty Clinic at BWH, where she is a renowned endocrinologist. She serves as a faculty director for communication-related Boundary-Crossing Skills for Research Careers webinar sessions and the Writing and Communication Center.
  • Christopher Gibbons, MD, is associate professor of neurology at HMS and clinical staff at Beth Israel Deaconess Medical Center (BIDMC) and Joslin Diabetes Center. Gibbons’ research focuses on peripheral and autonomic neuropathies.
  • Clare Tempany-Afdhal, MD, is professor of radiology at HMS and the Ferenc Jolesz Chair of Research, Radiology at BWH. Her major areas of research are MR imaging of the pelvis and image-guided therapy.
  • David Sykes, MD, PhD, is assistant professor of medicine at Massachusetts General Hospital (MGH), where he is also principal investigator of the Sykes Lab. His special interest area is rare hematologic conditions.
  • Elliot Israel, MD, is professor of medicine at HMS, director of the Respiratory Therapy Department, director of clinical research in the Pulmonary and Critical Care Medical Division, and associate physician at BWH. Israel’s research interests include therapeutic interventions to alter asthmatic airway hyperactivity and the role of arachidonic acid metabolites in airway narrowing.
  • Jonathan Williams, MD, MMSc, is assistant professor of medicine at HMS and associate physician at BWH. He focuses on endocrinology, specifically unravelling the intricate relationship between genetics and environment with respect to susceptibility to cardiometabolic disease.
  • Junichi Tokuda, PhD, is associate professor of radiology at HMS and a research scientist in the Department of Radiology at BWH. Tokuda is particularly interested in technologies to support image-guided “closed-loop” interventions. He also serves as a principal investigator leading several projects funded by the National Institutes of Health and industry.
  • Osama Rahma, MD, is assistant professor of medicine at HMS and a clinical staff member in medical oncology at Dana-Farber Cancer Institute (DFCI). Rahma is currently a principal investigator at the Center for Immuno-Oncology and Gastroenterology Cancer Center at DFCI. His research focus is drug development of combinational immune therapeutics.
  • Sharmila Dorbala, MD, MPH, is professor of radiology at HMS and clinical staff at BWH in cardiovascular medicine and radiology. She is also the president of the American Society of Nuclear Medicine. Dorbala’s specialty is using nuclear medicine for cardiovascular discoveries.
  • Subha Ramani, PhD, MBBS, MMed, is associate professor of medicine at HMS, as well as associate physician in the Division of General Internal Medicine and Primary Care at BWH. Ramani’s scholarly interests focus on innovative approaches to teaching, learning, and assessment of clinical trainees; faculty development in teaching; and qualitative research methods in medical education.
  • Ursula Kaiser, MD, is professor at HMS, chief of the Division of Endocrinology, Diabetes and Hypertension, and senior physician at BWH. Kaiser’s research focuses on understanding the molecular mechanisms by which pulsatile gonadotropin-releasing hormone regulates the expression of luteinizing hormone and follicle-stimulating hormone genes.

Insights on Creating a Good Research Question

Junichi Tokuda, PhD


Ursula Kaiser, MD


Start Successfully: Build the Foundation of a Good Research Question

Jonathan Williams, MD, MMSc

Start Successfully Resources

Ideation in Device Development: Finding Clinical Need (Josh Tolkoff, MS). A lecture explaining the critical importance of identifying a compelling clinical need before embarking on a research project. Play the Ideation in Device Development video.

Radical Innovation (Jeff Karp, PhD). This ThinkResearch podcast episode focuses on one researcher’s approach of using radical simplicity to break down big problems and questions. Play Radical Innovation.

Using Healthcare Data: How Can Researchers Come Up with Interesting Questions? (Anupam Jena, MD, PhD). Another ThinkResearch podcast episode addresses how to discover good research questions by using a backward design approach, which involves analyzing big data and allowing the research question to unfold from the findings. Play Using Healthcare Data.

Important Factors: Consider Feasibility and Novelty

Sharmila Dorbala, MD, MPH

Refining Your Research Question

Clare Tempany-Afdhal, MD

Elliot Israel, MD

Frameworks and Structure: Evaluate Research Questions Using Tools and Techniques

Frameworks and Structure Resources

Designing Clinical Research (Hulley et al.). A comprehensive and practical guide to clinical research, including the FINER framework for evaluating research questions. Learn more about the book.

Translational Medicine Library Guide (Queens University Library). An introduction to popular frameworks for research questions, including FINER and PICO. Review the translational medicine guide.

Asking a Good T3/T4 Question (Niteesh K. Choudhry, MD, PhD). This video explains the PICO framework in practice as participants in a workshop propose research questions that compare interventions. Play the Asking a Good T3/T4 Question video.

Introduction to Designing & Conducting Mixed Methods Research. An online course that provides a deeper dive into mixed methods research questions and methodologies. Learn more about the course.

Network and Support: Find the Collaborators and Stakeholders to Help Evaluate Research Questions

Christopher Gibbons, MD

Network & Support Resource

Bench-to-bedside, Bedside-to-bench (Christopher Gibbons, MD). In this lecture, Gibbons shares his experience of bringing research from bench to bedside, and from bedside to bench. His talk highlights the formation and evolution of research questions based on clinical need. Play Bench-to-bedside.


Writing Strong Research Questions | Criteria & Examples

Published on 30 October 2022 by Shona McCombes. Revised on 12 December 2023.

A research question pinpoints exactly what you want to find out in your work. A good research question is essential to guide your research paper, dissertation, or thesis.

All research questions should be:

  • Focused on a single problem or issue
  • Researchable using primary and/or secondary sources
  • Feasible to answer within the timeframe and practical constraints
  • Specific enough to answer thoroughly
  • Complex enough to develop the answer over the space of a paper or thesis
  • Relevant to your field of study and/or society more broadly



You can follow these steps to develop a strong research question:

  • Choose your topic
  • Do some preliminary reading about the current state of the field
  • Narrow your focus to a specific niche
  • Identify the research problem that you will address

The way you frame your question depends on what your research aims to achieve. The table below shows some examples of how you might formulate questions for different purposes.

Using your research problem to develop your research question

Note that while most research questions can be answered with various types of research , the way you frame your question should help determine your choices.


Research questions anchor your whole project, so it’s important to spend some time refining them. The criteria below can help you evaluate the strength of your research question.

Focused and researchable

Feasible and specific

Complex and arguable

Relevant and original

The way you present your research problem in your introduction varies depending on the nature of your research paper. A research paper that presents a sustained argument will usually encapsulate this argument in a thesis statement.

A research paper designed to present the results of empirical research tends to present a research question that it seeks to answer. It may also include a hypothesis – a prediction that will be confirmed or disproved by your research.

As you cannot possibly read every source related to your topic, it’s important to evaluate sources to assess their relevance. Use preliminary evaluation to determine whether a source is worth examining in more depth.

This involves:

  • Reading abstracts, prefaces, introductions, and conclusions
  • Looking at the table of contents to determine the scope of the work
  • Consulting the index for key terms or the names of important scholars

An essay isn’t just a loose collection of facts and ideas. Instead, it should be centered on an overarching argument (summarised in your thesis statement) that every part of the essay relates to.

The way you structure your essay is crucial to presenting your argument coherently. A well-structured essay helps your reader follow the logic of your ideas and understand your overall point.

A research hypothesis is your proposed answer to your research question. The research hypothesis usually includes an explanation (‘ x affects y because …’).

A statistical hypothesis, on the other hand, is a mathematical statement about a population parameter. Statistical hypotheses always come in pairs: the null and alternative hypotheses. In a well-designed study , the statistical hypotheses correspond logically to the research hypothesis.
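The null/alternative pairing can be made concrete with a toy example that is not from the article: an exact two-sided binomial test of the null hypothesis that a coin is fair, sketched in Python using only the standard library.

```python
from math import comb

def binomial_two_sided_p(k, n, p0=0.5):
    """Exact two-sided p-value for H0: success probability equals p0.

    Sums the probabilities of all outcomes at most as likely as the
    observed count k (the "minimum-likelihood" two-sided test)."""
    probs = [comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    return sum(pr for pr in probs if pr <= observed + 1e-12)

# H0 (null hypothesis): the coin is fair (p = 0.5).
# H1 (alternative): it is not. Suppose we observe 16 heads in 20 flips.
p = binomial_two_sided_p(16, 20)
print(f"p = {p:.4f}")  # p = 0.0118: the data are unlikely under the null
```

Here the statistical hypotheses (p = 0.5 versus p ≠ 0.5) are mathematical statements about a population parameter, while the corresponding research hypothesis would explain *why* the coin is expected to be biased.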


McCombes, S. (2023, December 12). Writing Strong Research Questions | Criteria & Examples. Scribbr. Retrieved 22 April 2024, from https://www.scribbr.co.uk/the-research-process/research-question/


Grad Coach

Research Question Examples 🧑🏻‍🏫

25+ Practical Examples & Ideas To Help You Get Started 

By: Derek Jansen (MBA) | October 2023

A well-crafted research question (or set of questions) sets the stage for a robust study and meaningful insights.  But, if you’re new to research, it’s not always clear what exactly constitutes a good research question. In this post, we’ll provide you with clear examples of quality research questions across various disciplines, so that you can approach your research project with confidence!

Research Question Examples

  • Psychology research questions
  • Business research questions
  • Education research questions
  • Healthcare research questions
  • Computer science research questions

Examples: Psychology

Let’s start by looking at some examples of research questions that you might encounter within the discipline of psychology.

How does sleep quality affect academic performance in university students?

This question is specific to a population (university students) and looks at a direct relationship between sleep and academic performance, both of which are quantifiable and measurable variables.

What factors contribute to the onset of anxiety disorders in adolescents?

The question narrows down the age group and focuses on identifying multiple contributing factors. There are various ways in which it could be approached from a methodological standpoint, including both qualitatively and quantitatively.

Do mindfulness techniques improve emotional well-being?

This is a focused research question aiming to evaluate the effectiveness of a specific intervention.

How does early childhood trauma impact adult relationships?

This research question targets a clear cause-and-effect relationship over a long timescale, making it focused but comprehensive.

Is there a correlation between screen time and depression in teenagers?

This research question focuses on an in-demand current issue and a specific demographic, allowing for a focused investigation. The key variables are clearly stated within the question and can be measured and analysed (i.e., high feasibility).


Examples: Business/Management

Next, let’s look at some examples of well-articulated research questions within the business and management realm.

How do leadership styles impact employee retention?

This is an example of a strong research question because it directly looks at the effect of one variable (leadership styles) on another (employee retention), allowing for a strongly aligned methodological approach.

What role does corporate social responsibility play in consumer choice?

Current and precise, this research question can reveal how social concerns are influencing buying behaviour by way of a qualitative exploration.

Does remote work increase or decrease productivity in tech companies?

Focused on a particular industry and a hot topic, this research question could yield timely, actionable insights that would have high practical value in the real world.

How do economic downturns affect small businesses in the homebuilding industry?

Vital for policy-making, this highly specific research question aims to uncover the challenges faced by small businesses within a certain industry.

Which employee benefits have the greatest impact on job satisfaction?

By being straightforward and specific, answering this research question could provide tangible insights to employers.

Examples: Education

Next, let’s look at some potential research questions within the education, training and development domain.

How does class size affect students’ academic performance in primary schools?

This example research question targets two clearly defined variables, which can be measured and analysed relatively easily.

Do online courses result in better retention of material than traditional courses?

Timely, specific and focused, answering this research question can help inform educational policy and personal choices about learning formats.

What impact do US public school lunches have on student health?

Targeting a specific, well-defined context, the research could lead to direct changes in public health policies.

To what degree does parental involvement improve academic outcomes in secondary education in the Midwest?

This research question focuses on a specific context (secondary education in the Midwest) and has clearly defined constructs.

What are the negative effects of standardised tests on student learning within Oklahoma primary schools?

This research question has a clear focus (negative outcomes) and is narrowed into a very specific context.


Examples: Healthcare

Shifting to a different field, let’s look at some examples of research questions within the healthcare space.

What are the most effective treatments for chronic back pain amongst UK senior males?

Specific and solution-oriented, this research question focuses on clear variables and a well-defined context (senior males within the UK).

How do different healthcare policies affect patient satisfaction in public hospitals in South Africa?

This question has clearly defined variables and is narrowly focused in terms of context.

Which factors contribute to obesity rates in urban areas within California?

This question is focused yet broad, aiming to reveal several contributing factors for targeted interventions.

Does telemedicine provide the same perceived quality of care as in-person visits for diabetes patients?

Ideal for a qualitative study, this research question explores a single construct (perceived quality of care) within a well-defined sample (diabetes patients).

Which lifestyle factors have the greatest effect on the risk of heart disease?

This research question aims to uncover modifiable factors, offering preventive health recommendations.


Examples: Computer Science

Last but certainly not least, let’s look at a few examples of research questions within the computer science world.

What are the perceived risks of cloud-based storage systems?

Highly relevant in our digital age, this research question would align well with a qualitative interview approach to better understand what users feel the key risks of cloud storage are.

Which factors affect the energy efficiency of data centres in Ohio?

With a clear focus, this research question lays a firm foundation for a quantitative study.

How do TikTok algorithms impact user behaviour amongst new graduates?

While this research question is more open-ended, it could form the basis for a qualitative investigation.

What are the perceived risks and benefits of open-source software within the web design industry?

Practical and straightforward, the results could guide both developers and end-users in their choices.

Remember, these are just examples…

In this post, we’ve tried to provide a wide range of research question examples to help you get a feel for what research questions look like in practice. That said, it’s important to remember that these are just examples and don’t necessarily equate to good research topics. If you’re still trying to find a topic, check out our topic megalist for inspiration.




Developing a Research Question

17 Developing a Researchable Research Question

After thinking about what topics interest you, identifying a topic that is both empirical and sociological, and deciding whether your research will be exploratory, descriptive, or explanatory, the next step is to form a research question about your topic. For many researchers, forming hypotheses comes after developing one’s research question. However, for now, we will just think about research questions.

So then, what makes a good research question? Let us first consider some practical aspects. A good research question is one that:

  • Interests you;
  • Can be answered with the resources (money, technology, assistance, etc.) you have;
  • Relies on data you can access (human, animal, or numerical/file data);
  • Is operationalized appropriately;
  • Has a specific objective (anything from explaining something to describing something).

A good research question also has some specific characteristics, as follows:

  • It is generally written in the form of a question;
  • It is also well focused;
  • It cannot be answered with a simple yes or no;
  • It should have more than one plausible answer;
  • It considers relationships amongst multiple concepts.

Generally speaking, your research question will guide whether your research project is best approached with a quantitative, qualitative, mixed methods, or other [1] approaches. Table 3.2 provides some examples of problematic research questions and then suggestions for how to improve each research question.

In Chapter 8, we look at designing survey questions, which are not to be confused with research questions.

Text Attributions

  • This chapter has been adapted from Chapter 4.4 in Principles of Sociological Inquiry, which was adapted by the Saylor Academy without attribution to the original authors or publisher, as requested by the licensor. © Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.
  • We will look at “other” methods, such as unobtrusive methods, in Chapters

An Introduction to Research Methods in Sociology Copyright © 2019 by Valerie A. Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.

Share This Book


Open Access

Ten simple rules for good research practice

* E-mail: [email protected]

Affiliations Center for Reproducible Science, University of Zurich, Zurich, Switzerland, Epidemiology, Biostatistics and Prevention Institute, University of Zurich, Zurich, Switzerland


Affiliation Department of Clinical Research, University Hospital Basel, University of Basel, Basel, Switzerland

Affiliation Human Neuroscience Platform, Fondation Campus Biotech Geneva, Geneva, Switzerland

Affiliation Department of Environmental Sciences, Zoology, University of Basel, Basel, Switzerland

Affiliation Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland

Affiliation SIB Training Group, SIB Swiss Institute of Bioinformatics, Lausanne, Switzerland

Affiliations Department of Clinical Research, University Hospital Basel, University of Basel, Basel, Switzerland, Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, California, United States of America, Meta-Research Innovation Center Berlin (METRIC-B), Berlin Institute of Health, Berlin, Germany

Affiliation Applied Face Cognition Lab, University of Lausanne, Lausanne, Switzerland

Affiliation Faculty of Psychology, UniDistance Suisse, Brig, Switzerland

Affiliation Statistical Consultant, Edinburgh, United Kingdom

  • Simon Schwab, 
  • Perrine Janiaud, 
  • Michael Dayan, 
  • Valentin Amrhein, 
  • Radoslaw Panczak, 
  • Patricia M. Palagi, 
  • Lars G. Hemkens, 
  • Meike Ramon, 
  • Nicolas Rothen, 


Published: June 23, 2022

  • https://doi.org/10.1371/journal.pcbi.1010139


Citation: Schwab S, Janiaud P, Dayan M, Amrhein V, Panczak R, Palagi PM, et al. (2022) Ten simple rules for good research practice. PLoS Comput Biol 18(6): e1010139. https://doi.org/10.1371/journal.pcbi.1010139

Copyright: © 2022 Schwab et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: S.S. received funding from SfwF (Stiftung für wissenschaftliche Forschung an der Universität Zürich; grant no. STWF-19-007). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

This is a PLOS Computational Biology Methods paper.

Introduction

The lack of research reproducibility has caused growing concern across various scientific fields [ 1 – 5 ]. Today, there is widespread agreement, within and outside academia, that scientific research is suffering from a reproducibility crisis [ 6 , 7 ]. Researchers reach different conclusions—even when the same data have been processed—simply due to varied analytical procedures [ 8 , 9 ]. As we continue to recognize this problematic situation, some major causes of irreproducible research have been identified. This, in turn, provides the foundation for improvement by identifying and advocating for good research practices (GRPs). Indeed, powerful solutions are available, for example, preregistration of study protocols and statistical analysis plans, sharing of data and analysis code, and adherence to reporting guidelines. Although these and other best practices may facilitate reproducible research and increase trust in science, it remains the responsibility of researchers themselves to actively integrate them into their everyday research practices.

Contrary to ubiquitous specialized training, cross-disciplinary courses focusing on best practices to enhance the quality of research are lacking at universities and are urgently needed. The intersections between disciplines offer a space for peer evaluation, mutual learning, and sharing of best practices. In medical research, interdisciplinary work is inevitable. For example, conducting clinical trials requires experts with diverse backgrounds, including clinical medicine, pharmacology, biostatistics, evidence synthesis, nursing, and implementation science. Bringing researchers with diverse backgrounds and levels of experience together to exchange knowledge and learn about problems and solutions adds value and improves the quality of research.

The present selection of rules was based on our experiences with teaching GRP courses at the University of Zurich, our course participants’ feedback, and the views of a cross-disciplinary group of experts from within the Swiss Reproducibility Network ( www.swissrn.org ). The list is neither exhaustive, nor does it aim to address and systematically summarize the wide spectrum of issues including research ethics and legal aspects (e.g., related to misconduct, conflicts of interests, and scientific integrity). Instead, we focused on practical advice at the different stages of everyday research: from planning and execution to reporting of research. For a more comprehensive overview on GRPs, we point to the United Kingdom’s Medical Research Council’s guidelines [ 10 ] and the Swedish Research Council’s report [ 11 ]. While the discussion of the rules may predominantly focus on clinical research, much applies, in principle, to basic biomedical research and research in other domains as well.

The 10 proposed rules can serve multiple purposes: an introduction for researchers to relevant concepts to improve research quality, a primer for early-career researchers who participate in our GRP courses, or a starting point for lecturers who plan a GRP course at their own institutions. The 10 rules are grouped according to planning (5 rules), execution (3 rules), and reporting of research (2 rules); see Fig 1 . These principles can (and should) be implemented as a habit in everyday research, just like toothbrushing.


GRP, good research practices.

https://doi.org/10.1371/journal.pcbi.1010139.g001

Research planning

Rule 1: Specify your research question

Coming up with a research question is not always simple and may take time. A successful study requires a narrow and clear research question. In evidence-based research, prior studies are assessed in a systematic and transparent way to identify a research gap for a new study that answers a question that matters [ 12 ]. Papers that provide a comprehensive overview of the current state of research in the field, such as systematic reviews, are particularly helpful. Perspective papers may also be useful; for example, there is a paper titled “SARS-CoV-2 and COVID-19: The most important research questions.” However, a systematic assessment of research gaps deserves more attention than opinion-based publications.

In the next step, a vague research question should be further developed and refined. In clinical research and evidence-based medicine, the population, intervention, comparator, outcome, and time frame (PICOT) approach provides a set of criteria that can help frame a research question [ 13 ]. From a well-developed research question, subsequent steps will follow, which may include the exact definition of the population, the outcome, the data to be collected, and the sample size that is required. It may be useful to find out whether other researchers find the idea interesting as well and whether it promises a valuable contribution to the field. However, actively involving the public or patients can be a more effective way to determine what research questions matter.
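As a rough illustration, the PICOT elements can be treated as a checklist that assembles into a draft question. The sketch below is hypothetical (the helper class and the example question are not from the paper); it simply shows how filling in all five elements forces a vague idea into a specific, answerable form.

```python
from dataclasses import dataclass

@dataclass
class PICOTQuestion:
    """Hypothetical helper: the five PICOT elements of a clinical question."""
    population: str
    intervention: str
    comparator: str
    outcome: str
    timeframe: str

    def as_question(self) -> str:
        # Assemble the five elements into one draft sentence.
        return (f"In {self.population}, does {self.intervention}, "
                f"compared with {self.comparator}, affect {self.outcome} "
                f"over {self.timeframe}?")

q = PICOTQuestion(
    population="adults with stage 1 hypertension",
    intervention="home blood-pressure telemonitoring",
    comparator="usual clinic-based care",
    outcome="systolic blood pressure",
    timeframe="12 months",
)
print(q.as_question())
```

A missing element (no comparator, no time frame) becomes immediately visible as an empty field, which is exactly the kind of gap the PICOT criteria are meant to expose.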

The level of details in a research question also depends on whether the planned research is confirmatory or exploratory. In contrast to confirmatory research, exploratory research does not require a well-defined hypothesis from the start. Some examples of exploratory experiments are those based on omics and multi-omics experiments (genomics, bulk RNA-Seq, single-cell, etc.) in systems biology and connectomics and whole-brain analyses in brain imaging. Both exploration and confirmation are needed in science, and it is helpful to understand their strengths and limitations [ 14 , 15 ].

Rule 2: Write and register a study protocol

In clinical research, registration of clinical trials has become standard since the late 1990s and is now a legal requirement in many countries. Such studies require a study protocol to be registered, for example, with ClinicalTrials.gov, the European Clinical Trials Register, or the World Health Organization’s International Clinical Trials Registry Platform. A similar effort has been implemented for the registration of systematic reviews (PROSPERO). Study registration has also been proposed for observational studies [ 16 ] and, more recently, in preclinical animal research [ 17 ], and is now being advocated across disciplines under the term “preregistration” [ 18 , 19 ].

Study protocols typically document at minimum the research question and hypothesis, a description of the population, the targeted sample size, the inclusion/exclusion criteria, the study design, the data collection, the data processing and transformation, and the planned statistical analyses. The registration of study protocols reduces publication bias and hindsight bias and can safeguard honest research and minimize waste of research [ 20 – 22 ]. Registration ensures that studies can be scrutinized by comparing the reported research with what was actually planned and written in the protocol, and any discrepancies may indicate serious problems (e.g., outcome switching).

Note that registration does not mean that researchers have no flexibility to adapt the plan as needed. Indeed, new or more appropriate procedures may become available or known only after registration of a study. Therefore, a more detailed statistical analysis plan can be amended to the protocol before the data are observed or unblinded [ 23 , 24 ]. Likewise, registration does not exclude the possibility to conduct exploratory data analyses; however, they must be clearly reported as such.

To go even further, registered reports are a novel article type that incentivize high-quality research—irrespective of the ultimate study outcome [ 25 , 26 ]. With registered reports, peer-reviewers decide before anyone knows the results of the study, and they have a more active role in being able to influence the design and analysis of the study. Journals from various disciplines increasingly support registered reports [ 27 ].

Naturally, preregistration and registered reports also have their limitations and may not be appropriate in a purely hypothesis-generating (explorative) framework. Reports of exploratory studies should indeed not be molded into a confirmatory framework; appropriate rigorous reporting alternatives have been suggested and start to become implemented [ 28 , 29 ].

Rule 3: Justify your sample size

Early-career researchers in our GRP courses often identify sample size as an issue in their research. For example, they work with a low number of samples due to slow-growing cells, or they have a limited number of patient tumor samples because a disease is rare. But if your sample size is too low, your study has a high risk of producing a false negative result (type II error). In other words, you are unlikely to find an effect even if one truly exists.

Unfortunately, there is more bad news with small studies. When an effect from a small study is selected for drawing conclusions because it is statistically significant, low power increases the probability that the effect size is overestimated [ 30 , 31 ]. The reason is that with low power, studies that, due to sampling variation, find larger (overestimated) effects are much more likely to be statistically significant than those that happen to find smaller (more realistic) effects [ 30 , 32 , 33 ]. Thus, in such situations, effect sizes are often overestimated. The term “small-study effect” was introduced for the phenomenon that small studies often report more extreme results in meta-analyses [ 34 ]. In any case, an underpowered study is a problematic study, no matter the outcome.
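This selection effect can be illustrated with a small simulation, shown here as a hedged sketch in Python (the true effect, group size, and number of simulated studies are arbitrary choices for illustration, not values from the studies cited above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

true_effect = 0.2    # small true mean difference (in standard deviation units)
n_per_group = 20     # deliberately underpowered design
n_studies = 5000     # number of simulated studies

# Collect the estimated effect from every study that reaches p < 0.05
# in the direction of the true effect.
significant_effects = []
for _ in range(n_studies):
    a = rng.normal(true_effect, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    t, p = stats.ttest_ind(a, b)
    if p < 0.05 and t > 0:
        significant_effects.append(a.mean() - b.mean())

# The average "significant" estimate lies far above the true effect of 0.2.
print(f"mean effect among significant studies: {np.mean(significant_effects):.2f}")
```

Because only large chance deviations clear the significance threshold in an underpowered design, conditioning on significance systematically inflates the reported effect.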

In conclusion, small sample sizes can undermine research, but when is a study too small? For one study, a total of 50 patients may be fine, while another may require 1,000 patients. Determining how large a study needs to be requires an appropriate sample size calculation, which ensures that enough data are collected to achieve sufficient statistical power (the probability of rejecting the null hypothesis when it is in fact false).

Low-powered studies can be avoided by performing a sample size calculation to find the required sample size of the study. This requires specifying a primary outcome variable and the magnitude of the effect you are interested in (among other factors); in clinical research, this is often the minimal clinically relevant difference. The statistical power is often set at 80% or higher. A comprehensive list of packages for sample size calculation is available [ 35 ], among them the R package “pwr” [ 36 ]. There are also many online calculators, for example, the University of Zurich’s “SampleSizeR” [ 37 ].
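As an illustration, a sample size calculation for a two-sample t-test can also be done in Python with the statsmodels package (analogous to the R “pwr” package mentioned above; the standardized effect size of 0.5 is a hypothetical minimal clinically relevant difference):

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per group to detect a standardized effect (Cohen's d) of 0.5
# with 80% power in a two-sided two-sample t-test at alpha = 0.05.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, power=0.80,
                                          alpha=0.05, alternative='two-sided')
print(f"required sample size per group: {n_per_group:.1f}")
```

In practice, the result is rounded up to the next whole participant per group.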

A worthwhile alternative for planning the sample size that puts less emphasis on null hypothesis testing is based on the desired precision of the study; for example, one can calculate the sample size necessary to obtain a desired width of a confidence interval for the targeted effect [ 38 – 40 ]. A general framework for sample size justification beyond a calculation-only approach has been proposed [ 41 ]. Some study types also have other requirements or need specific methods: In diagnostic testing, one would need to determine the anticipated minimal sensitivity or specificity; in prognostic research, the number of parameters that can be used to fit a prediction model given a fixed sample size should be specified. Designs can also be so complex that a simulation (Monte Carlo method) may be required.
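The precision-based approach can be sketched for the simplest case, the mean of a single group under a normal approximation (the standard deviation and target width below are hypothetical):

```python
import math
from scipy import stats

def n_for_ci_width(sd, width, conf=0.95):
    """Sample size so that the two-sided confidence interval for a single
    mean has (approximately) the desired total width, using the normal
    approximation: width = 2 * z * sd / sqrt(n)."""
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    return math.ceil((2 * z * sd / width) ** 2)

# e.g., an SD of 10 mmHg and a desired 95% CI width of 4 mmHg
print(n_for_ci_width(sd=10, width=4))
```

Note that halving the desired width quadruples the required sample size, which makes the cost of precision explicit.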

Sample size calculations should be done under different assumptions, and the largest estimated sample size is often the safer bet than a best-case scenario. The calculated sample size should further be adjusted to allow for possible missing data. Due to the complexity of accurately calculating sample size, researchers should strongly consider consulting a statistician early in the study design process.

Rule 4: Write a data management plan

In 2020, 2 Coronavirus Disease 2019 (COVID-19) papers in leading medical journals were retracted after major concerns about the data were raised [ 42 ]. Today, raw data are more often recognized as a key outcome of research along with the paper. Therefore, it is important to develop a strategy for the life cycle of data, including suitable infrastructure for long-term storage.

The data life cycle is described in a data management plan: a document that describes what data will be collected and how the data will be organized, stored, handled, and protected during and after the end of the research project. Several funders require a data management plan in grant submissions, and publishers such as PLOS encourage authors to provide one as well. The Wellcome Trust provides guidance on developing a data management plan, including real examples from neuroimaging, genomics, and the social sciences [ 43 ]. However, projects do not always allocate funding and resources to the actual implementation of the data management plan.

The Findable, Accessible, Interoperable, and Reusable (FAIR) data principles promote maximal use of data and enable machines to access and reuse data with minimal human intervention [ 44 ]. FAIR principles require the data to be retained, preserved, and shared preferably with an immutable unique identifier and a clear usage license. Appropriate metadata will help other researchers (or machines) to discover, process, and understand the data. However, requesting researchers to fully comply with the FAIR data principles in every detail is an ambitious goal.

Multidisciplinary data repositories that support FAIR include, for example, Dryad ( https://datadryad.org/ ), EUDAT ( https://www.eudat.eu/ ), OSF ( https://osf.io/ ), and Zenodo ( https://zenodo.org/ ). A number of institutional and field-specific repositories may also be suitable. Sometimes, however, authors may not be able to make their data publicly available for legal or ethical reasons. In such cases, a data user agreement can indicate the conditions required to access the data. Journals highlight which data access restrictions are acceptable and which are not, and often require a data availability statement.

Organizing the study artifacts in a structured way greatly facilitates the reuse of data and code within and outside the lab, enhancing collaborations and maximizing the research investment. Support and courses for data management plans are sometimes available at universities. Another 10 simple rules paper for creating a good data management plan is dedicated to this topic [ 45 ].

Rule 5: Reduce bias

Bias is a distorted view in favor of or against a particular idea. In statistics, bias is a systematic deviation of a statistical estimate from the (true) quantity it estimates. Bias can invalidate our conclusions, and the more bias there is, the less valid they are. For example, in clinical studies, bias may mislead us into reaching a causal conclusion that the difference in the outcomes was due to the intervention or the exposure. This is a big concern, and, therefore, the risk of bias is assessed in clinical trials [ 46 ] as well as in observational studies [ 47 , 48 ].

There are many different forms of bias that can occur in a study, and they may overlap (e.g., allocation bias and confounding bias) [ 49 ]. Bias can occur at different stages, for example, immortal time bias in the design of the study, information bias in the execution of the study, and publication bias in the reporting of research. Understanding bias allows researchers to remain vigilant about potential sources of bias when peer-reviewing and when designing their own studies. We summarize some common types of bias and some preventive steps in Table 1 , but many other forms exist; for a comprehensive overview, see the University of Oxford’s Catalogue of Bias [ 50 ].

Table 1. Common types of bias and preventive steps. https://doi.org/10.1371/journal.pcbi.1010139.t001

Here are some noteworthy examples of study bias from the literature: An example of information bias emerged after 1998, when an alleged association between the measles, mumps, and rubella (MMR) vaccine and autism was reported. Recall bias (a subtype of information bias) occurred when parents of autistic children recalled the onset of autism after an MMR vaccination more often than parents of similar children who were diagnosed prior to the media coverage of that controversial and since-retracted study [ 51 ]. A study from 2001 showed better survival for Academy Award-winning actors, but this was due to immortal time bias, which favors the treatment or exposure group [ 52 , 53 ]. Another study systematically investigated self-reports about musculoskeletal symptoms and found information bias: Participants who spent little time at the computer overestimated their computer usage, while participants who spent a lot of time underestimated it [ 54 ].

Information bias can be mitigated by using objective rather than subjective measurements. Standard operating procedures (SOPs) and electronic lab notebooks additionally help researchers follow well-designed protocols for data collection and handling [ 55 ]. Even when bias cannot be fully mitigated, complete descriptions of data and methods at least allow the risk of bias to be assessed.

Research execution

Rule 6: Avoid questionable research practices

Questionable research practices (QRPs) can lead to exaggerated findings and false conclusions and thus to irreproducible research. Often, QRPs are used with no bad intentions. This becomes evident when methods sections explicitly describe such procedures, for example, increasing the number of samples until statistical significance in support of the hypothesis is reached. It is therefore important that researchers know about QRPs so they can recognize and avoid them.

Several QRPs have been named [ 56 , 57 ]. Among them are low statistical power, pseudoreplication, repeated inspection of data, p -hacking [ 58 ], selective reporting, and hypothesizing after the results are known (HARKing).

The first 2 QRPs, low statistical power and pseudoreplication, can be prevented by properly planning and designing studies, including a sample size calculation and appropriate statistical methodology that avoids treating data as independent when in fact they are not. Statistical power is not equal to reproducibility, but it is a precondition of reproducibility, as a lack of power can result in false negative as well as false positive findings (see Rule 3 ).

In fact, many QRPs can be avoided with a study protocol and statistical analysis plan. Preregistration, as described in Rule 2, is considered best practice for this purpose. However, many of these issues are additionally rooted in institutional incentives and rewards. Both funding and promotion are often tied to the quantity rather than the quality of research output. At universities, few or no rewards are yet given for writing and registering protocols, sharing data, publishing negative findings, or conducting replication studies. Thus, a wider “culture change” is needed.

Rule 7: Be cautious with interpretations of statistical significance

It would help if more researchers were familiar with correct interpretations and possible misinterpretations of statistical tests, p -values, confidence intervals, and statistical power [ 59 , 60 ]. A statistically significant p -value does not necessarily mean that there is a clinically or biologically relevant effect. In particular, the traditional dichotomization into statistically significant ( p < 0.05) versus statistically nonsignificant ( p ≥ 0.05) results is seldom appropriate, can lead to cherry-picking of results, and may eventually corrupt science [ 61 ]. We instead recommend reporting exact p -values and interpreting them in a graded way in terms of the compatibility of the null hypothesis with the data [ 62 , 63 ]. Moreover, a p -value around 0.05 (e.g., 0.047 or 0.055) provides little information, as is best illustrated by the associated replication power: The probability that a hypothetical replication study of the same design will lead to a statistically significant result is only 50% [ 64 ] and is even lower in the presence of publication bias and regression to the mean (the phenomenon that effect estimates in replication studies are often smaller than the estimates in the original study) [ 65 ]. Claims of novel discoveries should therefore be based on a smaller p -value threshold (e.g., p < 0.005) [ 66 ], although this depends on the discipline (genome-wide screenings or studies in particle physics often apply much lower thresholds).
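The 50% replication power for a result at exactly p = 0.05 follows directly from the normal approximation; a minimal sketch in Python (it neglects the negligible probability of significance in the opposite direction):

```python
from scipy import stats

# A study that just reaches two-sided p = 0.05 has an observed z of 1.96.
z_crit = stats.norm.ppf(0.975)
z_obs = z_crit

# Taking the observed effect at face value as the true effect, the power of
# an identical replication is P(Z > z_crit - z_obs) = P(Z > 0) = 0.5.
replication_power = 1 - stats.norm.cdf(z_crit - z_obs)
print(f"replication power: {replication_power:.0%}")
```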

Generally, there is often too much emphasis on p -values. A statistical index such as the p -value is just the final product of an analysis, the tip of the iceberg [ 67 ]. Statistical analyses often include many complex stages, from data processing, cleaning, transformation, addressing missing data, modeling, to statistical inference. Errors and pitfalls can creep in at any stage, and even a tiny error can have a big impact on the result [ 68 ]. Also, when many hypothesis tests are conducted (multiple testing), false positive rates may need to be controlled to protect against wrong conclusions, although adjustments for multiple testing are debated [ 69 – 71 ].
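Standard multiple-testing adjustments are readily available, for example, in the statsmodels package; the 20 p-values below are hypothetical values for illustration:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Twenty hypothetical p-values from a screen; four fall below 0.05 unadjusted.
pvals = np.array([0.0001, 0.004, 0.03, 0.04] + list(np.linspace(0.06, 0.9, 16)))

reject_bonf, _, _, _ = multipletests(pvals, alpha=0.05, method='bonferroni')
reject_bh, _, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh')

print("unadjusted :", int((pvals < 0.05).sum()))  # naive count of "discoveries"
print("Bonferroni :", int(reject_bonf.sum()))     # strict familywise control
print("BH (FDR)   :", int(reject_bh.sum()))       # less strict FDR control
```

The stricter the adjustment, the fewer "discoveries" survive, which is exactly the protection against wrong conclusions described above.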

Thus, a p -value alone is not a measure of how credible a scientific finding is [ 72 ]. Instead, the quality of the research must be considered, including the study design, the quality of the measurement, and the validity of the assumptions that underlie the data analysis [ 60 , 73 ]. Frameworks exist that help to systematically and transparently assess the certainty in evidence; the most established and widely used one is Grading of Recommendations, Assessment, Development and Evaluations (GRADE; www.gradeworkinggroup.org ) [ 74 ].

Training in basic statistics, statistical programming, and reproducible analyses, as well as better involvement of data professionals in academia, is necessary. University departments sometimes have statisticians who can support researchers. Importantly, statisticians need to be involved early in the process and on an equal footing, not just at the end of a project to perform the final data analysis.

Rule 8: Make your research open

In reality, science often lacks transparency. Open science makes the process of producing evidence and claims transparent and accessible to others [ 75 ]. Several universities and research funders have already implemented open science roadmaps to advocate free and public science as well as open access to scientific knowledge, with the aim of strengthening the credibility of research. Open research allows more eyes to see and critique it, a principle similar to Linus’s law in software development: Given enough people testing a piece of software, most bugs will be discovered.

As science often progresses incrementally, writing and sharing a study protocol and making data and methods readily available is crucial to facilitate knowledge building. The Open Science Framework (osf.io) is a free and open-source project management tool that supports researchers throughout the entire project life cycle. OSF enables preregistration of study protocols and sharing of documents, data, analysis code, supplementary materials, and preprints.

To facilitate reproducibility, a research paper can link to data and analysis code deposited on OSF. Computational notebooks are now readily available that unite data processing, data transformations, statistical analyses, figures and tables in a single document (e.g., R Markdown, Jupyter); see also the 10 simple rules for reproducible computational research [ 76 ]. Making both data and code open thus minimizes waste of funding resources and accelerates science.

Open science can also advance researchers’ careers, especially for early-career researchers. The increased visibility, retrievability, and citations of datasets can all help with career building [ 77 ]. Therefore, institutions should provide necessary training, and hiring committees and journals should align their core values with open science, to attract researchers who aim for transparent and credible research [ 78 ].

Research reporting

Rule 9: Report all findings

Publication bias occurs when the outcome of a study influences the decision whether to publish it. Researchers, reviewers, and publishers often do not consider nonsignificant study results interesting or worth publishing. As a consequence, outcomes and analyses are only selectively reported in the literature [ 79 ], a phenomenon also known as the file drawer effect [ 80 ].

The extent of publication bias in the literature is illustrated by the overwhelming frequency of statistically significant findings [ 81 ]. A study extracted p -values from MEDLINE and PubMed Central and showed that 96% of the records reported at least 1 statistically significant p -value [ 82 ], which seems implausible in the real world. Another study plotted the distribution of more than 1 million z -values from Medline, revealing a huge gap from −2 to 2 [ 83 ]. Positive studies (i.e., statistically significant, perceived as striking or showing a beneficial effect) were 4 times more likely to get published than negative studies [ 84 ].

Often, a statistically nonsignificant result is interpreted as a “null” finding. But a nonsignificant finding does not necessarily mean a null effect; absence of evidence is not evidence of absence [ 85 ]. An individual study may be underpowered, resulting in a nonsignificant finding, yet the cumulative evidence from multiple studies may provide sufficient evidence in a meta-analysis. Another argument is that a confidence interval that contains the null value often also contains non-null values of high practical importance. Only if all the values inside the interval are deemed unimportant from a practical perspective may it be fair to describe a result as a null finding [ 61 ]. We should thus never report “no difference” or “no association” just because a p -value is larger than 0.05 or, equivalently, because a confidence interval includes the “null” [ 61 ].
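A small numerical illustration with hypothetical numbers (normal approximation; a mean difference of 4 mmHg with a standard error of 3 mmHg):

```python
from scipy import stats

diff, se = 4.0, 3.0                     # hypothetical estimate and standard error
z = stats.norm.ppf(0.975)
lo, hi = diff - z * se, diff + z * se   # 95% confidence interval
p = 2 * (1 - stats.norm.cdf(abs(diff) / se))

print(f"95% CI: [{lo:.1f}, {hi:.1f}], p = {p:.2f}")
# The interval includes 0 (p > 0.05), but it also includes differences of
# nearly 10 mmHg that could be clinically important; "no significant
# difference" is not the same as "no difference".
```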

On the other hand, studies sometimes report statistically nonsignificant results with “spin” to claim that the experimental treatment is beneficial, often by focusing their conclusions on statistically significant differences on secondary outcomes despite a statistically nonsignificant difference for the primary outcome [ 86 , 87 ].

Findings that are not published have a tremendous impact on the research ecosystem: They distort our knowledge of the scientific landscape by perpetuating misconceptions, and they jeopardize researchers’ judgment and the public’s trust in science. In clinical research, publication bias can mislead care decisions and harm patients, for example, when treatments appear useful despite only minimal or even absent benefits reported in studies that were not published and are thus unknown to physicians [ 88 ]. Moreover, publication bias directly affects the formulation and proliferation of scientific theories, which are taught to students and early-career researchers, thereby perpetuating biased research from the core. Modeling studies have shown that unless a sufficient proportion of negative studies are published, a false claim can become an accepted fact [ 89 ], and that false positive rates influence the trustworthiness of a given field [ 90 ].

In sum, negative findings are undervalued. They need to be reported more consistently at the study level and investigated systematically at the systematic-review level. Researchers have their share of responsibility, but there is clearly a lack of incentives from promotion and tenure committees, journals, and funders.

Rule 10: Follow reporting guidelines

Study reports need to faithfully describe the aim of the study and what was done, including potential deviations from the original protocol, as well as what was found. Yet, there is ample evidence of discrepancies between protocols and research reports, and of insufficient quality of reporting [ 79 , 91 – 95 ]. Reporting deficiencies threaten our ability to clearly communicate findings, replicate studies, make informed decisions, and build on existing evidence, wasting time and resources invested in the research [ 96 ].

Reporting guidelines aim to provide the minimum information needed on key design features and analysis decisions, ensuring that findings can be adequately used and studies replicated. In 2008, the Enhancing the QUAlity and Transparency Of Health Research (EQUATOR) network was initiated to provide reporting guidelines for a variety of study designs along with guidelines for education and training on how to enhance quality and transparency of health research. Currently, there are 468 reporting guidelines listed in the network; see the most prominent guidelines in Table 2 . Furthermore, following the ICMJE recommendations, medical journals are increasingly endorsing reporting guidelines [ 97 ], in some cases making it mandatory to submit the appropriate reporting checklist along with the manuscript.

Table 2. Prominent reporting guidelines. https://doi.org/10.1371/journal.pcbi.1010139.t002

The use of reporting guidelines and journal endorsement has led to a positive impact on the quality and transparency of research reporting, but improvement is still needed to maximize the value of research [ 98 , 99 ].

Conclusions

Originally, this paper targeted early-career researchers; however, throughout the development of the rules, it became clear that the present recommendations can serve all researchers irrespective of their seniority. We focused on practical guidelines for the planning, conduct, and reporting of research. Others have aligned GRP with similar topics [ 100 , 101 ]. Even though we provide 10 simple rules, the word “simple” should not be taken lightly. Putting the rules into practice usually requires effort and time, especially at the beginning of a research project. However, time can also be recouped later, for example, when certain choices can be justified to reviewers by pointing to the study protocol or when data can be quickly reanalyzed using computational notebooks and dynamic reports.

Researchers have field-specific research skills, but sometimes are not aware of best practices in other fields that can be useful. Universities should offer cross-disciplinary GRP courses across faculties to train the next generation of scientists. Such courses are an important building block to improve the reproducibility of science.

Acknowledgments

This article was written alongside the Good Research Practice (GRP) courses at the University of Zurich provided by the Center of Reproducible Science ( www.crs.uzh.ch ). All materials from the course are available at https://osf.io/t9rqm/ . We appreciated the discussion, development, and refinement of this article within the working group “training” of the SwissRN ( www.swissrn.org ). We are grateful to Philip Bourne for many valuable comments on earlier versions of the manuscript.

  • 35. Zhang HG, Zhang EZ. CRAN task view: Clinical trial design, monitoring, and analysis. 20 Jun 2021 [cited 3 Mar 2022]. Available: https://CRAN.R-project.org/view=ClinicalTrials .
  • 36. Champely S. pwr: Basic Functions for Power Analysis. 2020. Available: https://CRAN.R-project.org/package=pwr .
  • 43. Outputs Management Plan—Grant Funding. In: Wellcome [Internet]. [cited 13 Feb 2022]. Available: https://wellcome.org/grant-funding/guidance/how-complete-outputs-management-plan .
  • 62. Cox DR, Donnelly CA. Principles of Applied Statistics. Cambridge University Press; 2011. Available: https://play.google.com/store/books/details?id=BMLMGr4kbQYC .
  • 74. GRADE approach. [cited 3 Mar 2022]. Available: https://training.cochrane.org/grade-approach .

Researching

The Qualities of a Good Research Question

Research = the physical process of  gathering information  + the mental process of  deriving the answer to your question  from the information you gathered. Research writing = the process of  sharing the answer  to your research question along with the  evidence  on which your answer is based, the  sources  you used, and your own  reasoning  and  explanation .


  • Questions that are purely values-based (such as “Should assisted suicide be legal?”) cannot be answered objectively because the answer varies depending on one’s values. Be wary of questions that include “should” or “ought” because those words often (although not always) indicate a values-based question. However, note that most values-based questions can be turned into research questions by judicious reframing. For instance, you could reframe “Should assisted suicide be legal?” as “What are the ethical implications of legalizing assisted suicide?” Using a “what are” frame turns a values-based question into a legitimate research question by moving it out of the world of debate and into the world of investigation.
  • The question, “Does carbon-based life exist outside of Earth’s solar system?” is a perfectly good research question in the sense that it is not values-based and therefore could be answered in an objective way, IF it were possible to collect data about the presence of life outside of Earth’s solar system. That is not yet possible with current technology; therefore, this is not (yet) a research question because it’s not (now) possible to obtain the data that would be needed to answer it.
  • If the answer to the question is readily available in a good encyclopedia, textbook, or reference book, then it is a homework question, not a research question. It was probably a research question in the past, but if the answer is so thoroughly known that you can easily look it up and find it, then it is no longer an open question. However, it is important to remember that as new information becomes available, homework questions can sometimes be reopened as research questions. Equally important, a question may have been answered for one population or circumstance, but not for all populations or all circumstances.
  • Composition II. Authored by: Janet Zepernick. Provided by: Pittsburg State University. Located at: http://www.pittstate.edu/ . Project: Kaleidoscope Open Course Initiative. License: CC BY: Attribution


What Makes a Good Research Question? Published by Dr. David Banner on November 4, 2022.

Last Updated on: 4th March 2024, 06:04 am

Creating a good research question is vital to successfully completing your dissertation. Here are some tips that will help you formulate a good research question. 

These are the three most important qualities of a good research question:

#1: Open-Ended (Not Yes/No)

You do NOT want a question that can be answered with a simple “yes” or “no.” That is a dead end for a question. There needs to be something beyond a simple yes/no, or the research has nowhere to go!


#2: Addresses a Gap in Literature

Secondly, you want a question that, ideally, fits into a niche that has not yet been addressed in peer-reviewed research but is worthy of scholarly study. If you wish to address a topic that has been researched before, you may use different subjects or time periods of study; this is called a replication and is acceptable at most universities.

#3: Holds Your Interest

This last point is especially crucial for dissertation research. You will be thinking about and studying a particular question for at least a year, so you want it to be something that you are REALLY interested in learning about. This will hold your interest throughout the process. The research question is really the heart of the research process. A good research question will hold your interest and contribute to the body of scholarly knowledge about a subject.

How Do You Find a Good Research Question?

Look to Your Interests

Problems that can use research are everywhere. Where do your interests lie? Pick an area that you are excited about. It needs to engage your interest and, ideally, your passion. 

Identify the Type of Research

There are basically two kinds of research: applied research and basic research. Applied research is meant to inform decision-making about practical problems, while basic research advances theoretical conceptualizations of a particular topic. Both are useful, but chances are you will choose an applied research topic.

Review the Existing Literature


Start your search by looking at what other scholars have studied. Browse dissertation abstracts in the library and see if anything grabs your interest. But self-enlightenment is not the goal of research. Gathering information about a certain topic is fine, but it does not by itself lead to new knowledge. The same is true for merely comparing two sets of existing data; you can go to the library and do this (e.g., comparing the numbers of men and women employed over a 100-year span) without generating new knowledge.

Chapter 2 of a dissertation proposal usually is called the literature review and this needs to be done early on. This is where you discover what has been studied in your chosen area of interest. If you find a topic that grabs your interest, think through the feasibility that the project implies. 

You want something that is doable in a reasonable amount of time. A project that is too ambitious can lead to frustration and heartache. Remember; you want a question that leads you to new research but too big a topic can wait until you complete the PhD!

Develop Your Research Question

A statement of the research question needs to be precise. You need to say exactly what you mean. You cannot assume that others will be able to read your mind. If you cannot state the problem clearly and succinctly, then your data gathering might be sloppy, too. 

Develop Your Problem Statement

Occasionally a researcher talks about a problem but never states exactly what the problem is; avoid this at all costs. Be sure to edit your work.

You may wish to subdivide the problem into sub-problems, so that the sub-problems add up to the totality of the problem. But sub-problems need to be small in number (ideally, 2-5 will do). Having too many sub-problems is not helpful in designing a research project. If you come up with too many, check whether some are merely procedural issues rather than true sub-problems.

Final Thoughts

Remember, you need to find a question that really energizes you and, ideally, one that fills a gap in the existing research in this area. Make sure the problem statement is concise and doable; the scope of the problem needs to be something that you can do in a reasonable amount of time. And, above all, keep in mind that your job is to increase the body of knowledge in this field. You are providing fertile ground for future research. Get going!

Dr. David Banner

David Banner is the author of 6 books, 40 journal articles, and 35 conference papers on transformational leadership. He received his PhD in Policy and Organizational Behavior from the Kellogg Graduate School of Management at Northwestern University in Illinois. He taught at the DePaul College of Commerce, the University of the Pacific School of Business, and the University of New Brunswick (Canada) School of Management; he was tenured at all three universities and voted "Outstanding Professor" at each. He also worked at Viterbo University in La Crosse, WI, where he directed the values-based MBA program (2003-07), which he designed, recruiting students, mentoring faculty, setting up an advisory board, and getting the program accredited. For 16 years he served as a faculty mentor for Leadership and Organizational Change PhD students (2005-21), graduating 82 PhDs in his roles as committee chair, committee member, and University Research Reviewer (URR). Mentoring PhD students gives him the most joy and satisfaction. He offers his services to help people complete their PhDs, find good academic jobs, get published in peer-reviewed journals, and find their place in the academic environment.


J Indian Assoc Pediatr Surg, v.24(1); Jan-Mar 2019

Formulation of Research Question – Stepwise Approach

Simmi K. Ratan

Department of Pediatric Surgery, Maulana Azad Medical College, New Delhi, India

1 Department of Community Medicine, North Delhi Municipal Corporation Medical College, New Delhi, India

2 Department of Pediatric Surgery, Batra Hospital and Research Centre, New Delhi, India

Formulation of a research question (RQ) is essential before starting any research. It aims to explore an existing uncertainty in an area of concern and points to a need for deliberate investigation. It is, therefore, pertinent to formulate a good RQ. The present paper aims to discuss the process of formulating an RQ with a stepwise approach. The characteristics of a good RQ are expressed by the acronym "FINERMAPS," expanded as feasible, interesting, novel, ethical, relevant, manageable, appropriate, potential value, publishability, and systematic. An RQ can take different formats depending on the aspect to be evaluated. Based on this, there can be different types of RQs, such as those based on the existence of a phenomenon, description and classification, composition, relationship, comparison, and causality. To develop an RQ, one begins by identifying the subject of interest and then doing preliminary research on that subject. The researcher then defines what still needs to be known in that particular subject and assesses the implied questions. After narrowing the focus and scope of the research subject, the researcher frames an RQ and then evaluates it. Thus, from conception to formulation, developing an RQ is a very systematic process that has to be performed meticulously, as research guided by such a question can have a wider impact in the field of social and health research by leading to the formulation of policies for the benefit of the larger population.

INTRODUCTION

A good research question (RQ) forms the backbone of a good research study, which in turn is vital in unraveling the mysteries of nature and giving insight into a problem.[ 1 , 2 , 3 , 4 ] The RQ identifies the problem to be studied and guides the methodology. It leads to the building of an appropriate hypothesis (Hs). Hence, an RQ aims to explore an existing uncertainty in an area of concern and points to a need for deliberate investigation. A good RQ helps support a focused, arguable thesis and the construction of a logical argument. Hence, formulation of a good RQ is undoubtedly one of the first critical steps in the research process, especially in the field of social and health research, which depends on the systematic generation of knowledge that can be used to promote, restore, maintain, and/or protect the health of individuals and populations.[ 1 , 3 , 4 ] Basically, research can be classified as action, applied, basic, clinical, empirical, administrative, theoretical, or qualitative or quantitative, depending on its purpose.[ 2 ]

Research plays an important role in developing clinical practices and instituting new health policies. Hence, there is a need for a logical scientific approach as research has an important goal of generating new claims.[ 1 ]

CHARACTERISTICS OF GOOD RESEARCH QUESTION

“The most successful research topics are narrowly focused and carefully defined but are important parts of a broad-ranging, complex problem.”

A good RQ is an asset as it:

  • Details the problem statement
  • Further describes and refines the issue under study
  • Adds focus to the problem statement
  • Guides data collection and analysis
  • Sets context of research.

Hence, while writing an RQ, it is important to see if it is relevant to the existing time frame and conditions. For example, the impact of the "odd-even" vehicle formula in decreasing the level of particulate air pollution in various districts of Delhi.

A good RQ is represented by the acronym FINERMAPS[ 5 ]:

  • Feasible
  • Interesting
  • Novel
  • Ethical
  • Relevant
  • Manageable
  • Appropriate
  • Potential value and publishability
  • Systematic.

Feasible (F): Feasibility means that the research is within the ability of the investigator to carry out. It should be backed by an appropriate number of subjects and a suitable methodology, as well as the time and funds to reach conclusions. One needs to be realistic about the scope and scale of the project. One has to have access to the people, gadgets, documents, statistics, etc. One should be able to relate the concepts of the RQ to the observations, phenomena, indicators, or variables that one can access. One should be clear that the collection of data and the proceedings of the project can be completed within the limited time and resources available to the investigator. Sometimes, an RQ appears feasible, but when fieldwork or the study gets started, it proves otherwise. In this situation, it is important to write up the problems honestly and to reflect on what has been learned. One should discuss with more experienced colleagues or the supervisor so as to develop a contingency plan that anticipates possible problems while working on the RQ and identifies possible solutions.

Interesting (I): It is essential that one has a real, grounded interest in one's RQ and can explore this and back it up with academic and intellectual debate. This interest will motivate one to keep going with the RQ.

Novel (N): The question should not simply copy questions investigated by other workers but should have scope to be investigated. It may aim at confirming or refuting already established findings, establishing new facts, or finding new aspects of established facts. It should show the imagination of the researcher. Above all, the question has to be simple and clear. The complexity of a question can frequently hide unclear thoughts and lead to a confused research process. A very elaborate RQ, or a question which is not differentiated into different parts, may hide concepts that are contradictory or not relevant. The question needs to be clear and thought through. Having one key question with several subcomponents will guide your research.

Ethical (E): This is the foremost requirement of any RQ, and it is mandatory to get clearance from the appropriate authorities before starting research on the question. Further, the RQ should be such that it minimizes the risk of harm to the participants in the research, protects their privacy, maintains their confidentiality, and provides the participants the right to withdraw from the research. It should also guide in avoiding deceptive practices in research.

Relevant (R): The question should be of academic and intellectual interest to people in the field you have chosen to study. The question preferably should arise from issues raised in the current situation, the literature, or in practice. It should establish a clear purpose for the research in relation to the chosen field. For example, filling a gap in knowledge, analyzing academic assumptions or professional practice, monitoring a development in practice, comparing different approaches, or testing theories within a specific population are some purposes that a relevant RQ may serve.

Manageable (M): It has a similar essence to feasibility but mainly means that the research can be managed by the researcher.

Appropriate (A): RQ should be appropriate logically and scientifically for the community and institution.

Potential value and publishability (P): The study can make a significant health impact in clinical and community practices. Research should also aim for significant economic impact by reducing unnecessary or excessive costs. Furthermore, the proposed study should exist within a clinical, consumer, or policy-making context that is amenable to evidence-based change. Above all, a good RQ must address a topic that has clear implications for resolving important dilemmas in health and health-care decisions made by one or more stakeholder groups.

Systematic (S): Research is structured, with specified steps to be taken in a specified sequence in accordance with a well-defined set of rules, though this does not rule out creative thinking.

Example of an RQ: Would the topical skin application of oil as a skin barrier reduce hypothermia in preterm infants? This question fulfills the criteria of a good RQ: it is feasible, interesting, novel, ethical, and relevant.
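Purely as an illustration (not part of the article), the FINERMAPS screen above can be sketched as a simple checklist in Python; the example question and the criteria marked as met are hypothetical:

```python
# The FINERMAPS criteria, taken from the acronym above.
FINERMAPS = [
    "Feasible", "Interesting", "Novel", "Ethical", "Relevant",
    "Manageable", "Appropriate", "Potential value and publishability",
    "Systematic",
]

def unmet_criteria(met):
    """Return the FINERMAPS criteria a draft RQ has not yet satisfied."""
    return [c for c in FINERMAPS if c not in met]

# Hypothetical screening of the oil-application question above:
met = {"Feasible", "Interesting", "Novel", "Ethical", "Relevant"}
print(unmet_criteria(met))
# -> ['Manageable', 'Appropriate', 'Potential value and publishability', 'Systematic']
```

The point of the sketch is simply that a question is only "good" once every criterion is checked off, not just the first five.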

Types of research question

A RQ can address different formats depending on the aspect to be evaluated.[ 6 ] For example:

  • Existence: This is designed to uphold the existence of a particular phenomenon or to rule out a rival explanation, for example, can neonates perceive pain?
  • Description and classification: This type of question encompasses statements of uniqueness, for example, what are the characteristics and types of neuropathic bladders?
  • Composition: It calls for the breakdown of a whole into components, for example, what are the stages of reflux nephropathy?
  • Relationship: Evaluates the relation between variables, for example, the association between tumor rupture and recurrence rates in Wilms' tumor
  • Descriptive-comparative: The researcher is expected to ensure that all is the same between groups except the issue in question, for example, are germ cell tumors occurring in gonads more aggressive than those occurring in extragonadal sites?
  • Causality: Does deletion of p53 lead to worse outcomes in patients with neuroblastoma?
  • Causality-comparative: Such questions frequently aim to see the effect of two rival treatments, for example, does adding surgical resection improve the survival rate in children with neuroblastoma compared with chemotherapy alone?
  • Causality-comparative interactions: Does immunotherapy lead to a better survival outcome in neuroblastoma stage IV-S than chemotherapy in the setting of an adverse genetic profile, but not without it? (Does X cause more change in Y than Z does under certain conditions and not under others?)

How to develop a research question

  • Begin by identifying a broader subject of interest that lends itself to investigation, for example, hormone levels among hypospadias patients
  • Do preliminary research on the general topic to find out what research has already been done and what literature already exists.[ 7 ] Therefore, one should begin with "information gaps" (What do you already know about the problem?), for example, studies with results on testosterone levels among hypospadias patients
  • What do you still need to know? (e.g., levels of other reproductive hormones among hypospadias patients)
  • What are the implied questions? The need to know about a problem will lead to a few implied questions. Each general question should lead to more specific questions (e.g., how hormone levels in isolated hypospadias differ from those in the normal population)
  • Narrow the scope and focus of the research (e.g., assessment of reproductive hormone levels in isolated hypospadias versus hypospadias with associated anomalies)
  • Is RQ clear? With so much research available on any given topic, RQs must be as clear as possible in order to be effective in helping the writer direct his or her research
  • Is the RQ focused? RQs must be specific enough to be well covered in the space available
  • Is the RQ complex? RQs should not be answerable with a simple “yes” or “no” or by easily found facts. They should, instead, require both research and analysis on the part of the writer
  • Is the RQ one that is of interest to the researcher and potentially useful to others? Is it a new issue or problem that needs to be solved, or is it attempting to shed light on a previously researched topic?
  • Is the RQ researchable? Consider the available time frame and the required resources. Is the methodology to conduct the research feasible?
  • Is the RQ measurable and will the process produce data that can be supported or contradicted?
  • Is the RQ too broad or too narrow?
  • Create Hs: After formulating the RQ, think about where the research is likely to progress. What kind of argument is likely to be made/supported? What would it mean if the research disputed the planned argument? At this step, one can be well on the way to having a focus for the research and the construction of a thesis. An Hs consists of more specific predictions about the nature and direction of the relationship between two variables. It is a predictive statement about the outcome of the research and dictates the method and design of the research[ 1 ]
  • Understand the implications of your research: This is important for application: whether one manages to fill the gap in knowledge, and how the results of the research have practical implications, for example, to develop health policies or improve educational policies.[ 1 , 8 ]

Brainstorm/Concept map for formulating research question

  • First, identify what types of studies have been done in the past.
  • Is there a unique area that is yet to be investigated or is there a particular question that may be worth replicating?
  • Begin to narrow the topic by asking open-ended “how” and “why” questions
  • Evaluate the question
  • Develop a Hypothesis (Hs)
  • Write down the RQ.

Writing down the research question

  • State the question in your own words
  • Write down the RQ as completely as possible.

For example: "Evaluation of the reproductive hormonal profile in children presenting with isolated hypospadias."

  • Divide your question into concepts. Narrow to two or three concepts (reproductive hormonal profile, isolated hypospadias, compare with normal/not isolated hypospadias–implied)
  • Specify the population to be studied (children with isolated hypospadias)
  • Refer to the exposure or intervention to be investigated, if any
  • Reflect the outcome of interest (hormonal profile).
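The decomposition above (population, exposure/intervention, outcome, implied comparison) can be sketched as a small data structure; this is an illustrative device, and the field names and example values are assumptions, not the authors':

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResearchQuestion:
    """Components of an RQ as listed in the steps above (names illustrative)."""
    population: str                   # who is studied
    outcome: str                      # outcome of interest
    exposure: Optional[str] = None    # exposure/intervention, if any
    comparison: Optional[str] = None  # implied or explicit comparator

# The hypospadias example from the article, decomposed:
rq = ResearchQuestion(
    population="children with isolated hypospadias",
    outcome="reproductive hormonal profile",
    comparison="normal population / non-isolated hypospadias (implied)",
)
print(rq.population)  # -> children with isolated hypospadias
```

Forcing each component into its own named slot makes it obvious when a draft question is missing, say, a defined population or comparator.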

Another example of a research question

Would the topical skin application of oil as a skin barrier reduce hypothermia in preterm infants? Apart from fulfilling the criteria of a good RQ (feasible, interesting, novel, ethical, and relevant), it also details the intervention (topical skin application of oil), the rationale of the intervention (as a skin barrier), the population to be studied (preterm infants), and the outcome (reduced hypothermia).

Other important points to be heeded to while framing research question

  • Make reference to a population when a relationship is expected among a certain type of subjects
  • RQs and Hs should be made as specific as possible
  • Avoid words or terms that do not add to the meaning of RQs and Hs
  • Stick to what will be studied, not implications
  • Name the variables in the order in which they occur/will be measured
  • Avoid the words "significant" and "prove"
  • Avoid using two different terms to refer to the same variable.

Some of the other problems and their possible solutions have been discussed in Table 1 .

Table 1: Potential problems and solutions while making a research question


GOING BEYOND FORMULATION OF RESEARCH QUESTION: THE PATH AHEAD

Once the RQ is formulated, an Hs can be developed. The Hs is the transformation of an RQ into an operational analog.[ 1 ] It is a statement of what prediction one makes about the phenomenon to be examined.[ 4 ] More often, for a case–control trial, a null Hs is generated, which is later accepted or refuted.

A strong Hs should have the following characteristics:

  • Gives insight into an RQ
  • Is testable and measurable by the proposed experiments
  • Has a logical basis
  • Follows the most likely outcome, not the exceptional outcome.

EXAMPLES OF RESEARCH QUESTION AND HYPOTHESIS

Research question-1

  • Does a reduced gap between the two segments of the esophagus in patients with esophageal atresia reduce the mortality and morbidity of such patients?

Hypothesis-1

  • A reduced gap between the two segments of the esophagus in patients with esophageal atresia reduces the mortality and morbidity of such patients
  • In pediatric patients with esophageal atresia, a gap of <2 cm between the two segments of the esophagus and proper mobilization of the proximal pouch reduce the morbidity and mortality among such patients.

Research question-2

  • Does application of mitomycin C improve the outcome in patients with corrosive esophageal strictures?

Hypothesis-2

In patients aged 2–9 years with corrosive esophageal strictures, 34 applications of mitomycin C at a dosage of 0.4 mg/ml for 5 min over a period of 6 months improve the outcome in terms of symptomatic and radiological relief. Some other examples of good and bad RQs are shown in Table 2.

Table 2: Examples of a few bad (left-hand column) and a few good (right-hand column) research questions


RESEARCH QUESTION AND STUDY DESIGN

The RQ determines the study design; for example, a question aimed at finding the incidence of a disease in a population will lead to conducting a survey, while finding the risk factors for a disease will need a case–control study or a cohort study. An RQ may also culminate in a clinical trial.[ 9 , 10 ] For example, the effect of administration of folic acid tablets in the perinatal period on decreasing the incidence of neural tube defects. Accordingly, the Hs is framed.

Appropriate statistical calculations are instituted to generate the sample size. The subject inclusion and exclusion criteria and the time frame of the research are carefully defined. The detailed subject information sheet and pro forma are also carefully prepared. A few examples of research methodology guided by an RQ:

  • Incidence of anorectal malformations among adolescent females (hospital-based survey)
  • Risk factors for the development of spontaneous pneumoperitoneum in pediatric patients (case–control design and cohort study)
  • Effect of technique of extramucosal ureteric reimplantation without the creation of submucosal tunnel for the preservation of upper tract in bladder exstrophy (clinical trial).
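As a rough illustration of the sample-size step mentioned above, here is a minimal sketch using Cochran's formula for estimating a single proportion (as in the hospital-based survey example); the 10% prevalence and 5% margin of error are hypothetical values, not taken from the article:

```python
import math

def sample_size_proportion(p, margin, z=1.96):
    """Cochran's formula: n = z^2 * p * (1 - p) / margin^2, rounded up.

    p      : expected proportion (prevalence) in the population
    margin : acceptable margin of error (half-width of the CI)
    z      : z-score for the desired confidence level (1.96 for 95%)
    """
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Hypothetical survey: expected prevalence 10%, +/-5% margin, 95% confidence
print(sample_size_proportion(0.10, 0.05))  # -> 139
```

Note that the worst case is p = 0.5, which maximizes p(1 - p); real studies would also adjust for design effects and anticipated non-response.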

The results of the research are then available for wider application in health and social life.

CONCLUSION

A good RQ needs a thorough literature search and deep insight into the specific area/problem to be investigated. An RQ has to be focused yet simple. Research guided by such a question can have a wider impact in the field of social and health research by leading to the formulation of policies for the benefit of the larger population.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.

REFERENCES


What Makes a Good Research Question?

WriteOn

If you're struggling through the process of writing a research question , you might start to see that there are some Goldilocks aspects of writing a good research question: There is a very narrow range for it to be considered good, and there are a lot of ways for it to be unsatisfactory. A good research question shouldn't be too broad or too narrow, it can't be too simple or too complex, and it has to be answerable but not easy to answer. If finding the balance within these contradictions feels paralyzing to you, don't fret! Keep reading to learn how to write an effective research question that will create a strong foundation for your research paper.

Regardless of whether you are writing a research question about modern science or you're analyzing 19th century literature through a new lens, you will follow the same basic steps when formulating your research question. However, your field of study might have specific requirements and standards for writing a good research question, so in addition to following the steps outlined here, make sure you know the expectations for your particular field before you decide on your research question.

Writing a good research question requires time and preparation. Do not commit to the first research question that pops into your mind. Choosing an unfortunate research question will actually make the writing process much more difficult, and it will ultimately take you longer and cause you more frustration. To save yourself from such a plight, take your time and follow the six steps described below, and ask yourself the list of checklist questions at the end. If you take the time to develop a good research question, your entire writing process will be easier as a result.

Step 1: Choose a topic that interests you

You will be spending quite a bit of time researching, writing, and thinking about this topic, so make sure to choose a topic that actually interests you. It is very difficult to write an effective paper if you do not care about the topic. If you are genuinely interested in the question and invested in the results, you will produce a better research paper while hopefully contributing to a topic that matters to you.

Step 2: Start broad

Identify a broad topic in your field that you would like to explore further. If you are interested in multiple topics, write them all down so you can easily evaluate your options. Look through this list and pick the topic that interests you most. Do not discard the list: You might need to refer back to this list later if you need to refocus, redirect, or totally change your research question.

Step 3: Brainstorm

Once you have selected a broad topic, perform a general internet search to begin brainstorming possible lines of inquiry. Write down every question that occurs to you as you skim articles and peruse academic journals on the topic. Once you have identified some important or unique questions, review your list to see if any question stands out to you. If you feel a strong connection to one particular question, select it for now and start doing some preliminary research on that question. Keep your list of brainstormed questions: You can refer back to it later if you need to add depth or make your question more specific.

Step 4: Perform preliminary research

One of the most important aspects of a good research question is that it must be unique to you. Therefore, you have to think of a question that no one else has researched or a question that has not been researched in the specific way you are proposing. Once you have narrowed down your list of questions, check prominent publications in your field to make sure that previous researchers haven't already explored it. If you find papers or studies that are similar but not identical to yours, note the author(s) and publication(s) in case you want to use them as possible sources.

While you are skimming previous publications, look for gaps in the existing research. If you find a research gap in an area that interests you, add it to the list of research topics that you brainstormed in step three. Finding gaps in existing research might lead you to a valuable research question that will complement existing research.

If you find that someone else has already researched your exact question, go back to the lists you created in steps two and three, and explore ways to modify your question to make it more specific. You might have to repeat this process a few times until you find the perfect combination of factors.

Step 5: Decide what kind of research question you want to ask

Now that you have selected a topic, consider what you hope to learn through this research. Even if you do not know precisely what you hope to learn, asking yourself this question can help you decide whether to write a qualitative question or a quantitative question .

  • Qualitative research questions tend to be open-ended questions that explore people's experiences or beliefs to better understand a topic. These questions usually begin with "what" or "how," and they aim to generate information that is difficult to measure, such as people's attitudes, perceptions, or motivations.
  • Quantitative research questions usually generate quantifiable results that researchers can analyze to find causal relationships between variables. Quantitative research questions usually answer the question "why?" According to Professor Imed Bouchrika , quantitative research questions typically include the population to be studied, dependent and independent variables, and the research design to be used.

Once you've decided whether you're proposing a qualitative or quantitative research question, write down a rough draft version of your question. It doesn't have to be perfect: This is just a rough draft, so it is still subject to change as you go through the process.

Understanding Qualitative vs. Quantitative Research

Step 6: Ensure that you can answer the question

Once you've confirmed that your research question is unique and no one has explored it in this way before, make sure that it is possible to answer your question. If you are researching in a scientific field, you will need to ensure that you can answer the question in an objective manner with supporting qualitative data, quantitative data, or a mix of the two. Consult journals and publications to see if you can find adequate existing data to answer your research question. If the data does not currently exist, consider your time, resources, and page limits to decide whether you will be able to conduct research to obtain the necessary data. Go back to step two if you need to revise your question to make it answerable.

If you are creating a research question for a topic in the liberal arts field, there will probably not be existing quantifiable data in previous publications. Instead, make sure that you can argue your position in a knowledgeable and informed manner. Look for passages in the text that support your argument and search for quotes from other researchers that you can cite to strengthen your argument.

After going through the six steps outlined above, you will have a rough draft of your strong research question. To identify if you have written a good research question, ask yourself the following questions and revise your question accordingly:

  • Can this question be answered with a perfunctory internet search? If so, then your question is too broad. Look for ways to focus the question, such as by applying it to a specific population or in a more specific context.
  • Can this question be answered with a simple yes or no? If so, then you need to add some parameters or additional factors to your question to make it more specific. A good research question cannot be answered in one sentence, and it certainly can't be answered in just one word.
  • Does this question elicit an opinion or value judgement? If so, then you need to keep refining this research question. A good research question should be open to academic analysis and interpretation, but it should not generate judgement or responses based solely on opinions or feelings.
  • Can I adequately answer this question within the boundaries of this research paper? If not, then your research question is probably too broad. You do not want to end up writing a general overview on a topic, so look for ways to focus the question.
  • Does this question allow for analysis and interpretation? If not, then return to steps two and three and see if you can expand on the question or add an element that makes it more complex.
  • Does this question contribute to my field or provide a new perspective on an important topic?
  • Is this research question thought-provoking? A good research question will inspire people to want to learn more. Does this question excite you and make you want to learn more?

If you have been struggling to write a good research question, follow the six steps outlined above and ask yourself these seven essential questions. If you follow this process, you will develop a powerful research question that will be the foundation for a great research paper.


Numbers, Facts and Trends Shaping Your World

Read our research on:

Full Topic List

Regions & Countries

  • Publications
  • Our Methods
  • Short Reads
  • Tools & Resources

Read Our Research On:

Writing Survey Questions

Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.

Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.

Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.

For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.

Question development

There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time, so we regularly update these trend questions to better understand whether people’s opinions are changing.

At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as focus groups, cognitive interviews, pretesting (often using an online, opt-in sample), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the American Trends Panel (ATP).

Measuring change over time

Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.

When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire, so as to maintain a context similar to when the question was asked previously (see question wording and question order for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.

The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that asked about sensitive topics (e.g., personal finances or attending religious services) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to head off confusion or erroneous conclusions.

Open- and closed-ended questions

One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.

For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.

When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see “High Marks for the Campaign, a High Bar for Obama” for more information.)

Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based on that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking and how they view a particular issue, or may bring to light issues the researchers were not aware of.
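The pilot-to-closed-ended winnowing step can be sketched in a few lines of Python. The pilot answers, the cap on the number of options, and the “Other” escape category below are illustrative assumptions, not Pew’s actual procedure:

```python
from collections import Counter

def build_answer_choices(pilot_responses, max_options=5):
    """Pick the most common open-ended pilot answers as closed-ended choices.

    Keeps the option count small and always appends an escape category
    for answers that are not on the list.
    """
    # Normalize light variations ("The economy" vs "the economy") before counting.
    counts = Counter(r.strip().lower() for r in pilot_responses)
    top = [answer for answer, _ in counts.most_common(max_options - 1)]
    return top + ["Other (please specify)"]

# Hypothetical pilot data: open-ended answers to "What issue matters most to you?"
pilot = ["the economy", "The economy", "health care", "education",
         "the economy", "immigration", "health care", "crime"]
print(build_answer_choices(pilot))
```

The escape category preserves the open-ended question’s ability to surface answers the researchers did not anticipate.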

When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.

In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.

In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).

Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized, so that the options are not presented in the same order to each respondent. Answers are sometimes affected by the items that precede them; presenting the items in a different order to each respondent ensures that each item appears in each position (first, last, or anywhere in between) the same number of times across the sample. This does not eliminate order effects, but it does spread that bias randomly across all of the items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents.

Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.
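A minimal sketch of that split, using the abortion scale from the example. The 50/50 random assignment is an assumption for illustration; a production system would also record which direction each respondent saw:

```python
import random

SCALE = ["legal in all cases", "legal in most cases",
         "illegal in most cases", "illegal in all cases"]

def scale_for_respondent(rng):
    """Keep the ordinal categories in order, but reverse the direction for
    a random half of the sample so any recency effect is split both ways."""
    reverse_direction = rng.random() < 0.5
    return list(reversed(SCALE)) if reverse_direction else list(SCALE)
```

Either way the respondent sees a coherent continuum; only the starting end differs.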

Question wording

The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.

An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule  even if it meant that U.S. forces might suffer thousands of casualties, ” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.

There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:

First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, a number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive), and the response categories should not overlap (i.e., the options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.

It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.

In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., “Do you favor or oppose not allowing gays and lesbians to legally marry?”) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.

Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”

We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two  forms  of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
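The random assignment behind a two-form wording experiment can be sketched like this (the even split and the form labels are assumptions for illustration):

```python
import random

def assign_forms(respondent_ids, rng=None):
    """Shuffle the sample and split it in half, so each half receives one
    wording of the question and the groups are comparable in expectation."""
    rng = rng or random.Random()
    ids = list(respondent_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"form_a": ids[:half], "form_b": ids[half:]}

groups = assign_forms(range(1000), random.Random(42))
# Because assignment is random, a large difference in answers between the
# two groups can be attributed to the wording rather than to who was asked.
```

This is the same logic as random assignment in any experiment: randomization, not matching, is what makes the two halves comparable.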

One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when there’s an interviewer present, rather than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.

One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).

Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.

Question order

Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).

One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.

For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses) and assimilation effects (where responses are more similar as a result of their order).

An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.

Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88% said they were dissatisfied, compared with only 78% without the context of the prior question.

Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).

Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.

Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).

The order in which questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see measuring change over time for more information).

A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.

ABOUT PEW RESEARCH CENTER  Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of  The Pew Charitable Trusts .

Copyright 2024 Pew Research Center


HBR IdeaCast podcast series

Are You Asking the Right Questions?

A conversation with IMD Business School’s Arnaud Chevallier on simple changes to improve your decision-making.

Few leaders have been trained to ask great questions. That might explain why they tend to be good at certain kinds of questions, and less effective at other kinds. Unfortunately, that hurts their ability to pursue strategic priorities. Arnaud Chevallier, strategy professor at IMD Business School, explains how leaders can break out of that rut and systematically ask five kinds of questions: investigative, speculative, productive, interpretive, and subjective. He shares real-life examples of how asking the right sort of question at a key time can unlock value and propel your organization. With his IMD colleagues Frédéric Dalsace and Jean-Louis Barsoux, Chevallier wrote the HBR article “The Art of Asking Smarter Questions.”

CURT NICKISCH: Welcome to the HBR IdeaCast from Harvard Business Review, I’m Curt Nickisch.

The complexity and uncertainty around business today demands a different skill in leaders, namely the ability to ask illuminating questions.

Jensen Huang, the CEO of chipmaker NVIDIA, has said that over time his job has become less about giving answers to problems and more about asking questions; that he wants his team to join that exploration with him. And it’s probably not a coincidence that his company operates at the heart of the artificial intelligence revolution. After all, now that you have the ability to basically talk to a database, it really does come down to the questions you ask of it. By the way, we talked to him on IdeaCast in episode 940, check that out.

But today’s guest says few business professionals are trained in the skill of asking questions. They don’t know the different types of strategic questions, and even when they do hang question marks, they often have blind spots.

Here to explain is Arnaud Chevallier, a professor at IMD Business School. With his colleagues Frédéric Dalsace and Jean-Louis Barsoux, he wrote the HBR article “The Art of Asking Smarter Questions.” Welcome, Arnaud.

ARNAUD CHEVALLIER: Thanks for having me, Curt.

CURT NICKISCH: Why is asking questions, this basic conversational skill, so hard for people?

ARNAUD CHEVALLIER: Well, I think we’ve all heard it, asking more questions helps people make better decisions. But there’s a dark side. Because whenever you’re asking one question, you’re not asking another type of question. And so if you’re under time pressure, you might be probing one side of a problem or decision but not other sides. And if you look at managers compared to other professions, lawyers, physicians, psychologists, they’re trained to ask better questions. Managers, seems like we are supposed to learn on the job.

CURT NICKISCH: And many do learn it and perhaps learn a certain kind of question that seems to work for them for some time. You point out a lot of people don’t understand that there are different types of questions that you can be asking, and they just by their nature tend to ask a certain type of question but avoid other ones just because it doesn’t come naturally to them.

ARNAUD CHEVALLIER: Yeah. That’s what we find speaking with managers and leaders across organizations. I think when you start professionally, you develop your own mix of questions. Maybe you pick up a couple questions that you think are insightful from your boss perhaps. You get to learn and hone that mix and it gets you here but it’s unclear when you get promoted to your next job that what got you here will get you there.

We are trained, we are told, “Ask open-ended questions, ask follow up questions.”

CURT NICKISCH: Yeah, ask why. Ask the five why’s.

ARNAUD CHEVALLIER: The five why’s, absolutely. But what else? You get to the, “Sure, good idea. I should ask why. What else should I ask?” And usually the guidance falls flat. We’ve been speaking with hundreds of executives, trying to understand which questions they ask. We’ve been speaking with very senior people trying to understand what works for them. And out of that we came together with a taxonomy of questions that we believe are useful in making better decisions, in solving complex problems.

CURT NICKISCH: This taxonomy basically divides strategic questions into five types, investigative, speculative, productive, interpretive and subjective. It’s probably smart for us to go through them one by one.

ARNAUD CHEVALLIER: Let’s, because that’s a mouthful, right? Let’s project ourselves into big decisions that you have to make maybe as a manager or maybe as just a person. Perhaps you’re thinking about buying a new house, maybe moving the family. Maybe you’re thinking about acquiring a new firm. Whenever we’re faced with those complex decisions, pretty quickly we want to identify, “Okay, what is it that we want to achieve?”

But we realize we don’t have enough information to achieve it. We need to get into investigative mode by asking ourselves, what’s known? What’s known about the problem? For instance, the five why’s. Or what’s known about the solutions, the potential solutions by asking how may we do this? How may we do that? The first type of question is investigative, helps you probe in depth into the problem or into the solution.

CURT NICKISCH: Some of the questions that can be asked here are what happened? What is and isn’t working? What are the causes of the problem? Those are all examples of investigative questions. Are these questions that are typically asked at the beginning of a process, or can they be used anywhere in problem solving?

ARNAUD CHEVALLIER: Yes. What we’re finding out is it works better actually if we go back and forth. There’s no real segmentation because investigative gets you to a point: you drill deeper into the problem or into the solutions, but going deep is not the only way. You may want to speculate as well. The second type of question is speculative questions, epitomized by what if? Speculative questions are here to help you foster innovation by challenging the implicit and the explicit assumptions for the problem.

CURT NICKISCH: What if is really good. Examples of this are also what other scenarios might exist? Could we do this differently? That’s a way of just asking a simple question, but trying to open up a brand new avenue of thinking or problem solving.

ARNAUD CHEVALLIER: Exactly. And by doing this, you’re really expanding the space in which you operate. Investigative, you go deep. And speculative, you go wide, and you’re stretching a little bit the universe of possibilities.

CURT NICKISCH: Now, productive is the next type. Tell us about that.

ARNAUD CHEVALLIER: Yeah. Investigative, you go deep. Speculative, you go wide. If you’re a professor, that’s all you have to do. You can spend years and years on your problem, but if you actually have a real job, chances are you’re asked to have some results, right? So productive is the “now what” questions. You’re adjusting the pace of the effort, deciding whether you know enough to move forward right away, or perhaps deciding that you need to slow down a little bit before you make those decisions, to give you a chance to get even more insight into your problem.

CURT NICKISCH: Examples here that you list in your article are things like, do we have the resources to move ahead? Do we know enough to proceed? Are we ready to decide? Very tactical and the sorts of questions that bring everybody back to the realization of what needs to happen.

ARNAUD CHEVALLIER: That’s right. How are we doing compared to the project plan, and should we accelerate or should we slow down?

CURT NICKISCH: I can definitely see certain types of managers would be really good at this. There are roles sometimes that are very operational or process oriented, and you almost have a traffic police officer managing a process, yeah. Interpretive was the next type.

ARNAUD CHEVALLIER: Investigative, what’s known? Speculative, what if? Productive, now what? All these get me some information about my decision, about my problem. But information is one thing, and it’s not quite wisdom. The fourth type, the interpretive questions, the “so what,” helps us convert that information into insight.

CURT NICKISCH: Examples here are questions like how does this fit with that goal? What are we trying to achieve – that really gets at so what? What did we learn from this new information? This seems very helpful at a transition point where you’ve … I don’t know, you’ve gotten customer data back or you have new information to process.

ARNAUD CHEVALLIER: I love how you phrased it, because this is also what we’re discovering: the “so what” helps you transition from one type of question to another. So the five whys: why aren’t we having better revenues? Because our clients are not buying enough of our products. Okay, so what? Maybe that will help me transition from being investigative, asking why, to perhaps being speculative, thinking about how else we could get our clients to buy our products. It enables you to transition from one type to another.

CURT NICKISCH: Now, the last type of strategic question that you identify in your taxonomy is subjective, which was really interesting to me because it wasn’t one of the sorts of questions I expected to jump out in a strategy framework. Tell us a little bit more about subjective questions.

ARNAUD CHEVALLIER: Maybe it’s helpful to explain how we got to the first four types. We were very happy when we got there; we figured it was really clicking, and then we had a catchy way of thinking about it: it’s four types, but there are really three main ones, like the three Musketeers, that sort of thing. We thought we were done, and then we started interviewing top leaders, people in charge of billion-dollar operations. And there was something else, and maybe this is best exemplified by a wonderful little cartoon by Jack Ziegler in The New Yorker a few years ago, where you see a little fish happily swimming around, minding its own business, not realizing that right behind it there’s a huge fish about to eat it alive. The small one is labeled agenda, and the big one is labeled hidden agenda. The last type, subjective questions, is about realizing that we’re dealing with people. People have emotions, they have political agendas, and if we don’t embrace this, we might just miss entirely what the problem is actually all about.

CURT NICKISCH: Examples of these questions are how do you really feel about this decision? Have we consulted the right people? Those are all things that do get at those emotions and just the real impact of business decisions.

ARNAUD CHEVALLIER: Right on. I remember specifically we were interviewing the CEO of a major airplane manufacturer. A brilliant fellow, mid-40s, everything … a former engineer, I think. We were expecting him to be very investigative. Nothing against engineers, I’m one myself. But it turns out he told us that after every big meeting he would sit down and reflect on whether there was a difference between what was said, what was heard, and what was meant. To him, what really mattered was that human component in the meeting.

CURT NICKISCH: Now that we have these five types, let’s go through some of the advice that you have in your article. Number one is that you really want people to understand what questions they tend to ask, or what their own interrogatory typology is. Talk more about that.

ARNAUD CHEVALLIER: I think it’s fair to say that we all develop our question mix, the questions that have served us well, that we believe will serve us well in the future. I remember for example, interviewing the COO of a major car company. And he’s telling us how on Monday morning he meets his team and he’s asking them, “How was your weekend?”

But he also made it very clear that when he’s asking “How was your weekend?” he doesn’t want to hear about little Timmy’s baseball game; he wants to know whether we shipped on time and whether there are any issues with the manufacturers. In other words, he is in full productive mode. And that makes a lot of sense. Again, he’s a COO. His job is to get things moving. But we can also imagine that he’s doing such a good job at the COO level that he might be offered the CEO position. And here, if he’s using the same mix, predominantly productive, he might not see other areas; he might develop some blind spots.

CURT NICKISCH: And so number one, you can learn to mix it up yourself by understanding your type, basically keeping track of the questions that you ask and making a concerted effort to ask different kinds of questions so that you expand your repertoire. That’s one way to get started.

ARNAUD CHEVALLIER: Maybe another way is also to take the LQM test, the leader’s question mix test that we are putting together on the IMD website. It takes five minutes: you’re given two batches of questions, and you tell us which ones you prefer. As a result, we help you identify what your preferred mix is. And back to your point, Curt, my preferred mix is one thing, but I need to realize as well that there are other questions, including some that I’m not familiar or comfortable with, and that what matters is not so much my preference as what is needed for the specific decision or specific problem I’m facing.

CURT NICKISCH: So if you’ve assessed your current question style, you start to adjust your repertoire, it’s still a lot to keep track of. When you’re in conversations, it’s easy to remember afterwards, why didn’t I ask that question? While you’re in it, especially if it’s a heated exchange or a very pithy conversation, it’s hard to just do this in real time on the fly, really well.

So what advice do you have for somebody to practically keep track, and expand their repertoire, but also make sure that they’re not missing anything and that they still don’t have blind spots even after they try to expand the zone in that way?

ARNAUD CHEVALLIER: I think you’re describing situations that we see often with executives. One way of doing this: when you take the LQM, the leader question mix assessment, you also get a list of questions. And you can take that list with you, especially if there are some types of questions you realize you don’t ask very naturally. You can also pick a couple of those ahead of a meeting, making a mental or written note to ask those questions there and see what happens.

CURT NICKISCH: Does this work at all levels of the organization or are we really talking about leaders asking strategic questions?

ARNAUD CHEVALLIER: We’ve applied it at all levels, absolutely, and in fact what we’ve found is in teams it works even better, realizing first that we have different mixes and then identifying, so what? Being interpretive: what are we going to do with the fact that you and I, Curt, have different mixes? If I’m terrible at one type, for instance speculative, maybe I need to rely on my teammates who are better there. Or at the very least, learn to recognize the value of speculative questions, at least in some settings, not shutting down the door the moment I hear a speculative question.

CURT NICKISCH: And one point you make in the article too is that you can find people on your team to help compensate for you if you know that you have certain weaknesses. Let’s talk a little bit about the difficulty of asking questions though in business settings, because when you ask a question, in some ways you’re putting people on the spot. What advice do you have for managers and leaders asking questions in these settings where you can ask penetrating and provocative questions but not make them feel so hard edged?

ARNAUD CHEVALLIER: Yeah. I think, again, you’re putting your finger on it, because if you’re the authority figure and you ask, “Why did you do this?” chances are the person on the receiving end is going to feel threatened. There is what we ask, and there is how we ask it and how we phrase it. And what we found with those leaders who are particularly good with these subjective kinds of questions is that they’re very conscious of the way they ask things. For instance, you might not ask “Why did you do this?” but perhaps “What happened?”

CURT NICKISCH: Can you give us some examples of how changing your mix, asking different types of questions, being more deliberate in your question asking, can lead to better business results?

ARNAUD CHEVALLIER: Well, my favorite of course is a Swiss cliché. IMD professors will tell you, of course we’ll bring it back to the Swiss army knife. And your mix really is a Swiss army knife. You shouldn’t have just one blade; you have different mixes of questions, and you use the mix that best fits whichever situation you’re in.

Take the example of an airline captain who’s about to land at Geneva airport. If I’m in the back of the plane, I do not want the captain to start thinking speculative questions. “Hey, what if I turn this knob here? What if I try to land the plane in a different way?” No, no, no, no. Her job at that time is to land the plane, be productive. You take the time you need to decide, no more, no less, and you just get it done. But that same captain maybe a few minutes before might have had to deal with an issue, maybe a passenger who had drunk too much alcohol and started to act up, and maybe she needed to think creatively on the spot, perhaps using seat belts to restrain the passenger.

And perhaps even earlier in the day when she first met the first officer who was going to assist her on the flight, she needed to create quickly an environment where they could work well together. She maybe needed to be very subjective in her question mix. We can see how the same person on the same job might have to fundamentally alter her mix just to be effective at all three decision points.

CURT NICKISCH: You also have a lot of good examples in the article of companies that … Or leaders that didn’t ask a certain type of question, and that led to a huge problem.

ARNAUD CHEVALLIER: Mmhmm. Being French, we can make fun of the SNCF who built …

CURT NICKISCH: This is the French rail company.

ARNAUD CHEVALLIER: That’s right. They ordered 15 billion worth of trains and spec’d them on the assumption that all platforms were a standard size, only to realize that the platforms, all 1,300 of them, were actually larger and needed to be respec’d. And I think in hindsight, it’s always easy to make fun and to look at deficiencies in the decision process.

However, we probably can safely assume that the engineers on the problem did their utmost to get it done. Really, the five question types are a way of having a checklist, of reducing the chances of having blind spots in our decision process, while realizing that those blind spots can happen even to the best organizations out there, and that if we’re not mindful about the questions we ask, we might just every now and then fail to check an important question category.

CURT NICKISCH: One question that you suggest asking is, “are we all okay with this?” Which is a powerful question. It also presupposes that you’ve got the psychological safety on the team for everybody to be able to speak up. So, questioning and asking the right questions at the right time still demands an awareness of the culture that you’re asking it in, and how these questions are going to come across, and whether you’ve created the climate for people to be able to give you the powerful answers that you’re asking for.

ARNAUD CHEVALLIER: This is a very good point. And we’ve worked with organizations where there was very little psychological safety, where admitting that anything might be less than perfect might be a big, big issue. And in those settings it’s much more challenging, but there are ways of still eliciting the wisdom of the group.

One such way, for instance, is to use pre-mortems and to project the organization forward. Say, “Okay, let’s go with this decision. Let’s assume that we are picking option one, and we are now three years from today, and we realize it’s a total fiasco. It crashed down. What happened?” And that can help people who would probably not ask questions frontally to put on the table some less-than-perfect aspects of the decision they’re seeing.

CURT NICKISCH: Yeah, that’s very clever. What could go wrong? What did go wrong with this fiasco? It’s almost like this article is giving advice for how to speak, how to talk. Asking a question, it’s a conversational device. And it might seem too basic to people, why is this important and why is this especially important now?

ARNAUD CHEVALLIER: Well, questions are ways to make better decisions. We’ve all heard it: asking better questions is a way forward. We probably all develop our own mix of questions, those questions that we like, but there might be three issues associated with that. First, how do you know that your mix is a good mix? Second, when you’re asking a question, especially under time pressure, you’re not asking another type of question. There’s an opportunity cost to asking a specific question. Are you sure that you’re using the best question for the job? And third, maybe your mix got you here, but if you’re doing such a good job here that you’re getting promoted, then tomorrow’s universe for you is not the same as yesterday’s. How do you adapt your question mix to help you be successful in the future?

CURT NICKISCH: And is there anything different about today’s business climate or the oncoming opportunity with artificial intelligence, that amplifies the ability to ask questions?

ARNAUD CHEVALLIER: I think you’re spot-on. GenAI, especially since late 2022, enables you to have a sparring partner for that back and forth. You can indeed have a conversation with a database now, and you can’t have that conversation by proposing answers. You need to be asking questions. Clearly, asking more insightful questions might unlock some value you couldn’t reach otherwise.

CURT NICKISCH: So for a speculative question, what does that look like in a real business setting?

ARNAUD CHEVALLIER: You hear middle managers who are often risk-averse, and then you speak with their boss, and the boss is always asking for taking more risk. And you can rationalize it from both sides. Because the boss has a portfolio of projects, and if some of those fail, no big deal. But if I’m the manager in charge of a project and it fails, then pretty quickly I start thinking that people associate me with failure. And so asking what if, having that conversation between the top team and the manager, saying, “What if we didn’t care about failure? What if we expected each of us managers to have some failures? What if we relaxed this constraint or that constraint?” can help us realize and realign what would be individual objectives with organizational ones.

CURT NICKISCH: Do you remember any good stories from the executives that you talked to where asking some of these subjective, what’s unsaid questions really opened up new opportunities or changed things?

ARNAUD CHEVALLIER: Yeah. And this one really gets to the human dimension. If you ask me next Monday morning how I’m doing and I reply, “Fine,” fine can be a number of things. Fine can be, my dog died yesterday. Or fine can be, life is beautiful. What we found with some of the execs who were really good at getting to the essence of it is probing in a caring way to understand the meaning behind the words, what’s kept unsaid, and remembering that you have short-pause people and you have long-pause people: some people will say “fine” as just an introduction, but if you give them a little bit more time, they might actually expand and, through that, unlock a set of information you wouldn’t have had access to.

CURT NICKISCH: Arnaud, I have to ask, you’ve done all this research, I’m curious if you have a favorite question that you never asked before that you’ve come out of this process with that you use in your work and your job.

ARNAUD CHEVALLIER: Putting me on the spot, huh.

CURT NICKISCH: A little bit.

ARNAUD CHEVALLIER: I really fell in love with that difference between what was said, what was heard, and what was meant. I really think this is something I need to be better at: reading the weak signals and understanding what’s behind the words. But whenever I take the test, and I’ve taken it several times, what comes out is that I am terrible at productive questions. So maybe, just maybe, I need to pay more attention to the pace of my decision making.

CURT NICKISCH: For a manager who’s not a leader yet, hasn’t developed their repertoire per se, what advice would you give to them? What can they do tomorrow to start asking more strategic and stronger questions?

ARNAUD CHEVALLIER: My advice to someone who feels they don’t yet have a mix is, first of all, you probably already have one. There are probably a couple of questions that you’ve seen or heard that felt very insightful. But maybe you want to do as I do and keep track. All the questions I hear on your podcast and elsewhere that I haven’t heard before, I keep in a long list, and then I categorize them under the five buckets and I have my favorite ones.

CURT NICKISCH: Arnaud, thanks so much for coming on the show and sharing this research with us.

ARNAUD CHEVALLIER: My pleasure, thanks for having me.

CURT NICKISCH: That’s Arnaud Chevallier, a professor at IMD Business School and a co-author of the HBR article “The Art of Asking Smarter Questions.”

And we have nearly 1,000 episodes plus more podcasts to help you manage your team, your organization and your career. Find them at HBR.org/podcasts or search HBR in Apple Podcasts, Spotify or wherever you listen.

Thanks to our team: senior producer Mary Dooe, associate producer Hannah Bates, audio product manager Ian Fox, and senior production specialist Rob Eckhardt. Thank you for listening to the HBR IdeaCast. We’ll be back with a new episode on Tuesday. I’m Curt Nickisch.


ORIGINAL RESEARCH article

Is this good science communication? Construction and validation of a multi-dimensional quality assessment scale from the audience's perspective (QUASAP). Provisionally accepted.

  • 1 Technical University of Braunschweig, Germany
  • 2 Institute for Communication Science, Germany

The final, formatted version of the article will be published soon.

The expansion of science communication (Bubela et al., 2009; Bucchi and Trench, 2014) underscores the increasing importance of understanding what constitutes good science communication (Niemann et al., 2023). This question concerns the public's understanding of and engagement with science (Choung et al., 2020; Davies, 2013; Stilgoe et al., 2014). The scholarly discussion has shifted from the traditional deficit model to a more dialogue-oriented approach yet remains normatively anchored (Fähnrich, 2017; Mede and Schäfer, 2020). However, there is a striking lack of attention to the audience's perspective on what good science communication is, and different formats of science communication have hardly been researched thus far.

Therefore, this paper introduces a multi-dimensional scale to capture the audience's assessment of specific science communication formats. The development utilized a multi-step process to identify relevant criteria from both theoretical and practical perspectives. The instrument integrates 15 distinct quality dimensions, such as comprehensibility, credibility, fun, and applicability, structured according to different quality levels (functional, normative, user-oriented, and communication-oriented). It considered theory-driven and practice-experienced categories and was validated through confirmatory factor analyses conducted on a representative German sample (n = 990). For validation, the scale was applied to a science blog post and a science video on homeopathy.

After employing a seven-step process, we conclude that the newly devised scale effectively assesses the perceived quality of both blog and video science communication content. The overall assessment aligns with common target variables, such as interest and attitudes. The results regarding the different quality subdimensions provide a nuanced understanding of their contribution to the perceived overall quality. In this way, the scale aids in enhancing science communication in accordance with audience perceptions of quality. This marks the first introduction of a comprehensive measurement instrument tailored to gauge quality from the audience's standpoint, rendering it applicable for use by both researchers and practitioners.

Keywords: Science Communication, quality, audience perspective, Evaluation, multidimensional

Received: 09 Feb 2024; Accepted: 22 Apr 2024.

Copyright: © 2024 Taddicken, Fick and Wicke. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Prof. Monika Taddicken, Technical University of Braunschweig, Braunschweig, Germany


Poll: Election interest hits new low in tight Biden-Trump race

The share of voters who say they have high interest in the 2024 election has hit a nearly 20-year low at this point in a presidential race, according to the latest national NBC News poll, with majorities holding negative views of both President Joe Biden and former President Donald Trump.

The poll also shows Biden trimming Trump’s previous lead to just 2 points in a head-to-head contest, an improvement within the margin of error compared to the previous survey, as Biden bests Trump on the issues of abortion and uniting the country, while Trump is ahead on competency and dealing with inflation.

And it finds inflation and immigration topping the list of most important issues facing the country, as just one-third of voters give Biden credit for an improving economy.

But what also stands out in the survey is how the low voter interest and the independent candidacy of Robert F. Kennedy Jr. could scramble what has been a stable presidential contest with more than six months until Election Day. While Trump holds a 2-point edge over Biden head to head, Biden leads Trump by 2 points in a five-way ballot test including Kennedy and other third-party candidates.

“I don’t think Biden has done much as a president. And if Trump gets elected, I just feel like it’s going to be the same thing as it was before Biden got elected,” said poll respondent Devin Fletcher, 37, of Wayne, Michigan, a Democrat who said he’s still voting for Biden.

“I just don’t feel like I have a candidate that I’m excited to vote for,” Fletcher added.

Another poll respondent from New Jersey, who declined to provide her name and voted for Biden in 2020, said she wouldn’t be voting in November.

“Our candidates are horrible. I have no interest in voting for Biden. He did nothing. And I absolutely will not vote for Trump,” she said.

Democratic pollster Jeff Horwitt of Hart Research Associates, who conducted the survey with Republican pollster Bill McInturff of Public Opinion Strategies, said, “Americans don’t agree on much these days, but nothing unites the country more than voters’ desire to tune this election out.”

The poll was conducted April 12-16, during yet another turbulent time in American politics, including the beginning of Trump’s criminal trial in New York and new attacks and heightened tensions in the Middle East.

According to the poll, 64% of registered voters say they have high levels of interest in November’s election — registering either a “9” or a “10” on a 10-point scale of interest.

That’s lower than what the NBC News poll showed at this time in the 2008 (74%), 2012 (67%), 2016 (69%) and 2020 (77%) presidential contests.

The question dates to the 2008 election cycle. The lowest level of high election interest in the poll during a presidential cycle was in March 2012 — at 59%. But it quickly ticked up in the next survey.

This election cycle, high interest has been both low and relatively flat for months, according to the poll.

McInturff, the Republican pollster, says the high level of interest in the poll has “always been a signal for the level of turnout” for a presidential contest.

“It makes it very hard for us to predict turnout this far in advance of November, but every signal is turnout will be a lower percentage of eligible voters than in 2020,” he said.

By party, the current poll shows 70% of self-identified Republicans saying they have high interest in the coming election, compared with 65% of Democrats who say so.

Independents are at 48%, while only 36% of voters ages 18 to 34 rate themselves as highly interested in the election.

“They just aren’t low interest,” McInturff said of young voters. “They are off-the-charts low.”

NBC News poll: Frequently asked questions

Professional pollsters at a Democratic polling firm (Hart Research Associates) and a Republican firm (Public Opinion Strategies) have worked together to conduct and operate this poll since 1989. (Coldwater Corporation served as the Republican firm from 1989-2004.)

The polling firms employ a call center, where live interviewers speak by cell phone and telephone with a cross section of (usually) 1,000 respondents. The respondents are randomly selected from national lists of households and cell numbers. Respondents are asked for by name, starting with the youngest male adult or female adult in the household.

One of the common questions that critics ask of polls is, “I wasn’t interviewed, so why should this poll matter?” By interviewing 1,000 respondents and applying minimal weights based on race, ethnicity, age, gender, education and the 2020 presidential vote, the poll achieves a representative sample of the nation at large – with a margin of error at a 95% confidence level.
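The margin of error quoted later in this article (plus or minus 3.1 percentage points) can be sanity-checked with the textbook simple-random-sampling formula for a proportion. This is a simplified sketch: it assumes the worst-case proportion of 50% and ignores the design effects of weighting, which in practice widen the interval somewhat.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a confidence interval for a sample proportion.

    n: sample size; p: assumed proportion (0.5 is the worst case,
    maximizing the standard error); z: critical value (1.96 for 95%).
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-respondent poll at 95% confidence:
moe = margin_of_error(1000)
print(f"±{moe * 100:.1f} points")  # ±3.1 points
```

Halving the margin of error requires roughly quadrupling the sample size, which is why national polls settle around 1,000 respondents rather than chasing much tighter intervals.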

NBC News editors and reporters — along with the pollsters at Hart Research and Public Opinion Strategies — all work to formulate the questions to try to capture the news and current events NBC is trying to gauge. Both Hart Research and Public Opinion Strategies work to ensure the language and placement of the questions are as neutral as possible.

Biden trims Trump’s lead

The poll also finds Trump narrowly ahead of Biden by 2 points among registered voters in a head-to-head matchup, 46% to 44% — down from Trump’s 5-point advantage in January, 47% to 42%.

The movement, which is within the poll’s margin of error of plus or minus 3.1 percentage points, is consistent with what other national polls have found in the Trump-Biden race.

Trump’s biggest advantages are among men (53% to 37%), white voters (54% to 37%) and white voters without college degrees (65% to 25%).

Biden’s top advantages are among Black voters (71% to 13%), women (50% to 39%) and Latinos (49% to 39%).

The poll shows the two candidates are essentially tied among independents (Biden 36%, Trump 34%) and voters ages 18-34 (Biden 44%, Trump 43%). One of the big polling mysteries this cycle is whether young voters have defected from Biden (as the NBC News poll has found over multiple surveys) or whether Democrats have maintained their advantage among that demographic.

When the ballot is expanded to five named candidates, Biden takes a 2-point lead over Trump: Biden 39%, Trump 37%, Kennedy 13%, Jill Stein 3% and Cornel West 2%.

Again, the result between Biden and Trump is within the poll’s margin of error.

Notably, the poll finds a greater share of Trump voters from the head-to-head matchup supporting Kennedy in the expanded ballot compared with Biden voters, different from the results of some other surveys.

(Read more here about how Kennedy’s candidacy affects the 2024 race, according to the poll.)

The president’s approval rating ticks up to 42%

In addition, the poll found 42% of registered voters approving of Biden’s overall job performance — up 5 points since January’s NBC News poll, which found Biden at the lowest point of his presidency.

Fifty-six percent of voters say they disapprove of the job he has done, which is down 4 points from January.

Biden’s gains over the past few months have come from key parts of his 2020 base, especially among Democrats and Black voters. But he continues to hold low ratings among Latinos (40% approval), young voters (37%) and independents (36%).

“The data across this poll show that Joe Biden has begun to gain some ground in rebuilding his coalition from 2020,” said Horwitt, the Democratic pollster. “The question is whether he can build upon this momentum and make inroads with the groups of voters that still are holding back support.”

But McInturff, the GOP pollster, points out that the only recent presidents who lost re-election had approval ratings higher than Biden’s at this point in the election cycle: George H.W. Bush (43%) and Trump (46%).

“President Biden has a precarious hold on the presidency and is in a difficult position as it relates to his re-election,” McInturff said.

On the issues, 39% of voters say they approve of Biden’s handling of the economy (up from 36% in January), 28% approve of his handling of border security and immigration, and just 27% approve of his handling of the Israel-Hamas war (down from 29% in January).

Voters gave Biden his highest issue rating on addressing student loan debt, with 44% approving of his handling of the issue, compared with 51% who say they disapprove.

Biden leads on abortion and unity; Trump leads on inflation and competency

The NBC News poll asked voters which candidate they thought was better on several different issues and attributes.

Biden holds a 15-point advantage over Trump on dealing with the issue of abortion, and he is ahead by 9 points on having the ability to bring the country together — though that is down from his 24-point advantage on that issue in the September 2020 NBC News poll.

Trump, meanwhile, leads in having the ability to handle a crisis (by 4 points), in having a strong record of accomplishments (by 7 points), in being competent and effective (by 11 points), in having the necessary mental and physical health to be president (by 19 points) and in dealing with inflation and the cost of living (by 22 points).

Inflation, immigration are the top 2024 issues

Inflation and the cost of living top the list of issues in the poll, with 23% of voters saying they’re the most important issue facing the country.

The other top issue is immigration and the situation at the border (22%), followed by threats to democracy (16%), jobs and the economy (11%), abortion (6%) and health care (6%).

In addition, 63% of voters say their families’ incomes are falling behind the cost of living — essentially unchanged from what the poll found in 2022 and 2023.

And 53% of voters say the country’s economy hasn’t improved, compared with 33% who say that it has improved and that Biden deserves some credit for it and another 8% who agree the economy has improved but don’t give him credit for it.

“If I look back to when I had all three of my children in the house — we only have one child left in the house now, and we’re spending more now than what we did when we had a family of five,” said poll respondent Art Fales, 45, of Florida, who says he’s most likely voting for Trump.

But on a separate question — is there an issue so important that you’ll vote for or against a candidate solely on that basis? — the top responses are protecting democracy and constitutional rights (28%), immigration and border security (20%) and abortion (19%).

Indeed, 30% of Democrats, 29% of young voters and 27% of women say they are single-issue voters on abortion.

“I have a right to what I do with my body,” said poll respondent Amanda Willis, 28, of Louisiana, who said she’s voting for Biden. “And I don’t believe that other people should have the ability to determine that.”

Other poll findings

  • With Trump’s first criminal trial underway, 50% of voters say he is being held to the same standard as anyone else when it comes to his multiple legal challenges. That compares with 43% who believe he’s being unfairly targeted in the trials. 
  • 52% of voters have unfavorable views of Biden, while 53% share the same views of Trump.
  • And Democrats and Republicans are essentially tied in congressional preference, with 47% of voters preferring Republicans to control Congress and 46% wanting Democrats in charge. Republicans held a 4-point lead on this question in January.

The NBC News poll of 1,000 registered voters nationwide — 891 contacted via cellphone — was conducted April 12-16, and it has an overall margin of error of plus or minus 3.1 percentage points.
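The quoted margin of error is consistent with the standard formula for a 95% confidence interval on a simple random sample. A minimal sketch, assuming the conservative proportion p = 0.5 and ignoring the design effects that pollsters apply for weighting (so published figures can differ slightly):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n,
    evaluated at the most conservative proportion p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# For the poll's 1,000 registered voters:
print(round(100 * margin_of_error(1000), 1))  # → 3.1
```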

Mark Murray is a senior political editor at NBC News.

Sarah Dean is a 2024 NBC News campaign embed.

COMMENTS

  1. Formulating a good research question: Pearls and pitfalls

    Focusing on the primary research question. The process of developing a new idea usually stems from a dilemma inherent to clinical practice.[2,3,4] However, once the problem has been identified, it is tempting to formulate multiple research questions. Conducting a clinical trial with more than one primary study question would not be feasible.

  2. 10 Research Question Examples to Guide your Research Project

    The first question asks for a ready-made solution, and is not focused or researchable. The second question is a clearer comparative question, but note that it may not be practically feasible. For a smaller research project or thesis, it could be narrowed down further to focus on the effectiveness of drunk driving laws in just one or two countries.

  3. Quality in Research: Asking the Right Question

    This column is about research questions, the beginning of the researcher's process. For the reader, the question driving the researcher's inquiry is the first place to start when examining the quality of their work because if the question is flawed, the quality of the methods and soundness of the researchers' thinking does not matter.

  4. Back to the basics: Guidance for formulating good research questions

    As such, the purpose of this commentary is to provide useful guidance on composing and evaluating rigorous research questions. 2. A framework for formulating research questions. Although every research project is unique, they share common domains that a researcher should consider and define a priori.

  5. Research Question 101

    A good research question is focused, specific, practical, rooted in a research gap, and aligned with the research aim. If your question meets these criteria, it's likely a strong question. Is a research question similar to a hypothesis? Not quite. A hypothesis is a testable statement that predicts an outcome, while a research question is a ...

  6. Ten simple rules for good research practice

    Rule 1: Specify your research question. Coming up with a research question is not always simple and may take time. A successful study requires a narrow and clear research question. In evidence-based research, prior studies are assessed in a systematic and transparent way to identify a research gap for a new study that answers a question that ...

  7. Creating a Good Research Question

    This video explains the PICO framework in practice as participants in a workshop propose research questions that compare interventions. Introduction to Designing & Conducting Mixed Methods Research is an online course that provides a deeper dive into mixed methods research questions and methodologies.

  8. Writing Strong Research Questions

    A good research question is essential to guide your research paper, dissertation, or thesis. All research questions should be: Focused on a single problem or issue. Researchable using primary and/or secondary sources. Feasible to answer within the timeframe and practical constraints. Specific enough to answer thoroughly.

  9. Formulating a good research question: Pearls and pitfalls

    The process of formulating a good research question can be challenging and frustrating. While a comprehensive literature review is compulsory, the researcher usually encounters methodological dif ...

  10. A Practical Guide to Writing Quantitative and Qualitative Research

    INTRODUCTION. Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses.1,2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results.3,4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the ...

  11. Research Question Examples

    A well-crafted research question (or set of questions) sets the stage for a robust study and meaningful insights. But, if you're new to research, it's not always clear what exactly constitutes a good research question. In this post, we'll provide you with clear examples of quality research questions across various disciplines, so that you can approach your research project with confidence!

  12. Developing a Researchable Research Question

    After thinking about what topics interest you, identifying a topic that is both empirical and sociological, and deciding whether your research will be exploratory, descriptive, or explanatory, the next step is to form a research question about your topic. For many researchers, forming hypotheses ...

  13. Good research questions

    Good questions are also theoretically, pedagogically, or empirically motivated and hence worthy of research. A research question is theoretically worthy of investigation to the same extent that it contributes to the development, refinement, or testing of a theory, hypothesis, or key constructs in the field. Its investigation would fill a gap ...

  14. How to Write a Research Question in 2024: Types, Steps, and Examples

    The examples of research questions provided in this guide have illustrated what good research questions look like. The key points outlined below should help researchers in the pursuit: The development of a research question is an iterative process that involves continuously updating one's knowledge on the topic and refining ideas at all ...

  15. Generating Good Research Questions

    Yet coming up with good research questions is something that novice researchers often find difficult and stressful. One reason is that this is a creative process that can appear mysterious—even magical—with experienced researchers seeming to pull interesting research questions out of thin air. ... Practice: Generate five research ideas ...

  16. PDF Writing a Good Research Question

    Developing a good research question is one of the first critical steps in the research process. The research question, when appropriately written, will guide the research project and assist in the construction of ...

  18. The Qualities of a Good Research Question

    A good research question is a question that hasn't already been answered, or hasn't been answered completely, or hasn't been answered for your specific context. If the answer to the question is readily available in a good encyclopedia, textbook, or reference book, then it is a homework question, not a research question. It was probably a ...

  19. What Makes a Good Research Question?

    You will be thinking about and studying a particular question for at least a year, so you want it to be something that you are REALLY interested in learning about. This will hold your interest throughout the process. The research question is really the heart of the research process. A good research question will hold your interest and ...

  20. (PDF) How to…write a good research question

    This paper, on writing research questions, is the first in a series that aims to support novice researchers within clinical education, particularly those undertaking their first qualitative study ...

  21. Formulation of Research Question

    Abstract. Formulation of research question (RQ) is an essentiality before starting any research. It aims to explore an existing uncertainty in an area of concern and points to a need for deliberate investigation. It is, therefore, pertinent to formulate a good RQ. The present paper aims to discuss the process of formulation of RQ with stepwise ...

  22. What Makes a Good Research Question?

    Step 1: Choose a topic that interests you. You will be spending quite a bit of time researching, writing, and thinking about this topic, so make sure to choose a topic that actually interests you. It is very difficult to write an effective paper if you do not care about the topic. If you are genuinely interested in the question and invested in ...

  23. Writing Survey Questions

    Writing Survey Questions. Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions.

  24. Are You Asking the Right Questions?

    April 16, 2024. Few leaders have been trained to ask great questions. That might explain why they tend to be good at certain kinds of questions, and less effective at other kinds. Unfortunately ...

  25. Frontiers

    The expansion of science communication (Bubela et al., 2009; Bucchi and Trench, 2014) underscores the increasing importance of understanding what constitutes good science communication (Niemann et al., 2023). This question concerns the public's understanding and engagement with science (Choung et al., 2020; Davies, 2013; Stilgoe et al., 2014). The scholarly discussion has shifted from the ...

  26. 10 Common Interview Questions and How to Answer Them

    1. Tell me about yourself. This warm-up question is your chance to make an impactful first impression. Be prepared to describe yourself in a few sentences. You can mention: Your past experiences and how they relate to the current job. How your most recent job is tied to this new opportunity. Two of your strengths.

  27. Poll: Election interest hits new low in tight Biden-Trump race

    The question dates to the 2008 election cycle. The lowest level of high election interest in the poll during a presidential cycle was in March 2012 — at 59%. But it quickly ticked up in the next ...