
Ref-n-Write: Scientific Research Paper Writing Software

Formulating Strong Research Questions: Examples and Writing Tips


In this blog, we will see how to construct and present the research question in your research paper. We will also look at other components that make up the final paragraph of the introduction section of your paper.

1. What is a research question in a research paper?


The research questions normally express the aims and objectives of your work. A research question pinpoints exactly what it is you want to find out. You can have a single research question or multiple research questions in your paper depending on the complexity of your research. Generally, it is a good idea to keep the number of research questions to fewer than four.

2. Research question examples

Let’s look at some examples of research questions. The research question is normally one of the major components of the final paragraph of the introduction section. We will look at examples of entire final paragraphs of the introduction alongside the research questions to put things into perspective.

2.1. Example #1 (Health sciences research paper)

Here is an example from a health sciences research paper. The passage starts with the research gap. The authors are saying that there is a need for a better understanding of the relationship between social media and mental health. Then, the authors explain the aims of their research and elaborate on what methodology they will be using to achieve their aims. The authors say that they will be using online surveys and face-to-face interviews to collect data to answer their research question. The passage flows very well, and the authors nicely lay out the research gap, the study aims, and the plan of action.

The effects of social media usage on mental health are poorly documented in the literature, as research papers on the topic give contradictory conclusions. The present study aims to improve our understanding of the effects of social media usage on mental health. The data were collected from a variety of age groups over a period of two years in a structured manner. The methods of data collection involved online surveys and face-to-face interviews. [Research gap → Research question → Method summary]

2.2. Example #2 (Hypothesis-driven research paper)

Here is a slightly different variant of the previous example, in which the authors have formulated the research question in the form of a hypothesis. As before, the authors establish the research gap in the first statement. In the next couple of statements, they define a specific hypothesis that they will test in the paper; in this case, the link between social media and mental health. In the final statement, they explain the research methodology they will employ to either support or reject the hypothesis. This is a good example to follow if your research work is hypothesis-driven.

Past research suggests that while social media use is correlated with levels of anxiety and depression, the evidence so far is limited [1-2]. Therefore, building on previous discussion, Hypothesis 1 proposes: The levels of anxiety and depression will be lower among those who use social media platforms less frequently compared to those who use social media more frequently. This hypothesis (H1) is tested in this study through surveys and face-to-face interviews. [Research gap → Research question (hypothesis) → Method summary]

2.3. Example #3 (Computer sciences research paper)

Here is an example from a computer sciences research paper. The authors establish the research gap by saying that there aren’t many papers on the topic of stock price prediction. Then, they explain what they are proposing: a new method called the ‘Hybrid Prediction Method’. They then provide a brief breakdown of the method, explaining that their approach combines multiple existing methods in a structured way to improve the overall prediction accuracy of stock prices.

Only a few papers have addressed the problem of accurately predicting stock prices. In this paper, we propose a method, called the Hybrid Prediction Method, that combines a selection of existing methods in a structured way to improve on the results obtained by using any single method alone. This paper is organized as follows: In Section 2, we introduce the Hybrid Prediction Method. Section 3 presents a number of experiments and results, and these results are discussed in Section 4. Section 5 concludes the paper. [Research gap → Research question → Paper outline]

Finally, they finish off the section by providing the outline of the paper. Please note that providing the paper outline is optional; it depends on your personal preference and journal requirements. This passage follows a typical format you will see in engineering research papers that propose a new method to solve a particular problem.

2.4. Example #4 (Psychology research paper)

Here is an example from a psychology research paper. In the first line, the authors clearly state the research question and the methodology they will use to address it. The authors aim to test the impact of background music on the listener’s ability to remember words. They will address this by performing a series of experiments in which observers are shown words on a computer screen while different types of background music play. They then finish off the section with a very brief summary of the results. This is a good idea because it gives readers a rough idea of what to expect from the rest of the paper.

In two experiments, we tested whether the presence of background music had an effect on memory recall. More precisely, we examined whether the type of music, either classical or pop, had an impact on the ability of people to remember a list of words. Observers viewed a list of words on a computer screen and listened to either classical or pop music in the background. The results of this study indicate significant differences between classical and pop music in terms of their effects on memory recall and cognition. [Research questions → Method summary → Results summary]

3. Frequently Asked Questions

Your research question should align with your research gap and problem statement, and should logically follow from the problem statement and research gap you established in the previous sections of your paper. If your research objectives are misaligned with your problem statement and research gap, reviewers are likely to reject your paper, so make sure they are all tightly aligned with each other.

Look at the first example. It says that we are going to study the impact of social media on young people. This research question is too broad: there is no clear direction, and the study attempts to take on too much.

The research aims to find out the impact of social media on young people. [Bad research question: too broad]

Now, look at the second example. It is much more focused, and we are very specific about our research questions. We are saying that we are attempting to measure the average time spent by 18-24 year-olds on social media, and that we are also trying to understand the exact nature of their interactions on social media. We will use an online questionnaire to answer these questions, choosing participants from England and Scotland. This is a good research question because it clearly defines what you have set out to do and how you plan to achieve it.

The research aims to estimate the average time spent by 18-24 year-olds on social media, and investigate the nature of interactions and conversations they have on social media. We attempt to answer these questions by conducting an online questionnaire survey in England and Scotland. [Good research question: very specific and focused]





Doing Research: A New Researcher’s Guide, pp. 77–103

Crafting the Methods to Test Hypotheses

  • James Hiebert,
  • Jinfa Cai,
  • Stephen Hwang,
  • Anne K. Morris &
  • Charles Hohensee
  • Open Access
  • First Online: 03 December 2022


Part of the book series: Research in Mathematics Education ((RME))

If you have carefully worked through the ideas in the previous chapters, the many questions researchers often ask about what methods to use boil down to one central question: How can I best test my hypotheses? The answers to questions such as “Should I do an ethnography or an experiment?” and “Should I use qualitative data or quantitative data?” are quite clear if you make explicit predictions for what you will find and fully develop rationales for why you made these predictions. Then you need only worry about how to find out in what ways your predictions are right and in what ways they are wrong. There is a lot to know about different research designs and methods because these provide the tools you can use to test your hypotheses. But as you learn these details, keep in mind that they are means to an end, not an end in themselves.


Part I. What Does It Mean to Test Your Hypotheses?

From the beginning, we have talked about formulating and testing hypotheses. We will briefly review relevant points from the first three chapters and then consider some additional issues you will encounter as you craft the methods you will use to test your hypotheses.

In Chap. 1, we proposed a distinction between hypotheses and predictions. Predictions are guesses you make about answers to your research questions; hypotheses are the predictions plus the reasons, or rationales, for your predictions. We tied together predictions and rationales as constituent parts of hypotheses because it is beneficial to keep them connected throughout the process of scientific inquiry. When we talk about testing hypotheses, we mean gathering information (data) to see how close your predictions were to being correct and then assessing the soundness of your rationales. So, testing hypotheses is really a two-step process: (1) comparing predictions with empirical observations or data, and (2) assessing the soundness of the rationales that justified these predictions.

In Chap. 2, we suggested that making predictions and explaining why you made them should happen at the same time. Along with your first guesses about the answers to your research questions, you should write out your explanations for why you think the answers will be accurate. This will be a back-and-forth process because you are likely to revise your predictions as you think through the reasons you are making them. In addition, we suggested asking how you could test your predictions. This often leads to additional revisions in your predictions.

We also noted that, because education is filled with complexities, answers to substantive questions can seldom be predicted with complete accuracy. Consequently, testing predictions does not mean deciding whether or not they were correct but rather determining how you can revise them to make them more accurate. In addition, testing predictions means reexamining your rationales to improve the soundness of your reasoning. In other words, testing predictions involves gathering the kind of information that guides revisions to your hypotheses.

As a final reminder from Chap. 2, we asked you to imagine how you could test your hypotheses. This involves anticipating what information (data) would best show how accurate your predictions were and would inform revisions to your rationales. Imagining the best ways to test hypotheses is essential for moving through the early cycles of scientific inquiry. In this chapter, we extend the process by crafting the actual methods you will use to test your hypotheses.

In Chap. 3, you considered further the multiple cycles of asking questions, articulating your predictions, developing your rationales, imagining testing your predictions and rationales, adjusting your rationales, revising your predictions, and so on. You learned that a significant consequence of repeating this cycle many times is the increasingly clear, justifiable, and complete rationales that turn into the theoretical framework for your study. This comes, in large part, from the clear descriptions of the variables you will attend to and the mechanisms you conjecture are at work. The theoretical framework allows you to imagine with greater confidence, and in more detail, the kind of data you will need to test your hypotheses and how you could collect them.

In this chapter, we will examine many of the issues you must consider as you choose and adapt methods to fit your study. By “methods,” we mean the entire set of procedures you will use, including the basic design of the study, measures for collecting data, and analytic approaches. As in previous chapters, we will focus on issues that are critical for conducting scientific inquiry but often are not sufficiently discussed in more standard methods textbooks. We will also cite sources where you can find more information. For example, the Institute of Education Sciences and the National Science Foundation (2013) jointly developed guidelines for researchers about the different methods that can be used for different types of research. These guidelines are meant to inform researchers who seek funding from these agencies.

Exercise 4.1

Choose a published empirical study that includes clearly stated research questions, explicit hypotheses (predictions about the answers to the research questions plus the rationales for the predictions), and the methods used. Identify the variables studied and describe the mechanisms embedded in the hypotheses that are conjectured to create the predicted answers. Analyze the appropriateness of the methods used to answer the research questions (i.e., test the predictions). Notes: (1) you might have trouble finding a clear statement of the hypotheses; if so, imagine what the researchers had in mind; and (2) although we have not discussed in detail all of the information you might need to complete this exercise, writing out your response in as much detail as possible will prepare you to make sense of this chapter.

Part II. What Are the Best Methods for Your Study?

The best methods for your study are the procedures that give you the richest information about how near your predictions were to the actual findings and how they could be adjusted to be more accurate. Said another way, choose the methods that provide the clearest answers to your research questions. There are many decisions you will need to make about which methods to use, and it is likely that, at a detailed level, there are different combinations of decisions that would be equally effective. So, we will not assume there is a single best combination. Rather, from this point on we will talk about appropriate methods.


Most research questions in education are too complicated to be fully answered by conducting only one study using one set of methods. Different methods offer different perspectives and reveal different aspects of educational phenomena. “Science becomes more certain in its progression if it has the benefits of a wide array of methods and information. Science is not improved by subtracting but by adding methods” (Sechrest et al., 1993, p. 230). You will need to craft one set of methods for your study but be aware that, in the future, other researchers could use another set of methods to test similar hypotheses and report somewhat different findings that would lead to further revisions of the hypotheses. The methods you craft should be aligned with your theoretical framework, as noted earlier, but there are likely to be other sets of methods that are aligned as well.

A useful organizational scheme for crafting your methods divides the process into three phases: choosing the design of your study, developing the measures and procedures for gathering the data, and choosing methods to analyze the data (in order to compare your findings to your predictions). We will not repeat most of what you can find in textbooks on research methods. Rather, we will focus on the issues within each phase of crafting your methods that are often difficult for beginning researchers. In addition, we will identify areas that manuscript reviewers for the Journal for Research in Mathematics Education (JRME) often say are inadequately developed or described. Reviewers’ concerns are based on what they read, so the problems they identify could be with the study itself or the way it is reported. We will deal first with issues of conducting the study and then talk about related issues with communicating your study to others.

Choosing the Design for Your Study

One of the first decisions you need to make is what design you will use. By design we mean the overall strategy you choose to integrate the different components of the study in a coherent and logical way. The design offers guidelines for the sampling procedure, the development of measures, the collection of data, and the analysis of data. Depending on the textbook you consult, there are different classification schemes that identify different designs. One common scheme is to distinguish between experimental, correlational, and descriptive research.

In our view, each design is tailored to explain different features of phenomena. Experiments, as we define them, are tailored to explain changes in phenomena. Correlations are tailored to explain relationships between two or more phenomena. And descriptions are tailored to explain phenomena as they exist. We unpack these ideas in the following discussions.

In education, most experiments take the form of intervention studies. They are conducted to test the effects of an intervention designed to change something (e.g., students’ achievement). If you choose an experimental design, your research questions probably ask whether an intervention will improve certain outcomes. For example: “Will professional development that engages teachers in analyzing videos of teaching help them teach more conceptually? If so, under what conditions does this occur?” There are several good sources to read about designing experiments in education research (e.g., Cook et al., 2002; Gall et al., 2007; Kelly & Lesh, 2000). We will focus our attention on several specific issues.

Many experiments aim to determine if something causes something else. This is another way of saying the aim is to produce change in something and explain why the change occurred. In education, experiments often try to explain whether and why an intervention is “effective,” or whether and why intervention A is more effective than intervention B. Effective usually means the treatment causes or explains the outcomes of interest. If your investigation is situated in an actual classroom or another authentic educational setting, it is usually difficult to claim causal effects. There are many reasons for this, most tied to the complicated nature of educational settings. You should consider the following three issues when designing an experiment.

First, in education, the strict requirements for an experimental design are rarely met. For example, usually students, teachers, schools, and so forth, cannot be randomly assigned to receive one or the other of the interventions that are being compared. In addition, it is almost impossible to double-blind education experiments (that is, to ensure that the participants do not know which treatment they are receiving and that the researchers do not know which participants are receiving which treatment—like in medical drug trials). These design constraints limit your ability to claim causal effects of an intervention because they make it difficult to explain the reasons for the changes. Consequently, many studies that are called experiments are better labeled “quasi-experiments.” See Campbell et al. (1963) and Gopalan et al. (2020) for more details.

Second, even when you are aware of these constraints and consider your study a quasi-experiment, it is still tempting to make causal claims not supported by your findings. Suppose you are testing your prediction that a specially designed five-lesson unit will help students understand adding and subtracting fractions with unlike denominators. Suppose you are fortunate enough to randomly assign many classrooms to your intervention and an equal number to a common textbook unit. Suppose students in the experimental classrooms perform significantly better on a valid measure of understanding fraction addition and subtraction. Can you claim your treatment caused the better outcomes?

Before making this basic causal claim, you should ask yourself, “What, exactly, was the treatment? To what do I attribute better performance?” When implemented in real classrooms, your intervention will have included many (interacting) elements, some of which you might not even be aware of. That is, in practice, the “treatment” may no longer be defined precisely enough to make a strong claim about the effects of the treatment you planned. And, because each classroom operates under different conditions (e.g., different groups of students, different expectations), the aspects of the intervention that really mattered in each classroom might not be apparent. An average effect over all classrooms may mask aspects of the intervention that matter in some classrooms but not others.

Despite the challenges outlined above with making causal claims, it remains important for education researchers to pursue a greater understanding of the causes behind effects. As the National Research Council (2002) says: “An area of research that, for example, does not advance beyond the descriptive phase toward more precise scientific investigation of causal effects and mechanisms for a long period of time is clearly not contributing as much to knowledge as one that builds on prior work and moves toward more complete understanding of the causal structure” (NRC, 2002, p. 101).

Many of the problems with developing convincing explanations for changes and making causal claims have become more visible as researchers find it difficult to replicate findings (Makel & Plucker, 2014; Open Science Collaboration, 2015). And if findings cannot be replicated, it is impossible to accumulate knowledge—a hallmark of scientific inquiry (Campbell, 1961). Even when efforts are made to implement a particular intervention in another setting with as much fidelity as possible, the findings usually look different. The real challenge is to identify the conditions under which the intervention works as it did.

This leads to a third issue. Be sure to consider the nature of data that will best help you establish connections between interventions and outcomes. Quantitative data often are the data of choice because analyses can be applied to detect the probability the outcomes occurred as a consequence of the intervention. This information is important, but it does not, by itself, explain why the connections exist. Along with Maxwell (2004), we recommend that qualitative data also play a role in establishing causation. Qualitative data can provide insights into the mechanisms that are responsible for the connections between interventions and outcomes. Identifying mechanisms that explain changes in outcomes is key to making causal claims. Whereas quantitative data are helpful in showing whether an intervention could have caused particular outcomes, qualitative data can explain how or why this could have occurred.

Beyond Causation

Do the challenges of using experiments mean experimental designs should be avoided? No. There are a number of considerations that can make experimental designs informative. Remember that the overriding purpose of research is to understand what you are studying. We equate this to explaining why what you found might look like it does (see Chaps. 1 and 2). Experiments that simply compare one treatment with another or with “business as usual” do not help you understand what you are studying because the data do not help you explain why the differences occurred. They do not help you refine your predictions and revise your rationales. However, experiments do not need to be conducted simply to determine the winner of two treatments.

If you are conducting an experiment to increase the accuracy of your predictions and the adequacy of your rationales, your research questions will almost certainly ask about the conditions under which your predicted outcomes will occur. Your predictions will likely focus on the ways in which the outcomes are significantly different from before the intervention to after the intervention, and on how the intervention plus the conditions might explain or have caused these changes. Your experiment will be designed to test the effects of these conditions on the outcomes. Testing conditions is a direct way of trying to understand the reasons for the outcomes, to explain why you found what you did. In fact, understanding under what conditions an intervention works or does not work is the essence of scientific inquiry that follows an experimental design.

By providing as much detail as you can in the hypotheses, by making your predictions as precise as possible, you can set boundaries on how and to what you will generalize your findings. Making hypotheses precise often requires including the conditions under which you believe the intervention might work best, the conditions under which your predictions will be true.

Another way of saying this is that you should subject your hypotheses to severe tests. The more precise your predictions, the more severe your tests. Consider a meteorologist predicting, a month in advance, that it will rain in the State of Delaware in April. This is not a precise hypothesis, so the test is not severe. No one would be surprised if the prediction was true. Suppose she predicts it will rain in the city of Newark, Delaware, during the second week in April. The hypothesis is more precise, the test is more severe, and her colleagues will be a bit more interested in her rationale (why she made the prediction). Now suppose she predicts it will rain on the University of Delaware campus on April 16. This is a very precise prediction, the test would be considered very severe, and lots of people will be interested in understanding her rationale (even before April 16).

In education, making precise predictions about the conditions under which a classroom intervention might cause changes in particular learning outcomes and subjecting your predictions to severe tests often requires gathering lots of data at high levels of detail or small grain sizes. Graham Nuthall (2004, 2005) provides a useful analysis of the challenges involved in designing a study with the grain size of data he believes is essential. Your study will probably not be as ambitious as that described by Nuthall (2005), but the lesson is to think carefully about the grain size of data you need to test your (precise) predictions.

Additional Considerations

Although you can find details about experimental designs in several sources, some issues might not be emphasized in these sources even though they deserve attention.

First, if you are comparing the changes that occurred during your intervention to the changes that occurred during a control condition, your interpretation of the effectiveness of your intervention is only as useful as the quality of the control condition. That is, if the control condition is not expected to produce much change, and if your analyses are designed primarily to show statistical differences in outcomes, then your claim about the better effects of your intervention is not very interesting or educationally important.

Second, the size of the changes from before to after the intervention is usually reported using values that describe the probability that the changes would have occurred by chance (statistical significance). But these values are affected by factors other than the size of the change, such as the size of the sample. Recently, journals have started encouraging or requiring researchers to report the size of the changes in more meaningful ways, both in terms of what the statistical result really means and in terms of the educational importance of the changes. “Effect size” is often used for these purposes. See Bakker et al. (2019) for further discussion of effect size and related issues.
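To make the idea of effect size concrete, here is a minimal sketch that computes Cohen's d, one common standardized effect-size measure, using only Python's standard library. The scores are invented for illustration; they are not data from any study discussed here.

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: the difference between two group means divided by the
    pooled standard deviation, so the result is expressed in
    standard-deviation units rather than raw score points."""
    na, nb = len(group_a), len(group_b)
    mean_diff = statistics.mean(group_a) - statistics.mean(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return mean_diff / pooled_sd

# Hypothetical post-test scores for an intervention class and a control class.
treatment = [72, 78, 81, 85, 90]
control = [68, 72, 76, 80, 84]
print(round(cohens_d(treatment, control), 2))  # → 0.79
```

Unlike a p-value, d does not grow or shrink simply because the sample gets larger, which is one reason journals increasingly ask for it alongside significance tests.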

Third, you should consider what “better performance” means when you compare interventions. Did all the students in the experimental classrooms outperform their peers in the control classrooms, or was the better average performance due to some students performing much better to make up for some students performing worse? Do you want to claim the intervention was effective when some students found it less effective than the control condition?

Fourth, you need to consider how fully you can describe the nature of the intervention. Because you want to explain changes in outcomes by referencing aspects of the intervention, you need to describe the intervention in enough detail to provide meaningful explanations. Describing the intervention means describing how it was implemented, not how it was planned. The degree to which the intervention was implemented as planned is sometimes referred to as fidelity of implementation (O’Donnell, 2008). Fidelity of implementation is especially critical when an intervention is implemented by multiple teachers in different contexts.

Based on our experience as an editorial team, there are a few additional considerations you should keep in mind. These considerations concern inadequacies that were often commented on by reviewers, so they are about the research paper and not always about the study itself. But many of them can be traced back to decisions the authors made about their research methods.

Sample is not big enough to conduct the analyses presented. If you are planning to use quantitative methods, we strongly recommend conducting a statistical power analysis. This is a method of determining if your sample is large enough to detect the anticipated effects of an intervention.

Measures used do not appear to assess what the authors claim they assess.

Methods (including coding rubrics) are not described in enough detail. (A good rule of thumb for “enough” is that readers should be able to replicate the study if they wish.)

Methods are different from those expected based on the theoretical framework presented in the paper.
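The statistical power analysis recommended above can be sketched with a standard normal-approximation formula for comparing two group means. The defaults below (alpha of 0.05, power of 0.80) are conventional choices, and the effect size is an assumed value you must supply:

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group needed to detect a given standardized
    effect size in a two-group comparison (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) needs roughly 63 students per group
n = sample_size_per_group(0.5)
```

Exact calculations based on the t distribution give slightly larger samples, and dedicated tools (e.g., G*Power, or the power module in statsmodels) are commonly used in practice; the sketch is only meant to show why small samples cannot detect small effects.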

Special Experimental Designs

Three designs that fit under the general category of experiments are specially crafted to examine the possible reasons for changes observed before and after an intervention. Sometimes, these designs are used to explore the conditions under which changes occur before conducting a larger study. These designs are defined somewhat differently by different researchers. Our goal is to introduce the designs but not to settle the differences in the definitions.

Because these designs include features that fall outside the conventional experiment, researchers face some unique challenges both in conducting and in reporting these studies. One such feature is the repeated implementation of an intervention, with each implementation containing small revisions based on the previous outcomes, in order to improve the intervention during the study. There are no agreed-upon practices for reporting these studies. Should every trial and every small change in outcomes and subsequent interventions be reported? Should all the revised versions of the hypotheses that guided the next trial be reported? Keep these challenges in mind as you consider the following designs.

Teaching Experiments

During the 1980s, mathematics educators began focusing closely on how students changed their thinking during instruction (Cobb & Steffe, 1983 ; Steffe & Thompson, 2000 ). The aim was to describe these changes in considerable detail and to explain how the instructional activities prompted them. Teaching experiments were developed as a design to follow changes in students’ thinking as they received small, well-defined episodes of teaching. In some cases, mapping developmental changes in student thinking was of primary interest; instruction was simply used to induce and accelerate these changes.

Most teaching experiments can be described as a sequence of teaching episodes designed for testing hypotheses about how students learn and reason. A premium is placed on getting to know students well, so the number of students is usually small, and the teacher is the researcher. Predictions are made before each episode about how students’ (often each student’s) thinking will change based on the features of the teaching activity. Data are gathered at a small grain size to test the predictions and revise the hypotheses for the next episode. Researchers often continue cycles of the following activities until they gain the insights they seek: teaching to test hypotheses, collecting data, analyzing data to compare with predictions, revising predictions and rationales, teaching to test the revised hypotheses, and so on.

Design-Based Research

Following the introduction of teaching experiments, the concept was elaborated and expanded into an approach called design-based research (Akker et al., 2006 ; Cobb et al., 2017 ; Collins, 1992 ; Design-Based Research Collaborative, 2003 ; Puntambekar, 2018 ). There are many forms of this research design but most of them are tailored to developing topic-specific instructional theories that can be shared with teachers and educational designers.

Like teaching experiments, design-based research consists of continuous cycles of formulating hypotheses that connect instructional activities with changes in learning, designing the learning environment to test the hypotheses, implementing instruction, gathering and analyzing data on changes in learning, and revising the hypotheses. The grain size of data matches the needs of teachers to make day-to-day instructional decisions. Often, this research is carried out through researcher–teacher partnerships, with researchers focused on developing theories (systematic explanations for changes in students’ learning) and teachers focused on implementing and testing theories. In addition, unlike many teaching experiments, design-based research has the design of instructional products as one of its goals.

These designs initially aimed to develop full explanations or theories of the learning processes through which students developed understanding of a topic, complemented with theories of the instructional activities that support such processes. The design was quickly expanded to study learning situations of all kinds, including, for example, teacher professional development (Gravemeijer & van Eerde, 2009 ).

Other forms of design-based research have also emerged, each with the same basic principles but with different emphases. For example, “Design-Based Implementation Research” (Fishman & Penuel, 2018) focuses on improving the implementation of promising instructional approaches for meeting the needs of diverse students in diverse classrooms. Researcher–teacher partnerships produce adaptations that are scalable and sustainable through cycles of formulating, testing, and revising hypotheses.

Continuous Improvement Research

An approach to research that shares features with design-based research but focuses more directly on improving professional practices is often called continuous improvement, improvement science, or implementation science. This approach has shown considerable promise outside of education in fields such as medicine and industry and could be adapted to educational settings (Bryk et al., 2015 ; Morris & Hiebert, 2011). A special issue of the American Psychologist in 2020 explored the possibilities of implementation science to address the challenge posed in its first sentence, “Reducing the gap between science and practice is the great challenge of our time” (Stirman & Beidas, 2020 , p. 1033).

The cycles of formulating, testing, and revising hypotheses in the continuous improvement model are characterized by four features (Morris & Hiebert, 2011). First, the research problems are drawn from practice because the aim is to improve these practices. Second, the outcome is a concrete product that holds the knowledge gained from the research. For example, an annotated lesson plan could serve as a product of research directed toward improving instructional practice of a particular concept or skill. Third, the interventions test a series of small changes to the product, each built on the previous version, by collecting just enough data to tell whether the change was an improvement. Finally, the research process involves the users as well as the researchers. If the goal is to improve practice, practitioners must be an integral part of the process.

Shared Goals of Useful Education Experiments

All experimental designs that we recommend have two things in common. One is that they try to change something and then study the possible mechanisms for the change and the conditions under which the change occurred. Experimental designs that study the reasons and conditions for a change offer greater understanding of the phenomena they are studying. The noted social psychologist Kurt Lewin said, “If you want truly to understand something, try to change it” (quoted in Tolman et al., 1996 , p. 31). Recall that understanding phenomena was one of the basic descriptors of scientific inquiry we introduced in Chap. 1 .

In our view, a second feature of useful experiments in education is that they formulate, test, and revise hypotheses at a grain size that matches the needs of educators to make decisions that improve the learning opportunities for all students. Often, research questions that motivate useful experiments address instructional problems that teachers face in their classrooms. We will return to these two features in Chap. 5 .

Correlation

Correlational designs investigate and explain the relationship between two or more variables. Researchers who use this design might ask questions like Martha’s: “What is the relationship between how well teachers analyze videos of teaching and how conceptually they teach?”

Notice the difference between this research question and the earlier one posed for an experimental design (“Will professional development that engages teachers in analyzing videos of teaching help them teach more conceptually? If so, under what conditions does this occur?”). In the experimental case, researchers hypothesized that analyzing videos of teaching would cause more conceptual teaching; in the correlational case they are acknowledging they are not ready to make this prediction. However, they believe there is a sufficiently strong rationale (theoretical framework) to predict a relationship between the two. In other words, although predicting that one event causes another cannot be justified, a rationale can be developed for predicting a relationship between the events.

Correlations in Education Are Rarely Simple

When two or more events appear related, the explanation might be quite complicated. It might be that one event causes another, but there are many more possibilities. Recall Martha’s research question: “What are the relationships between learning to analyze videos of teaching in particular ways (specified from prior research) and teaching for conceptual understanding?” Her research question fits a correlational design because she could not develop a clear rationale explaining why one event (learning to analyze videos) should cause changes in another (changes in teaching conceptually).

Martha could imagine three reasons for a relationship: (1) an underlying factor could be responsible for both events varying together (maybe developing more pedagogical content knowledge is the underlying factor that enables teachers to both analyze videos more insightfully and teach more conceptually); (2) there could be a causal relation but in the reverse direction (maybe teachers who already teach quite conceptually build on students’ thinking, which then helps them analyze videos of teaching in particular ways); or (3) analyzing videos well could lead to more conceptual teaching but through a complicated path (maybe analyzing video helps focus teachers’ attention on key learning moments during a lesson which, in turn, helps them plan lessons with these moments in mind which, in turn, shifts their emphasis to engaging students in these moments which, in turn, results in more conceptual instruction).

Simple correlational designs involve investigating and explaining relationships between just two variables. But simple correlations can get complicated quickly. Researchers might, for example, hypothesize the relationship exists only under particular conditions—when other factors are controlled. In these situations, researchers often remove the effect of these variables and investigate the “partial correlations” between the two variables of primary interest. Many sophisticated statistical techniques have been developed for investigating more complicated relationships between multiple variables (e.g., exploratory and confirmatory factor analysis, Gorsuch, 2014 ).
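The idea of a partial correlation can be illustrated with the standard first-order formula: the correlation between x and y after removing the linear influence of a third variable z. The tiny data set below is fabricated to echo Martha’s scenario, in which a single underlying factor (here labeled pedagogical content knowledge) could drive both observed variables:

```python
import math
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def partial_correlation(xs, ys, zs):
    """Correlation between x and y with the linear effect of z removed."""
    rxy, rxz, ryz = pearson(xs, ys), pearson(xs, zs), pearson(ys, zs)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz**2) * (1 - ryz**2))

# Invented scores: video-analysis skill, conceptual teaching, and a
# possible underlying factor (pedagogical content knowledge)
video_analysis = [2, 3, 5, 4, 6, 7]
teaching = [1, 3, 2, 5, 4, 6]
pck = [1, 2, 3, 4, 5, 6]

raw = pearson(video_analysis, teaching)                        # strong positive
controlled = partial_correlation(video_analysis, teaching, pck)
```

In this fabricated example the raw correlation is strongly positive, but it disappears (and even reverses) once the shared factor is partialed out, which is exactly the kind of alternative explanation a correlational researcher like Martha must consider.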

Correlational Designs We Recommend

The correlational designs we recommend are those that involve collecting data to test your predictions about the extent of the relationship between two (or more) variables and assess how well your rationales (theoretical framework) explain why these relationships exist. By predicting the extent of the relationships and formulating rationales for the degree of the relationships, the findings will help you adjust your predictions and revise your rationales.

Because correlations often involve multiple variables, your rationales might have proposed which variables are most important for, or best explain, the relationship. The findings could help you revise your thinking about the roles of different variables in determining the observed relationship.

For example, analyzing videos insightfully could be unpacked into separate variables, such as the nature of the video, the aspects of the video that could be attended to, and the knowledge needed to comment on each aspect. Teaching conceptually could also be unpacked into many individual variables. To explain or understand the predicted relationship, you would need to study which variables are most responsible for the relationship.

Some researchers suggest that correlational designs precede experimental designs (Sloane, 2008 ). The logic is that correlational research can document that relationships exist and can reveal the key variables. This information can enable the development of rationales for why changes in one construct or variable might cause changes in another construct or variable.

Description

In some ways, descriptions are the most basic design. They are tailored to describe a phenomenon and then explain why it exists as it does. If the research questions ask about the status of a situation or about the nature of a phenomenon and there is no interest, at the moment, in trying to change something or to relate one thing with another, then a descriptive design is appropriate. For example, researchers might be interested in describing the ways in which teachers analyze video clips of classroom instruction or in describing the nature of conceptual teaching in a particular school district.

In this type of research, researchers would predict what they expect to find, and rationales would explain why these findings are expected. As an example, consider the case above of researchers describing the ways teachers analyze video clips of classroom instruction. If Martha had access to such a description and an explanation for why teachers analyzed videos in this way, she could have used this information to formulate her hypotheses regarding the relationship between analysis of videos and conceptual teaching (see Chap. 3 ). Based on the literature describing what teachers notice when observing classroom instruction (e.g., Sherin et al., 2001 ) and on the researchers’ experience working with teachers, researchers might predict that many teachers will focus more on specific pedagogical skills of the teacher, such as classroom management and organization, and less on the nature of the content being discussed and the strategies students use to solve problems. If these predictions are partially confirmed, they and their rationales would support Martha’s hypothesis of a growing relationship between analyzing videos and conceptual teaching as teachers move from focusing on pedagogical skills to focusing on the way in which students interact with the content.

In some research programs, descriptive studies logically precede correlation studies (Sloane, 2008 ). Until researchers know they can describe, say, conceptual teaching, there is no point in asking how such teaching relates to other variables (e.g., analyzing videos of teaching) or how to improve the level of conceptual teaching.

As with other designs, there are several types of descriptive studies. We encourage you to read more about the details of each (in, e.g., Miles et al., 2014 ; de Freitas et al., 2017 ).

A case study is usually defined as the in-depth study of a particular instance or of a single unit or case. The instance must be identifiable with clear boundaries and must be sufficiently meaningful to warrant detailed observation, data collection, and analysis. At the outset, you need to describe what the case is a case of. The goal is to understand the case—how it works, what it means, why it looks like it does—within the context in which it functions. To describe conceptual teaching more fully, for example, researchers might investigate a case of one teacher teaching several lessons conceptually.

Some researchers use a case study to show something exists. For example, suppose a researcher notices that students change the way they think about two-dimensional geometric figures after studying three-dimensional objects. The researcher might propose a concept of backward transfer (Hohensee, 2014 ) and design a case study with a small group of students and a targeted set of instructional activities to study this phenomenon in detail. The goal is to determine whether this effect exists and to explain its existence by identifying some of the conditions under which it occurs. Notice that this example also could be considered a “teaching experiment.” There are overlaps between some designs, and the boundaries between them are not always clear.

Ethnography

The term “ethnography” often is used to name a variety of research approaches that provide detailed and comprehensive accounts of educational phenomena. The approaches include participant observation, fieldwork, and even case studies. For a useful example, see Weisner et al. ( 2001 ). See the following for further descriptions of ethnographic research from various perspectives (Atkinson et al., 2007 ; Denzin & Lincoln, 2017 ).

Surveys

Survey designs are used to gather information from groups of participants, often large groups that fit specific criteria (e.g., fourth-grade teachers in Delaware), to learn about their characteristics, opinions, attitudes, and so on. Usually, surveys are conducted by administering a questionnaire, either written or oral. The responses to the questions form the data for the study. See Wolf et al. ( 2016 ) for more complete descriptions of survey methodology.

As with the previous designs, we recommend that each of these designs be used to test predictions about what will be found and assess the soundness of the rationales for these predictions. In all these settings, the goal remains to understand and explain what you are studying.

Developing Measures and Procedures for Gathering Data

This is a critical phase of crafting your methods because your study is only as good as the quality of the data you gather. And the quality of the data is determined by the measures you use. “Measures” means tests, questionnaires, observation instruments, and anything else that generates data. The research methods textbooks and other resources we cited above include lots of detail about this phase. However, we will note a few issues that journal reviewers often raise and that we have found are problematic for beginning researchers.

Craft Measures That Produce Data at an Appropriate Grain Size

A critical step in the scientific inquiry process is comparing the results you find with those you predicted based on your rationales. Thinking ahead about this part of the process (see Chap. 3 ) helps you see that, for this comparison to be useful for revising your hypotheses, the predictions you make must be at the same level of detail, or grain size, as the results. If your predictions are at too general a level, you will not be able to make this comparison in a meaningful way. After making predictions, you must craft measures that generate data at the same grain size as your predictions.

To illustrate, we return to Martha, the doctoral student investigating “What are the relationships between learning to analyze videos of teaching in particular ways (specified from prior research) and teaching for conceptual understanding?” In Chap. 3 , one of Martha’s predictions was: “Of the video analysis skills that will be assessed, the two that will show the strongest relationship are spontaneously describing (1) the mathematics that students are struggling with and (2) useful suggestions for how to improve the conceptual learning opportunities for students.” To test this prediction, Martha will need to craft measures that assess separately different kinds of responses when analyzing the videos. Notice that in her case, the predictions are precise enough to specify the nature and grain size of the data that must be collected (i.e., the measures must yield information on the teachers’ spontaneous descriptions of the mathematics that students are struggling with plus their suggestions for how to improve conceptual learning opportunities for students).

Develop Your Own Measures or Borrow from Others?

When crafting the measures for gathering data, weigh carefully the benefits and costs of designing your own measures versus using measures designed and already used by other researchers.

The benefits of developing your own measures come mostly from targeting your measures to assess exactly what you need so you can test your predictions. Sometimes, creating your own measures is critical for the success of your study.


However, there also are costs to consider. One is convincing others that your measures are both reliable and valid. In general, reliability of a measure refers to how consistently it will yield the same outcomes; validity means how accurately the measure assesses what you say you are measuring (see Gournelos et al., 2019 ). Establishing reliability and validity for new measures can be challenging and expensive in terms of time and resources.

A second cost of creating your own measures is not being able to compare your data to those of other researchers who have studied similar phenomena. Knowledge accumulates as researchers build on the work of others and extend and refine hypotheses. This is partially enabled by comparing results across different studies that have addressed similar research questions. When you formulate hypotheses that extend previous research, it is often natural (and even obvious) to borrow measures that were used in previous studies. Consider Martha’s predictions described in Chap. 3 , one of which is presented above. Because the prediction builds directly on previous work, testing the predictions would almost require Martha to use the same measures used previously.

If you find it necessary to design your own measures, you should ask yourself whether you are reaching too far beyond previous work. Maybe you could tie your work more closely to past research by tweaking your research questions and hypotheses so existing, validated measures are what you need to test your predictions. In other words, use the time when you are crafting measures as a chance to ask whether you are extending previous research in the most productive way. If you decide to keep your original research questions and design new measures, we recommend considering a combination of previously validated measures and your own custom-made measures.

Whichever approach you choose, be sure to describe your measures in enough detail that others can use them if they are studying related phenomena or if they would like to replicate your study. Also, if you use measures developed by others, be sure to credit them.

Using Data that Already Exist

Most educational researchers collect their own data as part of the study. We have written the previous sections assuming this is the case. Is it possible to conduct an important study using data that have been collected by someone else? Yes. But we suggest you consider the following issues if you are planning a study using an existing set of data.

First, we recommend that your study begin with a hypothesis or research question, just like for a study in which you collect your own data. A common warning about choosing research methods is that you should not choose a method (e.g., hierarchical linear modeling) and then look for a research question. Your hypotheses, or research questions, should drive everything else. Similarly for choosing data to analyze. The data should be chosen because they are the best data to test your hypothesis, not because they exist.

Of course, you might be familiar with a data set and wonder what it would tell you about a particular research problem. Even in this case, however, you should formulate a hypothesis that is important on its own merits. It is easy to tell whether this is true by sharing your hypothesis with colleagues who are not aware of the existing data set and asking them to comment on the value of testing the hypothesis. Would a tested and revised hypothesis make a contribution to the field?


A second issue to consider when using existing data is the alignment of the methods used to collect the data and your theoretical framework. Although you didn’t choose the methods, you need to be familiar with the methods that were used and be able to justify the appropriateness of the methods, just as you would with methods you craft. Justifying the appropriateness of methods is another way of saying you need to convince others you are using the best data possible to test your hypotheses. As you read the remaining sections of this chapter, think about what you would need to do if you use existing data. Could you satisfy the same expectations as researchers who are collecting their own data?

Exercise 4.2

There are several large data sets that are available to researchers for secondary analyses, including data from the National Assessment of Educational Progress (NAEP), the Programme for International Student Assessment (PISA), and the Trends in International Mathematics and Science Study (TIMSS). Locate a published empirical study that uses an existing data set and clearly states explicit hypotheses or research questions. How do the authors justify their use of the existing data set to address their hypotheses or research questions? What advantages do you think the authors gained by choosing to use existing data? What constraints do you think that choice placed on them?

Choosing Methods to Analyze Data and Compare with Predictions

As with the first two phases of crafting your methods, there are a number of sources that describe issues to think about when putting together your data analysis strategies (e.g., de Freitas et al., 2017 ; Sloane & Wilkins, 2017 ). Beyond what you will read in these sources, or to emphasize some things you might read, we identify a few issues that you should attend to with extra care.

Create Coding Rubrics

Frequently, research in education involves collecting data in the form of interview responses by participants (students, teachers, teacher educators, etc.) or written responses to tasks, problems, or questionnaires, as well as in other forms that researchers must interpret before conducting analyses. This interpretation process is often referred to as coding data, and coding requires developing a rubric that describes, in detail, how the responses will be coded.

There are two main reasons to create a rubric. First, you must code responses that have the same meaning in the same way. This is sometimes called intracoder reliability: an individual coder is coding similar responses consistently. Second, you must communicate to readers and other researchers exactly how you coded the responses. This helps them interpret your data and make their own decisions about whether your claims are warranted. Recall from Chap. 1 an implication of the third descriptor of scientific inquiry which pointed to the public nature of research: “It is a public practice that occurs in the open and is available for others to see and learn from.”

As you code, you will almost always realize that the initial definitions you created for your codes are insufficient to make borderline judgments, and you will need to revise and elaborate the coding rubric. For example, you might decide to split a code into several codes because you realize that the responses you were coding as similar are not as similar as you initially thought. Or you might decide to combine codes that at first seemed to describe different kinds of responses but you now realize are too hard to distinguish reliably. This process helps you clarify for yourself exactly what your codes mean and what the data are telling you.

Determine Intercoder Reliability

In addition to ensuring that you are coding consistently with yourself, you must make sure others would code the same way if they followed your rubric. Determining intercoder reliability involves training someone else to use your rubric to code the same responses and then comparing codes for agreement. There are several ways to calculate intercoder reliability (see, e.g., Stemler, 2004 ).
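One widely used intercoder-reliability statistic is Cohen’s kappa, which corrects raw percent agreement for the agreement two coders would reach by chance. A minimal sketch, with invented codes for six interview responses:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # Chance agreement: probability both coders pick the same code at random
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two coders to the same six responses
coder_a = ["conceptual", "procedural", "conceptual",
           "management", "conceptual", "procedural"]
coder_b = ["conceptual", "procedural", "management",
           "management", "conceptual", "procedural"]
kappa = cohens_kappa(coder_a, coder_b)
```

Here raw agreement is 5 of 6 responses, but kappa is 0.75 after the chance correction; values of roughly 0.6–0.8 are often treated as acceptable-to-strong agreement, though conventions vary by field.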

There are two main reasons to determine intercoder reliability. First, it is important to convince readers that the rubric holds all the information you used to code the responses. It is easy to use lots of implicit knowledge to code responses, especially if you are familiar with the data (e.g., if you conducted the interviews). Using implicit knowledge to code responses hides from others why you are coding responses as you are. This creates bias that interferes with the principles of scientific inquiry (being open and transparent). Establishing acceptable levels of intercoder reliability shows others that the knowledge made explicit in the rubric is all that was needed to code the responses.

A second reason to determine intercoder reliability is that doing so improves the completeness and specificity of the definitions for the codes. As you compare your coding with that of another coder, you will realize that your definitions were not as clear as you thought. You can learn what needs to be added or revised so the definition is clearer; sometimes this includes examples to help clarify the boundary between one code and another. As you reach sufficient levels of agreement, your rubric will reach its final version. This is the version that you will likely include as an appendix in a written report of your study. It tells the reader what each code means.

Beyond the Three Phases

We have discussed three phases of crafting methods (choosing the design of your study, developing the measures and procedures you need to gather the data, and selecting the analysis procedures to compare your findings with your predictions). There are some issues that cut across all three phases. You will read about some of these in the sources we suggested, but several could benefit from special attention.

Quantitative and Qualitative Data

For some time, educators have debated the value of quantitative versus qualitative data (Hart et al., 2008 ). As the labels suggest, quantitative data refers to data that can be expressed with numbers (frequencies, amounts, etc.). Most of the common statistical analyses require quantitative data. Qualitative data are not naturally expressed in numbers. Coding of qualitative data, as described above, can produce numbers (e.g., frequencies) but the data themselves are often words—written or spoken. Corresponding to these two forms of data, some types of research are referred to as quantitative research and some types as qualitative. As an easy reference point, experimental and correlational designs often foreground quantitative data and descriptive designs often foreground qualitative data. We recommend keeping several things in mind when reading about these two types of research.

First, it is best not to begin developing a study by saying you want to do a quantitative study or a qualitative study. We recommend, as we did earlier, that you begin with questions or hypotheses that are of most interest and then decide whether the methods that will best test your predictions require collecting quantitative or qualitative data.

Second, many hypotheses in education are best examined using both kinds of data. You are not limited to using one or the other. Often, studies that use both are referred to as mixed methods studies. Our guess is that if you are investigating an important hypothesis, your study could take advantage of, and benefit from, mixed methods (Hay, 2016 ; Weis et al. 2019a ). As we noted earlier, different methods offer different perspectives so multiple methods are more likely to tell a more complete story (Sechrest et al., 1993 ). Some useful resources for reading about quantitative, qualitative, and mixed methods are Miles et al. ( 2014 ); de Freitas et al. ( 2017 ); Weis et al. ( 2019b ); Small ( 2011 ); and Sloane and Wilkins ( 2017 ).

Defining a Unit of Analysis

The unit of analysis in your study is the “who” or the “what” that you are analyzing and want to make claims about. There are several ways in which this term is used. Your unit of analysis could be an individual student, a group of students, an individual task, a classroom, and so forth. It is important to understand that, in these cases, your unit of analysis might not be the same as your unit of observation. For example, you might gather data about individual students (unit of observation) but then compare the averages among groups of students, say in classrooms or schools (unit of analysis).
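The student-versus-classroom example above can be sketched as follows; the classroom names and scores are hypothetical, used only to show the aggregation step from units of observation to units of analysis:

```python
from statistics import mean

# Hypothetical data: the unit of observation is the individual student,
# recorded as (classroom, score) pairs.
observations = [
    ("class_a", 72), ("class_a", 85), ("class_a", 90),
    ("class_b", 64), ("class_b", 70), ("class_b", 88),
]

# The unit of analysis is the classroom: student-level observations are
# aggregated into one mean score per classroom before any comparison.
by_classroom = {}
for classroom, score in observations:
    by_classroom.setdefault(classroom, []).append(score)

classroom_means = {c: mean(scores) for c, scores in by_classroom.items()}
print(classroom_means)  # one value per classroom, not per student
```

Notice that after aggregation, each classroom contributes a single number to the analysis, which is exactly why claims from such an analysis are about classrooms rather than individual students.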

Unit of analysis can also refer to what is coded when you analyze qualitative data. For example, when analyzing the transcript of an interview or a classroom lesson, you might want to break up the transcript into segments that focus on different topics, into turns that each speaker takes, into sentences or utterances, or into other chunks. Again, the unit of analysis might not be the same as your unit of observation (the unit in which the data were collected).

We recommend keeping two things in mind when you consider the unit of analysis. First, it is not uncommon to use more than one unit of analysis in a study. For example, when conducting a textbook analysis, you might use “page” as a unit of analysis (i.e., you treat each page as a single, separate object to examine), and you might also use “instructional task” as a unit of analysis (i.e., you treat each instructional task as a single object to examine, whether it takes up less than one page or many pages). Second, when the data collected have a nested nature (e.g., students nested in classrooms nested in schools), it is necessary to determine the most appropriate unit of analysis. Readers can refer to Sloane and Wilkins (2017) for a more detailed discussion of such analyses.

Ensuring Your Methods Are Fair to All Students

Regardless of which methods you use, remember they need to help you fulfill the purpose of your study. Suppose, as we suggested in earlier chapters, the purpose furthers the goal of understanding how educators can improve the learning opportunities for all students. It is worth thinking, separately, about whether the methods you are using are fully inclusive and are not (unintentionally) leading you to draw conclusions that systematically ignore groups of students with specific characteristics—race, ethnicity, gender, sexual orientation, and special education needs.

For example, if you want to investigate the correlation between students’ participation in class and their sense of efficacy for the subject, you need to include students at different levels of achievement, with different demographics, with different entry efficacy levels, and so on. Your hypotheses should make perfectly clear which of the variables that might influence this correlation are included in your design. This issue is also directly related to our concern about generalizability: it would be inappropriate to generalize to populations or conditions that you have not accounted for in your study.

Researchers in education and psychology have also considered methodological approaches to ensure that research does not unfairly marginalize groups of students. For example, researchers have made use of back translation to ensure the translation equivalency of measures when a study involves students using different languages. Jonson and Geisinger ( 2022 ) and Zieky ( 2013 ) discuss ways to help ensure the fairness of educational assessments.

Part III. Crafting the Most Appropriate Methods

With the background developed in the preceding parts of this chapter, we can now consider how to craft the methods you will use. In Chap. 3, we discussed how the theoretical framework you create does lots of work for you: (1) it helps you refine your predictions and backs them up with sound reasons or explanations; (2) it provides the parameters within which you craft your methods by providing clear rationales for some methods but not others; (3) it ensures that you can interpret your results appropriately by comparing them with your predictions; and (4) it describes how your results connect with the prior research you used to build the rationales for your hypotheses. In this part of Chap. 4, we will explore the ways in which your theoretical framework guides, and even determines, the methods you craft for your study.

In Chap. 3, we described a cyclical process that produced the theoretical framework: asking questions, articulating predictions, developing rationales, imagining testing predictions, revising questions, adjusting rationales, revising predictions, and so on. We now extend this process beyond imagining how you could test your predictions.

The best way to craft appropriate methods is to try them out. Instead of only imagining how you could test your predictions, the cyclical process we described in Chap. 3 will be extended to trying out the methods you think you will use. This means trying out the measures you plan to use, the coding rubric (if you are coding data), the ways in which you will collect data, and how you will analyze data. By “try out” we mean a range of activities.

Write Out Your Methods

The first way you should try out your methods is by writing them out for yourself (actually writing them out) and then asking yourself two main questions. First, do the reasons or rationales in the theoretical framework point to using these specific measures, this coding rubric, and so forth? In other words, would anyone who reads your theoretical framework be the least bit surprised that you plan to use these methods? They should not be. In fact, you would expect anyone who read your theoretical framework to choose from the same set of reasonable, appropriate methods. If you plan to use methods for reasons other than those you find in your theoretical framework (perhaps because the framework is silent about this part of your study) or if you are using methods that are different from what would be expected, you probably need to either revise your framework (maybe to fill in some gaps or revise the arguments you make) or change your methods.

A second question to ask yourself after you have written a description of your methods is: “Can I imagine using these methods to generate data I could compare with my predictions?” Are the grain sizes similar? Can you plan how you will compare the data with the predictions? If you are unsure about this, you should consider changing your predictions (and your hypotheses and theoretical rationales) or changing your methods.

As described in Chap. 3 , your writing will serve two purposes. It will help you think through and reflect on your methods, trying them out in your head. And it will also constitute another part of your evolving research paper that you create while you are designing, conducting, and then documenting your research study. Writing is a powerful tool for thinking as well as the most common form of communicating your work to others. So, the writing you do here is not just scratch work that you will discard. It should be a draft for what will become your final research paper. Treat it seriously. That said, it is still just a draft; do not take it so seriously that you find yourself stuck and unable to put words to paper because you are not certain what you are writing is good enough.

The second way you can try out your methods is to solicit feedback and advice from other people. Scientific inquiry is not only an individual process but a social process as well (recall again the third descriptor of scientific inquiry in Chap. 1 ). Doing good scientific inquiry requires the assistance of others. It is impossible to see everything you will need to think about by yourself; you need to present your ideas and get feedback from others. Here are several things to try.

First, if you are a doctoral student, describe your planned methods to your advisor. That is probably already your go-to strategy. If you are a beginning professor, you can seek advice from former and current colleagues.

Second, try out your ideas by making a more formal presentation to an audience of friendly critics (e.g., colleagues). Perhaps you can invite colleagues to a special “seminar” in which you present your study (without the results). Ask for suggestions, maybe about specific issues you are struggling with and about any aspects of your study that could be clarified and even revised. You do not need to have the details of your methods worked out before showing your preliminary plans to your colleagues. If your research questions and initial predictions are clear, getting feedback on your preliminary plans (design, measures, and data analysis) can be very helpful and can prevent wasting time on things you will end up needing to change. We recommend getting feedback earlier rather than later and getting feedback in multiple settings multiple times.

Finally, regardless of your current professional situation, we encourage you to join, or create, a community of learners who interact regularly. Such communities are not only intellectually stimulating but socially supportive.

Exercise 4.3

Ask a few colleagues to spend 45–60 min with you. Present your study as you have imagined it to this point (20 min): Research questions, predictions about the answers, rationales for your predictions (i.e., your theoretical framework), and methods you will use to test your predictions (design, measures, data collection, and data analysis to check your predictions). Ask for their feedback (especially about the methods you will use, but also about any aspect of the planned study). Presenting all this information is challenging but is good practice for thinking about the most critical pieces of your plan and your reasons for them. Use the feedback to revise your plan.

Conduct Pilot Studies

The value of conducting small, repeated, pilot studies cannot be overstated. It is hugely undervalued in most discussions of crafting methods for research studies. Conducting pilot studies is well worth the time and effort. It is probably the best way to try out the methods you think will work.


Pilot studies can be quite small, both in terms of time spent and number of participants. You can keep pilot studies small by using a very small sample of participants or a small sample of your measures. The sample of participants can be participants who are easy to find. Just try to select a small sample that represents the larger sample you plan to use. Then, see if the data you collect are like those you expected and if these data will test your predictions in the way you hoped. If not, you are likely to find that your methods are not aligned well enough with your theoretical framework. Even one pilot study can be very useful and save you tons of time; several follow-up pilots are even better because you can check whether your revisions solved the problem. Do not think of pilot studies as speed bumps that slow your progress but rather as course corrections that help you stay aimed squarely at your goal and save you time in the long run.

Small pilot studies can be conducted for various purposes. Here are a few.

Help Specify Your Predictions

Pilot studies can help you specify your predictions. Sometimes it might be difficult to anticipate the answers to your research questions. Rather than conducting a complete study with little idea of what will happen, it is much more productive to do some preliminary work to help you formulate predictions. If you conduct your study without doing this, you are likely to realize too late that your study could have been much more informative if you had used a different sample of participants, asked different or additional questions during your interviews, used different measures (or tasks) to gather the data, or chosen analyses better suited to the data you collected.

In our view, this is an especially important use of pilot studies because it is our response to the argument we rebutted earlier that research can be productive even if researchers have no idea what to expect and cannot make testable predictions. Throughout this book, we have argued that scientific inquiry requires predictions and rationales, regardless of how weak or uncertain. We have claimed that, if the research is worth doing, it is possible and productive to make predictions. It is hard for us to imagine conducting research that builds on past work yet having no idea what to expect. If a researcher is charting new territory, then pilot studies are essential. Conducting one or more small pilot studies will provide some initial guesses and should trigger some ideas for why these guesses will be correct. As we noted earlier, however, we do not recommend that beginning researchers chart completely new territory.

Improve Your Predictions

Even if you have some predictions, conducting a pilot study or two will tell you whether you are close. The more accurate you are with your predictions for the main study, the more precisely you can revise your predictions after the study and formulate very good explanations for why these new predictions should be accurate.

Refine Your Measures

Pilot studies can be very useful for making sure your measures will produce the kinds of data you need. For example, if your study includes participants who are asked to complete tasks of various kinds, you need to make sure the tasks generate the information you need.

Suppose you ask whether second graders improve their understanding of place value after an instructional intervention. You need to use tasks that help you interpret how well they understand place value before and after the intervention. You might ask two second graders and two third graders to complete your tasks to see if they generate the expected variation in performance and whether this variation can be tied to inferred levels of understanding. Also, ask a few colleagues to interpret the responses and check if they match with your interpretations.

Suppose you want to know whether middle school teachers interact differently with boys and girls about the most challenging problems during math class. Find a lesson or two in the curriculum that includes challenging problems and sit in on these lessons in several teachers’ classrooms. Test whether your observation instrument captures the differences that you think you notice.

Test Your Analytic Procedures

You can use small pilot studies to check if your data analysis procedures will work. This can be extremely useful if your procedures are more than simple quantitative comparisons such as t tests. Suppose you will conduct interviews with teachers and code their responses for particular features or patterns. Conducting two or three interviews and coding them can tell you quickly whether your coding rubric will work. Even more important, coding the interviews will tell you whether the interview questions are the right ones or whether they need to be revised to produce the data you need.
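One quick way to check whether a coding rubric is usable is to have two coders independently code the same pilot interviews and compare their labels. As a minimal sketch (the codes and segments are hypothetical), simple percent agreement can be computed like this:

```python
# Hypothetical pilot check of a coding rubric: two coders independently
# code the same ten interview segments with the rubric's categories.
coder_a = ["A", "B", "B", "A", "C", "A", "B", "C", "A", "A"]
coder_b = ["A", "B", "A", "A", "C", "A", "B", "C", "B", "A"]

# Simple percent agreement: the share of segments with matching codes.
matches = sum(a == b for a, b in zip(coder_a, coder_b))
agreement = matches / len(coder_a)
print(f"Percent agreement: {agreement:.0%}")  # 8 of 10 segments match here
```

Low agreement in a pilot usually signals that the rubric's categories (or the interview questions themselves) need revision before the full data set is coded; for more refined agreement statistics that correct for chance, see Stemler (2004).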

Other Purposes of Pilot Studies

In addition to the purposes we identified above, pilot studies can tell you whether the sample you identified will give you the information you need, whether your measures can be administered in the time you allocated, and whether other details of your data collection and analysis plans work as you hope. In summary, pilot studies allow you to rehearse your methods so you can be sure they will provide a strong test of your predictions.

After you conduct a pilot study, make the revisions needed to the framework or to the methods to ensure you will gather more informative data. Be sure to update your evolving research paper to reflect these changes. Each draft of the paper should match your current reasoning and decisions regarding your study.


Part IV. Writing Your Evolving Research Paper and Revisiting Alignment

We continue here to elaborate our recommendation that you compose drafts of your evolving research paper as you make decisions along the way. It is worth describing several advantages of writing the paper and planning the study in parallel.

Advantages of Writing Your Research Paper While Planning Your Study

One of the major challenges researchers face as they plan and conduct research studies is aligning all parts of the study with a visible and tight logic tying all the parts together. You will find that as you make decisions about your study and write about these decisions, you are faced with this alignment challenge in both settings. Working out the alignment in one setting will help in the other. They reinforce each other. For example, as you write a record of your decisions while you plan your study, you might notice a gap in your logic. You can then fill in the gap, both in the paper and in the plans for the study.

As we have argued, writing is a useful tool for thinking. Writing out your questions and your predictions of the answers helps you decide if the questions are the ones you really want to ask and if your predictions are testable; writing out your rationales for your predictions helps you decide if you have sound reasons for your predictions, and if your theoretical framework is complete and convincing; writing out your theoretical rationales also helps you decide which methods will provide a strong test of your predictions.

Your evolving research paper will become the paper you will use to communicate your study to others. Writing drafts as you make decisions about how to conduct your study and why to conduct it as you did will prevent you from needing to reconstruct the logic you used as you planned each successive phase of your study. In addition, composing the paper as you go ensures that you consider the logic connecting each step to the next one. One of the major complaints reviewers are likely to have is that there is a lack of alignment. By following the processes we have described, you have no choice but to find, in the end, that all parts of the study are connected by an obvious logic.

We noted in Chap. 3 that writing your evolving research paper along with planning and conducting your study does not mean creating a chronology of all the decisions you made along the way. At each point in the process, you should step back and think about how to describe your work in the easiest-to-follow and clearest way for the reader. Usually, readers want to know only about your final decisions and, in many cases, your reasons for making these decisions.

Journal Reviewers’ Common Concerns

The concerns of reviewers provide useful guides for where you need to be especially careful to conduct a well-argued and well-designed study and to write a coherent paper reporting the study. As the editorial team for JRME, we found that one of the most frequent concerns raised by reviewers was that the research questions were not well connected to other parts of the paper. For nearly 30% of all manuscripts sent out for review, reviewers expressed concern that the paper was not coherent because parts of the paper were not connected back to the research questions. This could mean, for example, that reviewers were not clear why or how the methods crafted for the study were appropriate for testing the hypotheses or answering the questions. The lack of clear connections could be due either to choices made while planning and implementing the study or to how the research paper was written. Sometimes the connections exist but have been left implicit in the research report or even in the conceptualization of the study. Conceptualizing a study and writing the research report require making all the connections explicit. As noted above, these disconnects are less likely if you compose the evolving research paper simultaneously with planning and implementing the study.

A further concern raised by many reviewers speaks to alignment and coherence: One or more of the research questions were not answered fully by the study. Although we will deal with this concern further in the next chapter, we believe it is relevant for the choice of methods because if you do not ensure that the methods are appropriate to answer your research questions (i.e., to test your hypotheses), it is likely they will not generate the data you need to answer your questions. In contrast, if you have aligned all parts of your study, you are likely to collect the data you need to answer your questions (i.e., to test and revise your hypotheses).

In summary, there are many reasons to compose your evolving research paper along with planning and conducting your study. As we have noted several times, your paper will not be a chronology of all the back-and-forth cycles you used to refine aspects of your study as you moved to the next phase, but it will be a faithful description of the ultimate decisions you made and your reasons for making them. Consequently, your evolving research paper will gradually build as you describe the following parts and explain the logic connecting them: (1) the purpose of your study, (2) your theoretical framework (i.e., the rationales for your predictions woven into a coherent argument), (3) your research questions plus predictions of the answers (generated directly from your theoretical rationales), (4) the methods you used to test your predictions, (5) the presentation of results, and (6) your interpretation of results (i.e., comparison of predicted results with the results reported plus proposed revisions to hypotheses). We will continue the story by addressing parts 5 and 6 in Chap. 5 .

References

Akker, J., Gravemeijer, K., McKenney, S., & Nieveen, N. (Eds.). (2006). Educational design research. Routledge.

Atkinson, P., Coffey, A., Delamont, S., Lofland, J., & Lofland, L. (2007). Handbook of ethnography. SAGE.

Bakker, A., Cai, J., English, L., Kaiser, G., Mesa, V., & Van Dooren, W. (2019). Beyond small, medium, or large: Points of consideration when interpreting effect sizes. Educational Studies in Mathematics, 102, 1–8.

Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America’s schools can get better at getting better. Harvard University Press.

Campbell, D. T. (1961). The mutual methodological relevance of anthropology and psychology. In F. L. K. Hsu (Ed.), Psychological anthropology: Approaches to culture and personality (pp. 333–352). Dorsey Press.

Campbell, D. T., Stanley, J. C., & Gage, N. L. (1963). Experimental and quasi-experimental designs for research. Houghton.

Cobb, P., & Steffe, L. P. (1983). The constructivist researcher as teacher and model builder. Journal for Research in Mathematics Education, 14(2), 83–94.

Cobb, P., Jackson, K., & Sharpe, C. D. (2017). Conducting design studies to investigate and support mathematics students’ and teachers’ learning. In J. Cai (Ed.), Compendium for research in mathematics education (pp. 208–233). National Council of Teachers of Mathematics.

Collins, A. (1992). Toward a design science of education. In E. Scanlon & T. O’Shea (Eds.), New directions in educational technology. Springer.

Cook, T. D., Campbell, D. T., & Shadish, W. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.

de Freitas, E., Lerman, S., & Parks, A. N. (2017). Qualitative methods. In J. Cai (Ed.), Compendium for research in mathematics education (pp. 159–182). NCTM.

Denzin, N. K., & Lincoln, Y. S. (Eds.). (2017). The SAGE handbook of qualitative research. SAGE.

Design-Based Research Collaborative. (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher, 32(1), 5–8.

Gall, M. D., Gall, J. P., & Borg, W. R. (2007). Educational research: An introduction (8th ed.). Pearson.

Gopalan, M., Rosinger, K., & Ahn, J. B. (2020). Use of quasi-experimental research designs in education research: Growth, promise, and challenges. Review of Research in Education, 44(1), 218–243. https://doi.org/10.3102/0091732X20903302

Gorsuch, R. L. (2014). Factor analysis (Classic 2nd ed.). Routledge.

Gournelos, T., Hammonds, J. R., & Wilson, M. A. (2019). Doing academic research: A practical guide to research methods and analysis. Routledge.

Gravemeijer, K., & van Eerde, D. (2009). Design research as a means for building a knowledge base for teachers and teaching in mathematics education. Elementary School Journal, 109(5), 510–524.

Hart, L. C., Smith, S. Z., Swars, S. L., & Smith, M. E. (2008). An examination of research methods in mathematics education (1995–2005). Journal of Mixed Methods Research, 3(1), 26–41.

Hay, C. M. (Ed.). (2016). Methods that matter: Integrating mixed methods for more effective social science research. University of Chicago Press.

Hohensee, C. (2014). Backward transfer: An investigation of the influence of quadratic functions instruction on students’ prior ways of reasoning about linear functions. Mathematical Thinking and Learning, 16(2), 135–174.

Jonson, J. L., & Geisinger, K. F. (Eds.). (2022). Fairness in educational and psychological testing: Examining theoretical, research, practice, and policy implications of the 2014 standards. American Educational Research Association.

Kelly, A. E., & Lesh, R. A. (Eds.). (2000). Handbook of research design in mathematics and science education. Erlbaum.

Makel, M. C., & Plucker, J. A. (2014). Facts are more important than novelty: Replication in the education sciences. Educational Researcher, 43(6), 304–316.

Maxwell, J. A. (2004). Causal explanation, qualitative research, and scientific inquiry in education. Educational Researcher, 33(2), 3–11.

Miles, M. B., Huberman, A. M., & Saldaña, J. (2014). Qualitative data analysis: A methods sourcebook (4th ed.). SAGE.

National Research Council. (2002). Scientific research in education. National Academy Press.

O’Donnell, C. A. (2008). Defining, conceptualizing, and measuring fidelity of implementation and its relationship to outcomes in K–12 curriculum intervention research. Review of Educational Research, 78, 33–84.

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716

Puntambekar, S. (2018). Design-based research. In F. Fischer, C. E. Hmelo-Silver, S. R. Goldman, & P. Reimann (Eds.), International handbook of the learning sciences (pp. 383–392). Retrieved from http://ebookcentral.proquest.com

Sechrest, L., Babcock, J., & Smith, B. (1993). An invitation to methodological pluralism. Evaluation Practice, 14(3), 227–235.

Sherin, M. G., Jacobs, V. R., & Philipp, R. A. (Eds.). (2001). Mathematics teacher noticing: Seeing through teachers’ eyes. Routledge.

Sloane, F. C. (2008). Randomized trials in mathematics education: Recalibrating the proposed high watermark. Educational Researcher, 37(9), 624–630. https://doi.org/10.3102/0013189X08328879

Sloane, F. C., & Wilkins, J. L. M. (2017). Aligning statistical modeling with theories of learning in mathematics education research. In J. Cai (Ed.), Compendium for research in mathematics education (pp. 183–207). NCTM.

Small, M. L. (2011). How to conduct a mixed methods study: Recent trends in a rapidly growing literature. Annual Review of Sociology, 37, 57–86.

Steffe, L. P., & Thompson, P. W. (2000). Teaching experiment methodology: Underlying principles and essential elements. In A. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathematics and science education (pp. 266–287). Lawrence Erlbaum.

Stemler, S. E. (2004). A comparison of consensus, consistency, and measurement approaches to estimating interrater reliability. Practical Assessment, Research, and Evaluation, 9, Article 4. https://doi.org/10.7275/96jp-xz07

Stirman, S. W., & Beidas, R. S. (2020). Expanding the reach of psychological science through implementation science: Introduction to the special issue. American Psychologist, 75(8), 1033–1037.

Tolman, C. W., Cherry, F., van Hezewijk, R., & Lubek, I. (1996). Problems of theoretical psychology. Ontario, CA.

Weis, L., Eisenhart, M., Duncan, G. J., Albro, E., Bueschel, A. C., Cobb, P., Eccles, J., Mendenhall, R., Moss, P., Penuel, W., Ream, R. K., Rumbaut, R. G., Sloane, F., Weisner, T. S., & Wilson, J. (2019a). Mixed methods for studies that address broad and enduring issues in education research. Teachers College Record, 121, 100307.

Weis, L., Eisenhart, M., Weisner, T. S., Cobb, P., Duncan, G. J., Albro, E., Mendenhall, R., Penuel, W., Moss, P., Ream, R. K., & Rumbaut, R. G. (2019b). Exemplary mixed-methods research studies compiled by the mixed methods working group. Teachers College Record, 121, 100308.

Weisner, T., Ryan, G. W., Reese, L., Kroesen, K., Bernheimer, L., & Gallimore, R. (2001). Behavior sampling and ethnography: Complementary methods for understanding home-school connections among Latino immigrant families. Field Methods, 13(1), 20–46.

Wolf, C., Joye, D., Smith, T. W., & Fu, Y.-C. (2016). The SAGE handbook of survey methodology. SAGE.

Zieky, M. J. (2013). Fairness review in assessment. In K. F. Geisinger, B. A. Bracken, J. F. Carlson, J.-I. C. Hansen, N. R. Kuncel, S. P. Reise, & M. C. Rodriguez (Eds.), APA handbook of testing and assessment in psychology, Vol. 1. Test theory and testing and assessment in industrial and organizational psychology (pp. 293–302). American Psychological Association. https://doi.org/10.1037/14047-017


Author information

Authors and Affiliations

School of Education, University of Delaware, Newark, DE, USA

James Hiebert, Anne K Morris & Charles Hohensee

Department of Mathematical Sciences, University of Delaware, Newark, DE, USA

Jinfa Cai & Stephen Hwang


Rights and permissions

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.


Copyright information

© 2023 The Author(s)

About this chapter

Cite this chapter

Hiebert, J., Cai, J., Hwang, S., Morris, A.K., Hohensee, C. (2023). Crafting the Methods to Test Hypotheses. In: Doing Research: A New Researcher’s Guide. Research in Mathematics Education. Springer, Cham. https://doi.org/10.1007/978-3-031-19078-0_4


DOI: https://doi.org/10.1007/978-3-031-19078-0_4

Published: 03 December 2022

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-19077-3

Online ISBN: 978-3-031-19078-0



J Korean Med Sci. 2022 Apr 25; 37(16).

A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles

Edward Barroga

1 Department of General Education, Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan.

Glafera Janet Matanguihan

2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.

The development of research questions and the subsequent hypotheses are prerequisites to defining the main research purpose and specific objectives of a study. Consequently, these objectives determine the study design and research outcome. The development of research questions is a process based on knowledge of current trends, cutting-edge studies, and technological advances in the research field. Excellent research questions are focused and require a comprehensive literature search and in-depth understanding of the problem being investigated. Initially, research questions may be written as descriptive questions which could be developed into inferential questions. These questions must be specific and concise to provide a clear foundation for developing hypotheses. Hypotheses are more formal predictions about the research outcomes. These specify the possible results that may or may not be expected regarding the relationship between groups. Thus, research questions and hypotheses clarify the main purpose and specific objectives of the study, which in turn dictate the design of the study, its direction, and outcome. Studies developed from good research questions and hypotheses will have trustworthy outcomes with wide-ranging social and health implications.

INTRODUCTION

Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses. 1 , 2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results. 3 , 4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the inception of novel studies and the ethical testing of ideas. 5 , 6

It is crucial to have knowledge of both quantitative and qualitative research 2 as both types of research involve writing research questions and hypotheses. 7 However, these crucial elements of research are sometimes overlooked; if not overlooked, they are framed without the forethought and meticulous attention they need. Planning and careful consideration are needed when developing quantitative or qualitative research, particularly when conceptualizing research questions and hypotheses. 4

There is a continuing need to support researchers in the creation of innovative research questions and hypotheses, as well as for journal articles that carefully review these elements. 1 When research questions and hypotheses are not carefully thought out, unethical studies and poor outcomes usually ensue. Carefully formulated research questions and hypotheses define well-founded objectives, which in turn determine the appropriate design, course, and outcome of the study. This article therefore aims to discuss in detail the various aspects of crafting research questions and hypotheses, with the goal of guiding researchers as they develop their own. Examples from the authors and peer-reviewed scientific articles in the healthcare field are provided to illustrate key points.

DEFINITIONS AND RELATIONSHIP OF RESEARCH QUESTIONS AND HYPOTHESES

A research question is what a study aims to answer after data analysis and interpretation. The answer is written at length in the discussion section of the paper. Thus, the research question gives a preview of the different parts and variables of the study meant to address the problem posed in the research question. 1 An excellent research question clarifies the research writing while facilitating understanding of the research topic, objective, scope, and limitations of the study. 5

On the other hand, a research hypothesis is an educated statement of an expected outcome. This statement is based on background research and current knowledge. 8 , 9 The research hypothesis makes a specific prediction about a new phenomenon 10 or a formal statement on the expected relationship between an independent variable and a dependent variable. 3 , 11 It provides a tentative answer to the research question to be tested or explored. 4

Hypotheses employ reasoning to predict a theory-based outcome. 10 These can also be developed from theories by focusing on components of theories that have not yet been observed. 10 The validity of hypotheses is often based on the testability of the prediction made in a reproducible experiment. 8

Conversely, hypotheses can also be rephrased as research questions. Several hypotheses based on existing theories and knowledge may be needed to answer a research question. Developing ethical research questions and hypotheses creates a research design that has logical relationships among variables. These relationships serve as a solid foundation for the conduct of the study. 4 , 11 Haphazardly constructed research questions can result in poorly formulated hypotheses and improper study designs, leading to unreliable results. Thus, the formulations of relevant research questions and verifiable hypotheses are crucial when beginning research. 12

CHARACTERISTICS OF GOOD RESEARCH QUESTIONS AND HYPOTHESES

Excellent research questions are specific and focused. These integrate collective data and observations to confirm or refute the subsequent hypotheses. Well-constructed hypotheses are based on previous reports and verify the research context. These are realistic, in-depth, sufficiently complex, and reproducible. More importantly, these hypotheses can be addressed and tested. 13

There are several characteristics of well-developed hypotheses. Good hypotheses 1) are empirically testable 7 , 10 , 11 , 13 ; 2) are backed by preliminary evidence 9 ; 3) are testable by ethical research 7 , 9 ; 4) are based on original ideas 9 ; 5) rest on evidence-based logical reasoning 10 ; and 6) make testable predictions. 11 Good hypotheses can infer ethical and positive implications, indicating the presence of a relationship or effect relevant to the research theme. 7 , 11 These are initially developed from a general theory and branch into specific hypotheses by deductive reasoning. In the absence of a theory on which to base the hypotheses, inductive reasoning from specific observations or findings forms more general hypotheses. 10

TYPES OF RESEARCH QUESTIONS AND HYPOTHESES

Research questions and hypotheses are developed according to the type of research, which can be broadly classified into quantitative and qualitative research. We provide a summary of the types of research questions and hypotheses under quantitative and qualitative research categories in Table 1 .

Research questions in quantitative research

In quantitative research, research questions inquire about the relationships among variables being investigated and are usually framed at the start of the study. These are precise and typically linked to the subject population, dependent and independent variables, and research design. 1 Research questions may also attempt to describe the behavior of a population in relation to one or more variables, or describe the characteristics of variables to be measured ( descriptive research questions ). 1 , 5 , 14 These questions may also aim to discover differences between groups within the context of an outcome variable ( comparative research questions ), 1 , 5 , 14 or elucidate trends and interactions among variables ( relationship research questions ). 1 , 5 We provide examples of descriptive, comparative, and relationship research questions in quantitative research in Table 2 .

Hypotheses in quantitative research

In quantitative research, hypotheses predict the expected relationships among variables. 15 Relationships among variables that can be predicted include 1) between a single dependent variable and a single independent variable ( simple hypothesis ) or 2) between two or more independent and dependent variables ( complex hypothesis ). 4 , 11 Hypotheses may also specify the expected direction to be followed and imply an intellectual commitment to a particular outcome ( directional hypothesis ). 4 On the other hand, hypotheses may not predict the exact direction and are used in the absence of a theory, or when findings contradict previous studies ( non-directional hypothesis ). 4 In addition, hypotheses can 1) define interdependency between variables ( associative hypothesis ), 4 2) propose an effect on the dependent variable from manipulation of the independent variable ( causal hypothesis ), 4 3) state the absence of a relationship between two variables ( null hypothesis ), 4 , 11 , 15 4) replace the working hypothesis if it is rejected ( alternative hypothesis ), 15 5) explain the relationship of phenomena to possibly generate a theory ( working hypothesis ), 11 6) involve quantifiable variables that can be tested statistically ( statistical hypothesis ), 11 or 7) express a relationship whose interlinks can be verified logically ( logical hypothesis ). 11 We provide examples of simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses in quantitative research, as well as the definition of quantitative hypothesis-testing research, in Table 3 .

Research questions in qualitative research

Unlike research questions in quantitative research, research questions in qualitative research are usually reviewed and reformulated continuously. The central question and associated subquestions are stated more often than hypotheses. 15 The central question broadly explores a complex set of factors surrounding the central phenomenon, aiming to present the varied perspectives of participants. 15

There are varied goals for which qualitative research questions are developed. These questions can function in several ways, such as to 1) identify and describe existing conditions ( contextual research questions ); 2) describe a phenomenon ( descriptive research questions ); 3) assess the effectiveness of existing methods, protocols, theories, or procedures ( evaluation research questions ); 4) examine a phenomenon or analyze the reasons or relationships between subjects or phenomena ( explanatory research questions ); or 5) focus on unknown aspects of a particular topic ( exploratory research questions ). 5 In addition, some qualitative research questions provide new ideas for the development of theories and actions ( generative research questions ) or advance specific ideologies of a position ( ideological research questions ). 1 Other qualitative research questions may build on a body of existing literature and become working guidelines ( ethnographic research questions ). Research questions may also be broadly stated without specific reference to the existing literature or a typology of questions ( phenomenological research questions ), may be directed towards generating a theory of some process ( grounded theory questions ), or may address a description of the case and the emerging themes ( qualitative case study questions ). 15 We provide examples of contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions in qualitative research in Table 4 , and the definition of qualitative hypothesis-generating research in Table 5 .

Qualitative studies usually pose at least one central research question and several subquestions starting with How or What . These research questions use exploratory verbs such as explore or describe . These also focus on one central phenomenon of interest, and may mention the participants and research site. 15

Hypotheses in qualitative research

Hypotheses in qualitative research are stated in the form of a clear statement concerning the problem to be investigated. Unlike in quantitative research where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes. 2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between both research methods wherein a single mixed-methods research question can be developed. 1

FRAMEWORKS FOR DEVELOPING RESEARCH QUESTIONS AND HYPOTHESES

Research questions followed by hypotheses should be developed before the start of the study. 1 , 12 , 14 It is crucial to develop feasible research questions on a topic that is interesting to both the researcher and the scientific community. This can be achieved by a meticulous review of previous and current studies to establish a novel topic. Specific areas are subsequently focused on to generate ethical research questions. The relevance of the research questions is evaluated in terms of clarity of the resulting data, specificity of the methodology, objectivity of the outcome, depth of the research, and impact of the study. 1 , 5 These aspects constitute the FINER criteria (i.e., Feasible, Interesting, Novel, Ethical, and Relevant). 1 Clarity and effectiveness are achieved if research questions meet the FINER criteria. In addition to the FINER criteria, Ratan et al. described focus, complexity, novelty, feasibility, and measurability for evaluating the effectiveness of research questions. 14

The PICOT and PEO frameworks are also used when developing research questions. 1 The following elements are addressed in these frameworks, PICOT: P-population/patients/problem, I-intervention or indicator being studied, C-comparison group, O-outcome of interest, and T-timeframe of the study; PEO: P-population being studied, E-exposure to preexisting conditions, and O-outcome of interest. 1 Research questions are also considered good if these meet the “FINERMAPS” framework: Feasible, Interesting, Novel, Ethical, Relevant, Manageable, Appropriate, Potential value/publishable, and Systematic. 14
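To make the PICOT structure concrete, here is a minimal sketch in Python. The class, its field names, and the example question are our own illustration, not part of any official PICOT tooling; it simply shows how the five elements compose into a single, answerable question.

```python
from dataclasses import dataclass


@dataclass
class PicotQuestion:
    """One research question broken into the five PICOT elements."""
    population: str    # P - population/patients/problem
    intervention: str  # I - intervention or indicator being studied
    comparison: str    # C - comparison group
    outcome: str       # O - outcome of interest
    timeframe: str     # T - timeframe of the study

    def render(self) -> str:
        # Assemble the elements into a standard question template
        return (f"In {self.population}, does {self.intervention}, "
                f"compared with {self.comparison}, affect {self.outcome} "
                f"over {self.timeframe}?")


# Invented example question, for illustration only
q = PicotQuestion(
    population="adults with type 2 diabetes",
    intervention="a nurse-led telehealth program",
    comparison="standard outpatient care",
    outcome="glycated hemoglobin (HbA1c) levels",
    timeframe="12 months",
)
print(q.render())
```

Filling in each field forces the question to name its population, variables, and timeframe explicitly, which is the point of the framework; a PEO version would carry exposure instead of intervention and comparison.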

As we indicated earlier, research questions and hypotheses that are not carefully formulated result in unethical studies or poor outcomes. To illustrate this, we provide some examples of ambiguous research questions and hypotheses that result in unclear and weak research objectives in quantitative research ( Table 6 ) 16 and qualitative research ( Table 7 ) 17 , and show how to transform these ambiguous research question(s) and hypothesis(es) into clear and good statements.

a These statements were composed for comparison and illustrative purposes only.

b These statements are direct quotes from Higashihara and Horiuchi. 16

a This statement is a direct quote from Shimoda et al. 17

The other statements were composed for comparison and illustrative purposes only.

CONSTRUCTING RESEARCH QUESTIONS AND HYPOTHESES

To construct effective research questions and hypotheses, it is very important to 1) clarify the background and 2) identify the research problem at the outset of the research, within a specific timeframe. 9 Then, 3) review or conduct preliminary research to collect all available knowledge about the possible research questions by studying theories and previous studies. 18 Afterwards, 4) construct research questions to investigate the research problem. Identify variables to be assessed from the research questions 4 and make operational definitions of constructs from the research problem and questions. Thereafter, 5) construct specific deductive or inductive predictions in the form of hypotheses. 4 Finally, 6) state the study aims. This general flow for constructing effective research questions and hypotheses prior to conducting research is shown in Fig. 1 .

[Fig. 1. General flow for constructing effective research questions and hypotheses prior to conducting research.]

Research questions are used more frequently in qualitative research than objectives or hypotheses. 3 These questions seek to discover, understand, explore or describe experiences by asking “What” or “How.” The questions are open-ended to elicit a description rather than to relate variables or compare groups. The questions are continually reviewed, reformulated, and changed during the qualitative study. 3 In quantitative research, by contrast, research questions are used more frequently in survey projects, while hypotheses are used more frequently in experiments, to compare variables and their relationships.

Hypotheses are constructed based on the variables identified and as an if-then statement, following the template, ‘If a specific action is taken, then a certain outcome is expected.’ At this stage, some ideas regarding expectations from the research to be conducted must be drawn. 18 Then, the variables to be manipulated (independent) and influenced (dependent) are defined. 4 Thereafter, the hypothesis is stated and refined, and reproducible data tailored to the hypothesis are identified, collected, and analyzed. 4 The hypotheses must be testable and specific, 18 and should describe the variables and their relationships, the specific group being studied, and the predicted research outcome. 18 Hypothesis construction involves a testable proposition deduced from theory, with independent and dependent variables separated and measured separately. 3 Therefore, good hypotheses must be based on good research questions constructed at the start of a study or trial. 12

In summary, research questions are constructed after establishing the background of the study. Hypotheses are then developed based on the research questions. Thus, it is crucial to have excellent research questions to generate superior hypotheses. In turn, these would determine the research objectives and the design of the study, and ultimately, the outcome of the research. 12 Algorithms for building research questions and hypotheses are shown in Fig. 2 for quantitative research and in Fig. 3 for qualitative research.

[Fig. 2. Algorithm for building research questions and hypotheses in quantitative research.]

EXAMPLES OF RESEARCH QUESTIONS FROM PUBLISHED ARTICLES

  • EXAMPLE 1. Descriptive research question (quantitative research)
  • - Presents research variables to be assessed (distinct phenotypes and subphenotypes)
  • “BACKGROUND: Since COVID-19 was identified, its clinical and biological heterogeneity has been recognized. Identifying COVID-19 phenotypes might help guide basic, clinical, and translational research efforts.
  • RESEARCH QUESTION: Does the clinical spectrum of patients with COVID-19 contain distinct phenotypes and subphenotypes? ” 19
  • EXAMPLE 2. Relationship research question (quantitative research)
  • - Shows interactions between dependent variable (static postural control) and independent variable (peripheral visual field loss)
  • “Background: Integration of visual, vestibular, and proprioceptive sensations contributes to postural control. People with peripheral visual field loss have serious postural instability. However, the directional specificity of postural stability and sensory reweighting caused by gradual peripheral visual field loss remain unclear.
  • Research question: What are the effects of peripheral visual field loss on static postural control ?” 20
  • EXAMPLE 3. Comparative research question (quantitative research)
  • - Clarifies the difference among groups with an outcome variable (patients enrolled in COMPERA with moderate PH or severe PH in COPD) and another group without the outcome variable (patients with idiopathic pulmonary arterial hypertension (IPAH))
  • “BACKGROUND: Pulmonary hypertension (PH) in COPD is a poorly investigated clinical condition.
  • RESEARCH QUESTION: Which factors determine the outcome of PH in COPD?
  • STUDY DESIGN AND METHODS: We analyzed the characteristics and outcome of patients enrolled in the Comparative, Prospective Registry of Newly Initiated Therapies for Pulmonary Hypertension (COMPERA) with moderate or severe PH in COPD as defined during the 6th PH World Symposium who received medical therapy for PH and compared them with patients with idiopathic pulmonary arterial hypertension (IPAH) .” 21
  • EXAMPLE 4. Exploratory research question (qualitative research)
  • - Explores areas that have not been fully investigated (perspectives of families and children who receive care in clinic-based child obesity treatment) to have a deeper understanding of the research problem
  • “Problem: Interventions for children with obesity lead to only modest improvements in BMI and long-term outcomes, and data are limited on the perspectives of families of children with obesity in clinic-based treatment. This scoping review seeks to answer the question: What is known about the perspectives of families and children who receive care in clinic-based child obesity treatment? This review aims to explore the scope of perspectives reported by families of children with obesity who have received individualized outpatient clinic-based obesity treatment.” 22
  • EXAMPLE 5. Relationship research question (quantitative research)
  • - Defines interactions between dependent variable (use of ankle strategies) and independent variable (changes in muscle tone)
  • “Background: To maintain an upright standing posture against external disturbances, the human body mainly employs two types of postural control strategies: “ankle strategy” and “hip strategy.” While it has been reported that the magnitude of the disturbance alters the use of postural control strategies, it has not been elucidated how the level of muscle tone, one of the crucial parameters of bodily function, determines the use of each strategy. We have previously confirmed using forward dynamics simulations of human musculoskeletal models that an increased muscle tone promotes the use of ankle strategies. The objective of the present study was to experimentally evaluate a hypothesis: an increased muscle tone promotes the use of ankle strategies. Research question: Do changes in the muscle tone affect the use of ankle strategies ?” 23

EXAMPLES OF HYPOTHESES IN PUBLISHED ARTICLES

  • EXAMPLE 1. Working hypothesis (quantitative research)
  • - A hypothesis that is initially accepted for further research to produce a feasible theory
  • “As fever may have benefit in shortening the duration of viral illness, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response when taken during the early stages of COVID-19 illness .” 24
  • “In conclusion, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response . The difference in perceived safety of these agents in COVID-19 illness could be related to the more potent efficacy to reduce fever with ibuprofen compared to acetaminophen. Compelling data on the benefit of fever warrant further research and review to determine when to treat or withhold ibuprofen for early stage fever for COVID-19 and other related viral illnesses .” 24
  • EXAMPLE 2. Exploratory hypothesis (qualitative research)
  • - Explores particular areas deeper to clarify subjective experience and develop a formal hypothesis potentially testable in a future quantitative approach
  • “We hypothesized that when thinking about a past experience of help-seeking, a self distancing prompt would cause increased help-seeking intentions and more favorable help-seeking outcome expectations .” 25
  • “Conclusion
  • Although a priori hypotheses were not supported, further research is warranted as results indicate the potential for using self-distancing approaches to increasing help-seeking among some people with depressive symptomatology.” 25
  • EXAMPLE 3. Hypothesis-generating research to establish a framework for hypothesis testing (qualitative research)
  • “We hypothesize that compassionate care is beneficial for patients (better outcomes), healthcare systems and payers (lower costs), and healthcare providers (lower burnout). ” 26
  • Compassionomics is the branch of knowledge and scientific study of the effects of compassionate healthcare. Our main hypotheses are that compassionate healthcare is beneficial for (1) patients, by improving clinical outcomes, (2) healthcare systems and payers, by supporting financial sustainability, and (3) HCPs, by lowering burnout and promoting resilience and well-being. The purpose of this paper is to establish a scientific framework for testing the hypotheses above . If these hypotheses are confirmed through rigorous research, compassionomics will belong in the science of evidence-based medicine, with major implications for all healthcare domains.” 26
  • EXAMPLE 4. Statistical hypothesis (quantitative research)
  • - An assumption is made about the relationship among several population characteristics ( gender differences in sociodemographic and clinical characteristics of adults with ADHD ). Validity is tested by statistical experiment or analysis ( chi-squared test, Student's t-test, and logistic regression analysis)
  • “Our research investigated gender differences in sociodemographic and clinical characteristics of adults with ADHD in a Japanese clinical sample. Due to unique Japanese cultural ideals and expectations of women's behavior that are in opposition to ADHD symptoms, we hypothesized that women with ADHD experience more difficulties and present more dysfunctions than men . We tested the following hypotheses: first, women with ADHD have more comorbidities than men with ADHD; second, women with ADHD experience more social hardships than men, such as having less full-time employment and being more likely to be divorced.” 27
  • “Statistical Analysis
  • ( text omitted ) Between-gender comparisons were made using the chi-squared test for categorical variables and Student's t-test for continuous variables…( text omitted ). A logistic regression analysis was performed for employment status, marital status, and comorbidity to evaluate the independent effects of gender on these dependent variables.” 27
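The chi-squared comparison described in Example 4 can be sketched in a few lines of plain Python. The 2×2 contingency table below is invented for illustration only and does not reproduce the cited study's data; a real analysis would use a statistics library rather than a hand-coded critical value.

```python
# Hypothetical 2x2 contingency table (invented data, not from the cited
# study): rows = gender (women, men), cols = employment (full-time, other)
observed = [[18, 42], [35, 25]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Expected counts under the null hypothesis of no association
expected = [[r * c / grand_total for c in col_totals] for r in row_totals]

# Chi-squared statistic: sum over cells of (observed - expected)^2 / expected
chi2 = sum(
    (observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
    for i in range(2) for j in range(2)
)

# Critical value of the chi-squared distribution, 1 degree of freedom,
# alpha = 0.05; exceeding it rejects the null hypothesis of no difference
CRITICAL_1DF_05 = 3.841
reject_null = chi2 > CRITICAL_1DF_05
print(f"chi2 = {chi2:.2f}, reject null: {reject_null}")
```

The logic mirrors the article's point: the statistical hypothesis (here, a gender difference in employment status) is stated first, and the test then decides whether the observed counts deviate from the no-difference expectation enough to reject the null.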

EXAMPLES OF HYPOTHESIS AS WRITTEN IN PUBLISHED ARTICLES IN RELATION TO OTHER PARTS

  • EXAMPLE 1. Background, hypotheses, and aims are provided
  • “Pregnant women need skilled care during pregnancy and childbirth, but that skilled care is often delayed in some countries …( text omitted ). The focused antenatal care (FANC) model of WHO recommends that nurses provide information or counseling to all pregnant women …( text omitted ). Job aids are visual support materials that provide the right kind of information using graphics and words in a simple and yet effective manner. When nurses are not highly trained or have many work details to attend to, these job aids can serve as a content reminder for the nurses and can be used for educating their patients (Jennings, Yebadokpo, Affo, & Agbogbe, 2010) ( text omitted ). Importantly, additional evidence is needed to confirm how job aids can further improve the quality of ANC counseling by health workers in maternal care …( text omitted )” 28
  • “ This has led us to hypothesize that the quality of ANC counseling would be better if supported by job aids. Consequently, a better quality of ANC counseling is expected to produce higher levels of awareness concerning the danger signs of pregnancy and a more favorable impression of the caring behavior of nurses .” 28
  • “This study aimed to examine the differences in the responses of pregnant women to a job aid-supported intervention during ANC visit in terms of 1) their understanding of the danger signs of pregnancy and 2) their impression of the caring behaviors of nurses to pregnant women in rural Tanzania.” 28
  • EXAMPLE 2. Background, hypotheses, and aims are provided
  • “We conducted a two-arm randomized controlled trial (RCT) to evaluate and compare changes in salivary cortisol and oxytocin levels of first-time pregnant women between experimental and control groups. The women in the experimental group touched and held an infant for 30 min (experimental intervention protocol), whereas those in the control group watched a DVD movie of an infant (control intervention protocol). The primary outcome was salivary cortisol level and the secondary outcome was salivary oxytocin level.” 29
  • “ We hypothesize that at 30 min after touching and holding an infant, the salivary cortisol level will significantly decrease and the salivary oxytocin level will increase in the experimental group compared with the control group .” 29
  • EXAMPLE 3. Background, aim, and hypothesis are provided
  • “In countries where the maternal mortality ratio remains high, antenatal education to increase Birth Preparedness and Complication Readiness (BPCR) is considered one of the top priorities [1]. BPCR includes birth plans during the antenatal period, such as the birthplace, birth attendant, transportation, health facility for complications, expenses, and birth materials, as well as family coordination to achieve such birth plans. In Tanzania, although increasing, only about half of all pregnant women attend an antenatal clinic more than four times [4]. Moreover, the information provided during antenatal care (ANC) is insufficient. In the resource-poor settings, antenatal group education is a potential approach because of the limited time for individual counseling at antenatal clinics.” 30
  • “This study aimed to evaluate an antenatal group education program among pregnant women and their families with respect to birth-preparedness and maternal and infant outcomes in rural villages of Tanzania.” 30
  • “ The study hypothesis was if Tanzanian pregnant women and their families received a family-oriented antenatal group education, they would (1) have a higher level of BPCR, (2) attend antenatal clinic four or more times, (3) give birth in a health facility, (4) have less complications of women at birth, and (5) have less complications and deaths of infants than those who did not receive the education .” 30

Research questions and hypotheses are crucial components to any type of research, whether quantitative or qualitative. These questions should be developed at the very beginning of the study. Excellent research questions lead to superior hypotheses, which, like a compass, set the direction of research, and can often determine the successful conduct of the study. Many research studies have floundered because the development of research questions and subsequent hypotheses was not given the thought and meticulous attention needed. The development of research questions and hypotheses is an iterative process based on extensive knowledge of the literature and insightful grasp of the knowledge gap. Focused, concise, and specific research questions provide a strong foundation for constructing hypotheses which serve as formal predictions about the research outcomes. Research questions and hypotheses are crucial elements of research that should not be overlooked. They should be carefully thought of and constructed when planning research. This avoids unethical studies and poor outcomes by defining well-founded objectives that determine the design, course, and outcome of the study.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Barroga E, Matanguihan GJ.
  • Methodology: Barroga E, Matanguihan GJ.
  • Writing - original draft: Barroga E, Matanguihan GJ.
  • Writing - review & editing: Barroga E, Matanguihan GJ.

Grad Coach

Research Aims, Objectives & Questions

The “Golden Thread” Explained Simply (+ Examples)

By: David Phair (PhD) and Alexandra Shaeffer (PhD) | June 2022

The research aims, objectives and research questions (collectively called the “golden thread”) are arguably the most important thing you need to get right when you’re crafting a research proposal, dissertation or thesis. We receive questions almost every day about this “holy trinity” of research and there’s certainly a lot of confusion out there, so we’ve crafted this post to help you navigate your way through the fog.

Overview: The Golden Thread

  • What is the golden thread
  • What are research aims (examples)
  • What are research objectives (examples)
  • What are research questions (examples)
  • The importance of alignment in the golden thread

What is the “golden thread”?  

The golden thread simply refers to the collective research aims, research objectives, and research questions for any given project (i.e., a dissertation, thesis, or research paper). These three elements are bundled together because it’s extremely important that they align with each other, and that the entire research project aligns with them.

Importantly, the golden thread needs to weave its way through the entirety of any research project , from start to end. In other words, it needs to be very clearly defined right at the beginning of the project (the topic ideation and proposal stage) and it needs to inform almost every decision throughout the rest of the project. For example, your research design and methodology will be heavily influenced by the golden thread (we’ll explain this in more detail later), as well as your literature review.

The research aims, objectives and research questions (the golden thread) define the focus and scope (the delimitations) of your research project. In other words, they help ringfence your dissertation or thesis to a relatively narrow domain, so that you can “go deep” and really dig into a specific problem or opportunity. They also help keep you on track, as they act as a litmus test for relevance. In other words, if you’re ever unsure whether to include something in your document, simply ask yourself the question, “does this contribute toward my research aims, objectives or questions?”. If it doesn’t, chances are you can drop it.

Alright, enough of the fluffy, conceptual stuff. Let’s get down to business and look at what exactly the research aims, objectives and questions are and outline a few examples to bring these concepts to life.

Free Webinar: How To Find A Dissertation Research Topic

Research Aims: What are they?

Simply put, the research aim(s) is a statement that reflects the broad overarching goal(s) of the research project. Research aims are fairly high-level (low resolution) as they outline the general direction of the research and what it’s trying to achieve.

Research Aims: Examples  

True to the name, research aims usually start with the wording “this research aims to…”, “this research seeks to…”, and so on. For example:

  • “This research aims to explore employee experiences of digital transformation in retail HR.”
  • “This study sets out to assess the interaction between student support and self-care on well-being in engineering graduate students.”

As you can see, these research aims provide a high-level description of what the study is about and what it seeks to achieve. They’re not hyper-specific or action-oriented, but they’re clear about what the study’s focus is and what is being investigated.


Research Objectives: What are they?

The research objectives take the research aims and make them more practical and actionable. In other words, the research objectives showcase the steps that the researcher will take to achieve the research aims.

The research objectives need to be far more specific (higher resolution) and actionable than the research aims. In fact, it’s always a good idea to craft your research objectives using the “SMART” criteria: they should be specific, measurable, achievable, relevant and time-bound.

Research Objectives: Examples  

Let’s look at two examples of research objectives. We’ll stick with the topic and research aims we mentioned previously.  

For the digital transformation topic:

  • To observe the retail HR employees throughout the digital transformation.
  • To assess employee perceptions of digital transformation in retail HR.
  • To identify the barriers and facilitators of digital transformation in retail HR.

And for the student wellness topic:

  • To determine whether student self-care predicts the well-being score of engineering graduate students.
  • To determine whether student support predicts the well-being score of engineering students.
  • To assess the interaction between student self-care and student support when predicting well-being in engineering graduate students.

As you can see, these research objectives clearly align with the previously mentioned research aims and effectively translate the low-resolution aims into (comparatively) higher-resolution objectives and action points. They give the research project a clear focus and present something that resembles a research-based “to-do” list.

The research objectives detail the specific steps that you, as the researcher, will take to achieve the research aims you laid out.

Research Questions: What are they?

Finally, we arrive at the all-important research questions. The research questions are, as the name suggests, the key questions that your study will seek to answer . Simply put, they are the core purpose of your dissertation, thesis, or research project. You’ll present them at the beginning of your document (either in the introduction chapter or literature review chapter) and you’ll answer them at the end of your document (typically in the discussion and conclusion chapters).  

The research questions will be the driving force throughout the research process. For example, in the literature review chapter, you’ll assess the relevance of any given resource based on whether it helps you move towards answering your research questions. Similarly, your methodology and research design will be heavily influenced by the nature of your research questions. For instance, research questions that are exploratory in nature will usually make use of a qualitative approach, whereas questions that relate to measurement or relationship testing will make use of a quantitative approach.  

Let’s look at some examples of research questions to make this more tangible.

Research Questions: Examples  

Again, we’ll stick with the research aims and research objectives we mentioned previously.  

For the digital transformation topic (which would be qualitative in nature):

  • How do employees perceive digital transformation in retail HR?
  • What are the barriers and facilitators of digital transformation in retail HR?

And for the student wellness topic (which would be quantitative in nature):

  • Does student self-care predict the well-being scores of engineering graduate students?
  • Does student support predict the well-being scores of engineering students?
  • Do student self-care and student support interact when predicting well-being in engineering graduate students?

You’ll probably notice that there’s quite a formulaic approach to this. In other words, the research questions are basically the research objectives “converted” into question format. While that is true most of the time, it’s not always the case. For example, the first research objective for the digital transformation topic was more or less a step on the path toward the other objectives, and as such, it didn’t warrant its own research question.  

So, don’t rush your research questions and sloppily reword your objectives as questions. Carefully think about what exactly you’re trying to achieve (i.e. your research aim) and the objectives you’ve set out, then craft a set of well-aligned research questions . Also, keep in mind that this can be a somewhat iterative process , where you go back and tweak research objectives and aims to ensure tight alignment throughout the golden thread.

The importance of strong alignment 

Alignment is the keyword here and we have to stress its importance. Simply put, you need to make sure that there is a very tight alignment between all three pieces of the golden thread. If your research aims and research questions don’t align, for example, your project will be pulling in different directions and will lack focus. This is a common problem students face and can cause many headaches (and tears), so be warned.

Take the time to carefully craft your research aims, objectives and research questions before you run off down the research path. Ideally, get your research supervisor/advisor to review and comment on your golden thread before you invest significant time into your project, and certainly before you start collecting data .  

Recap: The golden thread

In this post, we unpacked the golden thread of research, consisting of the research aims, research objectives and research questions.



3.1 - The Research Questions

In this lesson, we are concerned with answering two different types of research questions. Our goal here — and throughout the practice of statistics — is to translate the research questions into reasonable statistical procedures.

Let's take a look at examples of the two types of research questions we learn how to answer in this lesson:

1. What is the mean weight, \(\mu\), of all American women, aged 18-24?

If we wanted to estimate \(\mu\), what would be a good estimate? It seems reasonable to calculate a confidence interval for \(\mu\) using \(\bar{y}\), the average weight of a random sample of American women, aged 18-24.

2. What is the weight, \(y\), of an individual American woman, aged 18-24?

If we want to predict \(y\), what would be a good prediction? It seems reasonable to calculate a "prediction interval" for \(y\) using, again, \(\bar{y}\), the average weight of a random sample of American women, aged 18-24.
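The two interval ideas above can be sketched in plain Python. This is an illustrative sketch only: the sample below is invented (the lesson's actual data are not reproduced here), chosen so its mean matches the 158.8 pounds quoted later in the lesson, and the t quantile for 9 degrees of freedom is hardcoded.

```python
import math
import statistics

# Hypothetical sample of n = 10 weights (pounds); invented for illustration,
# chosen so the sample mean equals 158.8, the value used in the lesson.
weights = [140, 155, 148, 172, 160, 151, 166, 158, 175, 163]

n = len(weights)
ybar = statistics.mean(weights)   # sample mean, an estimate of mu
s = statistics.stdev(weights)     # sample standard deviation
t = 2.262                         # 97.5th percentile of the t distribution, df = 9

# 95% confidence interval for the mean weight mu of all women in the population
ci = (ybar - t * s / math.sqrt(n), ybar + t * s / math.sqrt(n))

# 95% prediction interval for the weight y of one new individual woman;
# the extra "1 +" under the square root makes it wider than the CI
pi = (ybar - t * s * math.sqrt(1 + 1 / n), ybar + t * s * math.sqrt(1 + 1 / n))
```

The prediction interval is always wider than the confidence interval: predicting one individual's value carries more uncertainty than estimating a mean.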

A person's weight is, of course, highly associated with the person's height. In answering each of the above questions, we likely could do better by taking into account a person's height. That's where an estimated regression equation becomes useful.

Here are some weight and height data from a sample of n = 10 people (Student Height and Weight data):

[Table: heights and weights of the 10 sampled students]

If we used the average weight of the 10 people in the sample to estimate \(\mu\), we would claim that the average weight of all American women aged 18-24 is 158.8 pounds regardless of the height of the women. Similarly, if we used the average weight of the 10 people in the sample to predict y , we would claim that the weight of an individual American woman aged 18-24 is 158.8 pounds regardless of the woman's height.

On the other hand, if we used the estimated regression equation to estimate \(\mu\), we would claim that the average weight of all American women aged 18-24 who are only 64 inches tall is -266.5 + 6.1(64) = 123.9 pounds. Similarly, we would predict that the weight y of an individual American woman aged 18-24 who is only 64 inches tall is 123.9 pounds. This example makes it clear that we get significantly different (and better!) answers to our research questions when we take into account a person's height.
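As a quick check on the arithmetic, the estimated regression equation above can be evaluated directly. This is a minimal sketch; the coefficients -266.5 and 6.1 are the ones quoted in the lesson.

```python
def predicted_weight(height_inches):
    """Estimated regression equation from the lesson: weight = -266.5 + 6.1 * height."""
    return -266.5 + 6.1 * height_inches

# For a 64-inch-tall woman, the estimate of the mean weight and the
# prediction of an individual weight are the same point value:
print(predicted_weight(64))  # about 123.9 pounds
```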

Let's make it clear that it is one thing to estimate \(\mu_{Y}\) and yet another thing to predict y . (Note that we subscript \(\mu\) with Y to make it clear that we are talking about the mean of the response Y not the mean of the predictor x .)

Let's return to our example in which we consider the potential relationship between the predictor "high school GPA" and the response "college entrance test score."

[Figure: scatterplot of college entrance test score versus high school GPA]

For this example, we could ask two different research questions concerning the response:

  • What is the mean college entrance test score for the subpopulation of students whose high school GPA is 3? (Answering this question entails estimating the mean response \(\mu_{Y}\) when x = 3.)
  • What college entrance test score can we predict for a student whose high school GPA is 3? (Answering this question entails predicting the response \(y_{\text{new}}\) when x = 3.)

The two research questions can be asked more generally:

  • What is the mean response \(\mu_{Y}\) when the predictor value is \(x_{h}\)?
  • What value will a new response \(y_{\text{new}}\) be when the predictor value is \(x_{h}\)?

Let's take a look at one more example, namely, the one concerning the relationship between the response "skin cancer mortality" and the predictor "latitude" ( Skin Cancer data ). Again, we could ask two different research questions concerning the response:

  • What is the expected (mean) mortality rate due to skin cancer for all locations at 40 degrees north latitude?
  • What is the predicted mortality rate for one individual location at 40 degrees north, say at Chambersburg, Pennsylvania?

At some level, answering these two research questions is straightforward. Both just involve using the estimated regression equation:

[Figure: skin cancer mortality versus latitude, with the fitted regression line evaluated at 40 degrees north]

That is, \(\hat{y}_h=b_0+b_1x_h\) is the best answer to each research question. It is the best guess of the mean response at \(x_{h}\), and it is the best guess of a new response at \(x_{h}\):

  • Our best estimate of the mean mortality rate due to skin cancer for all locations at 40 degrees north latitude is 389.19 - 5.97764(40) = 150 deaths per 10 million people.
  • Our best prediction of the mortality rate due to skin cancer in Chambersburg, Pennsylvania is 389.19 - 5.97764(40) = 150 deaths per 10 million people.

The problem with the answers to our two research questions is that we'd have obtained a completely different answer if we had selected a different random sample of data. As always, to be confident in the answer to our research questions, we should put an interval around our best guesses. We learn how to do this in the next two sections. That is, we first learn a "confidence interval for \(\mu_Y\)" and then a "prediction interval for \(y_{\text{new}}\)".
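The two interval types just described can be sketched with the standard simple-linear-regression formulas. The (x, y) data below are invented for illustration (the lesson's skin cancer data are not reproduced here), and the t quantile for 6 degrees of freedom is hardcoded.

```python
import math

# Hypothetical (latitude, mortality) pairs; invented for illustration only
xs = [30, 33, 35, 38, 40, 42, 45, 47]
ys = [210, 195, 190, 170, 150, 145, 130, 120]

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))

b1 = sxy / sxx            # least-squares slope
b0 = ybar - b1 * xbar     # least-squares intercept

sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
s = math.sqrt(sse / (n - 2))   # residual standard error

t = 2.447                 # 97.5th percentile of the t distribution, df = n - 2 = 6
x_h = 40                  # the predictor value of interest
y_hat = b0 + b1 * x_h     # the single best guess for both research questions

# Confidence interval for the mean response mu_Y at x_h
se_mean = s * math.sqrt(1 / n + (x_h - xbar) ** 2 / sxx)
ci = (y_hat - t * se_mean, y_hat + t * se_mean)

# Prediction interval for a new response y_new at x_h (note the extra "1 +")
se_new = s * math.sqrt(1 + 1 / n + (x_h - xbar) ** 2 / sxx)
pi = (y_hat - t * se_new, y_hat + t * se_new)
```

Both intervals are centred on the same point estimate \(\hat{y}_h\); only their widths differ, with the prediction interval always the wider of the two.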



A Quick Guide to Experimental Design | 5 Steps & Examples

Published on 11 April 2022 by Rebecca Bevans . Revised on 5 December 2022.

Experiments are used to study causal relationships . You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design means creating a set of procedures to systematically test a hypothesis . A good experimental design requires a strong understanding of the system you are studying. 

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Frequently asked questions about experimental design

Step 1: Define your variables

You should begin with a specific research question. We will work with two research question examples, one from health sciences (the effect of phone use on sleep) and one from ecology (the effect of temperature on soil respiration).

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables .

Then you need to think about possible extraneous and confounding variables and consider how you might control  them in your experiment.

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

[Diagram: relationship between variables in a sleep experiment]

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.


Step 2: Write your hypothesis

Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

The next steps will describe how to design a controlled experiment . In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

Step 3: Design your experimental treatments

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalised and applied to the broader world.

First, you may need to decide how widely to vary your independent variable. In the soil-warming example, for instance, you could vary temperature:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results. In the phone-use example, for instance, you could treat phone use as:

  • a categorical variable : either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

Step 4: Assign your subjects to treatment groups

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size : how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power , which determines how much confidence you can have in your results.

Then you need to randomly assign your subjects to treatment groups . Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group , which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomised design vs a randomised block design .
  • A between-subjects design vs a within-subjects design .

Randomisation

An experiment can be completely randomised or randomised within blocks (aka strata):

  • In a completely randomised design , every subject is assigned to a treatment group at random.
  • In a randomised block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.
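The two randomisation schemes above can be sketched as follows. The subject IDs, group sizes, and the sex-based blocking variable are illustrative assumptions, not part of the original text.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

subjects = [f"S{i}" for i in range(12)]
treatments = ["no phone use", "low phone use", "high phone use"]

def completely_randomised(pool, treatments):
    """Shuffle everyone, then deal them into equal-sized treatment groups."""
    pool = list(pool)
    random.shuffle(pool)
    k = len(pool) // len(treatments)   # assumes the pool divides evenly
    return {t: pool[i * k:(i + 1) * k] for i, t in enumerate(treatments)}

def randomised_block(blocks, treatments):
    """Randomise separately within each block (stratum), then pool the groups."""
    groups = {t: [] for t in treatments}
    for members in blocks.values():
        for t, assigned in completely_randomised(members, treatments).items():
            groups[t].extend(assigned)
    return groups

groups = completely_randomised(subjects, treatments)
blocks = {"female": subjects[:6], "male": subjects[6:]}
blocked = randomised_block(blocks, treatments)
```

In the blocked version, every treatment group ends up with the same number of subjects from each stratum, which is the point of blocking.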

Sometimes randomisation isn’t practical or ethical , so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design .

Between-subjects vs within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomising or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.
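One simple counterbalancing scheme is cyclic rotation of the treatment order (a Latin-square-style construction), so that each treatment appears in each serial position equally often across subjects. The treatment and subject names below are illustrative.

```python
treatments = ["A", "B", "C"]

# Cyclic rotation: order i starts at treatment i and wraps around
orders = [treatments[i:] + treatments[:i] for i in range(len(treatments))]
# -> [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]

# Assign subjects to the orders round-robin
subjects = ["S1", "S2", "S3", "S4", "S5", "S6"]
schedule = {subj: orders[i % len(orders)] for i, subj in enumerate(subjects)}
```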

Step 5: Measure your dependent variable

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimise bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalised to turn them into measurable observations. In the sleep example, for instance, you could:

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

Frequently asked questions about experimental design

Experimental designs are a set of procedures that you plan in order to examine the relationship between variables that interest you.

To design a successful experiment, first identify:

  • A testable hypothesis
  • One or more independent variables that you will manipulate
  • One or more dependent variables that you will measure

When designing the experiment, first decide:

  • How your variable(s) will be manipulated
  • How you will control for any potential confounding or lurking variables
  • How many subjects you will include
  • How you will assign treatments to your subjects

The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.




Developing a Research Question

by acburton | Mar 22, 2024 | Resources for Students , Writing Resources

Selecting your research question and creating a clear goal and structure for your writing can be challenging – whether you are doing it for the first time or if you’ve done it many times before. It can be especially difficult when your research question starts to look and feel a little different somewhere between your first and final draft. Don’t panic! It’s normal for your research question to change a little (or even quite a bit) as you move through and engage with the writing process. Anticipating this can remind you to stay on track while you work and that it’ll be okay even if the literature takes you in a different direction.

What Makes an Effective Research Question?

The most effective research question will usually be a critical thinking question that uses “how” or “why,” so it moves beyond a yes/no or one-word answer. Consider how your research question can reveal something new, fill in a gap (even a small one), and contribute to the field in a meaningful way: how might the proposed project move knowledge forward about a particular place or process? It should be specific and achievable!

The CEWC’s Grad Writing Consultant Tariq says, “I definitely concentrated on those aspects of what I saw in the field where I believed there was an opportunity to move the discipline forward.”

General Tips

Do your research.

Utilize the librarians at your university and take the time to research your topic first. Try looking at very general sources to get an idea of what could be interesting to you before you move to more academic articles that support your rough idea of the topic. It is important that research is grounded in what you see or experience regarding the topic you have chosen and what is already known in the literature. Spend time researching articles, books, and other sources that support your thesis. Once you have a number of sources that you know support what you want to write about, formulate a research question that serves as the interrogative form of your thesis statement.

Grad Writing Consultant Deni advises, “Delineate your intervention in the literature (i.e., be strategic about the literature you discuss and clear about your contributions to it).”

Start Broadly…. then Narrow Your Topic Down to Something Manageable

When brainstorming your research question, let your mind veer toward connections or associations that you might have already considered or that seem to make sense and consider if new research terms, language or concepts come to mind that may be interesting or exciting for you as a researcher. Sometimes testing out a research question while doing some preliminary researching is also useful to see if the language you are using or the direction you are heading toward is fruitful when trying to search strategically in academic databases. Be prepared to focus on a specific area of a broad topic.

Writing Consultant Jessie recommends outlining: “I think some rough outlining with a research question in mind can be helpful for me. I’ll have a research question and maybe a working thesis that I feel may be my claim to the research question based on some preliminary materials, brainstorming, etc.” — Jessie, CEWC Writing Consultant

Try an Exercise

In the earliest phase of brainstorming, try an exercise suggested by CEWC Writing Specialist, Percival! While it is normally used in classroom or workshop settings, this exercise can easily be modified for someone working alone. The flow of the activity, if done within a group setting, is 1) someone starts with an idea, 2) three other people share their ideas, and 3) the starting person picks the two new ideas they like best and combines their original idea with those. The activity then begins again with the idea that was not chosen. The solo version of this exercise substitutes a ‘word bank,’ created using words, topics, or ideas similar to your broad, overarching theme. Pick two words or phrases from your word bank, combine them with your original idea or topic, and ‘start again’ with two different words. This serves as a replacement for different people’s suggestions. Ideas for your ‘word bank’ can range from vague prompts about mapping or webbing (e.g., where your topic falls within the discipline and others like it), to more specific concepts that come from tracing the history of an idea (its past, present, future) or mapping the idea’s related ideas, influences, etc. Care for a physics analogy? There is a particle (your topic) that you can describe, a wave that the particle traces, and a field that the particle is mapped on.

Get Feedback and Affirm Your Confidence!

Creating a few different versions of your research question (they may be the same topic/issue/theme or differ slightly) can be useful during this process. Sharing these with trusted friends, colleagues, mentors, (or tutors!) and having conversations about your questions and ideas with other people can help you decide which version you may feel most confident or interested in. Ask colleagues and mentors to share their research questions with you to get a lot of examples. Once you have done the work of developing an effective research question, do not forget to affirm your confidence! Based on your working thesis, think about how you might organize your chapters or paragraphs and what resources you have for supporting this structure and organization. This can help boost your confidence that the research question you have created is effective and fruitful.

Be Open to Change

Remember, your research question may change from your first to final draft. For questions along the way, make an appointment with the Writing Center. We are here to help you develop an effective and engaging research question and build the foundation for a solid research paper!

Example 1: In my field developing a research question involves navigating the relationship between 1) what one sees/experiences at their field site and 2) what is already known in the literature. During my preliminary research, I found that the financial value of land was often a matter of precisely these cultural factors. So, my research question ended up being: How do the social and material qualities of land entangle with processes of financialization in the city of Lahore? Regarding point #1, this question was absolutely informed by what I saw in the field. But regarding point #2, the question was also heavily shaped by the literature. – Tariq

Example 2: A research question should not be a yes/no question like “Is pollution bad?”; but an open-ended question where the answer has to be supported with reasons and explanation. The question also has to be narrowed down to a specific topic—using the same example as before—”Is pollution bad?” can be revised to “How does pollution affect people?” I would encourage students to be more specific then; e.g., what area of pollution do you want to talk about: water, air, plastic, climate change… what type of people or demographic can we focus on? …how does this affect marginalized communities, minorities, or specific areas in California? After researching and deciding on a focus, your question might sound something like: How does government policy affect water pollution and how does it affect the marginalized communities in the state of California? -Janella


Capstone and PICO Project Toolkit


Defining the Question: Foreground & Background Questions

In order to most appropriately choose an information resource and craft a search strategy, it is necessary to consider what  kind  of question you are asking: a specific, narrow "foreground" question, or a broader background question that will help give context to your research?

Foreground Questions

A "foreground" question in health research is one that is relatively specific, and is usually best addressed by locating primary research evidence. 

Using a structured question framework can help you clearly define the concepts or variables that make up the specific research question. 

 Across most frameworks, you’ll often be considering:

  • a who (who was studied - a population or sample)
  • a what (what was done or examined - an intervention, an exposure, a policy, a program, a phenomenon)
  • a how ([how] did the [what] affect the [who] - an outcome, an effect). 

PICO is the most common framework for developing a clinical research question, but multiple question frameworks exist.

PICO (Problem/Population, Intervention, Comparison, Outcome)

Appropriate for : clinical questions, often addressing the effect of an intervention/therapy/treatment

Example : For adolescents with type II diabetes (P) does the use of telehealth consultations (I) compared to in-person consultations  (C) improve blood sugar control  (O)?
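Because PICO is a fixed four-slot structure, it can even be sketched programmatically. The class and method names below are purely illustrative (they are not part of any standard tool), using the telehealth example above:

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    population: str    # P: who is studied
    intervention: str  # I: what is done or introduced
    comparison: str    # C: the alternative being compared against
    outcome: str       # O: the effect being measured

    def as_text(self) -> str:
        return (f"For {self.population}, does {self.intervention} "
                f"compared to {self.comparison} improve {self.outcome}?")

q = PICOQuestion(
    population="adolescents with type II diabetes",
    intervention="the use of telehealth consultations",
    comparison="in-person consultations",
    outcome="blood sugar control",
)
```

Filling each slot explicitly, as the dataclass forces you to, is exactly the discipline the framework is meant to impose.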

Framing Different Types of Clinical Questions with PICO

Different types of clinical questions are suited to different syntaxes and phrasings, but all will clearly define the PICO elements.  The definitions and frames below may be helpful for organizing your question:

Intervention/Therapy

Questions addressing how a clinical issue, illness, or disability is treated.

"In__________________(P), how does__________________(I) compared to_________________(C) affect______________(O)?"

Etiology

Questions that address the causes or origin of disease, the factors which produce or predispose toward a certain disease or disorder.

"Are_________________(P), who have_________________(I) compared with those without_________________(C) at_________________risk for/of_________________(O) over_________________(T)?" 

Diagnosis/Diagnostic Test

Questions addressing the act or process of identifying or determining the nature and cause of a disease or injury through evaluation.

In_________________(P) are/is_________________(I) compared with_________________(C) more accurate in diagnosing_________________(O)?

Prognosis/Prediction

Questions addressing the prediction of the course of a disease.

In_________________(P), how does_________________(I) compared to_________________ (C) influence_________________(O)?

Meaning

Questions addressing how one experiences a phenomenon or why we need to approach practice differently.

"How do_________________(P) with_________________(I) perceive_________________(O)?" 

Adapted from: Melnyk, B. M., & Fineout-Overholt, E. (2011). Evidence-based practice in nursing & healthcare: A guide to best practice. Philadelphia: Wolters Kluwer/Lippincott Williams & Wilkins.

Beyond PICO: Other Types of Question Frameworks

PICO is a useful framework for clinical research questions, but may not be appropriate for all kinds of reviews.  Also consider:

PEO (Population, Exposure, Outcome)

Appropriate for : describing association between particular exposures/risk factors and outcomes

Example : How do  preparation programs (E) influence the development of teaching competence  (O) among novice nurse educators  (P)?

SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research Type)

Appropriate for : questions of experience or perspectives (questions that may be addressed by qualitative or mixed methods research)

Example : What are the experiences and perspectives (E) of  undergraduate nursing students  (S)  in clinical placements within prison healthcare settings (PI)?

SPICE (Setting, Perspective, Intervention/phenomenon of Interest, Comparison, Evaluation)

Appropriate for : evaluating the outcomes of a service, project, or intervention

Example : What are the impacts and best practices for workplace (S) transition support programs (I) for the retention (E) of newly-hired, new graduate nurses (P)?

PCC (Problem/population, Concept, Context)

Appropriate for : broader (scoping) questions

Example : How do nursing schools (Context) teach, measure, and maintain nursing students' (P) technological literacy (Concept) throughout their educational programs?

Background Questions

To craft a strong and reasonable foreground research question, it is important to have a firm understanding of the concepts of interest.  As such, it is often necessary to ask background questions, which ask for more general, foundational knowledge about a disorder, disease, patient population, policy issue, etc. 

For example, consider the PICO question outlined above:

"For adolescents with type II diabetes, does the use of telehealth consultations compared to in-person consultations improve blood sugar control?"

To best make sense of the literature that might address this PICO question, you would also need a deep understanding of background questions like:

  • What are the unique barriers or challenges related to blood sugar management in adolescents with TII diabetes?
  • What are the measures of effective blood sugar control?
  • What kinds of interventions would fall under the umbrella of 'telehealth'?
  • What are the qualitative differences in patient experience in telehealth versus in-person interactions with healthcare providers?
Last Updated: Mar 12, 2024
URL: https://guides.nyu.edu/pico

Creating a research hypothesis: How to formulate and test UX expectations

User Research

Mar 21, 2024


A research hypothesis helps guide your UX research with focused predictions you can test and learn from. Here’s how to formulate your own hypotheses.

Armin Tanovic


All great products were once just thoughts—the spark of an idea waiting to be turned into something tangible.

A research hypothesis in UX is very similar. It’s the starting point for your user research; the jumping off point for your product development initiatives.

Formulating a UX research hypothesis helps guide your UX research project in the right direction, collect insights, and evaluate not only whether an idea is worth pursuing, but how to go after it.

In this article, we’ll cover what a research hypothesis is, how it's relevant to UX research, and the best formula to create your own hypothesis and put it to the test.


What defines a research hypothesis?

A research hypothesis is a statement or prediction that needs testing to be proven or disproven.

Let’s say you’ve got an inkling that making a change to a feature icon will increase the number of users that engage with it—with some minor adjustments, this theory becomes a research hypothesis: “ Adjusting Feature X’s icon will increase daily average users by 20% ”.

A research hypothesis is the starting point that guides user research . It takes your thought and turns it into something you can quantify and evaluate. In this case, you could conduct usability tests and user surveys, and run A/B tests to see if you’re right—or, just as importantly, wrong .

A good research hypothesis has three main features:

  • Specificity: A hypothesis should clearly define what variables you’re studying and what you expect an outcome to be, without ambiguity in its wording
  • Relevance: A research hypothesis should have significance for your research project by addressing a potential opportunity for improvement
  • Testability: Your research hypothesis must be able to be tested in some way such as empirical observation or data collection

What is the difference between a research hypothesis and a research question?

Research questions and research hypotheses are often treated as one and the same, but they’re not quite identical.

A research hypothesis acts as a prediction or educated guess of outcomes , while a research question poses a query on the subject you’re investigating. Put simply, a research hypothesis is a statement, whereas a research question is (you guessed it) a question.

For example, here’s a research hypothesis: “ Implementing a navigation bar on our dashboard will improve customer satisfaction scores by 10%. ”

This statement acts as a testable prediction. It doesn’t pose a question, it’s a prediction. Here’s what the same hypothesis would look like as a research question: “ Will integrating a navigation bar on our dashboard improve customer satisfaction scores? ”

The distinction is minor, and both are focused on uncovering the truth behind the topic, but they’re not quite the same.

Why do you use a research hypothesis in UX?

Research hypotheses in UX are used to establish the direction of a particular study, research project, or test. Formulating a hypothesis and testing it ensures the UX research you conduct is methodical, focused, and actionable. It aids every phase of your research process , acting as a north star that guides your efforts toward successful product development .

Typically, UX researchers will formulate a testable hypothesis to help them fulfill a broader objective, such as improving customer experience or product usability. They’ll then conduct user research to gain insights into their prediction and confirm or reject the hypothesis.

A proven or disproven hypothesis will tell if your prediction is right, and whether you should move forward with your proposed design—or if it's back to the drawing board.

Formulating a hypothesis can be helpful in anything from prototype testing to idea validation, and design iteration. Put simply, it’s one of the first steps in conducting user research.

Whether you’re in the initial stages of product discovery for a new product or a single feature, or conducting ongoing research, a strong hypothesis presents a clear purpose and angle for your research. It also helps you understand which user research methodology to use to get your answers.

What are the types of research hypotheses?

Not all hypotheses are built the same—there are different types with different objectives. Understanding the different types enables you to formulate a research hypothesis that outlines the angle you need to take to prove or disprove your predictions.

Here are some of the different types of hypotheses to keep in mind.

Null and alternative hypotheses

While a normal research hypothesis predicts that a specific outcome will occur based upon a certain change of variables, a null hypothesis predicts that no difference will occur when you introduce a new condition.

By that reasoning, a null hypothesis would be:

  • Adding a new CTA button to the top of our homepage will make no difference in conversions

Null hypotheses are useful because they help outline what your test or research study is trying to disprove, rather than prove, through a research hypothesis.

An alternative hypothesis states the exact opposite of a null hypothesis. It proposes that a certain change will occur when you introduce a new condition or variable. For example:

  • Adding a CTA button to the top of our homepage will cause a difference in conversion rates
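In practice, a null/alternative pair like this is usually evaluated statistically. Here is a minimal sketch using a two-proportion z-test; the conversion counts and the 1.96 critical value (a two-tailed test at roughly 5% significance) are assumptions chosen for illustration:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented data: homepage without (A) vs with (B) the new CTA button.
z = two_proportion_z(conv_a=120, n_a=2000, conv_b=165, n_b=2000)
reject_null = abs(z) > 1.96  # two-tailed test at ~5% significance
```

If `reject_null` is true, the data are inconsistent with the null hypothesis of "no difference in conversions", lending support to the alternative.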

Simple hypotheses and complex hypotheses

A simple hypothesis is a prediction that includes only two variables in a cause-and-effect sequence, with one variable dependent on the other. It predicts that you'll achieve a particular outcome based on a certain condition. The outcome is known as the dependent variable and the change causing it is the independent variable .

For example, this is a simple hypothesis:

  • Including the search function on our mobile app will increase user retention

The expected outcome of increasing user retention is based on the condition of including a new search function. But, what happens when there are more than two factors at play?

We get what’s called a complex hypothesis. Instead of a simple condition and outcome, complex hypotheses include multiple results. This makes them a perfect research hypothesis type for framing complex studies or tracking multiple KPIs based on a single action.

Building upon our previous example, a complex research hypothesis could be:

  • Including the search function on our mobile app will increase user retention and boost conversions

Directional and non-directional hypotheses

Research hypotheses can also differ in the specificity of outcomes. Put simply, any hypothesis that has a specific outcome or direction based on the relationship of its variables is a directional hypothesis . That means that our previous example of a simple hypothesis is also a directional hypothesis.

Non-directional hypotheses don’t specify the outcome or difference the variables will see. They just state that a difference exists. Following our example above, here’s what a non-directional hypothesis would look like:

  • Including the search function on our mobile app will make a difference in user retention

In this non-directional hypothesis, the direction of difference (increase/decrease) hasn’t been specified, we’ve just noted that there will be a difference.

The type of hypothesis you write helps guide your research—let’s get into it.

How to write and test your UX research hypothesis

Now that we’ve covered the types of research hypotheses, it’s time to get practical.

Creating your research hypothesis is the first step in conducting successful user research.

Here are the four steps for writing and testing a UX research hypothesis to help you make informed, data-backed decisions for product design and development.

1. Formulate your hypothesis

Start by writing out your hypothesis in a way that’s specific and relevant to a distinct aspect of your user or product experience. Meaning: your prediction should include a design choice followed by the outcome you’d expect—this is what you’re looking to validate or reject.

Your proposed research hypothesis should also be testable through user research data analysis. There’s little point in a hypothesis you can’t test!

Let’s say your focus is your product’s user interface—and how you can improve it to better meet customer needs. A research hypothesis in this instance might be:

  • Adding a settings tab to the navigation bar will improve usability

By writing out a research hypothesis in this way, you’re able to conduct relevant user research to prove or disprove your hypothesis. You can then use the results of your research—and the validation or rejection of your hypothesis—to decide whether or not you need to make changes to your product’s interface.

2. Identify variables and choose your research method

Once you’ve got your hypothesis, you need to map out how exactly you’ll test it. Consider what variables relate to your hypothesis. In our case, the main variable of our outcome is adding a settings tab to the navigation bar.

Once you’ve defined the relevant variables, you’re in a better position to decide on the best UX research method for the job. If you’re after metrics that signal improvement, you’ll want to select a method yielding quantifiable results—like usability testing . If your outcome is geared toward what users feel, then research methods for qualitative user insights, like user interviews , are the way to go.

3. Carry out your study

It’s go time. Now you’ve got your hypothesis, identified the relevant variables, and outlined your method for testing them, you’re ready to run your study. This step involves recruiting participants for your study and reaching out to them through relevant channels like email, live website testing , or social media.

Given our hypothesis, our best bet is to conduct A/B and usability tests with a prototype that includes the additional UI elements, then compare the usability metrics to see whether users find navigation easier with or without the settings button.

We can also follow up with UX surveys to get qualitative insights and ask users how they found the task, what they preferred about each design, and to see what additional customer insights we uncover.

💡 Want more insights from your usability tests? Maze Clips enables you to gather real-time recordings and reactions of users participating in usability tests .

4. Analyze your results and compare them to your hypothesis

By this point, you’ve neatly outlined a hypothesis, chosen a research method, and carried out your study. It’s now time to analyze your findings and evaluate whether they support or reject your hypothesis.

Look at the data you’ve collected and what it means. Given that we conducted usability testing, we’ll want to look to some key usability metrics for an indication of whether the additional settings button improves usability.

For example, with the usability task of ‘ In account settings, find your profile and change your username ’, we can conduct task analysis to compare the times spent on task and misclick rates of the new design, with those same metrics from the old design.
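The time-on-task part of that comparison could be sketched like this. The per-participant timings are invented for the example, and Welch's t-statistic is just one common choice for comparing two independent samples:

```python
from math import sqrt
from statistics import mean, stdev

# Invented per-participant task times (seconds) for the two designs.
old_design = [34, 41, 29, 38, 45, 33, 40, 36]
new_design = [22, 27, 19, 25, 30, 21, 26, 24]

def welch_t(a: list, b: list) -> float:
    """Welch's t-statistic for two independent samples (unequal variances)."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(var_a / len(a) + var_b / len(b))

t_stat = welch_t(old_design, new_design)
new_is_faster = mean(new_design) < mean(old_design)
```

A large positive `t_stat` together with a lower mean for the new design would support the hypothesis that the settings button improves usability; misclick rates could be compared the same way.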

If you also conduct follow-up surveys or interviews, you can ask users directly about their experience and analyze their answers to gather additional qualitative data . Maze AI can handle the analysis automatically, but you can also manually read through responses to get an idea of what users think about the change.

By comparing the findings to your research hypothesis, you can identify whether your research accepts or rejects your hypothesis. If the majority of users struggle with finding the settings page within usability tests, but had a higher success rate with your new prototype, you’ve proved the hypothesis.

However, it's also crucial to acknowledge if the findings refute your hypothesis rather than prove it as true. Ruling something out is just as valuable as confirming a suspicion.

In either case, make sure to draw conclusions based on the relationship between the variables and store findings in your UX research repository . You can conduct deeper analysis with techniques like thematic analysis or affinity mapping .

UX research hypotheses: four best practices to guide your research

Knowing the big steps for formulating and testing a research hypothesis ensures that your next UX research project gives you focused, impactful results and insights. But, that’s only the tip of the research hypothesis iceberg. There are some best practices you’ll want to consider when using a hypothesis to test your UX design ideas.

Here are four research hypothesis best practices to help guide testing and make your UX research systematic and actionable.

Align your hypothesis to broader business and UX goals

Before you begin to formulate your hypothesis, be sure to pause and think about how it connects to broader goals in your UX strategy . This ensures that your efforts and predictions align with your overarching design and development goals.

For example, implementing a brand new navigation menu for current account holders might work for usability, but if the wider team is focused on boosting conversion rates for first-time site viewers, there might be a different research project to prioritize.

Create clear and actionable reports for stakeholders

Once you’ve conducted your testing and proved or disproved your hypothesis, UX reporting and analysis is the next step. You’ll need to present your findings to stakeholders in a way that's clear, concise, and actionable. If your hypothesis insights come in the form of metrics and statistics, then quantitative data visualization tools and reports will help stakeholders understand the significance of your study, while setting the stage for design changes and solutions.

If you went with a research method like user interviews, a narrative UX research report including key themes and findings, proposed solutions, and your original hypothesis will help inform your stakeholders on the best course of action.

Consider different user segments

While getting enough responses is crucial for proving or disproving your hypothesis, you’ll want to consider which users will give you the highest quality and most relevant responses. Remember to consider user personas: e.g., if you’re only introducing a change for premium users, exclude testing with users who are on a free trial of your product.

You can recruit and target specific user demographics with the Maze Panel —which enables you to search for and filter participants that meet your requirements. Doing so allows you to better understand how different users will respond to your hypothesis testing. It also helps you uncover specific needs or issues different users may have.

Involve stakeholders from the start

Before testing or even formulating a research hypothesis by yourself, ensure all your stakeholders are on board. Informing everyone of your plan to formulate and test your hypothesis does three things:

Firstly, it keeps your team in the loop . They’ll be able to inform you of any relevant insights, special considerations, or existing data they already have about your particular design change idea, or KPIs to consider that would benefit the wider team.

Secondly, informing stakeholders ensures seamless collaboration across multiple departments . Together, you’ll be able to fit your testing results into your overall CX strategy , ensuring alignment with business goals and broader objectives.

Finally, getting everyone involved enables them to contribute potential hypotheses to test . You’re not the only one with ideas about what changes could positively impact the user experience, and keeping everyone in the loop brings fresh ideas and perspectives to the table.

Test your UX research hypotheses with Maze

Formulating and testing out a research hypothesis is a great way to define the scope of your UX research project clearly. It helps keep research on track by providing a single statement to come back to and anchor your research in.

Whether you run usability tests or user interviews to assess your hypothesis—Maze's suite of advanced research methods enables you to get the in-depth user and customer insights you need.

Frequently asked questions about research hypothesis

What is the difference between a hypothesis and a problem statement in UX?

A problem statement identifies a specific issue in your design that you intend to solve; it will typically include a user persona, an issue they have, and a desired outcome they need. A research hypothesis, on the other hand, describes your prediction about how that problem might be solved.

How many hypotheses should a UX research problem have?

Technically, there is no limit to the number of hypotheses you can have for a certain problem or study. However, you should limit it to one hypothesis per specific issue in UX research. This ensures that you can conduct focused testing and reach clear, actionable results.

Hypothesis Maker Online

Looking for a hypothesis maker? This online tool for students will help you formulate a beautiful hypothesis quickly, efficiently, and for free.


  • 🔎 How to Use the Tool?
  • ⚗️ What Is a Hypothesis in Science?
  • 👍 What Does a Good Hypothesis Mean?
  • 🧭 Steps to Making a Good Hypothesis
  • 🔗 References

📄 Hypothesis Maker: How to Use It

Our hypothesis maker is a simple and efficient tool you can access online for free.

If you want to create a research hypothesis quickly, you should fill out the research details in the given fields on the hypothesis generator.

Below are the fields you should complete to generate your hypothesis:

  • Who or what is your research based on? For instance, the subject can be research group 1.
  • What does the subject (research group 1) do?
  • What does the subject affect? - This shows the predicted outcome, which is the object.
  • Who or what will be compared with research group 1? (research group 2).

Once you fill in the fields, click the ‘Make a hypothesis’ button to get your results.

⚗️ What Is a Hypothesis in the Scientific Method?

A hypothesis is a statement describing an expectation or prediction of your research through observation.

It is similar to academic speculation: reasoning that discloses the expected outcome of your scientific test. An effective hypothesis, therefore, should be crafted carefully and with precision.

A good hypothesis should have dependent and independent variables. These variables are the elements you will test in your research method; each can be a concept, an event, or an object, as long as it is observable.

During the experiment, you manipulate the independent variables and observe how the dependent variables respond.

In a nutshell, a hypothesis directs and organizes the research methods you will use, and it forms a substantial part of your research paper.

Hypothesis vs. Theory

A hypothesis is a realistic expectation that researchers make before any investigation. It is formulated and tested to prove whether the statement is true. A theory, on the other hand, is a factual principle supported by evidence. Thus, a theory is more fact-backed compared to a hypothesis.

Another difference is that a hypothesis is presented as a single statement, while a theory can comprise an assortment of interrelated statements. A hypothesis projects a specific future possibility whose outcome is still uncertain; a theory is verified with indisputable results because of proper substantiation.

When it comes to data, a hypothesis relies on limited information, while a theory is established on an extensive data set tested under various conditions.

A hypothesis, therefore, must be observed and tested to prove its accuracy.

Since hypotheses have observable variables, their outcome is usually based on a specific occurrence. Conversely, theories are grounded on a general principle involving multiple experiments and research tests.

This general principle can apply to many specific cases.

The primary purpose of formulating a hypothesis is to present a tentative prediction for researchers to explore further through tests and observations. Theories, in turn, aim to explain plausible occurrences in the form of a scientific study.

It helps to rely on several criteria when establishing a good hypothesis. The steps below double as parameters for analyzing the quality of your hypothesis.

🧭 6 Steps to Making a Good Hypothesis

Writing a hypothesis becomes way simpler if you follow a tried-and-tested algorithm. Let’s explore how you can formulate a good hypothesis in a few steps:

Step #1: Ask Questions

The first step in hypothesis creation is asking real questions about the surrounding reality.

Why do things happen as they do? What are the causes of some occurrences?

Your curiosity will trigger great questions that you can use to formulate a stellar hypothesis. So, ensure you pick a research topic of interest to scrutinize the world’s phenomena, processes, and events.

Step #2: Do Initial Research

Carry out preliminary research and gather essential background information about your topic of choice.

The extent of the information you collect will depend on what you want to prove.

Your initial research can be completed with a few academic books or a simple Internet search for quick answers with relevant statistics.

Still, keep in mind that at this phase it is too early to prove or disprove your hypothesis.

Step #3: Identify Your Variables

Now that you have a basic understanding of the topic, choose the dependent and independent variables.

Take note that independent variables are the ones you change or control during the test, so understand the limitations of your setup before settling on a final hypothesis.

Step #4: Formulate Your Hypothesis

You can write your hypothesis as an ‘if–then’ expression. Presenting a hypothesis in this format is reliable since it spells out the cause-and-effect relationship you want to test.

For instance: If I study every day, then I will get good grades.

Step #5: Gather Relevant Data

Once you have identified your variables and formulated the hypothesis, you can start the experiment. Remember, the conclusion you reach will either support or refute your initial assumption.

So, gather relevant information, whether for a simple or statistical hypothesis, because you need data to back up your statement.

Step #6: Record Your Findings

Finally, write down your conclusions in a research paper .

Outline in detail whether the test has proved or disproved your hypothesis.

Edit and proofread your work, using a plagiarism checker to ensure the authenticity of your text.

We hope that the above tips will be useful for you. Note that if you need to conduct business analysis, you can use the free templates we’ve prepared: SWOT , PESTLE , VRIO , SOAR , and Porter’s 5 Forces .

🔗 References

Updated: Oct 25th, 2023

  • How to Write a Hypothesis in 6 Steps - Grammarly
  • Forming a Good Hypothesis for Scientific Research
  • The Hypothesis in Science Writing
  • Scientific Method: Step 3: HYPOTHESIS - Subject Guides
  • Hypothesis Template & Examples - Video & Lesson Transcript

Use our hypothesis maker whenever you need to formulate a hypothesis for your study. We offer a very simple tool where you just need to provide basic info about your variables, subjects, and predicted outcomes. The rest is on us. Get a perfect hypothesis in no time!


Multiple Linear Regression | A Quick Guide (Examples)

Published on February 20, 2020 by Rebecca Bevans. Revised on June 22, 2023.

Regression models are used to describe relationships between variables by fitting a line to the observed data. Regression allows you to estimate how a dependent variable changes as the independent variable(s) change.

Multiple linear regression is used to estimate the relationship between two or more independent variables and one dependent variable. You can use multiple linear regression when you want to know:

  • How strong the relationship is between two or more independent variables and one dependent variable (e.g. how rainfall, temperature, and amount of fertilizer added affect crop growth).
  • The value of the dependent variable at a certain value of the independent variables (e.g. the expected yield of a crop at certain levels of rainfall, temperature, and fertilizer addition).

Table of contents

  • Assumptions of multiple linear regression
  • How to perform a multiple linear regression
  • Interpreting the results
  • Presenting the results
  • Other interesting articles
  • Frequently asked questions about multiple linear regression

Multiple linear regression makes all of the same assumptions as simple linear regression:

Homogeneity of variance (homoscedasticity): the size of the error in our prediction doesn’t change significantly across the values of the independent variable.

Independence of observations: the observations in the dataset were collected using statistically valid sampling methods, and there are no hidden relationships among variables.

In multiple linear regression, it is possible that some of the independent variables are actually correlated with one another, so it is important to check these before developing the regression model. If two independent variables are too highly correlated (r2 > ~0.6), then only one of them should be used in the regression model.
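This check can be sketched in R; the predictor values below are made up purely for illustration:

```r
# Two candidate independent variables (made-up values)
x1 <- c(1.2, 2.3, 3.1, 4.8, 5.0)
x2 <- c(2.0, 2.1, 3.5, 4.9, 5.2)

# Squared correlation between the predictors; if r^2 > ~0.6,
# keep only one of the two in the regression model
cor(x1, x2)^2
```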

Normality: the data follows a normal distribution.

Linearity: the line of best fit through the data points is a straight line, rather than a curve or some sort of grouping factor.


Multiple linear regression formula

The formula for a multiple linear regression is:

y = \beta_0 + \beta_1 X_1 + \dots + \beta_n X_n + \epsilon

where:

  • y = the predicted value of the dependent variable
  • \beta_0 = the y-intercept (the value of y when all other parameters are set to 0)
  • \beta_1 X_1 = the regression coefficient (\beta_1) of the first independent variable (X_1) times its value
  • … = do the same for however many independent variables you are testing
  • \beta_n X_n = the regression coefficient of the last independent variable
  • \epsilon = model error (how much variation there is in our estimate of y)

To find the best-fit line for each independent variable, multiple linear regression calculates three things:

  • The regression coefficients that lead to the smallest overall model error.
  • The t statistic of the overall model.
  • The associated p value (how likely it is that the t statistic would have occurred by chance if the null hypothesis of no relationship between the independent and dependent variables was true).

It then calculates the t statistic and p value for each regression coefficient in the model.

Multiple linear regression in R

While it is possible to do multiple linear regression by hand, it is much more commonly done via statistical software. We are going to use R for our examples because it is free, powerful, and widely available. Download the sample dataset to try it yourself.

Dataset for multiple linear regression (.csv)

Load the heart.data dataset into your R environment and run the following code:
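A minimal sketch of the model-fitting call, assuming the dataset’s columns are named heart.disease, biking, and smoking (names inferred from the surrounding description):

```r
# Fit heart disease as a linear function of biking and smoking
heart.disease.lm <- lm(heart.disease ~ biking + smoking, data = heart.data)
```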

This code takes the data set heart.data and calculates the effect that the independent variables biking and smoking have on the dependent variable heart disease, using the linear model function lm().

Learn more by following the full step-by-step guide to linear regression in R.

To view the results of the model, you can use the summary() function:
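For instance, assuming the fitted model object is stored as heart.disease.lm (a hypothetical name):

```r
# Display coefficients, residuals, t statistics, and p values
summary(heart.disease.lm)
```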

This function takes the most important parameters from the linear model and puts them into a table that looks like this:

R multiple linear regression summary output

The summary first prints out the formula (‘Call’), then the model residuals (‘Residuals’). If the residuals are roughly centered around zero and have similar spread on either side, as these do (median 0.03, min and max around -2 and 2), then the model probably fits the assumption of homoscedasticity.

Next are the regression coefficients of the model (‘Coefficients’). Row 1 of the coefficients table is labeled (Intercept) – this is the y-intercept of the regression equation. It’s helpful to know the estimated intercept in order to plug it into the regression equation and predict values of the dependent variable.

The most important things to note in this output table are the next two rows – the estimates for the independent variables.

The Estimate column is the estimated effect, also called the regression coefficient. The estimates in the table tell us that for every one percent increase in biking to work there is an associated 0.2 percent decrease in heart disease, and that for every one percent increase in smoking there is an associated 0.17 percent increase in heart disease.

The Std.error column displays the standard error of the estimate. This number shows how much variation there is around the estimates of the regression coefficient.

The t value column displays the test statistic. Unless otherwise specified, the test statistic used in linear regression is the t value from a two-sided t test. The larger the test statistic, the less likely it is that the results occurred by chance.

The Pr(>|t|) column shows the p value. This shows how likely the calculated t value would have occurred by chance if the null hypothesis of no effect of the parameter were true.

Because these values are so low (p < 0.001 in both cases), we can reject the null hypothesis and conclude that both biking to work and smoking likely influence rates of heart disease.

When reporting your results, include the estimated effect (i.e. the regression coefficient), the standard error of the estimate, and the p value. You should also interpret your numbers to make it clear to your readers what the regression coefficient means.

Visualizing the results in a graph

It can also be helpful to include a graph with your results. Multiple linear regression is somewhat more complicated than simple linear regression, because there are more parameters than will fit on a two-dimensional plot.

However, there are ways to display your results that include the effects of multiple independent variables on the dependent variable, even though only one independent variable can actually be plotted on the x-axis.

Multiple regression in R graph

Here, we have calculated the predicted values of the dependent variable (heart disease) across the full range of observed values for the percentage of people biking to work.

To include the effect of smoking on the dependent variable, we calculated these predicted values while holding smoking constant at the minimum, mean, and maximum observed rates of smoking.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square test of independence
  • Statistical power
  • Descriptive statistics
  • Degrees of freedom
  • Pearson correlation
  • Null hypothesis

Methodology

  • Double-blind study
  • Case-control study
  • Research ethics
  • Data collection
  • Hypothesis testing
  • Structured interviews

Research bias

  • Hawthorne effect
  • Unconscious bias
  • Recall bias
  • Halo effect
  • Self-serving bias
  • Information bias

A regression model is a statistical model that estimates the relationship between one dependent variable and one or more independent variables using a line (or a plane in the case of two or more independent variables).

A regression model can be used when the dependent variable is quantitative, except in the case of logistic regression, where the dependent variable is binary.

Multiple linear regression is a regression model that estimates the relationship between a quantitative dependent variable and two or more independent variables using a straight line.

Linear regression most often uses mean-square error (MSE) to calculate the error of the model. MSE is calculated by:

  • measuring the distance of the observed y-values from the predicted y-values at each value of x;
  • squaring each of these distances;
  • calculating the mean of the squared distances.

Linear regression fits a line to the data by finding the regression coefficient that results in the smallest MSE.
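The three steps above can be sketched in R with made-up observed and predicted values (illustrative numbers only, not from any real dataset):

```r
observed  <- c(2.0, 3.5, 4.1, 5.8)     # observed y-values
predicted <- c(2.2, 3.3, 4.4, 5.5)     # predicted y-values at the same x-values

mse <- mean((observed - predicted)^2)  # distance, squared, then averaged
mse  # 0.065
```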

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Bevans, R. (2023, June 22). Multiple Linear Regression | A Quick Guide (Examples). Scribbr. Retrieved March 25, 2024, from https://www.scribbr.com/statistics/multiple-linear-regression/


IMAGES

  1. How to Write a Good Research Question (w/ Examples)

    write a research question to test a prediction

  2. How to Write a Research Question in 2024: Types, Steps, and Examples

    write a research question to test a prediction

  3. How to Write a Research Question: Types with Best Examples

    write a research question to test a prediction

  4. Best Example of How to Write a Hypothesis 2024

    write a research question to test a prediction

  5. How to Develop a Strong Research Question

    write a research question to test a prediction

  6. Research Question: Definition, Types, Examples, Quick Tips

    write a research question to test a prediction

VIDEO

  1. 2 December 2023 IELTS Test Prediction By Asad Yaqub

  2. How to Write an Analysis for a Research Investigation (Science/Physics)

  3. SSLC PHYSICS EXAM BIG BREAKING 🔥⚰️⚰️

  4. 25 November ielts exam prediction 2023 Prediction for 25 november ielts exam

  5. 9th NOV 2023 IELTS EXAM FINAL PREDICTION

  6. NEET 2024 Paper Prediction🔥

COMMENTS

  1. 10 Research Question Examples to Guide your Research Project

    The first question asks for a ready-made solution, and is not focused or researchable. The second question is a clearer comparative question, but note that it may not be practically feasible. For a smaller research project or thesis, it could be narrowed down further to focus on the effectiveness of drunk driving laws in just one or two countries.

  2. Types of Research Questions: Descriptive, Predictive, or Causal

    for the study methods. Good-quality, clinically useful research begins. question. Research questions fall into 1 of 3 mutu-ally exclusive types: descriptive, predic-tive, or causal. Imagine you are seeking information about whiplash injuries. You might find studies that address the fol-lowing questions. 1.

  3. PDF Research Questions, Hypotheses and Predictions

    Research Questions •Goal: Capture the uncertainty about a health problem that the investigator can resolve •No shortage of problems, but defining question takes time •Challenge is to find an important question that can be answered with a feasible and valid study plan

  4. How to Write a Strong Hypothesis

    Step 5: Phrase your hypothesis in three ways. To identify the variables, you can write a simple prediction in if … then form. The first part of the sentence states the independent variable and the second part states the dependent variable. If a first-year student starts attending more lectures, then their exam scores will improve.

  5. Formulating Strong Research Questions: Examples and Writing Tips

    The research question is normally one of the major components of the final paragraph of the introduction section. We will look at the examples of the entire final paragraph of the introduction along with the research questions to put things into perspective. 2.1. Example #1 (Health sciences research paper)

  6. 4.9

    Our best prediction of the mortality rate due to skin cancer in Chambersburg, Pennsylvania is 389.19 - 5.97764 (40) = 150 deaths per 10 million people. The problem with the answers to our two research questions is that we'd have obtained a completely different answer if we had selected a different random sample of data.

  7. Crafting the Methods to Test Hypotheses

    Along with your first guesses about the answers to your research questions, you should write out your explanations for why you think the answers will be accurate. ... it to this point (20 min): Research questions, predictions about the answers, rationales for your predictions (i.e., your theoretical framework), and methods you will use to test ...

  8. A Practical Guide to Writing Quantitative and Qualitative Research

    INTRODUCTION. Scientific research is usually initiated by posing evidenced-based research questions which are then explicitly restated as hypotheses.1,2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results.3,4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the ...

  9. The Writing Center

    Research questions should not be answerable with a simple "yes" or "no" or by easily-found facts. They should, instead, require both research and analysis on the part of the writer. They often begin with "How" or "Why.". Begin your research. After you've come up with a question, think about the possible paths your research ...

  10. Research Questions, Objectives & Aims (+ Examples)

    The research aims, objectives and research questions (collectively called the "golden thread") are arguably the most important thing you need to get right when you're crafting a research proposal, dissertation or thesis.We receive questions almost every day about this "holy trinity" of research and there's certainly a lot of confusion out there, so we've crafted this post to help ...

  11. Writing a quantitative research question

    Formulating a quantitative research question can often be a difficult task. When composing a research question, a researcher needs to determine if they want to describe data, compare differences among groups, assess a relationship, or determine if a set of variables predict another variable. The type of question the researcher asks will help to ...

  12. PDF Writing Good Questions, Hypotheses and Methods for Conservation

    Each of the objectives must have one or a set of hypotheses to test, and each hypothesis should have a prediction, typically derived from existing knowledge reviewed in the background section. A prediction is the way the hypothesis will be accepted or rejected when compared with the collected data.

  13. 3.1

    At some level, answering these two research questions is straightforward: both just involve using the estimated regression equation. That is, ŷₕ = b₀ + b₁xₕ is the best answer to each research question. It is the best guess of the mean response at xₕ, and it is the best guess of a new response at xₕ.
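    The prediction described above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical data (the values of x, y, and x_h are assumptions of this sketch, not from the cited source):

```python
import numpy as np

# Hypothetical sample data: predictor x and response y
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Fit the simple linear regression y = b0 + b1*x by least squares
# (np.polyfit returns coefficients from highest degree down)
b1, b0 = np.polyfit(x, y, deg=1)

# The fitted equation gives the best guess of the mean response
# (and of a new response) at any x_h
x_h = 3.5
y_hat = b0 + b1 * x_h
print(round(y_hat, 2))  # 7.0
```

    The same fitted line answers both questions; only the uncertainty around the answer differs (a prediction interval for a new response is wider than a confidence interval for the mean response).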

  14. Hypothesis Testing

    It is most often used by scientists to test specific predictions, called hypotheses, that arise from theories. There are 5 main steps in hypothesis testing: state your research hypothesis as a null hypothesis (H₀) and an alternate hypothesis (Hₐ or H₁); collect data in a way designed to test the hypothesis; perform an appropriate statistical test; decide whether to reject or fail to reject the null hypothesis; and present your findings.
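    The steps above can be sketched with a two-sample t-test in Python. This is a hypothetical illustration: the data, the equal-variance assumption, and the 0.05 significance level are assumptions of this sketch, not part of the source:

```python
import statistics

# Step 1 - H0: the two groups have equal means; Ha: the means differ
# Step 2 - collect data (hypothetical measurements for two groups)
group_a = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8]
group_b = [5.8, 6.1, 5.9, 6.0, 5.7, 6.2]

# Step 3 - perform an appropriate test: pooled two-sample t statistic
n_a, n_b = len(group_a), len(group_b)
mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
pooled_var = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
t_stat = (mean_a - mean_b) / (pooled_var * (1 / n_a + 1 / n_b)) ** 0.5

# Step 4 - decide: compare |t| with the two-tailed critical value
# for df = 10 at alpha = 0.05 (2.228, from a standard t-table)
T_CRITICAL = 2.228
reject_h0 = abs(t_stat) > T_CRITICAL

# Step 5 - present the finding
print(reject_h0)
```

    In practice a library routine (e.g. a t-test function from a statistics package) would also report an exact p-value rather than comparing against a tabled critical value.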

  15. A Quick Guide to Experimental Design

    A good experimental design requires a strong understanding of the system you are studying. There are five key steps in designing an experiment: Consider your variables and how they are related. Write a specific, testable hypothesis. Design experimental treatments to manipulate your independent variable.

  16. PDF Testing Your Research Question Edited

    Testing Your Research Question. The function of a research question is to: clarify the area of concern, help to organise the project and give it coherence; and identify the sort of information that needs to be collected and provide a framework for writing up the project.

  18. Developing a Research Question

    To craft a strong and reasonable foreground research question, it is important to have a firm understanding of the concepts of interest. As such, it is often necessary to ask background questions, which ask for more general, foundational knowledge about a disorder or disease.

  19. How to Create a Research Hypothesis for UX: Step-by-Step

    All great products were once just thoughts: the spark of an idea waiting to be turned into something tangible. A research hypothesis acts as a testable prediction. It doesn't pose a question; it's a prediction.

  20. The Beginner's Guide to Statistical Analysis

    To collect valid data for statistical analysis, you first need to specify your hypotheses and plan out your research design. The goal of research is often to investigate a relationship between variables within a population: you start with a prediction, and use statistical analysis to test that prediction.

  21. Hypothesis Maker

    One variable should influence the other in some way. A hypothesis should be written as an "if-then" statement to allow the researcher to make accurate predictions about the investigation's results; however, this rule does not apply to null and alternative hypotheses. Clear language also matters, since hypothesis writing can get complex, especially when dense research terminology is involved.

  24. Multiple Linear Regression

    The formula for a multiple linear regression is y = b₀ + b₁x₁ + b₂x₂ + … + bₙxₙ + ε, where y is the predicted value of the dependent variable, b₀ is the y-intercept (the value of y when all predictors are set to 0), b₁ is the regression coefficient of the first independent variable x₁ (the effect that increasing the value of that variable has on the predicted y value), and ε is the model error.
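    The multiple regression equation above can be sketched with NumPy's least-squares solver. This is a minimal illustration on hypothetical data (the values and the two-predictor setup are assumptions of this sketch):

```python
import numpy as np

# Hypothetical data: two predictors (x1, x2) per row, and a response y
# constructed to follow y = 1 + 2*x1 + 3*x2 exactly
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [4.0, 3.0],
              [5.0, 6.0]])
y = np.array([9.0, 8.0, 19.0, 18.0, 29.0])

# Add an intercept column so the model is y = b0 + b1*x1 + b2*x2
X_design = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(X_design, y, rcond=None)
b0, b1, b2 = coeffs

# Predict y for a new observation (x1 = 2, x2 = 3)
y_hat = b0 + b1 * 2.0 + b2 * 3.0
print(round(y_hat, 2))  # 14.0
```

    With noisy real-world data the recovered coefficients would only approximate the true values, and the residual ε would be nonzero; here the data are exactly linear, so the fit is exact.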