Experimental Design – Types, Methods, Guide

Experimental Design

Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results.

Experimental design typically includes identifying the variables that will be manipulated or measured, defining the sample or population to be studied, selecting an appropriate method of sampling, choosing a method for data collection and analysis, and determining the appropriate statistical tests to use.

Types of Experimental Design

Here are the different types of experimental design:

Completely Randomized Design

In this design, participants are randomly assigned to one of two or more groups, and each group is exposed to a different treatment or condition.

Randomized Block Design

This design involves dividing participants into blocks based on a specific characteristic, such as age or gender, and then randomly assigning participants within each block to one of two or more treatment groups.
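
As a rough illustration, here is a minimal Python sketch of block randomization, using hypothetical participant IDs labelled with a blocking characteristic (age group); within each block, participants are shuffled and split between a treatment and a control condition.

```python
import random
from collections import defaultdict

# Hypothetical participants, each tagged with a blocking characteristic.
participants = [
    ("P01", "young"), ("P02", "young"), ("P03", "young"), ("P04", "young"),
    ("P05", "older"), ("P06", "older"), ("P07", "older"), ("P08", "older"),
]

random.seed(1)  # fixed seed so the example is reproducible

# Group participants into blocks by the blocking characteristic.
blocks = defaultdict(list)
for pid, age_group in participants:
    blocks[age_group].append(pid)

# Within each block, shuffle and split evenly between the two conditions.
assignment = {}
for age_group, members in blocks.items():
    random.shuffle(members)
    half = len(members) // 2
    for pid in members[:half]:
        assignment[pid] = "treatment"
    for pid in members[half:]:
        assignment[pid] = "control"

print(assignment)
```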

Factorial Design

In a factorial design, participants are randomly assigned to one of several groups, each of which receives a different combination of two or more independent variables.

Repeated Measures Design

In this design, each participant is exposed to all of the different treatments or conditions, either in a random order or in a predetermined order.

Crossover Design

This design involves randomly assigning participants to one of two or more treatment groups, with each group receiving one treatment during the first phase of the study and then switching to a different treatment during the second phase.

Split-plot Design

In this design, two or more factors are manipulated at different scales: one factor is applied to whole plots (larger experimental units), while a second factor is applied to subplots within each whole plot. A randomized block structure is typically used to control for other variables, and the design is useful when one factor is harder to change or randomize than the other.

Nested Design

This design involves grouping participants within larger units, such as schools or households, and then randomly assigning these units to different treatment groups.

Laboratory Experiment

Laboratory experiments are conducted under controlled conditions, which allows for greater precision and accuracy. However, because laboratory conditions are not always representative of real-world conditions, the results of these experiments may not be generalizable to the population at large.

Field Experiment

Field experiments are conducted in naturalistic settings and allow for more realistic observations. However, because field experiments are not as controlled as laboratory experiments, they may be subject to more sources of error.

Experimental Design Methods

Experimental design methods refer to the techniques and procedures used to design and conduct experiments in scientific research. Here are some common experimental design methods:

Randomization

This involves randomly assigning participants to different groups or treatments to ensure that any observed differences between groups are due to the treatment and not to other factors.
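
For illustration, here is a minimal Python sketch of simple random assignment, assuming a list of hypothetical participant IDs and a fixed random seed so the result is reproducible.

```python
import random

# 20 hypothetical participant IDs (P01 ... P20).
participants = [f"P{i:02d}" for i in range(1, 21)]

random.seed(42)              # fixed seed so the example is reproducible
random.shuffle(participants)

# Split the shuffled list evenly into two groups.
half = len(participants) // 2
treatment_group = participants[:half]
control_group = participants[half:]

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```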

Control Group

The use of a control group is an important experimental design method that involves having a group of participants that do not receive the treatment or intervention being studied. The control group is used as a baseline to compare the effects of the treatment group.

Blinding

Blinding involves keeping participants, researchers, or both unaware of which treatment group participants are in, in order to reduce the risk of bias in the results.

Counterbalancing

This involves systematically varying the order in which participants receive treatments or interventions in order to control for order effects.
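
A common way to counterbalance is a Latin square, in which every treatment appears once in every position of the order. The sketch below generates a cyclic Latin square for three hypothetical conditions (A, B, C); each row is the treatment order given to one subgroup of participants.

```python
# Three hypothetical conditions; each row of the square is one order.
conditions = ["A", "B", "C"]
k = len(conditions)

# Cyclic Latin square: every condition appears once in every position.
orders = [[conditions[(row + col) % k] for col in range(k)] for row in range(k)]

for i, order in enumerate(orders, start=1):
    print(f"Subgroup {i}: {' -> '.join(order)}")
```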

Replication

Replication involves conducting the same experiment with different samples or under different conditions to increase the reliability and validity of the results.

Factorial Design

This experimental design method involves manipulating multiple independent variables simultaneously to investigate their combined effects on the dependent variable.

Blocking

This involves dividing participants into subgroups or blocks based on specific characteristics, such as age or gender, in order to reduce the risk of confounding variables.

Data Collection Method

Experimental design data collection methods are techniques and procedures used to collect data in experimental research. Here are some common experimental design data collection methods:

Direct Observation

This method involves observing and recording the behavior or phenomenon of interest in real time. It may involve the use of structured or unstructured observation, and may be conducted in a laboratory or naturalistic setting.

Self-report Measures

Self-report measures involve asking participants to report their thoughts, feelings, or behaviors using questionnaires, surveys, or interviews. These measures may be administered in person or online.

Behavioral Measures

Behavioral measures involve measuring participants’ behavior directly, such as through reaction time tasks or performance tests. These measures may be administered using specialized equipment or software.

Physiological Measures

Physiological measures involve measuring participants’ physiological responses, such as heart rate, blood pressure, or brain activity, using specialized equipment. These measures may be invasive or non-invasive, and may be administered in a laboratory or clinical setting.

Archival Data

Archival data involves using existing records or data, such as medical records, administrative records, or historical documents, as a source of information. These data may be collected from public or private sources.

Computerized Measures

Computerized measures involve using software or computer programs to collect data on participants’ behavior or responses. These measures may include reaction time tasks, cognitive tests, or other types of computer-based assessments.

Video Recording

Video recording involves recording participants’ behavior or interactions using cameras or other recording equipment. This method can be used to capture detailed information about participants’ behavior or to analyze social interactions.

Data Analysis Method

Experimental design data analysis methods refer to the statistical techniques and procedures used to analyze data collected in experimental research. Here are some common experimental design data analysis methods:

Descriptive Statistics

Descriptive statistics are used to summarize and describe the data collected in the study. This includes measures such as mean, median, mode, range, and standard deviation.
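
As a small illustration, the following Python sketch computes these descriptive statistics for a made-up set of scores using NumPy and the standard library.

```python
import numpy as np
from statistics import mode

scores = np.array([12, 15, 15, 18, 20, 22, 22, 22, 25, 30])

print("Mean:              ", scores.mean())
print("Median:            ", np.median(scores))
print("Mode:              ", mode(scores))
print("Range:             ", scores.max() - scores.min())
print("Standard deviation:", round(scores.std(ddof=1), 2))  # sample SD
```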

Inferential Statistics

Inferential statistics are used to make inferences or generalizations about a larger population based on the data collected in the study. This includes hypothesis testing and estimation.

Analysis of Variance (ANOVA)

ANOVA is a statistical technique used to compare means across two or more groups in order to determine whether there are significant differences between the groups. There are several types of ANOVA, including one-way ANOVA, two-way ANOVA, and repeated measures ANOVA.
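
For example, a one-way ANOVA comparing three groups can be run in a few lines with SciPy; the scores below are made up purely for illustration.

```python
from scipy import stats

# Made-up scores for three treatment groups.
group_a = [23, 25, 28, 30, 27]
group_b = [31, 29, 35, 33, 34]
group_c = [22, 20, 24, 23, 21]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```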

Regression Analysis

Regression analysis is used to model the relationship between two or more variables in order to determine the strength and direction of the relationship. There are several types of regression analysis, including linear regression, logistic regression, and multiple regression.
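
As a minimal illustration, the sketch below fits a simple linear regression with statsmodels, using made-up values for hours of phone use (predictor) and hours of sleep (outcome).

```python
import numpy as np
import statsmodels.api as sm

# Made-up predictor (hours of phone use) and outcome (hours of sleep).
phone_use = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
sleep     = np.array([8.0, 7.8, 7.5, 7.2, 7.0, 6.5, 6.4, 6.0])

X = sm.add_constant(phone_use)   # add an intercept term
model = sm.OLS(sleep, X).fit()

print(model.params)    # intercept and slope
print(model.pvalues)   # p-values for both coefficients
```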

Factor Analysis

Factor analysis is used to identify underlying factors or dimensions in a set of variables. This can be used to reduce the complexity of the data and identify patterns in the data.

Structural Equation Modeling (SEM)

SEM is a statistical technique used to model complex relationships between variables. It can be used to test complex theories and models of causality.

Cluster Analysis

Cluster analysis is used to group similar cases or observations together based on similarities or differences in their characteristics.

Time Series Analysis

Time series analysis is used to analyze data collected over time in order to identify trends, patterns, or changes in the data.

Multilevel Modeling

Multilevel modeling is used to analyze data that is nested within multiple levels, such as students nested within schools or employees nested within companies.
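
A common way to fit such models is a random-intercept mixed model. The sketch below uses statsmodels' MixedLM on a tiny made-up dataset of student scores nested within schools, so the fit is only illustrative.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Made-up student scores nested within four schools.
data = pd.DataFrame({
    "score":  [70, 75, 72, 80, 85, 83, 60, 65, 63, 90, 92, 88],
    "hours":  [2, 3, 2, 4, 5, 4, 1, 2, 1, 6, 6, 5],
    "school": ["A", "A", "A", "B", "B", "B", "C", "C", "C", "D", "D", "D"],
})

# Fixed effect of study hours, random intercept for each school.
model = smf.mixedlm("score ~ hours", data, groups=data["school"]).fit()
print(model.summary())
```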

Applications of Experimental Design 

Experimental design is a versatile research methodology that can be applied in many fields. Here are some applications of experimental design:

  • Medical Research: Experimental design is commonly used to test new treatments or medications for various medical conditions. This includes clinical trials to evaluate the safety and effectiveness of new drugs or medical devices.
  • Agriculture: Experimental design is used to test new crop varieties, fertilizers, and other agricultural practices. This includes randomized field trials to evaluate the effects of different treatments on crop yield, quality, and pest resistance.
  • Environmental science: Experimental design is used to study the effects of environmental factors, such as pollution or climate change, on ecosystems and wildlife. This includes controlled experiments to study the effects of pollutants on plant growth or animal behavior.
  • Psychology: Experimental design is used to study human behavior and cognitive processes. This includes experiments to test the effects of different interventions, such as therapy or medication, on mental health outcomes.
  • Engineering: Experimental design is used to test new materials, designs, and manufacturing processes in engineering applications. This includes laboratory experiments to test the strength and durability of new materials, or field experiments to test the performance of new technologies.
  • Education: Experimental design is used to evaluate the effectiveness of teaching methods, educational interventions, and programs. This includes randomized controlled trials to compare different teaching methods or evaluate the impact of educational programs on student outcomes.
  • Marketing: Experimental design is used to test the effectiveness of marketing campaigns, pricing strategies, and product designs. This includes experiments to test the impact of different marketing messages or pricing schemes on consumer behavior.

Examples of Experimental Design 

Here are some examples of experimental design in different fields:

  • Example in Medical research: A study that investigates the effectiveness of a new drug treatment for a particular condition. Patients are randomly assigned to either a treatment group or a control group, with the treatment group receiving the new drug and the control group receiving a placebo. The outcomes, such as improvement in symptoms or side effects, are measured and compared between the two groups.
  • Example in Education research: A study that examines the impact of a new teaching method on student learning outcomes. Students are randomly assigned to either a group that receives the new teaching method or a group that receives the traditional teaching method. Student achievement is measured before and after the intervention, and the results are compared between the two groups.
  • Example in Environmental science: A study that tests the effectiveness of a new method for reducing pollution in a river. Two sections of the river are selected, with one section treated with the new method and the other section left untreated. The water quality is measured before and after the intervention, and the results are compared between the two sections.
  • Example in Marketing research: A study that investigates the impact of a new advertising campaign on consumer behavior. Participants are randomly assigned to either a group that is exposed to the new campaign or a group that is not. Their behavior, such as purchasing or product awareness, is measured and compared between the two groups.
  • Example in Social psychology: A study that examines the effect of a new social intervention on reducing prejudice towards a marginalized group. Participants are randomly assigned to either a group that receives the intervention or a control group that does not. Their attitudes and behavior towards the marginalized group are measured before and after the intervention, and the results are compared between the two groups.

When to use Experimental Research Design 

Experimental research design should be used when a researcher wants to establish a cause-and-effect relationship between variables. It is particularly useful when studying the impact of an intervention or treatment on a particular outcome.

Here are some situations where experimental research design may be appropriate:

  • When studying the effects of a new drug or medical treatment: Experimental research design is commonly used in medical research to test the effectiveness and safety of new drugs or medical treatments. By randomly assigning patients to treatment and control groups, researchers can determine whether the treatment is effective in improving health outcomes.
  • When evaluating the effectiveness of an educational intervention: An experimental research design can be used to evaluate the impact of a new teaching method or educational program on student learning outcomes. By randomly assigning students to treatment and control groups, researchers can determine whether the intervention is effective in improving academic performance.
  • When testing the effectiveness of a marketing campaign: An experimental research design can be used to test the effectiveness of different marketing messages or strategies. By randomly assigning participants to treatment and control groups, researchers can determine whether the marketing campaign is effective in changing consumer behavior.
  • When studying the effects of an environmental intervention: Experimental research design can be used to study the impact of environmental interventions, such as pollution reduction programs or conservation efforts. By randomly assigning locations or areas to treatment and control groups, researchers can determine whether the intervention is effective in improving environmental outcomes.
  • When testing the effects of a new technology: An experimental research design can be used to test the effectiveness and safety of new technologies or engineering designs. By randomly assigning participants or locations to treatment and control groups, researchers can determine whether the new technology is effective in achieving its intended purpose.

How to Conduct Experimental Research

Here are the steps to conduct Experimental Research:

  • Identify a Research Question: Start by identifying a research question that you want to answer through the experiment. The question should be clear, specific, and testable.
  • Develop a Hypothesis: Based on your research question, develop a hypothesis that predicts the relationship between the independent and dependent variables. The hypothesis should be clear and testable.
  • Design the Experiment: Determine the type of experimental design you will use, such as a between-subjects design or a within-subjects design. Also, decide on the experimental conditions, such as the number of independent variables, the levels of the independent variable, and the dependent variable to be measured.
  • Select Participants: Select the participants who will take part in the experiment. They should be representative of the population you are interested in studying.
  • Randomly Assign Participants to Groups: If you are using a between-subjects design, randomly assign participants to groups to control for individual differences.
  • Conduct the Experiment: Conduct the experiment by manipulating the independent variable(s) and measuring the dependent variable(s) across the different conditions.
  • Analyze the Data: Analyze the data using appropriate statistical methods to determine if there is a significant effect of the independent variable(s) on the dependent variable(s).
  • Draw Conclusions: Based on the data analysis, draw conclusions about the relationship between the independent and dependent variables. If the results support the hypothesis, then it is accepted. If the results do not support the hypothesis, then it is rejected.
  • Communicate the Results: Finally, communicate the results of the experiment through a research report or presentation. Include the purpose of the study, the methods used, the results obtained, and the conclusions drawn.

Purpose of Experimental Design 

The purpose of experimental design is to control and manipulate one or more independent variables to determine their effect on a dependent variable. Experimental design allows researchers to systematically investigate causal relationships between variables, and to establish cause-and-effect relationships between the independent and dependent variables. Through experimental design, researchers can test hypotheses and make inferences about the population from which the sample was drawn.

Experimental design provides a structured approach to designing and conducting experiments, ensuring that the results are reliable and valid. By carefully controlling for extraneous variables that may affect the outcome of the study, experimental design allows researchers to isolate the effect of the independent variable(s) on the dependent variable(s), and to minimize the influence of other factors that may confound the results.

Experimental design also allows researchers to generalize their findings to the larger population from which the sample was drawn. By randomly selecting participants and using statistical techniques to analyze the data, researchers can make inferences about the larger population with a high degree of confidence.

Overall, the purpose of experimental design is to provide a rigorous, systematic, and scientific method for testing hypotheses and establishing cause-and-effect relationships between variables. Experimental design is a powerful tool for advancing scientific knowledge and informing evidence-based practice in various fields, including psychology, biology, medicine, engineering, and social sciences.

Advantages of Experimental Design 

Experimental design offers several advantages in research. Here are some of the main advantages:

  • Control over extraneous variables: Experimental design allows researchers to control for extraneous variables that may affect the outcome of the study. By manipulating the independent variable and holding all other variables constant, researchers can isolate the effect of the independent variable on the dependent variable.
  • Establishing causality: Experimental design allows researchers to establish causality by manipulating the independent variable and observing its effect on the dependent variable. This allows researchers to determine whether changes in the independent variable cause changes in the dependent variable.
  • Replication: Experimental design allows researchers to replicate their experiments to ensure that the findings are consistent and reliable. Replication is important for establishing the validity and generalizability of the findings.
  • Random assignment: Experimental design often involves randomly assigning participants to conditions. This helps to ensure that individual differences between participants are evenly distributed across conditions, which increases the internal validity of the study.
  • Precision: Experimental design allows researchers to measure variables with precision, which can increase the accuracy and reliability of the data.
  • Generalizability: If the study is well-designed, experimental design can increase the generalizability of the findings. By controlling for extraneous variables and using random assignment, researchers can increase the likelihood that the findings will apply to other populations and contexts.

Limitations of Experimental Design

Experimental design has some limitations that researchers should be aware of. Here are some of the main limitations:

  • Artificiality: Experimental design often involves creating artificial situations that may not reflect real-world situations. This can limit the external validity of the findings, or the extent to which the findings can be generalized to real-world settings.
  • Ethical concerns: Some experimental designs may raise ethical concerns, particularly if they involve manipulating variables that could cause harm to participants or if they involve deception.
  • Participant bias: Participants in experimental studies may modify their behavior in response to the experiment, which can lead to participant bias.
  • Limited generalizability: The conditions of the experiment may not reflect the complexities of real-world situations. As a result, the findings may not be applicable to all populations and contexts.
  • Cost and time: Experimental design can be expensive and time-consuming, particularly if the experiment requires specialized equipment or if the sample size is large.
  • Researcher bias: Researchers may unintentionally bias the results of the experiment if they have expectations or preferences for certain outcomes.
  • Lack of feasibility: Experimental design may not be feasible in some cases, particularly if the research question involves variables that cannot be manipulated or controlled.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



A Quick Guide to Experimental Design | 5 Steps & Examples

Published on 11 April 2022 by Rebecca Bevans. Revised on 5 December 2022.

Experiments are used to study causal relationships . You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design means creating a set of procedures to systematically test a hypothesis . A good experimental design requires a strong understanding of the system you are studying. 

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Frequently asked questions about experimental design

You should begin with a specific research question . We will work with two research question examples, one from health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables .

Then you need to think about possible extraneous and confounding variables and consider how you might control  them in your experiment.

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.


Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

The next steps will describe how to design a controlled experiment . In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalised and applied to the broader world.

First, you may need to decide how widely to vary your independent variable. In the soil-warming example, for instance, you could increase the temperature:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results. In the phone use example, for instance, you could treat phone use as:

  • a categorical variable : either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size : how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power , which determines how much confidence you can have in your results.
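
As an illustration of planning study size, the sketch below uses statsmodels to estimate how many subjects per group a two-sample t-test would need, assuming a hypothetical medium effect size (Cohen's d = 0.5), alpha = 0.05, and a target power of 0.80.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Roughly {n_per_group:.0f} subjects per group would be needed.")
```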

Then you need to randomly assign your subjects to treatment groups . Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group , which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomised design vs a randomised block design .
  • A between-subjects design vs a within-subjects design .

Randomisation

An experiment can be completely randomised or randomised within blocks (aka strata):

  • In a completely randomised design , every subject is assigned to a treatment group at random.
  • In a randomised block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.

Sometimes randomisation isn’t practical or ethical , so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design .

Between-subjects vs within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomising or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimise bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalised to turn them into measurable observations. To measure hours of sleep, for example, you could:

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

Experimental designs are a set of procedures that you plan in order to examine the relationship between variables that interest you.

To design a successful experiment, first identify:

  • A testable hypothesis
  • One or more independent variables that you will manipulate
  • One or more dependent variables that you will measure

When designing the experiment, first decide:

  • How your variable(s) will be manipulated
  • How you will control for any potential confounding or lurking variables
  • How many subjects you will include
  • How you will assign treatments to your subjects

The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.

Cite this Scribbr article


Bevans, R. (2022, December 05). A Quick Guide to Experimental Design | 5 Steps & Examples. Scribbr. Retrieved 31 May 2024, from https://www.scribbr.co.uk/research-methods/guide-to-experimental-design/



10 Experimental research

Experimental research—often considered to be the ‘gold standard’ in research designs—is one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity (causality) due to its ability to link cause and effect through treatment manipulation, while controlling for the spurious effect of extraneous variables.

Experimental research is best suited for explanatory research—rather than for descriptive or exploratory research—where the goal of the study is to examine cause-effect relationships. It also works well for research that involves a relatively limited and well-defined set of independent variables that can either be manipulated or controlled. Experimental research can be conducted in laboratory or field settings. Laboratory experiments , conducted in laboratory (artificial) settings, tend to be high in internal validity, but this comes at the cost of low external validity (generalisability), because the artificial (laboratory) setting in which the study is conducted may not reflect the real world. Field experiments are conducted in field settings such as in a real organisation, and are high in both internal and external validity. But such experiments are relatively rare, because of the difficulties associated with manipulating treatments and controlling for extraneous effects in a field setting.

Experimental research can be grouped into two broad categories: true experimental designs and quasi-experimental designs. Both designs require treatment manipulation, but while true experiments also require random assignment, quasi-experiments do not. Sometimes, we also refer to non-experimental research, which is not really a research design, but an all-inclusive term that includes all types of research that do not employ treatment manipulation or random assignment, such as survey research, observational research, and correlational studies.

Basic concepts

Treatment and control groups. In experimental research, some subjects are administered one or more experimental stimulus called a treatment (the treatment group ) while other subjects are not given such a stimulus (the control group ). The treatment may be considered successful if subjects in the treatment group rate more favourably on outcome variables than control group subjects. Multiple levels of experimental stimulus may be administered, in which case, there may be more than one treatment group. For example, in order to test the effects of a new drug intended to treat a certain medical condition like dementia, if a sample of dementia patients is randomly divided into three groups, with the first group receiving a high dosage of the drug, the second group receiving a low dosage, and the third group receiving a placebo such as a sugar pill (control group), then the first two groups are experimental groups and the third group is a control group. After administering the drug for a period of time, if the condition of the experimental group subjects improved significantly more than the control group subjects, we can say that the drug is effective. We can also compare the conditions of the high and low dosage experimental groups to determine if the high dose is more effective than the low dose.

Treatment manipulation. Treatments are the unique feature of experimental research that sets this design apart from all other research methods. Treatment manipulation helps control for the ‘cause’ in cause-effect relationships. Naturally, the validity of experimental research depends on how well the treatment was manipulated. Treatment manipulation must be checked using pretests and pilot tests prior to the experimental study. Any measurements conducted before the treatment is administered are called pretest measures , while those conducted after the treatment are posttest measures .

Random selection and assignment. Random selection is the process of randomly drawing a sample from a population or a sampling frame. This approach is typically employed in survey research, and ensures that each unit in the population has a positive chance of being selected into the sample. Random assignment, however, is a process of randomly assigning subjects to experimental or control groups. This is a standard practice in true experimental research to ensure that treatment groups are similar (equivalent) to each other and to the control group prior to treatment administration. Random selection is related to sampling, and is therefore more closely related to the external validity (generalisability) of findings. However, random assignment is related to design, and is therefore most related to internal validity. It is possible to have both random selection and random assignment in well-designed experimental research, but quasi-experimental research involves neither random selection nor random assignment.

Threats to internal validity. Although experimental designs are considered more rigorous than other research methods in terms of the internal validity of their inferences (by virtue of their ability to control causes through treatment manipulation), they are not immune to internal validity threats. Some of these threats to internal validity are described below, within the context of a study of the impact of a special remedial math tutoring program for improving the math abilities of high school students.

History threat is the possibility that the observed effects (dependent variables) are caused by extraneous or historical events rather than by the experimental treatment. For instance, students’ post-remedial math score improvement may have been caused by their preparation for a math exam at their school, rather than the remedial math program.

Maturation threat refers to the possibility that observed effects are caused by natural maturation of subjects (e.g., a general improvement in their intellectual ability to understand complex concepts) rather than the experimental treatment.

Testing threat is a threat in pre-post designs where subjects’ posttest responses are conditioned by their pretest responses. For instance, if students remember their answers from the pretest evaluation, they may tend to repeat them in the posttest exam. Not conducting a pretest can help avoid this threat.

Instrumentation threat , which also occurs in pre-post designs, refers to the possibility that the difference between pretest and posttest scores is not due to the remedial math program, but due to changes in the administered test, such as the posttest having a higher or lower degree of difficulty than the pretest.

Mortality threat refers to the possibility that subjects may be dropping out of the study at differential rates between the treatment and control groups due to a systematic reason, such that the dropouts were mostly students who scored low on the pretest. If the low-performing students drop out, the results of the posttest will be artificially inflated by the preponderance of high-performing students.

Regression threat —also called a regression to the mean—refers to the statistical tendency of a group’s overall performance to regress toward the mean during a posttest rather than in the anticipated direction. For instance, if subjects scored high on a pretest, they will have a tendency to score lower on the posttest (closer to the mean) because their high scores (away from the mean) during the pretest were possibly a statistical aberration. This problem tends to be more prevalent in non-random samples and when the two measures are imperfectly correlated.

Two-group experimental designs


Pretest-posttest control group design . In this design, subjects are randomly assigned to treatment and control groups, subjected to an initial (pretest) measurement of the dependent variables of interest, the treatment group is administered a treatment (representing the independent variable of interest), and the dependent variables measured again (posttest). The notation of this design is shown in Figure 10.1.

Pretest-posttest control group design

Statistical analysis of this design involves a simple analysis of variance (ANOVA) between the treatment and control groups. The pretest-posttest design handles several threats to internal validity, such as maturation, testing, and regression, since these threats can be expected to influence both treatment and control groups in a similar (random) manner. The selection threat is controlled via random assignment. However, additional threats to internal validity may exist. For instance, mortality can be a problem if there are differential dropout rates between the two groups, and the pretest measurement may bias the posttest measurement—especially if the pretest introduces unusual topics or content.

Posttest-only control group design. This design is a simpler version of the pretest-posttest design where pretest measurements are omitted. The design notation is shown in Figure 10.2.

Posttest-only control group design

The treatment effect is measured simply as the difference in the posttest scores between the two groups:

\[E = (O_{1} - O_{2})\,.\]

The appropriate statistical analysis of this design is also a two-group analysis of variance (ANOVA). The simplicity of this design makes it more attractive than the pretest-posttest design in terms of internal validity. This design controls for maturation, testing, regression, selection, and pretest-posttest interaction, though the mortality threat may continue to exist.
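
As a small illustration, with only two groups a one-way ANOVA on the posttest scores is equivalent to an independent-samples t-test (F equals t squared); the scores below are made up.

```python
from scipy import stats

# Made-up posttest scores for the two groups.
treatment_posttest = [78, 82, 85, 80, 88, 84]
control_posttest   = [70, 74, 73, 69, 75, 72]

f_stat, p_anova = stats.f_oneway(treatment_posttest, control_posttest)
t_stat, p_ttest = stats.ttest_ind(treatment_posttest, control_posttest)

print(f"ANOVA : F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"t-test: t = {t_stat:.2f}, p = {p_ttest:.4f}")   # note F == t**2
```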

Covariance design

In this design, a covariate (a variable that is not of central interest but may influence the dependent variable) is measured before the treatment is administered, in place of a conventional pretest.

Because the pretest measure is not a measurement of the dependent variable, but rather a covariate, the treatment effect is measured as the difference in the posttest scores between the treatment and control groups:

\[E = (O_{1} - O_{2})\,.\]

Due to the presence of covariates, the right statistical analysis of this design is a two-group analysis of covariance (ANCOVA). This design has all the advantages of the posttest-only design, but with improved internal validity due to the controlling of covariates. Covariance designs can also be extended to pretest-posttest control group designs.
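
A minimal sketch of such an ANCOVA using the statsmodels formula API is shown below, with made-up posttest scores, a pretest covariate, and a group indicator.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Made-up pretest covariate, posttest outcome, and group labels.
data = pd.DataFrame({
    "posttest": [78, 82, 85, 80, 88, 70, 74, 73, 69, 75],
    "pretest":  [60, 65, 68, 62, 70, 61, 66, 64, 60, 67],
    "group":    ["treatment"] * 5 + ["control"] * 5,
})

# Posttest as a function of group, adjusting for the pretest covariate.
model = smf.ols("posttest ~ C(group) + pretest", data=data).fit()
print(anova_lm(model, typ=2))
```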

Factorial designs

Two-group designs are inadequate if your research requires manipulation of two or more independent variables (treatments). In such cases, you would need four or higher-group designs. Such designs, quite popular in experimental research, are commonly called factorial designs. Each independent variable in this design is called a factor , and each subdivision of a factor is called a level . Factorial designs enable the researcher to examine not only the individual effect of each treatment on the dependent variables (called main effects), but also their joint effect (called interaction effects).

The simplest case is a 2 × 2 factorial design: for example, crossing two types of instruction with two amounts of weekly instructional time (one and a half hours versus three hours per week) to examine their effects on learning outcomes.

In a factorial design, a main effect is said to exist if the dependent variable shows a significant difference between multiple levels of one factor, at all levels of other factors. No change in the dependent variable across factor levels is the null case (baseline), from which main effects are evaluated. In the above example, you may see a main effect of instructional type, instructional time, or both on learning outcomes. An interaction effect exists when the effect of differences in one factor depends upon the level of a second factor. In our example, if the effect of instructional type on learning outcomes is greater for three hours/week of instructional time than for one and a half hours/week, then we can say that there is an interaction effect between instructional type and instructional time on learning outcomes. Note that the presence of interaction effects dominate and make main effects irrelevant, and it is not meaningful to interpret main effects if interaction effects are significant.
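
As an illustration, the sketch below runs a two-way ANOVA with statsmodels on a made-up 2 × 2 dataset (instructional type crossed with instructional time), producing tests for both main effects and the interaction.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Made-up learning outcomes for a 2 x 2 design: instructional type
# (lecture vs online) crossed with weekly instructional time (1.5h vs 3h).
data = pd.DataFrame({
    "outcome": [70, 72, 68, 75, 77, 74, 80, 78, 82, 90, 92, 95],
    "itype":   ["lecture"] * 6 + ["online"] * 6,
    "time":    (["1.5h"] * 3 + ["3h"] * 3) * 2,
})

# Main effects of each factor plus their interaction.
model = smf.ols("outcome ~ C(itype) * C(time)", data=data).fit()
print(anova_lm(model, typ=2))
```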

Hybrid experimental designs

Hybrid designs are those that are formed by combining features of more established designs. Three such hybrid designs are the randomised block design, the Solomon four-group design, and the switched replications design.

Randomised block design. This is a variation of the posttest-only or pretest-posttest control group design where the subject population can be grouped into relatively homogeneous subgroups (called blocks ) within which the experiment is replicated. For instance, if you want to replicate the same posttest-only design among university students and full-time working professionals (two homogeneous blocks), subjects in both blocks are randomly split between the treatment group (receiving the same treatment) and the control group (see Figure 10.5). The purpose of this design is to reduce the ‘noise’ or variance in data that may be attributable to differences between the blocks so that the actual effect of interest can be detected more accurately.

Randomised blocks design

Solomon four-group design . In this design, the sample is divided into two treatment groups and two control groups. One treatment group and one control group receive the pretest, and the other two groups do not. This design represents a combination of posttest-only and pretest-posttest control group design, and is intended to test for the potential biasing effect of pretest measurement on posttest measures that tends to occur in pretest-posttest designs, but not in posttest-only designs. The design notation is shown in Figure 10.6.

Solomon four-group design

Switched replication design . This is a two-group design implemented in two phases with three waves of measurement. The treatment group in the first phase serves as the control group in the second phase, and the control group in the first phase becomes the treatment group in the second phase, as illustrated in Figure 10.7. In other words, the original design is repeated or replicated temporally with treatment/control roles switched between the two groups. By the end of the study, all participants will have received the treatment either during the first or the second phase. This design is most feasible in organisational contexts where organisational programs (e.g., employee training) are implemented in a phased manner or are repeated at regular intervals.

Switched replication design

Quasi-experimental designs

Quasi-experimental designs are almost identical to true experimental designs, but lacking one key ingredient: random assignment. For instance, one entire class section or one organisation is used as the treatment group, while another section of the same class or a different organisation in the same industry is used as the control group. This lack of random assignment potentially results in groups that are non-equivalent, such as one group possessing greater mastery of certain content than the other group, say by virtue of having a better teacher in a previous semester, which introduces the possibility of selection bias. Quasi-experimental designs are therefore inferior to true experimental designs in internal validity due to the presence of a variety of selection-related threats such as selection-maturation threat (the treatment and control groups maturing at different rates), selection-history threat (the treatment and control groups being differentially impacted by extraneous or historical events), selection-regression threat (the treatment and control groups regressing toward the mean between pretest and posttest at different rates), selection-instrumentation threat (the treatment and control groups responding differently to the measurement), selection-testing (the treatment and control groups responding differently to the pretest), and selection-mortality (the treatment and control groups demonstrating differential dropout rates). Given these selection threats, it is generally preferable to avoid quasi-experimental designs to the greatest extent possible.

The most common quasi-experimental design is the nonequivalent groups design (NEGD), a pretest-posttest two-group design in which the treatment and control groups are not created by random assignment.

In addition, there are quite a few unique non-equivalent designs without corresponding true experimental design cousins. Some of the more useful of these designs are discussed next.

Regression discontinuity (RD) design . This is a non-equivalent pretest-posttest design where subjects are assigned to the treatment or control group based on a cut-off score on a preprogram measure. For instance, patients who are severely ill may be assigned to a treatment group to test the efficacy of a new drug or treatment protocol and those who are mildly ill are assigned to the control group. In another example, students who are lagging behind on standardised test scores may be selected for a remedial curriculum program intended to improve their performance, while those who score high on such tests are not selected from the remedial program.

RD design

Because of the use of a cut-off score, it is possible that the observed results may be a function of the cut-off score rather than the treatment, which introduces a new threat to internal validity. However, using the cut-off score also ensures that limited or costly resources are distributed to people who need them the most, rather than randomly across a population, while simultaneously allowing a quasi-experimental treatment. The control group scores in the RD design do not serve as a benchmark for comparing treatment group scores, given the systematic non-equivalence between the two groups. Rather, if there is no discontinuity between pretest and posttest scores in the control group, but such a discontinuity persists in the treatment group, then this discontinuity is viewed as evidence of the treatment effect.

Proxy pretest design . This design, shown in Figure 10.11, looks very similar to the standard NEGD (pretest-posttest) design, with one critical difference: the pretest score is collected after the treatment is administered. A typical application of this design is when a researcher is brought in to test the efficacy of a program (e.g., an educational program) after the program has already started and pretest data is not available. Under such circumstances, the best option for the researcher is often to use a different prerecorded measure, such as students’ grade point average before the start of the program, as a proxy for pretest data. A variation of the proxy pretest design is to use subjects’ posttest recollection of pretest data, which may be subject to recall bias, but nevertheless may provide a measure of perceived gain or change in the dependent variable.

Proxy pretest design

Separate pretest-posttest samples design . This design is useful if it is not possible to collect pretest and posttest data from the same subjects for some reason. As shown in Figure 10.12, there are four groups in this design, but two groups come from a single non-equivalent group, while the other two groups come from a different non-equivalent group. For instance, say you want to test customer satisfaction with a new online service that is implemented in one city but not in another. In this case, customers in the first city serve as the treatment group and those in the second city constitute the control group. If it is not possible to obtain pretest and posttest measures from the same customers, you can measure customer satisfaction at one point in time, implement the new service program, and measure customer satisfaction (with a different set of customers) after the program is implemented. Customer satisfaction is also measured in the control group at the same times as in the treatment group, but without the new program implementation. The design is not particularly strong, because you cannot examine the changes in any specific customer’s satisfaction score before and after the implementation, but you can only examine average customer satisfaction scores. Despite the lower internal validity, this design may still be a useful way of collecting quasi-experimental data when pretest and posttest data is not available from the same subjects.

Separate pretest-posttest samples design

The nonequivalent dependent variable (NEDV) design is a single-group pretest-posttest design with two outcome measures: one expected to change because of the treatment and a comparable one that is not, with the second measure serving as a proxy control. An interesting variation of the NEDV design is a pattern-matching NEDV design, which employs multiple outcome variables and a theory that explains how much each variable will be affected by the treatment. The researcher can then examine if the theoretical prediction is matched in actual observations. This pattern-matching technique—based on the degree of correspondence between theoretical and observed patterns—is a powerful way of alleviating internal validity concerns in the original NEDV design.

NEDV design

Perils of experimental research

Experimental research is one of the most difficult of research designs, and should not be taken lightly. This type of research is often beset with a multitude of methodological problems. First, though experimental research requires theories for framing hypotheses for testing, much of current experimental research is atheoretical. Without theories, the hypotheses being tested tend to be ad hoc, possibly illogical, and meaningless. Second, many of the measurement instruments used in experimental research are not tested for reliability and validity, and are incomparable across studies. Consequently, results generated using such instruments are also incomparable. Third, experimental research often uses inappropriate research designs, such as irrelevant dependent variables, no interaction effects, no experimental controls, and non-equivalent stimuli across treatment groups. Findings from such studies tend to lack internal validity and are highly suspect. Fourth, the treatments (tasks) used in experimental research may be diverse, incomparable, and inconsistent across studies, and sometimes inappropriate for the subject population. For instance, undergraduate student subjects are often asked to pretend that they are marketing managers and asked to perform a complex budget allocation task in which they have no experience or expertise. The use of such inappropriate tasks introduces new threats to internal validity (i.e., subjects’ performance may be an artefact of the content or difficulty of the task setting), generates findings that are non-interpretable and meaningless, and makes integration of findings across studies impossible.

The design of proper experimental treatments is a very important task in experimental design, because the treatment is the raison d’etre of the experimental method, and must never be rushed or neglected. To design an adequate and appropriate task, researchers should use prevalidated tasks if available, conduct treatment manipulation checks to check for the adequacy of such tasks (by debriefing subjects after performing the assigned task), conduct pilot tests (repeatedly, if necessary), and if in doubt, use tasks that are simple and familiar for the respondent sample rather than tasks that are complex or unfamiliar.

In summary, this chapter introduced key concepts in the experimental design research method and introduced a variety of true experimental and quasi-experimental designs. Although these designs vary widely in internal validity, designs with less internal validity should not be overlooked and may sometimes be useful under specific circumstances and empirical contingencies.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Enago Academy

Experimental Research Design — 6 mistakes you should never make!


Since their school days, students have performed scientific experiments whose results demonstrate and test the laws and theorems of science. These experiments rest on a strong foundation of experimental research design.

An experimental research design helps researchers execute their research objectives with more clarity and transparency.

In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.

Table of Contents

What Is Experimental Research Design?

Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach using two sets of variables. Herein, the first set of variables acts as a constant, used to measure the differences of the second set. The best example of experimental research methods is quantitative research .

Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.

When Can a Researcher Conduct Experimental Research?

A researcher can conduct experimental research in the following situations —

  • When time is an important factor in establishing a relationship between the cause and effect.
  • When there is an invariable or never-changing behavior between the cause and effect.
  • Finally, when the researcher wishes to understand the importance of the cause and effect.

Importance of Experimental Research Design

To publish significant results, choosing a quality research design forms the foundation on which to build the research study. Moreover, an effective research design helps establish quality decision-making procedures, structures the research for easier data analysis, and addresses the main research question. Therefore, it is essential to devote undivided attention and time to creating an experimental research design before beginning the practical experiment.

By creating a research design, a researcher is also giving oneself time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, one could also avoid inconclusive results. If any part of the research design is flawed, it will reflect on the quality of the results derived.

Types of Experimental Research Designs

Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:

1. Pre-experimental Research Design

A researcher conducts a pre-experimental research design when a group, or several groups, are kept under observation after the cause-and-effect factors of the research have been applied. The pre-experimental design helps researchers understand whether further investigation of the groups under observation is necessary.

Pre-experimental research is of three types —

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Research Design

A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. However, in a true experiment, a researcher must satisfy these three factors —

  • There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
  • A variable that can be manipulated by the researcher
  • Random distribution of the variables

This type of experimental research is commonly observed in the physical sciences.
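As a minimal sketch of the random-distribution requirement (participant IDs and group sizes below are hypothetical), the snippet shuffles a participant list and splits it into a control group and an experimental group, so every participant has an equal chance of landing in either condition.

    # Hypothetical random assignment to control and experimental groups
    import random

    participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical subjects
    random.shuffle(participants)                        # randomize the order

    midpoint = len(participants) // 2
    control_group = participants[:midpoint]        # not subjected to changes
    experimental_group = participants[midpoint:]   # receives the changed variable

    print("Control:", control_group)
    print("Experimental:", experimental_group)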

3. Quasi-experimental Research Design

The word “Quasi” means similarity. A quasi-experimental design is similar to a true experimental design. However, the difference between the two is the assignment of the control group. In this research design, an independent variable is manipulated, but the participants of a group are not randomly assigned. This type of research design is used in field settings where random assignment is either irrelevant or not required.

The classification of the research subjects, conditions, or groups determines the type of research design to be used.


Advantages of Experimental Research

Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:

  • Researchers have firm control over variables to obtain results.
  • The subject does not impact the effectiveness of experimental research. Anyone can implement it for research purposes.
  • The results are specific.
  • After the results are analyzed, findings from the same dataset can be repurposed for similar research ideas.
  • Researchers can identify the cause and effect of the hypothesis and further analyze this relationship to determine in-depth ideas.
  • Experimental research makes an ideal starting point. The collected data could be used as a foundation to build new research ideas for further studies.

6 Mistakes to Avoid While Designing Your Research

There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.

1. Invalid Theoretical Framework

Researchers often fail to check whether their hypothesis is logically testable. If your research design is not built on basic assumptions or postulates, it is fundamentally flawed, and you need to rework your research framework.

2. Inadequate Literature Study

Without a comprehensive research literature review, it is difficult to identify and fill the knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the research field, either by adding value to the pertinent literature or challenging previous findings and assumptions.

3. Insufficient or Incorrect Statistical Analysis

Statistical results are among the most trusted forms of scientific evidence. The ultimate goal of a research experiment is to produce valid and reliable evidence. Therefore, incorrect statistical analysis can undermine the quality of any quantitative research.

4. Undefined Research Problem

This is one of the most basic aspects of research design. The research problem statement must be clear and to do that, you must set the framework for the development of research questions that address the core problems.

5. Research Limitations

Every study has some limitations. You should anticipate those limitations and incorporate them into your conclusion, as well as into the basic research design. Include a statement in your manuscript about any perceived limitations and how you accounted for them while designing your experiment and drawing your conclusions.

6. Ethical Implications

Ethical issues are among the most important yet least discussed aspects of research design. Your research design must include ways to minimize any risk to your participants while still addressing the research problem or question at hand. If you cannot uphold ethical norms alongside your research study, your research objectives and validity could be questioned.

Experimental Research Design Example

In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.).

By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.
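A rough sketch of that example in code is given below (sample counts and chlorophyll numbers are invented for illustration): samples are randomly split into a sunlight group and a dark group, and a t-test compares a biochemical outcome between the two halves.

    # Hypothetical version of the plant example: randomize, treat, compare
    import random
    from scipy.stats import ttest_ind

    samples = [f"plant_{i}" for i in range(40)]
    random.shuffle(samples)
    sunlight_group, dark_group = samples[:20], samples[20:]

    # Illustrative measurements collected after the treatment period
    chlorophyll_sun = [random.gauss(30, 3) for _ in sunlight_group]
    chlorophyll_dark = [random.gauss(22, 3) for _ in dark_group]

    t_stat, p_value = ttest_ind(chlorophyll_sun, chlorophyll_dark)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # With nutrients, water, and soil held constant and assignment random,
    # a reliable difference can be attributed to the sunlight manipulation.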

Experimental research is often the final stage of the research process and is considered to provide conclusive, specific results. However, it is not suited to every study: it demands substantial resources, time, and money, and it is difficult to conduct without a solid research foundation. Even so, it is widely used in research institutes and commercial industries because of the conclusive results it delivers within the scientific approach.

Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!

Frequently Asked Questions

Randomization is important in experimental research because it helps ensure unbiased results. It also allows the cause-effect relationship to be measured within the group of interest.

Experimental research design lays the foundation of a study and structures the research to support a quality decision-making process.

There are three types of experimental research designs: pre-experimental, true experimental, and quasi-experimental research designs.

The differences between an experimental and a quasi-experimental design are: 1. In quasi-experimental research, assignment to the control group is non-random, whereas in a true experimental design it is random. 2. Experimental research always has a control group, whereas one may not always be present in quasi-experimental research.

Experimental research establishes a cause-effect relationship by testing a theory or hypothesis using experimental groups or control variables. In contrast, descriptive research describes a study or a topic by defining the variables under it and answering the questions related to the same.



Experimental Research: What it is + Types of designs


Any research conducted under scientifically acceptable conditions uses experimental methods. The success of experimental studies hinges on researchers confirming that the change in a variable is based solely on the manipulation of another variable, with everything else held constant. The research should establish a notable cause-and-effect relationship.

What is Experimental Research?

Experimental research is a study conducted with a scientific approach using two sets of variables. The first set acts as a constant, which you use to measure the differences of the second set. Quantitative research methods , for example, are experimental.

If you don’t have enough data to support your decisions, you must first determine the facts. This research gathers the data necessary to help you make better decisions.

You can conduct experimental research in the following situations:

  • Time is a vital factor in establishing a relationship between cause and effect.
  • Invariable behavior between cause and effect.
  • You wish to understand the importance of cause and effect.

Experimental Research Design Types

The classic experimental design definition is: “The methods used to collect data in experimental studies.”

There are three primary types of experimental design:

  • Pre-experimental research design
  • True experimental research design
  • Quasi-experimental research design

The way you classify research subjects based on conditions or groups determines the type of research design  you should use.

1. Pre-Experimental Design

A group, or various groups, are kept under observation after implementing cause and effect factors. You’ll conduct this research to understand whether further investigation is necessary for these particular groups.

You can break down pre-experimental research further into three types:

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Design

It relies on statistical analysis to prove or disprove a hypothesis, making it the most accurate form of research. Of the types of experimental design, only true design can establish a cause-effect relationship within a group. In a true experiment, three factors need to be satisfied:

  • There is a Control Group, which won’t be subject to changes, and an Experimental Group, which will experience the changed variables.
  • A variable that can be manipulated by the researcher
  • Random distribution

This experimental research method commonly occurs in the physical sciences.

3. Quasi-Experimental Design

The word “Quasi” indicates similarity. A quasi-experimental design is similar to an experimental one, but it is not the same. The difference between the two is the assignment of a control group. In this research, an independent variable is manipulated, but the participants of a group are not randomly assigned. Quasi-research is used in field settings where random assignment is either irrelevant or not required.

Importance of Experimental Design

Experimental research is a powerful tool for understanding cause-and-effect relationships. It allows us to manipulate variables and observe the effects, which is crucial for understanding how different factors influence the outcome of a study.

But the importance of experimental research goes beyond that. It’s a critical method for many scientific and academic studies. It allows us to test theories, develop new products, and make groundbreaking discoveries.

For example, this research is essential for developing new drugs and medical treatments. Researchers can understand how a new drug works by manipulating dosage and administration variables and identifying potential side effects.

Similarly, experimental research is used in the field of psychology to test theories and understand human behavior. By manipulating variables such as stimuli, researchers can gain insights into how the brain works and identify new treatment options for mental health disorders.

It is also widely used in the field of education. It allows educators to test new teaching methods and identify what works best. By manipulating variables such as class size, teaching style, and curriculum, researchers can understand how students learn and identify new ways to improve educational outcomes.

In addition, experimental research is a powerful tool for businesses and organizations. By manipulating variables such as marketing strategies, product design, and customer service, companies can understand what works best and identify new opportunities for growth.

Advantages of Experimental Research

When talking about this research, we can think of human life. Babies do their own rudimentary experiments (such as putting objects in their mouths) to learn about the world around them, while older children and teens do experiments at school to learn more about science.

Scientists throughout history have used this type of research to test their hypotheses. For example, Galileo Galilei and Antoine Lavoisier conducted experiments to discover key concepts in physics and chemistry. The same is true of modern experts, who use this scientific method to see whether new drugs are effective, discover treatments for diseases, and create new electronic devices (among other things).

It’s vital to test new ideas or theories. Why put time, effort, and funding into something that may not work?

This research allows you to test your idea in a controlled environment before marketing. It also provides the best method to test your theory thanks to the following advantages:


  • Researchers have a stronger hold over variables to obtain desired results.
  • The subject or industry does not impact the effectiveness of experimental research. Any industry can implement it for research purposes.
  • The results are specific.
  • After analyzing the results, you can apply your findings to similar ideas or situations.
  • You can identify the cause and effect of a hypothesis. Researchers can further analyze this relationship to determine more in-depth ideas.
  • Experimental research makes an ideal starting point. The data you collect is a foundation for building more ideas and conducting more action research .

Whether you want to know how the public will react to a new product or if a certain food increases the chance of disease, experimental research is the best place to start. Begin your research by finding subjects using  QuestionPro Audience  and other tools today.


Experimental Studies and Observational Studies


Experimental studies: experiments, randomized controlled trials (RCTs); observational studies: non-experimental studies, non-manipulation studies, naturalistic studies

Definitions

The experimental study is a powerful methodology for testing causal relations between one or more explanatory variables (i.e., independent variables) and one or more outcome variables (i.e., dependent variable). In order to accomplish this goal, experiments have to meet three basic criteria: (a) experimental manipulation (variation) of the independent variable(s), (b) randomization – the participants are randomly assigned to one of the experimental conditions, and (c) experimental control for the effect of third variables by eliminating them or keeping them constant.
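A small sketch of these three criteria follows (participant IDs and the age strata are hypothetical): the independent variable is manipulated across two conditions, participants are randomized, and a third variable, age group, is controlled by randomizing within each stratum so that it cannot differ systematically between conditions.

    # Hypothetical stratified randomization: manipulate, randomize, control
    import random

    participants = {
        "younger": [f"Y{i}" for i in range(1, 9)],
        "older": [f"O{i}" for i in range(1, 9)],
    }

    conditions = {"experimental": [], "control": []}
    for stratum, ids in participants.items():
        random.shuffle(ids)          # randomization within the age stratum
        half = len(ids) // 2
        conditions["experimental"] += ids[:half]   # receive the manipulation
        conditions["control"] += ids[half:]        # no manipulation
    print(conditions)  # age is balanced across conditions by construction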

In observational studies, investigators observe or assess individuals without manipulation or intervention. Observational studies are used for assessing the mean levels, the natural variation, and the structure of variables, as well as...



Pinquart, M. (2021). Experimental Studies and Observational Studies. In: Gu, D., Dupre, M.E. (eds) Encyclopedia of Gerontology and Population Aging. Springer, Cham. https://doi.org/10.1007/978-3-030-22009-9_573



How the Experimental Method Works in Psychology

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Amanda Tust is a fact-checker, researcher, and writer with a Master of Science in Journalism from Northwestern University's Medill School of Journalism.


The experimental method is a type of research procedure that involves manipulating variables to determine if there is a cause-and-effect relationship. The results obtained through the experimental method are useful but do not prove with 100% certainty that a singular cause always creates a specific effect. Instead, they show the probability that a cause will or will not lead to a particular effect.

At a Glance

While there are many different research techniques available, the experimental method allows researchers to look at cause-and-effect relationships. Using the experimental method, researchers randomly assign participants to a control or experimental group and manipulate levels of an independent variable. If changes in the independent variable lead to changes in the dependent variable, it indicates there is likely a causal relationship between them.

What Is the Experimental Method in Psychology?

The experimental method involves manipulating one variable to determine if this causes changes in another variable. This method relies on controlled research methods and random assignment of study subjects to test a hypothesis.

For example, researchers may want to learn how different visual patterns impact our perception, or whether certain actions can improve memory. Experiments are conducted on a wide range of behavioral topics.

The scientific method forms the basis of the experimental method. This is a process used to determine the relationship between two variables—in this case, to explain human behavior .

Positivism is also important in the experimental method. It refers to factual knowledge that is obtained through observation, which is considered to be trustworthy.

When using the experimental method, researchers first identify and define key variables. Then they formulate a hypothesis, manipulate the variables, and collect data on the results. Unrelated or irrelevant variables are carefully controlled to minimize the potential impact on the experiment outcome.

History of the Experimental Method

The idea of using experiments to better understand human psychology began toward the end of the nineteenth century. Wilhelm Wundt established the first formal laboratory in 1879.

Wundt is often called the father of experimental psychology. He believed that experiments could help explain how psychology works, and used this approach to study consciousness .

Wundt coined the term "physiological psychology." This is a hybrid of physiology and psychology, or how the body affects the brain.

Other early contributors to the development and evolution of experimental psychology as we know it today include:

  • Gustav Fechner (1801-1887), who helped develop procedures for measuring sensations according to the size of the stimulus
  • Hermann von Helmholtz (1821-1894), who analyzed philosophical assumptions through research in an attempt to arrive at scientific conclusions
  • Franz Brentano (1838-1917), who called for a combination of first-person and third-person research methods when studying psychology
  • Georg Elias Müller (1850-1934), who performed an early experiment on attitude which involved the sensory discrimination of weights and revealed how anticipation can affect this discrimination

Key Terms to Know

To understand how the experimental method works, it is important to know some key terms.

Dependent Variable

The dependent variable is the effect that the experimenter is measuring. If a researcher was investigating how sleep influences test scores, for example, the test scores would be the dependent variable.

Independent Variable

The independent variable is the variable that the experimenter manipulates. In the previous example, the amount of sleep an individual gets would be the independent variable.

Hypothesis

A hypothesis is a tentative statement or a guess about the possible relationship between two or more variables. In looking at how sleep influences test scores, the researcher might hypothesize that people who get more sleep will perform better on a math test the following day. The purpose of the experiment, then, is to either support or reject this hypothesis.

Operational Definitions

Operational definitions are necessary when performing an experiment. When we say that something is an independent or dependent variable, we must have a very clear and specific definition of the meaning and scope of that variable.
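To make the idea concrete, here is a brief sketch (the 8-hour cutoff, the records, and the function name are all hypothetical choices) that operationalizes the sleep example: the independent variable is defined as sleeping at least eight hours the night before, and the dependent variable is the score on a math test the next day.

    # Hypothetical operational definitions for the sleep-and-test-scores example
    def sleep_condition(hours_slept: float) -> str:
        """Operationalize the IV: classify a participant's sleep level."""
        return "more_sleep" if hours_slept >= 8 else "less_sleep"

    # Illustrative records: (hours slept, math test score out of 100)
    records = [(8.5, 82), (6.0, 70), (9.0, 88), (5.5, 65), (8.0, 79), (7.0, 74)]

    groups = {"more_sleep": [], "less_sleep": []}
    for hours, score in records:
        groups[sleep_condition(hours)].append(score)

    for label, scores in groups.items():
        print(label, sum(scores) / len(scores))  # mean DV for each IV level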

Extraneous Variables

Extraneous variables are other variables that may also affect the outcome of an experiment. Types of extraneous variables include participant variables, situational variables, demand characteristics, and experimenter effects. In some cases, researchers can take steps to control for extraneous variables.

Demand Characteristics

Demand characteristics are subtle hints that indicate what an experimenter is hoping to find in a psychology experiment. This can sometimes cause participants to alter their behavior, which can affect the results of the experiment.

Intervening Variables

Intervening variables are factors that can affect the relationship between two other variables. 

Confounding Variables

Confounding variables are variables that can affect the dependent variable, but that experimenters cannot control for. Confounding variables can make it difficult to determine if the effect was due to changes in the independent variable or if the confounding variable may have played a role.
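The toy simulation below (all numbers invented, not from this article) shows why confounds matter: the "treatment" has no real effect on the dependent variable, but because prior ability happens to be higher in the treatment group, the group means still differ and could be mistaken for a treatment effect.

    # Hypothetical confound: prior ability, not the treatment, drives the DV
    import random
    from statistics import mean

    random.seed(1)

    def test_score(prior_ability: float) -> float:
        # The DV depends only on prior ability plus noise, not on the treatment.
        return 50 + 5 * prior_ability + random.gauss(0, 2)

    treatment_ability = [random.gauss(3.0, 0.5) for _ in range(30)]  # confound higher here
    control_ability = [random.gauss(2.0, 0.5) for _ in range(30)]

    gap = mean(test_score(a) for a in treatment_ability) - mean(test_score(a) for a in control_ability)
    print(f"Observed group difference: {gap:.1f} points (caused by the confound)")
    # Random assignment would have balanced prior ability across the groups.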

Psychologists, like other scientists, use the scientific method when conducting an experiment. The scientific method is a set of procedures and principles that guide how scientists develop research questions, collect data, and come to conclusions.

The five basic steps of the experimental process are:

  • Identifying a problem to study
  • Devising the research protocol
  • Conducting the experiment
  • Analyzing the data collected
  • Sharing the findings (usually in writing or via presentation)

Most psychology students are expected to use the experimental method at some point in their academic careers. Learning how to conduct an experiment is important to understanding how psychologists prove and disprove theories in this field.

There are a few different types of experiments that researchers might use when studying psychology. Each has pros and cons depending on the participants being studied, the hypothesis, and the resources available to conduct the research.

Lab Experiments

Lab experiments are common in psychology because they allow experimenters more control over the variables. These experiments can also be easier for other researchers to replicate. The drawback of this research type is that what takes place in a lab is not always what takes place in the real world.

Field Experiments

Sometimes researchers opt to conduct their experiments in the field. For example, a social psychologist interested in researching prosocial behavior might have a person pretend to faint and observe how long it takes onlookers to respond.

This type of experiment can be a great way to see behavioral responses in realistic settings. But it is more difficult for researchers to control the many variables existing in these settings that could potentially influence the experiment's results.

Quasi-Experiments

While lab experiments are known as true experiments, researchers can also utilize a quasi-experiment. Quasi-experiments are often referred to as natural experiments because the researchers do not have true control over the independent variable.

A researcher looking at personality differences and birth order, for example, is not able to manipulate the independent variable in the situation (personality traits). Participants also cannot be randomly assigned because they naturally fall into pre-existing groups based on their birth order.

So why would a researcher use a quasi-experiment? This is a good choice in situations where scientists are interested in studying phenomena in natural, real-world settings. It's also beneficial if there are limits on research funds or time.

Field experiments can be either quasi-experiments or true experiments.

Examples of the Experimental Method in Use

The experimental method can provide insight into human thoughts and behaviors. Researchers use experiments to study many aspects of psychology.

A 2019 study investigated whether splitting attention between electronic devices and classroom lectures had an effect on college students' learning abilities. It found that dividing attention between these two mediums did not affect lecture comprehension. However, it did impact long-term retention of the lecture information, which affected students' exam performance.

An experiment used participants' eye movements and electroencephalogram (EEG) data to better understand cognitive processing differences between experts and novices. It found that experts had higher power in their theta brain waves than novices, suggesting that they also had a higher cognitive load.

A study looked at whether chatting online with a computer via a chatbot changed the positive effects of emotional disclosure often received when talking with an actual human. It found that the effects were the same in both cases.

One experimental study evaluated whether exercise timing impacts information recall. It found that engaging in exercise prior to performing a memory task helped improve participants' short-term memory abilities.

Sometimes researchers use the experimental method to get a bigger-picture view of psychological behaviors and impacts. For example, one 2018 study examined several lab experiments to learn more about the impact of various environmental factors on building occupant perceptions.

A 2020 study set out to determine the role that sensation-seeking plays in political violence. This research found that sensation-seeking individuals have a higher propensity for engaging in political violence. It also found that providing access to a more peaceful, yet still exciting political group helps reduce this effect.

While the experimental method can be a valuable tool for learning more about psychology and its impacts, it also comes with a few pitfalls.

Experiments may produce artificial results, which are difficult to apply to real-world situations. Similarly, researcher bias can affect the data collected. Results may not be reproducible, meaning they have low reliability.

Since humans are unpredictable and their behavior can be subjective, it can be hard to measure responses in an experiment. In addition, political pressure may alter the results. The subjects may not be a good representation of the population, or groups used may not be comparable.

And finally, since researchers are human too, results may be degraded due to human error.

What This Means For You

Every psychological research method has its pros and cons. The experimental method can help establish cause and effect, and it's also beneficial when research funds are limited or time is of the essence.

At the same time, it's essential to be aware of this method's pitfalls, such as how biases can affect the results or the potential for low reliability. Keeping these in mind can help you review and assess research studies more accurately, giving you a better idea of whether the results can be trusted or have limitations.

Colorado State University. Experimental and quasi-experimental research .

American Psychological Association. Experimental psychology studies humans and animals.

Mayrhofer R, Kuhbandner C, Lindner C. The practice of experimental psychology: An inevitably postmodern endeavor . Front Psychol . 2021;11:612805. doi:10.3389/fpsyg.2020.612805

Mandler G. A History of Modern Experimental Psychology .

Stanford University. Wilhelm Maximilian Wundt . Stanford Encyclopedia of Philosophy.

Britannica. Gustav Fechner .

Britannica. Hermann von Helmholtz .

Meyer A, Hackert B, Weger U. Franz Brentano and the beginning of experimental psychology: implications for the study of psychological phenomena today . Psychol Res . 2018;82:245-254. doi:10.1007/s00426-016-0825-7

Britannica. Georg Elias Müller .

McCambridge J, de Bruin M, Witton J.  The effects of demand characteristics on research participant behaviours in non-laboratory settings: A systematic review .  PLoS ONE . 2012;7(6):e39116. doi:10.1371/journal.pone.0039116

Laboratory experiments. In: Allen M, ed. The Sage Encyclopedia of Communication Research Methods. SAGE Publications, Inc. doi:10.4135/9781483381411.n287

Schweizer M, Braun B, Milstone A. Research methods in healthcare epidemiology and antimicrobial stewardship — quasi-experimental designs . Infect Control Hosp Epidemiol . 2016;37(10):1135-1140. doi:10.1017/ice.2016.117

Glass A, Kang M. Dividing attention in the classroom reduces exam performance . Educ Psychol . 2019;39(3):395-408. doi:10.1080/01443410.2018.1489046

Keskin M, Ooms K, Dogru AO, De Maeyer P. Exploring the cognitive load of expert and novice map users using EEG and eye tracking . ISPRS Int J Geo-Inf . 2020;9(7):429. doi:10.3390/ijgi9070429

Ho A, Hancock J, Miner A. Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot . J Commun . 2018;68(4):712-733. doi:10.1093/joc/jqy026

Haynes IV J, Frith E, Sng E, Loprinzi P. Experimental effects of acute exercise on episodic memory function: Considerations for the timing of exercise . Psychol Rep . 2018;122(5):1744-1754. doi:10.1177/0033294118786688

Torresin S, Pernigotto G, Cappelletti F, Gasparella A. Combined effects of environmental factors on human perception and objective performance: A review of experimental laboratory works . Indoor Air . 2018;28(4):525-538. doi:10.1111/ina.12457

Schumpe BM, Belanger JJ, Moyano M, Nisa CF. The role of sensation seeking in political violence: An extension of the significance quest theory . J Personal Social Psychol . 2020;118(4):743-761. doi:10.1037/pspp0000223

By Kendra Cherry, MSEd

Experimental Method In Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Learn about our Editorial Process

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


The experimental method involves the manipulation of variables to establish cause-and-effect relationships. The key features are controlled methods and the random allocation of participants into controlled and experimental groups .

What is an Experiment?

An experiment is an investigation in which a hypothesis is scientifically tested. An independent variable (the cause) is manipulated in an experiment, and the dependent variable (the effect) is measured; any extraneous variables are controlled.

An advantage is that experiments should be objective. The researcher’s views and opinions should not affect a study’s results. This is good as it makes the data more valid  and less biased.

There are three types of experiments you need to know:

1. Lab Experiment

A laboratory experiment in psychology is a research method in which the experimenter manipulates one or more independent variables and measures the effects on the dependent variable under controlled conditions.

A laboratory experiment is conducted under highly controlled conditions (not necessarily a laboratory) where accurate measurements are possible.

The researcher uses a standardized procedure to determine where the experiment will take place, at what time, with which participants, and in what circumstances.

Participants are randomly allocated to each independent variable group.

Examples are Milgram’s experiment on obedience and Loftus and Palmer’s car crash study.

  • Strength : It is easier to replicate (i.e., copy) a laboratory experiment. This is because a standardized procedure is used.
  • Strength : They allow for precise control of extraneous and independent variables. This allows a cause-and-effect relationship to be established.
  • Limitation : The artificiality of the setting may produce unnatural behavior that does not reflect real life, i.e., low ecological validity. This means it would not be possible to generalize the findings to a real-life setting.
  • Limitation : Demand characteristics or experimenter effects may bias the results and become confounding variables .

2. Field Experiment

A field experiment is a research method in psychology that takes place in a natural, real-world setting. It is similar to a laboratory experiment in that the experimenter manipulates one or more independent variables and measures the effects on the dependent variable.

However, in a field experiment, the participants are unaware they are being studied, and the experimenter has less control over the extraneous variables .

Field experiments are often used to study social phenomena, such as altruism, obedience, and persuasion. They are also used to test the effectiveness of interventions in real-world settings, such as educational programs and public health campaigns.

An example is Hofling’s hospital study on obedience.

  • Strength : behavior in a field experiment is more likely to reflect real life because of its natural setting, i.e., higher ecological validity than a lab experiment.
  • Strength : Demand characteristics are less likely to affect the results, as participants may not know they are being studied. This occurs when the study is covert.
  • Limitation : There is less control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

3. Natural Experiment

A natural experiment in psychology is a research method in which the experimenter observes the effects of a naturally occurring event or situation on the dependent variable without manipulating any variables.

Natural experiments are conducted in the everyday (i.e., real-life) environment of the participants, but here the experimenter has no control over the independent variable as it occurs naturally in real life.

Natural experiments are often used to study psychological phenomena that would be difficult or unethical to study in a laboratory setting, such as the effects of natural disasters, policy changes, or social movements.

For example, Hodges and Tizard’s attachment research (1989) compared the long-term development of children who have been adopted, fostered, or returned to their mothers with a control group of children who had spent all their lives in their biological families.

Here is a fictional example of a natural experiment in psychology:

Researchers might compare academic achievement rates among students born before and after a major policy change that increased funding for education.

In this case, the independent variable is the timing of the policy change, and the dependent variable is academic achievement. The researchers would not be able to manipulate the independent variable, but they could observe its effects on the dependent variable.

  • Strength : behavior in a natural experiment is more likely to reflect real life because of its natural setting, i.e., very high ecological validity.
  • Strength : Demand characteristics are less likely to affect the results, as participants may not know they are being studied.
  • Strength : It can be used in situations in which it would be ethically unacceptable to manipulate the independent variable, e.g., researching stress .
  • Limitation : They may be more expensive and time-consuming than lab experiments.
  • Limitation : There is no control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

Key Terminology

Ecological validity

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment that lead the participants to think they know what the researcher is looking for (e.g., the experimenter’s body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes) is assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

Variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables which are not independent variables but could affect the results (DV) of the experiment. EVs should be controlled where possible.

Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of participating in each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.
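One simple way to implement this (the names and condition labels below are hypothetical) is to build a balanced list of condition labels, shuffle it, and pair the labels with participants, so each person has an equal chance of each condition and the groups end up the same size.

    # Hypothetical random allocation with equal group sizes
    import random

    participants = ["Alice", "Bea", "Carl", "Dev", "Ema", "Finn"]
    labels = ["condition_A", "condition_B"] * (len(participants) // 2)
    random.shuffle(labels)

    allocation = dict(zip(participants, labels))
    print(allocation)  # every participant had an equal chance of each condition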

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.
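One common remedy for order effects, not described above but widely used, is counterbalancing: half the participants complete the conditions in one order and half in the reverse order, so practice and fatigue effects are spread roughly evenly across conditions. A minimal sketch (hypothetical participants) follows.

    # Hypothetical counterbalancing of condition order across participants
    import random

    participants = [f"P{i}" for i in range(1, 9)]
    random.shuffle(participants)

    half = len(participants) // 2
    orders = {
        "A_then_B": participants[:half],
        "B_then_A": participants[half:],
    }
    print(orders)  # order effects now affect both conditions about equally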


6.1 Experiment Basics

Learning Objectives

  • Explain what an experiment is and recognize examples of studies that are experiments and studies that are not experiments.
  • Explain what internal validity is and why experiments are considered to be high in internal validity.
  • Explain what external validity is and evaluate studies in terms of their external validity.
  • Distinguish between the manipulation of the independent variable and control of extraneous variables and explain the importance of each.
  • Recognize examples of confounding variables and explain how they affect the internal validity of a study.

What Is an Experiment?

As we saw earlier in the book, an experiment is a type of study designed specifically to answer the question of whether there is a causal relationship between two variables. Do changes in an independent variable cause changes in a dependent variable? Experiments have two fundamental features. The first is that the researchers manipulate, or systematically vary, the level of the independent variable. The different levels of the independent variable are called conditions. For example, in Darley and Latané’s experiment, the independent variable was the number of witnesses that participants believed to be present. The researchers manipulated this independent variable by telling participants that there were either one, two, or five other students involved in the discussion, thereby creating three conditions. The second fundamental feature of an experiment is that the researcher controls, or minimizes the variability in, variables other than the independent and dependent variable. These other variables are called extraneous variables. Darley and Latané tested all their participants in the same room, exposed them to the same emergency situation, and so on. They also randomly assigned their participants to conditions so that the three groups would be similar to each other to begin with. Notice that although the words manipulation and control have similar meanings in everyday language, researchers make a clear distinction between them. They manipulate the independent variable by systematically changing its levels and control other variables by holding them constant.
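As a rough sketch of this manipulation/control distinction (the condition labels echo the witness example, but the participant IDs and settings are hypothetical), the snippet below varies the independent variable across three conditions while holding extraneous settings constant for everyone and assigning participants at random.

    # Hypothetical sketch: manipulate the IV, hold extraneous variables constant
    import random

    CONDITIONS = ["one_witness", "two_witnesses", "five_witnesses"]  # IV levels
    CONSTANT_SETTINGS = {"room": "Lab 101", "script": "standard_emergency_script"}

    participants = [f"P{i:02d}" for i in range(1, 16)]
    random.shuffle(participants)

    assignment = {
        condition: participants[i::len(CONDITIONS)]  # spread participants evenly
        for i, condition in enumerate(CONDITIONS)
    }

    print(CONSTANT_SETTINGS)  # identical for every condition (control)
    print(assignment)         # the only systematic difference (manipulation)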

Internal and External Validity

Internal Validity

Recall that the fact that two variables are statistically related does not necessarily mean that one causes the other. “Correlation does not imply causation.” For example, if it were the case that people who exercise regularly are happier than people who do not exercise regularly, this would not necessarily mean that exercising increases people’s happiness. It could mean instead that greater happiness causes people to exercise (the directionality problem) or that something like better physical health causes people to exercise and be happier (the third-variable problem).

The purpose of an experiment, however, is to show that two variables are statistically related and to do so in a way that supports the conclusion that the independent variable caused any observed differences in the dependent variable. The basic logic is this: If the researcher creates two or more highly similar conditions and then manipulates the independent variable to produce just one difference between them, then any later difference between the conditions must have been caused by the independent variable. For example, because the only difference between Darley and Latané’s conditions was the number of students that participants believed to be involved in the discussion, this must have been responsible for differences in helping between the conditions.

An empirical study is said to be high in internal validity if the way it was conducted supports the conclusion that the independent variable caused any observed differences in the dependent variable. Thus experiments are high in internal validity because the way they are conducted—with the manipulation of the independent variable and the control of extraneous variables—provides strong support for causal conclusions.

External Validity

At the same time, the way that experiments are conducted sometimes leads to a different kind of criticism. Specifically, the need to manipulate the independent variable and control extraneous variables means that experiments are often conducted under conditions that seem artificial or unlike “real life” (Stanovich, 2010). In many psychology experiments, the participants are all college undergraduates and come to a classroom or laboratory to fill out a series of paper-and-pencil questionnaires or to perform a carefully designed computerized task. Consider, for example, an experiment in which researcher Barbara Fredrickson and her colleagues had college students come to a laboratory on campus and complete a math test while wearing a swimsuit (Fredrickson, Roberts, Noll, Quinn, & Twenge, 1998). At first, this might seem silly. When will college students ever have to complete math tests in their swimsuits outside of this experiment?

The issue we are confronting is that of external validity. An empirical study is high in external validity if the way it was conducted supports generalizing the results to people and situations beyond those actually studied. As a general rule, studies are higher in external validity when the participants and the situation studied are similar to those that the researchers want to generalize to. Imagine, for example, that a group of researchers is interested in how shoppers in large grocery stores are affected by whether breakfast cereal is packaged in yellow or purple boxes. Their study would be high in external validity if they studied the decisions of ordinary people doing their weekly shopping in a real grocery store. If the shoppers bought much more cereal in purple boxes, the researchers would be fairly confident that this would be true for other shoppers in other stores. Their study would be relatively low in external validity, however, if they studied a sample of college students in a laboratory at a selective college who merely judged the appeal of various colors presented on a computer screen. If the students judged purple to be more appealing than yellow, the researchers would not be very confident that this is relevant to grocery shoppers’ cereal-buying decisions.

We should be careful, however, not to draw the blanket conclusion that experiments are low in external validity. One reason is that experiments need not seem artificial. Consider that Darley and Latané’s experiment provided a reasonably good simulation of a real emergency situation. Or consider field experiments that are conducted entirely outside the laboratory. In one such experiment, Robert Cialdini and his colleagues studied whether hotel guests choose to reuse their towels for a second day as opposed to having them washed as a way of conserving water and energy (Cialdini, 2005). These researchers manipulated the message on a card left in a large sample of hotel rooms. One version of the message emphasized showing respect for the environment, another emphasized that the hotel would donate a portion of their savings to an environmental cause, and a third emphasized that most hotel guests choose to reuse their towels. The result was that guests who received the message that most hotel guests choose to reuse their towels reused their own towels substantially more often than guests receiving either of the other two messages. Given the way they conducted their study, it seems very likely that their result would hold true for other guests in other hotels.

A second reason not to draw the blanket conclusion that experiments are low in external validity is that they are often conducted to learn about psychological processes that are likely to operate in a variety of people and situations. Let us return to the experiment by Fredrickson and colleagues. They found that the women in their study, but not the men, performed worse on the math test when they were wearing swimsuits. They argued that this was due to women’s greater tendency to objectify themselves—to think about themselves from the perspective of an outside observer—which diverts their attention away from other tasks. They argued, furthermore, that this process of self-objectification and its effect on attention is likely to operate in a variety of women and situations—even if none of them ever finds herself taking a math test in her swimsuit.

Manipulation of the Independent Variable

Again, to manipulate an independent variable means to change its level systematically so that different groups of participants are exposed to different levels of that variable, or the same group of participants is exposed to different levels at different times. For example, to see whether expressive writing affects people’s health, a researcher might instruct some participants to write about traumatic experiences and others to write about neutral experiences. The different levels of the independent variable are referred to as conditions , and researchers often give the conditions short descriptive names to make it easy to talk and write about them. In this case, the conditions might be called the “traumatic condition” and the “neutral condition.”

Notice that the manipulation of an independent variable must involve the active intervention of the researcher. Comparing groups of people who differ on the independent variable before the study begins is not the same as manipulating that variable. For example, a researcher who compares the health of people who already keep a journal with the health of people who do not keep a journal has not manipulated this variable and therefore not conducted an experiment. This is important because groups that already differ in one way at the beginning of a study are likely to differ in other ways too. For example, people who choose to keep journals might also be more conscientious, more introverted, or less stressed than people who do not. Therefore, any observed difference between the two groups in terms of their health might have been caused by whether or not they keep a journal, or it might have been caused by any of the other differences between people who do and do not keep journals. Thus the active manipulation of the independent variable is crucial for eliminating the third-variable problem.

Of course, there are many situations in which the independent variable cannot be manipulated for practical or ethical reasons and therefore an experiment is not possible. For example, whether or not people have a significant early illness experience cannot be manipulated, making it impossible to do an experiment on the effect of early illness experiences on the development of hypochondriasis. This does not mean it is impossible to study the relationship between early illness experiences and hypochondriasis—only that it must be done using nonexperimental approaches. We will discuss this in detail later in the book.

In many experiments, the independent variable is a construct that can only be manipulated indirectly. For example, a researcher might try to manipulate participants’ stress levels indirectly by telling some of them that they have five minutes to prepare a short speech that they will then have to give to an audience of other participants. In such situations, researchers often include a manipulation check in their procedure. A manipulation check is a separate measure of the construct the researcher is trying to manipulate. For example, researchers trying to manipulate participants’ stress levels might give them a paper-and-pencil stress questionnaire or take their blood pressure—perhaps right after the manipulation or at the end of the procedure—to verify that they successfully manipulated this variable.
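To make the logic of a manipulation check concrete, here is a minimal sketch in Python. The data, variable names, and the speech-preparation scenario are hypothetical illustrations (not taken from any study described here); it simply compares self-reported stress between a manipulated condition and a control condition with a Welch t-test.

```python
# Hypothetical manipulation check: did the speech task actually raise stress?
# Requires numpy and scipy.
import numpy as np
from scipy import stats

# Illustrative self-reported stress scores (1-10) collected right after the manipulation.
stress_speech = np.array([7, 8, 6, 9, 7, 8, 5, 7, 9, 6])   # told to prepare a speech
stress_control = np.array([4, 5, 3, 6, 4, 5, 4, 3, 5, 4])  # no speech instruction

# Welch's t-test (does not assume equal variances across conditions).
t, p = stats.ttest_ind(stress_speech, stress_control, equal_var=False)

print(f"Speech condition mean: {stress_speech.mean():.2f}")
print(f"Control condition mean: {stress_control.mean():.2f}")
print(f"Welch t = {t:.2f}, p = {p:.4f}")
# A reliable difference in the expected direction suggests the manipulation worked.
```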

Control of Extraneous Variables

An extraneous variable is anything that varies in the context of a study other than the independent and dependent variables. In an experiment on the effect of expressive writing on health, for example, extraneous variables would include participant variables (individual differences) such as their writing ability, their diet, and their shoe size. They would also include situation or task variables such as the time of day when participants write, whether they write by hand or on a computer, and the weather. Extraneous variables pose a problem because many of them are likely to have some effect on the dependent variable. For example, participants’ health will be affected by many things other than whether or not they engage in expressive writing. This can make it difficult to separate the effect of the independent variable from the effects of the extraneous variables, which is why it is important to control extraneous variables by holding them constant.

Extraneous Variables as “Noise”

Extraneous variables make it difficult to detect the effect of the independent variable in two ways. One is by adding variability or “noise” to the data. Imagine a simple experiment on the effect of mood (happy vs. sad) on the number of happy childhood events people are able to recall. Participants are put into a negative or positive mood (by showing them a happy or sad video clip) and then asked to recall as many happy childhood events as they can. The two leftmost columns of Table 6.1 “Hypothetical Noiseless Data and Realistic Noisy Data” show what the data might look like if there were no extraneous variables and the number of happy childhood events participants recalled was affected only by their moods. Every participant in the happy mood condition recalled exactly four happy childhood events, and every participant in the sad mood condition recalled exactly three. The effect of mood here is quite obvious. In reality, however, the data would probably look more like those in the two rightmost columns of Table 6.1. Even in the happy mood condition, some participants would recall fewer happy memories because they have fewer to draw on, use less effective strategies, or are less motivated. And even in the sad mood condition, some participants would recall more happy childhood memories because they have more happy memories to draw on, they use more effective recall strategies, or they are more motivated. Although the mean difference between the two groups is the same as in the idealized data, this difference is much less obvious in the context of the greater variability in the data. Thus one reason researchers try to control extraneous variables is so their data look more like the idealized data in Table 6.1, which makes the effect of the independent variable easier to detect (although real data never look quite that good).

Table 6.1 Hypothetical Noiseless Data and Realistic Noisy Data
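The “noise” idea can be illustrated with a small simulation. The sketch below (Python, with made-up parameters chosen only to mirror the hypothetical condition means of 4 and 3 described above) generates an idealized noiseless data set and a noisy one with the same mean difference, showing how participant-level variability obscures the effect.

```python
# Illustration of extraneous variables as "noise": same mean difference,
# but much harder to see once participant-level variability is added.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 10  # participants per mood condition (arbitrary)

# Idealized, noiseless data: every participant recalls exactly 4 or 3 events.
happy_ideal = np.full(n, 4)
sad_ideal = np.full(n, 3)

# More realistic data: individual differences (memory, strategy, motivation)
# modeled as random variation around the same condition means.
happy_noisy = np.clip(np.round(rng.normal(4, 1.5, n)), 0, None)
sad_noisy = np.clip(np.round(rng.normal(3, 1.5, n)), 0, None)

for label, happy, sad in [("Noiseless", happy_ideal, sad_ideal),
                          ("Noisy", happy_noisy, sad_noisy)]:
    diff = happy.mean() - sad.mean()
    spread = np.concatenate([happy, sad]).std()
    print(f"{label}: mean difference = {diff:.2f}, overall SD = {spread:.2f}")
```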

One way to control extraneous variables is to hold them constant. This can mean holding situation or task variables constant by testing all participants in the same location, giving them identical instructions, treating them in the same way, and so on. It can also mean holding participant variables constant. For example, many studies of language limit participants to right-handed people, who generally have their language areas isolated in their left cerebral hemispheres. Left-handed people are more likely to have their language areas isolated in their right cerebral hemispheres or distributed across both hemispheres, which can change the way they process language and thereby add noise to the data.

In principle, researchers can control extraneous variables by limiting participants to one very specific category of person, such as 20-year-old, straight, female, right-handed, sophomore psychology majors. The obvious downside to this approach is that it would lower the external validity of the study—in particular, the extent to which the results can be generalized beyond the people actually studied. For example, it might be unclear whether results obtained with a sample of younger straight women would apply to older gay men. In many situations, the advantages of a diverse sample outweigh the reduction in noise achieved by a homogeneous one.

Extraneous Variables as Confounding Variables

The second way that extraneous variables can make it difficult to detect the effect of the independent variable is by becoming confounding variables. A confounding variable is an extraneous variable that differs on average across levels of the independent variable. For example, in almost all experiments, participants’ intelligence quotients (IQs) will be an extraneous variable. But as long as there are participants with lower and higher IQs at each level of the independent variable so that the average IQ is roughly equal, then this variation is probably acceptable (and may even be desirable). What would be bad, however, would be for participants at one level of the independent variable to have substantially lower IQs on average and participants at another level to have substantially higher IQs on average. In this case, IQ would be a confounding variable.

To confound means to confuse, and this is exactly what confounding variables do. Because they differ across conditions—just like the independent variable—they provide an alternative explanation for any observed difference in the dependent variable. Figure 6.1 “Hypothetical Results From a Study on the Effect of Mood on Memory” shows the results of a hypothetical study, in which participants in a positive mood condition scored higher on a memory task than participants in a negative mood condition. But if IQ is a confounding variable—with participants in the positive mood condition having higher IQs on average than participants in the negative mood condition—then it is unclear whether it was the positive moods or the higher IQs that caused participants in the first condition to score higher. One way to avoid confounding variables is by holding extraneous variables constant. For example, one could prevent IQ from becoming a confounding variable by limiting participants only to those with IQs of exactly 100. But this approach is not always desirable for reasons we have already discussed. A second and much more general approach—random assignment to conditions—will be discussed in detail shortly.
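As a rough illustration of why random assignment guards against confounding, the sketch below (Python, with hypothetical IQ scores) shuffles a participant pool before splitting it into two conditions, so that a participant variable such as IQ ends up roughly balanced across groups rather than varying systematically with the condition.

```python
# Random assignment to conditions: on average, extraneous participant variables
# such as IQ end up roughly equal across conditions rather than confounded with them.
import random

random.seed(42)

# Hypothetical participant pool with a measured IQ for each person.
participants = [{"id": i, "iq": random.gauss(100, 15)} for i in range(40)]

# Shuffle, then split into two equal-sized conditions.
random.shuffle(participants)
positive_mood = participants[:20]
negative_mood = participants[20:]

def mean_iq(group):
    return sum(p["iq"] for p in group) / len(group)

print(f"Positive mood condition: mean IQ = {mean_iq(positive_mood):.1f}")
print(f"Negative mood condition: mean IQ = {mean_iq(negative_mood):.1f}")
# Any difference between the two means reflects chance, not the condition itself.
```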

Figure 6.1 Hypothetical Results From a Study on the Effect of Mood on Memory

Hypothetical Results From a Study on the Effect of Mood on Memory

Because IQ also differs across conditions, it is a confounding variable.

Key Takeaways

  • An experiment is a type of empirical study that features the manipulation of an independent variable, the measurement of a dependent variable, and control of extraneous variables.
  • Studies are high in internal validity to the extent that the way they are conducted supports the conclusion that the independent variable caused any observed differences in the dependent variable. Experiments are generally high in internal validity because of the manipulation of the independent variable and control of extraneous variables.
  • Studies are high in external validity to the extent that the result can be generalized to people and situations beyond those actually studied. Although experiments can seem “artificial”—and low in external validity—it is important to consider whether the psychological processes under study are likely to operate in other people and situations.
  • Practice: List five variables that can be manipulated by the researcher in an experiment. List five variables that cannot be manipulated by the researcher in an experiment.

Practice: For each of the following topics, decide whether that topic could be studied using an experimental research design and explain why or why not.

  • Effect of parietal lobe damage on people’s ability to do basic arithmetic.
  • Effect of being clinically depressed on the number of close friendships people have.
  • Effect of group training on the social skills of teenagers with Asperger’s syndrome.
  • Effect of paying people to take an IQ test on their performance on that test.

Cialdini, R. (2005, April). Don’t throw in the towel: Use social influence research. APS Observer . Retrieved from http://www.psychologicalscience.org/observer/getArticle.cfm?id=1762 .

Fredrickson, B. L., Roberts, T.-A., Noll, S. M., Quinn, D. M., & Twenge, J. M. (1998). The swimsuit becomes you: Sex differences in self-objectification, restrained eating, and math performance. Journal of Personality and Social Psychology, 75 , 269–284.

Stanovich, K. E. (2010). How to think straight about psychology (9th ed.). Boston, MA: Allyn & Bacon.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

  • Experimental Research Designs: Types, Examples & Methods

busayo.longe

Experimental research is the most familiar type of research design for individuals in the physical sciences and a host of other fields. This is mainly because experimental research resembles the classical scientific experiments performed in high school science classes.

Imagine taking two samples of the same plant and exposing one of them to sunlight while keeping the other away from sunlight. Call the plant exposed to sunlight sample A and the other sample B.

If, after the duration of the research, we find that sample A grows while sample B dies, even though both are regularly watered and otherwise treated identically, we can conclude that sunlight aids the growth of similar plants.

What is Experimental Research?

Experimental research is a scientific approach to research in which one or more independent variables are manipulated and their effect on one or more dependent variables is measured. The effect of the independent variables on the dependent variables is usually observed and recorded over some time to help researchers draw a reasonable conclusion about the relationship between these two types of variables.

The experimental research method is widely used in the physical and social sciences, psychology, and education. It is based on the comparison of two or more groups, a logic that is straightforward in principle but can be difficult to execute.

Most often associated with laboratory test procedures, experimental research designs involve collecting quantitative data and performing statistical analysis on it, which makes experimental research an example of a quantitative research method.

What are The Types of Experimental Research Design?

The types of experimental research design are determined by the way the researcher assigns subjects to different conditions and groups. There are three types: pre-experimental, quasi-experimental, and true experimental research.

Pre-experimental Research Design

In a pre-experimental research design, either a single group or various dependent groups are observed for the effect of an independent variable that is presumed to cause change. It is the simplest form of experimental research design and uses no control group.

Although very practical, pre-experimental research falls short of several criteria for a true experiment. The pre-experimental research design is further divided into three types:

  • One-shot Case Study Research Design

In this type of experimental study, only one dependent group or variable is considered. The study is carried out after some treatment which was presumed to cause change, making it a posttest study.

  • One-group Pretest-posttest Research Design: 

This research design combines pretest and posttest studies by testing a single group both before and after the treatment is administered: the pretest at the beginning of the treatment and the posttest at the end.

  • Static-group Comparison: 

In a static-group comparison study, 2 or more groups are placed under observation, where only one of the groups is subjected to some treatment while the other groups are held static. All the groups are post-tested, and the observed differences between the groups are assumed to be a result of the treatment.

Quasi-experimental Research Design

The word “quasi” means partial, half, or pseudo. Quasi-experimental research therefore resembles true experimental research, but it is not the same. In quasi-experiments, participants are not randomly assigned, so these designs are used in settings where randomization is difficult or impossible.

This is very common in educational research, where administrators are unwilling to allow the random selection of students for experimental samples.

Some examples of quasi-experimental research designs include the time series design, the nonequivalent control group design, and the counterbalanced design.

True Experimental Research Design

The true experimental research design relies on statistical analysis to support or refute a hypothesis. It is the most rigorous type of experimental design and may be carried out with or without a pretest on at least two groups of randomly assigned subjects.

The true experimental research design must contain a control group, a variable that the researcher can manipulate, and random assignment of subjects to groups. The classification of true experimental designs includes:

  • The posttest-only control group design: In this design, subjects are randomly selected and assigned to the two groups (control and experimental), and only the experimental group is treated. After close observation, both groups are post-tested, and a conclusion is drawn from the difference between them.
  • The pretest-posttest control group design: For this design, subjects are randomly assigned to the two groups, both are pretested, but only the experimental group is treated. After close observation, both groups are post-tested to measure the degree of change in each group.
  • The Solomon four-group design: This is a combination of the posttest-only and the pretest-posttest control group designs. In this case, the randomly selected subjects are placed into four groups.

The first two of these groups are tested using the posttest-only method, while the other two are tested using the pretest-posttest method.
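As a hedged illustration of how the Solomon four-group assignment might be organized, the sketch below (Python, with hypothetical participant IDs and group labels of my own choosing) randomly splits a sample into the four groups that cross pretest/no pretest with treatment/control.

```python
# Solomon four-group design: random assignment into four groups that cross
# (pretest vs. no pretest) with (treatment vs. control).
import random

random.seed(7)
participants = list(range(1, 41))  # hypothetical participant IDs
random.shuffle(participants)

quarter = len(participants) // 4
groups = {
    "pretest + treatment": participants[:quarter],
    "pretest + control": participants[quarter:2 * quarter],
    "no pretest + treatment": participants[2 * quarter:3 * quarter],
    "no pretest + control": participants[3 * quarter:],
}

for name, members in groups.items():
    print(f"{name}: n = {len(members)}")
# All four groups are post-tested; comparing them separates the treatment effect
# from any effect of simply having taken the pretest.
```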

Examples of Experimental Research

Experimental research examples are different, depending on the type of experimental research design that is being considered. The most basic example of experimental research is laboratory experiments, which may differ in nature depending on the subject of research.

Administering Exams After The End of Semester

During the semester, students in a class are lectured on particular courses, and an exam is administered at the end of the semester. In this case, the students are the subjects, their exam performance is the dependent variable, and the lectures are the independent variable (the treatment) applied to the subjects.

Only one group of carefully selected subjects is considered in this research, making it an example of a pre-experimental research design. Notice also that the test is carried out only at the end of the semester, not at the beginning.

This makes it easy to conclude that it is a one-shot case study.

Employee Skill Evaluation

Before employing a job seeker, organizations conduct tests that are used to screen out less qualified candidates from the pool of qualified applicants. This way, organizations can determine an employee’s skill set at the point of employment.

In the course of employment, organizations also carry out employee training to improve employee productivity and generally grow the organization. Further evaluation is carried out at the end of each training to test the impact of the training on employee skills, and test for improvement.

Here, the subject is the employee, while the treatment is the training conducted. Because a single group is tested before and after the training, this is an example of a one-group pretest-posttest research design.

Evaluation of Teaching Method

Consider an academic institution that wants to evaluate the teaching methods of two teachers to determine which is better. Imagine a case in which the students assigned to each teacher are carefully selected, perhaps because of personal requests by parents or because of the students' temperament and ability.

This is an example of a nonequivalent group design because the samples are not equivalent. By evaluating the effectiveness of each teacher's method in this way, we may draw a conclusion after a post-test has been carried out.

However, the result may be influenced by factors such as a student's natural aptitude. For example, a very able student will grasp the material more easily than his or her peers irrespective of the teaching method.

What are the Characteristics of Experimental Research?  

Experimental research involves dependent, independent, and extraneous variables. The dependent variable is the outcome that is measured on the research subjects.

The independent variable is the experimental treatment applied to the subjects. Extraneous variables, on the other hand, are other factors affecting the experiment that may also contribute to the change.

The setting is where the experiment is carried out. Many experiments are carried out in the laboratory, where the extraneous variables can be controlled and their influence minimized.

Other experiments are carried out in a less controllable setting. The choice of setting used in research depends on the nature of the experiment being carried out.

  • Multivariable

Experimental research may include multiple independent variables, e.g. time, skills, test scores, etc.

Why Use Experimental Research Design?  

Experimental research design is widely used in the physical sciences, social sciences, education, and psychology. It is used to make predictions and draw conclusions about a subject matter.

Some uses of experimental research design are highlighted below.

  • Medicine: Experimental research is used to identify the proper treatment for diseases. In most cases, rather than using patients directly as the research subjects, researchers take a sample of bacteria from the patient's body and treat it with the antibacterial agent under development.

The changes observed during this period are recorded and evaluated to determine the treatment's effectiveness. This process can be carried out using different experimental research methods.

  • Education: Aside from science subjects such as Chemistry and Physics, which involve teaching students how to perform experiments, experimental research can also be used to improve the standard of an academic institution. This includes testing students' knowledge of different topics, developing better teaching methods, and implementing other programs that aid student learning.
  • Human Behavior: Social scientists most often use experimental research to test human behaviour. For example, consider two people randomly chosen to be the subjects of a social interaction study in which one person is placed in a room without human interaction for one year.

The other person is placed in a room with a few other people, enjoying human interaction. There will be a difference in their behaviour at the end of the experiment.

  • UI/UX: During the product development phase, one of the major aims of the product team is to create a great user experience with the product. Therefore, before launching the final product design, potential users are brought in to interact with the product.

For example, when it is difficult to decide how to position a button or feature on the app interface, a random sample of product testers is allowed to try the two versions, and the effect of the button placement on user interaction is recorded.

What are the Disadvantages of Experimental Research?  

  • It is highly prone to human error because it depends on the control of variables, which may not be properly implemented. Such errors can undermine the validity of the experiment and of the research being conducted.
  • Exerting control over extraneous variables may create unrealistic situations. Eliminating real-life variables can lead to inaccurate conclusions, and it may also tempt researchers to control the variables to suit their personal preferences.
  • It is a time-consuming process. Much time is spent testing participants and waiting for the effects of the manipulation of the independent variables to manifest.
  • It is expensive.
  • It is very risky and may have ethical complications that cannot be ignored. This is common in medical research, where failed trials may lead to a patient's death or a deteriorating health condition.
  • Experimental research results are not descriptive.
  • Subjects can also introduce response bias into the results.
  • Human responses in experimental research can be difficult to measure.

What are the Data Collection Methods in Experimental Research?  

Data collection methods in experimental research are the different ways in which data can be collected for experimental research. They are used in different cases, depending on the type of research being carried out.

1. Observational Study

This type of study is carried out over a long period. It measures and observes the variables of interest without changing existing conditions.

When researching the effect of social interaction on human behavior, for example, the subjects placed in the two different environments are observed throughout the research. No matter how unusual the behavior a subject exhibits during this period, his or her condition is not changed.

This can be very risky in medical cases because it may lead to death or a worsening medical condition.

2. Simulations

This procedure uses mathematical, physical, or computer models to replicate a real-life process or situation. It is frequently used when the actual situation is too expensive, dangerous, or impractical to replicate in real life.

This method is commonly used in engineering and operational research for learning purposes and sometimes as a tool to estimate possible outcomes of real research. Some common simulation software packages are Simulink, MATLAB, and Simul8.

Not all kinds of experimental research can be carried out using simulation as a data collection tool . It is very impractical for a lot of laboratory-based research that involves chemical processes.
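Simulation as a data collection method can be as simple as a Monte Carlo model. The sketch below (Python, with arbitrary assumed sample sizes and effect size, not drawn from any real study) estimates how often a treatment group would outperform a control group under an assumed true effect, the kind of question a researcher might simulate before running a costly real experiment.

```python
# Minimal Monte Carlo simulation: given an assumed true treatment effect,
# how often would a small experiment show the treatment group ahead of control?
import numpy as np

rng = np.random.default_rng(seed=0)
n_per_group = 15        # assumed sample size per group
true_effect = 0.5       # assumed treatment effect in standard-deviation units
n_simulations = 10_000

wins = 0
for _ in range(n_simulations):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    if treatment.mean() > control.mean():
        wins += 1

print(f"Treatment mean exceeded control mean in {wins / n_simulations:.1%} of runs")
```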

3. Surveys

A survey is a tool used to gather relevant data about the characteristics of a population and is one of the most common data collection tools. A survey consists of a group of questions prepared by the researcher, to be answered by the research subject.

Surveys can be shared with the respondents both physically and electronically. When collecting data through surveys, the kind of data collected depends on the respondent, and researchers have limited control over it.

Formplus is the best tool for collecting experimental data using surveys. It has relevant features that aid the data collection process and can also be used in other aspects of experimental research.

Differences between Experimental and Non-Experimental Research 

1. In experimental research, the researcher can control and manipulate the research environment, including the predictor variable. In non-experimental research, by contrast, the setting cannot be controlled or manipulated by the researcher at will.

This is because non-experimental research takes place in a real-life setting where extraneous variables cannot be eliminated. It is therefore more difficult to draw conclusions from non-experimental studies, even though they are much more flexible and allow for a greater range of study fields.

2. A cause-and-effect relationship cannot be established in non-experimental research, whereas it can be in experimental research. This is because many extraneous variables also influence the changes in the research subject, making it difficult to point to a particular variable as the cause of a particular change.

3. Independent variables are not introduced, withdrawn, or manipulated in non-experimental designs, whereas they are in experimental research.

Conclusion  

Experimental research designs are often considered to be the standard in research designs. This is partly due to the common misconception that research is equivalent to scientific experiments—a component of experimental research design.

In this research design, subjects are randomly assigned to different treatments (i.e., levels of the independent variable manipulated by the researcher), and the results are observed in order to draw conclusions. One unique strength of experimental research is its ability to control the effect of extraneous variables.

Experimental research is suitable for research whose goal is to examine cause-effect relationships, e.g. explanatory research. It can be conducted in the laboratory or field settings, depending on the aim of the research that is being carried out. 



TREND Reporting Guidelines for Nonrandomized/Quasi-Experimental Study Designs


The Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) guidelines were first published in 2004, in response to the perceived value and effect of the Consolidated Standards of Reporting Trials (CONSORT) guidelines that had been introduced a decade earlier. 1 The initial development of these guidelines was spearheaded by the US Centers for Disease Control and Prevention HIV/AIDS Prevention Research Synthesis team. The initial interest was specifically in standardization of the reporting of behavioral interventions in HIV/AIDS (eg, interventions to improve adherence to antiretroviral therapy or to increase frequency of partner testing), but the group broadened its interest to include all evaluations of interventions using nonrandomized designs. The guideline authors, comprising researchers, policy makers, and journal editors, developed a reporting framework that was based on the CONSORT guidelines but adapted for evaluation of interventions other than randomized trials.


Haynes AB , Haukoos JS , Dimick JB. TREND Reporting Guidelines for Nonrandomized/Quasi-Experimental Study Designs. JAMA Surg. 2021;156(9):879–880. doi:10.1001/jamasurg.2021.0552



Experimental induction of state rumination: A study evaluating the efficacy of goal-cueing task in different experimental settings


Based on previous studies, the present four experiments (total N = 468) aimed at investigating the effectiveness of rumination induction in different experimental settings. We were particularly interested in rumination in the context of individual goal achievement and tested whether an instruction that referred to unresolved goals had a directly observable effect on state rumination. For this purpose, participants were asked to identify, evaluate, and focus on a personally relevant goal that was previously unresolved and still bothered them. In Experiments 1a to 1c, we compared three modifications of the unresolved condition with shortened instructions to the elaborated unresolved condition and to an additional control condition that did not refer to goals. In general, the results were mixed but basically confirmed the effectiveness of the method used. Finally, in Experiment 2, we compared the two most promising versions of the unresolved condition and, by adding a goal-related control condition, examined which control condition was best suited to maximize effects related to state rumination in future research. Results of various mixed ANOVAs demonstrated that a shortened version (in terms of shortened audio instructions) of the unresolved condition could be used as well as the original unresolved condition to induce reliable state rumination. The significance of the effects obtained with this method for real-life applications as well as approaches for future research are discussed.

Citation: Michel-Kröhler A, Wessa M, Berti S (2023) Experimental induction of state rumination: A study evaluating the efficacy of goal-cueing task in different experimental settings. PLoS ONE 18(11): e0288450. https://doi.org/10.1371/journal.pone.0288450

Editor: Ricky Siu Wong, University of Hertfordshire, UNITED KINGDOM

Received: October 17, 2022; Accepted: June 27, 2023; Published: November 22, 2023

Copyright: © 2023 Michel-Kröhler et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: The data and materials for all experiments are available at https://osf.io/h7zng/?view_only=e3cd29d81b9a4fdeb1a3210d80e69a88 .

Funding: This work was partly supported (Experiment 1c and Experiment 2) by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation; https://www.dfg.de/ ), grant 454608048 awarded to AMK. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Rumination, defined as repetitive, intrusive thoughts, can affect different cognitive processes, and has the potential to impair performance, psychological wellbeing, or mental health [ 1 , 2 ]. Despite increasing research in this area, specific mechanisms underlying these negative effects are not completely understood. One way to gain detailed insight into the effects of rumination as well as the mechanisms that cause these effects, is to induce ruminative thoughts within an experimental design so that a direct measurement of the potential effects of rumination on selected outcome variables is possible. However, to the best of our knowledge evaluation of rumination induction tasks outside of a clinical context is lacking. Here, we report results of four experiments in which we aimed at evaluating the effectivity of a procedure to induce rumination (namely, the goal-cueing task [ 3 ]) in different experimental settings, with the goal of enabling its application in and outside the laboratory. In contrast to other approaches, we focus on a non-clinical setting because the overall aim of the study is to establish an effective procedure for rumination induction within a broad range of rumination research. In the following, we will first describe the theoretical background of the goal-cueing task to motivate the application of this specific procedure. Second, we will give an overview of the four experiments performed within this study.

Relevance of goals in our daily lives and the Goal-Progress Theory

Goals structure our daily actions and serve as the main motivation for pursuing our careers, our studies, our leisure activities such as sports, and various other activities in our private lives. In doing so, we try to pursue various goals from different areas at the same time ("providing a good presentation", "keeping physically fit", "being a good friend", etc.) [ 4 ]. However, what happens when we fail to achieve a goal that we really want to achieve, or when we continue to hold on to a goal even though we seem not capable of achieving it [ 5 ]? A discrepancy occurs that implies dissatisfaction with the current state and a desire to achieve a goal or specific outcome [ 6 ]. As a result, negative thoughts and feelings can arise that revolve around the failed goal achievement, and if persistent and recurring, can lead to rumination [ 5 ].

The Goal Progress Theory (GPT) by Martin and Tesser [ 4 ] provides a detailed account of goal discrepancies as a central mechanism of rumination. GPT states that people’s thoughts and actions are guided by individual, conscious or unconscious goals, which thus play an important role in the self-regulation of behavior [ 7 ]. In addition, a lack of progress (more or less) toward an individual goal is likely to trigger rumination, which persists until the discrepancy is resolved either by restoring goal progress or by moving away from the goal [ 4 ]. It is worth noting that rumination does not necessarily have to be dysfunctional. It can also be adaptive and functional in certain circumstances to highlight unresolved issues and then address them as progress is made in reducing the discrepancy [ 1 , 8 ]. However, rumination often does not help to reduce goal discrepancy or to disengage from the goal even when there is no prospect of success. This is especially the case when individuals are unable to reprioritize or identify alternative ways to achieve the goal [ 4 ]. Thus, rumination can be seen mainly as an unsuccessful problem-solving attempt that is unhelpful and causes a persistent goal discrepancy [ 1 , 4 ]. Nevertheless, individuals continue to ruminate in some situations, believing that this gives them an advantage or brings them closer to their goal [ 9 – 11 ]. Therefore, first, it is important to understand the thoughts associated with the individual goals and how they affect individuals’ well-being and daily performance. Second, it is advantageous if individuals adopt a proactive attitude after setbacks or failures and sustain capabilities that contribute to the improvement of the situation instead of remaining in the situation that has occurred [ 12 – 14 ].

To date, some studies have linked problematic goal attainment with rumination: For instance, results of an experience-sampling study [ 15 ] revealed that low goal success and high goal importance was associated with high levels of negative affect, and that this interaction was marginally significant for ruminative self-focus. In addition, Huffziger and colleagues [ 16 , 17 ] demonstrated that induced rumination in an everyday life application immediately deteriorated mood-related valence and calmness and was linked to stronger reductions in positive mood. Moreover, Krys [ 18 ] showed in a two-week diary study that rumination exerted a negative indirect effect on subjective problem solving via perceived stress and negative mood. In contrast, results of another study [ 8 ] demonstrated that goal-directed rumination has positive effects on academic performance if negative effects of psychological distress were simultaneously accounted for. However, goal-directed rumination per se was not related to academic performance [ 8 ]. Finally, one study from the context of sports [ 19 ] showed that athletes who fulfilled their goals at the end of a competitive season had lower rumination scores compared with athletes who did not. Overall, the results of the studies indicate that rumination can play a relevant role in different contexts during the individual goal achievement process. Moreover, these findings also suggest that goals, more precisely, the perceived goal discrepancy, can serve as a trigger for rumination especially if they are related to failure.

Rumination and the goal-cueing task

Rumination and its consequences have attracted increasing theoretical and empirical interest in the past 30 years [ 20 , 21 ]. In this research process, rumination has been examined both as a habitual tendency toward repetitive and passive self-focus in response to depressed mood [ 1 , 22 ] and as a temporary cognitive response that is highly dependent on situational cues [ 4 , 20 , 23 , 24 ]. In this context, numerous scales have been developed to capture rumination as a trait (for an overview see Krys [ 25 ]), including the Rumination-Reflection Questionnaire (RRQ; [ 26 ]) and the Perseverative Thinking Questionnaire (PTQ; [ 27 ]). In contrast, there is no “gold standard” for measuring state rumination [ 23 ]. Numerous studies have used various state rumination measures with different psychometric properties and levels of validity, among which the Brief State Rumination Inventory (BSRI) can be considered the most established [ 23 ]. In addition, however, various experimental approaches have been developed with the aim of examining ruminative thoughts within an experimental study so that a direct measurement of the potential effects of rumination on selected outcome variables is possible [e.g., 28–31]. One effective way of inducing rumination is the response manipulation task, which was designed to influence the contents of participants’ thoughts by forcing them to focus their attention on emotion-, symptom- and self-focused thoughts (cf. [ 31 ], e.g.: “think about how active/passive you feel” or “think about what your feelings might mean”). The response manipulation task is well established, but it limits rumination to a specific subset of thoughts related to mental health. A potential alternative for inducing ruminative thoughts of the kind that are more common in everyday life is the so-called goal-cueing task by Roberts and colleagues [ 3 ]. This task does not restrict or pre-define the area of ruminative thoughts by cueing specific feelings and personal attributes related to rumination. Instead, it focuses on concerns or problems during the individual goal achievement process. In this context, participants are free to choose the content, so a broader range of thoughts can be addressed, of the kind that occur more naturally in everyday situations. Ruminating can refer, for example, to a certain negative event and its influence on how the person has felt in recent weeks. It can also involve the thought that one compares unfavorably with others on a personally important dimension, or the feeling of having disappointed someone.

The goal-cueing task developed by Roberts and colleagues [ 3 , 32 ] consists of three steps, namely (1) identification of a problem in the individual goal achievement process, (2) evaluation of the identified problem, and (3) a 10-minute goal focus period, which is predicted to elicit rumination. In addition, the task consists of two conditions, a resolved and an unresolved goal condition. In the unresolved goal condition, participants are instructed to identify an ongoing and unresolved concern in their goal achievement process that repeatedly comes to mind and caused them to feel negative or stressed during the previous week. In contrast, participants in the resolved goal condition are asked to identify a concern in their goal achievement process that had previously troubled them, but that had since been resolved. Thus, the two conditions of the goal-cueing task were designed to directly contrast the impact of self-focus on resolved and unresolved goals, thereby manipulating self-discrepancy and rumination [ 33 ]. However, there are only a few studies applying this procedure and systematic evaluation is still lacking. Following the original study by Roberts and colleagues [ 3 ], Lanning [ 33 ] applied the procedure in research on its influence on general memory, and Edwards [ 34 ] investigated whether an approach or avoidance framing influences rumination cued by unresolved goals. Kornacka and colleagues [ 28 ] used the unresolved condition of the goal-cueing task to activate rumination in their participants before investigating the experimental induction of abstract versus concrete repetitive negative thoughts and their influence on emotional reactivity and attentional disengagement. Roberts et al. [ 32 ] found that cueing an unresolved goal triggered more spontaneous rumination compared to cueing a resolved goal. Importantly, in the studies by Roberts et al. [ 3 , 32 ] and Edwards [ 34 ], state rumination was not assessed in a direct way (i.e., by means of a validated questionnaire). Instead, state rumination was measured with thought probes, one of which represented the identified concern (i.e., the resolved or unresolved problem) and served as the index of state rumination (for more details regarding the probe approach see also Kane et al. [ 35 ]). Further, Roberts et al. [ 3 , 32 ] (see also Edwards [ 34 ]) applied an additional outcome measure (SART: Sustained Attention to Response Task [ 36 ]) to test potential effects of rumination on performance in an unrelated cognitive task, assessing attentional lapses due to minimal demands on control processes. Results were mixed, as condition differences in task performance were found in one study [ 3 ], while no differences were obtained in two other studies [ 32 , 34 ].

In summary, one of the advantages of the goal-cueing task is that it deals with unresolved goals in the individual goal achievement process, and because people consciously and unconsciously pursue multiple goals daily, the likelihood is very high that discrepancies will be perceived, which can lead to rumination. On the one hand, this suggests that the individual goal achievement process can be considered a reliable indicator of rumination. On the other hand, the reference to goals creates a more natural setting (compared to the response task), which produces more personally relevant reasons to ruminate [ 15 ]). Finally, referring to individual goals may increase participants’ commitment to an experiment as well as personal relevance compared to clinical settings. Overall, the goal-cueing task is promising in its application and theoretically justified, but direct evidence for inducing state rumination is lacking.

Overview of Experiments 1a-c

The overall goal of the following three experiments is to examine the effectiveness of the goal-cueing task in different experimental settings. For this, we used different measures of state rumination covering different facets of rumination: First, we applied the BSRI, to capture the momentary occurrence of thoughts that focus one’s attention on one’s distress along with its possible causes and implications [ 23 ]. Second, we used a single rumination item according to Koval et al. [ 37 ], to measure general state rumination and third, we utilized the index of ruminative self-focus by Moberly and Watkins [ 14 ], which involves the focus on feelings as well as the focus on problems that matches discrepancy-based accounts.

Moreover, following the GPT and findings of previous studies [ 3 , 32 ], we focused exclusively on unresolved goals in the individual goal-achievement process to induce state rumination in participants and evaluate to what extent this condition can be modified for application in laboratory or field research. The effectivity of these modifications (i.e., shortenings of the elaborated unresolved condition to different degrees) is compared with the elaborated unresolved condition according to Roberts et al. [ 3 ] and with an additional neutral control condition in Experiments 1b-c. Specifically, we were interested in whether the use of the goal-cueing task had a direct observable effect on state rumination as assessed by different measures. In addition, we tested whether the effectivity of the goal-cueing task in inducing state rumination remains stable when the experimental condition is shortened. Therefore, in Experiment 1a, we compared the elaborated unresolved condition (i.e., 3 steps of the goal-cueing task) with an experimental condition in which step 3 (goal focus period) was omitted. Based on the results, in Experiment 1b we further shortened the experimental condition (including only step 1 of the goal-cueing task) and compared it to the elaborated unresolved condition and to a neutral control condition that was not related to personal goals. In Experiment 1c, we then reduced the goal focus period of the elaborated unresolved condition from 10 min to 5 min and compared this again with the shortened experimental condition from Experiment 1a (i.e., step 3 –goal focus period was omitted) and the neutral control condition already mentioned. We assessed further variables, using the same approach, to account for emotional processes in addition to cognitive processes (for more details see Material & Measures for each study).

Shortening the elaborated unresolved condition would have the advantage of (1) reducing the mental stress on participants, (2) allowing for application in a variety of contexts including performance contexts (in sports, in the academic or medical domain), where time economy plays an important role, and (3) future application in the field or in experimental ambulatory assessment studies.

Experiment 1a

In the first experiment, we examined the efficacy of a shortened unresolved condition (referred to as experimental condition 2; EC2) compared to the elaborated unresolved condition (referred to as experimental condition 1; EC1) to provide information for further application. Therefore, we modified the unresolved condition so that only the first two steps of the goal-cueing task (goal identification and goal evaluation) were applied, and we checked whether the different modifications of the unresolved condition led to the same outcomes in terms of state rumination.

Participants.

In Experiment 1a, 164 participants (female: n = 101; male: n = 61) completed an online session. The mean age was 28.68 ( SD = 13.11). Most of the participants were students of various subjects (n = 94), followed by 34 employees and 12 civil servants. Twenty-four of the participants had other employment relationships. Table 1 presents an overview of sample characteristics separated by condition. Moreover, participants in the two conditions did not significantly differ in their level of trait rumination, t (161.57) = -0.54, p = .59, d = 0.08.

https://doi.org/10.1371/journal.pone.0288450.t001

Participants were invited to take part in the experiment via the institute’s student mailing list, student contacts, and notices on the bulletin board. The experiment consisted of two parts, both of which were completed online at home due to the ongoing pandemic at the time of the experiment. Participants first completed an initial online survey and then the experimental part.

The initial survey was conducted using SoSci Survey [ 38 ] and comprised biographical and sociodemographic questions as well as different personality questionnaires (see below for detailed descriptions of the utilized questionnaires). The questionnaire data were also the basis for a stratified randomization of the participants. In detail, we first grouped participants according to their level of trait rumination (measured with the Perseverative Thinking Questionnaire; PTQ [ 27 ]) and their gender, and then randomly assigned them to one of the two experimental conditions (or to a neutral control condition in Experiments 1b and 1c). We thus ensured that the participants in the different conditions did not differ significantly from each other in these characteristics at the group level.
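
To illustrate this stratified randomization, the following minimal sketch (in R, the software used for the analyses reported below) groups a simulated sample by gender and by a median split on the PTQ score and then assigns conditions at random within each stratum. The data frame, the column names, and the median-split grouping rule are assumptions made for illustration only, not the authors’ actual implementation.

    # Hypothetical example: stratified randomization by gender and trait rumination (PTQ)
    set.seed(42)
    participants <- data.frame(
      id     = 1:164,
      gender = sample(c("female", "male"), 164, replace = TRUE),
      ptq    = sample(0:60, 164, replace = TRUE)   # simulated PTQ sum scores (15 items, 0-4 each)
    )
    # stratify by gender and a median split on trait rumination (assumed grouping rule)
    participants$ptq_level <- ifelse(participants$ptq >= median(participants$ptq), "high", "low")
    participants$condition <- NA
    strata <- split(seq_len(nrow(participants)),
                    interaction(participants$gender, participants$ptq_level))
    for (rows in strata) {
      # balanced random assignment to EC1/EC2 within each stratum
      participants$condition[rows] <- sample(rep_len(c("EC1", "EC2"), length(rows)))
    }
    # check: conditions are balanced within each gender x rumination stratum
    table(participants$condition, participants$gender, participants$ptq_level)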

The study protocol was approved by the local Review Board of the Institute for Psychology of the Johannes Gutenberg-University Mainz and was conducted according to the guidelines of the Declaration of Helsinki. Participants were informed about the nature and the procedure of the experiment and gave consent before completing the questionnaires. Participation in this experiment was voluntary. Psychology students received course credits for participating. This procedure also corresponds to that of the following two experiments, unless otherwise stated.

Advantages and disadvantages of applying an online experiment in our study.

On the one hand, an online experiment has some advantages, such as facilitating the recruitment of a larger sample, flexibility of data collection, and time economy. On the other hand, the main disadvantages of online experiments include the difficulty of controlling the situation in which participants complete the task, and thus of standardizing data collection, as well as the greater difficulty of protecting participants from stress exposure [ 39 ]. Therefore, we took several steps to increase the quality of the implementation of the online experiments and to ensure the protection of participants from negative emotional reactions or stress.

First, even though no negative emotional reactions of participants were observed after the experimental setting in previous experiments in our department, potential negative reactions cannot be detected in online experiments due to the lack of direct interaction. Consequently, before participants had the possibility to complete the initial survey, we conducted a screening in advance to exclude vulnerable individuals from the experiment. Exclusion criteria were a traumatic event experienced in the past, a currently diagnosed mental disorder, or ongoing psychotherapeutic treatment. In addition, we provided our contact details on several occasions, enabling participants to contact us if they felt unwell or found themselves in a bad emotional state.

Second, to increase the understanding of important information and of the procedure of the experiment, as well as to reduce the number of confounding variables, participants received detailed instructions on how to conduct the experiment prior to their participation. We instructed participants to find a quiet place for the duration of the experiment where they would be undisturbed and not distracted by friends or family, the radio, television, or their smartphone. Participants were advised to switch off their smartphone or put it in airplane mode and to find a place where they could be by themselves and would not have to leave for 60 minutes.

Third, to get an idea of the quality of performance in the online experiment, we asked participants at the end of the experiment (1) how focused they were, (2) how seriously they tried to implement the instructions on how to perform the experiment, and (3) how well they succeeded. Participants answered these three questions on a 5-point scale ranging from ‘1’ ( not at all ) to ‘5’ ( very ). On average, participants rated their concentration at 3.79 ( SD = 1.00), their effort to follow the instructions seriously at 4.51 ( SD = 0.70), and their success at 4.06 ( SD = 0.93) on the 5-point scale. Furthermore, participants in the two conditions showed no significant differences in the three quality measures (all p wilcox > .05).

Measures & material.

We used the German versions of the following questionnaires. Unless otherwise stated, these questionnaires were used in all four experiments. Means, standard deviations, 95% confidence intervals as well as Cronbach’s alpha for the present sample are summarized in Table 1 .

Perseverative thinking . The Perseverative Thinking Questionnaire (PTQ; [ 27 ]) is a content-independent self-report questionnaire of repetitive negative thoughts. The PTQ consists of 15 items (e.g., “Thoughts come to my mind without me wanting them to”) and is rated on a 5-point scale, ranging from ‘0’ ( never ) to ‘4’ ( almost always ). Here, we report the general PTQ score. Cronbach’s alpha for the entire PTQ is α = 0.95 for the original study [ 27 ].

Brooding and reflection . Huffziger and Kühner [ 40 ] validated the 10-item short version of the Response Styles Questionnaire (RSQ; original English version: Treynor et al. [ 41 ]; long version: Nolen-Hoeksema [ 42 ]) with the facets Reflection and Brooding. In the RSQ, it is assumed that brooding describes dysfunctional ruminating about an unattained goal (e.g., “What am I doing to deserve this?"), while reflection describes a more goal- and solution-oriented self-reflection (e.g., “I write down what I am thinking and analyze it.”). Each scale comprises five items. Participants rated all 10 items on a 4-point Likert Scale ranging from ‘1’ ( almost never ) to ‘4’ ( almost always ). Cronbach’s α for the original study is .60 for brooding and .73 for reflection [ 40 ].

Self-efficacy . We used the General Self-Efficacy Scale (GSE; Hinz et al. [ 43 ]; English version: Schwarzer & Jerusalem [ 44 ]) to assess participants’ general sense of perceived self-efficacy, for instance in relation to coping with everyday life or after experiencing all kinds of stressful life events. The GSE comprises 10 items (e.g., “I can always manage to solve difficult problems if I try hard enough.”) and is answered on a 4-point scale (1 = not at all true , 2 = hardly true , 3 = moderately true , 4 = exactly true ). Cronbach’s α in the original study is .92 [ 43 ].

State rumination . We tested state rumination with two measures: First, we assessed the 8-item Brief State Rumination Inventory (BSRI; Marchetti et al. [ 23 ]) to capture the current level of repetitive negative thinking at the time of answering. The BSRI was designed to capture maladaptive state rumination defined as “the momentary occurrence of thoughts that focus one’s attention on one’s distress along with its possible causes and implications” ([ 23 ], p. 2). All eight statements of the BSRI (e.g., “Right now, it is hard for me to shut off negative thoughts about myself”) were answered on an 11-point scale ranging from ‘0’ ( not at all ) to ‘10’ ( very ). We translated the items into German for the purpose of our experiments; however, the translated version has not been validated. Cronbach’s α in the original study is .89 before and .91 after an experimental manipulation [ 23 ]. Second, we used a single item (“To which extent did you ruminate over something?”, see also Koval et al. [ 37 ]) to assess general state rumination (hereinafter referred to as general rumination rating), which was also rated on an 11-point scale ranging from ‘0’ ( not at all ) to ‘10’ ( very ).

Perceived strain . In addition, participants rated on the same 11-point scale how much they felt burdened by the problem.

It should be noted that there is a risk that the use of single-item scales may not adequately capture the construct. Nevertheless, the use of such single-item scales is also recommended [ 45 ] because they are “easier and take less time to complete, may be less expensive, may contain more face validity, and may be more flexible than multiple-item scales” [ 46 , p. 77].

Mood . We administered the Multidimensional Mood Questionnaire (MDMQ; Wilhelm & Schoebi [ 47 ]) to measure three basic dimensions of mood: valence, energetic arousal, and calmness. The MDMQ consists of six items and is a bipolar measure comprising three pairs of adjectives, rated on a 7-point scale, that describe opposite end points of the different mood dimensions (e.g., energetic arousal: tired vs. awake, full of energy vs. without energy). Cronbach’s α of the three scales ranged from .73 to .89 in the original study [ 47 ].

Goal-cueing task . The goal-cueing task by Roberts et al. [ 3 ] can be divided into three steps, i.e., identification, evaluation, and a focusing period for a personally relevant goal. In EC1 (i.e., the unresolved goal condition), participants are instructed to identify an ongoing unresolved goal that repeatedly troubled them, causing them to feel sad, negative, or stressed during the previous week. Participants are also provided with appropriate examples of problems. Then, participants briefly outline their problem in 5 to 10 sentences. In step 2 (evaluation), participants indicate the extent to which the unresolved problem hampers their individual goal achievement process at two points in time: the present time and the time when it was at its worst. Further, participants indicate how important the goal is, how much the problem in the individual goal achievement process exemplifies more general problems, how long the problem has existed, and how much time they spent thinking about the problem during the previous week. Step 3 consists of a 10-minute goal focus period, during which participants work through a pre-recorded script delivered over headphones, which guides them through focusing on the identified unresolved concern. Example instructions are “Think about what is important about this difficulty in terms of your personal goals” or “Focus on the aspects of the difficulty that repeatedly come to mind”. Based on the findings of Klinger et al. [ 48 , p. 3] showing that cues may take many forms, for example, “a word, an image, or a smell that is associated with an ongoing goal pursuit (including cues related to failure to achieve a goal)”, we extended the original goal focus period to strengthen the induction of state rumination. Specifically, we added seven new instructions at the beginning of the goal focus period that address all five senses of the participants (e.g., “What did the situation look like in concrete terms?… What did you see? Remember all the sounds you perceived in the situation.…Try to remember every detail of your surroundings.”) to increase the probability of finding a trigger for spontaneous state rumination. We also shortened the intervals between the instructions so that the total time of 10 minutes was not exceeded.

Sustained Attention to Response Task (SART) . The SART [ 36 ] is a simple go/no-go paradigm in which neutral words (e.g., “father”, “shirt”, or “green”) are presented in white text on a black background in the center of the screen. Each word appeared individually on screen for 300ms, followed by a 900ms mask (see Fig 1 ). The participants’ task was to respond to words in lowercase letters as quickly as possible by pressing the space bar (go trials, e.g., “flower”). When a word appeared in uppercase, participants were required to withhold their response (no-go trials, e.g., “CHURCH”). In its shortened version, the SART comprises two blocks of 450 trials each, consisting of 45 words repeated ten times in a different order. Within each set of 45 words, five uppercase words appeared randomly among 40 lowercase words. After 450 trials (i.e., one block), participants could take a self-determined break of up to 3 minutes. The SART took approximately 30 minutes to complete.
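
As a rough illustration of how such a block can be assembled, the sketch below (in R, with placeholder words and hypothetical object names rather than the original Inquisit stimuli) builds one block of 450 trials from ten shuffled sets of 45 words, with five randomly placed uppercase no-go words per set and the timing constants named in the text.

    # Illustrative reconstruction of one SART block (placeholder stimuli, not the original word list)
    set.seed(1)
    words <- sprintf("word%02d", 1:45)   # stand-ins for the 45 neutral words
    make_set <- function(words) {
      shuffled <- sample(words)                                            # new order per repetition
      nogo     <- seq_along(shuffled) %in% sample(seq_along(shuffled), 5)  # 5 no-go positions
      data.frame(
        word     = ifelse(nogo, toupper(shuffled), tolower(shuffled)),
        go_trial = !nogo,
        word_ms  = 300,   # stimulus duration
        mask_ms  = 900    # mask duration
      )
    }
    block <- do.call(rbind, replicate(10, make_set(words), simplify = FALSE))
    nrow(block)           # 450 trials per block
    sum(!block$go_trial)  # 50 no-go trials per block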

(A) Overview of the procedure of Experiment 1a. After participants completed the initial survey, they were assigned to the different conditions and given access information for the experimental session. The experimental session started with a baseline survey, in which the current levels of state rumination, mood, and perceived strain were collected. Subsequently, the goal-cueing task, with its three steps (step 1: goal identification, step 2: goal evaluation, and step 3: goal focus period), was carried out. The manipulation check, which included the same items as the baseline survey, followed. To get familiar with the task, participants completed 18 practice trials before they started the first block of the SART (for Experiments 1c and 2, the practice trials were conducted at the beginning of the experimental session, before the baseline survey). Participants completed two blocks with self-determined breaks, during which the current levels of state rumination, mood, and perceived strain were collected. We also administered the same items after 60% of the no-go trials in every block and after the SART ended. In addition, five post questions concerning the current evaluation of the identified goal followed. Finally, the debriefing took place, in which the participants were fully informed about the objectives of the experiment. (B) shows a sequence of the SART, consisting of a go trial (lowercase word) and a no-go trial (uppercase word), each displayed for 300ms, with a 900ms mask in between.

https://doi.org/10.1371/journal.pone.0288450.g001

Post evaluation of the identified problem . After completing the experimental part, participants indicated (1) how difficult it was to stop thinking about the concern, (2) to what extent their focus was mainly on negative aspects, or (3) on bad feelings, (4) to what extent thinking about the problem made it seem worse and (5) made them feel worse (questions were adapted from Mosewich et al. [ 49 ]). Participants rated all questions on a 5-point scale ranging from ‘1’ ( not at all ) to ‘5’ ( very ).

Data analyses.

Collection of experimental data from all four experiments was carried out with the Inquisit Webplayer [ 50 ], and data preparation and all statistical analyses were performed with the software RStudio [ 51 ].

Statistical tests . To analyze the effects of our goal-cueing task on state rumination, mood, and perceived strain, we applied single mixed analyses of variance (ANOVAs) using the ezANOVA function (“ez” package; [ 52 ]) with time (pre vs. post goal-cueing task) and condition (unresolved vs. shortened unresolved [vs. control]) as factors. Beforehand, we checked the assumptions for their application (normal distribution and homogeneity of variances). We conducted a Shapiro-Wilk test for the assumption of normality ( p > .05) and a Levene’s test for the homogeneity of variance ( p > .05). Given the data we collected to test the effectiveness of the experimental setting in different contexts, we were also able to conduct an exploratory analysis of gender differences to provide further information to researchers and practitioners. For this purpose, we included gender as an additional factor in our ANOVA. Since this analysis was not the focus of our research question, we did not formulate a specific hypothesis in this regard. However, previous studies highlighted the tendency of women to show more rumination than men [ 41 , 53 , 54 ]. The results of the exploratory analyses are summarized in S1 Table .
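
A minimal sketch of this analysis, assuming long-format data with hypothetical column names (id, condition, time, rumination), might look as follows; the assumption checks and the ezANOVA call mirror the description above, but the specific options are our own choices rather than the authors’ script.

    library(ez)    # ezANOVA()
    library(car)   # leveneTest()
    # df: one row per participant and time point, with columns
    #   id (factor), condition (EC1 vs. EC2), time (pre vs. post), rumination (numeric)
    shapiro.test(df$rumination[df$time == "pre"])    # normality check per time point
    shapiro.test(df$rumination[df$time == "post"])
    leveneTest(rumination ~ condition, data = subset(df, time == "post"))  # homogeneity of variances
    fit <- ezANOVA(
      data     = df,
      dv       = .(rumination),
      wid      = .(id),
      within   = .(time),
      between  = .(condition),
      type     = 3,
      detailed = TRUE
    )
    fit$ANOVA   # F tests for condition, time, and their interaction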

We additionally applied single mixed ANOVAs with time (four time points during the SART) and condition (unresolved vs. modified unresolved goal) as factors to investigate whether the goal-cueing task also led to changes in state rumination, mood, and perceived strain during the SART. Before that, we checked whether the data fulfilled the requirements. We used Mauchly’s test to check the assumption of sphericity and, in case of violation ( p < .05), applied a Greenhouse-Geisser correction to the ANOVAs. Since these analyses were not the focus of our experiments, we report the results in S2 and S3 Tables for further research purposes.

To compare the SART task performance across participants in different conditions, we examined the errors of commission (i.e., incorrect responses: key presses in no-go trials) as well as the mean reaction times (mean RTs) for correct go-trials. According to Cheyne et al. [ 55 ], mean RTs were calculated for all response latencies over 200ms. Reaction times less than 100ms were coded as anticipatory and reaction times between 100ms and 200ms were coded as ambiguous. We then applied mixed ANOVAs with time (SART: Block 1 vs. Block 2) and condition (e.g., unresolved vs. shortened unresolved [vs. control]) as factors.
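
The trial-level preprocessing described here could be sketched in R as follows; the column names (id, block, go_trial, responded, rt_ms) are hypothetical stand-ins for the exported trial data, and the thresholds are taken directly from the rules above.

    # classify response latencies following Cheyne et al. [55]
    classify_rt <- function(rt_ms) {
      ifelse(rt_ms < 100, "anticipatory",
             ifelse(rt_ms <= 200, "ambiguous", "valid"))
    }
    sart$rt_class <- classify_rt(sart$rt_ms)
    # mean RT per participant and block, based on correct go trials with latencies > 200 ms
    valid_go <- subset(sart, go_trial & responded & rt_class == "valid")
    mean_rt  <- aggregate(rt_ms ~ id + block, data = valid_go, FUN = mean)
    # errors of commission: key presses on no-go trials
    commissions <- aggregate(responded ~ id + block, data = subset(sart, !go_trial), FUN = sum)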

Regarding participants’ characterization of problems in the individual goal achievement process and their evaluation of the identified problem after the experimental setting, we analyzed mean differences between conditions with independent t -tests. In case of non-normally distributed data, we report the significance of the Wilcoxon signed-rank test as a robust alternative to the independent t -test ( p wilcox ; Field et al. [ 56 ]).

Effect sizes . We report the effect size of mean differences between conditions with Cohen’s d [ 57 ] with the following criteria: d = 0.20, d = 0.50 and d = 0.80 for small, medium, and large effects (see also Fritz et al. [ 58 ] for interpretation). In case of non-parametric distribution, we report the significance of Wilcoxon signed-rank test with corresponding robust effect size (r). The interpretation values for r are: 0.10 to < 0.30 for a small effect, 0.30 to < 0.50 for a moderate effect and ≥ 0.50 for a large effect [ 58 ]. For the ANOVAs, we report partial eta squared (ηp2) as a measure of effect with the following criteria for small, medium, and large effect: 0.01, 0.06, and > 0.14 [ 59 , 60 ].
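
For reference, the two effect sizes for the between-condition comparisons can be computed as in the sketch below (hypothetical vectors x and y for the two conditions); converting the Wilcoxon p-value to a z value and dividing by the square root of the sample size is one common way to obtain r, which we assume here rather than knowing the authors’ exact implementation.

    # Cohen's d for two independent groups (pooled standard deviation)
    cohens_d <- function(x, y) {
      pooled_sd <- sqrt(((length(x) - 1) * var(x) + (length(y) - 1) * var(y)) /
                          (length(x) + length(y) - 2))
      (mean(x) - mean(y)) / pooled_sd
    }
    # robust effect size r from a two-sample Wilcoxon test (r = |z| / sqrt(N))
    wilcox_r <- function(x, y) {
      p <- wilcox.test(x, y)$p.value   # two-sided p value
      z <- qnorm(p / 2)                # back-transform to a z value
      abs(z) / sqrt(length(x) + length(y))
    }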

Characterization of problems in the individual goal achievement process.

Considering the results of the characterization of the identified problems, there were no differences between the participants in the two conditions. That is, there were no differences in terms of how much the unresolved goal hampered participants’ individual goal achievement process at the present time and at the time when it was at its worst, how important the goal was, how much the problem in the individual goal achievement process exemplified more general problems, how long the problem had existed, and how much time participants had spent thinking about the problem during the previous week. The upper part of Table 2 summarizes the descriptive statistics and the results of the independent t -tests for participants’ goal evaluations for Experiments 1a-c.

https://doi.org/10.1371/journal.pone.0288450.t002

Effects of goal-cueing task on state rumination, mood and perceived strain.

Table 4 summarizes the results of the respective F -statistic for each mixed ANOVA separated by main effects.

We found no significant differences between the conditions in our experimental variables except for perceived strain, F (1,162) = 4.13, p = .04, ηp 2 = .02, indicating higher values for participants in EC1 compared to participants in EC2. Furthermore, time effects were obtained for all variables used except for energetic arousal, F (1,162) < 1. Participants’ valence and calmness levels decreased significantly after the goal-cueing task, whereas levels of state rumination and perceived strain increased.

In addition, there was a significant interaction effect for the general rumination rating and for energetic arousal. Participants’ general state rumination was lower before than after the goal-cueing task in both experimental conditions (EC1: p bonf < .001; EC2: p bonf < .01), with significantly different scores between the two conditions after the goal-cueing task ( p bonf = .04). Regarding participants’ energetic arousal, post-hoc tests revealed no significant differences.

Errors of commission and mean RT during the SART.

There were no differences in SART performance between participants in the two conditions (see Table 5 ).

Evaluation of the identified problem/concern after the experimental setting.

To retrospectively assess the impact of the identified problem on various aspects and to examine potential differences between conditions, participants answered five different questions. Results indicated that participants in EC1 had significantly more difficulty stopping thinking about the problem than participants in EC2 ( p wilcox < .01, r = .24). There were no differences in the other goal evaluation questions. Table 2 summarizes mean values and standard deviations, as well as the respective test statistics for each experiment.

Summary of the results

The results confirm a successful induction of state rumination in that the values of both state rumination measures increased from before to after the goal-cueing task. Interestingly, an additional interaction effect shows that the values after the goal-cueing task were significantly higher in EC1 than in EC2. However, this applies only to the general rumination rating, and the effect can be considered small. In terms of mood, there was essentially a decrease over time (except for energetic arousal), and perceived strain showed an inverse effect. In addition, there were no differences in performance on the SART and no differences in problem characterization. Furthermore, regarding the post evaluation, participants in EC1 reported more difficulty stopping thinking about the problem (which is a definitional component of rumination) than participants in EC2.

Discussion Experiment 1a

Shortening the instructions of the unresolved condition (i.e., omitting step 3 –goal focus period) did not in principle affect the results and led to similar outcomes in terms of state-rumination on both measures compared to the elaborated unresolved condition. This suggests that both experimental conditions can be used equally to induce state rumination in the participants. However, the experiment is limited by the absence of a control condition. To make a valid statement regarding the effectivity of the two experimental conditions, they must be compared with a control condition in a next step.

In general, the shortened version would have the advantage of saving time and reducing mental stress for the participants. This would be beneficial for future application in laboratory and field studies. To this end, it would be necessary to verify whether the difference between the two ECs in participants’ perceived strain disappears because it is due to the different duration of the conditions (and thus to a difference in intensity), or whether other causes underlie the difference in strain perception.

Experiment 1b

Results from Experiment 1a suggested that the identification of a problem in the individual goal achievement process and its evaluation is sufficient to produce state rumination. The aim of the second experiment was to compare the experimental condition (i.e., the elaborated unresolved condition) with a further shortened experimental condition (i.e., omitting step 2 –goal evaluation– and step 3 –goal focus period) to examine whether the effects remain stable with respect to state rumination. To better interpret potential effects, we also added a neutral control condition (NCC), which does not refer to individual goals.

Overall, forty psychology students were recruited from the Johannes Gutenberg-University Mainz for the online experiment. However, we excluded one participant who reported not making a serious effort to follow the implementation instructions from further data analysis (see also Evaluation of the quality of the online experiment ). Thus, 39 participants (female: n = 27; male: n = 12) were included in the following analyses. At the time of data collection, 20 students were enrolled in the bachelor’s and 19 in the master’s program. The mean age was 26.20 ( SD = 6.79), and participants received course credits for participating. The lower part of Table 1 presents an overview of sample characteristics separated by condition. Moreover, participants in the three conditions did not significantly differ in their level of trait rumination, F (2,36) = 0.02, p = .97, ηp 2 < .01.

Evaluation of the quality of the online experiment.

Participants rated the quality of their performance, indicating that their concentration was 3.67 ( SD = 0.84), that they made an average effort of 4.54 ( SD = 0.68) to follow the instructions seriously, and that they succeeded with 4.05 ( SD = 1.00). Furthermore, participants in the three conditions did not differ significantly on these three quality measures (all F (2,36) < 1).

Goal-cueing task . EC1 remained the same as in the previous experiment. EC2 was a shortened version of the original unresolved condition, in which participants only identified a problem in their individual goal achievement process (step 1), whereas participants in the NCC were given a neutral writing task of the same length that did not relate to personal goals. Instead of identifying a problem in their individual goal achievement process, these participants were asked to describe what they did from the morning until the time of the experiment [ 61 , 62 ].

The characterization of the identified problem refers solely to EC1, since this part was not included in EC2. Table 2 shows mean values and standard deviations for comparison with the other experiments.

Effects of goal-cueing task on state rumination, mood, and perceived strain.

Mean values and standard deviations of the relevant experimental variables are presented in Table 3 and the results of the respective F -statistic for each mixed ANOVA are summarized in Table 4 .

https://doi.org/10.1371/journal.pone.0288450.t003

https://doi.org/10.1371/journal.pone.0288450.t004

State rumination . With respect to the state rumination measures, a significant interaction effect for the general rumination rating was observed in addition to significant time and condition effects. Post-hoc tests indicated a significant difference before and after the goal-cueing task for EC2 ( p bonf = .04). Furthermore, both experimental conditions differed significantly from the NCC in their general state rumination after the goal-cueing task (EC1: p bonf < .001; EC2: p bonf = .001). Finally, we found no differences between the two experimental conditions after the goal-cueing task and no differences between any conditions before the goal-cueing task.

Perceived strain . A similar pattern of results emerged for perceived strain, where participants in the NCC reported significantly lower perceived strain after the goal-cueing task than those in EC1 and EC2 (both ECs: p bonf < .001). Differences in perceived strain before and after the goal-cueing task were evident only for the two ECs (EC1: p bonf < .01; EC2: p bonf = .04). There were no significant differences between conditions before the goal-cueing task.

Mood . For mood, there were no significant main effects for either condition or time. However, significant interactions were found for energetic arousal and valence, although these related only to differences in valence before and after the goal-cueing task in the EC1 ( p bonf = .02) and to differences after the goal-cueing task between EC1 and NCC ( p bonf < .01).

With regard to participants’ SART performance, results revealed a single significant effect for the factor time, F (1,36) = 5.70, p = .02, ηp 2 = .14, indicating that more errors of commission were made as time went on (see Table 5 ).

https://doi.org/10.1371/journal.pone.0288450.t005

No differences were found between the EC1 and the EC2 regarding the evaluation of the identified problems after the experiment.

Summary of the results.

The results of the mixed ANOVAs showed a significant interaction effect for the general rumination rating, in that both ECs differed significantly from the NCC after the goal-cueing task. Post-hoc analyses also showed a significant change in the general rumination rating from before to after the goal-cueing task for EC2. The same pattern of results applies to participants’ perceived strain. Here, in addition to the change over time in EC2, an increase in perceived strain in EC1 was also noted. With respect to the BSRI, the interaction effect was again absent. In terms of mood, the only notable effect is the significant interaction between condition and time in the valence scores, which increased significantly in EC1 after the goal-cueing task and differed significantly from the NCC after the goal-cueing task. The results of the SART performance analysis showed that errors of commission increased over time regardless of condition. Finally, the goal characteristics of EC1 are comparable to the values from Experiment 1a, and there were no differences between the ECs in terms of the post evaluation.

Discussion Experiment 1b

The main aim of Experiment 1b was to further shorten the experimental condition (i.e., the original unresolved condition) for future application and to examine whether the effects remain stable with respect to state rumination. We also added a neutral control condition to better interpret potential effects. In detail, we maintained EC1 from the previous experiment and compared it to two further conditions: EC2 consisted of a shortened version of the elaborated unresolved condition and included only step 1 of the goal-cueing task, while the NCC was not related to a personal goal. As expected, significant condition and time effects were obtained for both measures of state rumination, namely the BSRI and the general rumination rating. With respect to the latter, there was a significant condition x time interaction that, in addition to significant differences in state rumination before and after the goal-cueing task for both experimental conditions, also significantly differentiated EC1 and EC2 from the NCC after the goal-cueing task. At this point, however, it must be noted that even though the conditions did not differ significantly in their level of state rumination before the goal-cueing task, EC1 showed relatively high values. This could be due, among other things, to the fact that participants were already told in the instructions immediately before the experimental part that they would have to identify a personal problem, which could already trigger initial ruminative processes. However, the question of why the elevated values were found exclusively in EC1 remains unanswered.

As part of the development process of the goal-cueing task, we only aimed to test whether further shortening would tend to result in comparable state rumination, so Experiment 1b is more of an exploratory approach. Further, Experiment 1b is also severely limited by the small sample size and the generally homogeneous sample of psychology students. During college, reflecting on successes and failures in achieving important personal goals is a commonplace process [ 63 ]. Psychology students in particular represent a target group with experience in self-regulation, or at least a good knowledge of it. Thus, it cannot be ruled out that theoretical and practical knowledge of self-regulation influenced the effects of the goal-cueing task. Hence, only cautious recommendations can be made to further shorten the experimental condition for future laboratory and field applications. In addition, it must be considered that while further shortening may save time in an experimental setting, valuable information related to individual goals is lost.

Experiment 1c

EC1 achieved the strongest effects in Experiments 1a and 1b, but it is the "least favorable" condition due to its length, and the loss of information when reducing the experimental condition to only one step of the goal-cueing task seems too large for some application contexts. Therefore, in Experiment 1c, we reduced the goal focus period (step 3) of the elaborated unresolved condition from 10 min to 5 min. We then compared this condition again with the shortened experimental condition from Experiment 1a (omitting step 3 –goal focus period) and the neutral control condition. In addition, we applied a new measure of state rumination, namely the ruminative self-focus index according to Moberly and Watkins [ 15 ], which captures participants’ focus on their own feelings and problems at the time of the prompt.

Sixty participants completed the experiment. One participant who reported not making a serious effort to follow the implementation instructions was excluded from further data analysis (see also Evaluation of the quality of the online experiment ). Thus, 59 participants (female: n = 41, male: n = 18, M age = 24.76, SD age = 5.37) were included in the following analyses. Participants received 12 Euro as compensation for their participation in the experiment. Table 1 presents an overview of sample characteristics separated by the two experimental conditions (EC1 and EC2) and the neutral control condition (NCC). Participants in the three conditions did not significantly differ in their level of trait rumination, F (2,56) = 0.04, p = .96, ηp 2 < .01.

Regarding the quality of performing the online experiment, participants indicated that their concentration was 3.65 ( SD = 0.94), that they made an average effort of 4.59 ( SD = 0.65) to follow the instructions seriously, and that they succeeded with 4.08 ( SD = 0.95). Participants in the three conditions did not differ significantly on the three quality measures (concentration & seriousness: F (2,56) < 1; success: F (2,56) = 1.49, p = .23, ηp 2 = .05).

Ruminative self-focus . In addition to the BSRI, we now measured state rumination with the momentary ruminative self-focus index from Moberly and Watkins [ 15 ], which includes two items assessing (1) to what extent participants were focused on their feelings and (2) to what extent they were focused on their problems. Both items were rated on an 11-point scale ranging from ‘0’ ( not at all ) to ‘10’ ( very ).

Goal-cueing task . The goal focus period of the original unresolved condition was now shortened from 10 to 5 minutes (EC1). The shortening essentially consisted of removing the seven instructions at the beginning of the audio script that were intended to address the participants’ five senses. EC2 corresponds to EC2 from Experiment 1a (i.e., step 3 –goal focus period– was omitted), and the neutral control condition (NCC) remained the same as in Experiment 1b.

There were no differences between the two experimental conditions in terms of their characterization of the identified problem (see Table 2 ).

Table 3 presents mean values and standard deviations of the relevant experimental variables. Table 4 summarizes the results of the respective F -statistic for each mixed ANOVA.

Ruminative self-focus . With respect to the ruminative self-focus of participants, a significant interaction effect was found in addition to significant time and condition effects. Post-hoc tests indicated that ruminative self-focus significantly increased from before to after the goal-cueing task ( p bonf < .001) in the EC1. There were no differences for the NCC ( p bonf > .99) and no differences for EC2 ( p bonf = .08). In addition, the two experimental conditions showed no significant differences in ruminative self-focus after the goal-cueing task ( p bonf > .99); however, both differed significantly from the NCC (EC1: p bonf < .001; EC2: p bonf < .01).

BSRI . Regarding the results of the BSRI, a significant time effect as well as a significant time x condition interaction was obtained. Subsequent post-hoc analyses showed that the NCC significantly differed in their BSRI scores from the EC1 ( p bonf = .01) but not from the EC2 ( p bonf = .18) after the goal-cueing task. Moreover, we obtained no other significant differences between conditions.

Perceived strain . For the main effects, the pattern of results was the same as for the BSRI. Post-hoc analysis showed that perceived strain significantly differed from before to after the goal-cueing task in the EC1 ( p bonf < .001) and EC2 ( p bonf < .01) but not in the NCC ( p bonf > .99). In addition, the two experimental conditions did not significantly differ in their perceived strain after the task ( p bonf > .99); however, both differed significantly from the NCC (EC1: p bonf < .001; EC2: p bonf < .001).

Mood . We found no significant effects of condition, time, or their interaction on the mood measures.

With regard to participants’ SART performance, results revealed a significant effect for the factor time, F (1,57) = 4.41, p = .04, ηp 2 = .08, for errors of commission, indicating that more errors of commission were made as time went on (see Table 5 ).

No differences were found between the EC1 and the EC2 regarding the evaluation of the identified problems after the experiment (see Table 2 ).

The results confirm a successful induction of state rumination, which is expressed first by a significant interaction with a large effect for both rumination measures. On both measures, the experimental conditions differed significantly from the NCC after the goal-cueing task, although for EC2 this applied only to ruminative self-focus. The same pattern of results was found for perceived strain. No significant effects were observed for mood. There were also no significant differences in terms of goal characteristics and post evaluation. Finally, the results of the SART analysis showed a similar result as in Experiment 1b: regardless of condition, more errors of commission were made over time.

Discussion Experiment 1c

Since EC1 had so far produced the strongest effects in Experiments 1a and 1b but is the "least favorable" condition due to its length, the third experiment aimed to test the effectiveness of the elaborated unresolved condition with a reduced goal focus period (5 min). This reduced EC1 was then compared again with EC2 from Experiment 1a (i.e., step 3 –goal focus period– omitted) and the NCC (i.e., not related to personal goals) from Experiment 1b. In addition, we added a new measure of state rumination, namely the ruminative self-focus index according to Moberly and Watkins [ 15 ], which captures participants’ focus on their own feelings and problems.

The results underscore the findings from the previous two experiments, in the sense that significant interaction effects were now obtained for both rumination measures, confirming the effectiveness of the task in inducing state rumination as well as the clear distinction from a neutral control condition unrelated to personal goals. With respect to SART performance, as in Experiment 1b, a time effect for errors of commission was found, indicating an increase in errors of commission across all conditions. Why no other significant main effects were obtained in the SART in Experiments 1a-c is discussed in the general discussion.

General discussion and limitations Experiments 1a-c

In our three experiments, we focused on an experimental condition (i.e., the goal-cueing task) to induce state rumination. In this task, participants were asked to identify, evaluate, and focus on a personally relevant goal that was previously unresolved and still bothering them. We further tested modifications of the task (i.e., shortenings of the elaborated unresolved condition to different degrees) and compared the effectiveness of these modifications with the elaborated unresolved condition. The results of these three experiments demonstrate that state rumination can be induced by the application of the goal-cueing task. However, some shortcomings of the general approach of these three studies must be considered before a final conclusion can be drawn.

One limitation is that the three experiments yielded mixed results regarding the state rumination measures. While the overall pattern of results supports the assumption that the goal-cueing task can elicit rumination in an experimental context, the question remains whether all three versions are correspondingly effective. One reason for the mixed results could be that the three experiments had very different sample sizes, which leads to a potential power problem, especially in Experiments 1b and 1c. To test this, we calculated post-hoc power analyses for all three experiments, which showed low power for Experiments 1a and 1b and sufficient power for Experiment 1c. In detail, the results of the post-hoc power analyses for the strongest interaction effects were as follows: Experiment 1a: N = 164, α = .03, ηp 2 = .03 ( f = .17), power = .52; Experiment 1b: N = 39, α = .03, ηp 2 = .17 ( f = .45), power = .55; Experiment 1c: N = 59, α = .001, ηp 2 = .33 ( f = .77), power = .91. Another drawback of our approach is the type of state rumination measures we used. To conduct our experiments, we relied on partially unvalidated measures, such as our translated version of the BSRI. Although this means that the results should be interpreted with caution, this fact points to a general problem of limited measures of state rumination (especially in the German-speaking area). Complementing this, Marchetti et al. [ 23 ] highlighted that even available measures are often still characterized by unknown and/or unreplicated psychometric properties. To further validate the efficacy of the goal-cueing task, additional state measures should be considered that capture additional facets of rumination, such as the repetitive quality, degree of abstractness, or uncontrollability of rumination processes. Thus, for future research, a 4-item measure from Rosenkranz et al. [ 64 ] or Krys [ 18 ] seems promising for capturing more facets of rumination.

Considering the results from the first three experiments (especially those from Experiment 1c), EC1 with the reduced goal focus period (5 minutes) seems to provide a reliable induction of rumination. The shortening of the goal-cueing task also has the advantage of avoiding confusion between the abstract and concrete processing styles of rumination. While in Experiments 1a and 1b the intention was to enhance the induction of rumination by adding items that involved the use of all senses, this procedure may also have focused participants more on the direct, specific, and contextualized experience of an event, which would correspond to a more adaptive, concrete processing style of rumination [ 1 , 28 ]. This contrasts with the more abstract processing style, which refers to repetitive and difficult-to-control dwelling on one or more negative topics and can be considered a more maladaptive processing style that has classically been defined as rumination [ 1 , 28 ]. Consequently, different processing styles and their mixture can have different effects on affect, mood, and emotion regulation [ 1 , 2 , 28 , 29 , 65 ]. Although the results of Experiments 1a and 1b pointed in the desired direction regarding state rumination, the instructions should be clearly assigned to one processing style in the future to increase the effects of the goal-cueing task. Furthermore, the use of a neutral control condition can also be seen as limiting. In Roberts’ original study [ 3 ], the control condition also related to a goal. This allows variables to be compared under similar induction conditions, with the goal discrepancy expected to be activated in the unresolved goal condition but not in the resolved goal condition. However, reflecting on an achieved goal can initiate the processing of currently pursued, unachieved goals [ 66 ]. Consequently, information associated with unachieved goals may have higher availability, be prioritized accordingly, and thus automatically attract attention [ 67 ]. Therefore, to draw more precise conclusions about which conditions of the goal-cueing task should be used in further research, it is necessary to compare the two control conditions (resolved goal condition vs. neutral control condition) with each other, as well as to compare the effectiveness of the shortened experimental condition with the original unresolved condition from Roberts and colleagues [ 3 ].

Experiment 2

Experiment 2 aimed to address some of the limitations observed in Experiments 1a-c, namely the lack of a comparison between the original goal-related control condition of Roberts et al. [ 3 ] and the neutral control condition previously used in Experiments 1b and 1c. Further, we aimed to compare our experimental condition 1 (i.e., shortened audio instructions; 5 min) with the original unresolved experimental condition of Roberts et al. [ 3 ] (see the Measures & material section of Experiment 1a for a detailed description). In detail, we compared experimental condition 1 (i.e., the shortened unresolved condition; EC1) and the neutral control condition (i.e., the neutral control condition not related to a goal; NCC) from Experiment 1c with the original experimental condition from Roberts et al. [ 3 ; i.e., the original unresolved condition; EC2] as well as the original resolved control condition from Roberts et al. [ 3 ; i.e., the goal-related control condition; GRCC]. Using 4 (condition: EC1, EC2, GRCC, NCC) x 2 (time: before vs. after the goal-cueing task) mixed ANOVAs, we examined the effectiveness of the goal-cueing task with different state rumination measures. For this purpose, in addition to the already applied measures (BSRI, ruminative self-focus index, and general rumination rating), we used a new measure to capture participants’ momentary repetitive negative thinking [ 64 ]. Although we criticized the use of the BSRI in Experiments 1a-c, we applied it again in Experiment 2 to be consistent with our previous experiments and to be able to directly compare results. Moreover, we assessed the same additional variables (MDMQ and perceived strain) to account for emotional processes in addition to cognitive processes, and again used the SART as an additional outcome measure.

We performed an a priori power analysis (G*Power 3.1) [ 68 ] for Experiment 2 to determine an appropriate sample size. To detect a medium interaction effect ( f = 0.25) with an alpha of α = .05 and a power of .80 using a mixed ANOVA with time (induction: pre vs. post) and condition (EC1 vs. EC2 vs. GRCC vs. NCC) as factors, a total sample size of N = 184 was recommended (n = 46 per condition).
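
The original calculation was run in G*Power; as a rough cross-check in R, one can approximate the 4 x 2 within-between interaction as a one-way ANOVA on pre-post change scores across the four groups (an assumption that ignores the correlation between repeated measurements, which G*Power additionally takes into account):

    library(pwr)
    # approximate required sample size for f = 0.25, alpha = .05, power = .80, 4 groups
    pwr.anova.test(k = 4, f = 0.25, sig.level = 0.05, power = 0.80)
    # yields roughly n = 45 per group (about 180 in total),
    # in the same range as the N = 184 recommended by G*Power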

Overall, 214 participants completed the experiment. However, we excluded two participants who reported not making a serious effort to follow the implementation instructions and a further seven participants who stated that they had already participated in a previous study on the same topic. Accordingly, the final sample consisted of 205 participants (female: n = 142; male: n = 61; diverse: n = 2). The mean age was 24.28 ( SD = 5.70). Most of the participants were students of various subjects (n = 172), followed by 21 employees. The remaining 12 participants were pupils, trainees, civil servants, self-employed, job seekers, or other. Table 6 presents an overview of sample characteristics separated by condition. Moreover, participants in the four conditions did not significantly differ in their level of trait rumination, F (1,203) = 0.07, p = .79, ƞp 2 < .01.

https://doi.org/10.1371/journal.pone.0288450.t006

Participants rated the quality of their performance, indicating that their concentration was 3.48 ( SD = 1.01), that they made an average effort of 4.46 ( SD = 0.68) to follow the instructions seriously, and that they succeeded with 4.02 ( SD = 0.92). In terms of concentration, there were no differences between participants in the four conditions, F (3,201) = 1.61, p = .19, ƞp 2 = .02. In contrast, GRCC ( M = 4.25, SD = 0.72) differed significantly from NCC ( M = 4.68, SD = 0.55; p bonf < .01) in how seriously they tried to perform the experiment, F (3,201) = 4.14, p = .01, ƞp 2 = .06. In addition, there was also a significant difference between NCC ( M = 4.32, SD = 0.85) and GRCC ( M = 3.83, SD = 0.89; p bonf = .03) in terms of how well they succeeded in implementing the instructions, F (3,201) = 3.45, p = .02, ƞp 2 = .05.

Participants were invited to take part in the experiment via the institute’s student mailing list, the respective study offices of the study programs, notices on the bulletin board of the Johannes Gutenberg-University Mainz, and personal contacts. Participation in this experiment was voluntary, and participants received 12 Euro as compensation (psychology students could alternatively receive course credit for their participation). Otherwise, the procedure was the same as in Experiments 1a-c.

Here we report only the measures that we added compared to Experiments 1a-c.

State rumination . In addition to the already used BSRI, ruminative self-focus index, and general rumination rating, we applied a measure to capture momentary repetitive negative thinking (MRNT, i.e., a process-related scale) [ 64 ]. The scale comprises four items (e.g., “Thoughts come to my mind without me wanting them to.”), each of which focuses on a core characteristic of rumination: repetitiveness, intrusiveness, uncontrollability, and impairment. Participants answered each statement on a 7-point scale ranging from ‘1’ ( not at all ) to ‘7’ ( very ). As with the BSRI, we assessed the MRNT at four points in time.

Goal-cueing task . EC1 and the NCC remained the same as in Experiment 1c. EC2 and the GRCC corresponded to the original conditions of Roberts and colleagues [ 3 ; for a detailed description of EC2 see the Measures & material section of Experiment 1a]. Participants in the GRCC were instructed to identify a problem from the past that has since been resolved, no longer repeatedly comes to mind, and no longer makes them feel bad, sad, down, or stressed. As in EC2, participants were asked to briefly outline the identified problem in writing, answer some questions about it, and follow a 10-minute audio instruction (for example: "Think about why solving this problem will make progress toward your personal goals"; for detailed information see Roberts et al. [ 3 ]).

We performed the same analyses as in Experiment 1c. The only difference is that we compared four conditions in the group comparisons. In case of a significant two-way interaction, we applied a simple main effect analysis (one-way model) of the condition variable at each level of the time variable and reported Bonferroni-adjusted p -values. If the simple main effect was significant, we ran multiple pairwise comparisons to determine which conditions differed. In addition, differences in terms of goal characteristics and post evaluation were calculated using an ANOVA rather than independent t -tests because three conditions were compared. Moreover, we also computed the same exploratory analyses regarding the influence of gender as well as the change in the state measures during the SART. The results of these analyses are summarized in S4 to S6 Tables.
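
A minimal sketch of this follow-up strategy in base R, using hypothetical column names (a data frame df with id, condition, time, and a rumination score), could look as follows; the Bonferroni adjustment of the simple-main-effect p-values themselves, which the procedure above also applies, is only noted in a comment here.

    # after a significant time x condition interaction:
    for (tp in c("pre", "post")) {
      sub <- df[df$time == tp, ]
      # simple main effect of condition at this time point
      # (Bonferroni-adjust this p value across the two time points, as described above)
      print(summary(aov(rumination ~ condition, data = sub)))
      # Bonferroni-adjusted pairwise comparisons between the four conditions
      print(pairwise.t.test(sub$rumination, sub$condition, p.adjust.method = "bonferroni"))
    }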

Results indicated that there were significant differences between the conditions in five of the six goal-related questions. There was no significant effect of condition on participants’ evaluation of how long the problem had existed. The upper part of Table 7 summarizes the descriptive statistics and the results of the F -tests for participants’ goal evaluations. Post-hoc tests showed that both ECs differed significantly from the GRCC in terms of trouble at the time of the experiment and time spent on the problem (both p bonf < .001).

https://doi.org/10.1371/journal.pone.0288450.t007

In addition, only the EC1 significantly differed from the GRCC in terms of goal importance ( p bonf < .01), how much the goal related to general concerns ( p bonf = .04), and how much the problem had bothered them at its worst ( p bonf = .01). Thus, the goals identified in the two ECs did not differ in subjective evaluations of their nature or severity, but participants in the ECs (especially in the EC1) reported that the problem was one that was bothering them more than participants in the GRCC.

Four two-factorial mixed ANOVAs with the between-group factor condition (4 levels) and the repeated-measures factor time (2 levels: pre vs. post goal-cueing task) were computed to examine the effects of the goal-cueing task on state rumination, mood, and perceived strain. Table 8 presents the mean values and standard deviations of the relevant experimental variables. With regard to state rumination, results indicated significant interaction effects and significant condition effects for all measures used. For time, however, only two of the four state rumination measures showed a significant effect.

https://doi.org/10.1371/journal.pone.0288450.t008

BSRI . There was a significant main effect of condition, F (3,201) = 3.86, p = .01, ηp 2 = .05, and a significant main effect of time, F (1,201) = 11.22, p = .001, ηp 2 = .05, which were qualified by a significant interaction between time and condition, F (3,201) = 8.94, p < .001, ηp 2 = .12. Subsequent pairwise comparisons between condition levels showed that the simple main effect of condition was not significant before the goal-cueing task ( p bonf = .67) but was significant after the goal-cueing task ( p bonf < .001). Pairwise comparisons after the goal-cueing task showed that the mean BSRI score differed significantly between EC1 and NCC ( p bonf = .01), EC2 and NCC ( p bonf < .001), and GRCC and NCC ( p bonf = .02).

Momentary repetitive negative thinking . With regard to the results for the MRNT, a significant condition effect was obtained, F (3,201) = 4.75, p < .01, ηp 2 = .07, as well as a significant time x condition interaction, F (3,201) = 11.67, p < .001, ηp 2 = .15. The effect of time was not significant, F (1,201) = 1.35, p = .25, ηp 2 = .01. Considering the Bonferroni-adjusted p -values of the simple main effect analysis, the effect of condition was not significant before the goal-cueing task ( p bonf = .44) but was significant after the goal-cueing task ( p bonf < .001). Pairwise comparisons showed that the MRNT differed significantly between EC1 and NCC ( p bonf < .001), EC2 and NCC ( p bonf < .001), EC1 and GRCC ( p bonf = .01), and EC2 and GRCC ( p bonf = .01).

Ruminative self-focus . There was a significant main effect of condition, F (3,201) = 16.54, p < .001, ηp 2 = .20, and a significant main effect of time, F (1,201) = 7.46, p < .01, ηp 2 = .03, which were qualified by a significant interaction between time and condition, F (3,201) = 29.96, p < .001, ηp 2 = .31. Subsequent pairwise comparisons between condition levels showed that the simple main effect of condition was not significant before the goal-cueing task ( p bonf = .87) but was significant after the goal-cueing task ( p bonf < .001). Pairwise comparisons after the goal-cueing task showed that ruminative self-focus differed between EC1 and NCC ( p bonf < .001), EC2 and NCC ( p bonf < .001), and GRCC and NCC ( p bonf < .001).

General rumination rating. Regarding the general state rumination rating, a significant effect of condition, F(3,201) = 9.88, p < .001, ηp² = .13, as well as of the interaction between time and condition, F(3,201) = 13.82, p < .001, ηp² = .17, was obtained. The effect of time was not significant, F(1,201) < 1. Results of the subsequent simple main effect analysis showed that the effect of condition was not significant before the goal-cueing task (p_bonf = .73) but was significant after the goal-cueing task (p_bonf < .001). Pairwise comparisons showed that the general rumination rating differed in EC1 vs. NCC (p_bonf < .001), in EC2 vs. NCC (p_bonf < .001), and in GRCC vs. NCC (p_bonf < .001). There were also significant differences between EC1 vs. GRCC (p_bonf < .001) and EC2 vs. GRCC (p_bonf = .01).

Mood. We conducted three further 4 × 2 ANOVAs to examine the effects of the goal-cueing task on mood as assessed by the MDMQ. For energetic arousal, there was a significant effect of condition, F(3,201) = 4.02, p < .01, ηp² = .05, and of the time × condition interaction, F(3,201) = 4.61, p < .01, ηp² = .06, as well as a trend toward significance for time, F(1,201) = 3.77, p = .05, ηp² = .02. Subsequent pairwise comparisons between condition levels showed that the simple main effect of condition was significant prior to the goal-cueing task (p_bonf < .01) but not after the goal-cueing task (p_bonf = .07). Pairwise comparisons at the pre goal-cueing task measurement showed that energetic arousal differed in EC1 vs. EC2 (p_bonf = .001) as well as in EC1 vs. GRCC (p_bonf = .03). With regard to valence, we found a significant effect of condition, F(3,201) = 3.09, p = .03, ηp² = .04, and a significant main effect of time, F(1,201) = 20.69, p < .001, ηp² = .09, which were qualified by a significant interaction between time and condition, F(3,201) = 11.36, p < .001, ηp² = .14. The subsequent simple main effect analysis of condition revealed no significant differences before the goal-cueing task (p_bonf = .70) but significant differences after the goal-cueing task (p_bonf < .001). Pairwise comparisons showed that valence differed in EC1 vs. NCC (p_bonf < .001), in EC2 vs. NCC (p_bonf < .01), and in EC1 vs. GRCC (p_bonf = .03). Finally, the 4 × 2 ANOVA for calmness indicated a significant effect of condition, F(3,201) = 3.04, p = .03, ηp² = .04, and of the time × condition interaction, F(3,201) = 6.77, p < .001, ηp² = .09, but not of time, F(1,201) = 2.70, p = .11, ηp² = .01. Subsequent pairwise comparisons between condition levels showed that the simple main effect of condition was not significant prior to the goal-cueing task (p_bonf = .99) but was significant after the goal-cueing task (p_bonf < .001). Pairwise comparisons showed that calmness values differed in EC1 vs. NCC (p_bonf = .001) and in EC2 vs. NCC (p_bonf < .001).

Perceived strain. Last, a 4 × 2 mixed ANOVA was applied to investigate the effect of the goal-cueing task on participants' perceived strain. There was a significant effect of condition, F(3,201) = 11.68, p < .001, ηp² = .15, and a significant effect of the interaction between time and condition, F(3,201) = 18.14, p < .001, ηp² = .21. The effect of time was not significant, F(1,201) < 1. Subsequent pairwise comparisons between condition levels showed that the simple main effect of condition was not significant prior to the goal-cueing task (p_bonf = .99) but was significant after the goal-cueing task (p_bonf < .001). Pairwise comparisons at the post goal-cueing task measurement showed that perceived strain differed in EC1 vs. NCC (p_bonf < .001), in EC2 vs. NCC (p_bonf < .001), as well as in EC1 vs. GRCC (p_bonf < .001) and in EC2 vs. GRCC (p_bonf < .001).

Table 9 illustrates the percentage error rates and mean RTs for each condition during each block of the SART and the respective results of the F-statistics. Regarding participants' errors of commission, results revealed a significant effect of condition, F(3,187) = 3.30, p = .02, ηp² = .05, and of time, F(1,187) = 7.36, p < .01, ηp² = .04, but not of the condition × time interaction, F(3,187) = 2.04, p = .11, ηp² = .03. With regard to the mean RT for correct go-trials, there was a significant effect of time, F(1,187) = 6.26, p = .01, ηp² = .03, indicating slower mean RTs as time went on, but no significant effects of condition or of the condition × time interaction, both Fs(3,187) < 1.


https://doi.org/10.1371/journal.pone.0288450.t009
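How such per-block SART summary scores might be derived from trial-level data is sketched below; the file and column names (participant, block, trial_type, responded, correct, rt_ms) are hypothetical and are not taken from the study's data files.

```python
# Minimal sketch: commission error rates and mean go-trial RTs per participant and block.
import pandas as pd

trials = pd.read_csv("sart_trials.csv")  # hypothetical trial-level SART data

# Errors of commission: proportion of no-go trials on which a response was made.
commission = (
    trials[trials["trial_type"] == "nogo"]
    .groupby(["participant", "block"])["responded"]
    .mean()
    .rename("commission_error_rate")
)

# Mean reaction time for correct go trials only.
go_rt = (
    trials[(trials["trial_type"] == "go") & (trials["correct"] == 1)]
    .groupby(["participant", "block"])["rt_ms"]
    .mean()
    .rename("mean_go_rt_ms")
)

summary = pd.concat([commission, go_rt], axis=1).reset_index()
print(summary.head())
```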

No differences were found between EC1, EC2, and the GRCC regarding the evaluation of the identified problems after the experiment (see Table 7). We also asked participants in both control conditions whether they had been thinking about a current problem that was bothering them during the experiment. This question was answered in the affirmative by 40 participants from the GRCC (75.5%) and 31 participants from the NCC (58.5%).

Discussion Experiment 2

Experiment 2 had two aims. First, we aimed to test which control condition (goal-related vs. neutral) is best suited to maximize the effects of the goal-cueing task in terms of state rumination in future research. Second, we wanted to examine whether the shortened unresolved condition is as effective in inducing state rumination as the original unresolved condition by Roberts et al. [ 3 ]. For this purpose, we applied different rumination measures, namely the BSRI, the ruminative self-focus index, a general rumination rating, and a momentary repetitive negative thinking measure, and performed four two-factorial mixed effects ANOVAs. Moreover, we assessed further variables (MDMQ and perceived strain), using the same approach, to account for emotional processes in addition to cognitive processes, and again used the SART as an additional outcome measure.

Results of the mixed ANOVAs indicated a successful induction of rumination, expressed by significant interaction effects (time × condition) on all four state rumination measures. Subsequent pairwise comparisons between condition levels showed that the simple main effect of condition was not significant prior to the goal-cueing task but was significant after the goal-cueing task. This was consistent across all applied state rumination measures. In detail, participants in the two experimental conditions reported significantly higher scores on all four state rumination measures after the goal-cueing task (except for the BSRI for the EC1, which showed only a trend effect with p_bonf = .06) compared to participants from the neutral control condition. However, this did not apply to the goal-related control condition (i.e., the original resolved condition by Roberts et al. [ 3 ]). Here, only the examination of the general rumination rating showed a significant difference to EC1 after the goal-cueing task. In addition, the GRCC showed higher state rumination scores on two of four measures (ruminative self-focus, general rumination rating) compared to the NCC. This could also be partly because 75% of the participants in the GRCC also thought about a current problem while conducting the experiment. According to Shah [ 67 ], reflecting on a solved problem during the goal-focusing period may have initiated the processing of currently pursued, unachieved goals. Consequently, it cannot be assumed that the goal discrepancy is activated only in the unresolved goal condition and not in the resolved goal condition. Whether these results ultimately also influenced the quality of the execution of the experiment, however, remains open. Here, minor differences between the two control conditions (MΔ < 0.50) emerged in the sense that the goal-related control condition reported lower values for seriousness and successful execution.

To summarize, in terms of state rumination, the results of Experiment 2 showed that the two ECs can be used equally well to induce state rumination in participants, which means that, depending on the experimental design and the time available, either the original experimental condition or the shortened experimental condition can be applied. To maximize the effects in terms of state rumination, the NCC seems to be more suitable due to the better differentiation in the state measures.

In terms of mood, valence in EC2 was found to decrease during the goal-cueing task, and participants in both ECs reported significantly lower valence scores thereafter compared to the NCC. It seems unsurprising that participants who are asked to address an unresolved issue that has made them sad, depressed, or down in the recent past feel more uncomfortable than someone who is asked to be as objective as possible about their past daily routine. In addition, the NCC participants showed higher calmness values after the goal-cueing task compared to both ECs, and the effect for energetic arousal reflects differences in pre-induction values (between EC1 and EC2 and between EC1 and the GRCC). With this one exception, no further significant differences were found between the GRCC and the two ECs. Accordingly, it can be assumed that the mood in both ECs is similar, in some cases slightly more "negative" compared to the NCC, but not compared to the GRCC. The latter is consistent with Roberts et al. [ 3 ], who showed that participants in the unresolved and resolved conditions did not differ with respect to sadness, tension, and self-focus. However, in the subsequent study [ 32 ], the two conditions were found to have different effects on participants' negative affect (measured with the Positive and Negative Affect Schedule [ 69 ]), in the sense that negative affect increased during the goal-cueing task in the unresolved condition and differed significantly from the resolved condition thereafter. So far, however, it seems difficult to draw conclusions about the change in mood during the goal-cueing task, as mood and affect were assessed with different measures across studies.

In terms of performance in the SART, participants' reaction times slowed regardless of condition, and more errors were made as time went on. A condition effect indicates that EC1 had, in principle, a higher error rate in the SART compared to the other conditions. However, due to the lack of interaction effects, no significant differences can be attributed to the goal-cueing task. Why no further significant main effects were obtained in the SART is discussed in the general discussion.

Finally, we would like to address a potential limitation of this study: To examine the effectiveness of the goal-cueing task and its directly observable effect on state rumination, we used different state rumination measures to capture rumination with a focus on different aspects (e.g., core characteristics of rumination, self-focused attention, attention to one's distress and its possible causes and effects, general state of rumination) and to figure out which measure would be most appropriate for future studies. However, it cannot be ruled out that these measures influenced each other in some way. Therefore, the number of measurement variables for rumination should be reduced in future studies to avoid potential interference.

General discussion

The aim of the four experiments of the present study was to evaluate the effectiveness of the goal-cueing task procedure [ 3 ] in inducing state rumination on an individual level in different experimental settings. For this, we used different measures of state rumination covering different facets of rumination, namely the BSRI [ 23 ], an index of ruminative self-focus [ 15 ], a general rumination rating [ 37 ], and an EMA measure of momentary repetitive negative thinking [ 64 ]. Specifically, we were interested in whether the use of the goal-cueing task had a directly observable effect on state rumination as assessed by these different state rumination measures.

Since we have already discussed the results of the individual experiments above, we give only a brief overview of the main results and discuss overarching aspects in this section. Experiment 1a showed that shortening the instructions of the unresolved condition (i.e., omitting step 3, the goal-focus period) did not in principle affect the results. However, the interpretability of this finding is limited because a control condition against which to relate the observed effects was missing. In Experiment 1b, we compared the EC from the previous experiment with two further conditions: another shortened experimental condition (which included only step 1 of the goal-cueing task) and a neutral control condition that was not related to a personal goal. Here, for the first time, we found a significant interaction between time and condition with respect to the general rumination rating. However, Experiment 1b was severely limited by the small sample size and the homogeneous sample of psychology students. Experiment 1c picked up the shortened experimental condition from Experiment 1a and compared it with another shortened experimental condition (goal-focus period reduced to 5 min) and the neutral control condition. By shortening the goal-focus period, previously existing concerns about the processing style of rumination during the goal-cueing task could be eliminated. Significant interaction effects on both state rumination measures underlined the assumption of an effective form of rumination induction. Finally, Experiment 2 filled the gap that became evident from the limitations and open questions of the previous experiments: the comparison of the experimental condition with the shortened goal-focus period and the neutral control condition from Experiment 1c with the original experimental condition from Roberts et al. [ 3 ] (i.e., the original unresolved condition) as well as the original resolved control condition from Roberts et al. [ 3 ] (i.e., the goal-related control condition). Results from Experiment 2 confirmed a successful induction of rumination through the shortened experimental condition, expressed by significant interaction effects on all four state rumination measures.

Regarding performance in the SART, apart from a few time effects (and an additional condition effect in Experiment 2), there were no relevant rumination-related effects. Null results between conditions regarding the SART may be attributed to three different reasons. First, consistent with the findings of Roberts et al. [ 32 ], the absence of condition effects could indicate that participants in the conditions did not differ in their attentional control capabilities. This is in line with a study by Schmitz-Hübsch and Schindler [ 70 ], who could not replicate the postulated relationship between performance on two different versions of the SART and self-reported everyday errors. Further, a study by Edwards [ 34 ] did not find differences in SART performance between a goal-avoidance and a goal-approach condition following the goal-cueing task. Second, participants in the unresolved condition, due to the perceived goal discrepancy, might have used more resources, such as attention, effort, or energy, to resolve the current discrepancy [ 71 – 73 ] and thus obtained similar results in the SART as the participants in the control conditions. Roberts et al. [ 3 ] even indicated that participants who focused on unresolved goals were slower and more accurate on the SART compared to participants in the resolved condition and speculated that this might be a compensatory strategy of slowing responses to reduce the risk of errors. Accordingly, the SART could also be seen as a welcome distraction that allows switching from a self-referential focus to an external focus. Third, Huffziger and colleagues [ 16 ] suggested that consequences of induced rumination might be stronger when inductions occur in natural contexts because these provide more personally relevant reasons to ruminate about, which may consequently influence individuals' actions and behaviors. Following the three reasons mentioned above, the use of the SART should be reconsidered in future studies depending on the research question.

Furthermore, in our own as well as previous studies [ 3 , 32 , 34 ], participants defined a personally relevant goal but were then confronted with a task that was unrelated to the defined goal and thus did not seem goal enhancing or particularly personally relevant. Accordingly, the question remains open as to how the goal manipulation affects performance on personally relevant tasks as they may occur in everyday life, for example, a perceived discrepancy or problems at work [ 74 ], an exam at university [ 8 ], or competitions in sports [ 75 ]. Future studies could therefore combine the goal-cueing task with a more relevant task in the laboratory or transfer the goal-cueing task directly into everyday life by means of an ambulatory assessment approach (see, for example, Huffziger et al. [ 16 ]). For this, the different modifications of our experimental conditions could provide interesting starting points.

Finally, successful and valid induction of rumination could be particularly useful in explaining causal relationships between rumination and individual goal achievement (e.g., what types of goals are more likely to trigger rumination that potentially affects daily performance and well-being: avoidance or approach goals, well- versus poorly defined goals, or externally versus internally motivated goals). Moreover, as a tool, this form of induction can support further research in uncovering the mechanisms underlying rumination that influence specific mental processes (e.g., testing further hypotheses from the H-EX-A-GO-N model [ 1 ]). Building on these findings, future research could develop specific cognitive interventions (e.g., adapting cognitive-behavioral therapy for rumination to a nonclinical context [see: 24]) and effective problem-solving strategies through which rumination could be resolved and prevented in the future.

Limitations

A general limitation was that Experiments 1a-c took place during the pandemic lockdown and were therefore performed as online experiments (Experiment 2 was then also conducted as an online experiment to be consistent with the previous experiments), making it much more difficult to control conditions and potential confounding variables compared to conducting the experiments in the laboratory [ 39 ]. However, we took several steps to increase the quality of the implementation of the online experiments (see the section Advantages and disadvantages of applying an online experiment in our study under Experiment 1a). In addition, we asked the participants for their honest evaluation of their performance quality, and the results suggested that the implementation and the environment were appropriate for participation in the experiments. Moreover, existing comparative studies between laboratory and online experiments show that the results agree surprisingly well [ 76 , 77 ].

In conclusion, the current experiments add to the research field of rumination and goal achievement by providing experimental evidence for the association between rumination and individual goal achievement in the face of unresolved goals. Ultimately, these findings contribute to our understanding of potential triggers for rumination in non-clinical samples. Thus, the goal-cueing task allows researchers to test causal links between rumination and factors that are thought to lead to ruminative thinking. In addition, the results of the four experiments provide hints as to how the paradigm can be modified for application in other experimental settings in and outside the laboratory. In the future, the effects potentially achieved with this paradigm could also be significant in real life. For example, it would be valuable to examine the relationship between the dynamics of momentary rumination and day-to-day performance, as well as performance in specific contexts (e.g., work, sport, academics).

Supporting information

S1 Table. Results of the multivariate analysis with group, gender and measure time as well as their respective interaction as factors and different rumination measures as dependent variables for Experiments 1a-c.

https://doi.org/10.1371/journal.pone.0288450.s001

S2 Table. Test statistics for variables during the SART and corresponding effect sizes (ηp²) of the respective mixed ANOVAs separated by effects for Experiments 1a-c.

https://doi.org/10.1371/journal.pone.0288450.s002

S3 Table. Mean values and standard deviations of relevant experimental variables assessed during the SART and separated by condition and time for Experiments 1a-c.

https://doi.org/10.1371/journal.pone.0288450.s003

S4 Table. Results of the multivariate analysis with group, gender and measure time as well as their respective interaction as factors and different rumination measures as dependent variables for Experiment 2.

https://doi.org/10.1371/journal.pone.0288450.s004

S5 Table. Test statistics for variables during the SART and corresponding effect sizes (ηp²) of the respective mixed ANOVAs separated by effects for Experiment 2.

https://doi.org/10.1371/journal.pone.0288450.s005

S6 Table. Mean values and standard deviations of relevant experimental variables assessed during the SART and separated by condition and time for Experiment 2.

https://doi.org/10.1371/journal.pone.0288450.s006

Acknowledgments

We would like to thank Laura Huber and Jannis Tauchert for their help in collecting the data as well as all the volunteers who participated in our experiments. We would also like to thank Sandra Schönfelder for her advice on modifying the audio instructions.

  • 4. Martin LL, Tesser A. Some ruminative thoughts. In: Wyer RS Jr., editor. Ruminative Thoughts. Advances in Social Cognition. Psychology Press; 1996. pp. 1–47. https://doi.org/10.4324/9780203763513
  • 5. Martin LL, Tesser A. Extending the Goal Progress Theory of Rumination: Goal Reevaluation and Growth. In: Sanna LJ, Chang EC, editors. Judgments Over Time: The Interplay of Thoughts, Feelings, and Behaviors. Oxford University Press; 2006. pp. 145–62. https://doi.org/10.1093/acprof:oso/9780195177664.003.0009
  • 9. Wimmer T. Positive und negative Metakognitionen über die Rumination und ihre differentiellen Effekte auf die kognitive Flexibilität bei dysphorisch/depressiven Frauen [Positive and negative metacognitions about rumination and their differential effects on cognitive flexibility in dysphoric/depressed women; dissertation]. Westfälischen Wilhelms-University, Münster; 2005. Available at https://miami.uni-muenster.de/Record/2ff91faf-5c87-47c5-9dc2-aaac555a0962 .
  • 12. Kuhl J. Motivation und Persönlichkeit: Interaktionen psychischer Systeme [motivation and personality: Interactions of psychological systems]. Göttingen: Hogrefe; 2001. German.
  • 25. Krys S. Intrusive and repetitive thoughts: Investigating the construct of rumination [dissertation]. Kiel University; 2019. Available at https://macau.uni-kiel.de/servlets/MCRFileNodeServlet/dissertation_derivate_00008314/Dissertation_Sabrina_Krys.pdf .
  • 33. Lanning LER. Effect of Goal Discrepancy Rumination on Overgeneral Memory [dissertation]. University of Exeter; 2015. Available at http://hdl.handle.net/10871/20203 .
  • 34. Edwards LC. Does approach vs. avoidance framing influence rumination cued by unresolved goals? [dissertation]. University of Exeter; 2017. Available at https://ore.exeter.ac.uk/repository/bitstream/handle/10871/29758/EdwardsL.pdf?sequence=1&isAllowed=y .
  • 44. Schwarzer R, Jerusalem M. Generalized Self-Efficacy scale. In: Weinman J, Wright S, Johnston M, editors. Measures in health psychology: A user’s portfolio. Causal and control beliefs. Windsor, UK: 1995. p. 35–37.
  • 48. Klinger E, Koster EHW, Marchetti I. Spontaneous Thought and Goal Pursuit. From functions such as planning to dysfunctions such as rumination. In: Christoff K, Fox KCR, editors. Oxford Handbooks Online. Oxford University Press. 2018 Apr 5; 215–247. https://doi.org/10.1093/oxfordhb/9780190464745.013.24
  • 50. Free Inquisit 5 Player app. version 5 [software]. Seattle, WA: Millisecond Software.
  • 51. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna: 2018. Available from: https://www.R-project.org/ .
  • 56. Field A, Miles J, Field Z. Discovering Statistics Using R. Sage Publication Ltd; 2012.
  • 70. Schmitz-Hübsch M, Schindler S. Alltagsfehler und Sustained Attention - eine Replikation zum SART [Everyday errors and sustained attention - a replication of the SART]. [Unpublished undergraduate thesis]. Technical University Dresden; 2008. German.


Planning and Conducting Clinical Research: The Whole Process

Boon-How Chew

1 Family Medicine, Universiti Putra Malaysia, Serdang, MYS

The goal of this review was to present the essential steps in the entire process of clinical research. Research should begin with an educated idea arising from a clinical practice issue. A research topic rooted in a clinical problem provides the motivation for the completion of the research and relevancy for affecting medical practice changes and improvements. The research idea is further informed through a systematic literature review, clarified into a conceptual framework, and defined into an answerable research question. Engagement with clinical experts, experienced researchers, relevant stakeholders of the research topic, and even patients can enhance the research question’s relevance, feasibility, and efficiency. Clinical research can be completed in two major steps: study designing and study reporting. Three study designs should be planned in sequence and iterated until properly refined: theoretical design, data collection design, and statistical analysis design. The design of data collection could be further categorized into three facets: experimental or non-experimental, sampling or census, and time features of the variables to be studied. The ultimate aims of research reporting are to present findings succinctly and timely. Concise, explicit, and complete reporting are the guiding principles in clinical studies reporting.

Introduction and background

Medical and clinical research can be classified in many different ways. Probably, most people are familiar with basic (laboratory) research, clinical research, healthcare (services) research, health systems (policy) research, and educational research. Clinical research in this review refers to scientific research related to clinical practices. There are many ways a clinical research study's findings can become invalid or less impactful, including ignorance of previous similar studies, a paucity of similar studies, poor study design and implementation, low test agent efficacy, no predetermined statistical analysis, insufficient reporting, bias, and conflicts of interest [ 1 - 4 ]. Scientific, ethical, and moral decadence among researchers can be due to incognizant criteria in academic promotion and remuneration and too many forced studies by amateurs and students for the sake of research without adequate training or guidance [ 2 , 5 - 6 ]. This article will review the proper methods to conduct medical research from the planning stage to submission for publication (Table 1).

a Feasibility and efficiency are considered during the refinement of the research question and adhered to during data collection.

Epidemiologic studies in clinical and medical fields focus on the effect of a determinant on an outcome [ 7 ]. Measurement errors that happen systematically give rise to biases leading to invalid study results, whereas random measurement errors cause imprecise reporting of effects. Precision can usually be increased with an increased sample size provided biases are avoided or trivialized; otherwise, the increased precision will aggravate the biases. Because epidemiologic clinical research centers on measurement, measurement errors are addressed throughout the research process. Obtaining the most accurate estimate of a treatment effect constitutes the whole business of epidemiologic research in clinical practice. This is greatly facilitated by clinical expertise and current scientific knowledge of the research topic. Current scientific knowledge is acquired through literature reviews or in collaboration with an expert clinician. Collaboration and consultation with an expert clinician should also include input from the target population to confirm the relevance of the research question. The novelty of a research topic is less important than the clinical applicability of the topic. Researchers need to acquire appropriate writing and reporting skills from the beginning of their careers, and these skills should improve with persistent use and regular reviewing of published journal articles. A published clinical research study stands on solid scientific ground to inform clinical practice given that the article has passed through proper peer review, revision, and content improvement.

Systematic literature reviews

Systematic literature reviews of published papers will inform authors of the existing clinical evidence on a research topic. This is an important step to reduce wasted efforts and to evaluate the planned study [ 8 ]. Conducting a systematic literature review is a well-known important step before embarking on a new study [ 9 ]. A rigorously performed and cautiously interpreted systematic review that includes in-process trials can inform researchers of several factors [ 10 ]. Reviewing the literature will inform the choice of recruitment methods, outcome measures, questionnaires, intervention details, and statistical strategies – useful information to increase the study's relevance, value, and power. A good review of previous studies will also provide evidence of the effects of an intervention that may or may not be worthwhile; this would suggest either that no further studies are warranted or that further study of the intervention is needed. A review can also inform whether a larger and better study is preferable to an additional small study. Reviews of previously published work may yield few studies or low-quality evidence from small or poorly designed studies on a certain intervention or observation; this may encourage or discourage further research or prompt consideration of a first clinical trial.

Conceptual framework

The result of a literature review should include identifying a working conceptual framework to clarify the nature of the research problem, questions, and designs, and even guide the latter discussion of the findings and development of possible solutions. Conceptual frameworks represent ways of thinking about a problem or how complex things work the way they do [ 11 ]. Different frameworks will emphasize different variables and outcomes, and their inter-relatedness. Each framework highlights or emphasizes different aspects of a problem or research question. Often, any single conceptual framework presents only a partial view of reality [ 11 ]. Furthermore, each framework magnifies certain elements of the problem. Therefore, a thorough literature search is warranted for authors to avoid repeating the same research endeavors or mistakes. It may also help them find relevant conceptual frameworks including those that are outside one’s specialty or system. 

Conceptual frameworks can come from theories with well-organized principles and propositions that have been confirmed by observations or experiments. Conceptual frameworks can also come from models derived from theories, observations or sets of concepts or even evidence-based best practices derived from past studies [ 11 ].

Researchers convey their assumptions of the associations of the variables explicitly in the conceptual framework to connect the research to the literature. After selecting a single conceptual framework or a combination of a few frameworks, a clinical study can be completed in two fundamental steps: study design and study report. Three study designs should be planned in sequence and iterated until satisfaction: the theoretical design, data collection design, and statistical analysis design [ 7 ]. 

Study designs

Theoretical Design

Theoretical design is the next important step in the research process after a literature review and conceptual framework identification. While the theoretical design is a crucial step in research planning, it is often dealt with lightly because of the more alluring second step (data collection design). In the theoretical design phase, a research question is designed to address a clinical problem, which involves an informed understanding based on the literature review and effective collaboration with the right experts and clinicians. A well-developed research question will have an initial hypothesis of the possible relationship between the explanatory variable/exposure and the outcome. This will inform the nature of the study design, be it qualitative or quantitative, primary or secondary, and non-causal or causal (Figure 1).


A study is qualitative if the research question aims to explore, understand, describe, discover or generate reasons underlying certain phenomena. Qualitative studies usually focus on a process to determine how and why things happen [ 12 ]. Quantitative studies use deductive reasoning and numerical statistical quantification of the association between groups on data often gathered during experiments [ 13 ]. A primary clinical study is an original study gathering a new set of patient-level data. Secondary research draws on the existing available data and pools them into a larger database to generate a wider perspective or a more powerful conclusion. Non-causal or descriptive research aims to identify the determinants or associated factors for the outcome or health condition, without regard for causal relationships. Causal research is an exploration of the determinants of an outcome while mitigating confounding variables. Table 2 shows examples of non-causal (e.g., diagnostic and prognostic) and causal (e.g., intervention and etiologic) clinical studies. Concordance between the research question, its aim, and the choice of theoretical design will provide a strong foundation and the right direction for the research process and path.

A problem in clinical epidemiology can be phrased as a mathematical relationship, shown below, where the outcome is a function of the determinant (D) conditional on the extraneous determinants (ED), more commonly known as the confounding factors [ 7 ]:

For non-causal research: Outcome = f(D1, D2, …, Dn)
For causal research: Outcome = f(D | ED)

A fine research question is composed of at least three components: 1) an outcome or a health condition, 2) determinant/s or associated factors to the outcome, and 3) the domain. The outcome and the determinants have to be clearly conceptualized and operationalized as measurable variables (Table 3; PICOT [ 14 ] and FINER [ 15 ]). The study domain is the theoretical source population from which the study population will be sampled, similar to the wording on a drug package insert that reads, “use this medication (study results) in people with this disease” [ 7 ].

The interpretation of study results as they apply to wider populations is known as generalization, and generalization can either be statistical or made using scientific inferences [ 16 ]. Generalization supported by statistical inferences is seen in studies on disease prevalence where the sample population is representative of the source population. By contrast, generalizations made using scientific inferences are not bound by the representativeness of the sample in the study; rather, the generalization should be plausible from the underlying scientific mechanisms as long as the study design is valid and nonbiased. Scientific inferences and generalizations are usually the aims of causal studies. 

Confounding: Confounding is a situation where true effects are obscured or confused [ 7 , 16 ]. Confounding variables or confounders affect the validity of a study’s outcomes and should be prevented or mitigated in the planning stages and further managed in the analytical stages. Confounders are also known as extraneous determinants in epidemiology due to their inherent and simultaneous relationships to both the determinant and outcome (Figure 2), which are usually one-determinant-to-one outcome in causal clinical studies. The known confounders are also called observed confounders. These can be minimized using randomization, restriction, or a matching strategy. Residual confounding has occurred in a causal relationship when identified confounders were not measured accurately. Unobserved confounding occurs when the confounding effect is present as a variable or factor not observed or yet defined and, thus, not measured in the study. Age and gender are almost universal confounders followed by ethnicity and socio-economic status.


Confounders have three main characteristics. They are a potential risk factor for the disease, associated with the determinant of interest, and should not be an intermediate variable between the determinant and the outcome or a precursor to the determinant. For example, a sedentary lifestyle is a cause for acute coronary syndrome (ACS), and smoking could be a confounder but not cardiorespiratory unfitness (which is an intermediate factor between a sedentary lifestyle and ACS). For patients with ACS, not having a pair of sports shoes is not a confounder – it is a correlate for the sedentary lifestyle. Similarly, depression would be a precursor, not a confounder.
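As a concrete illustration of statistical adjustment for an observed confounder, the sketch below fits a crude and a smoking-adjusted logistic regression for the sedentary lifestyle/ACS example; the data file and column names are hypothetical, and the approach shown is one common option rather than the only correct one.

```python
# Minimal sketch: crude vs. confounder-adjusted effect estimates via logistic regression.
# Hypothetical columns: acs (0/1 outcome), sedentary (0/1 determinant), smoker (0/1 confounder).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("acs_cohort.csv")  # hypothetical data file

crude = smf.logit("acs ~ sedentary", data=df).fit(disp=0)
adjusted = smf.logit("acs ~ sedentary + smoker", data=df).fit(disp=0)

print("Crude OR for sedentary lifestyle:  ", np.exp(crude.params["sedentary"]))
print("Smoking-adjusted OR for sedentary: ", np.exp(adjusted.params["sedentary"]))
```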

Sample size consideration: Sample size calculation provides the required number of participants to be recruited in a new study to detect true differences in the target population if they exist. Sample size calculation is based on three facets: an estimated difference in group sizes, the probability of α (Type I) and β (Type II) errors chosen based on the nature of the treatment or intervention, and the estimated variability (interval data) or proportion of the outcome (nominal data) [ 17 - 18 ]. The clinically important effect sizes are determined based on expert consensus or patients’ perception of benefit. Value and economic consideration have increasingly been included in sample size estimations. Sample size and the degree to which the sample represents the target population affect the accuracy and generalization of a study’s reported effects. 
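A minimal sample size calculation along these lines, assuming a standardized effect size of 0.5, a two-sided α of 0.05, and 80% power (all illustrative values, not recommendations), could look as follows.

```python
# Minimal sketch: required sample size per group for a two-arm comparison of means.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # assumed standardized difference between groups (Cohen's d)
    alpha=0.05,       # probability of a Type I error
    power=0.80,       # 1 - probability of a Type II error
    ratio=1.0,        # equal group sizes
)
print(f"Approximately {n_per_group:.0f} participants per group")  # about 64 per group
```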

Pilot study: Pilot studies assess the feasibility of the proposed research procedures on a small sample size. Pilot studies test the efficiency of participant recruitment with minimal practice or service interruptions. Pilot studies should not be conducted to obtain a projected effect size for a larger study population because, in a typical pilot study, the sample size is small, leading to a large standard error of that effect size. This leads to bias when projected for a large population. In the case of underestimation, this could lead to inappropriately terminating the full-scale study. As the small pilot study is equally prone to overestimation of the effect size, this would lead to an underpowered study and a failed full-scale study [ 19 ].

The Design of Data Collection

The “perfect” study design in the theoretical phase now faces the practical and realistic challenges of feasibility. This is the step where different methods for data collection are considered, with one selected as the most appropriate based on the theoretical design along with feasibility and efficiency. The goal of this stage is to achieve the highest possible validity with the lowest risk of biases given available resources and existing constraints. 

In causal research, data on the outcome and determinants are collected with utmost accuracy via a strict protocol to maximize validity and precision. The validity of an instrument is defined as the degree of fidelity of the instrument, measuring what it is intended to measure; that is, the results of the measurement correlate with the true state of an occurrence. Another widely used word for validity is accuracy. Internal validity refers to the degree of accuracy of a study’s results within its own study sample. Internal validity is influenced by the study design, whereas external validity refers to the applicability of a study’s results to other populations. External validity is also known as generalizability and expresses the validity of assuming similarity and comparability between the study population and other populations. Reliability of an instrument denotes the extent of agreement of the results of repeated measurements of an occurrence by that instrument at different times, by different investigators, or in different settings. Other terms used for reliability include reproducibility and precision. Preventing confounders by identifying and including them in data collection will allow statistical adjustment in the later analyses. In descriptive research, outcomes must be confirmed with a referent standard, and the determinants should be as valid as those found in real clinical practice.
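Reliability can be quantified directly once repeated measurements are available; a sketch of a test-retest intraclass correlation with the pingouin package is shown below, with hypothetical column names (subject, occasion, score).

```python
# Minimal sketch: test-retest reliability as an intraclass correlation coefficient (ICC).
import pandas as pd
import pingouin as pg

df = pd.read_csv("repeated_measurements.csv")  # hypothetical long-format data

icc = pg.intraclass_corr(
    data=df,
    targets="subject",   # the entities being measured
    raters="occasion",   # here: measurement occasion (test vs. retest)
    ratings="score",     # the measured value
)
print(icc[["Type", "ICC", "CI95%"]])
```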

Common designs for data collection include cross-sectional, case-control, cohort, and randomized controlled trials (RCTs). Many other modern epidemiology study designs are based on these classical study designs, such as nested case-control, case-crossover, case-control without control, and stepped-wedge cluster RCTs. A cross-sectional study is typically a snapshot of the study population, and an RCT is almost always a prospective study. Case-control and cohort studies can be retrospective or prospective in data collection. The nested case-control design differs from the traditional case-control design in that it is “nested” in a well-defined cohort from which information on the cohorts can be obtained. This design also satisfies the assumption that cases and controls represent random samples of the same study base. Table 4 provides examples of these data collection designs.

Additional aspects in data collection: No single design of data collection for any research question as stated in the theoretical design will be perfect in actual conduct. This is because of myriad issues facing the investigators such as the dynamic clinical practices, constraints of time and budget, the urgency for an answer to the research question, and the ethical integrity of the proposed experiment. Therefore, feasibility and efficiency without sacrificing validity and precision are important considerations in data collection design. Therefore, data collection design requires additional consideration in the following three aspects: experimental/non-experimental, sampling, and timing [ 7 ]:

Experimental or non-experimental: Non-experimental research (i.e., “observational”), in contrast to experimental, involves data collection on the study participants in their natural or real-world environments. Non-experimental research usually comprises diagnostic and prognostic studies with cross-sectional data collection. The pinnacle of non-experimental research is the comparative effectiveness study, which is grouped with other non-experimental study designs such as cross-sectional, case-control, and cohort studies [ 20 ]. It is also known as a benchmarking-controlled trial because of the element of peer comparison (using comparable groups) in interpreting the outcome effects [ 20 ]. Experimental study designs are characterized by an intervention on a selected group of the study population in a controlled environment, often in the presence of a similar group of the study population who act as a comparison group and receive no intervention (i.e., the control group). Thus, the widely known RCT is classified as an experimental design in data collection. An experimental study design without randomization is referred to as a quasi-experimental study. Experimental studies try to determine the efficacy of a new intervention on a specified population. Table 5 presents the advantages and disadvantages of experimental and non-experimental studies [ 21 ].

a May be an issue in cross-sectional studies that require a long recall to the past such as dietary patterns, antenatal events, and life experiences during childhood.

Once an intervention yields a proven effect in an experimental study, non-experimental and quasi-experimental studies can be used to determine the intervention’s effect in a wider population and within real-world settings and clinical practices. Pragmatic or comparative effectiveness are the usual designs used for data collection in these situations [ 22 ].

Sampling/census: Census is a data collection on the whole source population (i.e., the study population is the source population). This is possible when the defined population is restricted to a given geographical area. A cohort study uses the census method in data collection. An ecologic study is a cohort study that collects summary measures of the study population instead of individual patient data. However, many studies sample from the source population and infer the results of the study to the source population for feasibility and efficiency because adequate sampling provides similar results to the census of the whole population. Important aspects of sampling in research planning are sample size and representation of the population. Sample size calculation accounts for the number of participants needed to be in the study to discover the actual association between the determinant and outcome. Sample size calculation relies on the primary objective or outcome of interest and is informed by the estimated possible differences or effect size from previous similar studies. Therefore, the sample size is a scientific estimation for the design of the planned study.

A sampling of participants or cases in a study can represent the study population and the larger population of patients in that disease space, but only in prevalence, diagnostic, and prognostic studies. Etiologic and interventional studies do not share this same level of representation. A cross-sectional study design is common for determining disease prevalence in the population. Cross-sectional studies can also determine the referent ranges of variables in the population and measure change over time (e.g., repeated cross-sectional studies). Besides being cost- and time-efficient, cross-sectional studies have no loss to follow-up; recall bias; learning effect on the participant; or variability over time in equipment, measurement, and technician. A cross-sectional design for an etiologic study is possible when the determinants do not change with time (e.g., gender, ethnicity, genetic traits, and blood groups). 

In etiologic research, comparability between the exposed and the non-exposed groups is more important than sample representation. Comparability between these two groups will provide an accurate estimate of the effect of the exposure (risk factor) on the outcome (disease) and enable valid inference of the causal relation to the domain (the theoretical population). In a case-control study, a sampling of the control group should be taken from the same study population (study base), have similar profiles to the cases (matching) but do not have the outcome seen in the cases. Matching important factors minimizes the confounding of the factors and increases statistical efficiency by ensuring similar numbers of cases and controls in confounders’ strata [ 23 - 24 ]. Nonetheless, perfect matching is neither necessary nor achievable in a case-control study because a partial match could achieve most of the benefits of the perfect match regarding a more precise estimate of odds ratio than statistical control of confounding in unmatched designs [ 25 - 26 ]. Moreover, perfect or full matching can lead to an underestimation of the point estimates [ 27 - 28 ].
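For an unmatched case-control analysis, the odds ratio and its approximate 95% confidence interval can be computed directly from the 2 × 2 exposure-by-outcome table, as in the sketch below; the counts are invented purely for illustration, and matched designs would instead call for a conditional analysis.

```python
# Minimal sketch: odds ratio and Wald-type 95% CI from a 2 x 2 table (unmatched design).
import math

a, b = 40, 60  # exposed cases, exposed controls (hypothetical counts)
c, d = 20, 80  # unexposed cases, unexposed controls (hypothetical counts)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```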

Time feature: The timing of data collection for the determinant and outcome characterizes the types of studies. A cross-sectional study has the axis of time zero (T = 0) for both the determinant and the outcome, which separates it from all other types of research that have time for the outcome T > 0. Retrospective or prospective studies refer to the direction of data collection. In retrospective studies, information on the determinant and outcome has already been collected or recorded. In prospective studies, this information will be collected in the future. These terms should not be used to describe the relationship between the determinant and the outcome in etiologic studies. Time of exposure to the determinant, the time of induction, and the time at risk for the outcome are important aspects to understand. Time at risk is the period of time exposed to the determinant risk factors. Time of induction is the time from sufficient exposure to the risk or causal factors to the occurrence of a disease. The latent period is the period during which a disease is present without manifesting itself, as in “silent” diseases, for example cancers, hypertension, and type 2 diabetes mellitus, which are detected through screening practices. Figure 3 illustrates the time features of a variable. Variable timing is important for accurate data capture.


The Design of Statistical Analysis

Statistical analysis of epidemiologic data provides the estimate of effects after correcting for biases (e.g., confounding factors) and measures the variability in the data arising from random errors or chance [ 7 , 16 , 29 ]. An effect estimate gives the size of an association between the studied variables or the level of effectiveness of an intervention. This quantitative result allows for comparison and assessment of the usefulness and significance of the association or the intervention between studies. This significance must be interpreted with a statistical model and an appropriate study design. Random errors could arise in the study resulting from unexplained personal choices by the participants. Random error is, therefore, when values or units of measurement between variables change in a non-concerted or non-directional manner. Conversely, when these values or units of measurement between variables change in a concerted or directional manner, we note a significant relationship as shown by statistical significance.

Variability: Researchers almost always collect the needed data through a sampling of subjects/participants from a population instead of a census. The process of sampling or multiple sampling in different geographical regions or over different periods contributes to varied information due to the random inclusion of different participants and chance occurrence. This sampling variation becomes the focus of statistics when communicating the degree and intensity of variation in the sampled data and the level of inference in the population. Sampling variation can be influenced profoundly by the total number of participants and the width of differences of the measured variable (standard deviation). Hence, the characteristics of the participants, measurements and sample size are all important factors in planning a study.
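The dependence of sampling variation on sample size can be made tangible with a small simulation; the normal population and the sample sizes below are arbitrary choices for illustration.

```python
# Minimal sketch: the spread of sample means (sampling variation) shrinks as n grows.
import numpy as np

rng = np.random.default_rng(0)
population_mean, population_sd = 120.0, 15.0  # e.g., a blood pressure-like variable

for n in (25, 100, 400):
    sample_means = [
        rng.normal(population_mean, population_sd, size=n).mean() for _ in range(5000)
    ]
    # approaches population_sd / sqrt(n), the standard error of the mean
    print(f"n = {n:4d}: SD of sample means = {np.std(sample_means):.2f}")
```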

Statistical strategy: Statistical strategy is usually determined based on the theoretical and data collection designs. Use of a prespecified statistical strategy (including the decision to dichotomize any continuous data at certain cut-points, sub-group analyses or sensitivity analyses) is recommended in the study proposal (i.e., protocol) to prevent data dredging and data-driven reports that predispose to bias. The nature of the study hypothesis also dictates whether directional (one-tailed) or non-directional (two-tailed) significance tests are conducted. In most studies, two-sided tests are used except in specific instances when unidirectional hypotheses may be appropriate (e.g., in superiority or non-inferiority trials). While data exploration is discouraged, epidemiological research is, by nature of its objectives, statistical research. Hence, it is acceptable to report the presence of persistent associations between any variables with plausible underlying mechanisms during the exploration of the data. The statistical methods used to produce the results should be explicitly explained. Many different statistical tests are used to handle various kinds of data appropriately (e.g., interval vs discrete) and/or the various distributions of the data (e.g., normally distributed or skewed). For additional details on statistical explanations and the underlying concepts of statistical tests, readers are referred to the references cited here [ 30 - 31 ].
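The difference between a non-directional and a directional test can be seen in a few lines of code (the alternative argument requires scipy 1.6 or later); the simulated group data are illustrative only.

```python
# Minimal sketch: two-sided vs. one-sided independent-samples t-tests on the same data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
treated = rng.normal(loc=1.2, scale=2.0, size=50)   # simulated outcome, treatment arm
control = rng.normal(loc=0.5, scale=2.0, size=50)   # simulated outcome, control arm

two_sided = stats.ttest_ind(treated, control, alternative="two-sided")
one_sided = stats.ttest_ind(treated, control, alternative="greater")
print(f"two-sided p = {two_sided.pvalue:.3f}, one-sided p = {one_sided.pvalue:.3f}")
```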

Steps in statistical analyses: Statistical analysis begins with checking for data entry errors. Duplicates are eliminated, and proper units should be confirmed. Extremely low, high or suspicious values are confirmed against the source data again. If this is not possible, the value is better classified as missing. However, if the unverified suspicious data are not obviously wrong, they should be further examined as an outlier in the analysis. The data checking and cleaning enables the analyst to establish a connection with the raw data and to anticipate possible results from further analyses. This initial step involves descriptive statistics that analyze central tendency (i.e., mode, median, and mean) and dispersion (i.e., minimum, maximum, range, quartiles, absolute deviation, variance, and standard deviation) of the data. Certain graphical plots such as a scatter plot, a box-and-whiskers plot, a histogram, or a normal Q-Q plot are helpful at this stage to verify the normality of the data distribution. See Figure 4 for the statistical tests available for analyses of different types of data.
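These initial steps translate almost directly into code; the sketch below uses a hypothetical file and variable name (sbp) and combines a numerical summary with a simple normality check (graphical checks such as histograms and Q-Q plots would complement it).

```python
# Minimal sketch: data checking, descriptive statistics, and a normality check.
import pandas as pd
from scipy import stats

df = pd.read_csv("study_data.csv")      # hypothetical data file
df = df.drop_duplicates()               # remove exact duplicate records

values = df["sbp"].dropna()             # hypothetical continuous variable
print(values.describe())                # central tendency and dispersion
print("missing values:", df["sbp"].isna().sum())

# Quick normality checks; suspicious or extreme values should be traced to source data.
print("skewness      :", stats.skew(values))
print("Shapiro-Wilk p:", stats.shapiro(values).pvalue)
```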


Once data characteristics are ascertained, further statistical tests are selected. The analytical strategy sometimes involves the transformation of the data distribution for the selected tests (e.g., log, natural log, exponential, quadratic) or for checking the robustness of the association between the determinants and their outcomes. This step is also referred to as inferential statistics whereby the results are about hypothesis testing and generalization to the wider population that the study’s sampled participants represent. The last statistical step is checking whether the statistical analyses fulfill the assumptions of that particular statistical test and model to avoid violation and misleading results. These assumptions include evaluating normality, variance homogeneity, and residuals included in the final statistical model. Other statistical values such as Akaike information criterion, variance inflation factor/tolerance, and R2 are also considered when choosing the best-fitted models. Transforming raw data could be done, or a higher level of statistical analyses can be used (e.g., generalized linear models and mixed-effect modeling). Successful statistical analysis allows conclusions of the study to fit the data. 
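One of these model checks, the variance inflation factor for multicollinearity, can be computed as sketched below; the predictor names are hypothetical, and the common VIF thresholds of about 5-10 are rules of thumb rather than fixed cut-offs.

```python
# Minimal sketch: variance inflation factors (VIF) for a set of candidate predictors.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("study_data.csv")                       # hypothetical data file
X = sm.add_constant(df[["age", "bmi", "sbp"]].dropna())  # hypothetical predictors

vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif.drop("const"))  # very large VIFs flag predictors carrying redundant information
```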

Bayesian and Frequentist statistical frameworks: Most current clinical research reporting is based on the frequentist approach and hypothesis testing with p values and confidence intervals. The frequentist approach assumes the acquired data are random, attained by random sampling, through randomized experiments or influences, and with random errors. The distribution of the data (its point estimate and confidence interval) infers a true parameter in the real population. The major conceptual difference between Bayesian statistics and frequentist statistics is that in Bayesian statistics, the parameter (i.e., the studied variable in the population) is random and the data acquired are real (true or fixed). Therefore, the Bayesian approach provides a probability interval for the parameter. The studied parameter is random because it could vary and be affected by prior beliefs, experience or evidence of plausibility. In the Bayesian statistical approach, this prior belief or available knowledge is quantified into a probability distribution and incorporated with the acquired data to get the results (i.e., the posterior distribution). This uses the mathematical theory of Bayes’ theorem to “turn around” conditional probabilities.
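A minimal Beta-Binomial example can make this contrast concrete: the prior Beta(2, 8) below encodes an assumed prior belief of roughly a 20% response rate, and all numbers are invented for illustration.

```python
# Minimal sketch: Bayesian updating of a proportion (Beta-Binomial model)
# next to a frequentist normal-approximation confidence interval.
from scipy import stats

successes, n = 12, 40        # hypothetical observed data
prior_a, prior_b = 2, 8      # prior belief expressed as a Beta distribution

# Posterior: Beta(prior_a + successes, prior_b + failures).
posterior = stats.beta(prior_a + successes, prior_b + (n - successes))
lo, hi = posterior.ppf([0.025, 0.975])
print(f"Posterior mean {posterior.mean():.2f}, 95% credible interval {lo:.2f} to {hi:.2f}")

# Frequentist counterpart for comparison.
p_hat = successes / n
se = (p_hat * (1 - p_hat) / n) ** 0.5
print(f"Sample proportion {p_hat:.2f}, approx. 95% CI {p_hat - 1.96*se:.2f} to {p_hat + 1.96*se:.2f}")
```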

The goal of research reporting is to present findings succinctly and in a timely manner via conference proceedings or journal publication. Concise and explicit language, with all the details necessary to enable replication and judgment of the study's applicability, is the guiding principle in reporting clinical studies.

Writing for Reporting

Medical writing is very much a technical chore that accommodates little artistic expression. Research reporting in medicine and the health sciences emphasizes clear and standardized reporting, eschewing the adjectives and adverbs used extensively in popular literature. Regularly reviewing published journal articles can familiarize authors with proper reporting styles and help enhance writing skills. Authors should familiarize themselves with standard, concise, and appropriate rhetoric for the intended audience, which includes consideration for journal reviewers, editors, and referees. However, judgments of proper language can be somewhat subjective. While each publication may have varying requirements for submission, the technical requirements for formatting an article are usually available in the author or submission guidelines provided by the target journal.

Research reports for publication often contain a title, abstract, introduction, methods, results, discussion, and conclusions section, and authors may want to write each section in sequence. However, best practice is to write the abstract and title last. Authors may find that while writing one section of the report, ideas come to mind that pertain to other sections, so careful note-taking is encouraged. One effective approach is to organize and write the results section first, followed by the discussion and conclusions sections. Once these are drafted, write the introduction, the abstract, and the title of the report. Regardless of the sequence of writing, the author should begin with a clear and relevant research question to guide the statistical analyses, result interpretation, and discussion. The study findings can be a motivator to propel the author through the writing process, and the conclusions can help the author draft a focused introduction.

Writing for Publication

Specific recommendations on effective medical writing and table generation are available [ 32 ]. One such resource is Effective Medical Writing: The Write Way to Get Published, which is an updated collection of medical writing articles previously published in the Singapore Medical Journal [ 33 ]. The British Medical Journal's Statistics Notes series also elucidates common and important statistical concepts and usages in clinical studies. Writing guides are also available from individual professional societies, journals, or publishers, such as Chest (American College of Physicians) medical writing tips, the PLoS Reporting guidelines collection, Springer's Journal Author Academy, and SAGE's Research Methods [ 34 - 37 ]. Standardized research reporting guidelines often come in the form of checklists and flow diagrams; Table 6 presents a list of reporting guidelines. A full compilation of these guidelines is available at the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network website [ 38 ], which aims to improve the reliability and value of medical literature by promoting transparent and accurate reporting of research studies. Publication of the trial protocol in a publicly available database is almost compulsory for publication of the full report in many journals.

Graphics and Tables

Graphics and tables should emphasize salient features of the underlying data and should coherently summarize large quantities of information. Although graphics provide a break from dense prose, authors must not forget that these illustrations should be scientifically informative, not decorative. Titles for graphics and tables should be clear and informative and should report the sample size; font weight and formatting should be used minimally, only to distinguish headings and data entries or to highlight certain results. Provide a consistent number of decimal places for numerical results, with no more than four for P values. Most journals prefer cell-delineated tables created using the table function in word-processing or spreadsheet programs. Some journals require specific table formatting, such as the absence or presence of intermediate horizontal lines between cells.
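A small sketch (Python/pandas, hypothetical numbers) of keeping a consistent number of decimal places and capping reported P values at four decimal places:

```python
# Sketch of consistent numeric formatting for a results table; values are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "Variable": ["Age", "BMI", "Dose"],
    "Estimate": [1.2345678, -0.876543, 0.0321],
    "P value": [0.0000213, 0.04567, 0.5123456],
})

results["Estimate"] = results["Estimate"].map(lambda v: f"{v:.2f}")
results["P value"] = results["P value"].map(
    lambda p: "<0.0001" if p < 0.0001 else f"{p:.4f}"
)
print(results.to_string(index=False))
```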

Decisions of authorship are both sensitive and important and should be made at an early stage by the study’s stakeholders. Guidelines and journals’ instructions to authors abound with authorship qualifications. The guideline on authorship by the International Committee of Medical Journal Editors is widely known and provides a standard used by many medical and clinical journals [ 39 ]. Generally, authors are those who have made major contributions to the design, conduct, and analysis of the study, and who provided critical readings of the manuscript (if not involved directly in manuscript writing). 

Picking a target journal for submission

Once a report has been written and revised, the authors should select a relevant target journal for submission. Authors should avoid predatory journals—publications that do not aim to advance science and disseminate quality research but instead focus on commercial gain in medical and clinical publishing. Two good resources for authors during journal selection are Think-Check-Submit and the defunct Beall's List of Predatory Publishers and Journals (now archived and maintained by an anonymous third party) [ 40 , 41 ]. Alternatively, reputable journal indexes such as Thomson Reuters Journal Citation Reports, SCOPUS, MedLine, PubMed, EMBASE, and EBSCO Publishing's Electronic Databases are good places to start the search for an appropriate target journal. Authors should review the journals' names, aims/scope, and recently published articles to determine the kind of research each journal accepts for publication. Open-access journals almost always charge article publication fees, while subscription-based journals tend to publish without author fees and instead rely on subscription or access fees for the full text of published articles.

Conclusions

Conducting valid clinical research requires consideration of theoretical study design, data collection design, and statistical analysis design. Proper study design implementation and quality control during data collection ensure high-quality data analysis and can mitigate bias and confounders during statistical analysis and data interpretation. Clear, effective study reporting facilitates dissemination, appreciation, and adoption, and allows researchers to effect real-world change in clinical practices and care models. Neutral findings or the absence of findings in a clinical study are as important as positive or negative findings. Valid studies, even when they report an absence of expected results, still inform scientific communities of the nature of a certain treatment or intervention, and this contributes to future research, systematic reviews, and meta-analyses. Reporting a study adequately and comprehensively is important for accuracy, transparency, and reproducibility of the scientific work, as well as for informing readers.

Acknowledgments

The author would like to thank Universiti Putra Malaysia and the Ministry of Higher Education, Malaysia for their support in sponsoring the Ph.D. study and living allowances for Boon-How Chew.


The materials presented in this paper are being organized by the author into a book.

Chapter 10 Experimental Research

Experimental research, often considered to be the "gold standard" in research designs, is one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity (causality), due to its ability to link cause and effect through treatment manipulation while controlling for the spurious effects of extraneous variables.

Experimental research is best suited for explanatory research (rather than for descriptive or exploratory research), where the goal of the study is to examine cause-effect relationships. It also works well for research that involves a relatively limited and well-defined set of independent variables that can either be manipulated or controlled. Experimental research can be conducted in laboratory or field settings. Laboratory experiments, conducted in laboratory (artificial) settings, tend to be high in internal validity, but this comes at the cost of low external validity (generalizability), because the artificial (laboratory) setting in which the study is conducted may not reflect the real world. Field experiments, conducted in field settings such as a real organization, are high in both internal and external validity. But such experiments are relatively rare, because of the difficulties associated with manipulating treatments and controlling for extraneous effects in a field setting.

Experimental research can be grouped into two broad categories: true experimental designs and quasi-experimental designs. Both designs require treatment manipulation, but while true experiments also require random assignment, quasi-experiments do not. Sometimes, we also refer to non-experimental research, which is not really a research design, but an all-inclusive term that includes all types of research that do not employ treatment manipulation or random assignment, such as survey research, observational research, and correlational studies.

Basic Concepts

Treatment and control groups. In experimental research, some subjects are administered one or more experimental stimuli called a treatment (the treatment group), while other subjects are not given such a stimulus (the control group). The treatment may be considered successful if subjects in the treatment group rate more favorably on outcome variables than control group subjects. Multiple levels of experimental stimulus may be administered, in which case there may be more than one treatment group. For example, to test the effects of a new drug intended to treat a medical condition like dementia, a sample of dementia patients may be randomly divided into three groups, with the first group receiving a high dosage of the drug, the second group receiving a low dosage, and the third group receiving a placebo such as a sugar pill (the control group); the first two groups are then experimental groups and the third group is a control group. After administering the drug for a period of time, if the condition of the experimental group subjects improved significantly more than that of the control group subjects, we can say that the drug is effective. We can also compare the conditions of the high- and low-dosage experimental groups to determine whether the high dose is more effective than the low dose.

Treatment manipulation. Treatments are the unique feature of experimental research that sets this design apart from all other research methods. Treatment manipulation helps control for the "cause" in cause-effect relationships. Naturally, the validity of experimental research depends on how well the treatment was manipulated. Treatment manipulation must be checked using pretests and pilot tests prior to the experimental study. Any measurements conducted before the treatment is administered are called pretest measures, while those conducted after the treatment are posttest measures.

Random selection and assignment. Random selection is the process of randomly drawing a sample from a population or a sampling frame. This approach is typically employed in survey research and ensures that each unit in the population has a positive chance of being selected into the sample. Random assignment, however, is the process of randomly assigning subjects to experimental or control groups. This is a standard practice in true experimental research that ensures treatment groups are similar (equivalent) to each other and to the control group prior to treatment administration. Random selection is related to sampling and is therefore most closely related to the external validity (generalizability) of findings, whereas random assignment is related to design and is therefore most closely related to internal validity. It is possible to have both random selection and random assignment in well-designed experimental research, but quasi-experimental research involves neither random selection nor random assignment.
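A minimal sketch of this distinction (Python standard library; the population and sample sizes are hypothetical): random selection draws the sample from a sampling frame, and random assignment then allocates the sampled subjects to groups.

```python
# Sketch contrasting random selection with random assignment; sizes are hypothetical.
import random

# Random selection: draw a sample from a sampling frame (relates to external validity)
sampling_frame = [f"P{i:04d}" for i in range(1, 1001)]   # 1,000 hypothetical people
sample = random.sample(sampling_frame, k=40)

# Random assignment: allocate the sampled subjects to groups (relates to internal validity)
random.shuffle(sample)
treatment_group, control_group = sample[:20], sample[20:]
print(len(treatment_group), len(control_group))
```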

Threats to internal validity. Although experimental designs are considered more rigorous than other research methods in terms of the internal validity of their inferences (by virtue of their ability to control causes through treatment manipulation), they are not immune to internal validity threats. Some of these threats to internal validity are described below, within the context of a study of the impact of a special remedial math tutoring program for improving the math abilities of high school students.

  • History threat is the possibility that the observed effects (dependent variables) are caused by extraneous or historical events rather than by the experimental treatment. For instance, students’ post-remedial math score improvement may have been caused by their preparation for a math exam at their school, rather than the remedial math program.
  • Maturation threat refers to the possibility that observed effects are caused by natural maturation of subjects (e.g., a general improvement in their intellectual ability to understand complex concepts) rather than the experimental treatment.
  • Testing threat is a threat in pre-post designs where subjects’ posttest responses are conditioned by their pretest responses. For instance, if students remember their answers from the pretest evaluation, they may tend to repeat them in the posttest exam. Not conducting a pretest can help avoid this threat.
  • Instrumentation threat , which also occurs in pre-post designs, refers to the possibility that the difference between pretest and posttest scores is not due to the remedial math program, but due to changes in the administered test, such as the posttest having a higher or lower degree of difficulty than the pretest.
  • Mortality threat refers to the possibility that subjects may be dropping out of the study at differential rates between the treatment and control groups due to a systematic reason, such that the dropouts were mostly students who scored low on the pretest. If the low-performing students drop out, the results of the posttest will be artificially inflated by the preponderance of high-performing students.
  • Regression threat, also called regression to the mean, refers to the statistical tendency of a group’s overall performance on a measure during a posttest to regress toward the mean of that measure rather than in the anticipated direction. For instance, if subjects scored high on a pretest, they will tend to score lower on the posttest (closer to the mean) because their high scores (away from the mean) during the pretest were possibly a statistical aberration. This problem tends to be more prevalent in non-random samples and when the two measures are imperfectly correlated.

Two-Group Experimental Designs

The simplest true experimental designs are two-group designs involving one treatment group and one control group, and they are ideally suited for testing the effects of a single independent variable that can be manipulated as a treatment. The two basic two-group designs are the pretest-posttest control group design and the posttest-only control group design, while variations may include covariance designs. These designs are often depicted using a standardized design notation, where R represents random assignment of subjects to groups, X represents the treatment administered to the treatment group, and O represents pretest or posttest observations of the dependent variable (with different subscripts to distinguish between pretest and posttest observations of the treatment and control groups).

Pretest-posttest control group design. In this design, subjects are randomly assigned to treatment and control groups and subjected to an initial (pretest) measurement of the dependent variables of interest; the treatment group is then administered a treatment (representing the independent variable of interest), and the dependent variables are measured again (posttest). The notation of this design is shown in Figure 10.1.

Figure 10.1. Pretest-posttest control group design

The effect E of the experimental treatment in the pretest-posttest design is measured as the difference in the posttest and pretest scores between the treatment and control groups:

E = (O₂ – O₁) – (O₄ – O₃)

Statistical analysis of this design involves a simple analysis of variance (ANOVA) between the treatment and control groups. The pretest-posttest design handles several threats to internal validity, such as maturation, testing, and regression, since these threats can be expected to influence both treatment and control groups in a similar (random) manner. The selection threat is controlled via random assignment. However, additional threats to internal validity may exist. For instance, mortality can be a problem if there are differential dropout rates between the two groups, and the pretest measurement may bias the posttest measurement (especially if the pretest introduces unusual topics or content).
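As a sketch of this analysis (Python with NumPy/SciPy, simulated data), the treatment effect E can be computed from the group means, and the pretest-to-posttest gains can then be compared across groups; with two groups, a one-way ANOVA on the gains is equivalent to a t test.

```python
# Sketch of the pretest-posttest effect E = (O2 - O1) - (O4 - O3); data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 50
pre_t = rng.normal(50, 10, n); post_t = pre_t + 5 + rng.normal(0, 5, n)   # treatment group
pre_c = rng.normal(50, 10, n); post_c = pre_c + rng.normal(0, 5, n)       # control group

E = (post_t.mean() - pre_t.mean()) - (post_c.mean() - pre_c.mean())
print("estimated treatment effect E:", round(E, 2))

# Between-group test on the gain scores (one-way ANOVA with two groups)
f, p = stats.f_oneway(post_t - pre_t, post_c - pre_c)
print(f"F={f:.2f}, p={p:.4f}")
```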

Posttest-only control group design. This design is a simpler version of the pretest-posttest design in which pretest measurements are omitted. The design notation is shown in Figure 10.2.

Figure 10.2. Posttest only control group design.

The treatment effect is measured simply as the difference in the posttest scores between the two groups:

E = (O₁ – O₂)

The appropriate statistical analysis of this design is also a two-group analysis of variance (ANOVA). The simplicity of this design makes it more attractive than the pretest-posttest design in terms of internal validity. This design controls for maturation, testing, regression, selection, and pretest-posttest interaction, though the mortality threat may continue to exist.
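A corresponding sketch for the posttest-only comparison (SciPy, simulated data):

```python
# Sketch of the posttest-only comparison E = (O1 - O2); data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
post_treatment = rng.normal(55, 10, 60)
post_control = rng.normal(50, 10, 60)

print("E:", round(post_treatment.mean() - post_control.mean(), 2))
print(stats.f_oneway(post_treatment, post_control))   # with two groups, F = t^2
```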

Covariance designs. Sometimes, measures of dependent variables may be influenced by extraneous variables called covariates. Covariates are variables that are not of central interest to an experimental study but should nevertheless be controlled in an experimental design in order to eliminate their potential effect on the dependent variable and thereby allow more accurate detection of the effects of the independent variables of interest. The experimental designs discussed earlier did not control for such covariates. A covariance design (also called a concomitant variable design) is a special type of pretest-posttest control group design in which the pretest measure is essentially a measurement of the covariates of interest rather than of the dependent variables. The design notation is shown in Figure 10.3, where C represents the covariates:

Figure 10.3. Covariance design

Because the pretest measure is not a measurement of the dependent variable but rather a covariate, the treatment effect is measured as the difference in the posttest scores between the treatment and control groups.
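The usual analysis for such designs adjusts the posttest comparison for the covariate, as in an analysis of covariance (ANCOVA). A minimal sketch (statsmodels formula API, simulated data; the variable names are illustrative):

```python
# Sketch of an ANCOVA-style analysis of a covariance design; data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 100
covariate = rng.normal(0, 1, n)                     # e.g., a baseline characteristic
treatment = rng.integers(0, 2, n)                   # random assignment (0 = control, 1 = treatment)
posttest = 2 * covariate + 4 * treatment + rng.normal(0, 2, n)

df = pd.DataFrame({"posttest": posttest, "treatment": treatment, "covariate": covariate})
ancova = smf.ols("posttest ~ treatment + covariate", data=df).fit()
print("adjusted treatment effect:", ancova.params["treatment"],
      "p =", ancova.pvalues["treatment"])
```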

Factorial Designs

Two-group designs are limited to a single independent variable. When an experiment involves two or more independent variables (called factors), each manipulated at two or more levels, a factorial design is used; for example, a 2 x 2 design might cross instructional type with instructional time (1.5 versus 3 hours/week) to examine their effects on learning outcomes. The notation of a 2 x 2 factorial design is shown in Figure 10.4.

Figure 10.4. 2 x 2 factorial design

Factorial designs can also be depicted using a design notation, such as that shown in the right panel of Figure 10.4. R represents random assignment of subjects to treatment groups, X represents the treatment groups themselves (the subscripts of X represent the level of each factor), and O represents observations of the dependent variable. Notice that the 2 x 2 factorial design will have four treatment groups, corresponding to the four combinations of the two levels of each factor. Correspondingly, a 2 x 3 design will have six treatment groups, and a 2 x 2 x 2 design will have eight treatment groups. As a rule of thumb, each cell in a factorial design should have a minimum sample size of 20 (this estimate is derived from Cohen’s power calculations based on medium effect sizes). So a 2 x 2 x 2 factorial design requires a minimum total sample size of 160 subjects, with at least 20 subjects in each cell. As you can see, the cost of data collection can increase substantially with more levels or factors in your factorial design. Sometimes, due to resource constraints, some cells in such factorial designs may not receive any treatment at all; these are called incomplete factorial designs. Such incomplete designs hurt our ability to draw inferences about the incomplete factors.

In a factorial design, a main effect is said to exist if the dependent variable shows a significant difference between multiple levels of one factor at all levels of the other factors. No change in the dependent variable across factor levels is the null case (baseline) from which main effects are evaluated. In the above example, you may see a main effect of instructional type, instructional time, or both on learning outcomes. An interaction effect exists when the effect of differences in one factor depends upon the level of a second factor. In our example, if the effect of instructional type on learning outcomes is greater for 3 hours/week of instructional time than for 1.5 hours/week, then there is an interaction effect between instructional type and instructional time on learning outcomes. Note that significant interaction effects dominate and render main effects irrelevant; it is not meaningful to interpret main effects if interaction effects are significant.
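A sketch of such a factorial analysis (statsmodels, simulated data; the factor-level labels are illustrative) fits a model with both main effects and the interaction term and summarizes them in an ANOVA table:

```python
# Sketch of a 2 x 2 factorial analysis with main effects and interaction; data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
rows = []
for itype in ["online", "classroom"]:          # illustrative levels of instructional type
    for itime in ["1.5h", "3h"]:               # levels of instructional time
        effect = (2 if itype == "classroom" else 0) + (3 if itime == "3h" else 0)
        for score in rng.normal(70 + effect, 5, 20):   # 20 subjects per cell
            rows.append({"itype": itype, "itime": itime, "score": score})
df = pd.DataFrame(rows)

model = smf.ols("score ~ C(itype) * C(itime)", data=df).fit()
print(anova_lm(model, typ=2))    # rows for each main effect and the interaction term
```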

Hybrid Experimental Designs

Hybrid designs are those that are formed by combining features of more established designs. Three such hybrid designs are the randomized block design, the Solomon four-group design, and the switched replications design.

Randomized block design. This is a variation of the posttest-only or pretest-posttest control group design in which the subject population can be grouped into relatively homogeneous subgroups (called blocks) within which the experiment is replicated. For instance, if you want to replicate the same posttest-only design among university students and full-time working professionals (two homogeneous blocks), subjects in both blocks are randomly split between the treatment group (receiving the same treatment) and the control group (see Figure 10.5). The purpose of this design is to reduce the “noise” or variance in the data that may be attributable to differences between the blocks, so that the actual effect of interest can be detected more accurately.

Figure 10.5. Randomized blocks design.
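A minimal sketch of the blocked randomization described above (Python standard library, hypothetical IDs): assignment is randomized separately within each homogeneous block.

```python
# Sketch of within-block randomization; block names and IDs are hypothetical.
import random

blocks = {
    "students": [f"STU{i:02d}" for i in range(1, 21)],
    "professionals": [f"PRO{i:02d}" for i in range(1, 21)],
}

assignment = {}
for block, members in blocks.items():
    random.shuffle(members)
    half = len(members) // 2
    assignment[block] = {"treatment": members[:half], "control": members[half:]}

print({b: {g: len(ids) for g, ids in groups.items()} for b, groups in assignment.items()})
```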

Solomon four-group design . In this design, the sample is divided into two treatment groups and two control groups. One treatment group and one control group receive the pretest, and the other two groups do not. This design represents a combination of posttest-only and pretest-posttest control group design, and is intended to test for the potential biasing effect of pretest measurement on posttest measures that tends to occur in pretest-posttest designs but not in posttest only designs. The design notation is shown in Figure 10.6.

Figure 10.6. Solomon four-group design

Switched replication design . This is a two-group design implemented in two phases with three waves of measurement. The treatment group in the first phase serves as the control group in the second phase, and the control group in the first phase becomes the treatment group in the second phase, as illustrated in Figure 10.7. In other words, the original design is repeated or replicated temporally with treatment/control roles switched between the two groups. By the end of the study, all participants will have received the treatment either during the first or the second phase. This design is most feasible in organizational contexts where organizational programs (e.g., employee training) are implemented in a phased manner or are repeated at regular intervals.

Figure 10.7. Switched replication design.

Quasi-Experimental Designs

Quasi-experimental designs are almost identical to true experimental designs but lack one key ingredient: random assignment. For instance, one entire class section or one organization is used as the treatment group, while another section of the same class or a different organization in the same industry is used as the control group. This lack of random assignment potentially results in groups that are non-equivalent, such as one group possessing greater mastery of certain content than the other group, say by virtue of having had a better teacher in a previous semester, which introduces the possibility of selection bias. Quasi-experimental designs are therefore inferior to true experimental designs in internal validity due to the presence of a variety of selection-related threats, such as the selection-maturation threat (the treatment and control groups maturing at different rates), the selection-history threat (the treatment and control groups being differentially impacted by extraneous or historical events), the selection-regression threat (the treatment and control groups regressing toward the mean between pretest and posttest at different rates), the selection-instrumentation threat (the treatment and control groups responding differently to the measurement), selection-testing (the treatment and control groups responding differently to the pretest), and selection-mortality (the treatment and control groups demonstrating differential dropout rates). Given these selection threats, it is generally preferable to avoid quasi-experimental designs to the greatest extent possible.

Many true experimental designs can be converted to quasi-experimental designs by omitting random assignment. For instance, the quasi-experimental version of the pretest-posttest control group design is called the nonequivalent groups design (NEGD), as shown in Figure 10.8, with random assignment R replaced by non-equivalent (non-random) assignment N. Likewise, the quasi-experimental version of the switched replication design is called the non-equivalent switched replication design (see Figure 10.9).

Figure 10.8. NEGD design.

Figure 10.9. Non-equivalent switched replication design.

In addition, there are quite a few unique non-equivalent designs without corresponding true experimental design cousins. Some of the more useful of these designs are discussed next.

Regression-discontinuity (RD) design. This is a non-equivalent pretest-posttest design in which subjects are assigned to the treatment or control group based on a cutoff score on a preprogram measure. For instance, patients who are severely ill may be assigned to a treatment group to test the efficacy of a new drug or treatment protocol, while those who are mildly ill are assigned to the control group. In another example, students who are lagging behind on standardized test scores may be selected for a remedial curriculum program intended to improve their performance, while those who score high on such tests are not selected for the remedial program. The design notation can be represented as follows, where C represents the cutoff score:

Figure 10.10. RD design.

Because of the use of a cutoff score, it is possible that the observed results may be a function of the cutoff score rather than the treatment, which introduces a new threat to internal validity. However, using the cutoff score also ensures that limited or costly resources are distributed to the people who need them the most rather than randomly across a population, while still permitting a quasi-experimental evaluation of the treatment. The control group scores in the RD design do not serve as a benchmark for comparing treatment group scores, given the systematic non-equivalence between the two groups. Rather, if there is no discontinuity between pretest and posttest scores in the control group, but such a discontinuity persists in the treatment group, then this discontinuity is viewed as evidence of the treatment effect.
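A sketch of an RD analysis (statsmodels, simulated data): subjects below a cutoff on the preprogram measure receive the treatment, and the treatment effect is estimated as the jump in the regression line at the cutoff, allowing the slope to differ on either side.

```python
# Sketch of a regression-discontinuity analysis; data and the cutoff are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 300
pre = rng.uniform(0, 100, n)                  # preprogram measure
cutoff = 50.0
treated = (pre < cutoff).astype(int)          # e.g., low scorers receive the remedial program
post = 0.8 * pre + 10 * treated + rng.normal(0, 5, n)

df = pd.DataFrame({"post": post, "centered": pre - cutoff, "treated": treated})
rd = smf.ols("post ~ treated + centered + treated:centered", data=df).fit()
print("estimated jump at the cutoff:", rd.params["treated"])
```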

Proxy pretest design . This design, shown in Figure 10.11, looks very similar to the standard NEGD (pretest-posttest) design, with one critical difference: the pretest score is collected after the treatment is administered. A typical application of this design is when a researcher is brought in to test the efficacy of a program (e.g., an educational program) after the program has already started and pretest data is not available. Under such circumstances, the best option for the researcher is often to use a different prerecorded measure, such as students’ grade point average before the start of the program, as a proxy for pretest data. A variation of the proxy pretest design is to use subjects’ posttest recollection of pretest data, which may be subject to recall bias, but nevertheless may provide a measure of perceived gain or change in the dependent variable.

Figure 10.11. Proxy pretest design.

Separate pretest-posttest samples design. This design is useful if it is not possible to collect pretest and posttest data from the same subjects for some reason. As shown in Figure 10.12, there are four groups in this design, but two groups come from a single non-equivalent group, while the other two groups come from a different non-equivalent group. For instance, suppose you want to test customer satisfaction with a new online service that is implemented in one city but not in another. In this case, customers in the first city serve as the treatment group and those in the second city constitute the control group. If it is not possible to obtain pretest and posttest measures from the same customers, you can measure customer satisfaction at one point in time, implement the new service program, and measure customer satisfaction (with a different set of customers) after the program is implemented. Customer satisfaction is also measured in the control group at the same times as in the treatment group, but without the new program implementation. The design is not particularly strong, because you cannot examine the changes in any specific customer’s satisfaction score before and after the implementation; you can only examine average customer satisfaction scores. Despite the lower internal validity, this design may still be a useful way of collecting quasi-experimental data when pretest and posttest data are not available from the same subjects.

Figure 10.12. Separate pretest-posttest samples design.

Nonequivalent dependent variable (NEDV) design. This is a single-group pre-post quasi-experimental design with two outcome measures, where one measure is theoretically expected to be influenced by the treatment and the other measure is not. For instance, if you are designing a new calculus curriculum for high school students, this curriculum is likely to influence students’ posttest calculus scores but not their algebra scores. However, the posttest algebra scores may still vary due to extraneous factors such as history or maturation. Hence, the pre-post algebra scores can be used as a control measure, while the pre-post calculus scores are treated as the treatment measure. The design notation, shown in Figure 10.13, indicates the single group by a single N, followed by pretest O₁ and posttest O₂ for calculus and algebra for the same group of students. This design is weak in internal validity, but its advantage lies in not having to use a separate control group.

An interesting variation of the NEDV design is a pattern matching NEDV design , which employs multiple outcome variables and a theory that explains how much each variable will be affected by the treatment. The researcher can then examine if the theoretical prediction is matched in actual observations. This pattern-matching technique, based on the degree of correspondence between theoretical and observed patterns is a powerful way of alleviating internal validity concerns in the original NEDV design.

Figure 10.13. NEDV design.

Perils of Experimental Research

Experimental research is one of the most difficult of research designs and should not be taken lightly. This type of research is often beset by a multitude of methodological problems. First, although experimental research requires theories for framing the hypotheses to be tested, much of current experimental research is atheoretical. Without theories, the hypotheses being tested tend to be ad hoc, possibly illogical, and meaningless. Second, many of the measurement instruments used in experimental research are not tested for reliability and validity and are incomparable across studies; consequently, results generated using such instruments are also incomparable. Third, many experimental studies use inappropriate research designs, with problems such as irrelevant dependent variables, no interaction effects, no experimental controls, and non-equivalent stimuli across treatment groups. Findings from such studies tend to lack internal validity and are highly suspect. Fourth, the treatments (tasks) used in experimental research may be diverse, incomparable, and inconsistent across studies and sometimes inappropriate for the subject population. For instance, undergraduate student subjects are often asked to pretend that they are marketing managers and to perform a complex budget allocation task in which they have no experience or expertise. The use of such inappropriate tasks introduces new threats to internal validity (i.e., subjects’ performance may be an artifact of the content or difficulty of the task setting), generates findings that are uninterpretable and meaningless, and makes integration of findings across studies impossible.

The design of proper experimental treatments is a very important task in experimental design, because the treatment is the raison d’être of the experimental method, and it must never be rushed or neglected. To design an adequate and appropriate task, researchers should use prevalidated tasks if available, conduct treatment manipulation checks to assess the adequacy of such tasks (by debriefing subjects after they perform the assigned task), conduct pilot tests (repeatedly, if necessary), and, if in doubt, use tasks that are simpler and more familiar to the respondent sample rather than tasks that are complex or unfamiliar.

In summary, this chapter introduced key concepts in the experimental design research method and introduced a variety of true experimental and quasi-experimental designs. Although these designs vary widely in internal validity, designs with less internal validity should not be overlooked and may sometimes be useful under specific circumstances and empirical contingencies.

  • Social Science Research: Principles, Methods, and Practices. Authored by: Anol Bhattacherjee. Provided by: University of South Florida. Located at: http://scholarcommons.usf.edu/oa_textbooks/3/. License: CC BY-NC-SA: Attribution-NonCommercial-ShareAlike

Recent studies on the use of magnetic fields in experimental oncology: Effects of magnetic fields on cancer biology


About this Research Topic

Cancer is a serious threat to human health. Despite decades of scientific effort and many enlightening discoveries about mechanisms and intervention pathways, clinical results are not yet completely satisfactory, and cancer remains one of the main causes of death in both developed and developing countries. At present, the primary options for advanced cancer treatment, namely chemotherapy and radiotherapy, have limitations such as severe side effects and drug resistance, and multidisciplinary efforts are needed to address these disadvantages. With this objective, a large number of studies have been published showing that magnetic fields (MFs) can inhibit tumor cell growth and proliferation; induce cell cycle arrest, apoptosis, autophagy, and differentiation; regulate the immune system; and suppress angiogenesis and metastasis via various signaling pathways. In addition, MFs are effective in combination therapies: they not only promote the absorption of chemotherapy drugs but also enhance their inhibitory effects by regulating apoptosis- and cell-cycle-related proteins. The majority of the reported results were accompanied by no observed toxicity in vitro or in vivo. The available data suggest that MFs can significantly inhibit tumor growth and that the inhibitory effect correlates positively with exposure time and intensity, with the production of reactive oxygen species (ROS) considered key to this effect.

However, this area of research needs further effort to reach consensus. It is characterized by two main weaknesses: the majority of studies are confined to in vitro work, and the use of different experimental variables prevents good correlation between the in vitro findings and the fewer available in vivo studies. From a biophysical standpoint, the intriguing aspect is that quantum physics allows the evaluation of appropriate MFs to selectively influence the activity of electrons, mainly the electron spin state. The activity of electrons must obey quantum physics, which assigns electron spin a key role in biochemistry, since all molecular processes are spin selective. The spin state plays a pivotal role in the redox reactions that are at the core of our metabolic machinery. By influencing the spin, we can selectively influence ROS chemistry, potentially producing a selective effect on cancer biology without affecting normal cell biology. This is the fascinating aspect of using MFs in experimental oncology, and it is why additional multidisciplinary research in this area is strongly justified: it has the potential to help improve individual and precision therapies.

Keywords : Cancer, Magnetic Fields, Reactive Oxygen Species, Electron Spin, Individual and Precision Therapies




Published on 30.5.2024 in Vol 26 (2024) of the Journal of Medical Internet Research

Influence of Physical Attractiveness and Gender on Patient Preferences in Digital Doctor Consultations: Experimental Study

Authors of this article:


Xia Wei¹, BA, MBA, PhD; Shubin Yu², BA, MA, MSc, PhD; Changxu (Victor) Li³, BA, MA, MSc

1 College of Management, Shenzhen University, Shenzhen, China

2 Department of Communication and Culture, BI Norwegian Business School, Oslo, Norway

3 Department of Marketing, KU Leuven, Leuven, Belgium

Corresponding Author:

  • Shubin Yu , BA, MA, MSc, PhD
  • Department of Communication and Culture
  • BI Norwegian Business School
  • Nydalsveien 37
  • Oslo , 0484
  • Phone: 47 41228055
  • Email: [email protected]

