Logo for Open Library Publishing Platform

Want to create or adapt books like this? Learn more about how Pressbooks supports open publishing practices.

As previously mentioned, one of the characteristics of a true experiment is that researchers use a random process to decide which participants are tested under which conditions. Random assignation is a powerful research technique that addresses the assumption of pre-test equivalence – that the experimental and control group are equal in all respects before the administration of the independent variable (Palys & Atchison, 2014).

Random assignation is the primary way that researchers attempt to control extraneous variables across conditions. Random assignation is associated with experimental research methods. In its strictest sense, random assignment should meet two criteria.  One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a condition independently of other participants. Thus, one way to assign participants to two conditions would be to flip a coin for each one. If the coin lands on the heads side, the participant is assigned to Condition A, and if it lands on the tails side, the participant is assigned to Condition B. For three conditions, one could use a computer to generate a random integer from 1 to 3 for each participant. If the integer is 1, the participant is assigned to Condition A; if it is 2, the participant is assigned to Condition B; and, if it is 3, the participant is assigned to Condition C. In practice, a full sequence of conditions—one for each participant expected to be in the experiment—is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as he or she is tested.

However, one problem with coin flipping and other strict procedures for random assignment is that they are likely to result in unequal sample sizes in the different conditions. Unequal sample sizes are generally not a serious problem, and you should never throw away data you have already collected to achieve equal sample sizes. However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups. It is standard practice, therefore, to use a kind of modified random assignment that keeps the number of participants in each group as similar as possible.

One approach is block randomization. In block randomization, all the conditions occur once in the sequence before any of them is repeated. Then they all occur again before any of them is repeated again. Within each of these “blocks,” the conditions occur in a random order. Again, the sequence of conditions is usually generated before any participants are tested, and each new participant is assigned to the next condition in the sequence. When the procedure is computerized, the computer program often handles the random assignment, which is obviously much easier. You can also find programs online to help you randomize your random assignation. For example, the Research Randomizer website will generate block randomization sequences for any number of participants and conditions ( Research Randomizer ).

Random assignation is not guaranteed to control all extraneous variables across conditions. It is always possible that, just by chance, the participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than the participants in another condition. However, there are some reasons that this may not be a major concern. One is that random assignment works better than one might expect, especially for large samples. Another is that the inferential statistics that researchers use to decide whether a difference between groups reflects a difference in the population take the “fallibility” of random assignment into account. Yet another reason is that even if random assignment does result in a confounding variable and therefore produces misleading results, this confound is likely to be detected when the experiment is replicated. The upshot is that random assignment to conditions—although not infallible in terms of controlling extraneous variables—is always considered a strength of a research design. Note: Do not confuse random assignation with random sampling. Random sampling is a method for selecting a sample from a population; we will talk about this in Chapter 7.

Research Methods, Data Collection and Ethics Copyright © 2020 by Valerie Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Share This Book

Logo for British Columbia/Yukon Open Authoring Platform

Want to create or adapt books like this? Learn more about how Pressbooks supports open publishing practices.

Chapter 6: Data Collection Strategies

6.1.1 Random Assignation

As previously mentioned, one of the characteristics of a true experiment is that researchers use a random process to decide which participants are tested under which conditions. Random assignation is a powerful research technique that addresses the assumption of pre-test equivalence – that the experimental and control group are equal in all respects before the administration of the independent variable (Palys & Atchison, 2014).

Random assignation is the primary way that researchers attempt to control extraneous variables across conditions. Random assignation is associated with experimental research methods. In its strictest sense, random assignment should meet two criteria.  One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a condition independently of other participants. Thus, one way to assign participants to two conditions would be to flip a coin for each one. If the coin lands on the heads side, the participant is assigned to Condition A, and if it lands on the tails side, the participant is assigned to Condition B. For three conditions, one could use a computer to generate a random integer from 1 to 3 for each participant. If the integer is 1, the participant is assigned to Condition A; if it is 2, the participant is assigned to Condition B; and, if it is 3, the participant is assigned to Condition C. In practice, a full sequence of conditions—one for each participant expected to be in the experiment—is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as he or she is tested.

However, one problem with coin flipping and other strict procedures for random assignment is that they are likely to result in unequal sample sizes in the different conditions. Unequal sample sizes are generally not a serious problem, and you should never throw away data you have already collected to achieve equal sample sizes. However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups. It is standard practice, therefore, to use a kind of modified random assignment that keeps the number of participants in each group as similar as possible.

One approach is block randomization. In block randomization, all the conditions occur once in the sequence before any of them is repeated. Then they all occur again before any of them is repeated again. Within each of these “blocks,” the conditions occur in a random order. Again, the sequence of conditions is usually generated before any participants are tested, and each new participant is assigned to the next condition in the sequence. When the procedure is computerized, the computer program often handles the random assignment, which is obviously much easier. You can also find programs online to help you randomize your random assignation. For example, the Research Randomizer website will generate block randomization sequences for any number of participants and conditions.

Random assignation is not guaranteed to control all extraneous variables across conditions. It is always possible that, just by chance, the participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than the participants in another condition. However, there are some reasons that this may not be a major concern. One is that random assignment works better than one might expect, especially for large samples. Another is that the inferential statistics that researchers use to decide whether a difference between groups reflects a difference in the population take the “fallibility” of random assignment into account. Yet another reason is that even if random assignment does result in a confounding variable and therefore produces misleading results, this confound is likely to be detected when the experiment is replicated. The upshot is that random assignment to conditions—although not infallible in terms of controlling extraneous variables—is always considered a strength of a research design. Note: Do not confuse random assignation with random sampling. Random sampling is a method for selecting a sample from a population; we will talk about this in Chapter 7.

Research Methods for the Social Sciences: An Introduction Copyright © 2020 by Valerie Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Share This Book

Sac State Library

  • My Library Account
  • Articles, Books & More
  • Course Reserves
  • Site Search
  • Advanced Search
  • Sac State Library
  • Research Guides

Research Methods Simplified

  • Quantitative Research
  • Qualitative Research
  • Comparative Method/Quasi-Experimental
  • Primary, Seconday and Tertiary Research and Resources
  • Definitions
  • Sources Consulted

Quantitative

Quantitative research - concerned with precise measurement, replicable, controlled and used to predict events. It is a formal, objective, systematic process. N umerical data are used to obtain information about the subject under study.

-uses data that are numeric

- primarily intended to test theories

-it is deductive and outcome orientated

-examples of statistical techniques used for quantitative data analysis are random sampling, regression analysis, factor analysis, correlation, cluster analysis, causal modeling and standardized tests

For comparative information on qualitative v.s. quantitative see: The University of Arkansas University Library Lib Guides

Related Information

Control group - the group of subjects or elements NOT exposed to the experimental treatment in a study where the sample is randomly selected

Experimental group - the group of subjects receiving the experimental treatment, i.e., the independent variable ( controlled measure or cause ) in an experiment.

Independent variable - the variable or measure being manipulated or controlled by the experimenter. The independent variable is assigned to participants by random assignment.

Dependent variable or dependent measure - the factor that the experimenter predicts is affected by the independent variable, i.e., the response, outcome or effect from the participants that the experimenter is measuring.

Four types of Quantitative Research

Descriptive  

1) Descriptive - provides a description and exploration of phenomena in real-life situations and characteristics. Correlational of particular individuals, situations or groups are described.  

Comparative

2) Comparative - a systematic investigation of relationships between two or more variables used to explain the nature of relationships in the world. Correlations may be positive (e.g., if one variable increases, so does the other), or negative (correlation occurs when one variable increases and the other decreases).

Quasi-experimental

3) Quasi-experimental - a study that resembles an experiment but random assignment had no role in determining which participants were placed on a specific level of treatment. Generally would have less validity than experiments.

Experimental (empirical)

4) Experimental (empirical) method- the scientific method used to test an experimental hypothesis or premise. Consists of a control group (not exposed to the experimental treatment, i.e.. is dependent) and the experimental group (is exposed to the treatment, i.e., independent)

  • << Previous: Home
  • Next: Qualitative Research >>
  • Last Updated: Jul 3, 2024 2:35 PM
  • URL: https://csus.libguides.com/res-meth

Random Sampling vs. Random Assignment

Random sampling and random assignment are fundamental concepts in the realm of research methods and statistics. However, many students struggle to differentiate between these two concepts, and very often use these terms interchangeably. Here we will explain the distinction between random sampling and random assignment.

Random sampling refers to the method you use to select individuals from the population to participate in your study. In other words, random sampling means that you are randomly selecting individuals from the population to participate in your study. This type of sampling is typically done to help ensure the representativeness of the sample (i.e., external validity). It is worth noting that a sample is only truly random if all individuals in the population have an equal probability of being selected to participate in the study. In practice, very few research studies use “true” random sampling because it is usually not feasible to ensure that all individuals in the population have an equal chance of being selected. For this reason, it is especially important to avoid using the term “random sample” if your study uses a nonprobability sampling method (such as convenience sampling).

request a consultation

Discover How We Assist to Edit Your Dissertation Chapters

Aligning theoretical framework, gathering articles, synthesizing gaps, articulating a clear methodology and data plan, and writing about the theoretical and practical implications of your research are part of our comprehensive dissertation editing services.

  • Bring dissertation editing expertise to chapters 1-5 in timely manner.
  • Track all changes, then work with you to bring about scholarly writing.
  • Ongoing support to address committee feedback, reducing revisions.

author david

Random assignment refers to the method you use to place participants into groups in an experimental study. For example, say you are conducting a study comparing the blood pressure of patients after taking aspirin or a placebo. You have two groups of patients to compare: patients who will take aspirin (the experimental group) and patients who will take the placebo (the control group). Ideally, you would want to randomly assign the participants to be in the experimental group or the control group, meaning that each participant has an equal probability of being placed in the experimental or control group. This helps ensure that there are no systematic differences between the groups before the treatment (e.g., the aspirin or placebo) is given to the participants. Random assignment is a fundamental part of a “true” experiment because it helps ensure that any differences found between the groups are attributable to the treatment, rather than a confounding variable.

So, to summarize, random sampling refers to how you select individuals from the population to participate in your study. Random assignment refers to how you place those participants into groups (such as experimental vs. control). Knowing this distinction will help you clearly and accurately describe the methods you use to collect your data and conduct your study.

Library Homepage

Research Process Guide

  • Step 1 - Identifying and Developing a Topic
  • Step 2 - Narrowing Your Topic
  • Step 3 - Developing Research Questions
  • Step 4 - Conducting a Literature Review
  • Step 5 - Choosing a Conceptual or Theoretical Framework
  • Step 6 - Determining Research Methodology
  • Step 6a - Determining Research Methodology - Quantitative Research Methods
  • Step 6b - Determining Research Methodology - Qualitative Design
  • Step 7 - Considering Ethical Issues in Research with Human Subjects - Institutional Review Board (IRB)
  • Step 8 - Collecting Data
  • Step 9 - Analyzing Data
  • Step 10 - Interpreting Results
  • Step 11 - Writing Up Results

Step 6a: Determining Research Methodology - Quantitative Research Methods

Quantitative research methods have a few designs to choose from, mostly rooted in the postpositivist worldview. The experimental design, quasi-experimental design and single subject experimental design (Bloomfield & Fisher, 2019; Creswell & Creswell, 2018). Single- subject or applied behavioral analysis consists of administering an experimental treatment to a person or small group of people over an extended period of time. Of the quasi experimental designs, subcategories; causal-comparative design and correlational design. Causal-comparative research allows for the investigator to compare two or more groups in terms of a treatment that has already happened. For correlational design, the researcher is looking to examine the relationship between variables or set of scores (Bloomfield & Fisher, 2019; Creswell & Creswell, 2018).

Generally, these kinds of designs fall into two categories, Survey Research and Experimental Research. Survey research uses a quantitative (numerical) description of trends, attitudes, opinions of a population by examining a sample of that population through questionnaires or structured interviews for data collection (Fowler, 2008; Fowler, 2014; Bloomfield & Fisher, 2019; Creswell & Creswell, 2018). These studies can be cross-sectional and longitudinal. Ultimately, the goal is to analyze the data and have the finding be generalizable to the entire population.

Experimental research uses the scientific method to determine if a specific treatment influences an outcome. This design requires random assignment of treatment conditions, and the quasi-experimental and single subject  version of this uses nonrandomized assignment of treatment (Bloomfield & Fisher, 2019). Survey Methods

Survey research methods are widely used and follow a standard format.  Examining survey research in scholarly journals would be a great way to familiarize yourself with the format and determine how to do it and, more importantly, if this method is right for your research.

How to prepare to do survey research? Creswell and Creswell (2018) as well as Fowler (2014), have provided basic framework for the rationale of survey research that you consider as you make the decision abou what kind of methods you will employ to conduct your inquiry.

  • Identify the purpose of your survey research- what variables interest you? This means start sketching out a purpose statement such as “ The primary purpose of this study is to empirically evaluate whether the number of overtime hours predicts subsequent burnout symptoms in emergency room nurses” (Creswell & Creswell, 2018, p. 149).
  • Write out why a survey method is the appropriate kind of approach for your study. It may be beneficial to discuss the advantages of survey research and the disadvantages of other methods.
  • Decide whether the survey will be cross-sectional or longitudinal. Meaning, will you gather the data at the same time  or collect it over time?
  • How will the data be collected, meaning how will the survey be filled out? Mail, phone, internet, structured interviews? Please provide the rationale for your choice.
  • Discuss your population and sampling - who is the target population? What is the size? Who are they in terms of demographic information? How do you plan to identify individuals in this population? Random sampling or systematic sampling and what is the rationale behind your choice? You should really aim for a particular fraction of the population that is typical based on past studies conducted on this topic.
  • Determine the estimated size of the correlation (r) . Using our above example, you might be looking at the relationship between hours worked and burnout symptoms. This might be difficult to determine if no other studies have been completed with these two variables involved.
  • Determine the two tailed value (a) This is called a Type I error and deals with the risk associated with a false positive. Typically, the accepted Type 1 alpha value is set at 0.5%, meaning there is a 5% probability that there is a significant (non-zero) relationship between the two variables (number of hours worked and burnout symptoms.
  • A beta value (b) is called a Type II error which refers to the risk we take saying there is no significant effect when there is a significant effect (false negative) Beta value is commonly set at .20.
  • By plugging in these numbers, r, alpha, and beta into a power analysis tool, you will be able to determine your sample size.

Survey Instrument

As you determine what instrument you will use, a survey you create or that has been used and created by someone else, you should consider the following (Fowler, 2008; Creswell & Creswell, 2018; Bloomfield & Fisher, 2019):

  • Name and give credit to the instrument and the researchers who developed it.  Or discuss your use of proprietary or free survey products online (Qualtrics, Survey Monkey).
  • Content validity (did the survey measure what it was intended to measure?)
  • Predictive validity (do scores predict a criterion measure? Do the scores correlate with other results?)
  • Construct validity (does the survey measure hypothetical concepts?)
  • What is the internal consistency of the survey? Does it perform in the same way with each variable and each item on the survey behaves in the same way? You can use the test-retest reliability, whether the instrument is stable over time.

Experimental Design

There are three components to experimental design which also follows a standard form: participants and design, procedure and measurement. There are a few considerations that Bouma et al. (2012), Bloomfield and Fisher (2019), Creswell and Creswell (2018) suggest you determine early on in your design.

  • Random Sampling - the sampling technique in which each sample has an equal probability of being chosen and is meant to be an unbiased representation of the total population.
  • Quota Sampling - is defined as a non-probability sampling method in which researchers create a sample involving individuals that represent a population.
  • Convenience Sampling - defined as a method adopted by researchers where they collect market research data from a conveniently available pool of participants.
  • Probability Sampling - refers to sampling techniques which  are aiming to identify a representative sample from which to collect data.
  • The idea of randomized assignment is a distinct feature of experimental design. When participants are randomly assigned to groups, the process is called a true experiment. If this is the case with your study, you should discuss how, when and why you are assigning participants to treatment groups. You need to describe in detail how each participant is placed to eliminate systematic bias in assigning participants. If your study design deals with more than one variable or treatment that cannot utilize random assignment (i.e. female school children benefit from a different teaching technique than male school children), this would change your design from true experimental design to a quasi-experimental design.
  • As with survey research, it would be essential to conduct a power analysis for sample size. The steps for power analysis are the same as survey design, however the focus for a power analysis of experimental design is about measuring effect size, meaning the estimated differences between groups of manipulated variables of interest. Please review steps  for power analysis in the survey research section.
  • Identify variables in the study, specifically the dependent and independent variables, as well as any other variables you intend to measure in the study. For example, you might want to think about participant demographic variables, variables that might impact your study design like time of day (i.e. energy levels might fluctuate during the day so that could impact measurement) and lastly, other variables that might impact your study’s outcomes.

Instrumentation

Just like with survey research, it is important to discuss how you are collecting your data, through what instrument or instruments are used, what scales are used, what their reliability and validity are based on past uses (Bouma et al., 2012; Creswell & Creswell, 2018; Bloomfield and Fisher, 2019). Ultimately, some quantitative experimental models may use  data sets that have already been collected like the National Center for Educational Statistics (NCES). In that case, you will be able to discuss the validity and reliability easily as it is well-established. However, if you are collecting your own data, you must discuss in detail what materials are used in the manipulation of variables. For example, you might want to pilot test the experiment so you have a detailed knowledge of the procedure (Bouma et al., 2012; Creswell & Creswell, 2018).

Also, often in experimental design, you don’t want the participants to know which variables are being manipulated or which group they are being assigned to. In order to be sure you are in line with IRB regulations (See IRB section), you want to draft a letter that will be used to explain the procedures and the study’s purpose to the participants (Creswell & Creswell, 2018). If there is any deception used in the study, be sure to check the IRB guidelines to ensure that you have all procedures and documents approved by Kean University’s IRB . Measurement and Data Analysis for Quantitative Methods

It is important to reiterate that there are several kinds of ways to collect data for a quantitative study. The data is always numerical, as opposed to qualitative data, which is largely narrative. The most common data collection methods for quantitative research are:

  • Close-ended surveys
  • Close-ended questionnaires
  • Structured interviews

The data is collected across populations, using a large sample size and then is analyzed using statistical analysis. Then, the results would be generalizable across populations. However, before you collect the data, you need to determine what exactly you are proposing to measure as you choose your variables and you. There are several kinds of statistical measurements in quantitative research. Each has its own purpose and objective. Ultimately, you need to decide if you are going to describe, explain, predict, or control your numerical data.

Quantitative data collection typically means there are a lot of data. Once the data is gathered, it may seem to be messy and disorganized at first. Your job as the researcher is to organize and then make the significance of the data clear. You do this by cleaning your data through “measurements” or scales and then running statistical analysis tests through your statistical analysis software program.

There are several purposes to statistical analysis in a quantitative study, such as (Kumar, 2015):

  • Summarize your data by identifying what is typical and what is atypical within a group.
  • Identify the rank of an individual or entity within a group
  • Demonstrate the relationship between or among variables.
  • Show similarities and differences among groups.
  • Identify any error that is inherent in a sample.
  • Test for significance.
  • Can support you in making inferences about the population being studied.

It is important to know that in order to properly analyze your numerical data, you will need access to statistical analysis software such as SPSS. The OCIS Help Desk website provides information on how to access SPSS under the Remote Learning (Students)  section.

Once you have collected your numerical data, you can run a series of statistical tests on your data, depending on your research questions.

There are four kinds of statistical measurements that you will be able to choose from in order to determine the best statistical tests to be utilized to explore your research inquiry. These measurements are also referred to as scales, and have very particular sets of statistical analysis tools that go along with each kind of scale (Bryman & Cramer, 2009).

Nominal measurements are labels (names, hence nominal) of  specific categories within mutually exclusive populations or treatment groups. These labels delineate non-numerical data such as gender, city of birth, race, ethnicity, or marital status (Bryman & Cramer, 2009; Ong & Puteh, 2017).

Ordinal measurements detail the order in which data is organized and ranked. These measures or scales deal with the greater than( >)compared to those less than (<) within a data set. Again, these are organized (named/ categorized)  and ranked (ordinal), such as class rank, ability level (beginner, intermediate, expert), or Likert scale answers (strongly agree, agree, undecided, disagree, strongly disagree) (Bryman & Cramer, 2009; Ong & Puteh, 2017).  

Interval measurements take data and order them (nominal), rank them (ordinal) and then evenly distribute them in equal intervals. There is also a zero point which is established. deal with equal units where a zero point is established. Interval measurements can be used for height, weight where there would be an absence of one of those variables (Bryman & Cramer, 2009; Ong & Puteh, 2017).

Ratio measurements allow for data to be measured by equal units (interval) and an absolute zero point is established. Here, in ratio measurements, the absolute zero value signifies the absence of the variable. For example, 0 lbs means the absence of weight. Height, weight, temperature are all examples of variables that can be measured through ratio scale (Bryman & Cramer, 2009; Ong & Puteh, 2017).

Bloomfield, J., & Fisher, M. J. (2019). Quantitative research design. Journal of the Australasian Rehabilitation Nurses Association, 22 (2), 27-30. https://doi-org.kean.idm.oclc.org/10.33235/jarna.22.2.27-30

Bouma, G. D., Ling, R., & Wilkinson, L. (2012). The research process (2nd Canadian ed.). Oxford University Press.

Bryman, A., & Cramer, D. (2009). Quantitative data analysis with SPSS 14, 15 & 16: A guide for social scientists. Routledge/Taylor & Francis Group.

Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches. Sage.

Fowler, F. J., Jr. (2008). Survey research methods (4th ed.). Sage.

Fowler, F. J., Jr. (2014). The problem with survey research. Contemporary Sociology, 43 (5): 660-662.

Kraemer, H. C., & Blasey, C. (2015). How many subjects?: Statistical power analysis in research. Sage.

Kumar, S. (2015). IRS introduction to research in special and inclusive education. [PowerPoint slides 4, 5, 37, 38, 39,43]. Informační systém Masarykovy univerzity. https://is.muni.cz/el/1441/podzim2015/SP_IRS/

Ong, M. H. A., & Puteh, F. (2017). Quantitative data analysis: Choosing between SPSS, PLS, and AMOS in social science research. International Interdisciplinary Journal of Scientific Research, 3 (1), 14-25.

Sharma, G. (2017). Pros and cons of different sampling techniques. International Journal of Applied Research, 3 (7), 749-752.

  • Last Updated: Jun 29, 2023 1:35 PM
  • URL: https://libguides.kean.edu/ResearchProcessGuide

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, automatically generate references for free.

  • Knowledge Base
  • Methodology
  • Random Assignment in Experiments | Introduction & Examples

Random Assignment in Experiments | Introduction & Examples

Published on 6 May 2022 by Pritha Bhandari . Revised on 13 February 2023.

In experimental research, random assignment is a way of placing participants from your sample into different treatment groups using randomisation.

With simple random assignment, every member of the sample has a known or equal chance of being placed in a control group or an experimental group. Studies that use simple random assignment are also called completely randomised designs .

Random assignment is a key part of experimental design . It helps you ensure that all groups are comparable at the start of a study: any differences between them are due to random factors.

Table of contents

Why does random assignment matter, random sampling vs random assignment, how do you use random assignment, when is random assignment not used, frequently asked questions about random assignment.

Random assignment is an important part of control in experimental research, because it helps strengthen the internal validity of an experiment.

In experiments, researchers manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables. To do so, they often use different levels of an independent variable for different groups of participants.

This is called a between-groups or independent measures design.

You use three groups of participants that are each given a different level of the independent variable:

  • A control group that’s given a placebo (no dosage)
  • An experimental group that’s given a low dosage
  • A second experimental group that’s given a high dosage

Random assignment to helps you make sure that the treatment groups don’t differ in systematic or biased ways at the start of the experiment.

If you don’t use random assignment, you may not be able to rule out alternative explanations for your results.

  • Participants recruited from pubs are placed in the control group
  • Participants recruited from local community centres are placed in the low-dosage experimental group
  • Participants recruited from gyms are placed in the high-dosage group

With this type of assignment, it’s hard to tell whether the participant characteristics are the same across all groups at the start of the study. Gym users may tend to engage in more healthy behaviours than people who frequent pubs or community centres, and this would introduce a healthy user bias in your study.

Although random assignment helps even out baseline differences between groups, it doesn’t always make them completely equivalent. There may still be extraneous variables that differ between groups, and there will always be some group differences that arise from chance.

Most of the time, the random variation between groups is low, and, therefore, it’s acceptable for further analysis. This is especially true when you have a large sample. In general, you should always use random assignment in experiments when it is ethically possible and makes sense for your study topic.

Prevent plagiarism, run a free check.

Random sampling and random assignment are both important concepts in research, but it’s important to understand the difference between them.

Random sampling (also called probability sampling or random selection) is a way of selecting members of a population to be included in your study. In contrast, random assignment is a way of sorting the sample participants into control and experimental groups.

While random sampling is used in many types of studies, random assignment is only used in between-subjects experimental designs.

Some studies use both random sampling and random assignment, while others use only one or the other.

Random sample vs random assignment

Random sampling enhances the external validity or generalisability of your results, because it helps to ensure that your sample is unbiased and representative of the whole population. This allows you to make stronger statistical inferences .

You use a simple random sample to collect data. Because you have access to the whole population (all employees), you can assign all 8,000 employees a number and use a random number generator to select 300 employees. These 300 employees are your full sample.

Random assignment enhances the internal validity of the study, because it ensures that there are no systematic differences between the participants in each group. This helps you conclude that the outcomes can be attributed to the independent variable .

  • A control group that receives no intervention
  • An experimental group that has a remote team-building intervention every week for a month

You use random assignment to place participants into the control or experimental group. To do so, you take your list of participants and assign each participant a number. Again, you use a random number generator to place each participant in one of the two groups.

To use simple random assignment, you start by giving every member of the sample a unique number. Then, you can use computer programs or manual methods to randomly assign each participant to a group.

  • Random number generator: Use a computer program to generate random numbers from the list for each group.
  • Lottery method: Place all numbers individually into a hat or a bucket, and draw numbers at random for each group.
  • Flip a coin: When you only have two groups, for each number on the list, flip a coin to decide if they’ll be in the control or the experimental group.
  • Use a dice: When you have three groups, for each number on the list, roll a die to decide which of the groups they will be in. For example, assume that rolling 1 or 2 lands them in a control group; 3 or 4 in an experimental group; and 5 or 6 in a second control or experimental group.

This type of random assignment is the most powerful method of placing participants in conditions, because each individual has an equal chance of being placed in any one of your treatment groups.

Random assignment in block designs

In more complicated experimental designs, random assignment is only used after participants are grouped into blocks based on some characteristic (e.g., test score or demographic variable). These groupings mean that you need a larger sample to achieve high statistical power .

For example, a randomised block design involves placing participants into blocks based on a shared characteristic (e.g., college students vs graduates), and then using random assignment within each block to assign participants to every treatment condition. This helps you assess whether the characteristic affects the outcomes of your treatment.

In an experimental matched design , you use blocking and then match up individual participants from each block based on specific characteristics. Within each matched pair or group, you randomly assign each participant to one of the conditions in the experiment and compare their outcomes.

Sometimes, it’s not relevant or ethical to use simple random assignment, so groups are assigned in a different way.

When comparing different groups

Sometimes, differences between participants are the main focus of a study, for example, when comparing children and adults or people with and without health conditions. Participants are not randomly assigned to different groups, but instead assigned based on their characteristics.

In this type of study, the characteristic of interest (e.g., gender) is an independent variable, and the groups differ based on the different levels (e.g., men, women). All participants are tested the same way, and then their group-level outcomes are compared.

When it’s not ethically permissible

When studying unhealthy or dangerous behaviours, it’s not possible to use random assignment. For example, if you’re studying heavy drinkers and social drinkers, it’s unethical to randomly assign participants to one of the two groups and ask them to drink large amounts of alcohol for your experiment.

When you can’t assign participants to groups, you can also conduct a quasi-experimental study . In a quasi-experiment, you study the outcomes of pre-existing groups who receive treatments that you may not have any control over (e.g., heavy drinkers and social drinkers).

These groups aren’t randomly assigned, but may be considered comparable when some other variables (e.g., age or socioeconomic status) are controlled for.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomisation. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalisability of your results, while random assignment improves the internal validity of your study.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

Bhandari, P. (2023, February 13). Random Assignment in Experiments | Introduction & Examples. Scribbr. Retrieved 9 September 2024, from https://www.scribbr.co.uk/research-methods/random-assignment-experiments/

Is this article helpful?

Pritha Bhandari

Pritha Bhandari

Other students also liked, a quick guide to experimental design | 5 steps & examples, controlled experiments | methods & examples of control, control groups and treatment groups | uses & examples.

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • J Am Med Inform Assoc
  • v.13(1); Jan-Feb 2006

The Use and Interpretation of Quasi-Experimental Studies in Medical Informatics

Associated data.

Quasi-experimental study designs, often described as nonrandomized, pre-post intervention studies, are common in the medical informatics literature. Yet little has been written about the benefits and limitations of the quasi-experimental approach as applied to informatics studies. This paper outlines a relative hierarchy and nomenclature of quasi-experimental study designs that is applicable to medical informatics intervention studies. In addition, the authors performed a systematic review of two medical informatics journals, the Journal of the American Medical Informatics Association (JAMIA) and the International Journal of Medical Informatics (IJMI), to determine the number of quasi-experimental studies published and how the studies are classified on the above-mentioned relative hierarchy. They hope that future medical informatics studies will implement higher level quasi-experimental study designs that yield more convincing evidence for causal links between medical informatics interventions and outcomes.

Quasi-experimental studies encompass a broad range of nonrandomized intervention studies. These designs are frequently used when it is not logistically feasible or ethical to conduct a randomized controlled trial. Examples of quasi-experimental studies follow. As one example of a quasi-experimental study, a hospital introduces a new order-entry system and wishes to study the impact of this intervention on the number of medication-related adverse events before and after the intervention. As another example, an informatics technology group is introducing a pharmacy order-entry system aimed at decreasing pharmacy costs. The intervention is implemented and pharmacy costs before and after the intervention are measured.

In medical informatics, the quasi-experimental, sometimes called the pre-post intervention, design often is used to evaluate the benefits of specific interventions. The increasing capacity of health care institutions to collect routine clinical data has led to the growing use of quasi-experimental study designs in the field of medical informatics as well as in other medical disciplines. However, little is written about these study designs in the medical literature or in traditional epidemiology textbooks. 1 , 2 , 3 In contrast, the social sciences literature is replete with examples of ways to implement and improve quasi-experimental studies. 4 , 5 , 6

In this paper, we review the different pretest-posttest quasi-experimental study designs, their nomenclature, and the relative hierarchy of these designs with respect to their ability to establish causal associations between an intervention and an outcome. The example of a pharmacy order-entry system aimed at decreasing pharmacy costs will be used throughout this article to illustrate the different quasi-experimental designs. We discuss limitations of quasi-experimental designs and offer methods to improve them. We also perform a systematic review of four years of publications from two informatics journals to determine the number of quasi-experimental studies, classify these studies into their application domains, determine whether the potential limitations of quasi-experimental studies were acknowledged by the authors, and place these studies into the above-mentioned relative hierarchy.

The authors reviewed articles and book chapters on the design of quasi-experimental studies. 4 , 5 , 6 , 7 , 8 , 9 , 10 Most of the reviewed articles referenced two textbooks that were then reviewed in depth. 4 , 6

Key advantages and disadvantages of quasi-experimental studies, as they pertain to the study of medical informatics, were identified. The potential methodological flaws of quasi-experimental medical informatics studies, which have the potential to introduce bias, were also identified. In addition, a summary table outlining a relative hierarchy and nomenclature of quasi-experimental study designs is described. In general, the higher the design is in the hierarchy, the greater the internal validity that the study traditionally possesses because the evidence of the potential causation between the intervention and the outcome is strengthened. 4

We then performed a systematic review of four years of publications from two informatics journals. First, we determined the number of quasi-experimental studies. We then classified these studies on the above-mentioned hierarchy. We also classified the quasi-experimental studies according to their application domain. The categories of application domains employed were based on categorization used by Yearbooks of Medical Informatics 1992–2005 and were similar to the categories of application domains employed by Annual Symposiums of the American Medical Informatics Association. 11 The categories were (1) health and clinical management; (2) patient records; (3) health information systems; (4) medical signal processing and biomedical imaging; (5) decision support, knowledge representation, and management; (6) education and consumer informatics; and (7) bioinformatics. Because the quasi-experimental study design has recognized limitations, we sought to determine whether authors acknowledged the potential limitations of this design. Examples of acknowledgment included mention of lack of randomization, the potential for regression to the mean, the presence of temporal confounders and the mention of another design that would have more internal validity.

All original scientific manuscripts published between January 2000 and December 2003 in the Journal of the American Medical Informatics Association (JAMIA) and the International Journal of Medical Informatics (IJMI) were reviewed. One author (ADH) reviewed all the papers to identify the number of quasi-experimental studies. Other authors (ADH, JCM, JF) then independently reviewed all the studies identified as quasi-experimental. The three authors then convened as a group to resolve any disagreements in study classification, application domain, and acknowledgment of limitations.

Results and Discussion

What is a quasi-experiment.

Quasi-experiments are studies that aim to evaluate interventions but that do not use randomization. Similar to randomized trials, quasi-experiments aim to demonstrate causality between an intervention and an outcome. Quasi-experimental studies can use both preintervention and postintervention measurements as well as nonrandomly selected control groups.

Using this basic definition, it is evident that many published studies in medical informatics utilize the quasi-experimental design. Although the randomized controlled trial is generally considered to have the highest level of credibility with regard to assessing causality, in medical informatics, researchers often choose not to randomize the intervention for one or more reasons: (1) ethical considerations, (2) difficulty of randomizing subjects, (3) difficulty to randomize by locations (e.g., by wards), (4) small available sample size. Each of these reasons is discussed below.

Ethical considerations typically will not allow random withholding of an intervention with known efficacy. Thus, if the efficacy of an intervention has not been established, a randomized controlled trial is the design of choice to determine efficacy. But if the intervention under study incorporates an accepted, well-established therapeutic intervention, or if the intervention has either questionable efficacy or safety based on previously conducted studies, then the ethical issues of randomizing patients are sometimes raised. In the area of medical informatics, it is often believed prior to an implementation that an informatics intervention will likely be beneficial and thus medical informaticians and hospital administrators are often reluctant to randomize medical informatics interventions. In addition, there is often pressure to implement the intervention quickly because of its believed efficacy, thus not allowing researchers sufficient time to plan a randomized trial.

For medical informatics interventions, it is often difficult to randomize the intervention to individual patients or to individual informatics users. So while this randomization is technically possible, it is underused and thus compromises the eventual strength of concluding that an informatics intervention resulted in an outcome. For example, randomly allowing only half of medical residents to use pharmacy order-entry software at a tertiary care hospital is a scenario that hospital administrators and informatics users may not agree to for numerous reasons.

Similarly, informatics interventions often cannot be randomized to individual locations. Using the pharmacy order-entry system example, it may be difficult to randomize use of the system to only certain locations in a hospital or portions of certain locations. For example, if the pharmacy order-entry system involves an educational component, then people may apply the knowledge learned to nonintervention wards, thereby potentially masking the true effect of the intervention. When a design using randomized locations is employed successfully, the locations may be different in other respects (confounding variables), and this further complicates the analysis and interpretation.

In situations where it is known that only a small sample size will be available to test the efficacy of an intervention, randomization may not be a viable option. Randomization is beneficial because on average it tends to evenly distribute both known and unknown confounding variables between the intervention and control group. However, when the sample size is small, randomization may not adequately accomplish this balance. Thus, alternative design and analytical methods are often used in place of randomization when only small sample sizes are available.

What Are the Threats to Establishing Causality When Using Quasi-experimental Designs in Medical Informatics?

The lack of random assignment is the major weakness of the quasi-experimental study design. Associations identified in quasi-experiments meet one important requirement of causality since the intervention precedes the measurement of the outcome. Another requirement is that the outcome can be demonstrated to vary statistically with the intervention. Unfortunately, statistical association does not imply causality, especially if the study is poorly designed. Thus, in many quasi-experiments, one is most often left with the question: “Are there alternative explanations for the apparent causal association?” If these alternative explanations are credible, then the evidence of causation is less convincing. These rival hypotheses, or alternative explanations, arise from principles of epidemiologic study design.

Shadish et al. 4 outline nine threats to internal validity that are outlined in ▶ . Internal validity is defined as the degree to which observed changes in outcomes can be correctly inferred to be caused by an exposure or an intervention. In quasi-experimental studies of medical informatics, we believe that the methodological principles that most often result in alternative explanations for the apparent causal effect include (a) difficulty in measuring or controlling for important confounding variables, particularly unmeasured confounding variables, which can be viewed as a subset of the selection threat in ▶ ; (b) results being explained by the statistical principle of regression to the mean . Each of these latter two principles is discussed in turn.

Threats to Internal Validity

1. Ambiguous temporal precedence: Lack of clarity about whether intervention occurred before outcome
2. Selection: Systematic differences over conditions in respondent characteristics that could also cause the observed effect
3. History: Events occurring concurrently with intervention could cause the observed effect
4. Maturation: Naturally occurring changes over time could be confused with a treatment effect
5. Regression: When units are selected for their extreme scores, they will often have less extreme subsequent scores, an occurrence that can be confused with an intervention effect
6. Attrition: Loss of respondents can produce artifactual effects if that loss is correlated with intervention
7. Testing: Exposure to a test can affect scores on subsequent exposures to that test
8. Instrumentation: The nature of a measurement may change over time or conditions
9. Interactive effects: The impact of an intervention may depend on the level of another intervention

Adapted from Shadish et al. 4

An inability to sufficiently control for important confounding variables arises from the lack of randomization. A variable is a confounding variable if it is associated with the exposure of interest and is also associated with the outcome of interest; the confounding variable leads to a situation where a causal association between a given exposure and an outcome is observed as a result of the influence of the confounding variable. For example, in a study aiming to demonstrate that the introduction of a pharmacy order-entry system led to lower pharmacy costs, there are a number of important potential confounding variables (e.g., severity of illness of the patients, knowledge and experience of the software users, other changes in hospital policy) that may have differed in the preintervention and postintervention time periods ( ▶ ). In a multivariable regression, the first confounding variable could be addressed with severity of illness measures, but the second confounding variable would be difficult if not nearly impossible to measure and control. In addition, potential confounding variables that are unmeasured or immeasurable cannot be controlled for in nonrandomized quasi-experimental study designs and can only be properly controlled by the randomization process in randomized controlled trials.

An external file that holds a picture, illustration, etc.
Object name is 16f01.jpg

Example of confounding. To get the true effect of the intervention of interest, we need to control for the confounding variable.

Another important threat to establishing causality is regression to the mean. 12 , 13 , 14 This widespread statistical phenomenon can result in wrongly concluding that an effect is due to the intervention when in reality it is due to chance. The phenomenon was first described in 1886 by Francis Galton who measured the adult height of children and their parents. He noted that when the average height of the parents was greater than the mean of the population, the children tended to be shorter than their parents, and conversely, when the average height of the parents was shorter than the population mean, the children tended to be taller than their parents.

In medical informatics, what often triggers the development and implementation of an intervention is a rise in the rate above the mean or norm. For example, increasing pharmacy costs and adverse events may prompt hospital informatics personnel to design and implement pharmacy order-entry systems. If this rise in costs or adverse events is really just an extreme observation that is still within the normal range of the hospital's pharmaceutical costs (i.e., the mean pharmaceutical cost for the hospital has not shifted), then the statistical principle of regression to the mean predicts that these elevated rates will tend to decline even without intervention. However, often informatics personnel and hospital administrators cannot wait passively for this decline to occur. Therefore, hospital personnel often implement one or more interventions, and if a decline in the rate occurs, they may mistakenly conclude that the decline is causally related to the intervention. In fact, an alternative explanation for the finding could be regression to the mean.

What Are the Different Quasi-experimental Study Designs?

In the social sciences literature, quasi-experimental studies are divided into four study design groups 4 , 6 :

  • Quasi-experimental designs without control groups
  • Quasi-experimental designs that use control groups but no pretest
  • Quasi-experimental designs that use control groups and pretests
  • Interrupted time-series designs

There is a relative hierarchy within these categories of study designs, with category D studies being sounder than categories C, B, or A in terms of establishing causality. Thus, if feasible from a design and implementation point of view, investigators should aim to design studies that fall in to the higher rated categories. Shadish et al. 4 discuss 17 possible designs, with seven designs falling into category A, three designs in category B, and six designs in category C, and one major design in category D. In our review, we determined that most medical informatics quasi-experiments could be characterized by 11 of 17 designs, with six study designs in category A, one in category B, three designs in category C, and one design in category D because the other study designs were not used or feasible in the medical informatics literature. Thus, for simplicity, we have summarized the 11 study designs most relevant to medical informatics research in ▶ .

Relative Hierarchy of Quasi-experimental Designs

Quasi-experimental Study DesignsDesign Notation
A. Quasi-experimental designs without control groups
    1. The one-group posttest-only designX O1
    2. The one-group pretest-posttest designO1 X O2
    3. The one-group pretest-posttest design using a double pretestO1 O2 X O3
    4. The one-group pretest-posttest design using a nonequivalent dependent variable(O1a, O1b) X (O2a, O2b)
    5. The removed-treatment designO1 X O2 O3 removeX O4
    6. The repeated-treatment designO1 X O2 removeX O3 X O4
B. Quasi-experimental designs that use a control group but no pretest
    1. Posttest-only design with nonequivalent groupsIntervention group: X O1
Control group: O2
C. Quasi-experimental designs that use control groups and pretests
    1. Untreated control group with dependent pretest and posttest samplesIntervention group: O1a X O2a
Control group: O1b O2b
    2. Untreated control group design with dependent pretest and posttest samples using a double pretestIntervention group: O1a O2a X O3a
Control group: O1b O2b O3b
    3. Untreated control group design with dependent pretest and posttest samples using switching replicationsIntervention group: O1a X O2a O3a
Control group: O1b O2b X O3b
D. Interrupted time-series design
    1. Multiple pretest and posttest observations spaced at equal intervals of timeO1 O2 O3 O4 O5 X O6 O7 O8 O9 O10

O = Observational Measurement; X = Intervention Under Study. Time moves from left to right.

The nomenclature and relative hierarchy were used in the systematic review of four years of JAMIA and the IJMI. Similar to the relative hierarchy that exists in the evidence-based literature that assigns a hierarchy to randomized controlled trials, cohort studies, case-control studies, and case series, the hierarchy in ▶ is not absolute in that in some cases, it may be infeasible to perform a higher level study. For example, there may be instances where an A6 design established stronger causality than a B1 design. 15 , 16 , 17

Quasi-experimental Designs without Control Groups

equation M1

Here, X is the intervention and O is the outcome variable (this notation is continued throughout the article). In this study design, an intervention (X) is implemented and a posttest observation (O1) is taken. For example, X could be the introduction of a pharmacy order-entry intervention and O1 could be the pharmacy costs following the intervention. This design is the weakest of the quasi-experimental designs that are discussed in this article. Without any pretest observations or a control group, there are multiple threats to internal validity. Unfortunately, this study design is often used in medical informatics when new software is introduced since it may be difficult to have pretest measurements due to time, technical, or cost constraints.

equation M2

This is a commonly used study design. A single pretest measurement is taken (O1), an intervention (X) is implemented, and a posttest measurement is taken (O2). In this instance, period O1 frequently serves as the “control” period. For example, O1 could be pharmacy costs prior to the intervention, X could be the introduction of a pharmacy order-entry system, and O2 could be the pharmacy costs following the intervention. Including a pretest provides some information about what the pharmacy costs would have been had the intervention not occurred.

equation M3

The advantage of this study design over A2 is that adding a second pretest prior to the intervention helps provide evidence that can be used to refute the phenomenon of regression to the mean and confounding as alternative explanations for any observed association between the intervention and the posttest outcome. For example, in a study where a pharmacy order-entry system led to lower pharmacy costs (O3 < O2 and O1), if one had two preintervention measurements of pharmacy costs (O1 and O2) and they were both elevated, this would suggest that there was a decreased likelihood that O3 is lower due to confounding and regression to the mean. Similarly, extending this study design by increasing the number of measurements postintervention could also help to provide evidence against confounding and regression to the mean as alternate explanations for observed associations.

equation M4

This design involves the inclusion of a nonequivalent dependent variable ( b ) in addition to the primary dependent variable ( a ). Variables a and b should assess similar constructs; that is, the two measures should be affected by similar factors and confounding variables except for the effect of the intervention. Variable a is expected to change because of the intervention X, whereas variable b is not. Taking our example, variable a could be pharmacy costs and variable b could be the length of stay of patients. If our informatics intervention is aimed at decreasing pharmacy costs, we would expect to observe a decrease in pharmacy costs but not in the average length of stay of patients. However, a number of important confounding variables, such as severity of illness and knowledge of software users, might affect both outcome measures. Thus, if the average length of stay did not change following the intervention but pharmacy costs did, then the data are more convincing than if just pharmacy costs were measured.

The Removed-Treatment Design

equation M5

This design adds a third posttest measurement (O3) to the one-group pretest-posttest design and then removes the intervention before a final measure (O4) is made. The advantage of this design is that it allows one to test hypotheses about the outcome in the presence of the intervention and in the absence of the intervention. Thus, if one predicts a decrease in the outcome between O1 and O2 (after implementation of the intervention), then one would predict an increase in the outcome between O3 and O4 (after removal of the intervention). One caveat is that if the intervention is thought to have persistent effects, then O4 needs to be measured after these effects are likely to have disappeared. For example, a study would be more convincing if it demonstrated that pharmacy costs decreased after pharmacy order-entry system introduction (O2 and O3 less than O1) and that when the order-entry system was removed or disabled, the costs increased (O4 greater than O2 and O3 and closer to O1). In addition, there are often ethical issues in this design in terms of removing an intervention that may be providing benefit.

The Repeated-Treatment Design

equation M6

The advantage of this design is that it demonstrates reproducibility of the association between the intervention and the outcome. For example, the association is more likely to be causal if one demonstrates that a pharmacy order-entry system results in decreased pharmacy costs when it is first introduced and again when it is reintroduced following an interruption of the intervention. As for design A5, the assumption must be made that the effect of the intervention is transient, which is most often applicable to medical informatics interventions. Because in this design, subjects may serve as their own controls, this may yield greater statistical efficiency with fewer numbers of subjects.

Quasi-experimental Designs That Use a Control Group but No Pretest

equation M7

An intervention X is implemented for one group and compared to a second group. The use of a comparison group helps prevent certain threats to validity including the ability to statistically adjust for confounding variables. Because in this study design, the two groups may not be equivalent (assignment to the groups is not by randomization), confounding may exist. For example, suppose that a pharmacy order-entry intervention was instituted in the medical intensive care unit (MICU) and not the surgical intensive care unit (SICU). O1 would be pharmacy costs in the MICU after the intervention and O2 would be pharmacy costs in the SICU after the intervention. The absence of a pretest makes it difficult to know whether a change has occurred in the MICU. Also, the absence of pretest measurements comparing the SICU to the MICU makes it difficult to know whether differences in O1 and O2 are due to the intervention or due to other differences in the two units (confounding variables).

Quasi-experimental Designs That Use Control Groups and Pretests

The reader should note that with all the studies in this category, the intervention is not randomized. The control groups chosen are comparison groups. Obtaining pretest measurements on both the intervention and control groups allows one to assess the initial comparability of the groups. The assumption is that if the intervention and the control groups are similar at the pretest, the smaller the likelihood there is of important confounding variables differing between the two groups.

equation M8

The use of both a pretest and a comparison group makes it easier to avoid certain threats to validity. However, because the two groups are nonequivalent (assignment to the groups is not by randomization), selection bias may exist. Selection bias exists when selection results in differences in unit characteristics between conditions that may be related to outcome differences. For example, suppose that a pharmacy order-entry intervention was instituted in the MICU and not the SICU. If preintervention pharmacy costs in the MICU (O1a) and SICU (O1b) are similar, it suggests that it is less likely that there are differences in the important confounding variables between the two units. If MICU postintervention costs (O2a) are less than preintervention MICU costs (O1a), but SICU costs (O1b) and (O2b) are similar, this suggests that the observed outcome may be causally related to the intervention.

equation M9

In this design, the pretests are administered at two different times. The main advantage of this design is that it controls for potentially different time-varying confounding effects in the intervention group and the comparison group. In our example, measuring points O1 and O2 would allow for the assessment of time-dependent changes in pharmacy costs, e.g., due to differences in experience of residents, preintervention between the intervention and control group, and whether these changes were similar or different.

equation M10

With this study design, the researcher administers an intervention at a later time to a group that initially served as a nonintervention control. The advantage of this design over design C2 is that it demonstrates reproducibility in two different settings. This study design is not limited to two groups; in fact, the study results have greater validity if the intervention effect is replicated in different groups at multiple times. In the example of a pharmacy order-entry system, one could implement or intervene in the MICU and then at a later time, intervene in the SICU. This latter design is often very applicable to medical informatics where new technology and new software is often introduced or made available gradually.

Interrupted Time-Series Designs

equation M11

An interrupted time-series design is one in which a string of consecutive observations equally spaced in time is interrupted by the imposition of a treatment or intervention. The advantage of this design is that with multiple measurements both pre- and postintervention, it is easier to address and control for confounding and regression to the mean. In addition, statistically, there is a more robust analytic capability, and there is the ability to detect changes in the slope or intercept as a result of the intervention in addition to a change in the mean values. 18 A change in intercept could represent an immediate effect while a change in slope could represent a gradual effect of the intervention on the outcome. In the example of a pharmacy order-entry system, O1 through O5 could represent monthly pharmacy costs preintervention and O6 through O10 monthly pharmacy costs post the introduction of the pharmacy order-entry system. Interrupted time-series designs also can be further strengthened by incorporating many of the design features previously mentioned in other categories (such as removal of the treatment, inclusion of a nondependent outcome variable, or the addition of a control group).

Systematic Review Results

The results of the systematic review are in ▶ . In the four-year period of JAMIA publications that the authors reviewed, 25 quasi-experimental studies among 22 articles were published. Of these 25, 15 studies were of category A, five studies were of category B, two studies were of category C, and no studies were of category D. Although there were no studies of category D (interrupted time-series analyses), three of the studies classified as category A had data collected that could have been analyzed as an interrupted time-series analysis. Nine of the 25 studies (36%) mentioned at least one of the potential limitations of the quasi-experimental study design. In the four-year period of IJMI publications reviewed by the authors, nine quasi-experimental studies among eight manuscripts were published. Of these nine, five studies were of category A, one of category B, one of category C, and two of category D. Two of the nine studies (22%) mentioned at least one of the potential limitations of the quasi-experimental study design.

Systematic Review of Four Years of Quasi-designs in JAMIA

StudyJournalInformatics Topic CategoryQuasi-experimental DesignLimitation of Quasi-design Mentioned in Article
Staggers and Kobus JAMIA1Counterbalanced study designYes
Schriger et al. JAMIA1A5Yes
Patel et al. JAMIA2A5 (study 1, phase 1)No
Patel et al. JAMIA2A2 (study 1, phase 2)No
Borowitz JAMIA1A2No
Patterson and Harasym JAMIA6C1Yes
Rocha et al. JAMIA5A2Yes
Lovis et al. JAMIA1Counterbalanced study designNo
Hersh et al. JAMIA6B1No
Makoul et al. JAMIA2B1Yes
Ruland JAMIA3B1No
DeLusignan et al. JAMIA1A1No
Mekhjian et al. JAMIA1A2 (study design 1)Yes
Mekhjian et al. JAMIA1B1 (study design 2)Yes
Ammenwerth et al. JAMIA1A2No
Oniki et al. JAMIA5C1Yes
Liederman and Morefield JAMIA1A1 (study 1)No
Liederman and Morefield JAMIA1A2 (study 2)No
Rotich et al. JAMIA2A2 No
Payne et al. JAMIA1A1No
Hoch et al. JAMIA3A2 No
Laerum et al. JAMIA1B1Yes
Devine et al. JAMIA1Counterbalanced study design
Dunbar et al. JAMIA6A1
Lenert et al. JAMIA6A2
Koide et al. IJMI5D4No
Gonzalez-Hendrich et al. IJMI2A1No
Anantharaman and Swee Han IJMI3B1No
Chae et al. IJMI6A2No
Lin et al. IJMI3A1No
Mikulich et al. IJMI1A2Yes
Hwang et al. IJMI1A2Yes
Park et al. IJMI1C2No
Park et al. IJMI1D4No

JAMIA = Journal of the American Medical Informatics Association; IJMI = International Journal of Medical Informatics.

In addition, three studies from JAMIA were based on a counterbalanced design. A counterbalanced design is a higher order study design than other studies in category A. The counterbalanced design is sometimes referred to as a Latin-square arrangement. In this design, all subjects receive all the different interventions but the order of intervention assignment is not random. 19 This design can only be used when the intervention is compared against some existing standard, for example, if a new PDA-based order entry system is to be compared to a computer terminal–based order entry system. In this design, all subjects receive the new PDA-based order entry system and the old computer terminal-based order entry system. The counterbalanced design is a within-participants design, where the order of the intervention is varied (e.g., one group is given software A followed by software B and another group is given software B followed by software A). The counterbalanced design is typically used when the available sample size is small, thus preventing the use of randomization. This design also allows investigators to study the potential effect of ordering of the informatics intervention.

Although quasi-experimental study designs are ubiquitous in the medical informatics literature, as evidenced by 34 studies in the past four years of the two informatics journals, little has been written about the benefits and limitations of the quasi-experimental approach. As we have outlined in this paper, a relative hierarchy and nomenclature of quasi-experimental study designs exist, with some designs being more likely than others to permit causal interpretations of observed associations. Strengths and limitations of a particular study design should be discussed when presenting data collected in the setting of a quasi-experimental study. Future medical informatics investigators should choose the strongest design that is feasible given the particular circumstances.

Supplementary Material

Dr. Harris was supported by NIH grants K23 AI01752-01A1 and R01 AI60859-01A1. Dr. Perencevich was supported by a VA Health Services Research and Development Service (HSR&D) Research Career Development Award (RCD-02026-1). Dr. Finkelstein was supported by NIH grant RO1 HL71690.

We're sorry, but some features of Research Randomizer require JavaScript. If you cannot enable JavaScript, we suggest you use an alternative random number generator such as the one available at Random.org .

RESEARCH RANDOMIZER

Random sampling and random assignment made easy.

Research Randomizer is a free resource for researchers and students in need of a quick way to generate random numbers or assign participants to experimental conditions. This site can be used for a variety of purposes, including psychology experiments, medical trials, and survey research.

GENERATE NUMBERS

In some cases, you may wish to generate more than one set of numbers at a time (e.g., when randomly assigning people to experimental conditions in a "blocked" research design). If you wish to generate multiple sets of random numbers, simply enter the number of sets you want, and Research Randomizer will display all sets in the results.

Specify how many numbers you want Research Randomizer to generate in each set. For example, a request for 5 numbers might yield the following set of random numbers: 2, 17, 23, 42, 50.

Specify the lowest and highest value of the numbers you want to generate. For example, a range of 1 up to 50 would only generate random numbers between 1 and 50 (e.g., 2, 17, 23, 42, 50). Enter the lowest number you want in the "From" field and the highest number you want in the "To" field.

Selecting "Yes" means that any particular number will appear only once in a given set (e.g., 2, 17, 23, 42, 50). Selecting "No" means that numbers may repeat within a given set (e.g., 2, 17, 17, 42, 50). Please note: Numbers will remain unique only within a single set, not across multiple sets. If you request multiple sets, any particular number in Set 1 may still show up again in Set 2.

Sorting your numbers can be helpful if you are performing random sampling, but it is not desirable if you are performing random assignment. To learn more about the difference between random sampling and random assignment, please see the Research Randomizer Quick Tutorial.

Place Markers let you know where in the sequence a particular random number falls (by marking it with a small number immediately to the left). Examples: With Place Markers Off, your results will look something like this: Set #1: 2, 17, 23, 42, 50 Set #2: 5, 3, 42, 18, 20 This is the default layout Research Randomizer uses. With Place Markers Within, your results will look something like this: Set #1: p1=2, p2=17, p3=23, p4=42, p5=50 Set #2: p1=5, p2=3, p3=42, p4=18, p5=20 This layout allows you to know instantly that the number 23 is the third number in Set #1, whereas the number 18 is the fourth number in Set #2. Notice that with this option, the Place Markers begin again at p1 in each set. With Place Markers Across, your results will look something like this: Set #1: p1=2, p2=17, p3=23, p4=42, p5=50 Set #2: p6=5, p7=3, p8=42, p9=18, p10=20 This layout allows you to know that 23 is the third number in the sequence, and 18 is the ninth number over both sets. As discussed in the Quick Tutorial, this option is especially helpful for doing random assignment by blocks.

Please note: By using this service, you agree to abide by the SPN User Policy and to hold Research Randomizer and its staff harmless in the event that you experience a problem with the program or its results. Although every effort has been made to develop a useful means of generating random numbers, Research Randomizer and its staff do not guarantee the quality or randomness of numbers generated. Any use to which these numbers are put remains the sole responsibility of the user who generated them.

Note: By using Research Randomizer, you agree to its Terms of Service .

Pardon Our Interruption

As you were browsing something about your browser made us think you were a bot. There are a few reasons this might happen:

  • You've disabled JavaScript in your web browser.
  • You're a power user moving through this website with super-human speed.
  • You've disabled cookies in your web browser.
  • A third-party browser plugin, such as Ghostery or NoScript, is preventing JavaScript from running. Additional information is available in this support article .

To regain access, please make sure that cookies and JavaScript are enabled before reloading the page.

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, generate accurate citations for free.

  • Knowledge Base

Methodology

  • Quasi-Experimental Design | Definition, Types & Examples

Quasi-Experimental Design | Definition, Types & Examples

Published on July 31, 2020 by Lauren Thomas . Revised on January 22, 2024.

Like a true experiment , a quasi-experimental design aims to establish a cause-and-effect relationship between an independent and dependent variable .

However, unlike a true experiment, a quasi-experiment does not rely on random assignment . Instead, subjects are assigned to groups based on non-random criteria.

Quasi-experimental design is a useful tool in situations where true experiments cannot be used for ethical or practical reasons.

Quasi-experimental design vs. experimental design

Table of contents

Differences between quasi-experiments and true experiments, types of quasi-experimental designs, when to use quasi-experimental design, advantages and disadvantages, other interesting articles, frequently asked questions about quasi-experimental designs.

There are several common differences between true and quasi-experimental designs.

True experimental design Quasi-experimental design
Assignment to treatment The researcher subjects to control and treatment groups. Some other, method is used to assign subjects to groups.
Control over treatment The researcher usually . The researcher often , but instead studies pre-existing groups that received different treatments after the fact.
Use of Requires the use of . Control groups are not required (although they are commonly used).

Example of a true experiment vs a quasi-experiment

However, for ethical reasons, the directors of the mental health clinic may not give you permission to randomly assign their patients to treatments. In this case, you cannot run a true experiment.

Instead, you can use a quasi-experimental design.

You can use these pre-existing groups to study the symptom progression of the patients treated with the new therapy versus those receiving the standard course of treatment.

Receive feedback on language, structure, and formatting

Professional editors proofread and edit your paper by focusing on:

  • Academic style
  • Vague sentences
  • Style consistency

See an example

random assignment in quantitative research

Many types of quasi-experimental designs exist. Here we explain three of the most common types: nonequivalent groups design, regression discontinuity, and natural experiments.

Nonequivalent groups design

In nonequivalent group design, the researcher chooses existing groups that appear similar, but where only one of the groups experiences the treatment.

In a true experiment with random assignment , the control and treatment groups are considered equivalent in every way other than the treatment. But in a quasi-experiment where the groups are not random, they may differ in other ways—they are nonequivalent groups .

When using this kind of design, researchers try to account for any confounding variables by controlling for them in their analysis or by choosing groups that are as similar as possible.

This is the most common type of quasi-experimental design.

Regression discontinuity

Many potential treatments that researchers wish to study are designed around an essentially arbitrary cutoff, where those above the threshold receive the treatment and those below it do not.

Near this threshold, the differences between the two groups are often so minimal as to be nearly nonexistent. Therefore, researchers can use individuals just below the threshold as a control group and those just above as a treatment group.

However, since the exact cutoff score is arbitrary, the students near the threshold—those who just barely pass the exam and those who fail by a very small margin—tend to be very similar, with the small differences in their scores mostly due to random chance. You can therefore conclude that any outcome differences must come from the school they attended.

Natural experiments

In both laboratory and field experiments, researchers normally control which group the subjects are assigned to. In a natural experiment, an external event or situation (“nature”) results in the random or random-like assignment of subjects to the treatment group.

Even though some use random assignments, natural experiments are not considered to be true experiments because they are observational in nature.

Although the researchers have no control over the independent variable , they can exploit this event after the fact to study the effect of the treatment.

However, as they could not afford to cover everyone who they deemed eligible for the program, they instead allocated spots in the program based on a random lottery.

Although true experiments have higher internal validity , you might choose to use a quasi-experimental design for ethical or practical reasons.

Sometimes it would be unethical to provide or withhold a treatment on a random basis, so a true experiment is not feasible. In this case, a quasi-experiment can allow you to study the same causal relationship without the ethical issues.

The Oregon Health Study is a good example. It would be unethical to randomly provide some people with health insurance but purposely prevent others from receiving it solely for the purposes of research.

However, since the Oregon government faced financial constraints and decided to provide health insurance via lottery, studying this event after the fact is a much more ethical approach to studying the same problem.

True experimental design may be infeasible to implement or simply too expensive, particularly for researchers without access to large funding streams.

At other times, too much work is involved in recruiting and properly designing an experimental intervention for an adequate number of subjects to justify a true experiment.

In either case, quasi-experimental designs allow you to study the question by taking advantage of data that has previously been paid for or collected by others (often the government).

Quasi-experimental designs have various pros and cons compared to other types of studies.

  • Higher external validity than most true experiments, because they often involve real-world interventions instead of artificial laboratory settings.
  • Higher internal validity than other non-experimental types of research, because they allow you to better control for confounding variables than other types of studies do.
  • Lower internal validity than true experiments—without randomization, it can be difficult to verify that all confounding variables have been accounted for.
  • The use of retrospective data that has already been collected for other purposes can be inaccurate, incomplete or difficult to access.

Here's why students love Scribbr's proofreading services

Discover proofreading & editing

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Thomas, L. (2024, January 22). Quasi-Experimental Design | Definition, Types & Examples. Scribbr. Retrieved September 9, 2024, from https://www.scribbr.com/methodology/quasi-experimental-design/

Is this article helpful?

Lauren Thomas

Lauren Thomas

Other students also liked, guide to experimental design | overview, steps, & examples, random assignment in experiments | introduction & examples, control variables | what are they & why do they matter, "i thought ai proofreading was useless but..".

I've been using Scribbr for years now and I know it's a service that won't disappoint. It does a good job spotting mistakes”

The Effectiveness of AI on K-12 Students’ Mathematics Learning: A Systematic Review and Meta-Analysis

  • Published: 12 September 2024

Cite this article

random assignment in quantitative research

  • Linxuan Yi 1 ,
  • Di Liu   ORCID: orcid.org/0000-0003-0461-1012 1 ,
  • Tiancheng Jiang 1 &
  • Yucheng Xian 1  

Artificial intelligence (AI) shows increasing potential to improve mathematics instruction, yet integrative quantitative evidence currently is lacking on its overall effectiveness and factors influencing success. This systematic review and meta-analysis investigate the effectiveness of AI on improving mathematics performance in K-12 classrooms compared to traditional classroom instruction. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we searched five databases from 2000 through December 2023, synthesizing findings from 21 relevant studies (40 samples) which met screening criteria. Results indicate a small overall effect size of 0.343 favoring AI under a random-effects model, showing a generally positive impact. Only one variable, AI type, was identified as having moderate effects, with AI demonstrating a greater impact when served as an intelligent tutoring system and adaptive learning system. Our findings establish an initial knowledge base for implementation and future research on the effective integration of AI into K–12 mathematics classrooms. This study also focuses on the appropriateness across age, mathematical contents, and AI design factors is aimed at further advancing the judicious adoption and success of classroom AI integration.

This is a preview of subscription content, log in via an institution to check access.

Access this article

Subscribe and save.

  • Get 10 units per month
  • Download Article/Chapter or eBook
  • 1 Unit = 1 Article or 1 Chapter
  • Cancel anytime

Price includes VAT (Russian Federation)

Instant access to the full article PDF.

Rent this article via DeepDyve

Institutional subscriptions

random assignment in quantitative research

Similar content being viewed by others

random assignment in quantitative research

Machine Learning and the Five Big Ideas in AI

random assignment in quantitative research

Meta-analysis on effects of artificial intelligence education in K-12 South Korean classrooms

random assignment in quantitative research

Exploring the Impact of Artificial Intelligence in Teaching and Learning of Science: A Systematic Review of Empirical Research

Explore related subjects.

  • Artificial Intelligence

Besimi, N., Besimi, A., & Cico, B. (2022). Artificial intelligence in education and learning (AI in research). In 2022 11th Mediterranean Conference on Embedded Computing (MECO) (pp. 1–6). IEEE. https://doi.org/10.1109/meco55406.2022.9797216

Chapter   Google Scholar  

Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. R. (2021). Introduction to meta-analysis (2nd ed.). John Wiley and Sons.

Book   Google Scholar  

Büchele, S., & Feudel, F. (2023). Changes in students’ mathematical competencies at the beginning of higher education within the last decade at a German University. International Journal of Science and Mathematics Education, 21 (8), 2325–2347. https://doi.org/10.1007/s10763-022-10350-x

Article   Google Scholar  

Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. Ieee Access: Practical Innovations, Open Solutions , 8 , 75264–75278. https://doi.org/10.1109/access.2020.2988510

Chen, X., Xie, H., & Hwang, G. J. (2020). A multi-perspective study on Artificial Intelligence in Education: Grants, conferences, journals, software tools, institutions, and researchers. Computers and Education: Artificial Intelligence, 1 , 100005. https://doi.org/10.1016/j.caeai.2020.100005

de Morais, F., & Jaques, P. A. (2021). Does handwriting impact learning on math tutoring systems? Informatics in Education , 21 (1), 55–90. https://doi.org/10.15388/infedu.2022.03

Duval, S., & Tweedie, R. (2000). Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics , 56 (2), 455–463. https://doi.org/10.1111/j.0006-341x.2000.00455.x

Fang, Y., Ren, Z., Hu, X., & Graesser, A. C. (2018). A meta-analysis of the effectiveness of Aleks on learning. Educational Psychology , 39 (10), 1278–1292. https://doi.org/10.1080/01443410.2018.1495829

Hall, J. A., & Rosenthal, R. (1991). Testing for moderator variables in meta-analysis: Issues and methods. Communication Monographs , 58 (4), 437–448. https://doi.org/10.1080/03637759109376240

Hascoët, M., Giaconi, V., & Jamain, L. (2021). Family socioeconomic status and parental expectations affect mathematics achievement in a national sample of Chilean students. International Journal of Behavioral Development , 45 (2), 122–132. https://doi.org/10.1177/0165025420965731

Hillmayr, D., Ziernwald, L., Reinhold, F., Hofer, S. I., & Reiss, K. M. (2020). The potential of digital tools to enhance mathematics and science learning in secondary schools: A context-specific meta-analysis. Computers & Education , 153 , 103897. https://doi.org/10.1016/j.compedu.2020.103897

Hwang, G. J., & Tu, Y. F. (2021). Roles and research trends of Artificial Intelligence in mathematics education: A bibliometric mapping analysis and systematic review. Mathematics , 9 (6), 584. https://doi.org/10.3390/math9060584

Hwang, S. (2022). Examining the effects of artificial intelligence on Elementary Students’ mathematics achievement: A meta-analysis. Sustainability , 14 (20), 13185. https://doi.org/10.3390/su142013185

Li, Q., & Ma, X. (2010). A meta-analysis of the effects of computer technology on school students’ mathematics learning. Educational Psychology Review , 22 (3), 215–243. https://doi.org/10.1007/s10648-010-9125-8

Ma, W., Zhao, X., & Guo, Y. (2021). Improving the effectiveness of traditional education based on Computer Artificial Intelligence and neural network system. Journal of Intelligent & Fuzzy Systems , 40 (2), 2565–2575. https://doi.org/10.3233/jifs-189249

Mavrikis, M., Rummel, N., Wiedmann, M., Loibl, K., & Holmes, W. (2022). Combining exploratory learning with structured practice educational technologies to foster both conceptual and procedural fractions knowledge. Educational Technology Research and Development , 70 (3), 691–712. https://doi.org/10.1007/s11423-022-10104-0

Mohamed, M. Z., Hidayat, R., Suhaizi, N. N., Sabri, N., Mahmud, M. K., & Baharuddin, S. N. (2022). Artificial Intelligence in mathematics education: A systematic literature review. International Electronic Journal of Mathematics Education , 17 (3), em0694. https://doi.org/10.29333/iejme/12132

National Council of Teachers of Mathematics. (2022, October 24). 2022 NAEP math scores reinforce why systemic change is needed in mathematics education . NCTM Responds to 2022 Math NAEP Results . Retrieved from https://www.nctm.org/News-and-Calendar/News/NCTM-News-Releases/NCTM-Responds-to-2022-Math-NAEP-Results/

National Council of Teachers of Mathematics. (2000). Principles and standards for school mathematics . NCTM.

Google Scholar  

Parviainen, P., Eklund, K., Koivula, M., Liinamaa, T., & Rutanen, N. (2023). Teaching early mathematical skills to 3- to 7-year-old children — differences related to mathematical skill category, children’s age group and teachers’ characteristics. International Journal of Science and Mathematics Education, 21 (7), 1961–1983. https://doi.org/10.1007/s10763-022-10341-y

Peng, P., & Lin, X. (2019). The relation between mathematics vocabulary and mathematics performance among fourth graders. Learning and Individual Differences , 69 , 11–21. https://doi.org/10.1016/j.lindif.2018.11.006

Peng, P., Namkung, J., Barnes, M., & Sun, C. (2016). A meta-analysis of mathematics and working memory: Moderating effects of working memory domain, type of mathematics skill, and sample characteristics. Journal of Educational Psychology , 108 (4), 455–473. https://doi.org/10.1037/edu0000079

Qu, J., Zhao, Y., & Xie, Y. (2022). Artificial intelligence leads the reform of Education models. Systems Research and Behavioral Science , 39 (3), 581–588. https://doi.org/10.1002/sres.2864

Steenbergen-Hu, S., & Cooper, H. (2013). A meta-analysis of the effectiveness of intelligent tutoring systems on K–12 students’ mathematical learning. Journal of Educational Psychology , 105 (4), 970–987. https://doi.org/10.1037/a0032447

Steenbergen-Hu, S., & Cooper, H. (2014). A meta-analysis of the effectiveness of intelligent tutoring systems on college students’ academic learning. Journal of Educational Psychology , 106 (2), 331–347. https://doi.org/10.1037/a0034752

Tang, K. Y., Chang, C. Y., & Hwang, G. J. (2021). Trends in artificial intelligence-supported e-learning: A systematic review and co-citation network analysis (1998–2019). Interactive Learning Environments , 31 (4), 2134–2152. https://doi.org/10.1080/10494820.2021.1875001

Thai, K. P., Bang, H. J., & Li, L. (2021). Accelerating early math learning with research-based personalized learning games: A cluster randomized controlled trial. Journal of Research on Educational Effectiveness , 15 (1), 28–51. https://doi.org/10.1080/19345747.2021.1969710

United Nations Educational, Scientific and Cultural Organization. (2019). Beijing consensus on AI and education . Retrieved from https://unesdoc.unesco.org/ark:/48223/pf0000368303

Xu, W., & Ouyang, F. (2022). The application of AI Technologies in STEM education: A systematic review from 2011 to 2021. International Journal of STEM Education , 9 (59), 1–20. https://doi.org/10.1186/s40594-022-00377-5

Yang, S. J. H., Ogata, H., Matsui, T., & Chen, N. S. (2021). Human-centered artificial intelligence in education: Seeing the invisible through the visible. Computers and Education: Artificial Intelligence , 2 , 100008. https://doi.org/10.1016/j.caeai.2021.100008

Yu, X., Xia, J., & Cheng, W. (2022). Prospects and challenges of equipping mathematics tutoring systems with personalized learning strategies . 2022 International Conference on Intelligent Education and Intelligent Research (IEIR), Wuhan, China. https://doi.org/10.1109/ieir56323.2022.10050082

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16 (1). https://doi.org/10.1186/s41239-019-0171-0

Zhang, S., & Chen, X. (2022). Applying artificial intelligence into early childhood math education: Lesson Design and course effect . 2022 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE), Hung Hom, Hong Kong. https://doi.org/10.1109/tale54877.2022.00109

Studies included in the meta-analysis

Arroyo, I., Royer, J., & Woolf, B. (2011). Using an intelligent tutor and math fluency training to improve math performance. International Journal of Artificial Intelligence in Education , 21 (1), 135–152. https://doi.org/10.3233/jai-2011-020

Bringula, R. P., Fosgate, I. C., Garcia, N. P., & Yorobe, J. L. (2017). Effects of pedagogical agents on students’ mathematics performance: A comparison between two versions. Journal of Educational Computing Research , 56 (5), 701–722. https://doi.org/10.1177/0735633117722494

del Olmo-Muñoz, J., González‐Calero, J. A., Diago, P. D., Arnau, D., & Arevalillo‐Herráez, M. (2022). Using intra‐task flexibility on an intelligent tutoring system to promote arithmetic problem‐solving proficiency. British Journal of Educational Technology , 53 (6), 1976–1992. https://doi.org/10.1111/bjet.13228

González-Calero, J. A., Arnau, D., Puig, L., & Arevalillo‐Herráez, M. (2014). Intensive scaffolding in an intelligent tutoring system for the learning of algebraic word problem solving. British Journal of Educational Technology , 46 (6), 1189–1200. https://doi.org/10.1111/bjet.12183

Harskamp, E. G., & Suhre, C. J. M. (2006). Improving mathematical problem solving: A computerized approach. Computers in Human Behavior , 22 (5), 801–815. https://doi.org/10.1016/j.chb.2004.03.023

Hou, X., Nguyen, H. A., Richey, J. E., Harpstead, E., Hammer, J., & McLaren, B. M. (2022). Assessing the effects of open models of learning and enjoyment in a digital learning game. International Journal of Artificial Intelligence in Education , 32 (1), 120–150. https://doi.org/10.1007/s40593-021-00250-6

Huang, X., Craig, S. D., Xie, J., Graesser, A., & Hu, X. (2016). Intelligent tutoring systems work as a math gap reducer in 6th grade after-school program. Learning and Individual Differences , 47 , 258–265. https://doi.org/10.1016/j.lindif.2016.01.012

Hwang, G. J., Tseng, J. C. R., & Hwang, G. H. (2008). Diagnosing student learning problems based on Historical Assessment Records. Innovations in Education and Teaching International , 45 (1), 77–89. https://doi.org/10.1080/14703290701757476

Jaques, P. A., Seffrin, H., Rubi, G., de Morais, F., Ghilardi, C., Bittencourt, I. I., & Isotani, S. (2013). Rule-based expert systems to support step-by-step guidance in algebraic problem solving: The case of the tutor pat2math. Expert Systems with Applications , 40 (14), 5456–5465. https://doi.org/10.1016/j.eswa.2013.04.004

Jia, J., Li, S., Miao, Y., & Li, J. (2023). The effects of personalised mathematic instruction supported by an intelligent tutoring system during the COVID-19 epidemic and the post-epidemic era. International Journal of Innovation and Learning, 33 (3), 330–343. https://doi.org/10.1504/ijil.2023.130099

Khodeir, N., Wanas, N., & Elazhary, H. (2018). Constraint-based student modelling in probability story problems with scaffolding techniques. International Journal of Emerging Technologies in Learning (IJET) , 13 (01), 178–205. https://doi.org/10.3991/ijet.v13i01.7397

Kim, Y., Thayne, J., & Wei, Q. (2017). An embodied agent helps anxious students in mathematics learning. Educational Technology Research and Development, 65 (1), 219–235. https://doi.org/10.1007/s11423-016-9476-z

Kohn, J., Rauscher, L., Kucian, K., Käser, T., Wyschkon, A., Esser, G., & von Aster, M. (2020). Efficacy of a computer-based learning program in children with developmental dyscalculia. what influences individual responsiveness? Frontiers in Psychology , 11 , 1115. https://doi.org/10.3389/fpsyg.2020.01115

Li, Y., Zhao, K., & Xu, W. (2015). Developing an intelligent tutoring system that has automatically generated hints and summarization for algebra and geometry. International Journal of Information and Communication Technology Education , 11 (2), 14–31. https://doi.org/10.4018/ijicte.2015040102

Matsuda, N., Yarzebinski, E., Keiser, V., Raizada, R., Cohen, W. W., Stylianides, G. J., & Koedinger, K. R. (2013). Cognitive anatomy of tutor learning: Lessons learned with Simstudent. Journal of Educational Psychology , 105 (4), 1152–1163. https://doi.org/10.1037/a0031955

Özyurt, Ö., Özyurt, H., Güven, B., & Baki, A. (2014). The effects of UZWEBMAT on the probability unit achievement of Turkish eleventh grade students and the reasons for such effects. Computers & Education , 75 , 1–18. https://doi.org/10.1016/j.compedu.2014.02.005

Pai, K. C., Kuo, B. C., Liao, C. H., & Liu, Y. M. (2020). An application of Chinese dialogue-based intelligent tutoring system in remedial instruction for Mathematics learning. Educational Psychology , 41 (2), 137–152. https://doi.org/10.1080/01443410.2020.1731427

Shih, S. C., Chang, C. C., Kuo, B. C., & Huang, Y. H. (2023). Mathematics intelligent tutoring system for learning multiplication and division of fractions based on diagnostic teaching. Education and Information Technologies , 28 (7), 9189–9210. https://doi.org/10.1007/s10639-022-11553-z

Wang, S., Christensen, C., Cui, W., Tong, R., Yarnall, L., Shear, L., & Feng, M. (2023). When adaptive learning is effective learning: Comparison of an adaptive learning system to teacher-led instruction. Interactive Learning Environments, 31 (2), 793–803. https://doi.org/10.1080/10494820.2020.1808794

Wu, H. M. (2018). Online individualised tutor for improving Mathematics learning: A cognitive diagnostic model approach. Educational Psychology , 39 (10), 1218–1232. https://doi.org/10.1080/01443410.2018.1494819

Xin, Y. P., Tzur, R., Hord, C., Liu, J., Park, J. Y., & Si, L. (2016). An intelligent tutor-assisted Mathematics Intervention Program for students with learning difficulties. Learning Disability Quarterly , 40 (1), 4–16. https://doi.org/10.1177/0731948716648740

Download references

Author information

Authors and affiliations.

School of Mathematical Sciences, East China Normal University, 500 Dongchuan Rd, Shanghai, 200241, China

Linxuan Yi, Di Liu, Tiancheng Jiang & Yucheng Xian

You can also search for this author in PubMed   Google Scholar

Contributions

Di Liu had the idea for the article, and all authors contributed to the study design. Searching procedure, quality evaluation, data extraction and descriptive coding were conducted by Di Liu, Linxuan Yi, Tiancheng Jiang and Yucheng Xian. Data analysis was conducted by Linxuan Yi and Di Liu. The first draft of the manuscript was written by Linxuan Yi and it was critically revised by Di Liu. All authors commented on previous versions of manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Di Liu .

Ethics declarations

Conflict of interests.

The authors declare that they have no conflict of interest. This study did not require direct contact to human participants nor an ethical approval. The authors state that the research met all ethical requirements.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

Yi, L., Liu, D., Jiang, T. et al. The Effectiveness of AI on K-12 Students’ Mathematics Learning: A Systematic Review and Meta-Analysis. Int J of Sci and Math Educ (2024). https://doi.org/10.1007/s10763-024-10499-7

Download citation

Received : 22 April 2023

Accepted : 19 August 2024

Published : 12 September 2024

DOI : https://doi.org/10.1007/s10763-024-10499-7

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Effectiveness
  • K-12 Students
  • Mathematics Learning
  • Meta-Analysis
  • Find a journal
  • Publish with us
  • Track your research

IMAGES

  1. Random Assignment in Experiments

    random assignment in quantitative research

  2. Random Assignment in Experiments

    random assignment in quantitative research

  3. Introduction to Random Assignment -Voxco

    random assignment in quantitative research

  4. Simple Random Sample

    random assignment in quantitative research

  5. PPT

    random assignment in quantitative research

  6. PPT

    random assignment in quantitative research

VIDEO

  1. Conditions Most Conducive to Random Assignment

  2. Assignment 2 Quantitative technique

  3. Skills in Quantitative Methods QMI1500 LAST Assignment Prep

  4. QUANTITATIVE ASSIGNMENT MGM3063 TASK 3 (PROBLEM 8.4)

  5. random sampling & assignment

  6. BA4201 Unit 2

COMMENTS

  1. Random Assignment in Experiments

    Random Assignment in Experiments | Introduction & Examples. Published on March 8, 2021 by Pritha Bhandari.Revised on June 22, 2023. In experimental research, random assignment is a way of placing participants from your sample into different treatment groups using randomization. With simple random assignment, every member of the sample has a known or equal chance of being placed in a control ...

  2. 6.1.1 Random Assignation

    Random assignation is associated with experimental research methods. In its strictest sense, random assignment should meet two criteria. One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a ...

  3. 6.1.1 Random Assignation

    The upshot is that random assignment to conditions—although not infallible in terms of controlling extraneous variables—is always considered a strength of a research design. Note: Do not confuse random assignation with random sampling. Random sampling is a method for selecting a sample from a population; we will talk about this in Chapter 7.

  4. PDF Quantitative Research Designs: Experimental, Quasi-Experimental, and

    True experimental research involves random assignment to groups so participants each have an equal chance of receiving any of the treatments (including no treatment) under study. Quasi-experimental research does not ... Quantitative research designs are often used to look at causal relationships, but they

  5. 1.3: Threats to Internal Validity and Different Control Techniques

    Random assignment. Random assignment is the single most powerful control technique we can use to minimize the potential threats of the confounding variables in research design. As we have seen in Dunn and her colleagues' study earlier, participants are not allowed to self select into either conditions (spend $20 on self or spend on others).

  6. Issues in Outcomes Research: An Overview of Randomization Techniques

    What Is Randomization? Randomization is the process of assigning participants to treatment and control groups, assuming that each participant has an equal chance of being assigned to any group. 12 Randomization has evolved into a fundamental aspect of scientific research methodology. Demands have increased for more randomized clinical trials in many areas of biomedical research, such as ...

  7. Simple Random Sampling

    Revised on December 18, 2023. A simple random sample is a randomly selected subset of a population. In this sampling method, each member of the population has an exactly equal chance of being selected. This method is the most straightforward of all the probability sampling methods, since it only involves a single random selection and requires ...

  8. Quantitative Research

    Quantitative research- concerned with precise measurement, replicable, controlled and used to predict events. It is a formal, objective, systematic process. ... The independent variable is assigned to participants by random assignment. Dependent variable or dependent measure- the factor that the experimenter predicts is affected by the ...

  9. What is random assignment?

    In experimental research, random assignment is a way of placing participants from your sample into different groups using randomisation. ... A scope is needed for all types of research: quantitative, qualitative, and mixed methods. To define your scope of research, consider the following:

  10. What's the difference between random assignment and random ...

    Random selection, or random sampling, is a way of selecting members of a population for your study's sample. In contrast, random assignment is a way of sorting the sample into control and experimental groups. Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal ...

  11. Random Sampling vs. Random Assignment

    So, to summarize, random sampling refers to how you select individuals from the population to participate in your study. Random assignment refers to how you place those participants into groups (such as experimental vs. control). Knowing this distinction will help you clearly and accurately describe the methods you use to collect your data and ...

  12. PDF VALIDITY OF QUANTITATIVE RESEARCH

    The research design can address threats to validity through • random assignment of subjects to groups (experimental or control) • holding extraneous variables constant or restricting their range (for example, focusing only on young adults) • including extraneous variables in the design to measure their effects (e.g.,

  13. An overview of randomization techniques: An unbiased assessment of

    A random number table found in a statistics book or computer-generated random numbers can also be used for simple randomization of subjects. This randomization approach is simple and easy to implement in a clinical research. In large clinical research, simple randomization can be trusted to generate similar numbers of subjects among groups.

  14. Step 6a

    Quantitative research methods have a few designs to choose from, mostly rooted in the postpositivist worldview. The experimental design, quasi-experimental design and single subject experimental design (Bloomfield & Fisher, 2019; Creswell & Creswell, 2018). ... This design requires random assignment of treatment conditions, and the quasi ...

  15. PDF CHAPTER 4 Quantitative and Qualitative Research

    Quantitative research is an inquiry into an identified problem, based on testing a theory, measured with numbers, and analyzed using statistical techniques. The goal of quantitative methods is to determine whether the predictive generalizations of a theory hold true. By contrast, a study based upon a qualitative process of inquiry has the goal ...

  16. Randomised controlled trials—the gold standard for effectiveness research

    Randomised controlled trials—the gold standard for effectiveness research. Randomized controlled trials (RCT) are prospective studies that measure the effectiveness of a new intervention or treatment. Although no study is likely on its own to prove causality, randomization reduces bias and provides a rigorous tool to examine cause-effect ...

  17. Random Assignment in Experiments

    Random Assignment in Experiments | Introduction & Examples. Published on 6 May 2022 by Pritha Bhandari.Revised on 13 February 2023. In experimental research, random assignment is a way of placing participants from your sample into different treatment groups using randomisation. With simple random assignment, every member of the sample has a known or equal chance of being placed in a control ...

  18. Random Assignment

    The research team then does the random assignment and sends back listings of research subjects, sorted into treatment and control groups. ... of the evaluation community espouses the virtues of qualitative research in place of randomized experiments and other quantitative approaches (e.g., Guba and Lincoln 1982), while another sector is more ...

  19. What Is Quantitative Research?

    Revised on June 22, 2023. Quantitative research is the process of collecting and analyzing numerical data. It can be used to find patterns and averages, make predictions, test causal relationships, and generalize results to wider populations. Quantitative research is the opposite of qualitative research, which involves collecting and analyzing ...

  20. Challenges and Dilemmas in Implementing Random Assignment in

    Consideration of challenges encountered in implementing random assignment suggests that 1) researcher communication with program staff improves compliance, but may not overcome the need for learning through experience; 2) in keeping with arguments in favor of random assignment-based research, random assignment may control for diverse selection ...

  21. The Use and Interpretation of Quasi-Experimental Studies in Medical

    The lack of random assignment is the major weakness of the quasi-experimental study design. Associations identified in quasi-experiments meet one important requirement of causality since the intervention precedes the measurement of the outcome. Another requirement is that the outcome can be demonstrated to vary statistically with the intervention.

  22. Research Randomizer

    RANDOM SAMPLING AND. RANDOM ASSIGNMENT MADE EASY! Research Randomizer is a free resource for researchers and students in need of a quick way to generate random numbers or assign participants to experimental conditions. This site can be used for a variety of purposes, including psychology experiments, medical trials, and survey research.

  23. Exploring Research Design Options for Crime Rates & Community

    test my research question the best because of the lack of random assignment. However, the ecological validity his high rates, which means it is easier to generalize because of the real world like aspects. Field experiments have high levels of ecological and internal validity, because of its inclusion of random assignment. However, you cannot randomly assign participants into a group when it ...

  24. Quasi-Experimental Design

    Revised on January 22, 2024. Like a true experiment, a quasi-experimental design aims to establish a cause-and-effect relationship between an independent and dependent variable. However, unlike a true experiment, a quasi-experiment does not rely on random assignment. Instead, subjects are assigned to groups based on non-random criteria.

  25. The Effectiveness of AI on K-12 Students' Mathematics ...

    Artificial intelligence (AI) shows increasing potential to improve mathematics instruction, yet integrative quantitative evidence currently is lacking on its overall effectiveness and factors influencing success. This systematic review and meta-analysis investigate the effectiveness of AI on improving mathematics performance in K-12 classrooms compared to traditional classroom instruction ...