Random Assignment in Psychology: Definition & Examples

Julia Simkus

Editor at Simply Psychology

BA (Hons) Psychology, Princeton University

Julia Simkus is a graduate of Princeton University with a Bachelor of Arts in Psychology. She began a Master's degree in Counseling for Mental Health and Wellness in September 2023. Julia's research has been published in peer-reviewed journals.


Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

In psychology, random assignment refers to the practice of allocating participants to different experimental groups in a study in a completely unbiased way, ensuring each participant has an equal chance of being assigned to any group.

In experimental research, random assignment, or random placement, organizes participants from your sample into different groups using randomization. 

Random assignment uses chance procedures to ensure that each participant has an equal opportunity of being assigned to either a control or experimental group.

The control group does not receive the treatment in question, whereas the experimental group does receive the treatment.

When using random assignment, neither the researcher nor the participant can choose the group to which the participant is assigned. This ensures that any differences between and within the groups are not systematic at the onset of the study. 

In a study to test the success of a weight-loss program, investigators randomly assigned a pool of participants to one of two groups.

Group A participants participated in the weight-loss program for 10 weeks and took a class where they learned about the benefits of healthy eating and exercise.

Group B participants read a 200-page book that explains the benefits of weight loss.

The researchers found that those who participated in the program and took the class were more likely to lose weight than those in the other group that received only the book.

Importance 

Random assignment helps ensure that the groups in an experiment are comparable before the independent variable is applied.

In experiments, researchers manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables. Random assignment increases the likelihood that the treatment groups are the same at the onset of a study.

Thus, any changes that result from the independent variable can be assumed to be a result of the treatment of interest. This is particularly important for eliminating sources of bias and strengthening the internal validity of an experiment.

Random assignment is the best method for inferring a causal relationship between a treatment and an outcome.

Random Selection vs. Random Assignment 

Random selection (also called probability sampling or random sampling) is a way of randomly selecting members of a population to be included in your study.

On the other hand, random assignment is a way of sorting the sample participants into control and treatment groups. 

Random selection ensures that everyone in the population has an equal chance of being selected for the study. Once the pool of participants has been chosen, experimenters use random assignment to assign participants into groups. 

Random assignment is only used in between-subjects experimental designs, while random selection can be used in a variety of study designs.

Random Assignment vs Random Sampling

Random sampling refers to selecting participants from a population so that each individual has an equal chance of being chosen. This method enhances the representativeness of the sample.

Random assignment, on the other hand, is used in experimental designs once participants are selected. It involves allocating these participants to different experimental groups or conditions randomly.

This helps ensure that any differences in results across groups are due to manipulating the independent variable, not preexisting differences among participants.

When to Use Random Assignment

Random assignment is used in experiments with a between-groups or independent measures design.

In these research designs, researchers will manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables.

There is usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable at the onset of the study.

How to Use Random Assignment

There are a variety of ways to assign participants into study groups randomly. Here are a handful of popular methods: 

  • Random Number Generator: Give each member of the sample a unique number; use a computer program to randomly draw numbers from the list for each group.
  • Lottery: Give each member of the sample a unique number. Place all numbers in a hat or bucket and draw numbers at random for each group.
  • Flipping a Coin: Flip a coin for each participant to decide whether they will be in the control group or the experimental group (this method can only be used when you have exactly two groups).
  • Roll a Die: For each participant, roll a die to decide which group they will be in. For example, rolling 1, 2, or 3 places them in the control group, while rolling 4, 5, or 6 places them in the experimental group.
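These chance procedures are easy to sketch in code. Below is a minimal Python illustration (participant IDs and group labels are made up for the example) of the random-number-generator approach: shuffle the participant list, then deal participants into groups in round-robin order, so each person has an equal chance of landing in any group.

```python
import random

def randomly_assign(participants, groups=("control", "experimental"), seed=None):
    """Shuffle the participant list, then deal members into groups round-robin,
    giving each participant an equal chance of any group."""
    rng = random.Random(seed)        # optional seed makes the allocation reproducible
    shuffled = participants[:]       # copy so the original list is left untouched
    rng.shuffle(shuffled)
    assignment = {g: [] for g in groups}
    for i, person in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(person)
    return assignment

groups = randomly_assign(["P01", "P02", "P03", "P04", "P05", "P06"], seed=42)
print(groups["control"], groups["experimental"])
```

Note that neither the researcher nor the participant influences the outcome; the allocation depends only on the shuffle.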

When is Random Assignment not used?

  • When it is not ethically permissible: Randomization is only ethical if the researcher has no evidence that one treatment is superior to the other or that one treatment might have harmful side effects. 
  • When answering non-causal questions: If the researcher is just interested in predicting the probability of an event, the causal relationship between the variables is not important and observational designs would be more suitable than random assignment.
  • When studying the effect of variables that cannot be manipulated: Some risk factors cannot be manipulated and so it would not make any sense to study them in a randomized trial. For example, we cannot randomly assign participants into categories based on age, gender, or genetic factors.

Drawbacks of Random Assignment

While randomization ensures an unbiased assignment of participants to groups, it does not guarantee that the groups will be equal. Extraneous variables may still differ between groups, and group differences can arise purely by chance.

Thus, researchers cannot produce perfectly equal groups for each specific study. Differences between the treatment group and control group might still exist, and the results of a randomized trial may sometimes be wrong, but this is expected and acceptable.

Scientific evidence is a long and continuous process, and the groups will tend to be equal in the long run when data is aggregated in a meta-analysis.

Additionally, external validity (i.e., the extent to which the researcher can use the results of the study to generalize to the larger population) is compromised with random assignment.

Random assignment is challenging to implement outside of controlled laboratory conditions and might not represent what would happen in the real world at the population level. 

Random assignment can also be more costly than simple observational studies, where an investigator is just observing events without intervening with the population.

Randomization also can be time-consuming and challenging, especially when participants refuse to receive the assigned treatment or do not adhere to recommendations. 

What is the difference between random sampling and random assignment?

Random sampling refers to randomly selecting a sample of participants from a population. Random assignment refers to randomly assigning participants to treatment groups from the selected sample.

Does random assignment increase internal validity?

Yes, random assignment ensures that there are no systematic differences between the participants in each group, enhancing the study’s internal validity.

Does random assignment reduce sampling error?

Yes, with random assignment, participants have an equal chance of being assigned to either a control group or an experimental group, so the groups are, in theory, comparable at the start of the study.

Random assignment does not eliminate sampling error, because a sample only approximates the population from which it is drawn. Random sampling, however, is a way to minimize sampling error.

When is random assignment not possible?

Random assignment is not possible when the experimenters cannot control the treatment or independent variable.

For example, if you want to compare how men and women perform on a test, you cannot randomly assign subjects to these groups.

Participants are not randomly assigned to different groups in this study, but instead assigned based on their characteristics.

Does random assignment eliminate confounding variables?

Random assignment minimizes the influence of confounding variables on the treatment because it distributes them at random among the study groups. Randomization breaks any systematic relationship between a confounding variable and the treatment, although chance imbalances can still occur.

Why is random assignment of participants to treatment conditions in an experiment used?

Random assignment is used to ensure that all groups are comparable at the start of a study. This allows researchers to conclude that the outcomes of the study can be attributed to the intervention at hand and to rule out alternative explanations for study results.

Further Reading

  • Bogomolnaia, A., & Moulin, H. (2001). A new solution to the random assignment problem. Journal of Economic Theory, 100(2), 295-328.
  • Krause, M. S., & Howard, K. I. (2003). What random assignment does and does not do. Journal of Clinical Psychology, 59(7), 751-766.



15 Random Assignment Examples


In research, random assignment refers to the process of randomly assigning research participants into groups (conditions) in order to minimize the influence of confounding variables or extraneous factors.

Ideally, through randomization, each research participant has an equal chance of ending up in either the control or treatment condition group.

For example, consider the following two groups under analysis. Under a model such as self-selection or snowball sampling, there is a chance that the reds cluster themselves into one group (the likely reason being a confounding variable that the researchers have not controlled for):

a representation of a treatment condition showing 12 red people in the cohort

To maximize the chances that the reds will be evenly split between groups, we could employ a random assignment method, which might produce the following more balanced outcome:

a representation of a treatment condition showing 4 red people in the cohort

This process is considered a gold standard for experimental research and is generally expected of major studies that explore the effects of independent variables on dependent variables.

However, random assignment is not without its flaws, chief among them the need for a sufficiently large sample, which allows randomization to tend toward balance (for example, the proportion of heads after 100 coin flips is far more likely to be close to 50/50 than after just 2 flips). In fact, even in the above example where I randomized the colors, there are twice as many yellows in the treatment condition as in the control condition, likely because of the low number of research participants.

Methods for Random Assignment of Participants

Randomly assigning research participants to conditions is relatively easy. However, there is a range of ways to go about it, and each method has its own pros and cons.

For example, there are some strategies – like the matched-pair method – that can help you to control for confounds in interesting ways.

Here are some of the most common methods of random assignment, with explanations of when you might want to use each one:

1. Simple Random Assignment This is the most basic form of random assignment. All participants are pooled together and then divided randomly into groups using an equivalent chance process such as flipping a coin, drawing names from a hat, or using a random number generator. This method is straightforward and ensures each participant has an equal chance of being assigned to any group (Jamison, 2019; Nestor & Schutt, 2018).

2. Block Randomization In this method, the researcher divides the participants into “blocks” or batches of a pre-determined size, which are then randomized (Alferes, 2012). This technique ensures that the researcher will have evenly sized groups by the end of the randomization process. It’s especially useful in clinical trials where balanced and similar-sized groups are vital.

3. Stratified Random Assignment In stratified random assignment, the researcher categorizes the participants based on key characteristics (such as gender, age, ethnicity) before the random allocation process begins. Each stratum is then subjected to simple random assignment. This method is beneficial when the researcher aims to ensure that the groups are balanced with regard to certain characteristics or variables (Rosenberger & Lachin, 2015).

4. Cluster Random Assignment Here, pre-existing groups or clusters, such as schools, households, or communities, are randomly assigned to different conditions of a research study. It’s ideal when individual random assignment is not feasible, or when the treatment is naturally delivered at the group or community level (Blair, Coppock & Humphreys, 2023).

5. Matched-Pair Random Assignment In this method, participants are first paired based on a particular characteristic or set of characteristics that are relevant to the research study, such as age, gender, or a specific health condition. Each pair is then split randomly into different research conditions or groups. This can help control for the influence of specific variables and increase the likelihood that the groups will be comparable, thereby increasing the validity of the results (Nestor & Schutt, 2018).
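As a rough sketch of the second method above (the group labels and block size are illustrative, not from any of the cited sources), block randomization can be implemented by shuffling each fixed-size block independently, so the groups stay balanced throughout recruitment:

```python
import random

def block_randomize(n_participants, block_size=4, seed=None):
    """Generate an assignment schedule in blocks. Each block holds an equal
    number of 'control' and 'treatment' slots and is shuffled independently,
    so group sizes never drift far apart during recruitment."""
    assert block_size % 2 == 0, "block size must be even for two equal groups"
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        block = ["control"] * (block_size // 2) + ["treatment"] * (block_size // 2)
        rng.shuffle(block)               # randomize order within the block only
        assignments.extend(block)
    return assignments[:n_participants]  # trim any partial final block

schedule = block_randomize(10, block_size=4, seed=1)
print(schedule)
```

After every full block of four, exactly two participants have gone to each group, which is why this scheme is popular in clinical trials that enroll participants sequentially.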

Random Assignment Examples

1. Pharmaceutical Efficacy Study In this type of research, consider a scenario where a pharmaceutical company wishes to test the potency of two different versions of a medication, Medication A and Medication B. The researcher recruits a group of volunteers and randomly assigns them to receive either Medication A or Medication B. This method ensures that each participant has an equal chance of being given either option, mitigating potential bias from the investigator’s side. Such randomization is expected, for example, in trials submitted for FDA approval (Rosenberger & Lachin, 2015).

2. Educational Techniques Study In this approach, an educator looking to evaluate a new teaching technique may randomly assign their students into two distinct classrooms. In one classroom, the new teaching technique will be implemented, while in the other, traditional methods will be utilized. The students’ performance will then be analyzed to determine if the new teaching strategy yields better results. To ensure the class cohorts are randomly assigned, we need to make sure there is no interference from parents, administrators, or others.

3. Website Usability Test In this digital-oriented example, a web designer could be researching the most effective layout for a website. Participants would be randomly assigned to use websites with a different layout and their navigation and satisfaction would be subsequently measured. This technique helps identify which design is user-friendlier based on the measured outcomes.

4. Physical Fitness Research For an investigator looking to evaluate the effectiveness of different exercise routines for weight loss, they could randomly assign participants to either a High-Intensity Interval Training (HIIT) or an endurance-based running program. By studying the participants’ weight changes across a specified time, a conclusion can be drawn on which exercise regime produces better weight loss results.

5. Environmental Psychology Study In this illustration, imagine a psychologist wanting to understand how office settings influence employees’ productivity. He could randomly assign employees to work in one of two offices: one with windows and natural light, the other windowless. The psychologist would then measure their work output to gauge if the environmental conditions impact productivity.

6. Dietary Research Test In this case, a dietician, striving to determine the efficacy of two diets on heart health, might randomly assign participants to adhere to either a Mediterranean diet or a low-fat diet. The dietician would then track cholesterol levels, blood pressure, and other heart health indicators over a determined period to discern which diet benefits heart health the most.

7. Mental Health Study In examining the IMPACT (Improving Mood-Promoting Access to Collaborative Treatment) model, a mental health researcher could randomly assign patients to receive either standard depression treatment or the IMPACT model treatment. Here, the purpose is to cross-compare recovery rates to gauge the effectiveness of the IMPACT model against the standard treatment.

8. Marketing Research A company intending to validate the effectiveness of different marketing strategies could randomly assign customers to receive either email marketing materials or social media marketing materials. Customer response and engagement rates would then be measured to evaluate which strategy is more beneficial and drives better engagement.

9. Sleep Study Research Suppose a researcher wants to investigate the effects of different levels of screen time on sleep quality. The researcher may randomly assign participants to varying amounts of nightly screen time, then compare sleep quality metrics (such as total sleep time, sleep latency, and awakenings during the night).

10. Workplace Productivity Experiment Let’s consider an HR professional who aims to evaluate the efficacy of open office and closed office layouts on employee productivity. She could randomly assign a group of employees to work in either environment and measure metrics such as work completed, attention to detail, and number of errors made to determine which office layout promotes higher productivity.

11. Child Development Study Suppose a developmental psychologist wants to investigate the effect of different learning tools on children’s development. The psychologist could randomly assign children to use either digital learning tools or traditional physical learning tools, such as books, for a fixed period. Subsequently, their development and learning progression would be tracked to determine which tool fosters more effective learning.

12. Traffic Management Research In an urban planning study, researchers could randomly assign streets to implement either traditional stop signs or roundabouts. The researchers, over a predetermined period, could then measure accident rates, traffic flow, and average travel times to identify which traffic management method is safer and more efficient.

13. Energy Consumption Study In a research project comparing the effectiveness of various energy-saving strategies, residents could be randomly assigned to implement either energy-saving light bulbs or regular bulbs in their homes. After a specific duration, their energy consumption would be compared to evaluate which measure yields better energy conservation.

14. Product Testing Research In a consumer goods case, a company looking to launch a new dishwashing detergent could randomly assign the new product or the existing best seller to a group of consumers. By analyzing their feedback on cleaning capabilities, scent, and product usage, the company can find out if the new detergent is an improvement over the existing one (Nestor & Schutt, 2018).

15. Physical Therapy Research A physical therapist might be interested in comparing the effectiveness of different treatment regimens for patients with lower back pain. They could randomly assign patients to undergo either manual therapy or exercise therapy for a set duration and later evaluate pain levels and mobility.

Random assignment is effective, but not infallible. Nevertheless, it does help us to achieve greater control over our experiments and minimize the chances that confounding variables are undermining the direct correlation between independent and dependent variables within a study. Over time, when a sufficient number of high-quality and well-designed studies are conducted, with sufficient sample sizes and sufficient generalizability, we can gain greater confidence in the causation between a treatment and its effects.

Read Next: Types of Research Design

Alferes, V. R. (2012). Methods of randomization in experimental design. Sage Publications.

Blair, G., Coppock, A., & Humphreys, M. (2023). Research Design in the Social Sciences: Declaration, Diagnosis, and Redesign. New Jersey: Princeton University Press.

Jamison, J. C. (2019). The entry of randomized assignment into the social sciences. Journal of Causal Inference, 7(1), 20170025.

Nestor, P. G., & Schutt, R. K. (2018). Research Methods in Psychology: Investigating Human Behavior. New York: SAGE Publications.

Rosenberger, W. F., & Lachin, J. M. (2015). Randomization in Clinical Trials: Theory and Practice. London: Wiley.


Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.



Random Assignment in Psychology (Definition + 40 Examples)


Have you ever wondered how researchers discover new ways to help people learn, make decisions, or overcome challenges? A hidden hero in this adventure of discovery is a method called random assignment, a cornerstone in psychological research that helps scientists uncover the truths about the human mind and behavior.

Random Assignment is a process used in research where each participant has an equal chance of being placed in any group within the study. This technique is essential in experiments as it helps to eliminate biases, ensuring that the different groups being compared are similar in all important aspects.

By doing so, researchers can be confident that any differences observed are likely due to the variable being tested, rather than other factors.

In this article, we’ll explore the intriguing world of random assignment, diving into its history, principles, real-world examples, and the impact it has had on the field of psychology.

History of Random Assignment


Stepping back in time, we delve into the origins of random assignment, which finds its roots in the early 20th century.

The pioneering mind behind this innovative technique was Sir Ronald A. Fisher, a British statistician and biologist. Fisher introduced the concept of random assignment in the 1920s, aiming to improve the quality and reliability of experimental research.

His contributions laid the groundwork for the method's evolution and its widespread adoption in various fields, particularly in psychology.

Fisher’s groundbreaking work on random assignment was motivated by his desire to control for confounding variables – those pesky factors that could muddy the waters of research findings.

By assigning participants to different groups purely by chance, he realized that the influence of these confounding variables could be minimized, paving the way for more accurate and trustworthy results.

Early Studies Utilizing Random Assignment

Following Fisher's initial development, random assignment started to gain traction in the research community. Early studies adopting this methodology focused on a variety of topics, from agriculture (which was Fisher’s primary field of interest) to medicine and psychology.

The approach allowed researchers to draw stronger conclusions from their experiments, bolstering the development of new theories and practices.

One notable early study utilizing random assignment was conducted in the field of educational psychology. Researchers were keen to understand the impact of different teaching methods on student outcomes.

By randomly assigning students to various instructional approaches, they were able to isolate the effects of the teaching methods, leading to valuable insights and recommendations for educators.

Evolution of the Methodology

As the decades rolled on, random assignment continued to evolve and adapt to the changing landscape of research.

Advances in technology introduced new tools and techniques for implementing randomization, such as computerized random number generators, which offered greater precision and ease of use.

The application of random assignment expanded beyond the confines of the laboratory, finding its way into field studies and large-scale surveys.

Researchers across diverse disciplines embraced the methodology, recognizing its potential to enhance the validity of their findings and contribute to the advancement of knowledge.

From its humble beginnings in the early 20th century to its widespread use today, random assignment has proven to be a cornerstone of scientific inquiry.

Its development and evolution have played a pivotal role in shaping the landscape of psychological research, driving discoveries that have improved lives and deepened our understanding of the human experience.

Principles of Random Assignment

Delving into the heart of random assignment, we uncover the theories and principles that form its foundation.

The method is steeped in the basics of probability theory and statistical inference, ensuring that each participant has an equal chance of being placed in any group, thus fostering fair and unbiased results.

Basic Principles of Random Assignment

Understanding the core principles of random assignment is key to grasping its significance in research. There are three principles: equal probability of selection, reduction of bias, and ensuring representativeness.

The first principle, equal probability of selection, ensures that every participant has an identical chance of being assigned to any group in the study. This randomness is crucial as it mitigates the risk of bias and establishes a level playing field.

The second principle focuses on the reduction of bias. Random assignment acts as a safeguard, ensuring that the groups being compared are alike in all essential aspects before the experiment begins.

This similarity between groups allows researchers to attribute any differences observed in the outcomes directly to the independent variable being studied.

Lastly, ensuring representativeness is a vital principle. When participants are assigned randomly, each resulting group is likely to reflect the makeup of the full sample; if the sample itself is representative of the larger population, so are the groups.

This characteristic is crucial for the generalizability of the study’s findings, allowing researchers to apply their insights broadly.

Theoretical Foundation

The theoretical foundation of random assignment lies in probability theory and statistical inference.

Probability theory deals with the likelihood of different outcomes, providing a mathematical framework for analyzing random phenomena. In the context of random assignment, it helps in ensuring that each participant has an equal chance of being placed in any group.

Statistical inference, on the other hand, allows researchers to draw conclusions about a population based on a sample of data drawn from that population. It is the mechanism through which the results of a study can be generalized to a broader context.

Random assignment enhances the reliability of statistical inferences by reducing biases and ensuring that the sample is representative.

Differentiating Random Assignment from Random Selection

It’s essential to distinguish between random assignment and random selection, as the two terms, while related, have distinct meanings in the realm of research.

Random assignment refers to how participants are placed into different groups in an experiment, aiming to control for confounding variables and support causal inference.

In contrast, random selection pertains to how individuals are chosen to participate in a study. This method is used to ensure that the sample of participants is representative of the larger population, which is vital for the external validity of the research.

While both methods are rooted in randomness and probability, they serve different purposes in the research process.

Understanding the theories, principles, and distinctions of random assignment illuminates its pivotal role in psychological research.

This method, anchored in probability theory and statistical inference, serves as a beacon of reliability, guiding researchers in their quest for knowledge and ensuring that their findings stand the test of validity and applicability.

Methodology of Random Assignment


Implementing random assignment in a study is a meticulous process that involves several crucial steps.

The initial step is participant selection, where individuals are chosen to partake in the study. This stage is critical to ensure that the pool of participants is diverse and representative of the population the study aims to generalize to.

Once the pool of participants has been established, the actual assignment process begins. In this step, each participant is allocated randomly to one of the groups in the study.

Researchers use various tools, such as random number generators or computerized methods, to ensure that this assignment is genuinely random and free from biases.

Monitoring and adjusting form the final step in the implementation of random assignment. Researchers need to continuously observe the groups to ensure that they remain comparable in all essential aspects throughout the study.

If any significant discrepancies arise, adjustments might be necessary to maintain the study’s integrity and validity.

Tools and Techniques Used

The evolution of technology has introduced a variety of tools and techniques to facilitate random assignment.

Random number generators, both manual and computerized, are commonly used to assign participants to different groups. These generators ensure that each individual has an equal chance of being placed in any group, upholding the principle of equal probability of selection.
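For instance (a minimal illustration with hypothetical participant IDs, not any specific software package), Python's standard random module can serve as such a generator; fixing a seed makes the allocation reproducible and auditable, which is useful when the assignment procedure must be documented:

```python
import random

participants = [f"P{i:02d}" for i in range(1, 9)]  # hypothetical participant IDs
rng = random.Random(2024)                          # fixed seed -> same allocation every run
order = participants[:]                            # copy before shuffling
rng.shuffle(order)
# First half of the shuffled order goes to group A, second half to group B.
group_a, group_b = order[:len(order) // 2], order[len(order) // 2:]
print(group_a, group_b)
```

Re-running the script with the same seed reproduces the identical allocation, so the randomization can be verified after the fact.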

In addition to random number generators, researchers often use specialized computer software designed for statistical analysis and experimental design.

These software programs offer advanced features that allow for precise and efficient random assignment, minimizing the risk of human error and enhancing the study’s reliability.

Ethical Considerations

The implementation of random assignment is not devoid of ethical considerations. Informed consent is a fundamental ethical principle that researchers must uphold.

Informed consent means that every participant should be fully informed about the nature of the study, the procedures involved, and any potential risks or benefits, ensuring that they voluntarily agree to participate.

Beyond informed consent, researchers must conduct a thorough risk and benefit analysis. The potential benefits of the study should outweigh any risks or harms to the participants.

Safeguarding the well-being of participants is paramount, and any study employing random assignment must adhere to established ethical guidelines and standards.

Conclusion of Methodology

The methodology of random assignment, while seemingly straightforward, is a multifaceted process that demands precision, fairness, and ethical integrity. From participant selection to assignment and monitoring, each step is crucial to ensure the validity of the study’s findings.

The tools and techniques employed, coupled with a steadfast commitment to ethical principles, underscore the significance of random assignment as a cornerstone of robust psychological research.

Benefits of Random Assignment in Psychological Research

The impact and importance of random assignment in psychological research cannot be overstated. It underpins a study's accuracy, lets researchers determine whether their manipulation actually caused the observed results, and supports applying the findings to the real world.

Facilitating Causal Inferences

When participants are randomly assigned to different groups, researchers can be more confident that the observed effects are due to the independent variable being changed, and not other factors.

This ability to determine the cause is called causal inference.

This confidence allows for the drawing of causal relationships, which are foundational for theory development and application in psychology.

Ensuring Internal Validity

One of the foremost impacts of random assignment is its ability to enhance the internal validity of an experiment.

Internal validity refers to the extent to which a researcher can assert that changes in the dependent variable are solely due to manipulations of the independent variable, and not due to confounding variables.

By ensuring that each participant has an equal chance of being in any condition of the experiment, random assignment helps control for participant characteristics that could otherwise complicate the results.

Enhancing Generalizability

Beyond internal validity, random assignment also plays a crucial role in enhancing the generalizability of research findings.

When done correctly, it ensures that the sample groups are representative of the larger population, allowing researchers to apply their findings more broadly.

This representative nature is essential for the practical application of research, impacting policy, interventions, and psychological therapies.

Limitations of Random Assignment

Potential for Implementation Issues

While the principles of random assignment are robust, the method can face implementation issues.

One of the most common problems is logistical constraints. Some studies, due to their nature or the specific population being studied, find it challenging to implement random assignment effectively.

For instance, in educational settings, logistical issues such as class schedules and school policies might prevent the random allocation of students to different teaching methods.

Ethical Dilemmas

Random assignment, while methodologically sound, can also present ethical dilemmas.

In some cases, withholding a potentially beneficial treatment from one of the groups of participants can raise serious ethical questions, especially in medical or clinical research where participants' well-being might be directly affected.

Researchers must navigate these ethical waters carefully, balancing the pursuit of knowledge with the well-being of participants.

Generalizability Concerns

Even when implemented correctly, random assignment does not always guarantee generalizable results.

The types of people in the participant pool, the specific context of the study, and the nature of the variables being studied can all influence the extent to which the findings can be applied to the broader population.

Researchers must be cautious in making broad generalizations from studies, even those employing strict random assignment.

Practical and Real-World Limitations

In the real world, many variables cannot be manipulated for ethical or practical reasons, limiting the applicability of random assignment.

For instance, researchers cannot randomly assign individuals to different levels of intelligence, socioeconomic status, or cultural backgrounds.

This limitation necessitates the use of other research designs, such as correlational or observational studies, when exploring relationships involving such variables.

Response to Critiques

In response to these critiques, proponents of random assignment argue that the method, despite its limitations, remains one of the most reliable ways to establish cause and effect in experimental research.

They acknowledge the challenges and ethical considerations but emphasize the rigorous frameworks in place to address them.

The ongoing discussion around the limitations and critiques of random assignment contributes to the method's evolution, ensuring it remains relevant and applicable in psychological research.

While random assignment is a powerful tool in experimental research, it is not without its critiques and limitations. Implementation issues, ethical dilemmas, generalizability concerns, and real-world limitations can pose significant challenges.

However, the continued discourse and refinement around these issues underline the method's enduring significance in the pursuit of knowledge in psychology.

Implemented carefully and ethically, random assignment remains an essential part of studying how people act and think.

Real-World Applications and Examples

Random assignment has been employed in many studies across various fields of psychology, leading to significant discoveries and advancements.

Here are some real-world applications and examples illustrating the diversity and impact of this method:

  • Medicine and Health Psychology: Randomized Controlled Trials (RCTs) are the gold standard in medical research. In these studies, participants are randomly assigned to either the treatment or control group to test the efficacy of new medications or interventions.
  • Educational Psychology: Studies in this field have used random assignment to explore the effects of different teaching methods, classroom environments, and educational technologies on student learning and outcomes.
  • Cognitive Psychology: Researchers have employed random assignment to investigate various aspects of human cognition, including memory, attention, and problem-solving, leading to a deeper understanding of how the mind works.
  • Social Psychology: Random assignment has been instrumental in studying social phenomena, such as conformity, aggression, and prosocial behavior, shedding light on the intricate dynamics of human interaction.

Let's get into some specific examples. You'll need to know one term first: "control group." A control group is a set of participants in a study who do not receive the treatment or intervention being tested, serving as a baseline against which to compare the group that does, in order to assess the treatment's effectiveness.

  • Smoking Cessation Study: Researchers used random assignment to put participants into two groups. One group received a new anti-smoking program, while the other did not. This helped determine if the program was effective in helping people quit smoking.
  • Math Tutoring Program: A study on students used random assignment to place them into two groups. One group received additional math tutoring, while the other continued with regular classes, to see if the extra help improved their grades.
  • Exercise and Mental Health: Adults were randomly assigned to either an exercise group or a control group to study the impact of physical activity on mental health and mood.
  • Diet and Weight Loss: A study randomly assigned participants to different diet plans to compare their effectiveness in promoting weight loss and improving health markers.
  • Sleep and Learning: Researchers randomly assigned students to either a sleep extension group or a regular sleep group to study the impact of sleep on learning and memory.
  • Classroom Seating Arrangement: Teachers used random assignment to place students in different seating arrangements to examine the effect on focus and academic performance.
  • Music and Productivity: Employees were randomly assigned to listen to music or work in silence to investigate the effect of music on workplace productivity.
  • Medication for ADHD: Children with ADHD were randomly assigned to receive either medication, behavioral therapy, or a placebo to compare treatment effectiveness.
  • Mindfulness Meditation for Stress: Adults were randomly assigned to a mindfulness meditation group or a waitlist control group to study the impact on stress levels.
  • Video Games and Aggression: A study randomly assigned participants to play either violent or non-violent video games and then measured their aggression levels.
  • Online Learning Platforms: Students were randomly assigned to use different online learning platforms to evaluate their effectiveness in enhancing learning outcomes.
  • Hand Sanitizers in Schools: Schools were randomly assigned to use hand sanitizers or not to study the impact on student illness and absenteeism.
  • Caffeine and Alertness: Participants were randomly assigned to consume caffeinated or decaffeinated beverages to measure the effects on alertness and cognitive performance.
  • Green Spaces and Well-being: Neighborhoods were randomly assigned to receive green space interventions to study the impact on residents’ well-being and community connections.
  • Pet Therapy for Hospital Patients: Patients were randomly assigned to receive pet therapy or standard care to assess the impact on recovery and mood.
  • Yoga for Chronic Pain: Individuals with chronic pain were randomly assigned to a yoga intervention group or a control group to study the effect on pain levels and quality of life.
  • Flu Vaccines Effectiveness: Different groups of people were randomly assigned to receive either the flu vaccine or a placebo to determine the vaccine’s effectiveness.
  • Reading Strategies for Dyslexia: Children with dyslexia were randomly assigned to different reading intervention strategies to compare their effectiveness.
  • Physical Environment and Creativity: Participants were randomly assigned to different room setups to study the impact of physical environment on creative thinking.
  • Laughter Therapy for Depression: Individuals with depression were randomly assigned to laughter therapy sessions or control groups to assess the impact on mood.
  • Financial Incentives for Exercise: Participants were randomly assigned to receive financial incentives for exercising to study the impact on physical activity levels.
  • Art Therapy for Anxiety: Individuals with anxiety were randomly assigned to art therapy sessions or a waitlist control group to measure the effect on anxiety levels.
  • Natural Light in Offices: Employees were randomly assigned to workspaces with natural or artificial light to study the impact on productivity and job satisfaction.
  • School Start Times and Academic Performance: Schools were randomly assigned different start times to study the effect on student academic performance and well-being.
  • Horticulture Therapy for Seniors: Older adults were randomly assigned to participate in horticulture therapy or traditional activities to study the impact on cognitive function and life satisfaction.
  • Hydration and Cognitive Function: Participants were randomly assigned to different hydration levels to measure the impact on cognitive function and alertness.
  • Intergenerational Programs: Seniors and young people were randomly assigned to intergenerational programs to study the effects on well-being and cross-generational understanding.
  • Therapeutic Horseback Riding for Autism: Children with autism were randomly assigned to therapeutic horseback riding or traditional therapy to study the impact on social communication skills.
  • Active Commuting and Health: Employees were randomly assigned to active commuting (cycling, walking) or passive commuting to study the effect on physical health.
  • Mindful Eating for Weight Management: Individuals were randomly assigned to mindful eating workshops or control groups to study the impact on weight management and eating habits.
  • Noise Levels and Learning: Students were randomly assigned to classrooms with different noise levels to study the effect on learning and concentration.
  • Bilingual Education Methods: Schools were randomly assigned different bilingual education methods to compare their effectiveness in language acquisition.
  • Outdoor Play and Child Development: Children were randomly assigned to different amounts of outdoor playtime to study the impact on physical and cognitive development.
  • Social Media Detox: Participants were randomly assigned to a social media detox or regular usage to study the impact on mental health and well-being.
  • Therapeutic Writing for Trauma Survivors: Individuals who experienced trauma were randomly assigned to therapeutic writing sessions or control groups to study the impact on psychological well-being.
  • Mentoring Programs for At-risk Youth: At-risk youth were randomly assigned to mentoring programs or control groups to assess the impact on academic achievement and behavior.
  • Dance Therapy for Parkinson’s Disease: Individuals with Parkinson’s disease were randomly assigned to dance therapy or traditional exercise to study the effect on motor function and quality of life.
  • Aquaponics in Schools: Schools were randomly assigned to implement aquaponics programs to study the impact on student engagement and environmental awareness.
  • Virtual Reality for Phobia Treatment: Individuals with phobias were randomly assigned to virtual reality exposure therapy or traditional therapy to compare effectiveness.
  • Gardening and Mental Health: Participants were randomly assigned to engage in gardening or other leisure activities to study the impact on mental health and stress reduction.

Each of these studies exemplifies how random assignment is utilized in various fields and settings, shedding light on the multitude of ways it can be applied to glean valuable insights and knowledge.

Real-world Impact of Random Assignment

Random assignment is a key tool for understanding people's minds and behavior. It informs many areas of everyday life: it helps shape better policies, supports the development of new interventions, and is used across many fields.

Health and Medicine

In health and medicine, random assignment has enabled many discoveries. It is central to the clinical trials used to develop new medicines and treatments.

By allocating people to groups by chance, scientists can determine whether a treatment genuinely works.

This has led to new ways to help people with all sorts of health problems, including diabetes, heart disease, and mental health conditions such as depression and anxiety.

Education

Schools and education have also benefited greatly from random assignment. Researchers have used it to compare teaching methods, classroom environments, and educational technologies.

This knowledge has informed school policies, curriculum development, and effective teaching practices for students of all ages and backgrounds.

Workplace and Organizational Behavior

Random assignment helps us understand how people behave at work and what makes a workplace effective.

Studies have compared workplace designs, leadership styles, and team compositions. This has helped organizations craft better policies and create supportive, satisfying work environments.

Environmental and Social Changes

Random assignment is also used to evaluate how changes in communities and environments affect people. Studies of community projects, environmental interventions, and social programs have assessed their effects on well-being.

This has led to better-designed community initiatives, environmental protection efforts, and social support programs.

Technology and Human Interaction

In a world of ever-changing technology, randomized studies help reveal how tools such as social media, virtual reality, and online platforms affect how we act and feel.

This has informed the design of better, safer technology and guidelines for its use, so that everyone can benefit.

The effects of random assignment extend far beyond the laboratory. The method deepens our understanding of many phenomena, drives new and improved practices, and makes a real difference in the world around us.

From improving healthcare and schools to creating positive changes in communities and the environment, the real-world impact of random assignment shows how important it is in helping us learn and improve.

So, what have we learned? Random assignment is a powerful tool for learning how people think and act, with applications ranging from developing new medicines to helping children learn, making workplaces better, and protecting the environment.

This method is not confined to the laboratory; it touches everyday life, enabling positive change across technology, health, education, and the environment.

In the end, the simple act of allocating people to groups by chance makes large discoveries and improvements possible, with effects that ripple far beyond any single study.

What Is Random Assignment in Psychology?

Random assignment means that every participant has the same chance of being chosen for the experimental or control group. It involves using procedures that rely on chance to assign participants to groups. Doing this means that every participant in a study has an equal opportunity to be assigned to any group.

For example, in a psychology experiment, participants might be assigned to either a control or experimental group. Some experiments might only have one experimental group, while others may have several treatment variations.

Using random assignment means that each participant has the same chance of being assigned to any of these groups.

How to Use Random Assignment

So what type of procedures might psychologists utilize for random assignment? Strategies can include:

  • Flipping a coin
  • Assigning random numbers
  • Rolling dice
  • Drawing names out of a hat
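Each of these chance procedures is easy to simulate in software. A minimal sketch of two of them using Python's standard `random` module follows; the function names are illustrative:

```python
import random

rng = random.Random()  # unseeded here; pass a seed to keep a reproducible record

def coin_flip_assignment(participant_ids):
    """Independent 'coin flip' per participant: heads -> experimental."""
    return {pid: ("experimental" if rng.random() < 0.5 else "control")
            for pid in participant_ids}

def hat_draw_assignment(participant_ids):
    """'Names out of a hat': shuffle the pool, then split it in half."""
    pool = list(participant_ids)
    rng.shuffle(pool)
    half = len(pool) // 2
    return {"experimental": pool[:half], "control": pool[half:]}
```

Note the trade-off: independent coin flips can produce groups of unequal size by chance, while the hat draw guarantees a near-even split.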

How Does Random Assignment Work?

A psychology experiment aims to determine if changes in one variable lead to changes in another variable. Researchers will first begin by coming up with a hypothesis. Once researchers have an idea of what they think they might find in a population, they will come up with an experimental design and then recruit participants for their study.

Once they have a pool of participants representative of the population they are interested in looking at, they will randomly assign the participants to their groups.

  • Control group : Some participants end up in the control group, which serves as a baseline and does not receive the experimental treatment (the manipulation of the independent variable).
  • Experimental group : Other participants end up in one or more experimental groups, which receive some form of the treatment.

By using random assignment, the researchers make it more likely that the groups are equal at the start of the experiment. Since the groups are the same on other variables, it can be assumed that any changes that occur are the result of varying the independent variables.

After a treatment has been administered, the researchers will then collect data in order to determine if the independent variable had any impact on the dependent variable.

Random Assignment vs. Random Selection

It is important to remember that random assignment is not the same thing as random selection, also known as random sampling.

Random selection instead involves how people are chosen to be in a study. Using random selection, every member of a population stands an equal chance of being chosen for a study or experiment.

So random sampling affects how participants are chosen for a study, while random assignment affects how participants are then assigned to groups.

Examples of Random Assignment

Imagine that a psychology researcher is conducting an experiment to determine if getting adequate sleep the night before an exam results in better test scores.

Forming a Hypothesis

They hypothesize that participants who get 8 hours of sleep will do better on a math exam than participants who only get 4 hours of sleep.

Obtaining Participants

The researcher starts by obtaining a pool of participants. They find 100 participants from a local university. Half of the participants are female, and half are male.

Randomly Assign Participants to Groups

The researcher then assigns random numbers to each participant and uses a random number generator to randomly assign each number to either the 4-hour or 8-hour sleep groups.
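That assignment step might look like the following sketch, assuming 100 numbered participants and Python's `random` module (the seed and IDs are invented for illustration):

```python
import random

participant_ids = list(range(1, 101))   # hypothetical IDs for the 100 students
rng = random.Random(2024)               # seeded so the allocation is auditable

shuffled = participant_ids[:]
rng.shuffle(shuffled)
sleep_4h_group = sorted(shuffled[:50])  # first half of the shuffle -> 4-hour group
sleep_8h_group = sorted(shuffled[50:])  # second half -> 8-hour group
```

Shuffling once and splitting the list guarantees two groups of exactly 50 while still giving every participant an equal chance of either condition.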

Conduct the Experiment

Those in the 8-hour sleep group agree to sleep for 8 hours that night, while those in the 4-hour group agree to wake up after only 4 hours. The following day, all of the participants meet in a classroom.

Collect and Analyze Data

Everyone takes the same math test. The test scores are then compared to see if the amount of sleep the night before had any impact on test scores.
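The comparison itself is typically a test of the difference between group means. The article does not name a specific analysis, so as one common choice, here is a sketch of Welch's t statistic using only the standard library; the scores are invented for illustration:

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(group_a, group_b):
    """Welch's t statistic for the difference between two group means."""
    na, nb = len(group_a), len(group_b)
    va, vb = stdev(group_a) ** 2, stdev(group_b) ** 2
    return (mean(group_a) - mean(group_b)) / sqrt(va / na + vb / nb)

scores_8h = [82, 88, 75, 91, 84, 79, 86, 90]   # hypothetical exam scores
scores_4h = [70, 74, 68, 77, 72, 65, 71, 69]
t_stat = welch_t(scores_8h, scores_4h)
```

In practice, researchers would use a statistics package to obtain the corresponding p-value; the sketch shows only the core computation.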

Why Is Random Assignment Important in Psychology Research?

Random assignment is important in psychology research because it helps improve a study's internal validity. This means the researchers can be more confident that the study demonstrates a cause-and-effect relationship between an independent and dependent variable.

Random assignment improves the internal validity by minimizing the risk that there are systematic differences in the participants who are in each group.

Key Points to Remember About Random Assignment

  • Random assignment in psychology involves each participant having an equal chance of being chosen for any of the groups, including the control and experimental groups.
  • It helps control for potential confounding variables, reducing the likelihood of pre-existing differences between groups.
  • This method enhances the internal validity of experiments, allowing researchers to draw more reliable conclusions about cause-and-effect relationships.
  • Random assignment is crucial for creating comparable groups and increasing the scientific rigor of psychological studies.

The Definition of Random Assignment According to Psychology

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

Random assignment refers to the use of chance procedures in psychology experiments to ensure that each participant has the same opportunity to be assigned to any given group, eliminating potential bias at the outset. Participants are randomly assigned to different groups, such as the treatment group versus the control group. In clinical research, randomized clinical trials are known as the gold standard for meaningful results.

Simple random assignment techniques might involve tactics such as flipping a coin, drawing names out of a hat, rolling dice, or assigning random numbers to a list of participants. It is important to note that random assignment differs from random selection.

While random selection refers to how participants are randomly chosen from a target population as representatives of that population, random assignment refers to how those chosen participants are then assigned to experimental groups.

Random Assignment in Research

To determine if changes in one variable will cause changes in another variable, psychologists must perform an experiment. Random assignment is a critical part of the experimental design that helps ensure the reliability of the study outcomes.

Researchers often begin by forming a testable hypothesis predicting that one variable of interest will have some predictable impact on another variable.

The variable that the experimenters will manipulate in the experiment is known as the independent variable, while the variable that they will then measure for different outcomes is known as the dependent variable. While there are different ways to look at relationships between variables, an experiment is the best way to get a clear idea if there is a cause-and-effect relationship between two or more variables.

Once researchers have formulated a hypothesis, conducted background research, and chosen an experimental design, it is time to find participants for their experiment. How exactly do researchers decide who will be part of an experiment? As mentioned previously, this is often accomplished through something known as random selection.

Random Selection

In order to generalize the results of an experiment to a larger group, it is important to choose a sample that is representative of the qualities found in that population. For example, if the total population is 60% female and 40% male, then the sample should reflect those same percentages.

Choosing a representative sample is often accomplished by randomly picking people from the population to be participants in a study. Random selection means that everyone in the group stands an equal chance of being chosen to minimize any bias. Once a pool of participants has been selected, it is time to assign them to groups.

By randomly assigning the participants into groups, the experimenters can be fairly sure that each group will have the same characteristics before the independent variable is applied.

Participants might be randomly assigned to the control group, which does not receive the treatment in question. The control group may receive a placebo or receive the standard treatment. Participants may also be randomly assigned to the experimental group, which receives the treatment of interest. In larger studies, there can be multiple treatment groups for comparison.

There are simple methods of random assignment, like rolling a die. However, there are more complex techniques that involve random number generators to remove any human error.

There can also be random assignment to groups with pre-established rules or parameters. For example, if you want to have an equal number of men and women in each of your study groups, you might separate your sample into two groups (by sex) before randomly assigning each of those groups into the treatment group and control group.
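This blocked design can be sketched as follows; the participant data and function name are hypothetical:

```python
import random

def stratified_assignment(participants, stratum_of, seed=None):
    """Randomize separately within each stratum (here, sex), so the
    treatment and control groups end up balanced on that characteristic."""
    rng = random.Random(seed)
    groups = {"treatment": [], "control": []}
    strata = {}
    for p in participants:
        strata.setdefault(stratum_of(p), []).append(p)
    for members in strata.values():
        rng.shuffle(members)          # random assignment *within* the stratum
        half = len(members) // 2
        groups["treatment"].extend(members[:half])
        groups["control"].extend(members[half:])
    return groups

# Hypothetical sample: 10 women and 10 men.
people = [(f"P{i:02d}", "F" if i <= 10 else "M") for i in range(1, 21)]
balanced = stratified_assignment(people, stratum_of=lambda p: p[1], seed=7)
```

Each group ends up with five women and five men, while assignment within each sex remains entirely due to chance.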

Random assignment is essential because it increases the likelihood that the groups are the same at the outset. With all characteristics being equal between groups, other than the application of the independent variable, any differences found between group outcomes can be more confidently attributed to the effect of the intervention.

Example of Random Assignment

Imagine that a researcher is interested in learning whether or not drinking caffeinated beverages prior to an exam will improve test performance. After randomly selecting a pool of participants, each person is randomly assigned to either the control group or the experimental group.

The participants in the control group consume a placebo drink prior to the exam that does not contain any caffeine. Those in the experimental group, on the other hand, consume a caffeinated beverage before taking the test.

Participants in both groups then take the test, and the researcher compares the results to determine if the caffeinated beverage had any impact on test performance.

A Word From Verywell

Random assignment plays an important role in the psychology research process. Not only does this process help eliminate possible sources of bias, but it also makes it easier to generalize the results of a tested sample of participants to a larger population.

Random assignment helps ensure that members of each group in the experiment are the same, which means that the groups are also likely more representative of what is present in the larger population of interest. Through the use of this technique, psychology researchers are able to study complex phenomena and contribute to our understanding of the human mind and behavior.

Lin Y, Zhu M, Su Z. The pursuit of balance: An overview of covariate-adaptive randomization techniques in clinical trials . Contemp Clin Trials. 2015;45(Pt A):21-25. doi:10.1016/j.cct.2015.07.011

Sullivan L. Random assignment versus random selection . In: The SAGE Glossary of the Social and Behavioral Sciences. SAGE Publications, Inc.; 2009. doi:10.4135/9781412972024.n2108

Alferes VR. Methods of Randomization in Experimental Design . SAGE Publications, Inc.; 2012. doi:10.4135/9781452270012

Nestor PG, Schutt RK. Research Methods in Psychology: Investigating Human Behavior. (2nd Ed.). SAGE Publications, Inc.; 2015.

By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


As previously mentioned, one of the characteristics of a true experiment is that researchers use a random process to decide which participants are tested under which conditions. Random assignment is a powerful research technique that addresses the assumption of pre-test equivalence – that the experimental and control groups are equal in all respects before the administration of the independent variable (Palys & Atchison, 2014).

Random assignment is the primary way that researchers attempt to control extraneous variables across conditions, and it is associated with experimental research methods. In its strictest sense, random assignment should meet two criteria. One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a condition independently of other participants. Thus, one way to assign participants to two conditions would be to flip a coin for each one. If the coin lands heads, the participant is assigned to Condition A, and if it lands tails, the participant is assigned to Condition B. For three conditions, one could use a computer to generate a random integer from 1 to 3 for each participant. If the integer is 1, the participant is assigned to Condition A; if it is 2, the participant is assigned to Condition B; and if it is 3, the participant is assigned to Condition C. In practice, a full sequence of conditions—one for each participant expected to be in the experiment—is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as he or she is tested.

However, one problem with coin flipping and other strict procedures for random assignment is that they are likely to result in unequal sample sizes in the different conditions. Unequal sample sizes are generally not a serious problem, and you should never throw away data you have already collected to achieve equal sample sizes. However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups. It is standard practice, therefore, to use a kind of modified random assignment that keeps the number of participants in each group as similar as possible.

One approach is block randomization. In block randomization, all the conditions occur once in the sequence before any of them is repeated. Then they all occur again before any of them is repeated again. Within each of these “blocks,” the conditions occur in a random order. Again, the sequence of conditions is usually generated before any participants are tested, and each new participant is assigned to the next condition in the sequence. When the procedure is computerized, the computer program often handles the random assignment, which is obviously much easier. You can also find programs online to help you carry out random assignment; for example, the Research Randomizer website will generate block randomization sequences for any number of participants and conditions.

Random assignment is not guaranteed to control all extraneous variables across conditions. It is always possible that, just by chance, the participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than the participants in another condition. However, there are some reasons that this may not be a major concern. One is that random assignment works better than one might expect, especially for large samples. Another is that the inferential statistics that researchers use to decide whether a difference between groups reflects a difference in the population take the “fallibility” of random assignment into account. Yet another reason is that even if random assignment does result in a confounding variable and therefore produces misleading results, this confound is likely to be detected when the experiment is replicated. The upshot is that random assignment to conditions—although not infallible in terms of controlling extraneous variables—is always considered a strength of a research design. Note: Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population; we will talk about this in Chapter 7.

Research Methods, Data Collection and Ethics Copyright © 2020 by Valerie Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.


5.2 Experimental Design

Learning objectives.

  • Explain the difference between between-subjects and within-subjects experiments, list some of the pros and cons of each approach, and decide which approach to use to answer a particular research question.
  • Define random assignment, distinguish it from random sampling, explain its purpose in experimental research, and use some simple strategies to implement it
  • Define several types of carryover effect, give examples of each, and explain how counterbalancing helps to deal with them.

In this section, we look at some different ways to design an experiment. The primary distinction we will make is between approaches in which each participant experiences one level of the independent variable and approaches in which each participant experiences all levels of the independent variable. The former are called between-subjects experiments and the latter are called within-subjects experiments.

Between-Subjects Experiments

In a between-subjects experiment, each participant is tested in only one condition. For example, a researcher with a sample of 100 university students might assign half of them to write about a traumatic event and the other half to write about a neutral event. Or a researcher with a sample of 60 people with severe agoraphobia (fear of open spaces) might assign 20 of them to receive each of three different treatments for that disorder. It is essential in a between-subjects experiment that the researcher assigns participants to conditions so that the different groups are, on average, highly similar to each other. Those in a trauma condition and a neutral condition, for example, should include a similar proportion of men and women, and they should have similar average intelligence quotients (IQs), similar average levels of motivation, similar average numbers of health problems, and so on. This matching is a matter of controlling these extraneous participant variables across conditions so that they do not become confounding variables.

Random Assignment

The primary way that researchers accomplish this kind of control of extraneous variables across conditions is called  random assignment , which means using a random process to decide which participants are tested in which conditions. Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population, and it is rarely used in psychological research. Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other fields too.

In its strictest sense, random assignment should meet two criteria. One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a condition independently of other participants. Thus one way to assign participants to two conditions would be to flip a coin for each one. If the coin lands heads, the participant is assigned to Condition A, and if it lands tails, the participant is assigned to Condition B. For three conditions, one could use a computer to generate a random integer from 1 to 3 for each participant. If the integer is 1, the participant is assigned to Condition A; if it is 2, the participant is assigned to Condition B; and if it is 3, the participant is assigned to Condition C. In practice, a full sequence of conditions—one for each participant expected to be in the experiment—is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as he or she is tested. When the procedure is computerized, the computer program often handles the random assignment.
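The coin-flip and random-integer procedures described above can be written down concretely. The following is a minimal Python sketch (the function names are ours, for illustration only), including the common practice of pre-generating a full sequence of conditions:

```python
import random

def assign_two_conditions(participants):
    """Flip a virtual coin for each participant: heads -> A, tails -> B."""
    return {p: ("A" if random.random() < 0.5 else "B") for p in participants}

def assign_three_conditions(participants):
    """Draw a random integer from 1 to 3 for each participant, independently."""
    labels = {1: "A", 2: "B", 3: "C"}
    return {p: labels[random.randint(1, 3)] for p in participants}

# In practice, a full sequence of conditions -- one per expected participant --
# is generated ahead of time; each new participant takes the next entry.
def pregenerated_sequence(n_participants, conditions=("A", "B", "C")):
    return [random.choice(conditions) for _ in range(n_participants)]

two = assign_two_conditions(["P1", "P2", "P3", "P4"])
seq = pregenerated_sequence(9)
```

Note that both procedures satisfy the two criteria above: every participant has the same chance of each condition, and each draw is independent of the others.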

One problem with coin flipping and other strict procedures for random assignment is that they are likely to result in unequal sample sizes in the different conditions. Unequal sample sizes are generally not a serious problem, and you should never throw away data you have already collected to achieve equal sample sizes. However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups. It is standard practice, therefore, to use a kind of modified random assignment that keeps the number of participants in each group as similar as possible. One approach is block randomization . In block randomization, all the conditions occur once in the sequence before any of them is repeated. Then they all occur again before any of them is repeated again. Within each of these “blocks,” the conditions occur in a random order. Again, the sequence of conditions is usually generated before any participants are tested, and each new participant is assigned to the next condition in the sequence.  Table 5.2  shows such a sequence for assigning nine participants to three conditions. The Research Randomizer website ( http://www.randomizer.org ) will generate block randomization sequences for any number of participants and conditions. Again, when the procedure is computerized, the computer program often handles the block randomization.
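Block randomization as described here (every condition occurs once, in random order, within each block before any condition repeats) can be sketched in Python; the function name is illustrative:

```python
import random

def block_randomize(n_participants, conditions=("A", "B", "C")):
    """Build an assignment sequence block by block: each block is a random
    permutation of all the conditions, so the group sizes never differ by
    more than one at any point in the sequence."""
    sequence = []
    while len(sequence) < n_participants:
        block = list(conditions)
        random.shuffle(block)  # random order within the block
        sequence.extend(block)
    return sequence[:n_participants]

# A sequence for nine participants and three conditions, as in Table 5.2:
seq = block_randomize(9)
```

With 9 participants and 3 conditions, each condition appears exactly three times, which is the whole point of the "modified" random assignment discussed above.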

Random assignment is not guaranteed to control all extraneous variables across conditions. The process is random, so it is always possible that just by chance, the participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than the participants in another condition. However, there are some reasons that this possibility is not a major concern. One is that random assignment works better than one might expect, especially for large samples. Another is that the inferential statistics that researchers use to decide whether a difference between groups reflects a difference in the population take the “fallibility” of random assignment into account. Yet another reason is that even if random assignment does result in a confounding variable and therefore produces misleading results, this confound is likely to be detected when the experiment is replicated. The upshot is that random assignment to conditions—although not infallible in terms of controlling extraneous variables—is always considered a strength of a research design.

Matched Groups

An alternative to simple random assignment of participants to conditions is the use of a matched-groups design. Using this design, participants in the various conditions are matched on the dependent variable or on some extraneous variable(s) prior to the manipulation of the independent variable. This guarantees that these variables will not be confounded across the experimental conditions. For instance, if we want to determine whether expressive writing affects people’s health, then we could start by measuring various health-related variables in our prospective research participants. We could then use that information to rank-order participants according to how healthy or unhealthy they are. Next, the two healthiest participants would be randomly assigned to complete different conditions (one would be randomly assigned to the traumatic experiences writing condition and the other to the neutral writing condition). The next two healthiest participants would then be randomly assigned to complete different conditions, and so on, down to the two least healthy participants. This method would ensure that participants in the traumatic experiences writing condition are matched to participants in the neutral writing condition with respect to health at the beginning of the study. If, at the end of the experiment, a difference in health were detected across the two conditions, then we would know that it is due to the writing manipulation and not to pre-existing differences in health.
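The rank-and-pair procedure for the expressive-writing example can be sketched in Python. The names and baseline health scores below are invented purely for illustration:

```python
import random

# Hypothetical participants with a baseline health score (higher = healthier).
participants = {"Ana": 92, "Ben": 88, "Cam": 85, "Dee": 80, "Eli": 74, "Fay": 70}

# Rank-order participants from healthiest to least healthy.
ranked = sorted(participants, key=participants.get, reverse=True)

assignments = {}
# Take successive pairs of similarly healthy participants and randomly
# assign one member of each pair to each writing condition.
for i in range(0, len(ranked), 2):
    pair = [ranked[i], ranked[i + 1]]
    random.shuffle(pair)
    assignments[pair[0]] = "traumatic writing"
    assignments[pair[1]] = "neutral writing"
```

Within each pair the assignment is still random, so the design keeps the benefits of randomization while forcing the two groups to be matched on baseline health.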

Within-Subjects Experiments

In a  within-subjects experiment , each participant is tested under all conditions. Consider an experiment on the effect of a defendant’s physical attractiveness on judgments of his guilt. Again, in a between-subjects experiment, one group of participants would be shown an attractive defendant and asked to judge his guilt, and another group of participants would be shown an unattractive defendant and asked to judge his guilt. In a within-subjects experiment, however, the same group of participants would judge the guilt of both an attractive  and  an unattractive defendant.

The primary advantage of this approach is that it provides maximum control of extraneous participant variables. Participants in all conditions have the same mean IQ, same socioeconomic status, same number of siblings, and so on—because they are the very same people. Within-subjects experiments also make it possible to use statistical procedures that remove the effect of these extraneous participant variables on the dependent variable and therefore make the data less “noisy” and the effect of the independent variable easier to detect. We will look more closely at this idea later in the book. However, not all experiments can use a within-subjects design, nor would it be desirable to do so.

One disadvantage of within-subjects experiments is that they make it easier for participants to guess the hypothesis. For example, a participant who is asked to judge the guilt of an attractive defendant and then is asked to judge the guilt of an unattractive defendant is likely to guess that the hypothesis is that defendant attractiveness affects judgments of guilt. This knowledge could lead the participant to judge the unattractive defendant more harshly because he thinks this is what he is expected to do. Or it could make participants judge the two defendants similarly in an effort to be “fair.”

Carryover Effects and Counterbalancing

The primary disadvantage of within-subjects designs is that they can result in order effects. An order effect occurs when participants’ responses in the various conditions are affected by the order of conditions to which they were exposed. One type of order effect is a carryover effect. A carryover effect is an effect of being tested in one condition on participants’ behavior in later conditions. One type of carryover effect is a practice effect, where participants perform a task better in later conditions because they have had a chance to practice it. Another type is a fatigue effect, where participants perform a task worse in later conditions because they become tired or bored. Being tested in one condition can also change how participants perceive stimuli or interpret their task in later conditions. This type of effect is called a context effect (or contrast effect). For example, an average-looking defendant might be judged more harshly when participants have just judged an attractive defendant than when they have just judged an unattractive defendant.

Carryover effects can be interesting in their own right. (Does the attractiveness of one person depend on the attractiveness of other people that we have seen recently?) But when they are not the focus of the research, carryover effects can be problematic. Imagine, for example, that participants judge the guilt of an attractive defendant and then judge the guilt of an unattractive defendant. If they judge the unattractive defendant more harshly, this might be because of his unattractiveness. But it could be instead that they judge him more harshly because they are becoming bored or tired. In other words, the order of the conditions is a confounding variable. The attractive condition is always the first condition and the unattractive condition the second. Thus any difference between the conditions in terms of the dependent variable could be caused by the order of the conditions and not the independent variable itself.

There is a solution to the problem of order effects, however, that can be used in many situations. It is counterbalancing, which means testing different participants in different orders. The best method of counterbalancing is complete counterbalancing, in which an equal number of participants complete each possible order of conditions. For example, half of the participants would be tested in the attractive defendant condition followed by the unattractive defendant condition, and the other half would be tested in the unattractive condition followed by the attractive condition. With three conditions, there would be six different orders (ABC, ACB, BAC, BCA, CAB, and CBA), so some participants would be tested in each of the six orders. With four conditions, there would be 24 different orders; with five conditions, there would be 120 possible orders. With counterbalancing, participants are assigned to orders randomly, using the techniques we have already discussed. Thus, random assignment plays an important role in within-subjects designs just as in between-subjects designs; here, instead of being randomly assigned to conditions, participants are randomly assigned to different orders of conditions. In fact, it can safely be said that if a study does not involve random assignment in one form or another, it is not an experiment.
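Complete counterbalancing as described above can be sketched in Python: enumerate every possible order of the conditions, then randomly deal an equal number of participants to each order (the function name and participant IDs are illustrative):

```python
import itertools
import random

conditions = ["A", "B", "C"]
# Every possible order of the three conditions: 3! = 6 orders.
orders = list(itertools.permutations(conditions))

def complete_counterbalance(participants, orders):
    """Assign an equal number of participants to each possible order."""
    assert len(participants) % len(orders) == 0, "need a multiple of len(orders)"
    pool = orders * (len(participants) // len(orders))
    random.shuffle(pool)  # which participant gets which order is random
    return dict(zip(participants, pool))

# Twelve participants, so each of the six orders is used exactly twice.
plan = complete_counterbalance([f"P{i}" for i in range(12)], orders)
```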

A more efficient way of counterbalancing is through a Latin square design, which arranges the conditions in a square with equal numbers of rows and columns. For example, if you have four treatments, you must have four orders. Like a Sudoku puzzle, no treatment can repeat in a row or column. For four orders of four treatments, a balanced Latin square design would look like:

A B D C
B C A D
C D B A
D A C B

You can see in the square above that it has been constructed to ensure that each condition appears at each ordinal position (A appears first once, second once, third once, and fourth once) and that each condition precedes and follows each other condition one time. A Latin square for an experiment with 6 conditions would be 6 x 6 in dimension, one for an experiment with 8 conditions would be 8 x 8 in dimension, and so on. So while complete counterbalancing of 6 conditions would require 720 orders, a Latin square would only require 6 orders.
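A square with both properties described above (each condition at each ordinal position, and each condition immediately preceding every other condition exactly once) can be generated programmatically. The Python sketch below uses the standard Williams construction, which works for an even number of conditions; the function name is ours:

```python
def balanced_latin_square(conditions):
    """Williams design: each condition appears once at each ordinal position,
    and each condition immediately precedes every other exactly once."""
    n = len(conditions)
    assert n % 2 == 0, "this construction assumes an even number of conditions"
    # First row follows the pattern 1, 2, n, 3, n-1, 4, ... (0-indexed here).
    first = [0, 1]
    lo, hi = 2, n - 1
    while len(first) < n:
        first.append(hi)
        hi -= 1
        if len(first) < n:
            first.append(lo)
            lo += 1
    # Each later row shifts every entry of the previous row by 1 (mod n).
    rows = [first]
    for _ in range(n - 1):
        rows.append([(x + 1) % n for x in rows[-1]])
    return [[conditions[i] for i in row] for row in rows]

square = balanced_latin_square(["A", "B", "C", "D"])
```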

Finally, when the number of conditions is large, experiments can use random counterbalancing, in which the order of the conditions is randomly determined for each participant. This is not as powerful a technique as complete counterbalancing or partial counterbalancing using a Latin square design, and it will result in more random error; but if order effects are likely to be small and the number of conditions is large, it is an option available to researchers.

There are two ways to think about what counterbalancing accomplishes. One is that it controls the order of conditions so that it is no longer a confounding variable. Instead of the attractive condition always being first and the unattractive condition always being second, the attractive condition comes first for some participants and second for others. Likewise, the unattractive condition comes first for some participants and second for others. Thus any overall difference in the dependent variable between the two conditions cannot have been caused by the order of conditions. A second way to think about what counterbalancing accomplishes is that if there are carryover effects, it makes it possible to detect them. One can analyze the data separately for each order to see whether it had an effect.

When 9 Is “Larger” Than 221

Researcher Michael Birnbaum has argued that the lack of context provided by between-subjects designs is often a bigger problem than the context effects created by within-subjects designs. To demonstrate this problem, he asked participants to rate two numbers on how large they were on a scale of 1 to 10, where 1 was “very very small” and 10 was “very very large.” One group of participants was asked to rate the number 9 and another group was asked to rate the number 221 (Birnbaum, 1999) [1]. Participants in this between-subjects design gave the number 9 a mean rating of 5.13 and the number 221 a mean rating of 3.10. In other words, they rated 9 as larger than 221! According to Birnbaum, this difference is because participants spontaneously compared 9 with other one-digit numbers (in which case it is relatively large) and compared 221 with other three-digit numbers (in which case it is relatively small).

Simultaneous Within-Subjects Designs

So far, we have discussed an approach to within-subjects designs in which participants are tested in one condition at a time. There is another approach, however, that is often used when participants make multiple responses in each condition. Imagine, for example, that participants judge the guilt of 10 attractive defendants and 10 unattractive defendants. Instead of having people make judgments about all 10 defendants of one type followed by all 10 defendants of the other type, the researcher could present all 20 defendants in a sequence that mixed the two types. The researcher could then compute each participant’s mean rating for each type of defendant. Or imagine an experiment designed to see whether people with social anxiety disorder remember negative adjectives (e.g., “stupid,” “incompetent”) better than positive ones (e.g., “happy,” “productive”). The researcher could have participants study a single list that includes both kinds of words and then have them try to recall as many words as possible. The researcher could then count the number of each type of word that was recalled. 

Between-Subjects or Within-Subjects?

Almost every experiment can be conducted using either a between-subjects design or a within-subjects design. This possibility means that researchers must choose between the two approaches based on their relative merits for the particular situation.

Between-subjects experiments have the advantage of being conceptually simpler and requiring less testing time per participant. They also avoid carryover effects without the need for counterbalancing. Within-subjects experiments have the advantage of controlling extraneous participant variables, which generally reduces noise in the data and makes it easier to detect a relationship between the independent and dependent variables.

A good rule of thumb, then, is that if it is possible to conduct a within-subjects experiment (with proper counterbalancing) in the time that is available per participant—and you have no serious concerns about carryover effects—this design is probably the best option. If a within-subjects design would be difficult or impossible to carry out, then you should consider a between-subjects design instead. For example, if you were testing participants in a doctor’s waiting room or shoppers in line at a grocery store, you might not have enough time to test each participant in all conditions and therefore would opt for a between-subjects design. Or imagine you were trying to reduce people’s level of prejudice by having them interact with someone of another race. A within-subjects design with counterbalancing would require testing some participants in the treatment condition first and then in a control condition. But if the treatment works and reduces people’s level of prejudice, then they would no longer be suitable for testing in the control condition. This difficulty is true for many designs that involve a treatment meant to produce long-term change in participants’ behavior (e.g., studies testing the effectiveness of psychotherapy). Clearly, a between-subjects design would be necessary here.

Remember also that using one type of design does not preclude using the other type in a different study. There is no reason that a researcher could not use both a between-subjects design and a within-subjects design to answer the same research question. In fact, professional researchers often take exactly this type of mixed methods approach.

Key Takeaways

  • Experiments can be conducted using either between-subjects or within-subjects designs. Deciding which to use in a particular situation requires careful consideration of the pros and cons of each approach.
  • Random assignment to conditions in between-subjects experiments or counterbalancing of orders of conditions in within-subjects experiments is a fundamental element of experimental research. The purpose of these techniques is to control extraneous variables so that they do not become confounding variables.
Exercises

For each of the following topics, decide whether a between-subjects or a within-subjects design would be more appropriate:

  • You want to test the relative effectiveness of two training programs for running a marathon.
  • Using photographs of people as stimuli, you want to see if smiling people are perceived as more intelligent than people who are not smiling.
  • In a field experiment, you want to see if the way a panhandler is dressed (neatly vs. sloppily) affects whether or not passersby give him any money.
  • You want to see if concrete nouns (e.g.,  dog ) are recalled better than abstract nouns (e.g.,  truth).
  • Birnbaum, M. H. (1999). How to show that 9 > 221: Collect judgments in a between-subjects design. Psychological Methods, 4(3), 243–249.




Random Assignment – A Simple Introduction with Examples


Completing a research or thesis paper is more work than most students imagine: before you can draw conclusions, you must first conduct experiments. Random assignment, a key methodology in academic research, ensures every participant has an equal chance of being placed in any group within an experiment. In experimental studies, the random assignment of participants is a vital element, which this article will discuss.

Table of Contents

  • 1 Random Assignment – In a Nutshell
  • 2 Definition: Random assignment
  • 3 Importance of random assignment
  • 4 Random assignment vs. random sampling
  • 5 How to use random assignment
  • 6 When random assignment is not used

Random Assignment – In a Nutshell

  • Random assignment is where you randomly place research participants into specific groups.
  • This method reduces bias in the results by ensuring that all participants have an equal chance of being placed in any group.
  • Random assignment is usually used in independent measures or between-group experiment designs.

Definition: Random assignment

Random assignment is the random placement of participants into different groups in experimental research. An experimental study that uses random assignment entails a sample, a control group, an experimental group, and a randomized design. Because the placement is left entirely to chance, each participant has an equal probability of ending up in any of the groups.


Importance of random assignment

Random assignment is essential for strengthening the internal validity of experimental research. Internal validity makes conclusions about a causal relationship reliable and trustworthy.

In experimental research, researchers isolate independent variables and manipulate them as they assess the impact while managing other variables. To achieve this, exposing different groups of participants to different levels of the independent variable is vital. This experimental design is called an independent or between-groups design.

Example: Different levels of independent variables

  • In a medical study, you can research the impact of iron supplements on immune function (iron supplement dosage = independent variable, immune function = dependent variable)

Three groups, one for each level of the independent variable, are applicable here:

  • The control group (given no iron supplement)
  • The first experimental group (low dosage)
  • The second experimental group (high dosage)

This assignment technique ensures that there is no bias between the treatment groups at the beginning of the trials. If you do not use this technique, you will not be able to exclude alternative explanations for your findings.
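One simple way to carry out the three-group assignment above is to shuffle the full participant list and deal it into equal thirds. A minimal Python sketch, with invented participant IDs:

```python
import random

# 30 hypothetical recruits (IDs are invented for illustration).
participants = [f"P{i:02d}" for i in range(1, 31)]
random.shuffle(participants)  # every ordering is equally likely

# Deal the shuffled list into three equal-sized dosage groups.
groups = {
    "control (no iron supplement)": participants[:10],
    "low dosage": participants[10:20],
    "high dosage": participants[20:],
}
```

Because the shuffle is random, no participant characteristic can systematically favor one dosage group over another at the start of the trial.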

Suppose that, in the research experiment above, you recruit participants by handing out flyers at public spaces like gyms, cafés, and community centers, and then:

  • Place the participants recruited from cafés in the control group
  • The community center participants in the low-dosage group
  • The gym participants in the high-dosage group

With this kind of assignment, you cannot tell whether the groups were comparable at the start of the study: gym users, for example, may engage in healthier behaviors than café visitors, which would bias the results.

Even with random participant assignment, other extraneous variables may still create some differences between groups. However, these chance variations are usually small and should not hinder your research, especially with a larger sample. Therefore, you should use random placement in experiments whenever it is ethically possible and makes sense for your research subject.

Random assignment vs. random sampling

Simple random sampling is a method of choosing the participants for a study, whereas random assignment sorts the participants selected through random sampling into groups. Another difference is that random sampling is used in many types of studies, while random assignment is only applied in between-subjects experimental designs.

Your study researches the impact of technology on productivity in a specific company.

In such a case, you have contact with the entire staff, so you can assign each employee a number and apply a random number generator to pick a specific sample.

For instance, out of 500 employees, you can pick 200. These 200 employees form your full sample.
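The selection step above can be sketched with Python's standard library (`random.sample` draws without replacement); the employee numbering and seed here are illustrative assumptions:

```python
import random

rng = random.Random(2024)                 # seed chosen arbitrarily, for reproducibility
employee_ids = list(range(1, 501))        # each of the 500 employees gets a number
sample = rng.sample(employee_ids, k=200)  # simple random sample of 200, no repeats
print(len(sample))  # 200
```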

Random sampling enhances external validity, as it helps ensure that the study sample is unbiased and representative of the entire population. This allows you to make stronger statistical inferences from your results.

After determining the full sample, you can break it down into two groups using random assignment. In this case, the groups are:

  • The control group (does not get access to technology)
  • The experimental group (gets access to technology)

Using random assignment assures you that any differences in productivity between the groups are not due to systematic bias, which will help the company make an informed decision.


How to use random assignment

Firstly, give each participant a unique number as an identifier. Then, use a chance procedure to assign the participants to the sample groups. Some tools you can use are a random number generator, a lottery draw, a coin flip, or a die roll.

Random assignment is a powerful technique for placing participants in specific groups because each person has an equal chance of being put in any group.

Random assignment in block experimental designs

In more complex experimental designs, you must group your participants into blocks before using the random assignment technique.

You can create participant blocks depending on demographic variables, working hours, or scores. However, the blocks imply that you will require a bigger sample to attain high statistical power.

After grouping the participants into blocks, you can use random assignment within each block to allocate the members to a specific treatment condition. Doing this helps you examine whether the blocking characteristic affects the outcome of the treatment.

In matched experimental designs, you can also use blocking and then match the participants in each block based on their unique characteristics. Then, you can randomly allot each participant to one of the treatments in the research and compare the results.

When random assignment is not used

As powerful a tool as it is, random assignment does not apply in all situations, such as the following:

Comparing different groups

When the purpose of your study is to assess the differences between the participants, random member assignment may not work.

If you want to compare teens and the elderly, or people with and without specific health conditions, you must ensure that the participants have those specific characteristics. Therefore, you cannot pick their groups randomly.

In such a study, the medical condition (the characteristic of interest) is the independent variable, and the participants are grouped based on their ages (the different levels). All participants are tested in the same way, and their outcomes are then compared at the group level.

No ethical justifiability

Another situation where you cannot use random assignment is if it is ethically not permitted.

If your study involves unhealthy or dangerous behaviors or subjects, such as drug use, you cannot ethically assign participants to groups at random. Instead, you can conduct quasi-experimental research.

When using a quasi-experimental design, you examine the outcomes of pre-existing groups you have no control over, such as existing drug users. While you cannot randomly assign them to groups, you can use variables like their age, years of drug use, or socioeconomic status to group the participants.

What is the definition of random assignment?

It is an experimental research technique that involves randomly placing participants from your samples into different groups. It ensures that every sample member has the same opportunity of being in whichever group (control or experimental group).

When is random assignment applicable?

You can use this placement technique in experiments featuring an independent measures design. It helps ensure that all your sample groups are comparable.

What is the importance of random assignment?

It can help you enhance your study’s validity. This technique also helps ensure that every sample member has an equal chance of being assigned to a control or trial group.

When should you NOT use random assignment?

You should not use this technique if your study focuses on group comparisons or if random assignment is not ethically permissible.



Random Assignment in Experiments | Introduction & Examples

Published on 6 May 2022 by Pritha Bhandari. Revised on 13 February 2023.

In experimental research, random assignment is a way of placing participants from your sample into different treatment groups using randomisation.

With simple random assignment, every member of the sample has a known or equal chance of being placed in a control group or an experimental group. Studies that use simple random assignment are also called completely randomised designs.

Random assignment is a key part of experimental design. It helps you ensure that all groups are comparable at the start of a study: any differences between them are due to random factors.

Table of contents

  • Why does random assignment matter?
  • Random sampling vs random assignment
  • How do you use random assignment?
  • When is random assignment not used?
  • Frequently asked questions about random assignment

Random assignment is an important part of control in experimental research, because it helps strengthen the internal validity of an experiment.

In experiments, researchers manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables. To do so, they often use different levels of an independent variable for different groups of participants.

This is called a between-groups or independent measures design.

You use three groups of participants that are each given a different level of the independent variable:

  • A control group that’s given a placebo (no dosage)
  • An experimental group that’s given a low dosage
  • A second experimental group that’s given a high dosage

Random assignment helps you make sure that the treatment groups don’t differ in systematic or biased ways at the start of the experiment.

If you don’t use random assignment, you may not be able to rule out alternative explanations for your results.

  • Participants recruited from pubs are placed in the control group
  • Participants recruited from local community centres are placed in the low-dosage experimental group
  • Participants recruited from gyms are placed in the high-dosage group

With this type of assignment, it’s hard to tell whether the participant characteristics are the same across all groups at the start of the study. Gym users may tend to engage in more healthy behaviours than people who frequent pubs or community centres, and this would introduce a healthy user bias in your study.

Although random assignment helps even out baseline differences between groups, it doesn’t always make them completely equivalent. There may still be extraneous variables that differ between groups, and there will always be some group differences that arise from chance.

Most of the time, the random variation between groups is low, and, therefore, it’s acceptable for further analysis. This is especially true when you have a large sample. In general, you should always use random assignment in experiments when it is ethically possible and makes sense for your study topic.


Random sampling and random assignment are both important concepts in research, but it’s important to understand the difference between them.

Random sampling (also called probability sampling or random selection) is a way of selecting members of a population to be included in your study. In contrast, random assignment is a way of sorting the sample participants into control and experimental groups.

While random sampling is used in many types of studies, random assignment is only used in between-subjects experimental designs.

Some studies use both random sampling and random assignment, while others use only one or the other.

Random sample vs random assignment

Random sampling enhances the external validity or generalisability of your results, because it helps to ensure that your sample is unbiased and representative of the whole population. This allows you to make stronger statistical inferences.

You use a simple random sample to collect data. Because you have access to the whole population (all employees), you can assign all 8,000 employees a number and use a random number generator to select 300 employees. These 300 employees are your full sample.

Random assignment enhances the internal validity of the study, because it ensures that there are no systematic differences between the participants in each group. This helps you conclude that the outcomes can be attributed to the independent variable.

  • A control group that receives no intervention
  • An experimental group that has a remote team-building intervention every week for a month

You use random assignment to place participants into the control or experimental group. To do so, you take your list of participants and assign each participant a number. Again, you use a random number generator to place each participant in one of the two groups.

To use simple random assignment, you start by giving every member of the sample a unique number. Then, you can use computer programs or manual methods to randomly assign each participant to a group.

  • Random number generator: Use a computer program to generate random numbers from the list for each group.
  • Lottery method: Place all numbers individually into a hat or a bucket, and draw numbers at random for each group.
  • Flip a coin: When you only have two groups, for each number on the list, flip a coin to decide if they’ll be in the control or the experimental group.
  • Roll a die: When you have three groups, for each number on the list, roll a die to decide which of the groups they will be in. For example, rolling 1 or 2 lands them in the control group; 3 or 4 in the first experimental group; and 5 or 6 in the second experimental group.
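The random number generator method above can be sketched in Python. This is a minimal illustration, not a prescribed procedure: the sample size and seed are hypothetical, and `random.shuffle` plays the role of the generator by producing a random ordering that is then split in half:

```python
import random

# Step 1: give every member of the sample a unique number
sample_ids = list(range(1, 21))  # 20 hypothetical participants

# Step 2: randomly order the numbers, then split the ordering into two groups
rng = random.Random(99)  # seeded so the assignment can be reproduced
order = sample_ids[:]
rng.shuffle(order)
control = sorted(order[:10])
experimental = sorted(order[10:])
print(len(control), len(experimental))  # 10 10
```

Splitting a single random ordering guarantees equal group sizes while still giving each participant an equal chance of either group.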

This type of random assignment is the most powerful method of placing participants in conditions, because each individual has an equal chance of being placed in any one of your treatment groups.

Random assignment in block designs

In more complicated experimental designs, random assignment is only used after participants are grouped into blocks based on some characteristic (e.g., test score or demographic variable). These groupings mean that you need a larger sample to achieve high statistical power.

For example, a randomised block design involves placing participants into blocks based on a shared characteristic (e.g., college students vs graduates), and then using random assignment within each block to assign participants to every treatment condition. This helps you assess whether the characteristic affects the outcomes of your treatment.

In an experimental matched design, you use blocking and then match up individual participants from each block based on specific characteristics. Within each matched pair or group, you randomly assign each participant to one of the conditions in the experiment and compare their outcomes.

Sometimes, it’s not relevant or ethical to use simple random assignment, so groups are assigned in a different way.

When comparing different groups

Sometimes, differences between participants are the main focus of a study, for example, when comparing children and adults or people with and without health conditions. Participants are not randomly assigned to different groups, but instead assigned based on their characteristics.

In this type of study, the characteristic of interest (e.g., gender) is an independent variable, and the groups differ based on the different levels (e.g., men, women). All participants are tested the same way, and then their group-level outcomes are compared.

When it’s not ethically permissible

When studying unhealthy or dangerous behaviours, it’s not possible to use random assignment. For example, if you’re studying heavy drinkers and social drinkers, it’s unethical to randomly assign participants to one of the two groups and ask them to drink large amounts of alcohol for your experiment.

When you can’t assign participants to groups, you can also conduct a quasi-experimental study. In a quasi-experiment, you study the outcomes of pre-existing groups who receive treatments that you may not have any control over (e.g., heavy drinkers and social drinkers).

These groups aren’t randomly assigned, but may be considered comparable when some other variables (e.g., age or socioeconomic status) are controlled for.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomisation. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

Random selection, or random sampling, is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalisability of your results, while random assignment improves the internal validity of your study.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment, assign a unique number to every member of your study’s sample.

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.


Bhandari, P. (2023, February 13). Random Assignment in Experiments | Introduction & Examples. Scribbr. Retrieved 22 April 2024, from https://www.scribbr.co.uk/research-methods/random-assignment-experiments/


  • J Athl Train
  • v.43(2); Mar-Apr 2008

Issues in Outcomes Research: An Overview of Randomization Techniques for Clinical Trials

Minsoo Kang

1 Middle Tennessee State University, Murfreesboro, TN

Brian G Ragan

2 University of Northern Iowa, Cedar Falls, IA

Jae-Hyeon Park

3 Korea National Sport University, Seoul, Korea

Objective:

To review and describe randomization techniques used in clinical trials, including simple, block, stratified, and covariate adaptive techniques.

Background:

Clinical trials are required to establish treatment efficacy of many athletic training procedures. In the past, we have relied on evidence of questionable scientific merit to aid the determination of treatment choices. Interest in evidence-based practice is growing rapidly within the athletic training profession, placing greater emphasis on the importance of well-conducted clinical trials. One critical component of clinical trials that strengthens results is random assignment of participants to control and treatment groups. Although randomization appears to be a simple concept, issues of balancing sample sizes and controlling the influence of covariates a priori are important. Various techniques have been developed to account for these issues, including block, stratified randomization, and covariate adaptive techniques.

Advantages:

Athletic training researchers and scholarly clinicians can use the information presented in this article to better conduct and interpret the results of clinical trials. Implementing these techniques will increase the power and validity of findings of athletic medicine clinical trials, which will ultimately improve the quality of care provided.

Outcomes research is critical in the evidence-based health care environment because it addresses scientific questions concerning the efficacy of treatments. Clinical trials are considered the “gold standard” for outcomes in biomedical research. In athletic training, calls for more evidence-based medical research, specifically clinical trials, have been issued. 1 , 2

The strength of clinical trials is their superior ability to measure change over time from a treatment. Treatment differences identified from cross-sectional observational designs rather than experimental clinical trials have methodologic weaknesses, including confounding, cohort effects, and selection bias. 3 For example, using a nonrandomized trial to examine the effectiveness of prophylactic knee bracing to prevent medial collateral ligament injuries may suffer from confounders and jeopardize the results. One possible confounder is a history of knee injuries. Participants with a history of knee injuries may be more likely to wear braces than those with no such history. Participants with a history of injury are more likely to suffer additional knee injuries, unbalancing the groups and influencing the results of the study.

The primary goal of comparative clinical trials is to provide comparisons of treatments with maximum precision and validity. 4 One critical component of clinical trials is random assignment of participants into groups. Randomizing participants helps remove the effect of extraneous variables (eg, age, injury history) and minimizes bias associated with treatment assignment. Randomization is considered by most researchers to be the optimal approach for participant assignment in clinical trials because it strengthens the results and data interpretation. 4 – 9

One potential problem with small clinical trials (n < 100) 7 is that conventional simple randomization methods, such as flipping a coin, may result in imbalanced sample size and baseline characteristics (ie, covariates) among treatment and control groups. 9 , 10 This imbalance of baseline characteristics can influence the comparison between treatment and control groups and introduce potential confounding factors. Many procedures have been proposed for random group assignment of participants in clinical trials. 11 Simple, block, stratified, and covariate adaptive randomizations are some examples. Each technique has advantages and disadvantages, which must be carefully considered before a method is selected. Our purpose is to introduce the concept and significance of randomization and to review several conventional and relatively new randomization techniques to aid in the design and implementation of valid clinical trials.

What Is Randomization?

Randomization is the process of assigning participants to treatment and control groups, assuming that each participant has an equal chance of being assigned to any group. 12 Randomization has evolved into a fundamental aspect of scientific research methodology. Demands have increased for more randomized clinical trials in many areas of biomedical research, such as athletic training. 2 , 13 In fact, in the last 2 decades, internationally recognized major medical journals, such as the Journal of the American Medical Association and the BMJ , have been increasingly interested in publishing studies reporting results from randomized controlled trials. 5

Since Fisher 14 first introduced the idea of randomization in a 1926 agricultural study, the academic community has deemed randomization an essential tool for unbiased comparisons of treatment groups. Five years after Fisher's introductory paper, the first randomized clinical trial involving tuberculosis was conducted. 15 A total of 24 participants were paired (ie, 12 comparable pairs), and by a flip of a coin, each participant within the pair was assigned to either the control or treatment group. By employing randomization, researchers offer each participant an equal chance of being assigned to groups, which makes the groups comparable on the dependent variable by eliminating potential bias. Indeed, randomization of treatments in clinical trials is the only means of avoiding systematic characteristic bias of participants assigned to different treatments. Although randomization may be accomplished with a simple coin toss, more appropriate and better methods are often needed, especially in small clinical trials. These other methods will be discussed in this review.

Why Randomize?

Researchers demand randomization for several reasons. First, participants in various groups should not differ in any systematic way. In a clinical trial, if treatment groups are systematically different, trial results will be biased. Suppose that participants are assigned to control and treatment groups in a study examining the efficacy of a walking intervention. If a greater proportion of older adults is assigned to the treatment group, then the outcome of the walking intervention may be influenced by this imbalance. The effects of the treatment would be indistinguishable from the influence of the imbalance of covariates, thereby requiring the researcher to control for the covariates in the analysis to obtain an unbiased result. 16

Second, proper randomization ensures no a priori knowledge of group assignment (ie, allocation concealment). That is, researchers, participants, and others should not know to which group the participant will be assigned. Knowledge of group assignment creates a layer of potential selection bias that may taint the data. Schulz and Grimes 17 stated that trials with inadequate or unclear randomization tended to overestimate treatment effects up to 40% compared with those that used proper randomization. The outcome of the trial can be negatively influenced by this inadequate randomization.

Statistical techniques such as analysis of covariance (ANCOVA), multivariate ANCOVA, or both, are often used to adjust for covariate imbalance in the analysis stage of the clinical trial. However, the interpretation of this postadjustment approach is often difficult because imbalance of covariates frequently leads to unanticipated interaction effects, such as unequal slopes among subgroups of covariates. 18 , 19 One of the critical assumptions in ANCOVA is that the slopes of regression lines are the same for each group of covariates (ie, homogeneity of regression slopes). The adjustment needed for each covariate group may vary, which is problematic because ANCOVA uses the average slope across the groups to adjust the outcome variable. Thus, the ideal way of balancing covariates among groups is to apply sound randomization in the design stage of a clinical trial (before the adjustment procedure) instead of after data collection. In such instances, random assignment is necessary and guarantees validity for statistical tests of significance that are used to compare treatments.

How To Randomize?

Many procedures have been proposed for the random assignment of participants to treatment groups in clinical trials. In this article, common randomization techniques, including simple randomization, block randomization, stratified randomization, and covariate adaptive randomization, are reviewed. Each method is described along with its advantages and disadvantages. It is very important to select a method that will produce interpretable, valid results for your study.

Simple Randomization

Randomization based on a single sequence of random assignments is known as simple randomization. 10 This technique maintains complete randomness of the assignment of a person to a particular group. The most common and basic method of simple randomization is flipping a coin. For example, with 2 treatment groups (control versus treatment), the side of the coin (ie, heads  =  control, tails  =  treatment) determines the assignment of each participant. Other methods include using a shuffled deck of cards (eg, even  =  control, odd  =  treatment) or throwing a die (eg, below and equal to 3  =  control, over 3  =  treatment). A random number table found in a statistics book or computer-generated random numbers can also be used for simple randomization of participants.

This randomization approach is simple and easy to implement in a clinical trial. In large trials (n > 200), simple randomization can be trusted to generate similar numbers of participants among groups. However, randomization results could be problematic in relatively small sample size clinical trials (n < 100), resulting in an unequal number of participants among groups. For example, using a coin toss with a small sample size (n  =  10) may result in an imbalance such that 7 participants are assigned to the control group and 3 to the treatment group ( Figure 1 ).

[Figure 1]
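The coin-flip procedure, and the imbalance it can produce in small samples, can be sketched as follows (a hypothetical illustration; the seed and participant count are arbitrary). Because each assignment is an independent flip, the two groups are not guaranteed to come out equal in size:

```python
import random

def simple_randomization(n_participants, seed=None):
    """Assign each participant to a group by an independent 'coin flip'."""
    rng = random.Random(seed)
    groups = {"control": [], "treatment": []}
    for pid in range(1, n_participants + 1):
        side = rng.choice(["control", "treatment"])  # the coin flip
        groups[side].append(pid)
    return groups

groups = simple_randomization(10, seed=7)
# With such a small sample, the group sizes will often be unequal (e.g., 7 vs 3)
print({g: len(members) for g, members in groups.items()})
```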

Block Randomization

The block randomization method is designed to randomize participants into groups that result in equal sample sizes. This method is used to ensure a balance in sample size across groups over time. Blocks are small and balanced with predetermined group assignments, which keeps the numbers of participants in each group similar at all times. According to Altman and Bland, 10 the block size is determined by the researcher and should be a multiple of the number of groups (ie, with 2 treatment groups, block size of either 4 or 6). Blocks are best used in smaller increments as researchers can more easily control balance. 7 After block size has been determined, all possible balanced combinations of assignment within the block (ie, equal number for all groups within the block) must be calculated. Blocks are then randomly chosen to determine the participants' assignment into the groups.

For a clinical trial with control and treatment groups involving 40 participants, a randomized block procedure would be as follows: (1) a block size of 4 is chosen, (2) possible balanced combinations with 2 C (control) and 2 T (treatment) subjects are calculated as 6 (TTCC, TCTC, TCCT, CTTC, CTCT, CCTT), and (3) blocks are randomly chosen to determine the assignment of all 40 participants (eg, one random sequence would be [TTCC / TCCT / CTTC / CTTC / TCCT / CCTT / TTCC / TCTC / CTCT / TCTC]). This procedure results in 20 participants in both the control and treatment groups ( Figure 2 ).

[Figure 2]
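The three-step block procedure above can be sketched in Python (the block size and seed are illustrative choices). Because every block contains equal numbers of T and C, the final counts for 40 participants are exactly 20 and 20:

```python
import itertools
import random

def block_randomization(n_participants, block_size=4, seed=None):
    """Assign participants using randomly chosen balanced blocks of T/C."""
    rng = random.Random(seed)
    half = block_size // 2
    # Step 2: all balanced arrangements of one block (TTCC, TCTC, ... = 6 for size 4)
    blocks = [b for b in itertools.product("TC", repeat=block_size)
              if b.count("T") == half]
    # Step 3: randomly choose blocks until every participant is assigned
    sequence = []
    while len(sequence) < n_participants:
        sequence.extend(rng.choice(blocks))
    return sequence[:n_participants]

seq = block_randomization(40, block_size=4, seed=1)
print(seq.count("T"), seq.count("C"))  # 20 20
```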

Although balance in sample size may be achieved with this method, groups may be generated that are rarely comparable in terms of certain covariates. 6 For example, one group may have more participants with secondary diseases (eg, diabetes, multiple sclerosis, cancer) that could confound the data and may negatively influence the results of the clinical trial. Pocock and Simon 11 stressed the importance of controlling for these covariates because of serious consequences to the interpretation of the results. Such an imbalance could introduce bias in the statistical analysis and reduce the power of the study. 4 , 6 , 8 Hence, sample size and covariates must be balanced in small clinical trials.

Stratified Randomization

The stratified randomization method addresses the need to control and balance the influence of covariates. This method can be used to achieve balance among groups in terms of participants' baseline characteristics (covariates). Specific covariates must be identified by the researcher who understands the potential influence each covariate has on the dependent variable. Stratified randomization is achieved by generating a separate block for each combination of covariates, and participants are assigned to the appropriate block of covariates. After all participants have been identified and assigned into blocks, simple randomization occurs within each block to assign participants to one of the groups.

The stratified randomization method controls for the possible influence of covariates that would jeopardize the conclusions of the clinical trial. For example, a clinical trial of different rehabilitation techniques after a surgical procedure will have a number of covariates. It is well known that the age of the patient affects the rate of healing. Thus, age could be a confounding variable and influence the outcome of the clinical trial. Stratified randomization can balance the control and treatment groups for age or other identified covariates.

For example, with 2 groups involving 40 participants, the stratified randomization method might be used to control the covariates of sex (2 levels: male, female) and body mass index (3 levels: underweight, normal, overweight) between study arms. With these 2 covariates, possible block combinations total 6 (eg, male, underweight). A simple randomization procedure, such as flipping a coin, is used to assign the participants within each block to one of the treatment groups ( Figure 3 ).

[Figure 3]

Although stratified randomization is a relatively simple and useful technique, especially for smaller clinical trials, it becomes complicated to implement if many covariates must be controlled. 20 For example, too many block combinations may lead to imbalances in overall treatment allocations because a large number of blocks can generate small participant numbers within the block. Therneau 21 purported that a balance in covariates begins to fail when the number of blocks approaches half the sample size. If another 4-level covariate was added to the example, the number of block combinations would increase from 6 to 24 (2 × 3 × 4), for an average of fewer than 2 (40 / 24  =  1.7) participants per block, reducing the usefulness of the procedure to balance the covariates and jeopardizing the validity of the clinical trial. In small studies, it may not be feasible to stratify more than 1 or 2 covariates because the number of blocks can quickly approach the number of participants. 10

Stratified randomization has another limitation: it works only when all participants have been identified before group assignment. This method is rarely applicable, however, because clinical trial participants are often enrolled one at a time on a continuous basis. When baseline characteristics of all participants are not available before assignment, using stratified randomization is difficult. 7

Covariate Adaptive Randomization

Covariate adaptive randomization has been recommended by many researchers as a valid alternative randomization method for clinical trials. 9 , 22 In covariate adaptive randomization, a new participant is sequentially assigned to a particular treatment group by taking into account the specific covariates and previous assignments of participants. 9 , 12 , 18 , 23 , 24 Covariate adaptive randomization uses the method of minimization by assessing the imbalance of sample size among several covariates. This covariate adaptive approach was first described by Taves. 23

The Taves covariate adaptive randomization method allows for the examination of previous participant group assignments to make a case-by-case decision on group assignment for each individual who enrolls in the study. Consider again the example of 2 groups involving 40 participants, with sex (2 levels: male, female) and body mass index (3 levels: underweight, normal, overweight) as covariates. Assume the first 9 participants have already been randomly assigned to groups by flipping a coin. The 9 participants' group assignments are broken down by covariate level in Figure 4 . Now the 10th participant, who is male and underweight, needs to be assigned to a group (ie, control versus treatment). Based on the characteristics of the 10th participant, the Taves method adds marginal totals of the corresponding covariate categories for each group and compares the totals. The participant is assigned to the group with the lower covariate total to minimize imbalance. In this example, the appropriate categories are male and underweight, which results in the total of 3 (2 for male category + 1 for underweight category) for the control group and a total of 5 (3 for male category + 2 for underweight category) for the treatment group. Because the sum of marginal totals is lower for the control group (3 < 5), the 10th participant is assigned to the control group ( Figure 5 ).
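The marginal-total arithmetic of the Taves method can be sketched as follows; the data layout (`totals[group][level]`) is a hypothetical representation chosen for the example, not the article's notation.

```python
import random

def taves_assign(new_levels, totals, groups=("control", "treatment")):
    """Taves covariate adaptive assignment (sketch): sum each group's marginal
    totals over the new participant's covariate levels and assign the
    participant to the group with the smaller sum (ties broken at random).

    `totals[group][level]` counts previously assigned participants,
    e.g. totals["control"]["male"] == 2."""
    sums = {g: sum(totals[g].get(level, 0) for level in new_levels)
            for g in groups}
    low = min(sums.values())
    chosen = random.choice([g for g in groups if sums[g] == low])
    for level in new_levels:          # update the marginals for the next arrival
        totals[chosen][level] = totals[chosen].get(level, 0) + 1
    return chosen

# the worked example from the text: the 10th participant is male and underweight
totals = {"control":   {"male": 2, "underweight": 1},
          "treatment": {"male": 3, "underweight": 2}}
print(taves_assign(("male", "underweight"), totals))  # "control" (3 < 5)
```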

[Figure 4]

The Pocock and Simon method 11 of covariate adaptive randomization is similar to the method Taves 23 described. The difference in this approach is the temporary assignment of participants to both groups. This method uses the absolute difference between groups to determine group assignment. To minimize imbalance, the participant is assigned to the group determined by the lowest sum of the absolute differences among the covariates between the groups. For example, using the previous situation in assigning the 10th participant to a group, the Pocock and Simon method would (1) assign the 10th participant temporarily to the control group, resulting in marginal totals of 3 for male category and 2 for underweight category; (2) calculate the absolute difference between control and treatment group (males: 3 control – 3 treatment  =  0; underweight: 2 control – 2 treatment  =  0) and sum (0 + 0  =  0); (3) temporarily assign the 10th participant to the treatment group, resulting in marginal totals of 4 for male category and 3 for underweight category; (4) calculate the absolute difference between control and treatment group (males: 2 control – 4 treatment  =  2; underweight: 1 control – 3 treatment  =  2) and sum (2 + 2  =  4); and (5) assign the 10th participant to the control group because of the lowest sum of absolute differences (0 < 4).
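The five steps above can be sketched for the two-group case (the data layout is the same hypothetical one used for the Taves example):

```python
def pocock_simon_assign(new_levels, totals, groups=("control", "treatment")):
    """Pocock-Simon minimization (sketch, two groups): temporarily place the
    participant in each group, sum the absolute differences of the relevant
    covariate marginals, and keep the placement with the smaller total."""
    a, b = groups
    imbalance = {}
    for g in groups:
        # temporary assignment of the participant to group g
        trial = {grp: dict(totals[grp]) for grp in groups}
        for level in new_levels:
            trial[g][level] = trial[g].get(level, 0) + 1
        # sum of absolute differences over the participant's covariate levels
        imbalance[g] = sum(abs(trial[a].get(l, 0) - trial[b].get(l, 0))
                           for l in new_levels)
    return min(groups, key=lambda g: imbalance[g])

totals = {"control":   {"male": 2, "underweight": 1},
          "treatment": {"male": 3, "underweight": 2}}
print(pocock_simon_assign(("male", "underweight"), totals))  # "control" (0 < 4)
```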

Pocock and Simon 11 also suggested a variance approach. Instead of calculating the absolute difference between groups, this approach calculates the variance among treatment groups. Although the variance method performs similarly to the absolute difference method, both approaches suffer from the limitation of handling only categorical covariates. 25

Frane 18 introduced a covariate adaptive randomization method for both continuous and categorical covariates. Frane used P values to identify imbalance among treatment groups: a smaller P value represents more imbalance among treatment groups.

The Frane method for assigning participants to either the control or treatment group would include (1) temporarily assigning the participant to both the control and treatment groups; (2) calculating P values for each of the covariates using a t test or analysis of variance (ANOVA) for continuous variables and a goodness-of-fit χ² test for categorical variables; (3) determining the minimum P value for each temporary assignment, where a smaller minimum P value indicates greater imbalance; and (4) assigning the participant to the group with the larger minimum P value (ie, the assignment that avoids the greater imbalance).

Going back to the previous example of assigning the 10th participant (male and underweight) to a group, the Frane method would result in assignment to the control group. The decision was made by calculating P values for each of the covariates using the χ² goodness-of-fit test, as represented in the Table. The t tests and ANOVAs were not used because the covariates in this example were categorical. Based on the Table, the minimum P values were 1.0 for the control group and 0.317 for the treatment group. The 10th participant was assigned to the control group because of the higher minimum P value, which indicates better balance in the control group (1.0 > 0.317).
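A loose sketch of the select-by-largest-minimum-P rule follows. It is an illustration under simplifying assumptions, not a faithful implementation of Frane's exact tests: here each covariate's balance is scored by a goodness-of-fit test of its level counts against an equal split, and the chi-square survival function is hand-coded for 1 and 2 degrees of freedom (covering two- and three-level covariates).

```python
import math

def chi2_sf(x, df):
    """Chi-square survival function for df = 1 or 2 only (a sketch;
    use a statistics library for general df)."""
    if df == 1:
        return math.erfc(math.sqrt(x / 2))
    if df == 2:
        return math.exp(-x / 2)
    raise NotImplementedError

def gof_pvalue(observed):
    """Goodness-of-fit P value of observed level counts against an equal split."""
    expected = sum(observed) / len(observed)
    stat = sum((o - expected) ** 2 / expected for o in observed)
    return chi2_sf(stat, df=len(observed) - 1)

def frane_assign(group_counts, groups=("control", "treatment")):
    """For each temporary placement, take the minimum P value over the
    covariates (more imbalance = smaller P), then pick the placement with
    the LARGER minimum P value. `group_counts[g]` maps covariate name to
    level counts after temporarily adding the participant to group g
    (a hypothetical data layout)."""
    min_p = {g: min(gof_pvalue(c) for c in group_counts[g].values())
             for g in groups}
    return max(groups, key=lambda g: min_p[g])

counts = {"control":   {"bmi": [2, 2, 2]},   # perfectly balanced -> P = 1.0
          "treatment": {"bmi": [3, 2, 1]}}   # imbalanced -> smaller P
print(frane_assign(counts))  # "control"
```

Note that a perfectly balanced set of counts yields P = 1.0, matching the control-group value in the worked example, and a lone two-level count of [3, 1] yields P ≈ 0.317, the treatment-group value quoted in the text.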

Probabilities From χ² Goodness-of-Fit Tests for the Example Shown in Figure 5 (Frane 18 Method)

[Table]

Covariate adaptive randomization produces less imbalance than other conventional randomization methods and can be used successfully to balance important covariates among control and treatment groups. 6 Although the balance of covariates among groups using the stratified randomization method begins to fail when the number of blocks approaches half the sample size, covariate adaptive randomization can better handle the problem of increasing numbers of covariates (ie, increased block combinations). 9

One concern with these covariate adaptive randomization methods is that treatment assignments sometimes become highly predictable. Investigators using covariate adaptive randomization sometimes come to believe that the group assignment for the next participant can be readily predicted, going against the basic concept of randomization. 12 , 26 , 27 This predictability stems from the ongoing assignment of participants to groups, wherein the current allocation of participants may suggest future group assignments. In their review, Scott et al 9 argued that this predictability also holds for other methods, including stratified randomization, and that it should not be overly penalized. Zielhuis et al 28 and Frane 18 suggested a practical approach to prevent predictability: randomly assign a small number of participants to the groups before the covariate adaptive randomization technique is applied.

The complicated computation process of covariate adaptive randomization increases the administrative burden, thereby limiting its use in practice. A user-friendly computer program for covariate adaptive randomization is available (free of charge) upon request from the authors (M.K., B.G.R., or J.H.P.). 29

Conclusions

Our purpose was to introduce randomization, including its concept and significance, and to review several randomization techniques to guide athletic training researchers and practitioners to better design their randomized clinical trials. Many factors can affect the results of clinical research, but randomization is considered the gold standard in most clinical trials. It eliminates selection bias, ensures balance of sample size and baseline characteristics, and is an important step in guaranteeing the validity of statistical tests of significance used to compare treatment groups.

Before choosing a randomization method, several factors need to be considered, including the size of the clinical trial; the need for balance in sample size, covariates, or both; and participant enrollment. 16 Figure 6 depicts a flowchart designed to help select an appropriate randomization technique. For example, a power analysis for a clinical trial of different rehabilitation techniques after a surgical procedure indicated a sample size of 80. A well-known covariate for this study is age, which must be balanced among groups. Because of the nature of the study with postsurgical patients, participant recruitment and enrollment will be continuous. Using the flowchart, the appropriate randomization technique is covariate adaptive randomization.

[Figure 6]

Simple randomization works well for a large trial (eg, n > 200) but not for a small trial (n < 100). 7 To achieve balance in sample size, block randomization is desirable. To achieve balance in baseline characteristics, stratified randomization is widely used. Covariate adaptive randomization, however, can achieve better balance than the other randomization methods and can be used effectively in clinical trials.

Acknowledgments

This study was partially supported by a Faculty Grant (FRCAC) from the College of Graduate Studies, at Middle Tennessee State University, Murfreesboro, TN.

Minsoo Kang, PhD; Brian G. Ragan, PhD, ATC; and Jae-Hyeon Park, PhD, contributed to conception and design; acquisition and analysis and interpretation of the data; and drafting, critical revision, and final approval of the article.


RESEARCH RANDOMIZER

Random sampling and random assignment made easy.

Research Randomizer is a free resource for researchers and students in need of a quick way to generate random numbers or assign participants to experimental conditions. This site can be used for a variety of purposes, including psychology experiments, medical trials, and survey research.

GENERATE NUMBERS

In some cases, you may wish to generate more than one set of numbers at a time (e.g., when randomly assigning people to experimental conditions in a "blocked" research design). If you wish to generate multiple sets of random numbers, simply enter the number of sets you want, and Research Randomizer will display all sets in the results.

Specify how many numbers you want Research Randomizer to generate in each set. For example, a request for 5 numbers might yield the following set of random numbers: 2, 17, 23, 42, 50.

Specify the lowest and highest value of the numbers you want to generate. For example, a range of 1 up to 50 would only generate random numbers between 1 and 50 (e.g., 2, 17, 23, 42, 50). Enter the lowest number you want in the "From" field and the highest number you want in the "To" field.

Selecting "Yes" means that any particular number will appear only once in a given set (e.g., 2, 17, 23, 42, 50). Selecting "No" means that numbers may repeat within a given set (e.g., 2, 17, 17, 42, 50). Please note: Numbers will remain unique only within a single set, not across multiple sets. If you request multiple sets, any particular number in Set 1 may still show up again in Set 2.
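The form options described above can be mimicked in a few lines. This is a hypothetical helper illustrating the behavior (uniqueness within a set only, optional sorting), not Research Randomizer's actual implementation:

```python
import random

def randomizer_sets(n_sets, per_set, low, high, unique=True, sort=False):
    """Generate `n_sets` sets of `per_set` numbers in [low, high].
    Uniqueness holds within a set only, never across sets."""
    sets = []
    for _ in range(n_sets):
        if unique:
            numbers = random.sample(range(low, high + 1), per_set)
        else:
            numbers = [random.randint(low, high) for _ in range(per_set)]
        sets.append(sorted(numbers) if sort else numbers)
    return sets

for i, s in enumerate(randomizer_sets(2, 5, 1, 50), start=1):
    print(f"Set #{i}:", ", ".join(map(str, s)))
```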

Sorting your numbers can be helpful if you are performing random sampling, but it is not desirable if you are performing random assignment. To learn more about the difference between random sampling and random assignment, please see the Research Randomizer Quick Tutorial.

Place Markers let you know where in the sequence a particular random number falls (by marking it with a small number immediately to the left).

With Place Markers Off, your results will look something like this:
Set #1: 2, 17, 23, 42, 50
Set #2: 5, 3, 42, 18, 20
This is the default layout Research Randomizer uses.

With Place Markers Within, your results will look something like this:
Set #1: p1=2, p2=17, p3=23, p4=42, p5=50
Set #2: p1=5, p2=3, p3=42, p4=18, p5=20
This layout allows you to know instantly that the number 23 is the third number in Set #1, whereas the number 18 is the fourth number in Set #2. Notice that with this option, the Place Markers begin again at p1 in each set.

With Place Markers Across, your results will look something like this:
Set #1: p1=2, p2=17, p3=23, p4=42, p5=50
Set #2: p6=5, p7=3, p8=42, p9=18, p10=20
This layout allows you to know that 23 is the third number in the sequence, and 18 is the ninth number over both sets. As discussed in the Quick Tutorial, this option is especially helpful for doing random assignment by blocks.

Please note: By using this service, you agree to abide by the SPN User Policy and to hold Research Randomizer and its staff harmless in the event that you experience a problem with the program or its results. Although every effort has been made to develop a useful means of generating random numbers, Research Randomizer and its staff do not guarantee the quality or randomness of numbers generated. Any use to which these numbers are put remains the sole responsibility of the user who generated them.

Note: By using Research Randomizer, you agree to its Terms of Service .

Storage Assignment Using Nested Metropolis Sampling and Approximations of Order Batching Travel Costs

  • Original Research
  • Open access
  • Published: 23 April 2024
  • Volume 5, article number 477 (2024)


  • Johan Oxenstierna   ORCID: orcid.org/0000-0002-6608-9621 1 , 2 ,
  • Jacek Malec   ORCID: orcid.org/0000-0002-2121-1937 1 &
  • Volker Krueger   ORCID: orcid.org/0000-0002-8836-8816 1  

The Storage Location Assignment Problem (SLAP) is of central importance in warehouse operations. An important research challenge lies in generalizing the SLAP such that it is not tied to certain order-picking methodologies, constraints, or warehouse layouts. We propose the OBP-based SLAP, where the quality of a location assignment is obtained by optimizing an Order Batching Problem (OBP). For the optimization of the OBP-based SLAP, we propose a nested Metropolis algorithm. The algorithm includes an OBP-optimizer to obtain the cost of an assignment, as well as a filter which approximates OBP costs using a model based on the Quadratic Assignment Problem (QAP). In experiments, we tune two key parameters in the QAP model, and test whether its predictive quality warrants its use within the SLAP optimizer. Results show that the QAP model’s per-sample accuracy is only marginally better than a random baseline, but that it delivers predictions much faster than the OBP optimizer, implying that it can be used as an effective filter. We then run the SLAP optimizer with and without using the QAP model on industrial data. We observe a cost improvement of around 23% over 1 h with the QAP model, and 17% without it. We share results for public instances on the TSPLIB format.


Introduction

Charris et al. [ 7 ] give the following definition of a Storage Location Assignment Problem (SLAP): the “allocation of products into a storage space and optimization of the material handling (…) or storage space utilization [costs]”. The relationship between material handling costs, on the one hand, and storage assignment, on the other, can be illustrated with an example: if a vehicle needs to pick a set of products, its travel cost clearly depends on where the products are stored in the warehouse. At the same time, the development of an effective storage strategy needs to consider various features of material handling, such as vehicle constraints, traffic conventions and picking methodologies.

In this paper, we work with a version of the SLAP which is particularly generalizable. Kübler et al. [ 18 ] name this version the “joint storage location assignment, order batching and picker routing problem”. The main characteristic of this version is the inclusion of two optimization problems in the SLAP:

The Order Batching Problem (OBP), where vehicles are assigned to carry sets of orders (an order is a set of products) [ 17 ].

The Picker Routing Problem , where a short picking path of a vehicle is found for the products that the vehicle is assigned to pick. The Picker Routing Problem is a Traveling Salesman Problem (TSP) applied in a warehouse environment [ 25 ].

Henceforth, we refer to this version as the OBP-based SLAP. A key advantage of using the OBP within the SLAP is the added flexibility and generality of the order on a conceptual level: For example, optimizing the OBP-based SLAP gives opportunity to also optimize the TSP-based SLAP [ 23 ]. When it comes to product locations, the sole difference between the OBP and the OBP-based SLAP is that locations for all products are assumed fixed in the former while, in the latter, they are assumed mutable (for a subset of locations in our case).

It is of scientific importance to be able to compare optimization approaches and solutions. For the SLAP, this is made difficult by the many versions of the problem. As the extensive literature review by Charris et al. [ 7 ] shows, there is little consensus regarding which versions are more important, or specifically, which features would represent a standardized version. Examples of such features are dynamicity, warehouse layout, vehicle types, cost functions, reassignment scenarios and picking methodologies. There is also a shortage of benchmark datasets for any version of the SLAP, which prevents the reproducibility of experiments [ 2 , 16 ]. As part of our contribution for a standardized version, we suggest a modified TSPLIB format [ 26 ] (section “ Datasets ”). There are several ways in which to balance between simplicity, reproducibility and industrial applicability when developing SLAP versions and corresponding instances, however. From the generalization perspective, our model is advantageous in two main areas: Order-picking methodology and warehouse layout. But it is weak in two other areas: dynamicity and reassignment scenarios. We describe the meaning of these choices further in the light of prior work (section “ Related Work ”) and in our problem formulation (section “ Problem Formulation ”). We invite the community to debate which features are more or less important for a standardized version.

In section “ Optimization Algorithm ”, we introduce our SLAP optimizer. It is based on the Metropolis algorithm, a type of Markov Chain Monte Carlo (MCMC) method. A core feature of the optimizer is that the quality of a location assignment candidate is retrieved by optimizing an OBP. Due to the OBP’s NP-hardness, it must be optimized in a way that trades off solution quality with CPU-time. For this purpose, we use an OBP optimizer with a high degree of computational efficiency [ 22 ]. Within the SLAP optimizer, the OBP optimizer is still computationally expensive, and we show that it can be assisted by fast cost approximations from a Quadratic Assignment Problem (QAP) model. Finally, we test the performance of the SLAP optimizer with and without inclusion of the QAP approximations. Cost improvements are around 23% over 1 h with the QAP model, and 17% without. In summary, we make three concrete contributions:

Formulation of an OBP-based SLAP optimization model and a corresponding benchmark instance standard.

QAP approximation model to predict OBP travel costs and experiments on generated instances to test whether the use of QAP approximations within a SLAP optimizer can be justified.

An OBP-based SLAP optimizer (QAP-OBP) and experiments on industry instances to test its computational efficiency. Comparison of results with and without usage of QAP approximations.

Related Work

This section goes through general strategies for conducting storage location assignment, as well as ways in which their quality can be evaluated. Various SLAP formulations and proposed optimization algorithms are covered. Our primary focus will be on the standard picker-to-parts arrangement. We specifically refer to the work of Kübler et al. [ 18 ], as their proposed model aligns with ours.

There exist numerous general strategies for conducting storage location assignment [ 7 ]. Three key strategies are Dedicated, Class-based and Random:

Dedicated Each product is assigned to a specific location which never changes. This strategy is suitable if the product collection changes rarely and simplicity is desired. Additionally, human pickers can leverage this strategy by familiarizing themselves with specific products and their corresponding locations, which might speed up their picking [ 35 ].

Random Each product can be assigned any available location in the warehouse. This is suitable whenever the product collection changes frequently.

Class-based (zoning) The warehouse is partitioned into sections, and the products are classified based on their demand. Each class is assigned a zone. The outline of the zone can be regarded as dedicated in that it does not change, whereas the placement of each product in a zone is assumed to be random [ 21 ]. Class-based storage assignment can therefore be regarded as a middle ground between dedicated and random.

The quality of a location assignment is commonly evaluated based on some model of aggregate travel cost. For this purpose, a simplified simulation of order-picking in the warehouse can be used [ 7 , 21 ]. Some proposals include the simulation of order-picking by the Cube per Order Index (COI) [ 15 ]. COI includes the volume of a product and the frequency with which it is picked (historically or future-forecasted). Products with high pick frequency and relatively low volume are subsequently assigned to locations close to the depot. Since orders may contain products which are not located close to each other, COI is only adequate for order-picking scenarios where orders contain one product and vehicles carry one product at a time. This may be sufficient for pallet picking or when certain types of robots are used [ 3 ]. Mantel et al. [ 21 ], introduced Order Oriented Slotting (OOS) where the number of products in an order may be greater than one. A similar model to OOS is used by Fontana and Nepomuceno [ 10 ], Lee et al. [ 20 ] and Žulj et al. [ 37 ]. The picking cost of an order in OOS can in some cases be modeled using a Quadratic Assignment Problem (QAP) [ 21 ]. The QAP computes the sum of element-wise products of weights and frequencies [ 1 ] and for an order this can be translated into distances between products and how often they are picked. Nevertheless, a QAP on its own is often not sufficient to model a SLAP without extensive use of heuristics and constraints for warehouse layouts and picking methodologies [ 21 ]. For a layout-agnostic OBP-based SLAP, graph-based QAP techniques could be attempted, but hitherto they have only been applied on related problems [ 31 , 36 ].
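The QAP-style cost mentioned above ("the sum of element-wise products of weights and frequencies") can be made concrete with a small sketch. The matrices below are toy data chosen for illustration, not from any cited instance:

```python
def qap_cost(distance, frequency):
    """QAP-style aggregate cost sketch: distance[i][j] is the travel distance
    between the locations of products i and j, and frequency[i][j] is how
    often the two products are picked in the same order. Lower is better."""
    n = len(distance)
    return sum(distance[i][j] * frequency[i][j]
               for i in range(n) for j in range(n))

# toy example: products 0 and 1 are ordered together often,
# so an assignment that places them close together lowers the cost
distance  = [[0, 1, 5], [1, 0, 5], [5, 5, 0]]
frequency = [[0, 9, 1], [9, 0, 1], [1, 1, 0]]
print(qap_cost(distance, frequency))  # 38
```

A location assignment changes `distance` (by relocating products), while `frequency` reflects historical or forecast picking; minimizing this sum over assignments is the QAP view of the SLAP.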

There is only limited research on SLAPs where vehicles are expected to carry multiple orders and where an Order Batching Problem (OBP) is integrated into the SLAP optimization process. One example is Xiang et al. (2018) and [ 33 ], who use this approach in a robotic warehouse where the vehicles are pods or mobile racks, which is not easily comparable to a picker-to-parts system. Another example is Kübler et al. [ 18 ], which we look closer at below.

Travel distance or time are commonly used to evaluate SLAP solution quality in the above mentioned models, but there are several alternatives and extensions. Lee et al. [ 20 ], for example, study the effect of location assignment and traffic congestion in the warehouse. Assigning too many products to locations close to the depot (the goal in common COI) may lead to traffic congestion, which should ideally be considered in an industrial model. Lee et al. [ 20 ], formulate Correlated and Traffic Balanced Storage Assignment (C&TBSA) as a multi-objective problem with travel cost on the one hand, and traffic congestion avoidance on the other. Larco et al. [ 19 ], include worker welfare in their evaluation of solution quality. If picking is conducted by humans who move products from shelves onto a vehicle, the weight and volume, as well as the height of the shelf the product is placed on, can have an impact on worker welfare. Parameters such as "ergonomic loading," "human energy expenditure," or "worker discomfort" [ 7 ] can be used to quantify worker welfare.

The SLAP can be categorized into two main groups based on the number of location assignments required. The assignment may be a “re-warehousing” operation, in which a large portion of the warehouse’s products are (re)assigned [ 16 ]. More often, however, only a small subset of products is (re)assigned a location; this is called “healing” [ 16 ]. Solution proposals involving healing often look closely at different types of scenarios for carrying out initial assignments for new products in the warehouse, or reassignments for products already in the warehouse. Kübler et al. [ 18 ] propose four such scenarios.

Empty storage location A product is assigned to a previously unoccupied location.

Direct exchange A product changes location with another product.

Indirect exchange 1 A product is moved to another location which is occupied by another product. The latter product is moved to a third, empty location.

Indirect exchange 2 A product is moved to a new location which is occupied by a second product. The second product is moved to a new location which is occupied by a third product. The third product is moved to the original location of the first product.

The above scenarios are all associated with varying levels of effort, ranging from the lightest in the first scenario to the heaviest in the fourth. Kübler et al. quantify these efforts by including both physical and administrative times, which are transformed into effort terms by proposed proportionalities.

Concerning SLAP optimizers, proposals include models capable of obtaining optimal solutions, such as Mixed Integer Linear Programming (MILP), dynamic programming and branch and bound algorithms [ 7 ]. The warehouse environment is often simplified to a significant degree when optimal solutions are sought [ 7 , 13 , 16 , 19 ]. The main simplification relates to order-picking using COI or OOS. Other simplifications involve limiting the number of products [ 13 ], limiting the number of locations [ 30 ], or requiring the conventional warehouse rack layout [ 18 ]. The conventional layout assumes Manhattan-style blocks of aisles and cross-aisles, and it is used almost exclusively in the existing literature on the SLAP (we are aware of only two exceptions, using the “fishbone” and “cascade” layouts [ 6 , 7 ]).

Most proposed SLAP optimizers provide non-exact solutions using heuristics or meta-heuristics. One example is multi-phase optimization where the first phase proposes possible locations for products, and the second phase carries out the assignments and evaluates them [ 32 ]. In Kübler et al. [ 18 ], a heuristic zoning optimizer is used to generate location assignments, and a Discrete Evolutionary Particle Swarm Optimizer (DEPSO) is used to optimize an OBP for the evaluation of the assignments. DEPSO is a modification of a standard PSO algorithm that addresses the risk of convergence on local minima and allows for a discrete search space. Other heuristic or meta-heuristic approaches include Genetic and Evolutionary Algorithms [ 9 , 20 ], Ant Colony Optimization [ 34 ] and Simulated Annealing [ 35 ]. If TSP optimization is desired within a SLAP, S-shape or Largest Gap algorithms [ 28 ] are often utilized. For TSP-optimization on unconventional layouts with a pre-computed distance matrix, Google OR-tools or Concorde have been proposed [ 22 , 27 ].

Evaluating the quality of results in prior work is challenging due to the variability of SLAP models. Below are a few examples where result quality is judged based on a percentage saving in travel distance or time: For conventional warehouse layouts, reassignment costs and dynamic picking patterns, Kofler et al. [ 16 ], report best savings around 21%. Kubler et al. (2020), report best savings around 22% in a similar scenario. Zhang et al. [ 35 ] report best savings around 18% on simulated data with thousands of product locations, but without reassignment costs. In a similar setting, for a few hundred products, Trindade et al. (2022) report best savings around 33%.

Nested Metropolis Sampling

The proposed optimizer (section “ Optimization Algorithm ”) is based on a nested Metropolis algorithm first introduced by Christen and Fox [ 8 ]. The Metropolis algorithm is a type of Markov Chain Monte Carlo (MCMC) method, which first draws a sample \({x}_{i+1}\) based on a desired feature distance (excluding costs) to a previous sample \({x}_{i}\) . The distance is given by some probability distribution \(q\left({x}_{i+1}|{x}_{i}\right)\) , and it is usually chosen such that the distance between \({x}_{i+1}\) and \({x}_{i}\) is low with a high probability (Mackay 1998). The accept probability is then computed based on some function that takes the costs of the new and previous samples as input [ 29 ]. Common Metropolis sampling assumes that there is only one cost function, \({f}^{*}\) , and since we wish to include an approximation of this cost, \(f\) , we use a modification [ 8 ]. Nested Metropolis sampling is shown in flowchart form in Fig.  1 .

figure 1

Nested Metropolis Sampling. The inner loop computes a cheap (in terms of CPU-time) approximation of a sample cost and if the approximation is strong, the sample is promoted to the outer loop where an expensive ground-truth cost is computed

After a first sample \({x}_{i}\) has been initialized (i), a new sample \({x}_{i+1}\) is generated (ii) and its cost approximated \(f\left({x}_{i+1}\right)\) (iii). If the approximation is deemed strong enough (probabilistically) relative to \(f\left({x}_{i}\right)\) , the sample is promoted (iv) to the next step where its ground-truth cost \({f}^{*}\left({x}_{i+1}\right)\) is computed (v). The accept filter (vi) is only used for promoted samples.
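The loop described in steps (i)-(vi) can be sketched as follows. This is a simplified illustration, assuming Metropolis-style exponential accept probabilities on a toy 1-D cost; in the paper, `f_true` is an expensive OBP optimization and `f_cheap` the QAP approximation:

```python
import math, random

def nested_metropolis(x0, propose, f_cheap, f_true, steps=1000, temp=1.0):
    """Sketch of nested (delayed-acceptance) Metropolis for cost minimization.
    `f_cheap` is the fast approximation used by the promote filter,
    `f_true` the expensive ground-truth cost."""
    x, cost = x0, f_true(x0)
    best, best_cost = x, cost
    for _ in range(steps):
        y = propose(x)                                        # (ii) new sample
        promote = min(1.0, math.exp(-(f_cheap(y) - f_cheap(x)) / temp))
        if random.random() > promote:                         # (iii)-(iv) filter
            continue                                          # not promoted
        y_cost = f_true(y)                                    # (v) ground truth
        accept = min(1.0, math.exp(-(y_cost - cost) / temp))  # (vi) accept test
        if random.random() < accept:
            x, cost = y, y_cost
            if cost < best_cost:
                best, best_cost = x, cost
    return best, best_cost

# toy usage with assumed stand-in costs, not the paper's OBP costs
random.seed(0)
f_true = lambda v: (v - 3.0) ** 2
f_cheap = lambda v: abs(v - 3.0)        # cheap proxy with the same minimum
propose = lambda v: v + random.uniform(-0.5, 0.5)
best, best_cost = nested_metropolis(0.0, propose, f_cheap, f_true, steps=2000)
```

The point of the filter is that `f_true` is evaluated only for promoted samples, so most proposals cost almost nothing to reject.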

For a cost minimization problem, the promote and accept probabilities can be computed based on the following equations [ 8 ]:
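The equations themselves did not survive extraction. A hedged reconstruction, assuming Metropolis-style exponential acceptance for cost minimization with approximate cost \(f\) and ground-truth cost \({f}^{*}\) (a sketch of the standard delayed-acceptance forms, not necessarily the authors' exact notation):

```latex
\alpha\left(x_{i+1}\mid x_{i}\right) = \min\!\left(1,\; e^{-\left(f(x_{i+1}) - f(x_{i})\right)}\right)
\qquad
\alpha^{*}\left(x_{i+1}\mid x_{i}\right) = \min\!\left(1,\; e^{-\left(f^{*}(x_{i+1}) - f^{*}(x_{i})\right) + \left(f(x_{i+1}) - f(x_{i})\right)}\right)
```

The second expression corrects the first-stage (approximate) decision, so that the overall chain targets the ground-truth cost.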

where \(\alpha \left({x}_{i+1}|{x}_{i}\right)\) denotes the promote probability and \({\alpha }^{*}\left({x}_{i+1}|{x}_{i}\right)\) the accept probability.

Problem Formulation

Objective function.

The objective function in the OBP-based SLAP is based on the ones formulated in Henn and Wäscher [ 14 ] and Oxenstierna et al. [24], i.e., the minimization of cost in an Order Batching Problem (OBP):

where \(\mathcal{O}\) denotes orders, \(\mathcal{B}\) denotes batches and \({D}^{x}\left(b\right)\) denotes the distance of a TSP solution, i.e., the distance needed to pick batch \(b\) . Batch \(b\) is a set of orders and \(v\in V\) denotes a vehicle. Each vehicle can carry one batch, and the number of orders that can fit in the batch is governed by vehicle capacity (such as dimensions, bins, or number of orders or products). \({a}_{vb}\) denotes a binary variable set to 1 if vehicle \(v\) is assigned to pick \(b\) and 0 otherwise. Orders consist of products \(\mathcal{O}\in {2}^{\mathcal{P}}\) , where each product \(p\in \mathcal{P}\) is a tuple consisting of a unique key (Stock Keeping Unit), a Cartesian location \(loc\left(p\right)\) , and a positive quantity of how many \(p\) are available at \(loc\left(p\right)\) . The locations of all products are given by the location assignment vector \(x\) , where the elements represent products and the indices represent locations (each index is mapped to a Cartesian coordinate).
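In minimal code form, and treating TSP optimization as a black box, the objective can be sketched as follows (the names are illustrative, not the paper's API):

```python
def obp_cost(batches, tsp_distance):
    """Illustrative sketch of the OBP objective (Eq. 3): the summed
    picking distance over all batches, assuming each batch is already
    assigned to one vehicle and tsp_distance returns D^x(b) for a batch.
    """
    return sum(tsp_distance(b) for b in batches)
```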

The mapping of location keys to coordinates and computation of distances between pairs of locations is based on a digitization pipeline for warehouses on any 2D obstacle layout and usage of the Floyd-Warshall graph algorithm. Details on this digitization pipeline and the OBP (including TSP-optimization for \({D}^{x}(b)\) and usage of vehicle capacity in \({a}_{vb}\) ) are beyond the scope of this paper, so for specifics we refer to Oxenstierna et al. [24] and Rensburg [ 27 ].

The difference between the OBP and the OBP-based SLAP mainly concerns product locations. In Oxenstierna et al. [24], each product \(p\in \mathcal{P}\) has a fixed location, meaning that \(x\) in \({f}^{*}\left(x\right)\) is immutable. In the OBP-based SLAP, however, a subset of products \({\mathcal{P}}_{s}\subset \mathcal{P}\) do not have fixed locations, which means that some elements in \(x\) can change indices in the vector. The OBP-based SLAP objective consists of finding a location assignment \(x\) such that the OBP cost in Eq.  3 is minimized:

This objective lacks reassignment costs and is therefore a version of the “empty storage location” scenario I in Kübler et al. [ 18 ] (section “ Related Work ”). Excluding reassignment costs is motivated for this scenario, since the initial location assignment of new products in a warehouse is not optional, but a requirement. Kübler et al.’s other scenarios all involve reassignments. Contrary to the initial assignments that we work with, reassignments are optional, and potential gains in travel cost must therefore be weighed against reassignment costs.

Although reassignments should ideally be included in a complete SLAP model, a standardized SLAP needs to be a trade-off between simplicity and complexity. In the TSP-based SLAP [ 23 ] it is shown that the optimization of reassignments is NP-hard and not easily combined with order-picking optimization within a SLAP. The TSP-based SLAP includes reassignments, but uses the TSP instead of the OBP to optimize order-picking. The OBP-based SLAP excludes reassignments, but includes the OBP, a significantly more challenging problem than the TSP. As is often the case in the literature on the SLAP, the choice of optimization model depends on which features are considered more important for the use case at hand.

Fast OBP Cost Approximation

One key difficulty with the OBP-based SLAP is that the OBP poses a highly intractable problem. Even for relatively small OBP instances, a significant amount of CPU-time is needed to obtain substantial cost improvements [ 18 , 22 ]. In the case of the OBP-based SLAP, this means that it would require a large amount of CPU-time to minimize cost for many assignment candidates \(x\) (Eq.  4 ). To resolve this problem, we propose to include an approximation of \({f}^{*}\left(x\right)\) :

where \(w\) denotes weight, where \({d}_{{l}_{1}{l}_{2}}^{x}\) denotes distance between two locations \({l}_{1},{l}_{2}\) and \(a\left(p,l\right)\) a function which returns 1 if product \(p\) is located at location \(l\) and 0 otherwise. \(f\left(x\right)\) is the element-wise summation of weights times distances. The cell values in the weight matrix represent the number of times two products, \({p}_{1},{p}_{2}\) , appear in the same order \(o\in \mathcal{O}\) . The (shortest) distances between all pairs of product locations are assumed pre-computed and stored in memory. We refer to Eq.  5 as the Quadratic Assignment Problem (QAP) model. Note that we never minimize it. For the \(f\left(x\right)\) approximation to be of use, we proceed to discuss how its ability to predict \({f}^{*}\left(x\right)\) can be evaluated.
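A minimal sketch of the Eq. 5 computation, assuming the co-order weight matrix and the pre-computed location distance matrix are given as arrays (the function and variable names are ours):

```python
import numpy as np

def qap_cost(x, W, D):
    """Cheap QAP-style cost approximation in the spirit of Eq. 5.

    x -- location assignment: x[p] = location index of product p
    W -- co-order weight matrix: W[p1, p2] = number of orders that
         contain both p1 and p2
    D -- pre-computed shortest-distance matrix between locations
    """
    locs = np.asarray(x)
    # element-wise sum of weights times distances between the assigned
    # locations of every product pair
    return float(np.sum(W * D[np.ix_(locs, locs)]))
```

Because `D` is pre-computed and the computation is a single vectorized sum, this is orders of magnitude cheaper than solving an OBP for the same assignment.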

Assuming a dataset of finite samples with approximated and ground truth costs \(\left(x,{f\left(x\right), f}^{*}\left(x\right)\right)\in X,|X|\in {\mathbb{Z}}^{+}\) , \({f\left(x\right), f}^{*}\left(x\right) \in {\mathbb{R}}^{+}\) , the predictive quality of \(f\left(X\right)\) versus \({f}^{*}\left(X\right)\) is obtainable through softmax cross-entropy [ 4 , 5 ]:

where \({\mathbb{P}}\left(f\left({x}_{i}\right)\right)\) and \({\mathbb{P}}\left({f}^{*}\left({x}_{i}\right)\right)\) denote the probabilities of approximate and ground truth costs of sample \({x}_{i}\) , respectively, where \(\left({x}_{i},{f\left({x}_{i}\right), f}^{*}\left({x}_{i}\right)\right)\in X\) . \(L\) is the loss , i.e., a distance heuristic between \(f\left(X\right)\) and \({f}^{*}\left(X\right)\) . This approach can be extended into Normalized Discounted Cumulative Gain (NDCG) [ 4 ].

\({\pi }_{f\left(X\right)}\) is a ranking (an ordering of samples \(X\) according to their costs \(f(X)\) ) and \(rel({\pi }_{f\left(X\right)}\left(i\right))\) is the relevance at rank \({\pi }_{f\left(X\right)}\left(i\right)\) . \(IDCG\) denotes an ideal value, where \(rel({\pi }_{{f}^{*}\left(X\right)}\left(1\right))>rel({\pi }_{{f}^{*}\left(X\right)}\left(2\right))>\dots > rel({\pi }_{{f}^{*}\left(X\right)}\left(|X|\right))\) , i.e., the case when the relevance of a sample corresponds with how highly it is ranked. Bruch et al. [ 4 ] argue that NDCG is a stronger choice than softmax cross-entropy whenever cost is non-binary, which is the case in \({f}^{*}\left(x\right)\) (Eq.  3 ). In Fig.  13 (Appendix) an example is shown where NDCG is computed from \(\left|X\right|\) samples.
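As a sketch of the NDCG computation with ordinal relevance values (the rank-based relevance choice follows the setup described here; the implementation details are ours):

```python
import math

def dcg(relevances):
    """Discounted Cumulative Gain of a ranked list of relevance values."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(approx_costs, true_costs):
    """NDCG of the approximation's ranking against the ground-truth ranking.

    The relevance of a sample is its ordinal rank under the ground-truth
    cost: the lowest-cost sample gets the highest relevance.
    """
    n = len(true_costs)
    order_true = sorted(range(n), key=lambda i: true_costs[i])
    rel = [0] * n
    for rank, i in enumerate(order_true):
        rel[i] = n - rank
    # IDCG: relevances in ideal (descending) order
    ideal = dcg(sorted(rel, reverse=True))
    # DCG: relevances in the order induced by the approximation
    order_approx = sorted(range(n), key=lambda i: approx_costs[i])
    return dcg([rel[i] for i in order_approx]) / ideal
```

A perfect approximation ranks the samples identically to the ground truth and yields NDCG = 1; weaker approximations yield values below 1.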

In summary, we can quantify the predictive quality of the QAP model by its ability to rank a list of samples \(X\) against a ground truth ranking by the OBP optimizer. Since the nested Metropolis algorithm in section “ Nested Metropolis Sampling ” only stores two samples at any iteration, we modify the algorithm to instead work with more samples (section “ Optimization Algorithm ”). We also want to avoid the computation of \({f}^{*}\left(X\right)\) in each iteration, so in the optimization algorithm we only compute \({f}^{*}\left(argmi{n}_{x} f\left(X\right)\right)\) . In section “ Experiments ”, we conduct an experiment to test the validity of using the NDCG-based \({f}^{*}\left(argmi{n}_{x} f\left(X\right)\right)\) in SLAP optimization. In section “ Datasets ” we also discuss choice of datatype for the relevance values.

Optimization Algorithm

The proposed optimization algorithm includes three main modules: (1) a sample (location assignment) generator; (2) a fast cost approximator based on a model of the Quadratic Assignment Problem (QAP); and (3) an Order Batching Problem (OBP) optimizer. In this paper, we mainly focus on how QAP approximations can be effectively utilized within the nested Metropolis sampler described in section “ Nested Metropolis Sampling ”. In sections “ Sample Generator ” and “ Promote and Accept Thresholds and Cost Computations ”, we therefore describe two main modifications. The final version (QAP-OBP) is shown in flowchart form in Fig.  2 and pseudocode in Algorithm 1.

figure 2

QAP-OBP optimization algorithm

figure a

Sample \(x\) contains both the assigned products (products already in the warehouse) and the unassigned products \({\mathcal{P}}_{s}\) (section “ Problem Formulation ”). \({x}_{1}\) is initialized such that products \({\mathcal{P}}_{s}\) are assigned locations randomly without replacement. Choices for iterations \(K\) , the cost distance function \(\Delta\) and constant \({c}_{1}\) are discussed in section “ Experiments ”.

Sample Generator

The input to the sample generator (step ii in Fig.  2 ) is a single sample \({x}_{i}\) and the output is a list of new samples \({X}_{i+1}\) . There are two main parameters in use by the sample generator. \(N\in {\mathbb{Z}}^{+}\) dictates how many new samples are generated, i.e., \(|{X}_{i+1}|\) , and \(\uplambda \in {\mathbb{R}}^{+}\) dictates how much each new sample in \({X}_{i+1}\) differs from \({x}_{i}\) . The way \(N\) and \(\uplambda\) are utilized to generate new samples is shown in Algorithm 2.

figure b

Every time the sample generator is called, an empty list is first initialized. Then, for \(N\) iterations, a new sample \(x\) is generated by first copying \({x}_{i}\) and then by computing \(m\) , the number of products for which the index in \(x\) can change. For \(m\) we use a truncated Poisson distribution with rate \(\uplambda\) and upper bound \(m\le {|\mathcal{P}}_{s}|\) . A uniform random selection of \(m\) products, \({\mathcal{P}}_{m}\) , is then removed from \(x\) . For each \(p\in {\mathcal{P}}_{m}\) , a uniform random free index (either an empty location or an index holding a product in \({\mathcal{P}}_{s}\) ) in \(x\) is then selected such that the quantity ( \(q\) ) of the product does not exceed the location’s capacity. After \(x\) has been filled, it is appended to \({X}_{i+1}\) .
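A simplified sketch of this generator, ignoring per-location capacity constraints for brevity (the names and the Poisson sampler are ours, not the paper's implementation):

```python
import math
import random

def poisson_draw(lam):
    """Knuth's simple Poisson sampler (adequate for small lam)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= l:
            return k - 1

def generate_samples(x_i, movable, n_samples, lam, free_locs):
    """Simplified sketch of the sample generator (Algorithm 2).

    x_i       -- current assignment: dict product -> location index
    movable   -- the products P_s whose locations may change
    n_samples -- N, the number of new samples to generate
    lam       -- Poisson rate for how many products move per sample
    free_locs -- location indices currently holding no product
    """
    samples = []
    for _ in range(n_samples):
        x = dict(x_i)
        # truncated Poisson draw for the number of products to move
        m = min(poisson_draw(lam), len(movable))
        moved = random.sample(movable, m)
        # the freed locations plus the empty ones are the available targets
        available = list(free_locs) + [x[p] for p in moved]
        random.shuffle(available)
        for p in moved:
            x[p] = available.pop()
        samples.append(x)
    return samples
```

Each sample is generated independently from \({x}_{i}\), so the list can be produced in parallel if needed.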

Promote and Accept Thresholds and Cost Computations

After a list of samples \({X}_{i+1}\) has been generated (step ii in Fig.  2 ), their costs are approximated using the QAP model (iii). The sample with the lowest cost approximation is then always promoted (iv). Steps ii, iii and iv in the nested Metropolis sampler and QAP-OBP (Figs.  1 and 2 , respectively) are equivalent in that the final output is a single promoted sample, but the two versions conduct this selection differently, each with advantages and disadvantages. In the nested Metropolis sampler, the promote probability depends on the ratio of approximated costs between the previous and new single samples. In QAP-OBP, the sample generator is instead set to output \(N=\left|{X}_{i+1}\right|\) candidates, followed by argmin (compare step iv in Figs.  1 and 2 ). This modification simplifies evaluation of the QAP model’s accuracy, since we can set up an experiment to compute OBP costs on the same samples (Fig.  5 ). Generating multiple samples could also facilitate parallelization, which, for future work, could reduce the QAP model’s CPU-time. The main consideration, however, is that it simplifies the original algorithm for a particularly complex optimization scenario, where it cannot be expected to behave according to Christen and Fox’s [ 8 ] performance guarantees. The problem with the original algorithm is that it assumes optimal ground-truth costs, but these are not generally available for OBPs [ 22 ] (as far as we are aware, there exists no proposal for how to obtain optimal results for all but the smallest OBP instances within reasonable CPU-time). A relatively minor drawback of the modification is that it requires tuning of the number of samples ( \(N\) ) that the sample generator outputs each iteration.

The reason we use a Metropolis algorithm instead of possibly more capable meta-heuristic alternatives is mainly ease of implementation: the Metropolis algorithm does not have many parameters which must be tuned based on the number of iterations \(K\) (such as the temperature in Simulated Annealing), and therefore a time-based condition can be used instead of \(K\) to terminate the algorithm (we will use this in section “ SLAP optimization with and without QAP approximation ”).

Concerning the computation of \({f}^{*}(x)\) , we use the Single Batch Iterated (SBI) optimizer [ 22 ], whose main features are high computational efficiency and the ability to handle warehouses with unconventional rack layouts. OBP optimization, and its internal use of TSP optimization, is beyond the scope of this paper, and we here treat SBI as a black box which outputs \({f}^{*}\left(x\right)\) for Eq.  3 . The sample \(x\) with the lowest \({f}^{*}\left(x\right)\) found is always stored throughout the optimization procedure (sample storage is omitted in Figs.  1 and 2 and the pseudocode).

Datasets

For this paper, we have generated and shared instances in L17_533, Footnote 1 which are based on OBP instances in L6_203 Footnote 2 and L09_251. Footnote 3 We also use data from a real warehouse (Aba Skol AB). The generated instances use the TSPLIB format [ 26 ] with certain amendments for the SLAP, including 6 types of warehouse obstacle layouts and various depot configurations, vehicle capacities and orders (see Fig.  3 for an example of one of the layouts). L17_533 does not include any unidirectional travel rules, meaning that the distance between any two locations is equal in both directions. The number of orders ranges between 4 and 1000 and the number of products between 10 and 3000. The products that are to be assigned a location, \({\mathcal{P}}_{s}\) , are tagged as “SKUsToSlot” in the instance set. The “assignmentOptions” key includes the available empty locations and how cost is to be computed (it is always set to the “empty storage location” scenario). For analysis, instances are categorized according to vehicle capacities, number of orders, number of products and parameters \(N\) and \(\uplambda\) .

figure 3

Example storage assignment of four products and subsequent order-picking for the SLAP model used in the paper. Rectangles denote warehouse racks. Red and blue diamonds denote origin/destination for picking paths. Colored dots denote products and the four orders they belong to. Black crosses denote available locations for the new products. Note that products are often more spread out than what is shown in this example

The industrial warehouse dataset (Fig.  4 ) contains 210,277 products in 37,014 orders collected using batch picking over a 4-month period. There are 1289 pick-locations (in the graph representation) and most batches exist within one of six picking zones, but 24.4% include picks from several zones. As with the generated instances, shortest distances and paths between any two locations are assumed equal in both directions. For a proof of concept, we select product subsets from this data to be of relevance to warehouse management and real-world utility on the one hand, and comparable to the generated instances on the other. We build 150 subsets from 3-week periods, with selections of between 50 and 1800 products for \(\mathcal{P}\) and between 10 and 225 corresponding products for \({\mathcal{P}}_{s}\) . The subset selection is random, except that the products in a subset must exist within the same 3-week period. The number of free locations is given on a per-product basis, since each product has specific constraints regarding where it can be placed; on average it varies between 50 and 481 locations. For parameters \(N\) and \(\uplambda\) , we explore suitable values on the generated instances within shorter optimization runs, followed by longer runs with the chosen constants on the real dataset.

figure 4

Top-view of the Aba Skol AB warehouse. The picking zones are color-coded. The red circle denotes the most commonly used depot location

Experiments

Overview and constants.

The experiments are divided into two parts. The first part involves tuning the QAP model and comparing its ability to rank SLAP assignment samples against an OBP ground truth model and a random baseline (Fig.  5 ).

figure 5

Steps involved to obtain QAP predictive quality on samples generated from an instance

A SLAP test-instance (orders with products) is first loaded (i) and \({x}_{1}\) initialized (products \({\mathcal{P}}_{s}\) are assigned free locations in \({x}_{1}\) randomly) (ii). Then, \(N\) location assignments, \({X}_{i+1}\) , are generated according to Algorithm 2 (iii). The cost of the generated assignments is estimated using the QAP model and the OBP optimizer SBI (iv). The samples and costs are used to compute IDCG and DCG (v). IDCG is computed from the ranking of costs according to the OBP optimizer and DCG is computed from the ranking of costs according to the QAP model. A random DCG value is also pre-computed using the average of \({10}^{6}\) random rankings. This random baseline represents the case when \(f\left({X}_{i+1}\right)\) and \(argmi{n}_{x+1} f\left({X}_{i+1}\right)\) (steps iii and iv in Fig.  5 ) cannot help produce a lower value in \({f}^{*}\left({x}_{i+1}\right)\) (step v) [ 11 , 12 ]. Relevance values \(rel({\pi }_{{f}^{*}\left(X\right)})\) and \(rel({\pi }_{f\left(X\right)})\) are chosen to be the ordinal ranks of samples \(x\) according to the respective cost functions. For \(N\) samples, the values are \(rel\left({\pi }_{{f}^{*}\left(X\right)}\right)={(\pi }_{{f}^{*}\left(X\right)}\left(N\right), {\pi }_{{f}^{*}\left(X\right)}\left(N-1\right), \dots , {\pi }_{{f}^{*}\left(X\right)}\left(1\right))\) and \(rel\left({\pi }_{f\left(X\right)}\right)={(\pi }_{f\left(X\right)}\left(N\right), {\pi }_{f\left(X\right)}\left(N-1\right), \dots , {\pi }_{f\left(X\right)}\left(1\right))\) (this corresponds to the setup shown in Fig.  13 in Appendix). The DCG value obtained from the QAP model is then used to compute NDCG according to Eq.  9 (vi). The predictive quality is finally calculated by subtracting the random NDCG baseline from the achieved NDCG value, with a positive value implying that the QAP model is stronger. We also record the CPU-time needed for the QAP model and the OBP-optimizer, respectively.
The tuning of the QAP model concerns parameters \(N\) (number of samples) and \(\uplambda\) (rate of change for the samples) to maximize NDCG. We further investigate whether NDCG is impacted by other factors, including warehouse layout and instance size. Instance size is used to provide a quantification of instance difficulty, and here we restrict it to number of orders, total number of products \(\left|\mathcal{P}\right|\) and products which are to be assigned a location \({|\mathcal{P}}_{s}|\) . The latter number, \({|\mathcal{P}}_{s}|\) , is computed as 5–10% of \(\left|\mathcal{P}\right|\) in the instance.

We proceed with a second set of experiments, where we run the SLAP optimizer (Algorithm 1) on the industrial instances with and without the QAP model. For the experiments without the QAP model, \(N=1\) and lines 11 and 12 in Algorithm 1 are removed. This second part is carried out after suitable constants for \(N\) and \(\uplambda\) have been found on L17_533. In order to find such constants, we run the steps in Fig.  5 for 10 \(N\) values ranging between 1 and 200 and 10 \(\uplambda\) values set between 5 and 50% of \({|\mathcal{P}}_{s}|\) . For the experiments to test \(N\) , we use \(\uplambda =15\mathrm{\%}\) of \({|\mathcal{P}}_{s}|\) . For the experiment to test \(\uplambda\) , we use \(N= 50\) . For the cost distance function \(\Delta\) we use a scaled sigmoid, which is set to approach 1 when the ratio \({f}^{*}\left({x}_{i}\right)/{f}^{*}\left({x}_{i+1}\right)\) exceeds 1.05. This means that sample \({x}_{i+1}\) is unlikely to be accepted if its cost is 5% higher than that of \({x}_{i}\) . For each instance, the global best OBP result is tracked and uploaded as the current best result. We refer to the documentation in L17_533 for further details. We use an Intel Core i7-4710MQ 2.5 GHz (4 cores), 32 GB RAM, Python3, Cython and C.
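A hypothetical shape for such a scaled sigmoid accept rule (the exact centring and steepness used in the paper are not given here; `steepness` is an assumption):

```python
import math

def accept_probability(cost_prev, cost_new, steepness=100.0):
    """Hypothetical scaled-sigmoid accept rule on the cost ratio.

    Approaches 1 when cost_prev / cost_new exceeds roughly 1.05 (the
    new sample is clearly better) and approaches 0 when the new cost
    is about 5% higher than the previous one.
    """
    ratio = cost_prev / cost_new
    # centred at ratio = 1, so equal costs give probability 0.5
    return 1.0 / (1.0 + math.exp(-steepness * (ratio - 1.0)))
```

With `steepness=100`, a new sample whose cost is 5% lower is accepted with probability above 0.99, while one 5% higher is accepted with probability below 0.01, matching the qualitative behaviour described above.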

The Impact of Parameters \({\varvec{N}}\) and \(\uplambda\) on QAP Predictive Quality

Concerning \(N\) , we first observe that the average predictive quality of the QAP model is equivalent to the random baseline when \(N=1\) (Fig.  6 ). We further observe that mean predictive quality rises steadily until \(N\) is 20, after which it tapers off.

figure 6

Boxplot showing number of samples ( \(N\) ) against QAP predictive quality. The red line denotes the NDCG random baseline. The box edges show the first and third quartiles of the data (Q1, Q3) and the whiskers show (Q1 – 1.5 * IQR, Q3 + 1.5 * IQR), where IQR is the Inter Quartile Range

The result clearly shows that the QAP model is able to rank samples better than the random baseline (negative values imply the opposite). The positive initial trend could be impacted by the choice of ordinal relevance values \(rel({\pi }_{f\left(X\right)})\) for the NDCG computation (section “ Overview and Constants ”), which could favour the baseline for smaller \(N\) .

Concerning rate of change of new samples \(\uplambda\) , the best results are achieved when it is set toward the lower end of the 5–50% range of \(\left|{\mathcal{P}}_{S}\right|\) (Fig.  7 ). This provides some validation for the use of a Metropolis algorithm, since it shows that a Markov Chain can be used to nudge samples closer towards lower costs. Otherwise, NDCG would be similar regardless of the x-axis in Fig.  7 . This result is in line with Oxenstierna et al. [ 23 ], where a slightly stronger pattern is observed on the related TSP-based SLAP.

figure 7

How much new samples are changed compared to previous samples ( \(\uplambda\) ) against QAP predictive power

The Impact of Other Factors on QAP Predictive Quality

Results for all factors are shown in Tables  1 , 2 and 3 (Appendix). We find that QAP predictive quality decreases as instance size increases (Fig.  8 ). This may be because the quality of the \({f}^{*}(x)\) costs provided by the OBP optimizer decreases with instance size (they are sub-optimal; see section “ Promote and Accept Thresholds and Cost Computations ”), making analysis of results for larger instance classes more difficult in general. We find that the fraction of CPU-time required by the QAP model versus the OBP optimizer is between 0.006 and 0.019, i.e., the QAP model is around 50–150 times faster. The difference is largest for the largest instances and smallest for the smallest instances (Table  2 ). We do not observe any relationship between QAP predictive quality and warehouse layout.

figure 8

Instance size in terms of number of orders, versus the predictive quality of the QAP model and the random baseline

Overall, the result provides evidence that QAP approximations of OBP costs within an OBP-based SLAP optimizer may be justified. The QAP model’s predictive quality may decrease with instance size, relative to the OBP optimizer (Fig.  8 ), but its relative usage of CPU-time also decreases. Another way to visualize the performance difference between the QAP model and the random baseline is through a frequency distribution (Fig.  9 ).

figure 9

Frequency distribution of NDCG values (20 bins) from QAP and random ranking of samples when \(N=20\) and \(\lambda =10\%\) (of \(\left|{\mathcal{P}}_{S}\right|\) )

SLAP Optimization With and Without QAP Approximation

We report results from running the QAP-OBP SLAP optimizer (section “ Optimization Algorithm ”) on the industrial dataset with and without the use of QAP approximations. Apart from general settings (section “ Overview and Constants ”), \(K\) is set to \({10}^{8}\) and the algorithm is set to terminate after 60 min (which, given maximum OBP and QAP CPU-times, ensures iterations never exceed \(K\) ). \(\lambda\) is set to \(10\%\) of \(\left|{\mathcal{P}}_{S}\right|\) and \({c}_{1}=1\) . \(N\) is set to 20, which means that the QAP model will have a relatively small impact on overall CPU-time. \(N\) could theoretically be set to a much larger number, but this may not necessarily yield better results. The QAP model in the form of Eq.  5 likely needs to be further developed before its extended use can be motivated. One risk with setting \(N\) to a large number is that the SLAP optimizer will spend too much time in search regions with a low QAP cost, rather than in regions with a low OBP cost.

In Fig.  10 , we see that Algorithm 1, on average, improves cost by around 23% in 1 h. Without QAP approximations, cost improves by around 17%.

figure 10

SLAP optimization cost improvements with and without the QAP model during 1 h. The shaded areas denote 95% confidence intervals

The size of the instances has a significant impact on computational efficiency. In Figs.  11 and 12 , we see that instance size, in terms of the number of products that are assigned a location, \(|{\mathcal{P}}_{s}|\) , has a similar effect on computational efficiency regardless of whether the QAP model is used. The stronger performance on the smaller instances can largely be attributed to more samples being generated within the 60 min. On average, cost improvement continues throughout the allotted time, which can be explained by the large SLAP search space.

figure 11

QAP-OBP SLAP cost improvement using QAP approximations for 5 categories of instance sizes (in terms of | \({\mathcal{P}}_{s}\) |). Shaded areas denote data within 1 standard deviation

figure 12

Same as Fig.  11 , but without using QAP approximations

Conclusion

In this paper, we:

formulate an optimization model for the Storage Location Assignment Problem (SLAP), where the costs of assignments are evaluated using Order Batching Problem (OBP) optimization.

share generated SLAP test instances, with the goal to standardize formats and comparability between solution approaches.

propose a Quadratic Assignment Problem (QAP) model to quickly approximate OBP costs in SLAP optimization. The QAP model is tested and tuned on the generated instances.

propose a SLAP optimizer (QAP-OBP), which we test on industrial instances with a 1 h optimization timeout.

Within the QAP-OBP optimizer, the QAP and OBP modules are utilized in a Metropolis algorithm, where samples are modified by a variable amount each iteration. The algorithm is nested such that OBP costs are only computed for samples with a relatively strong QAP cost approximation.

In order to motivate the use of the QAP model within the algorithm, experiments are first conducted to test its predictive quality against costs obtained by the OBP optimizer and a random baseline. Results show that the QAP model’s predictive quality is stronger than the baseline’s, and that QAP approximations are around 50–150 times faster to compute than costs obtained through OBP optimization.

We then proceed to run the SLAP optimizer with and without the QAP approximations. We find that the optimizer performs better when using the QAP approximations, with cost improvements of around 23% after 1 h. This result is in line with results in related work on SLAPs that are less difficult in some regards (for example concerning warehouse layouts), but more difficult in others (dynamicity or larger number of products).

For future work, the parameter which controls the number of samples approximated by the QAP model for every OBP cost computation, \(N\) , could be tuned further. The QAP computations could be significantly sped up through parallelization and Graphics Processing Units (GPUs), extending their utility within the SLAP optimizer for larger \(N\) . Also, alternative optimization approaches could be explored, including meta-heuristic techniques such as Simulated Annealing or Particle Swarm Optimization. The QAP cost approximator could also be developed into a Machine Learning approach and used in a similar fashion as the weak estimators in boosting or aggregate bootstrapping (bagging). The factorial search space remains a fundamental problem for learning, however. Finally, we invite discussions into how best to represent SLAP features in public benchmark data and which features to choose for a standardized version of the problem.

https://github.com/johanoxenstierna/L17_533 , collected 13–02–2023.

https://github.com/johanoxenstierna/OBP_instances , collected 15–01–2023.

https://github.com/johanoxenstierna/L09_251 , collected 15–01–2023.

Abdel-Basset M, Manogaran G, Rashad H, et al. A comprehensive review of quadratic assignment problem: variants, hybrids and applications. J Ambient Intell Human Comput. 2018. https://doi.org/10.1007/s12652-018-0917-x .

Aerts B, Cornelissens T, Sörensen K. The joint order batching and picker routing problem: modelled and solved as a clustered vehicle routing problem. Comput Oper Res. 2021;129: 105168. https://doi.org/10.1016/j.cor.2020.105168 .

Azadeh K, De Koster R, Roy D. Robotized warehouse systems: developments and research opportunities. ERIM report series research in management Erasmus Research Institute of Management. ERS-2017-009-LIS. 2017.

Bruch S, Wang X, Bendersky M, Najork M. An analysis of the softmax cross entropy loss for learning-to-rank with binary relevance. In: Proceedings of the 2019 ACM SIGIR international conference on the theory of information retrieval (ICTIR 2019). 2019. pp. 75–8.

Cao Z, Qin T, Liu T-Y, Tsai M-F, Li H. Learning to rank: from pairwise approach to listwise approach. In: Proceedings of the 24th international conference on machine learning, vol. 227. 2007. pp. 129–36. https://doi.org/10.1145/1273496.1273513 .

Cardona LF, Rivera L, Martínez HJ. Analytical study of the fishbone warehouse layout. Int J Log Res Appl. 2012;15(6):365–88.

Charris E, Rojas-Reyes J, Montoya-Torres J. The storage location assignment problem: a literature review. Int J Ind Eng Comput. 2018;10.

Christen JA, Fox C. Markov Chain Monte Carlo using an approximation. J Comput Graph Stat. 2005;14(4):795–810.

Ene S, Öztürk N. Storage location assignment and order picking optimization in the automotive industry. Int J Adv Manuf Technol. 2011;60:1–11. https://doi.org/10.1007/s00170-011-3593-y .

Fontana ME, Nepomuceno VS. Multi-criteria approach for products classification and their storage location assignment. Int J Adv Manuf Technol. 2017;88(9):3205–16.

Freund Y, Iyer R, Schapire RE, Singer Y. An efficient boosting algorithm for combining preferences. J Mach Learn Res. 2003;4(Nov):933–69.

Freund Y, Schapire RE. Experiments with a new boosting algorithm. 1996.

Garfinkel M. Minimizing multi-zone orders in the correlated storage assignment problem. School of Industrial and Systems Engineering, Georgia Institute of Technology. 2005.

Henn S, Wäscher G. Tabu search heuristics for the order batching problem in manual order picking systems. Eur J Oper Res. 2012;222(3):484–94.

Kallina C, Lynn J. Application of the cube-per-order index rule for stock location in a distribution warehouse. Interfaces. 1976;7(1):37–46.

Kofler M, Beham A, Wagner S, Affenzeller M. Affinity based slotting in warehouses with dynamic order patterns. Advanced methods and applications in computational intelligence. 2014. pp. 123–43.

de Koster R, Le-Duc T, Roodbergen KJ. Design and control of warehouse order picking: a literature review. Eur J Oper Res. 2007;182(2):481–501.

Kübler P, Glock CH, Bauernhansl T. A new iterative method for solving the joint dynamic storage location assignment, order batching and picker routing problem in manual picker-to-parts warehouses. Comput Ind Eng. 2020;147: 106645.

Larco JA, de Koster R, Roodbergen KJ, Dul J. Managing warehouse efficiency and worker discomfort through enhanced storage assignment decisions. Int J Prod Res. 2017;55(21):6407–22. https://doi.org/10.1080/00207543.2016.1165880 .

Lee IG, Chung SH, Yoon SW. Two-stage storage assignment to minimize travel time and congestion for warehouse order picking operations. Comput Ind Eng. 2020;139: 106129. https://doi.org/10.1016/j.cie.2019.106129 .

Mantel R, Schuur P, Heragu S. Order oriented slotting: a new assignment strategy for warehouses. Eur J Ind Eng. 2007;1:301–16.

Oxenstierna J, Malec J, Krueger V. Efficient order batching optimization using seed heuristics and the metropolis algorithm. SN Comput Sci. 2022;4(2):107.

Oxenstierna J, Rensburg L, Stuckey P, Krueger V. Storage assignment using nested annealing and hamming distances. In: Proceedings of the 12th international conference on operations research and enterprise systems—ICORES. 2023. pp. 94–105. https://doi.org/10.5220/0011785100003396 .

Oxenstierna J, van Rensburg LJ, Malec J, Krueger V. Formulation of a layout-agnostic order batching problem. In: Dorronsoro B, Amodeo L, Pavone M, Ruiz P, editors. Optimization and learning. Berlin: Springer International Publishing; 2021. p. 216–26.

Chapter   Google Scholar  

Ratliff H, Rosenthal A. Order-picking in a rectangular warehouse: a solvable case of the traveling salesman problem. Oper Res. 1983;31:507–21.

Reinelt G. TSPLIB—a traveling salesman problem library. INFORMS J Comput. 1991;3:376–84.

Rensburg LJ. Artificial intelligence for warehouse picking optimization—an NP-hard problem [Master’s Thesis]. Uppsala University. 2019.

Roodbergen KJ, Koster R. Routing methods for warehouses with multiple cross aisles. Int J Prod Res. 2001;39(9):1865–83.

van Ravenzwaaij D, Cassey P, Brown SD. A simple introduction to Markov Chain Monte-Carlo sampling. Psychon Bull Rev. 2018;25(1):143–54. https://doi.org/10.3758/s13423-016-1015-8 .

Wu J, Qin T, Chen J, Si H, Lin K. Slotting optimization algorithm of the stereo warehouse. In: Proceedings of the 2012 2nd international conference on computer and information application (ICCIA 2012). 2014. pp. 128–32. https://doi.org/10.2991/iccia.2012.31 .

Wu X, LuWuZhou JSX. Synchronizing time-dependent transportation services: reformulation and solution algorithm using quadratic assignment problem. Transport Res Part B Methodol. 2021;152:140–79. https://doi.org/10.1016/j.trb.2021.08.008 .

Wutthisirisart P, Noble JS, Chang CA. A two-phased heuristic for relation-based item location. Comput Ind Eng. 2015;82:94–102. https://doi.org/10.1016/j.cie.2015.01.020 .

Yang N et al. Evaluation of the joint impact of the storage assignment and order batching in mobile-pod warehouse systems. Math Probl Eng. 2022;2022.

Yingde L, Smith JS. Dynamic slotting optimization based on SKUs correlations in a zone-based wave-picking system. In: IMHRC proceedings, vol. 12. 2012.

Zhang R-Q, Wang M, Pan X. New model of the storage location assignment problem considering demand correlation pattern. Comput Ind Eng. 2019;129:210–9. https://doi.org/10.1016/j.cie.2019.01.027 .

Zhou F, De la Torre F. Factorized graph matching. IEEE Trans Pattern Anal Mach Intell. 2016;38(9):1774–89. https://doi.org/10.1109/TPAMI.2015.2501802 .

Žulj I, Glock CH, Grosse EH, Schneider M. Picker routing and storage-assignment strategies for precedence-constrained order picking. Comput Ind Eng. 2018;123:338–47. https://doi.org/10.1016/j.cie.2018.06.015 .


Acknowledgements

This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP), funded by the Knut and Alice Wallenberg Foundation. We also thank Kairos Logic AB for software.

Open access funding provided by Lund University.

Author information

Authors and affiliations

Dept. of Computer Science, Lund University, Lund, Sweden

Johan Oxenstierna, Jacek Malec & Volker Krueger

Kairos Logic AB, Lund, Sweden

Johan Oxenstierna


Corresponding author

Correspondence to Johan Oxenstierna .

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest. This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Innovative Intelligent Industrial Production and Logistics 2022” guest edited by Alexander Smirnov, Kurosh Madani, Hervé Panetto and Georg Weichhart.

NDCG flowchart: the example below shows how Normalized Discounted Cumulative Gain (NDCG) can be computed from input permutations (products to locations) and their approximated ( \(f\) ) and ground-truth ( \({f}^{*}\) ) values. Note that \(f\left(X\right)\) denotes a sorting of \(X\) according to the cost valuation of elements in the cost step. Also note that relevance values can be formulated in several ways.

Figure 13: NDCG procedure flowchart
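The NDCG computation described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: in particular, the inverse-rank relevance used here (an item scores higher the better its ground-truth cost rank) is just one of the several possible relevance formulations noted in the caption.

```python
import math

def ndcg(approx_costs, true_costs):
    """NDCG of the ranking induced by approximated costs f, scored
    against the ranking induced by ground-truth costs f*.

    Relevance is a hypothetical choice here: the item with the best
    (lowest) ground-truth cost gets relevance n, the next n-1, etc.
    """
    n = len(true_costs)
    # Ground-truth ranking: best (lowest-cost) items first.
    true_order = sorted(range(n), key=lambda i: true_costs[i])
    relevance = [0] * n
    for rank, i in enumerate(true_order):
        relevance[i] = n - rank
    # Ranking induced by the approximation f.
    approx_order = sorted(range(n), key=lambda i: approx_costs[i])
    # Discounted cumulative gain of the approximated ranking.
    dcg = sum(relevance[i] / math.log2(pos + 2)
              for pos, i in enumerate(approx_order))
    # Ideal DCG: items in perfect ground-truth order.
    idcg = sum((n - rank) / math.log2(rank + 2) for rank in range(n))
    return dcg / idcg
```

A perfect approximation ranks items identically to the ground truth and yields NDCG = 1; ranking disagreements pull the score down toward 0.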

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Oxenstierna, J., Malec, J. & Krueger, V. Storage Assignment Using Nested Metropolis Sampling and Approximations of Order Batching Travel Costs. SN COMPUT. SCI. 5 , 477 (2024). https://doi.org/10.1007/s42979-024-02711-w

Download citation

Received : 04 April 2023

Accepted : 14 February 2024

Published : 23 April 2024

DOI : https://doi.org/10.1007/s42979-024-02711-w


Keywords

  • Storage location assignment problem
  • Order batching problem
  • Quadratic assignment problem
  • Metropolis algorithm
  • Warehousing


