Unit of analysis: definition, types, examples, and more

Last updated: 16 April 2023

Reviewed by Cathy Heath


  • What is a unit of analysis?

A unit of analysis is an object of study within a research project. It is the smallest unit a researcher can use to identify and describe a phenomenon—the 'what' or 'who' the researcher wants to study. 

For example, suppose a consultancy firm is hired to train the sales team of a solar company that is struggling to meet its targets. When the firm evaluates performance after the training, the unit of analysis is the sales team, since the team is the main focus of the study.

Different methods, such as surveys, interviews, or sales data analysis, can be used to evaluate the sales team's performance and determine the effectiveness of the training.

  • Units of observation vs. units of analysis

A unit of observation refers to the actual items or units being measured or collected during the research. In contrast, a unit of analysis is the entity that a researcher can comment on or make conclusions about at the end of the study.

In the example of the solar company sales team, the unit of observation would be the individual sales transactions or deals made by the sales team members. In contrast, the unit of analysis would be the sales team as a whole.

The firm may observe and collect data on individual sales transactions, but the ultimate conclusion would be based on the sales team's overall performance, as this is the entity that the firm is hired to improve.

In some studies, the unit of observation may be the same as the unit of analysis, but researchers need to define both clearly to themselves and their audiences.
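The distinction can be made concrete with a minimal Python sketch (the data and field names below are hypothetical): the individual sales transactions are the units of observation, while the sales team as a whole is the unit of analysis.

```python
# Units of observation: individual sales transactions (hypothetical data).
transactions = [
    {"rep": "Ana",  "amount": 1200},
    {"rep": "Ben",  "amount":  800},
    {"rep": "Ana",  "amount":  500},
    {"rep": "Cruz", "amount": 1500},
]

# Unit of analysis: the sales team as a whole. We aggregate the
# transaction-level observations into team-level measures, which is
# what the consultancy would actually report on.
team_revenue = sum(t["amount"] for t in transactions)
team_deal_count = len(transactions)
avg_deal_size = team_revenue / team_deal_count

print(team_revenue)   # 4000
print(avg_deal_size)  # 1000.0
```

The conclusions reported — total revenue, average deal size — are statements about the team, even though every data point collected was a single transaction.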

  • Unit of analysis types

Below are the main types of units of analysis:

Individuals – These are the smallest levels of analysis.

Groups – These are people who interact with each other.

Artifacts – These are material objects created by humans that a researcher can study using empirical methods.

Geographical units – These are smaller than a nation and range from a province to a neighborhood.

Social interactions – These are formal or informal interactions between society members.

  • Importance of selecting the correct unit of analysis in research

Selecting the correct unit of analysis helps reveal more about the subject you are studying and how to continue with the research. It also helps determine the information you should use in the study. For instance, if a researcher has a large sample, the unit of analysis will help decide whether to focus on the whole population or a subset of it.

  • Examples of a unit of analysis

Here are examples of a unit of analysis:

Individuals – A person, an animal, etc.

Groups – Gangs, roommates, etc. 

Artifacts – Phones, photos, books, etc.  

Geographical units – Provinces, counties, states, or specific areas such as neighborhoods, city blocks, or townships

Social interaction – Friendships, romantic relationships, etc.

  • Factors to consider when selecting a unit of analysis

The main things to consider when choosing a unit of analysis are:

Research questions and hypotheses

Research questions can be descriptive if the study seeks to describe what exists or what is going on.

They can be relational if the study looks at the relationship between variables, or causal if the research aims to determine whether one or more variables affect or cause one or more outcome variables.

Your study's research question and hypothesis should guide you in choosing the correct unit of analysis.

Data availability and quality

Consider the nature of the data collected and the time spent observing each participant or studying their behavior. You should also consider the scale used to measure variables.

Some studies involve measuring every variable on a one-to-one scale, while others use variables with discrete values. All these influence the selection of a unit of analysis.

Feasibility and practicality

Look at your study and think about the unit of analysis that would be feasible and practical.

Theoretical framework and research design

The theoretical framework is crucial in research as it introduces and describes the theory explaining why the problem under research exists. As a structure that supports the theory of a study, it is a critical consideration when choosing the unit of analysis. Moreover, consider the overall strategy for collecting responses to your research questions.

  • Common mistakes when choosing a unit of analysis

Below are common errors that occur when selecting a unit of analysis:

Reductionism

This error occurs when a researcher uses data from a lower-level unit of analysis to make claims about a higher-level unit of analysis, such as using individual-level data to make claims about groups.

For example, consider the US civil rights movement. Rosa Parks' refusal to give up her bus seat was a pivotal moment, but claiming that Rosa Parks started the movement would be reductionist. There were other factors behind the rise and success of the movement, including the Supreme Court's historic decision to desegregate schools, protests over legalized racial segregation, and the formation of groups such as the Student Nonviolent Coordinating Committee (SNCC). In short, the movement is attributable to various political, social, and economic factors.

Ecological fallacy

This mistake occurs when researchers use data from a higher-level unit of analysis to make claims about a lower-level unit of analysis. It usually occurs when only group-level data is collected, but the researcher makes claims about individuals.

For instance, let's say a study seeks to understand whether addictions to electronic gadgets are more common in certain universities than others.

The researcher obtains data on the percentage of gadget-addicted students from different universities around the country. Looking at the data, the researcher notes that universities with engineering programs have more cases of gadget addiction than campuses without such programs.

Concluding that engineering students are more likely to become addicted to their electronic gadgets would be inappropriate. The data available is only about gadget addiction rates by universities; thus, one can only make conclusions about institutions, not individual students at those universities.

Making claims about students while the data available is about the university puts the researcher at risk of committing an ecological fallacy.
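The university example can be made concrete with a toy dataset (all numbers below are hypothetical). Even when universities with engineering programs show a higher overall addiction rate, the individual-level data can tell the opposite story about engineering students:

```python
# Hypothetical individual-level records: (university, major, addicted).
# University A has an engineering program; University B does not.
students = (
    [("A", "engineering", True)] * 10 + [("A", "engineering", False)] * 90 +
    [("A", "other", True)] * 50 + [("A", "other", False)] * 50 +
    [("B", "other", True)] * 40 + [("B", "other", False)] * 160
)

def rate(rows):
    """Fraction of students in `rows` who are addicted."""
    return sum(addicted for _, _, addicted in rows) / len(rows)

# Group level: the university with an engineering program shows a
# HIGHER addiction rate than the one without...
rate_uni_A = rate([s for s in students if s[0] == "A"])  # 0.30
rate_uni_B = rate([s for s in students if s[0] == "B"])  # 0.20

# ...yet at the individual level, engineering students are LESS
# addicted than everyone else. Inferring individual behavior from
# the university-level rates would be an ecological fallacy.
rate_eng = rate([s for s in students if s[1] == "engineering"])  # 0.10
rate_other = rate([s for s in students if s[1] == "other"])      # 0.30
```

The group-level comparison (A vs. B) and the individual-level comparison (engineering vs. other) point in opposite directions, which is exactly why conclusions must stay at the level of the unit for which data was collected.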

  • The lowdown

A unit of analysis is what you would consider the primary emphasis of your study. It is what you want to discuss after your study. Researchers should determine a unit of analysis that keeps the context required to make sense of the data. They should also keep the unit of analysis in mind throughout the analysis process to protect the reliability of the results.

What is the most common unit of analysis?

The individual is the most prevalent unit of analysis.

Can the unit of analysis and the unit of observation be one?

Some situations have the same unit of analysis and unit of observation. For instance, suppose a tutor is hired to improve the oral French proficiency of a struggling student. A few months later, the tutor wants to evaluate the student's proficiency based on what they have taught over that period. In this case, the student is both the unit of analysis and the unit of observation.



Unit of Analysis: Definition, Types & Examples

A unit of analysis is what you discuss after your research, probably what you would regard as the primary emphasis of your research.

The unit of analysis is the person or thing whose qualities will be measured, and it is an essential part of any research project. It is the main entity the researcher examines: the object about which you hope to have something to say at the end of your analysis.

In this blog post, we will explore and clarify the concept of the “unit of analysis,” including its definition, various types, and a concluding perspective on its significance.

What is a unit of analysis?

The unit of analysis is the primary topic or object the researcher plans to comment on, and the research question plays a significant role in determining it. Put simply, it is the "who" or "what" that the researcher is interested in investigating.

In Man, the State, and War (first published in 1959), Kenneth Waltz analyzes the causes of war at three distinct levels: the individual, the state, and the international system.

Understanding the reasoning behind the unit of analysis is vital. The likelihood of fruitful research increases if the rationale is understood. An individual, group, organization, nation, social phenomenon, etc., are a few examples.


Types of “unit of analysis”

In business research, there are almost unlimited types of possible analytical units. Even though the most typical unit of analysis is the individual, many research questions can be answered more precisely by looking at other types of units. Let's examine the main ones.

1. Individual Level

The most prevalent unit of analysis in business research is the individual. These are the primary analytical units. The researcher may be interested in looking into:

  • Employee actions
  • Perceptions
  • Attitudes or opinions.

Employees may come from wealthy or low-income families, as well as from rural or metropolitan areas.

A researcher might investigate whether personnel from rural areas are more likely to arrive on time than those from urban areas, or whether workers from rural, low-income families arrive on time more often than those from rural, wealthy families.

In each case, the individual (the employee) serves as the analytical unit being discussed and explained. Using the employee as the unit of analysis can shed light on business issues, including customer and human resource behavior.

For example, employee work satisfaction and consumer purchasing patterns impact business, making research into these topics vital.

Psychologists typically concentrate on research on individuals. This research may significantly aid a firm’s success, as individuals’ knowledge and experiences reveal vital information. Thus, individuals are heavily utilized in business research.

2. Aggregates Level

Social science research does not always focus on individuals. By combining individuals' responses, social scientists frequently describe and explain social interactions, communities, and groupings. They also study collectives of individuals, including communities, groups, and countries.

Aggregate levels can be divided into Groups (groups with an ad hoc structure) and Organizations (groups with a formal organization).

The next level of the unit of analysis is the group. A group is defined as two or more individuals who interact, share common traits, and feel connected to one another.

Many definitions also emphasize interdependence or objective resemblance (Turner, 1982; Platow, Grace, & Smithson, 2011) and identification as group members (Reicher, 1982).

Society and gangs are examples of groups. According to Webster's Online Dictionary (2012), groups can resemble clubs but are far less formal.

Siblings, identical twins, families, and small functioning groups are examples of group-level units of analysis.

In such studies, one whole group can be compared to another. Families, gender-specific groups, friends, Facebook groups, and work departments can all be groups.

By analyzing groups, researchers can learn how they form and how age, experience, class, and gender affect them. When aggregated, an individual’s data describes the group they belong to.

Sociologists continually research groups and group behavior, as do economists and businesspeople, who form teams to complete projects.

Organizations

The next level of the unit of analysis is organizations, which are groups of people set up formally. Organizations could include businesses, religious groups, parts of the military, colleges, academic departments, supermarkets, business groups, and so on.

Features of social organization include sex composition, leadership style, organizational structure, communication systems, and so on (Susan & Wheelan, 2005; Chapais & Berman, 2004). Lim, Putnam, and Robert (2010) note that religious institutions are among the best-known social organizations.

Moody, White, and Douglas (2003) describe social organizations as hierarchical. Hasmath, Hildebrandt, and Hsu (2016) note that social organizations can take different forms; for example, they can be created by institutions such as schools or governments.

Sociology, economics, political science, psychology, management, and organizational communication are some of the social science fields that study organizations (Douma & Schreuder, 2013).

Organizations differ from groups in being more formal and better structured. A researcher might study a company in order to generalize the results to the whole population of companies.

An organization can be characterized by its number of employees, net annual revenue, net assets, number of projects, and so on. A researcher might want to know, for example, whether large companies hire more or fewer women than small companies.

Organization researchers might be interested in how companies like Reliance, Amazon, and HCL affect our social and economic lives. People who work in business often study business organizations.


3. Social Level

The social level has two types: social artifacts and social interactions.

Social Artifacts Level

Research can study things as well as humans. Social artifacts are human-made objects from diverse communities: items, representations, assemblages, institutions, knowledge, and conceptual frameworks used to convey, interpret, or achieve a goal (IGI Global, 2017).

Cultural artifacts are anything humans generate that reveals their culture (Watts, 1981).

Social artifacts include books, newspapers, advertisements, websites, technical devices, films, photographs, paintings, clothes, poems, jokes, students' late excuses, scientific breakthroughs, furniture, machines, structures, and so on; the list is nearly endless.

Humans create social artifacts in the course of social behavior. Just as individuals or groups imply a population in business research, each social artifact implies a class of similar objects.

Business books, magazines, articles, and case studies are goods of the same class. In a research study, a business magazine might be characterized by its number of articles, frequency, price, content, and editor.

The population of related magazines could then be evaluated for description and explanation. Marx W. Wartofsky (1979) classified artifacts as primary artifacts used in production (like a camera), secondary artifacts connected to primary artifacts (like a camera user manual), and tertiary artifacts related to representations of secondary artifacts (like a sculpture of a camera user manual).

The scientific study of an artifact reveals things about its creators and users. An artifact researcher may be interested in the artifact's advertising, marketing, distribution, purchasing, and so on.

Social Interaction Level

Social interactions are also treated as social artifacts. Examples include:

  • Eye contact with a coworker
  • Buying something in a store
  • Friendship decisions
  • Road accidents
  • Airline hijackings
  • Professional counseling
  • WhatsApp messaging

A researcher might study young employees' smartphone addictions. Some addictions may involve social media, while others involve online games and movies that inhibit real-world connection.

Here, smartphone addiction is examined as a societal phenomenon, while the units of observation are probably individuals (the employees).

Anthropologists typically study social artifacts and may be interested in the social order. A researcher who examines social interactions may be interested in how broader societal structures and factors shape daily behavior, festivals, and weddings.


Even though there is no perfect way to do research, it is generally agreed that researchers should try to find a unit of analysis that keeps the context needed to make sense of the data.

Researchers should consider the details of their research when deciding on the unit of analysis. 

They should remember that consistent use of these units throughout the analysis process (from coding to developing categories and themes to interpreting the data) is essential to gaining insight from qualitative data and protecting the reliability of the results.


Unit of Analysis: Definition, Types & Examples

Olayemi Jemimah Aransiola

Introduction

A unit of analysis is the smallest level of analysis for a research project. It’s important to choose the right unit of analysis because it helps you make more accurate conclusions about your data.

What Is a Unit of Analysis?

A unit of analysis is the smallest element in a data set that can be used to identify and describe a phenomenon or the smallest unit that can be used to gather data about a subject. The unit of analysis will determine how you will define your variables, which are the things that you measure in your data. 

If you want to understand why people buy a particular product, you should choose a unit of analysis that focuses on buying behavior. This means choosing a unit of analysis that is relevant to your research topic and question.

For example, if you want to study the needs of soldiers in a war zone, you will need to choose an appropriate unit of analysis for this study: soldiers or the war zone. In this case, choosing the right unit of analysis would be important because it could help you decide if your research design is appropriate for this particular subject and situation.

Why is Choosing the Right Unit of Analysis Important?

The unit of analysis is important because it helps you understand what you are trying to find out about your subject, and it also helps you to make decisions about how to proceed with your research.

Choosing the right unit of analysis is also important because it determines what information you're going to use in your research. If you have a small sample, you'll have to choose whether to focus on the entire population or just a subset of it.

If you have a large sample, then you’ll be able to find out more about specific groups within your population. For example, if you want to understand why people buy certain types of products, then you should choose a unit of analysis that focuses on buying behavior. 

This means choosing a unit of analysis that is relevant to your research topic and question.

Unit of Analysis vs Unit of Observation

Unit of analysis refers to the particular part of a data set about which conclusions are drawn. For example, in the case of a survey, the unit of analysis is often the individual: the person who was selected to take part in the survey.

In the social sciences, the unit of analysis refers to the individuals or groups being studied. It is related to, but distinct from, the unit of observation.

Unit of observation refers to the specific person, group, or place the researcher actually observes. An example would be a particular town, census tract, state, or other geographical location being studied by researchers conducting research on crime rates in that area.

Unit of analysis refers to the individual or group about which the researcher draws conclusions. An example would be an entire town being analyzed for crime rates over time.

Types of “Unit of Analysis”

The unit of analysis is a way to understand and study a phenomenon. There are five main types of unit of analysis: individuals, groups, artifacts (books, photos, newspapers), geographical units (towns, census tracts, states), and social interactions.

  • Individuals are the smallest level of analysis. For example, an individual may be a person or an animal. A group is a collection of individuals who interact with each other, such as a family living together or students attending the same college. 
  • An artifact is anything human-made that can be studied using empirical methods—including books and photos but also physical objects like knives or phones. 
  • A geographical unit is typically smaller than an entire country but larger than a single household; examples range from a state or province down to a neighborhood or city block. 
  • Social interactions include dyadic relations (such as friendships or romantic relationships) as well as events between people, such as divorces or arrests.

Examples of Each Type of Unit of Analysis

  • Individuals are the smallest unit of analysis. An individual may be, for example, a person or an animal.
  • Artifacts are the next largest units of analysis. An artifact is something produced by human beings and is not alive. For example, a child’s toy is an artifact. Artifacts can include any material object that was produced by human activity and which has meaning to someone. Artifacts can be tangible or intangible and may be produced intentionally or accidentally.
  • Geographical units are large geographic areas such as states, counties, provinces, etc. Geographical units may also refer to specific locations within these areas such as cities or townships. 
  • Social interaction refers to interactions between members of society (e.g., family members interacting with each other). Social interaction includes both formal interactions (such as attending school) and informal interactions (such as talking on the phone).

How Does a Social Scientist Choose a Unit of Analysis?

Social scientists choose a unit of analysis based on the purpose of their research, their research question, and the type of data they have. For example, if they are trying to understand the relationship between a person’s personality and their behavior, they would choose to study personality traits.

For example, if a researcher wanted to study the effects of legalizing marijuana on crime rates, they may choose to use administrative data from police departments. However, if they wanted to study how culture influences crime rates, they might instead use survey data collected from individuals living in different areas or countries.

Factors to Consider When Choosing a Unit of Analysis

The unit of analysis is the object or person that you are studying, and it determines what kind of data you are collecting and how you will analyze it.

Factors to consider when choosing a unit of analysis include:

  • What is your purpose for studying this topic? Is it for a research paper or an article? If so, which type of paper do you want to write?
  • What is the most appropriate unit for your study? If you are studying a specific event or period of time, this may be obvious. But if your focus is broader, such as all social sciences or all of human development, you need to decide how broad your scope should be before beginning the research process, so that you know where to start.
  • How do other people define their units? This can help you understand what others mean when they use terms like “social science” or “human development,” because they may define those terms differently than you expect.
  • The nature of the data collected. Is it quantitative or qualitative? If it’s qualitative, what kind of data is collected? How much time was spent observing each participant/examining their behavior?
  • The scale used to measure variables. Is every variable measured on a one-to-one scale (like measurements between people)? Or do some variables only take on discrete values (like yes/no questions)?

The unit of analysis is the smallest part of a data set that you analyze. It’s important to remember that your data is made up of more than just one unit—you have lots of different units in your dataset, and each of those units has its own characteristics that you need to think about when you’re trying to analyze it.


Chapter 4: Measurement and Units of Analysis

4.4 Units of Analysis and Units of Observation

Another point to consider when designing a research project, and which might differ slightly in qualitative and quantitative studies, has to do with units of analysis and units of observation. These two items concern what you, the researcher, actually observe in the course of your data collection and what you hope to be able to say about those observations. Table 3.1 provides a summary of the differences between units of analysis and observation.

Unit of Analysis

A unit of analysis is the entity that you wish to be able to say something about at the end of your study, probably what you would consider to be the main focus of your study.

Unit of Observation

A unit of observation is the item (or items) that you actually observe, measure, or collect in the course of trying to learn something about your unit of analysis. In a given study, the unit of observation might be the same as the unit of analysis, but that is not always the case. Further, units of analysis are not required to be the same as units of observation. What is required, however, is for researchers to be clear about how they define their units of analysis and observation, both to themselves and to their audiences. More specifically, your unit of analysis will be determined by your research question. Your unit of observation, on the other hand, is determined largely by the method of data collection that you use to answer that research question.

To demonstrate these differences, let us look at the topic of students’ addictions to their cell phones. We will consider first how different kinds of research questions about this topic will yield different units of analysis. Then we will think about how those questions might be answered and with what kinds of data. This leads us to a variety of units of observation.

If I were to ask, “Which students are most likely to be addicted to their cell phones?” our unit of analysis would be the individual. We might mail a survey to students on a university or college campus, with the aim to classify individuals according to their membership in certain social classes and, in turn, to see how membership in those classes correlates with addiction to cell phones. For example, we might find that students studying media, males, and students with high socioeconomic status are all more likely than other students to become addicted to their cell phones. Alternatively, we could ask, “How do students’ cell phone addictions differ and how are they similar?” In this case, we could conduct observations of addicted students and record when, where, why, and how they use their cell phones. In both cases, one using a survey and the other using observations, data are collected from individual students. Thus, the unit of observation in both examples is the individual. But the units of analysis differ in the two studies. In the first one, our aim is to describe the characteristics of individuals. We may then make generalizations about the populations to which these individuals belong, but our unit of analysis is still the individual. In the second study, we will observe individuals in order to describe some social phenomenon, in this case, types of cell phone addictions. Consequently, our unit of analysis would be the social phenomenon.

Another common unit of analysis in sociological inquiry is groups. Groups, of course, vary in size, and almost no group is too small or too large to be of interest to sociologists. Families, friendship groups, and street gangs make up some of the more common micro-level groups examined by sociologists. Employees in an organization, professionals in a particular domain (e.g., chefs, lawyers, sociologists), and members of clubs (e.g., Girl Guides, Rotary, Red Hat Society) are all meso-level groups that sociologists might study. Finally, at the macro level, sociologists sometimes examine citizens of entire nations or residents of different continents or other regions.

A study of student addictions to their cell phones at the group level might consider whether certain types of social clubs have more or fewer cell phone-addicted members than other sorts of clubs. Perhaps we would find that clubs that emphasize physical fitness, such as the rugby club and the scuba club, have fewer cell phone-addicted members than clubs that emphasize cerebral activity, such as the chess club and the sociology club. Our unit of analysis in this example is groups. If we had instead asked whether people who join cerebral clubs are more likely to be cell phone-addicted than those who join social clubs, then our unit of analysis would have been individuals. In either case, however, our unit of observation would be individuals.
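The distinction can be sketched in code. In this minimal Python sketch, every row is an individual observation, but the researcher can treat either the individual or the club as the unit of analysis; all club names and figures are invented for illustration.

```python
# Hypothetical observations: one record per student (the unit of observation).
students = [
    {"name": "A", "club": "rugby", "addicted": False},
    {"name": "B", "club": "rugby", "addicted": True},
    {"name": "C", "club": "chess", "addicted": True},
    {"name": "D", "club": "chess", "addicted": True},
    {"name": "E", "club": "scuba", "addicted": False},
    {"name": "F", "club": "scuba", "addicted": False},
]

# Unit of analysis = the individual: classify each student separately.
individual_view = {s["name"]: s["addicted"] for s in students}

# Unit of analysis = the group: aggregate the same observations by club.
club_totals = {}
for s in students:
    totals = club_totals.setdefault(s["club"], {"members": 0, "addicted": 0})
    totals["members"] += 1
    totals["addicted"] += s["addicted"]  # bool counts as 0 or 1

club_rates = {club: t["addicted"] / t["members"] for club, t in club_totals.items()}
```

With the individual as the unit of analysis we describe each student; with the group as the unit of analysis we compare only the per-club addiction rates, even though the underlying observations are identical.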

Organizations are yet another potential unit of analysis that social scientists might wish to say something about. Organizations include entities like corporations, colleges and universities, and even night clubs. At the organization level, a study of students’ cell phone addictions might ask, “How do different colleges address the problem of cell phone addiction?” In this case, our interest lies not in the experience of individual students but instead in the campus-to-campus differences in confronting cell phone addictions. A researcher conducting a study of this type might examine schools’ written policies and procedures, so his unit of observation would be documents. However, because he ultimately wishes to describe differences across campuses, the college would be his unit of analysis.

Social phenomena are also a potential unit of analysis. Many sociologists study a variety of social interactions and social problems that fall under this category. Examples include social problems like murder or rape; interactions such as counselling sessions, Facebook chatting, or wrestling; and other social phenomena such as voting and even cell phone use or misuse. A researcher interested in students’ cell phone addictions could ask, “What are the various types of cell phone addictions that exist among students?” Perhaps the researcher will discover that some addictions are primarily centred on social media such as chat rooms, Facebook, or texting, while other addictions centre on single-player games that discourage interaction with others. The resultant typology of cell phone addictions would tell us something about the social phenomenon (unit of analysis) being studied. As in several of the preceding examples, however, the unit of observation would likely be individual people.

Finally, a number of social scientists examine policies and principles, the last type of unit of analysis we will consider here. Studies that analyze policies and principles typically rely on documents as the unit of observation. Perhaps a researcher has been hired by a college to help it write an effective policy against cell phone use in the classroom. In this case, the researcher might gather all previously written policies from campuses all over the country, and compare policies at campuses where the use of cell phones in the classroom is low to policies at campuses where the use of cell phones in the classroom is high.

In sum, there are many potential units of analysis that a sociologist might examine, but some of the most common units include the following:

  • Individuals
  • Groups
  • Organizations
  • Social phenomena
  • Policies and principles

Table 4.1 Units of analysis and units of observation: A hypothetical study of students’ addictions to cell phones.

Research Methods for the Social Sciences: An Introduction Copyright © 2020 by Valerie Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Choosing the Right Unit of Analysis for Your Research Project

Table of contents

  • Understanding the Unit of Analysis in Research
  • Factors to Consider When Selecting the Right Unit of Analysis
  • Common Mistakes to Avoid

A research project is like setting out on a voyage through uncharted territory; the unit of analysis is your compass, guiding every decision from methodology to interpretation.

It’s the beating heart of your data collection and the lens through which you view your findings. With deep-seated experience in research methodologies, we recognize that choosing an appropriate unit of analysis not only anchors your study but also illuminates paths towards meaningful conclusions.

The right choice empowers researchers to extract patterns, answer pivotal questions, and offer insights into complex phenomena. But tread carefully—selecting an ill-suited unit can distort results or obscure significant relationships within data.

Remember this: A well-chosen unit of analysis acts as a beacon for accuracy and relevance throughout your scholarly inquiry. Continue reading to unlock the strategies for selecting this cornerstone of research design with precision—your project’s success depends on it.

Engage with us as we delve deeper into this critical aspect of research mastery.

Key Takeaways

  • Your research questions and hypotheses drive the choice of your unit of analysis, shaping how you collect and interpret data.
  • Avoid common mistakes like reductionism , which oversimplifies complex issues, and the ecological fallacy , where group-level findings are wrongly applied to individuals.
  • Consider the availability and quality of data when selecting your unit of analysis to ensure your research is feasible and conclusions are valid.
  • Differentiate between units of analysis (what you’re analyzing) and units of observation (what or who you’re observing) for clarity in your study.
  • Ensure that your chosen unit aligns with both the theoretical framework and practical considerations such as time and resources.

Understanding the Unit of Analysis in Research

The unit of analysis in research refers to the level at which data is collected and analyzed. It is essential for researchers to understand the different types of units of analysis, as well as their significance in shaping the research process and outcomes.

Definition and Importance

With resonio, the unit of analysis you choose lays the groundwork for your market research focus. Whether it’s individuals, organizations, or specific events, resonio’s platform facilitates targeted data collection and analysis to address your unique research questions. Our tool simplifies this selection process, ensuring that you can efficiently zero in on the most relevant unit for insightful and actionable results.

This crucial component serves as a navigational aid for your market research. The market research tool not only guides you in data collection but also in selecting the most effective sampling methods and approaches to hypothesis testing, helping you gather robust and reliable data and ensuring your research is both effective and straightforward.

Choosing the right unit of analysis is crucial, as it defines your research’s direction. resonio makes this easier, ensuring your choice aligns with your theoretical approach and data collection methods, thereby enhancing the validity and reliability of your results.

Additionally, resonio aids in steering clear of errors like reductionism and ecological fallacy, ensuring your conclusions match the data’s level of analysis.

Difference between Unit of Analysis and Unit of Observation

Understanding the difference between the unit of analysis and the unit of observation is key. Let us clarify the distinction: the unit of analysis is what you’ll ultimately analyze, while the unit of observation is what you observe or measure during the study.

For example, in using resonio for educational research, individual test scores are the units of analysis, while the students providing these scores are the units of observation.

This distinction is essential as it clarifies the specific aspect under scrutiny and what will yield measurable data. It also emphasizes that researchers must carefully consider both elements to ensure their alignment with research questions and objectives.

Types of Units of Analysis: Individual, Aggregates, and Social

Choosing the right unit of analysis for a research project is critical. The types of units of analysis include individual, aggregates, and social.

  • Individual: This type focuses on analyzing the attributes and characteristics of individual units, such as people or specific objects.
  • Aggregates: Aggregates involve analyzing groups or collections of individual units, such as neighborhoods, organizations, or communities.
  • Social: Social units of analysis emphasize analyzing broader social entities, such as cultures, societies, or institutions.

Factors to Consider When Selecting the Right Unit of Analysis

When selecting the right unit of analysis for a research project, researchers must consider various factors such as their research questions and hypotheses, data availability and quality, feasibility and practicality, as well as the theoretical framework and research design.

Each of these factors plays a crucial role in determining the most appropriate unit of analysis for the study.

Research Questions and Hypotheses

The research questions and hypotheses play a crucial role in determining the appropriate unit of analysis for a research project. They guide the researcher in identifying what exactly needs to be studied and analyzed, thereby influencing the selection of the most relevant unit of analysis.

The alignment between the research questions/hypotheses and the unit of analysis is essential to ensure that the study’s focus meets its intended objectives. Furthermore, clear research questions and hypotheses help define specific parameters for data collection and analysis, directly impacting which unit of analysis will best serve the study’s purpose.

It’s important to carefully consider how each research question or hypothesis relates to different potential units of analysis, as this connection will shape not only what you are studying but also how you will study it.

Data Availability and Quality

When considering the unit of analysis for a research project, researchers must take into account the availability and quality of data. The chosen unit of analysis should align with the available data sources to ensure that meaningful and accurate conclusions can be drawn.

Researchers need to evaluate whether the necessary data at the chosen level of analysis is accessible and reliable. Ensuring high-quality data will contribute to the validity and reliability of the study , enabling researchers to make sound interpretations and draw robust conclusions from their findings.

Choosing a unit of analysis without considering data availability and quality may lead to limitations in conducting thorough analysis or drawing valid conclusions. It is crucial for researchers to assess both factors before finalizing their selection, as it directly impacts the feasibility, accuracy, and rigor of their research project.

Feasibility and Practicality

When considering the feasibility and practicality of a unit of analysis for a research project, it is essential to assess the availability and quality of data related to the chosen unit.

Researchers should also evaluate whether the selected unit aligns with their theoretical framework and research design. The practical aspects such as time, resources, and potential challenges associated with analyzing the chosen unit must be thoroughly considered before finalizing the decision.

Moreover, it is crucial to ensure that the selected unit of analysis is feasible within the scope of the research questions and hypotheses. Additionally, researchers need to determine if the chosen unit can be effectively studied based on existing literature and sampling techniques utilized in similar studies.

By carefully evaluating these factors, researchers can make informed decisions regarding which unit of analysis will best suit their research goals.

Theoretical Framework and Research Design

The theoretical framework and research design establish the structure for a study based on existing theories and concepts. It guides the selection of the unit of analysis by providing a foundation for understanding how variables interact and influence one another.

Theoretical frameworks help to shape research questions , hypotheses, and data collection methods, ensuring that the chosen unit of analysis aligns with the study’s objectives. Research design serves as a blueprint outlining the procedures and techniques used to gather and analyze data, allowing researchers to make informed decisions regarding their unit of analysis while considering feasibility, practicality, and data availability .

Common Mistakes to Avoid

Researchers often make the mistake of reductionism, where they oversimplify complex phenomena by focusing on one aspect. Another common mistake is the ecological fallacy, where conclusions about individual behavior are made based on group-level data.

Reductionism

Reductionism occurs when a researcher oversimplifies a complex phenomenon by analyzing it at too basic a level. This can lead to the loss of important nuances and details critical for understanding the broader context.

For instance, studying individual test scores without considering external factors like teaching quality or student motivation is reductionist. By focusing solely on one aspect, researchers miss out on comprehensive insights that may impact their findings.

In research projects, reductionism limits the depth of analysis and may result in skewed conclusions that don’t accurately reflect the real-world complexities. It’s essential for researchers to avoid reductionism by carefully selecting an appropriate unit of analysis that allows for a holistic understanding of the phenomenon under study.

Ecological Fallacy

The ecological fallacy involves making conclusions about individuals based on group-level data . This occurs when researchers mistakenly assume that relationships observed at the aggregate level also apply to individuals within that group.

For example, if a study finds a correlation between high levels of education and income at the city level, it doesn’t mean the same relationship applies to every individual within that city.

This fallacy can lead to erroneous generalizations and inaccurate assumptions about individuals based on broader trends. It is crucial for researchers to be mindful of this potential pitfall when selecting their unit of analysis, ensuring that their findings accurately represent the specific characteristics and behaviors of the individuals or entities under investigation.
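The fallacy is easy to reproduce with toy numbers. In the plain-Python sketch below (all figures invented), education and income rise together when the two cities are compared as groups, yet within each city the individual-level relationship runs the other way, so projecting the group-level finding onto individuals would be an ecological fallacy.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented (education_years, income_thousands) pairs for individuals in two cities.
city_a = [(10, 55), (11, 52), (12, 50)]
city_b = [(14, 75), (15, 72), (16, 70)]

# Group level: comparing the two city means, education and income move together.
city_means = [
    (statistics.fmean([e for e, _ in c]), statistics.fmean([i for _, i in c]))
    for c in (city_a, city_b)
]
group_r = pearson([e for e, _ in city_means], [i for _, i in city_means])

# Individual level, within a city: the relationship is actually negative.
within_a = pearson([e for e, _ in city_a], [i for _, i in city_a])
```

Here `group_r` is strongly positive while `within_a` is strongly negative: the same dataset supports opposite conclusions depending on the unit of analysis.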

Selecting the appropriate unit of analysis is critical for a research project’s success, shaping its focus and scope. Researchers must carefully align the chosen unit with their study objectives to ensure relevance.

The impact of this choice on findings and conclusions cannot be overstated. Correctly choosing the unit of analysis can considerably influence the direction and outcomes of a research undertaking.

Robert Koch

I write about AI, SEO, Tech, and Innovation. Led by curiosity, I stay ahead of AI advancements. I aim for clarity and understand the necessity of change, taking guidance from Shaw: 'Progress is impossible without change,' and living by Welch's words: 'Change before you have to'.




The Unit of Analysis Explained

DiscoverPhDs

  • By DiscoverPhDs
  • October 3, 2020

Unit of Analysis

The unit of analysis refers to the main parameter that you’re investigating in your research project or study. Examples of the different types of units of analysis that may be used in a project include:

  • Individual people
  • Groups of people
  • Objects such as photographs, newspapers and books
  • Geographical units such as cities or counties
  • Social parameters such as births, deaths, and divorces

The unit of analysis is named as such because it is determined by the actual data analysis that you perform in your project or study.

For example, if your research is based on data on exam grades for students at two different universities, then the unit of analysis is the individual student, since each student has an exam score associated with them.

Conversely, if your study compares noise-level data between two lecture halls full of students, then your unit of analysis is the collective group of students in each hall, rather than any data associated with an individual student.

In the same research study involving the same students, you may perform different types of analysis and this will be reflected by having different units of analysis. In the example of student exam scores, if you’re comparing individual exam grades then the unit of analysis is the individual student.

On the other hand, if you’re comparing the average exam grade between two universities, then the unit of analysis is now the group of students as you’re comparing the average of the group rather than individual exam grades.
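This switch of unit can be made concrete in a short Python sketch (the universities and scores below are invented): the same observations answer an individual-level question and a group-level question.

```python
from statistics import mean

# Invented exam scores, one per student, grouped by university.
scores = {
    "university_x": [62, 71, 68, 75],
    "university_y": [70, 66, 73, 81],
}

# Unit of analysis = the individual student: compare single exam grades,
# e.g. find the top-scoring student across both universities.
all_scores = [(uni, s) for uni, marks in scores.items() for s in marks]
top_student = max(all_scores, key=lambda pair: pair[1])

# Unit of analysis = the group: compare only the per-university averages.
averages = {uni: mean(marks) for uni, marks in scores.items()}
```

The individual-level analysis ranks students; the group-level analysis discards individual grades and keeps only one number per university.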

These hierarchies of units of analysis can become complex, with multiple nested levels. In fact, this complexity has given rise to a field of statistical analysis commonly known as hierarchical modelling.

As a researcher, you need to be clear on what your specific research question is. Based on this, you can define each data point, observation, or other variable, and how they make up your dataset.

Clarity about your research question will help you identify your units of analysis and the appropriate sample size needed to obtain a meaningful result (and whether this is a random sample, a sampling unit, or something else).

In developing your research method, you need to consider whether you’ll need repeated observations of each measurement. You also need to consider whether you’re working with qualitative data and qualitative research, or with quantitative methods such as quantitative content analysis.

The unit of analysis of your study is specifically the ‘who’ or ‘what’ that you’re analysing – for example, the individual student, the group of students, or even the whole university. You may have to consider a different unit of analysis depending on the concept you’re examining, even when working with the same set of observational data.




Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes. Revised on 20 March 2023.

A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.


Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
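A rough Python sketch of the contrast, using an invented sampling frame: a simple random sample (one probability method) gives every unit a known, equal chance of selection, while a convenience sample (one non-probability method) does not.

```python
import random

# Invented sampling frame of 1,000 student IDs.
population = [f"student_{i:04d}" for i in range(1000)]

rng = random.Random(42)  # fixed seed so the draw is reproducible

# Probability sampling: simple random sample without replacement; every
# student has the same known chance (n/N = 50/1000) of being selected.
simple_random = rng.sample(population, k=50)

# Non-probability sampling: convenience sample, e.g. whoever is easiest to
# reach (here, simply the first 50 on the list). Selection chances are
# unknown, so the sample may be systematically biased.
convenience = population[:50]
```

Only the first draw supports the statistical generalisations described above; the second requires a careful argument about representativeness.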

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations, which events or actions will you count?

If you’re using surveys, which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity has already been established.
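As a concrete (and entirely hypothetical) illustration, an abstract concept such as satisfaction might be operationalised as a composite of Likert-scale items. The item names and the simple averaging rule below are invented for the example, not taken from any established instrument:

```python
# Hypothetical example: operationalising "satisfaction" as the mean of
# five Likert items (1 = strongly disagree ... 5 = strongly agree).
# Item names and the scoring rule are illustrative only.

def satisfaction_score(responses: dict[str, int]) -> float:
    """Turn raw Likert responses into a single measurable indicator."""
    items = ["q1", "q2", "q3", "q4", "q5"]
    for item in items:
        if not 1 <= responses[item] <= 5:
            raise ValueError(f"{item} is outside the 1-5 Likert range")
    return sum(responses[item] for item in items) / len(items)

participant = {"q1": 4, "q2": 5, "q3": 3, "q4": 4, "q5": 4}
print(satisfaction_score(participant))  # 4.0
```

The point is only that the fuzzy concept becomes a defined, repeatable calculation; a real study would justify the items and the scoring rule.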

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.
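One common reliability check on pilot questionnaire data is internal consistency, often summarised as Cronbach’s alpha. The sketch below implements the textbook formula in pure Python with made-up pilot responses; in practice you would more likely use a statistics package:

```python
from statistics import pvariance

def cronbach_alpha(item_scores: list[list[int]]) -> float:
    """Internal-consistency reliability. item_scores[i][j] is
    participant i's score on questionnaire item j."""
    k = len(item_scores[0])                 # number of items
    by_item = list(zip(*item_scores))       # scores grouped by item
    item_var = sum(pvariance(col) for col in by_item)
    total_var = pvariance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical pilot data: 4 participants x 3 items
pilot = [[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 3, 2]]
print(round(cronbach_alpha(pilot), 2))  # 0.94
```

Values above roughly 0.7 are conventionally treated as acceptable, though the threshold depends on the field and the stakes of the measurement.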

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?
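For the sample-size question above, a standard starting point when estimating a population proportion is n = z²·p·(1−p)/e². A small sketch, where the 95% z-value and the conservative p = 0.5 defaults are conventional choices rather than requirements:

```python
from math import ceil

def sample_size_for_proportion(margin_of_error: float,
                               confidence_z: float = 1.96,
                               expected_p: float = 0.5) -> int:
    """Minimum n to estimate a population proportion within the given
    margin of error. z = 1.96 corresponds to 95% confidence;
    p = 0.5 is the most conservative assumption."""
    n = (confidence_z ** 2) * expected_p * (1 - expected_p) / margin_of_error ** 2
    return ceil(n)

print(sample_size_for_proportion(0.05))  # 385
print(sample_size_for_proportion(0.03))  # 1068
```

This assumes simple random sampling from a large population; clustered designs or small populations need adjusted formulas.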

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.
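One common safeguard for sensitive data is to replace direct identifiers with salted hashes before analysis. A minimal sketch (this is pseudonymisation rather than full anonymisation, and the salt value here is purely illustrative):

```python
import hashlib

def pseudonymise(participant_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted hash. The salt must be
    kept secret and stored separately from the data; this is a sketch,
    not a complete data-protection scheme."""
    return hashlib.sha256((salt + participant_id).encode()).hexdigest()[:12]

salt = "keep-this-secret"   # illustrative value only
print(pseudonymise("jane.doe@example.edu", salt))
```

The same participant always maps to the same code, so you can still link records across files without storing names or email addresses in the analysis dataset.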

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis. With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics, you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
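The three kinds of descriptive summary listed above can be sketched with Python’s standard library; the test scores here are invented for illustration:

```python
from collections import Counter
from statistics import mean, median, pstdev

scores = [7, 8, 5, 9, 8, 6, 8, 10, 7, 8]   # hypothetical test scores

distribution = Counter(scores)               # frequency of each score
print(sorted(distribution.items()))

print(mean(scores), median(scores))          # central tendency: 7.6, 8.0

print(round(pstdev(scores), 2))              # variability (population SD): 1.36
```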

Using inferential statistics, you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
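As one example of a comparison test, the core of Welch’s t statistic (a t test that does not assume equal group variances) can be computed by hand. This sketch uses made-up group data and stops at the statistic itself; converting it to a p-value is normally left to a statistics package such as scipy:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a: list[float], b: list[float]) -> float:
    """t statistic comparing two group means without assuming
    equal variances (Welch's t-test)."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical outcome scores for two groups
group_a = [12, 15, 14, 10, 13]
group_b = [9, 8, 11, 7, 10]
print(round(welch_t(group_a, group_b), 2))  # 3.41
```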

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis.

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.
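For instance, a simple random sample can be drawn from a sampling frame with Python’s standard library (the student IDs below are invented):

```python
import random

# Hypothetical sampling frame: one ID per enrolled student.
population = [f"student_{i:04d}" for i in range(1, 5001)]   # 5,000 students

random.seed(42)                           # seeded only to make the example repeatable
sample = random.sample(population, 100)   # simple random sample, n = 100

print(len(sample), len(set(sample)))      # 100 distinct students, drawn without replacement
```

Because `random.sample` draws without replacement, every student in the frame has the same chance of selection, which is what makes the sample a probability sample.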

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

McCombes, S. (2023, March 20). Research Design | Step-by-Step Guide with Examples. Scribbr. Retrieved 14 May 2024, from https://www.scribbr.co.uk/research-methods/research-design/

Shona McCombes

Social Sci LibreTexts

3.2: Unit of Analysis and Errors



Learning Objective

  • Define units of analysis and units of observation, and describe the two common errors people make when they confuse the two.

Units of Analysis

Another point to consider when designing a research project, and which might differ slightly in qualitative and quantitative studies, has to do with units of analysis. A unit of analysis is the entity that you wish to be able to say something about at the end of your study, probably what you’d consider to be the main focus of your study.

More specifically, your unit of analysis will be determined by your research question. For now, let’s go back to an example about students’ addictions to electronic gadgets. We’ll consider first how different kinds of research questions about this topic will yield different units of analysis. Then we’ll think about how those questions might be answered and with what kinds of data.

If we were to ask, “Which students are most likely to be addicted to their electronic gadgets?” our unit of analysis would be the individual. We might mail a survey to students on campus, and our aim would be to classify individuals according to their membership in certain social classes in order to see how membership in those classes correlated with gadget addiction. For example, we might find that majors in new media, men, and students with high socioeconomic status are all more likely than other students to become addicted to their electronic gadgets. Another possibility would be to ask, “How do students’ gadget addictions differ, and how are they similar?” In this case, we could conduct observations of addicted students and record when, where, why, and how they use their gadgets. In both cases, one using a survey and the other using observations, data are collected from individual students. The units of analysis for both would be individuals.

Another common unit of analysis in sociological inquiry is groups. Groups, of course, vary in size, and almost no group is too small or too large to be of interest to sociologists. Families, friendship groups, and street gangs make up some of the more common groups examined by sociologists. Employees in an organization, professionals in a particular domain (e.g., chefs, lawyers, sociologists), and members of clubs (e.g., Girl Scouts, Rotary, Red Hat Society) are all larger groups that sociologists might study. Finally, at the macro level, sociologists sometimes examine citizens of entire nations or residents of different continents or other regions.

A study of student addictions to their electronic gadgets at the group level might consider whether certain types of social clubs have more or fewer gadget-addicted members than other sorts of clubs. Perhaps we would find that clubs that emphasize physical fitness, such as the rugby club and the scuba club, have fewer gadget-addicted members than clubs that emphasize cerebral activity, such as the chess club and the sociology club. Our unit of analysis in this example is groups. If we had instead asked whether people who join cerebral clubs are more likely to be gadget-addicted than those who join social clubs, then our unit of analysis would have been individuals.

Organizations are yet another potential unit of analysis that social scientists might wish to say something about. As you may recall from other courses, organizations include entities like corporations, colleges and universities, and even night clubs. At the organization level, a study of students’ electronic gadget addictions might ask, “How do different colleges address the problem of electronic gadget addiction?” In this case, our interest lies not in the experience of individual students but instead in the campus-to-campus differences in confronting gadget addictions. A researcher conducting a study of this type might examine schools’ written policies and procedures, which could be social interactions or artifacts. However, because he ultimately wishes to describe differences across campuses, the college would be his unit of analysis.

Of course, it would be silly in a textbook focused on social scientific research to neglect social interactions and artifacts as units of analysis. Social interactions are relevant when studying individual humans and looking at the interactions between them, such as arguments, emails, fights, and dancing. A researcher interested in students’ electronic gadget addictions could ask, “What do students talk about on social media platforms?” We could study this by observing interactions on various social media websites.

Another unit of analysis, artifacts, are the products of social beings and their behavior, such as books, pottery, etc. A researcher interested in students’ electronic gadget addictions could ask, “How are different phone interfaces structured to encourage addiction?”

In sum, there are many potential units of analysis that a sociologist might examine, but some of the most common units include the following:

  • Individuals
  • Groups
  • Organizations
  • Social interactions
  • Artifacts

Unit of Analysis Errors

One common error we see people make when it comes to both causality and units of analysis is something called the ecological fallacy. This occurs when claims about one lower-level unit of analysis are made based on data from some higher-level unit of analysis. In many cases, this occurs when claims are made about individuals, but only group-level data have been gathered. For example, we might want to understand whether electronic gadget addictions are more common on certain campuses than on others. Perhaps different campuses around the country have provided us with their campus percentage of gadget-addicted students, and we learn from these data that electronic gadget addictions are more common on campuses that have business programs than on campuses without them. We then conclude that business students are more likely than nonbusiness students to become addicted to their electronic gadgets. However, this would be an inappropriate conclusion to draw. Because we only have addiction rates by campus, we can only draw conclusions about campuses, not about the individual students on those campuses. Perhaps the sociology majors on the business campuses are the ones that caused the addiction rates on those campuses to be so high. The point is we simply don’t know because we only have campus-level data. By drawing conclusions about students when our data are about campuses, we run the risk of committing the ecological fallacy.
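The campus example can be made concrete with invented numbers: the aggregate rate on the business campus is identical whether or not the business majors are the addicted ones, so campus-level data alone cannot support the individual-level claim:

```python
# Hypothetical figures illustrating the ecological fallacy.
# Campus-level data (all we actually collected): the campus with a
# business program has the higher addiction rate...
campus_rates = {"with_business_program": 0.30, "without_business_program": 0.20}

# ...but suppose the (unobserved) individual-level truth on the
# business campus looks like this:
business_majors = {"addicted": 10, "total": 100}   # 10% addicted
other_majors = {"addicted": 50, "total": 100}      # 50% addicted

# The campus-level rate is consistent with BOTH individual patterns:
campus_rate = ((business_majors["addicted"] + other_majors["addicted"])
               / (business_majors["total"] + other_majors["total"]))
print(campus_rate)  # 0.3 -- the aggregate cannot tell us which students drive it
```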

On the other hand, another mistake to be aware of is reductionism. Reductionism occurs when claims about some higher-level unit of analysis are made based on data from some lower-level unit of analysis. In this case, claims about groups are made based on individual-level data. An example of reductionism can be seen in some descriptions of the civil rights movement. On occasion, people have proclaimed that Rosa Parks started the civil rights movement in the United States by refusing to give up her seat to a white person while on a city bus in Montgomery, Alabama, in December 1955. Although it is true that Parks played an invaluable role in the movement, and that her act of civil disobedience gave others courage to stand up against racist policies, beliefs, and actions, to credit Parks with starting the movement is reductionist. Surely the confluence of many factors, from fights over legalized racial segregation to the Supreme Court’s historic decision to desegregate schools in 1954 to the creation of groups such as the Student Nonviolent Coordinating Committee (to name just a few), contributed to the rise and success of the American civil rights movement. In other words, the movement is attributable to many factors—some social, others political, others economic. Did Parks play a role? Of course she did—and a very important one at that. But did she cause the movement? To say yes would be reductionist.

It would be a mistake to conclude from the preceding discussion that researchers should avoid making any claims whatsoever about data or about relationships between variables. While it is important to be attentive to the possibility for error in causal reasoning about different levels of analysis, this warning should not prevent you from drawing well-reasoned analytic conclusions from your data. The point is to be cautious but not abandon entirely the social scientific quest to understand patterns of behavior.

KEY TAKEAWAYS

  • A unit of analysis is the item you wish to be able to say something about at the end of your study.
  • The ecological fallacy and reductionism both result from drawing conclusions about one unit of analysis based on data gathered at a different unit of analysis.

Exercise

  • Do a Google News search for the term ecological fallacy. Chances are good you’ll come across a number of news editorials using this term. Read a few of these editorials or articles, and print one out. Demonstrate your understanding of the term by writing a short answer discussing whether the author of the article you printed out used it correctly.


Research Methods Knowledge Base


Unit of Analysis


One of the most important ideas in a research project is the unit of analysis. The unit of analysis is the major entity that you are analyzing in your study. For instance, any of the following could be a unit of analysis in a study:

  • individuals
  • artifacts (books, photos, newspapers)
  • geographical units (town, census tract, state)
  • social interactions (dyadic relations, divorces, arrests)

Why is it called the ‘unit of analysis’ and not something else (like, the unit of sampling)? Because it is the analysis you do in your study that determines what the unit is. For instance, if you are comparing the children in two classrooms on achievement test scores, the unit is the individual child because you have a score for each child. On the other hand, if you are comparing the two classes on classroom climate, your unit of analysis is the group, in this case the classroom, because you only have a classroom climate score for the class as a whole and not for each individual student. For different analyses in the same study you may have different units of analysis. If you decide to base an analysis on student scores, the individual is the unit. But you might decide to compare average classroom performance. In this case, since the data that goes into the analysis is the average itself (and not the individuals’ scores), the unit of analysis is actually the group. Even though you had data at the student level, you use aggregates in the analysis. In many areas of social research these hierarchies of analysis units have become particularly important and have spawned a whole area of statistical analysis sometimes referred to as hierarchical modeling. This is true in education, for instance, where we often compare classroom performance but collect achievement data at the individual student level.
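The classroom example can be sketched in code: the same student-level scores support two different analyses with two different units of analysis (all data hypothetical):

```python
from statistics import mean

# Hypothetical achievement scores, collected at the individual student level:
classrooms = {
    "class_a": [72, 85, 90, 66, 77],
    "class_b": [88, 79, 94, 81, 83],
}

# Analysis 1: compare children on test scores -> unit of analysis = individual
all_scores = [s for scores in classrooms.values() for s in scores]
print(len(all_scores))   # one data point per child

# Analysis 2: compare average classroom performance -> the aggregate itself
# goes into the analysis, so the unit of analysis = the group (classroom)
class_means = {name: mean(scores) for name, scores in classrooms.items()}
print(class_means)       # one data point per classroom
```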


  • USC Libraries
  • Research Guides

Organizing Your Social Sciences Research Paper

Types of Research Designs

Introduction

Before beginning your paper, you need to decide how you plan to design the study.

The research design refers to the overall strategy and analytical approach that you have chosen in order to integrate, in a coherent and logical way, the different components of the study, thus ensuring that the research problem will be thoroughly investigated. It constitutes the blueprint for the collection, measurement, and interpretation of information and data. Note that the research problem determines the type of design you choose, not the other way around!

De Vaus, D. A. Research Design in Social Research. London: SAGE, 2001; Trochim, William M.K. Research Methods Knowledge Base. 2006.

General Structure and Writing Style

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem logically and as unambiguously as possible. In social sciences research, obtaining information relevant to the research problem generally entails specifying the type of evidence needed to test the underlying assumptions of a theory, to evaluate a program, or to accurately describe and assess meaning related to an observable phenomenon.

With this in mind, a common mistake made by researchers is that they begin their investigations before they have thought critically about what information is required to address the research problem. Without attending to these design issues beforehand, the overall research problem will not be adequately addressed and any conclusions drawn will run the risk of being weak and unconvincing. As a consequence, the overall validity of the study will be undermined.

The length and complexity of describing the research design in your paper can vary considerably, but any well-developed description will achieve the following:

  • Identify the research problem clearly and justify its selection, particularly in relation to any valid alternative designs that could have been used,
  • Review and synthesize previously published literature associated with the research problem,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem,
  • Effectively describe the information and/or data which will be necessary for an adequate testing of the hypotheses and explain how such information and/or data will be obtained, and
  • Describe the methods of analysis to be applied to the data in determining whether or not the hypotheses are true or false.

The research design is usually incorporated into the introduction of your paper. You can obtain an overall sense of what to do by reviewing studies that have utilized the same research design [e.g., using a case study approach]. This can help you develop an outline to follow for your own paper.

NOTE: Use the SAGE Research Methods Online and Cases and the SAGE Research Methods Videos databases to search for scholarly resources on how to apply specific research designs and methods. The Research Methods Online database contains links to more than 175,000 pages of SAGE publisher's book, journal, and reference content on quantitative, qualitative, and mixed research methodologies. Also included is a collection of case studies of social research projects that can be used to help you better understand abstract or complex methodological concepts. The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process.

Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 5th edition. Thousand Oaks, CA: Sage, 2018; De Vaus, D. A. Research Design in Social Research. London: SAGE, 2001; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences. Thousand Oaks, CA: Sage, 2013; Leedy, Paul D. and Jeanne Ellis Ormrod. Practical Research: Planning and Design. Tenth edition. Boston, MA: Pearson, 2013; Vogt, W. Paul, Dianna C. Gardner, and Lynne M. Haeffele. When to Use What Research Design. New York: Guilford, 2012.

Action Research Design

Definition and Purpose

The essentials of action research design follow a characteristic cycle whereby initially an exploratory stance is adopted, where an understanding of a problem is developed and plans are made for some form of interventionary strategy. Then the intervention is carried out [the "action" in action research], during which time pertinent observations are collected in various forms. The new interventional strategies are carried out, and this cyclic process repeats, continuing until a sufficient understanding of [or a valid implementation solution for] the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  • This is a collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research outcomes rather than testing theories.
  • When practitioners use action research, it has the potential to increase the amount they learn consciously from their experience; the action research cycle can be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to improving practice and advocating for change.
  • There are no hidden controls or preemption of direction by the researcher.

What these studies don't tell you?

  • It is harder to do than conducting conventional research because the researcher takes on responsibilities of advocating for change as well as for researching the topic.
  • Action research is much harder to write up because it is less likely that you can use a standard format to report your findings effectively [i.e., data is often in the form of stories or observation].
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action [e.g. change] and research [e.g. understanding] is time-consuming and complex to conduct.
  • Advocating for change usually requires buy-in from study participants.

Coghlan, David and Mary Brydon-Miller. The Sage Encyclopedia of Action Research. Thousand Oaks, CA: Sage, 2014; Efron, Sara Efrat and Ruth Ravid. Action Research in Education: A Practical Guide. New York: Guilford, 2013; Gall, Meredith. Educational Research: An Introduction. Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences. Thousand Oaks, CA: Sage, 2013; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research. Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; McNiff, Jean. Writing and Doing Action Research. London: Sage, 2014; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice. Thousand Oaks, CA: SAGE, 2001.

Case Study Design

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey or comprehensive comparative inquiry. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory or model actually applies to phenomena in the real world. It is a useful design when not much is known about an issue or phenomenon.

What do these studies tell you?

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and the extension of methodologies.
  • The design can provide detailed descriptions of specific and rare cases.
What these studies don't tell you?

  • A single or small number of cases offers little basis for establishing reliability or for generalizing the findings to a wider population of people, places, or things.
  • Intense exposure to the study of a case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem for study, then your interpretation of the findings can apply only to that particular case.

Case Studies. Writing@CSU. Colorado State University; Anastas, Jeane W. Research Design for Social Work and the Human Services. Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Gerring, John. “What Is a Case Study and What Is It Good for?” American Political Science Review 98 (May 2004): 341-354; Greenhalgh, Trisha, editor. Case Study Evaluation: Past, Present and Future Challenges. Bingley, UK: Emerald Group Publishing, 2015; Mills, Albert J., Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research. Thousand Oaks, CA: SAGE Publications, 2010; Stake, Robert E. The Art of Case Study Research. Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Methods. Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causal Design

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association -- a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order -- to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness -- a relationship between two variables that is not due to variation in a third variable.
  • Causality research designs assist researchers in understanding why the world works the way it does through the process of proving a causal link between variables and by the process of eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.

What these studies don't tell you

  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • Correlation alone does not establish which variable is the cause and which is the effect; for causation, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and, therefore, to establish which variable is the actual cause and which is the actual effect.
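The nonspuriousness condition can be illustrated with a short simulation. This is a sketch with invented data, not any study cited here; `residuals` is an illustrative helper that removes a variable's linear dependence on a third variable, so correlating the residuals gives a simple partial correlation:

```python
import random
import statistics as stats

rng = random.Random(0)
n = 10_000

# Simulated confounder z drives both x and y; x does NOT cause y.
z = [rng.gauss(0, 1) for _ in range(n)]
x = [zi + rng.gauss(0, 0.5) for zi in z]
y = [zi + rng.gauss(0, 0.5) for zi in z]

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = stats.fmean(a), stats.fmean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

def residuals(a, c):
    """Residuals of a after removing its linear dependence on c (simple OLS)."""
    mc, ma = stats.fmean(c), stats.fmean(a)
    slope = sum((ci - mc) * (ai - ma) for ci, ai in zip(c, a)) / \
        sum((ci - mc) ** 2 for ci in c)
    return [ai - (ma + slope * (ci - mc)) for ai, ci in zip(a, c)]

raw = corr(x, y)                                     # strong raw association
controlled = corr(residuals(x, z), residuals(y, z))  # near zero: the association was spurious
```

The raw correlation is large even though x has no effect on y; once the third variable z is held constant, the association essentially disappears, which is exactly what the nonspuriousness test checks for.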

Beach, Derek and Rasmus Brun Pedersen. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing. Ann Arbor, MI: University of Michigan Press, 2016; Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice. Chapter 5, Causation and Research Designs. 3rd ed. Thousand Oaks, CA: Pine Forge Press, 2007; Brewer, Ernest W. and Jennifer Kuhn. “Causal-Comparative Design.” In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 125-132; Causal Research Design: Experimentation. Anonymous SlideShare Presentation; Gall, Meredith. Educational Research: An Introduction. Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base. 2006.

Cohort Design

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population who are united by some commonality or similarity. Using a quantitative framework, a cohort study notes statistical occurrence within a specialized subgroup, united by the same or similar characteristics that are relevant to the research problem being investigated, rather than within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined just by the state of being a part of the study in question (and being monitored for the outcome). Dates of entry and exit from the study are individually defined; therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof.
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
  • The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors often relies upon cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.

What these studies don't tell you

  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Due to the lack of randomization in the cohort design, its external validity is lower than that of study designs where the researcher randomly assigns participants.
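The rate-based calculation mentioned for open cohorts amounts to dividing the number of new cases by the accumulated person-time of follow-up. A minimal sketch with invented follow-up records:

```python
# Hypothetical open-cohort records: (years of follow-up, outcome occurred?)
# In an open cohort each participant contributes a different amount of time.
participants = [(2.0, False), (5.0, True), (1.5, False), (4.0, True), (3.0, False)]

person_years = sum(years for years, _ in participants)   # total follow-up: 15.5 person-years
cases = sum(1 for _, event in participants if event)     # 2 incident cases
incidence_rate = cases / person_years                    # cases per person-year, ≈ 0.129
```

Because entry and exit dates vary by individual, the denominator is person-time rather than a fixed head count, which is why only rate-based measures can be computed for open cohorts.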

Healy P, Devane D. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Glenn, Norval D, editor. Cohort Analysis . 2nd edition. Thousand Oaks, CA: Sage, 2005; Levin, Kate Ann. Study Design IV: Cohort Studies. Evidence-Based Dentistry 7 (2003): 51–52; Payne, Geoff. “Cohort Study.” In The SAGE Dictionary of Social Research Methods . Victor Jupp, editor. (Thousand Oaks, CA: Sage, 2006), pp. 31-33; Study Design 101. Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study. Wikipedia.

Cross-Sectional Design

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and, groups are selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or among a variety of people, subjects, or phenomena rather than a process of change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

  • Cross-sectional studies provide a clear 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike an experimental design, where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies are capable of using data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population.
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.

What these studies don't tell you

  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical or temporal contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • This design only provides a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow-up to the findings.
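The prevalence estimate a cross-sectional survey yields is a simple proportion, commonly reported with a normal-approximation 95% confidence interval. A sketch with invented survey numbers:

```python
import math

# Hypothetical cross-sectional survey: 96 of 1,200 respondents have the outcome
n, cases = 1200, 96

prevalence = cases / n                                  # point estimate: 0.08
se = math.sqrt(prevalence * (1 - prevalence) / n)       # standard error of a proportion
ci_low = prevalence - 1.96 * se                         # lower bound of 95% CI
ci_high = prevalence + 1.96 * se                        # upper bound of 95% CI
```

Because all measurements come from one moment in time, this tells us how common the outcome is in the sample, but nothing about when it arose or what preceded it.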

Bethlehem, Jelke. "7: Cross-sectional Research." In Research Methodology in the Social, Behavioural and Life Sciences . Herman J Adèr and Gideon J Mellenbergh, editors. (London, England: Sage, 1999), pp. 110-43; Bourque, Linda B. “Cross-Sectional Design.” In  The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao. (Thousand Oaks, CA: 2004), pp. 230-231; Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Helen Barratt, Maria Kirwan. Cross-Sectional Studies: Design Application, Strengths and Weaknesses of Cross-Sectional Studies. Healthknowledge, 2009. Cross-Sectional Study. Wikipedia.

Descriptive Design

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

  • The subject is being observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject [a.k.a., the Heisenberg effect whereby measurements of certain systems cannot be made without affecting the systems].
  • Descriptive research is often used as a precursor to more quantitative research designs, with the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, they can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations in practice.
  • The approach collects a large amount of data for detailed analysis.

What these studies don't tell you

  • The results from descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999; Given, Lisa M. "Descriptive Research." In Encyclopedia of Measurement and Statistics . Neil J. Salkind and Kristin Rasmussen, editors. (Thousand Oaks, CA: Sage, 2007), pp. 251-254; McNabb, Connie. Descriptive Research Methodologies. Powerpoint Presentation; Shuttleworth, Martyn. Descriptive Research Design, September 26, 2008; Erickson, G. Scott. "Descriptive Research Design." In New Methods of Market Research and Analysis . (Northampton, MA: Edward Elgar Publishing, 2017), pp. 51-77; Sahin, Sagufta, and Jayanta Mete. "A Brief Study on Descriptive Research: Its Nature and Application in Social Science." International Journal of Research and Analysis in Humanities 1 (2021): 11; K. Swatzell and P. Jennings. “Descriptive Research: The Nuts and Bolts.” Journal of the American Academy of Physician Assistants 20 (2007), pp. 55-56; Kane, E. Doing Your Own Research: Basic Descriptive Research in the Social Sciences and Humanities . London: Marion Boyars, 1985.

Experimental Design

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “What causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.

What these studies don't tell you

  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter the behaviors or responses of participants.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed studies.
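Random assignment, one of the three requirements of a true experiment named above, can be sketched in a few lines (the participant IDs and group sizes are hypothetical):

```python
import random

rng = random.Random(42)  # fixed seed so the assignment is reproducible

# Hypothetical pool of twenty participant IDs
participants = [f"P{i:02d}" for i in range(1, 21)]

# Shuffle, then split into equal-sized experimental and control groups
rng.shuffle(participants)
treatment = participants[:10]   # receives the independent variable
control = participants[10:]     # does not; both are measured on the dependent variable
```

Randomizing the split, rather than letting participants or researchers choose groups, is what allows pre-existing differences to be treated as chance variation rather than confounds.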

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs. School of Psychology, University of New England, 2000; Chow, Siu L. "Experimental Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 448-453; "Experimental Design." In Social Research Methods . Nicholas Walliman, editor. (London, England: Sage, 2006), pp, 101-110; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Kirk, Roger E. Experimental Design: Procedures for the Behavioral Sciences . 4th edition. Thousand Oaks, CA: Sage, 2013; Trochim, William M.K. Experimental Design. Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research. Slideshare presentation.

Exploratory Design

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome. The focus is on gaining insights and familiarity for later investigation; it is often undertaken when research problems are in a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue.

The goals of exploratory research are intended to produce the following possible insights:

  • Familiarity with basic details, settings, and concerns.
  • Well grounded picture of the situation being developed.
  • Generation of new ideas and assumptions.
  • Development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • In the policy arena or applied to practice, exploratory studies help establish research priorities and where resources should be allocated.

What these studies don't tell you

  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits the ability to make definitive conclusions about the findings; results provide insight but not definitive answers.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value to decision-makers.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Streb, Christoph K. "Exploratory Case Study." In Encyclopedia of Case Study Research. Albert J. Mills, Gabrielle Durepos and Elden Wiebe, editors. (Thousand Oaks, CA: Sage, 2010), pp. 372-374; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research. Wikipedia.

Field Research Design

Sometimes referred to as ethnography or participant observation, designs around field research encompass a variety of interpretative procedures [e.g., observation and interviews] rooted in qualitative approaches to studying people individually or in groups while inhabiting their natural environment as opposed to using survey instruments or other forms of impersonal methods of data gathering. Information acquired from observational research takes the form of “field notes” that involve documenting what the researcher actually sees and hears while in the field. Findings do not consist of conclusive statements derived from numbers and statistics because field research involves analysis of words and observations of behavior. Conclusions, therefore, are developed from an interpretation of findings that reveal overriding themes, concepts, and ideas.

  • Field research is often necessary to fill gaps in understanding the research problem applied to local conditions or to specific groups of people that cannot be ascertained from existing data.
  • The research helps contextualize already known information about a research problem, thereby facilitating ways to assess the origins, scope, and scale of a problem and to gauge the causes, consequences, and means to resolve an issue based on deliberate interaction with people in their natural inhabited spaces.
  • Enables the researcher to corroborate or confirm data by gathering additional information that supports or refutes findings reported in prior studies of the topic.
  • Because the researcher is embedded in the field, they are better able to make observations or ask questions that reflect the specific cultural context of the setting being investigated.
  • Observing the local reality offers the opportunity to gain new perspectives or obtain unique data that challenges existing theoretical propositions or long-standing assumptions found in the literature.

What these studies don't tell you

  • A field research study requires extensive time and resources to carry out the multiple steps involved with preparing for the gathering of information, including for example, examining background information about the study site, obtaining permission to access the study site, and building trust and rapport with subjects.
  • Requires a commitment to staying engaged in the field to ensure that you can adequately document events and behaviors as they unfold.
  • The unpredictable nature of fieldwork means that researchers can never fully control the process of data gathering. They must maintain a flexible approach to studying the setting because events and circumstances can change quickly or unexpectedly.
  • Findings can be difficult to interpret and verify without access to documents and other source materials that help to enhance the credibility of information obtained from the field  [i.e., the act of triangulating the data].
  • Linking the research problem to the selection of study participants inhabiting their natural environment is critical. However, this specificity limits the ability to generalize findings to different situations or in other contexts or to infer courses of action applied to other settings or groups of people.
  • The reporting of findings must take into account how the researcher themselves may have inadvertently affected respondents and their behaviors.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute a hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is often no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.

What these studies don't tell you

  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access. This may be especially challenging for digital or online-only sources.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation, therefore, gaps need to be acknowledged.

Howell, Martha C. and Walter Prevenier. From Reliable Sources: An Introduction to Historical Methods . Ithaca, NY: Cornell University Press, 2001; Lundy, Karen Saucier. "Historical Research." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 396-400; Marius, Richard. and Melvin E. Page. A Short Guide to Writing about History . 9th edition. Boston, MA: Pearson, 2015; Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58;  Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. For example, with longitudinal surveys, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study sometimes referred to as a panel study.

  • Longitudinal data facilitate the analysis of the duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.

What these studies don't tell you

  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research data to explain fluctuations in the results.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • There is a need to have a large sample size and accurate sampling to reach representativeness.
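The core longitudinal measurement described above, the same variable observed for the same sample at two or more time points, reduces to comparing each unit with itself across waves. A sketch with invented panel scores:

```python
# Hypothetical panel data: the same five respondents measured at two waves
wave1 = [52, 47, 60, 55, 49]   # scores at the first interview
wave2 = [58, 50, 61, 62, 48]   # scores for the SAME people at a later interview

# Within-person change: each respondent is compared with their own earlier score
changes = [after - before for before, after in zip(wave1, wave2)]
mean_change = sum(changes) / len(changes)   # average change across the panel
```

Because each difference is computed within a person, stable individual characteristics cancel out, which is what lets longitudinal designs describe patterns of change rather than mere differences between groups.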

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Forgues, Bernard, and Isabelle Vandangeon-Derumez. "Longitudinal Analyses." In Doing Management Research . Raymond-Alain Thiétart and Samantha Wauchope, editors. (London, England: Sage, 2001), pp. 332-351; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Menard, Scott, editor. Longitudinal Research . Thousand Oaks, CA: Sage, 2002; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study. Wikipedia.

Meta-Analysis Design

Meta-analysis is an analytical methodology designed to systematically evaluate and summarize the results from a number of individual studies, thereby increasing the overall sample size and the ability of the researcher to study effects of interest. The purpose is not simply to summarize existing knowledge, but to develop a new understanding of a research problem using synoptic reasoning. The main objectives of meta-analysis include analyzing differences in the results among studies and increasing the precision by which effects are estimated. A well-designed meta-analysis depends upon strict adherence to the criteria used for selecting studies and the availability of information in each study to properly analyze their findings. Lack of information can severely limit the types of analyses and conclusions that can be reached. In addition, the more dissimilarity there is in the results among individual studies [heterogeneity], the more difficult it is to justify interpretations that govern a valid synopsis of results. A meta-analysis needs to fulfill the following requirements to ensure the validity of your findings:

  • Clearly defined description of objectives, including precise definitions of the variables and outcomes that are being evaluated;
  • A well-reasoned and well-documented justification for identification and selection of the studies;
  • Assessment and explicit acknowledgment of any researcher bias in the identification and selection of those studies;
  • Description and evaluation of the degree of heterogeneity among the sample size of studies reviewed; and,
  • Justification of the techniques used to evaluate the studies.
  • Can be an effective strategy for determining gaps in the literature.
  • Provides a means of reviewing research published about a particular topic over an extended period of time and from a variety of sources.
  • Is useful in clarifying what policy or programmatic actions can be justified on the basis of analyzing research results from multiple studies.
  • Provides a method for overcoming small sample sizes in individual studies that previously may have had little relationship to each other.
  • Can be used to generate new hypotheses or highlight research problems for future studies.

What these studies don't tell you

  • Small violations in defining the criteria used for content analysis can lead to findings that are difficult to interpret or meaningless.
  • A large sample size can yield reliable, but not necessarily valid, results.
  • A lack of uniformity regarding, for example, the type of literature reviewed, how methods are applied, and how findings are measured within the sample of studies you are analyzing, can make the process of synthesis difficult to perform.
  • Depending on the sample size, the process of reviewing and synthesizing multiple studies can be very time consuming.
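The core computation of a fixed-effect meta-analysis is an inverse-variance weighted average of the individual study effects, with Cochran's Q statistic (and the derived I²) quantifying the heterogeneity discussed above. A sketch with invented study results:

```python
import math

# Hypothetical studies: (effect estimate, standard error)
studies = [(0.30, 0.10), (0.45, 0.15), (0.25, 0.08), (0.50, 0.20)]

# More precise studies (smaller standard error) get larger weights
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))   # precision improves as studies accumulate

# Cochran's Q measures dispersion of study effects around the pooled estimate;
# I² expresses the share of that dispersion beyond chance, on a 0-100% scale
q = sum(w * (est - pooled) ** 2 for (est, _), w in zip(studies, weights))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
```

Note how the pooled standard error is smaller than any single study's, which is the sense in which meta-analysis overcomes small individual sample sizes; a large I² would signal the heterogeneity that makes a single synopsis hard to justify.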

Beck, Lewis W. "The Synoptic Method." The Journal of Philosophy 36 (1939): 337-345; Cooper, Harris, Larry V. Hedges, and Jeffrey C. Valentine, eds. The Handbook of Research Synthesis and Meta-Analysis. 2nd edition. New York: Russell Sage Foundation, 2009; Guzzo, Richard A., Susan E. Jackson and Raymond A. Katzell. “Meta-Analysis Analysis.” In Research in Organizational Behavior, Volume 9. (Greenwich, CT: JAI Press, 1987), pp. 407-442; Lipsey, Mark W. and David B. Wilson. Practical Meta-Analysis. Thousand Oaks, CA: Sage Publications, 2001; Study Design 101. Meta-Analysis. The Himmelfarb Health Sciences Library, George Washington University; Timulak, Ladislav. “Qualitative Meta-Analysis.” In The SAGE Handbook of Qualitative Data Analysis. Uwe Flick, editor. (Los Angeles, CA: Sage, 2013), pp. 481-495; Walker, Esteban, Adrian V. Hernandez, and Michael W. Kattan. "Meta-Analysis: Its Strengths and Limitations." Cleveland Clinic Journal of Medicine 75 (June 2008): 431-439.

Mixed-Method Design

Mixed methods research designs integrate quantitative and qualitative approaches to collecting and analyzing data within a single study.

  • Narrative and non-textual information can add meaning to numeric data, while numeric data can add precision to narrative and non-textual information.
  • Can utilize existing data while at the same time generating and testing a grounded theory approach to describe and explain the phenomenon under study.
  • A broader, more complex research problem can be investigated because the researcher is not constrained by using only one method.
  • The strengths of one method can be used to overcome the inherent weaknesses of another method.
  • Can provide stronger, more robust evidence to support a conclusion or set of recommendations.
  • May generate new insights or uncover hidden patterns and relationships that a single methodological approach might not reveal.
  • Produces more complete knowledge and understanding of the research problem that can be used to increase the generalizability of findings applied to theory or practice.

What these studies don't tell you

  • A researcher must be proficient in understanding how to apply multiple methods to investigating a research problem as well as be proficient in optimizing how to design a study that coherently melds them together.
  • Can increase the likelihood of conflicting results or ambiguous findings that inhibit drawing a valid conclusion or setting forth a recommended course of action [e.g., sample interview responses do not support existing statistical data].
  • Because the research design can be very complex, reporting the findings requires a well-organized narrative, clear writing style, and precise word choice.
  • Design invites collaboration among experts. However, merging different investigative approaches and writing styles requires more attention to the overall research process than studies conducted using only one methodological paradigm.
  • Concurrent merging of quantitative and qualitative research requires greater attention to having adequate sample sizes, using comparable samples, and applying a consistent unit of analysis. For sequential designs where one phase of qualitative research builds on the quantitative phase or vice versa, decisions about what results from the first phase to use in the next phase, the choice of samples and estimating reasonable sample sizes for both phases, and the interpretation of results from both phases can be difficult.
  • Due to multiple forms of data being collected and analyzed, this design requires extensive time and resources to carry out the multiple steps involved in data gathering and interpretation.

Burch, Patricia and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation. Thousand Oaks, CA: Sage, 2016; Creswell, John W. et al. Best Practices for Mixed Methods Research in the Health Sciences. Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health, 2010; Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 4th edition. Thousand Oaks, CA: Sage Publications, 2014; Domínguez, Silvia, editor. Mixed Methods Social Networks Research. Cambridge, UK: Cambridge University Press, 2014; Hesse-Biber, Sharlene Nagy. Mixed Methods Research: Merging Theory with Practice. New York: Guilford Press, 2010; Niglas, Katrin. “How the Novice Researcher Can Make Sense of Mixed Methods Designs.” International Journal of Multiple Research Approaches 3 (2009): 34-46; Onwuegbuzie, Anthony J. and Nancy L. Leech. “Linking Research Questions to Mixed Methods Data Analysis Procedures.” The Qualitative Report 11 (September 2006): 474-498; Tashakkori, Abbas and John W. Creswell. “The New Era of Mixed Methods.” Journal of Mixed Methods Research 1 (January 2007): 3-7; Zhang, Wanqing. “Mixed Methods Application in Health Intervention Research: A Multiple Case Study.” International Journal of Multiple Research Approaches 8 (2014): 24-35.

Observational Design

This type of research design draws conclusions by comparing subjects against a control group in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study can provide useful insight into a phenomenon while avoiding the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe [data is emergent rather than pre-existing].
  • The researcher is able to collect in-depth information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observation research designs account for the complexity of group behaviors.
  • Reliability of data can be low because observing behaviors over and over again is time consuming and difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • It is not possible to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is knowingly studied is altered to some degree by the presence of the researcher, potentially skewing any data collected.

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research. Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods. Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Payne, Geoff and Judy Payne. "Observation." In Key Concepts in Social Research. The SAGE Key Concepts series. (London, England: Sage, 2004), pp. 158-162; Rosenbaum, Paul R. Design of Observational Studies. New York: Springer, 2010; Williams, J. Patrick. "Nonparticipant Observation." In The Sage Encyclopedia of Qualitative Research Methods. Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 562-563.

Philosophical Design

Understood more as a broad approach to examining a research problem than as a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, upon what do knowledge and understanding depend, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refines concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Burton, Dawn. "Part I, Philosophy of the Social Sciences." In Research Training for Social Scientists . (London, England: Sage, 2000), pp. 1-5; Chapter 4, Research Methodology and Design. Unisa Institutional Repository (UnisaIR), University of South Africa; Jarvie, Ian C., and Jesús Zamora-Bonilla, editors. The SAGE Handbook of the Philosophy of Social Sciences . London: Sage, 2011; Labaree, Robert V. and Ross Scimeca. “The Philosophical Problem of Truth in Librarianship.” The Library Quarterly 78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide . Washington, DC: Falmer Press, 1994; McLaughlin, Hugh. "The Philosophy of Social Research." In Understanding Social Work Research . 2nd edition. (London: SAGE Publications Ltd., 2012), pp. 24-47; Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, CSLI, Stanford University, 2013.

Sequential Design

  • The researcher has virtually limitless options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be done during the initial parts of the study to correct and hone the research method.
  • This is a useful design for exploratory studies.
  • This technique requires relatively little effort on the part of the researcher. It is generally not expensive, time consuming, or workforce intensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed. This provides opportunities for continuous improvement of sampling and methods of analysis.
  • The sampling method is not representative of the entire population. The only possibility of approaching representativeness is when the researcher chooses a sample size large enough to represent a substantial portion of the entire population. In this case, moving on to study a second or more specific sample can be difficult.
  • The design cannot be used to create conclusions and interpretations that pertain to an entire population because the sampling technique is not randomized. Generalizability from findings is, therefore, limited.
  • Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.

Betensky, Rebecca. Harvard University, Course Lecture Note slides; Bovaird, James A. and Kevin A. Kupzyk. "Sequential Design." In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 1347-1352; Creswell, John W. et al. “Advanced Mixed-Methods Research Designs.” In Handbook of Mixed Methods in Social and Behavioral Research. Abbas Tashakkori and Charles Teddlie, eds. (Thousand Oaks, CA: Sage, 2003), pp. 209-240; Henry, Gary T. "Sequential Sampling." In The SAGE Encyclopedia of Social Science Research Methods. Michael S. Lewis-Beck, Alan Bryman and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 1027-1028; Ivankova, Nataliya V. “Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice.” Field Methods 18 (February 2006): 3-20; Sequential Analysis. Wikipedia.

Systematic Review

  • A systematic review synthesizes the findings of multiple studies related to each other by incorporating strategies of analysis and interpretation intended to reduce biases and random errors.
  • The application of critical exploration, evaluation, and synthesis methods separates insignificant, unsound, or redundant research from the most salient and relevant studies worthy of reflection.
  • They can be used to identify, justify, and refine hypotheses; recognize and avoid hidden problems in prior studies; and explain inconsistencies and conflicts in data.
  • Systematic reviews can be used to help policy makers formulate evidence-based guidelines and regulations.
  • The use of strict, explicit, and pre-determined methods of synthesis, when applied appropriately, provides reliable estimates about the effects of interventions, evaluations, and effects related to the overarching research problem investigated by each study under review.
  • Systematic reviews illuminate where knowledge or thorough understanding of a research problem is lacking and, therefore, can then be used to guide future research.
  • The accepted inclusion of unpublished studies [i.e., grey literature] provides the broadest possible basis for analyzing and interpreting research on a topic.
  • Results of the synthesis can be generalized and the findings extrapolated into the general population with more validity than most other types of studies.
  • Systematic reviews do not create new knowledge per se; they are a method for synthesizing existing studies about a research problem in order to gain new insights and determine gaps in the literature.
  • The way researchers have carried out their investigations [e.g., the period of time covered, number of participants, sources of data analyzed, etc.] can make it difficult to effectively synthesize studies.
  • The inclusion of unpublished studies can introduce bias into the review because they may not have undergone a rigorous peer-review process. Examples may include conference presentations or proceedings, publications from government agencies, white papers, working papers, internal documents from organizations, and doctoral dissertations and Master's theses.

Denyer, David and David Tranfield. "Producing a Systematic Review." In The Sage Handbook of Organizational Research Methods. David A. Buchanan and Alan Bryman, editors. (Thousand Oaks, CA: Sage Publications, 2009), pp. 671-689; Foster, Margaret J. and Sarah T. Jewell, editors. Assembling the Pieces of a Systematic Review: A Guide for Librarians. Lanham, MD: Rowman and Littlefield, 2017; Gough, David, Sandy Oliver, and James Thomas, editors. Introduction to Systematic Reviews. 2nd edition. Los Angeles, CA: Sage Publications, 2017; Gopalakrishnan, S. and P. Ganeshkumar. “Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare.” Journal of Family Medicine and Primary Care 2 (2013): 9-14; Gough, David, James Thomas, and Sandy Oliver. "Clarifying Differences between Review Designs and Methods." Systematic Reviews 1 (2012): 1-9; Khan, Khalid S., Regina Kunz, Jos Kleijnen, and Gerd Antes. “Five Steps to Conducting a Systematic Review.” Journal of the Royal Society of Medicine 96 (2003): 118-121; Mulrow, C. D. “Systematic Reviews: Rationale for Systematic Reviews.” BMJ 309:597 (September 1994); O'Dwyer, Linda C., and Q. Eileen Wafford. "Addressing Challenges with Systematic Review Teams through Effective Communication: A Case Report." Journal of the Medical Library Association 109 (October 2021): 643-647; Okoli, Chitu, and Kira Schabram. "A Guide to Conducting a Systematic Literature Review of Information Systems Research." Sprouts: Working Papers on Information Systems 10 (2010); Siddaway, Andy P., Alex M. Wood, and Larry V. Hedges. "How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-analyses, and Meta-syntheses." Annual Review of Psychology 70 (2019): 747-770; Torgerson, Carole J. “Publication Bias: The Achilles’ Heel of Systematic Reviews?” British Journal of Educational Studies 54 (March 2006): 89-102; Torgerson, Carole. Systematic Reviews. New York: Continuum, 2003.

  • Last Updated: May 18, 2024 11:38 AM
  • URL: https://libguides.usc.edu/writingguide

Research Design Review

A discussion of qualitative & quantitative research design

Qualitative Data Analysis: The Unit of Analysis


As discussed in two earlier articles in Research Design Review (see “The Important Role of ‘Buckets’ in Qualitative Data Analysis” and “Finding Connections & Making Sense of Qualitative Data” ), the selection of the unit of analysis is one of the first steps in the qualitative data analysis process. The “unit of analysis” refers to the portion of content that will be the basis for decisions made during the development of codes. For example, in textual content analyses, the unit of analysis may be at the level of a word, a sentence (Milne & Adler, 1999), a paragraph, an article or chapter, an entire edition or volume, a complete response to an interview question, entire diaries from research participants, or some other level of text. The unit of analysis may not be defined by the content per se but rather by a characteristic of the content originator (e.g., person’s age), or the unit of analysis might be at the individual level with, for example, each participant in an in-depth interview (IDI) study treated as a case. Whatever the unit of analysis, the researcher will make coding decisions based on various elements of the content, including length, complexity, manifest meanings, and latent meanings based on such nebulous variables as the person’s tone or manner.

Deciding on the unit of analysis is a very important decision because it guides the development of codes as well as the coding process. If a weak unit of analysis is chosen, one of two outcomes may result: 1) If the unit chosen is too precise (i.e., at a more micro level than what is actually needed), the researcher will set in motion …

  • Open access
  • Published: 16 May 2024

Integrating qualitative research within a clinical trials unit: developing strategies and understanding their implementation in contexts

  • Jeremy Segrott   ORCID: orcid.org/0000-0001-6215-0870 1 ,
  • Sue Channon 2 ,
  • Amy Lloyd 4 ,
  • Eleni Glarou 2 , 3 ,
  • Josie Henley 5 ,
  • Jacqueline Hughes 2 ,
  • Nina Jacob 2 ,
  • Sarah Milosevic 2 ,
  • Yvonne Moriarty 2 ,
  • Bethan Pell 6 ,
  • Mike Robling 2 ,
  • Heather Strange 2 ,
  • Julia Townson 2 ,
  • Qualitative Research Group &
  • Lucy Brookes-Howell 2  

Trials volume 25, Article number: 323 (2024)


Background/aims

The value of using qualitative methods within clinical trials is widely recognised. How qualitative research is integrated within trials units to achieve this is less clear. This paper describes the process through which qualitative research has been integrated within Cardiff University’s Centre for Trials Research (CTR) in Wales, UK. We highlight facilitators of, and challenges to, integration.

Methods

We held group discussions on the work of the Qualitative Research Group (QRG) within CTR. The content of these discussions, materials for a presentation in CTR, and documents relating to the development of the QRG were interpreted at a workshop attended by group members. Normalisation Process Theory (NPT) was used to structure analysis. A writing group prepared a document for input from members of CTR, forming the basis of this paper.

Results

Actions to integrate qualitative research comprised: its inclusion in Centre strategies; formation of a QRG with dedicated funding/roles; embedding of qualitative research within operating systems; capacity building/training; monitoring opportunities to include qualitative methods in studies; maximising the quality of qualitative research and developing methodological innovation. Facilitators of these actions included: the influence of the broader methodological landscape within trial/study design and its promotion of the value of qualitative research; and close physical proximity of CTR qualitative staff/students allowing sharing of methodological approaches. Introduction of innovative qualitative methods generated interest among other staff groups. Challenges included: pressure to under-resource qualitative components of research, preference for a statistical stance historically in some research areas and funding structures, and difficulties faced by qualitative researchers carving out individual academic profiles when working across trials/studies.

Conclusions

Given that CTUs are pivotal to the design and conduct of RCTs and related study types across multiple disciplines, integrating qualitative research into trials units is crucial if its contribution is to be fully realised. We have made explicit one trials unit’s experience of embedding qualitative research and present this to open dialogue on ways to operationalise and optimise qualitative research in trials. NPT provides a valuable framework with which to theorise these processes, including the importance of sense-making and legitimisation when introducing new practices within organisations.


The value of using qualitative methods within randomised controlled trials (RCTs) is widely recognised [ 1 , 2 , 3 ]. Qualitative research generates important evidence on factors affecting trial recruitment/retention [ 4 ] and implementation, aiding interpretation of quantitative data [ 5 ]. Though RCTs have traditionally been viewed as sitting within a positivist paradigm, recent methodological innovations have developed new trial designs that draw explicitly on both quantitative and qualitative methods. For instance, in the field of complex public health interventions, realist RCTs seek to understand the mechanisms through which interventions generate hypothesised impacts, and how interactions across different implementation contexts form part of these mechanisms. Proponents of realist RCTs—which integrate experimental and realist paradigms—highlight the importance of using quantitative and qualitative methods to fully realise these aims and to generate an understanding of intervention mechanisms and how context shapes them [ 6 ].

A need for guidance on how to conduct good quality qualitative research is being addressed, particularly in relation to feasibility studies for RCTs [ 7 ] and process evaluations embedded within trials of complex interventions [ 5 ]. There is also guidance on the conduct of qualitative research within trials at different points in the research cycle, including development, conduct and reporting [ 8 , 9 ].

A high proportion of trials are based within or involve clinical trials units (CTUs). In the UK, the UKCRC Registered CTU Network describes them as:

… specialist units which have been set up with a specific remit to design, conduct, analyse and publish clinical trials and other well-designed studies. They have the capability to provide specialist expert statistical, epidemiological, and other methodological advice and coordination to undertake successful clinical trials. In addition, most CTUs will have expertise in the coordination of trials involving investigational medicinal products which must be conducted in compliance with the UK Regulations governing the conduct of clinical trials resulting from the EU Directive for Clinical Trials.

Thus, CTUs provide the specialist methodological expertise needed for the conduct of trials, and in the case of trials of investigational medicinal products, their involvement may be mandated to ensure compliance with relevant regulations. As the definition above suggests, CTUs also conduct and support other types of study apart from RCTs, providing a range of methodological and subject-based expertise.

However, despite their central role in the conduct and design of trials (and other evaluation designs), little has been written about how CTUs have integrated qualitative work within their organisation at a time when such methods are, as stated above, now recognised as an important aspect of RCTs and evaluation studies more generally. This is a significant gap, since integration at the organisational level arguably shapes how qualitative research is integrated within individual studies, and thus it is valuable to understand how CTUs have approached the task. There are different ways of involving qualitative work in trials units, such as partnering with other departments (e.g. social science) or employing qualitative researchers directly. Qualitative research can be imagined and configured in different ways—as a method that generates data to inform future trial and intervention design, as an embedded component within an RCT or other evaluation type, or as a parallel strand of research focusing on lived experiences of illness, for instance. Understanding how trials units have integrated qualitative research is valuable, as it can shed light on which strategies show promise, and in which contexts, and how qualitative research is positioned within the field of trials research, foregrounding the value of qualitative research. However, although much has been written about its use within trials, few accounts exist of how trials units have integrated qualitative research within their systems and structures.

This paper discusses the process of embedding qualitative research within the work of one CTU—Cardiff University’s Centre for Trials Research (CTR). It highlights facilitators of this process and identifies challenges to integration. We use the Normalisation Process Theory (NPT) as a framework to structure our experience and approach. The key gap addressed by this paper is the implementation of strategies to integrate qualitative research (a relatively newly adopted set of practices and processes) within CTU systems and structures. We acknowledge from the outset that there are multiple ways of approaching this task. What follows therefore is not a set of recommendations for a preferred or best way to integrate qualitative research, as this will comprise diverse actions according to specific contexts. Rather, we examine the processes through which integration occurred in our own setting and highlight the potential value of these insights for others engaged in the work of promoting qualitative research within trials units.

Background to the integration of qualitative research within CTR

The CTR was formed in 2015 [ 10 ]. It brought together three existing trials units at Cardiff University: the South East Wales Trials Unit, the Wales Cancer Trials Unit, and the Haematology Clinical Trials Unit. From its inception, the CTR had a stated aim of developing a programme of qualitative research and integrating it within trials and other studies. In the sections below, we map these approaches onto the framework offered by Normalisation Process Theory to understand the processes through which they helped achieve embedding and integration of qualitative research.

CTR’s aims (including those relating to the development of qualitative research) were included within its strategy documents and communicated to others through infrastructure funding applications, annual reports and its website. A Qualitative Research Group (QRG), which had previously existed within the South East Wales Trials Unit, with dedicated funding for methodological specialists and group lead academics, was a key mechanism through which the development of a qualitative portfolio was put into action. Integration of qualitative research within Centre systems and processes occurred through the inclusion of qualitative research in study adoption processes and representation on committees. The CTR’s study portfolio provided a basis to track qualitative methods in new and existing studies, identify opportunities to embed qualitative methods within recently adopted studies (at the funding application stage) and to manage staff resources. Capacity building and training were an important focus of the QRG’s work, including training courses, mentoring, creation of an academic network open to university staff and practitioners working in the field of healthcare, presentations at CTR staff meetings and securing of PhD studentships. Standard operating procedures and methodological guidance on the design and conduct of qualitative research (e.g. templates for developing analysis plans) aimed to create a shared understanding of how to undertake high-quality research, and a means to monitor the implementation of rigorous approaches. As the QRG expanded its expertise it sought to develop innovative approaches, including the use of visual [ 11 ] and ethnographic methods [ 12 ].

Understanding implementation—Normalisation Process Theory (NPT)

Normalisation Process Theory (NPT) provides a model with which to understand the implementation of new sets of practices and their normalisation within organisational settings. The term ‘normalisation’ refers to how new practices become routinised (part of the everyday work of an organisation) through embedding and integration [ 13 , 14 ]. NPT defines implementation as ‘the social organisation of work’ and is concerned with the social processes that take place as new practices are introduced. Embedding involves ‘making practices routine elements of everyday life’ within an organisation. Integration takes the form of ‘sustaining embedded practices in social contexts’, and how these processes lead to the practices becoming (or not becoming) ‘normal and routine’ [ 14 ]. NPT is concerned with the factors which promote or ‘inhibit’ attempts to embed and integrate the operationalisation of new practices [ 13 , 14 , 15 ].

Embedding new practices is therefore achieved through implementation—which takes the form of interactions in specific contexts. Implementation is operationalised through four ‘generative mechanisms’— coherence , cognitive participation , collective action and reflexive monitoring [ 14 ]. Each mechanism is characterised by components comprising immediate and organisational work, with actions of individuals and organisations (or groups of individuals) interdependent. The mechanisms operate partly through forms of investment (i.e. meaning, commitment, effort, and comprehension) [ 14 ].

Coherence refers to how individuals/groups make sense of, and give meaning to, new practices. Sense-making concerns the coherence of a practice—whether it ‘holds together’, and its differentiation from existing activities [ 15 ]. Communal and individual specification involve understanding new practices and their potential benefits for oneself or an organisation. Individuals consider what new practices mean for them in terms of tasks and responsibilities ( internalisation ) [ 14 ].

NPT frames the second mechanism, cognitive participation , as the building of a ‘community of practice’. For a new practice to be initiated, individuals and groups within an organisation must commit to it [ 14 , 15 ]. Cognitive participation occurs through enrolment —how people relate to the new practice; legitimation —the belief that it is right for them to be involved; and activation —defining which actions are necessary to sustain the practice and their involvement [ 14 ]. Making the new practices work may require changes to roles (new responsibilities, altered procedures) and reconfiguring how colleagues work together (changed relationships).

Third, Collective Action refers to ‘the operational work that people do to enact a set of practices’ [ 14 ]. Individuals engage with the new practices ( interactional workability ) reshaping how members of an organisation interact with each other, through creation of new roles and expectations ( relational interaction ) [ 15 ]. Skill set workability concerns how the work of implementing a new set of practices is distributed and the necessary roles and skillsets defined [ 14 ]. Contextual integration draws attention to the incorporation of a practice within social contexts, and the potential for aspects of these contexts, such as systems and procedures, to be modified as a result [ 15 ].

Reflexive monitoring is the final implementation mechanism. Collective and individual appraisal evaluate the value of a set of practices, which depends on the collection of information—formally and informally ( systematisation ). Appraisal may lead to reconfiguration in which procedures of the practice are redefined or reshaped [ 14 , 15 ].

We sought to map the following: (1) the strategies used to embed qualitative research within the Centre, (2) key facilitators, and (3) barriers to their implementation. Through focused group discussions during the monthly meetings of the CTR QRG and in discussion with the CTR senior management team throughout 2019–2020, we identified nine types of documents (22 individual documents in total) produced within the CTR which had relevant information about the integration of qualitative research within its work (Table  1 ). The QRG had an ‘open door’ policy to membership and welcomed all staff/students with an interest in qualitative research. It included researchers who were employed specifically to undertake qualitative research and other staff with a range of study roles, including trial managers, statisticians, and data managers. There was also diversity in terms of career stage, including PhD students, mid-career researchers and members of the Centre’s Executive team. Membership was therefore largely self-selected, and comprised individuals with a role related to, or an interest in, embedding qualitative research within trials. However, the group brought together diverse methodological perspectives and was not composed solely of methodological ‘champions’ whose job it was to promote the development of qualitative research within the centre. Thus, whilst the group (and by extension, the authors of this paper) had a shared appreciation of the value of qualitative research within a trials centre, they also brought varied methodological perspectives and ways of engaging with it.

All members of the QRG ( n  = 26) were invited to take part in a face-to-face, day-long workshop in February 2019 on ‘How to optimise and operationalise qualitative research in trials: reflections on CTR structure’. The workshop was attended by 12 members of staff and PhD students, including members of the QRG and the CTR’s senior management team. Recruitment to the workshop was therefore inclusive, and to some extent opportunistic, but all members of the QRG were able to contribute to discussions during regular monthly group meetings and the drafting of the current paper.

The aim of the workshop was to bring together information from the documents in Table  1 to generate discussion around the key strategies (and their component activities) that had been adopted to integrate qualitative research into CTR, as well as barriers to, and facilitators of, their implementation. The agenda for the workshop involved four key areas: development and history of the CTR model; mapping the current model within CTR; discussing the structure of other CTUs; and exploring the advantages and disadvantages of the CTR model.

During the workshop, we discussed the use of NPT to conceptualise how qualitative research had been embedded within CTR’s systems and practices. The group produced spider diagrams to map strategies and actions on to the four key domains (or ‘generative mechanisms’ of NPT) summarised above, to aid the understanding of how they had functioned, and the utility of NPT as a framework. This is summarised in Table  2 .

Detailed notes were made during the workshop. A core writing group then used these notes and the documents in Table  1 to develop a draft of the current paper. This was circulated to all members of the CTR QRG ( n  = 26) and stored within a central repository accessible to them to allow involvement and incorporate the views of those who were not able to attend the workshop. This draft was again presented for comments in the monthly CTR QRG meeting in February 2021, attended by 10 members. The Standards for QUality Improvement Reporting Excellence 2.0 (SQUIRE) guidelines were used to inform the structure and content of the paper (see supplementary material) [ 16 ].

In the following sections, we describe the strategies CTR adopted to integrate qualitative research. These are mapped against NPT’s four generative mechanisms to explore the processes through which the strategies promoted integration, and facilitators of and barriers to their implementation. A summary of the strategies and their functioning in terms of the generative mechanisms is provided in Table  2 .

Coherence—making sense of qualitative research

In CTR, many of the actions taken to build a portfolio of qualitative research were aimed at enabling colleagues, and external actors, to make sense of this set of methodologies. Centre-level strategies and grant applications for infrastructure funding highlighted the value of qualitative research, the added benefits it would bring, and positioned it as a legitimate set of practices alongside existing methods. For example, a 2014 application for renewal of trials unit infrastructure funding stated:

We are currently in the process of undertaking […] restructuring for our qualitative research team and are planning similar for trial management next year. The aim of this restructuring is to establish greater hierarchical management and opportunities for staff development and also provide a structure that can accommodate continuing growth.

Within the CTR, various forms of communication on the development of qualitative research were designed to enable staff and students to make sense of it, and to think through its potential value for them, and ways in which they might engage with it. These included presentations at staff meetings, informal meetings between project teams and the qualitative group lead, and the visibility of qualitative research on the public-facing Centre website and within Centre committees and systems. For instance, qualitative methods were included (and framed as a distinct set of practices) within study adoption forms and committee agendas. Information for colleagues described how qualitative methods could be incorporated within funding applications for RCTs and other evaluation studies to generate new insights into questions research teams were already keen to answer, such as influences on intervention implementation fidelity. Where externally based chief investigators approached the Centre to be involved in new grant applications, the existence of the qualitative team and group lead enabled the inclusion of qualitative research to be actively promoted at an early stage, and such opportunities were highlighted in the Centre’s brochure for new collaborators. Monthly qualitative research network meetings, advertised across CTR and to external research collaborators, were also designed to create a shared understanding of qualitative research methods and their utility within trials and other study types (e.g. intervention development, feasibility studies, and observational studies). Training events (discussed in more detail below) also aided sense-making.

Several factors facilitated the promotion of qualitative research as a distinctive and valuable entity. Among these was the influence of the broader methodological landscape within trial design, which was promoting the value of qualitative research, such as guidance on the evaluation of complex interventions by the Medical Research Council [ 17 ], and the growing emphasis placed on process evaluations within trials (with qualitative methods important in understanding participant experience and influences on implementation) [ 5 ]. The attention given to lived experience (both through process evaluations and the move to embed public involvement in trials) helped to frame qualitative research within the Centre as something that was appropriate, legitimate, and of value. Recognition by research funders of the value of qualitative research within studies was also helpful in normalising and legitimising its adoption within grant applications.

The inclusion of qualitative methods within influential methodological guidance helped CTR researchers to develop a ‘shared language’ around these methods, and provided a way to generate a common understanding of the role of qualitative research. One barrier to such sense-making work was the varying extent to which staff and teams had existing knowledge or experience of qualitative research. This varied across methodological and subject groups within the Centre and reflected the history of the individual trials units which had merged to form the Centre.

Cognitive participation—legitimising qualitative research

Senior CTR leaders promoted the value and legitimacy of qualitative research. Its inclusion in centre strategies, infrastructure funding applications, and public-facing materials (e.g. website, investigator brochures) signalled that it was appropriate for individuals to conduct qualitative research within their roles, or to support others in doing so. Legitimisation also took place through informal channels, such as senior leadership support for qualitative research methods in staff meetings and participation in QRG seminars. Continued development of the QRG (with dedicated infrastructure funding) provided a visible identity and equivalence with other methodological groups (e.g. trial managers, statisticians).

Staff were asked to engage with qualitative research in two main ways. First, there was an expansion in the number of staff for whom qualitative research formed part of their formal role and responsibilities. One of the three trials units that merged to form CTR brought with it a qualitative team comprising methodological specialists and a group lead. CTR continued the expansion of this group with the creation of new roles and an enlarged nucleus of researchers for whom qualitative research was the sole focus of their work. In part, this was linked to the successful award of projects that included a large qualitative component, and that were coordinated by CTR (see Table  3 which describes the PUMA study).

Members of the QRG were encouraged to develop their own research ideas and to gain experience as principal investigators, and group seminars were used to explore new ideas and provide peer support. This was communicated through line management, appraisal, and informal peer interaction. Boundaries were not strictly demarcated (i.e. staff located outside the qualitative team were already using qualitative methods), but the new team became a central focus for developing a growing programme of work.

Second, individuals and studies were called upon to engage in new ways with qualitative research, and with the qualitative team. A key goal for the Centre was that groups developing new research ideas should give more consideration to the potential value and inclusion of qualitative research within their funding applications. Specifically, they were asked to do this by thinking about qualitative research at an early point in their application’s development (rather than ‘bolting it on’ after other elements had been designed) and to draw upon the expertise and input of the qualitative team. An example was the inclusion of questions on qualitative methods within the Centre’s study adoption form and representation from the qualitative team at the committee which reviewed new adoption requests. Where adoption requests indicated the inclusion of qualitative methods, colleagues were encouraged to liaise with the qualitative team, facilitating the integration of its expertise from an early stage. Qualitative seminars offered an informal and supportive space in which researchers could share initial ideas and refine their methodological approach. The benefits of this included the provision of sufficient time for methodological specialists to be involved in the design of the proposed qualitative component and ensuring adequate costings had been drawn up. At study adoption group meetings, scrutiny of new proposals included consideration of whether new research proposals might be strengthened through the use of qualitative methods where these had not initially been included. Meetings of the QRG—which reviewed the Centre’s portfolio of new studies and gathered intelligence on new ideas—also helped to identify, early on, opportunities to integrate qualitative methods. Communication across teams was useful in identifying new research ideas and embedding qualitative researchers within emerging study development groups.

Actions to promote greater use of qualitative methods in funding applications fed through into a growing number of studies with a qualitative component. This helped to increase the visibility and legitimacy of qualitative methods within the Centre. For example, the PUMA study [ 12 ], which brought together a large multidisciplinary team to develop and evaluate a Paediatric early warning system, drew heavily on qualitative methods, with the qualitative research located within the QRG. The project introduced an extensive network of collaborators and clinical colleagues to qualitative methods and how they could be used during intervention development and the generation of case studies. Further information about the PUMA study is provided in Table  3 .

Increasing the legitimacy of qualitative work across an extensive network of staff, students and collaborators was a complex process. Set within the continuing dominance of quantitative methods within clinical trials, there were variations in the extent to which clinicians and other collaborators embraced the value of qualitative methods. Research funding schemes, which often continued to emphasise the quantitative element of randomised controlled trials, inevitably fed through into the focus of new research proposals. Staff and external collaborators were sometimes uncertain about the added value that qualitative methods would bring to their trials. Across the CTR there were variations in the speed at which qualitative research methods gained legitimacy, partly based on disciplinary traditions and their influences. For instance, population health trials, often located within non-health settings such as schools or community settings, frequently involved collaboration with social scientists who brought with them experience in qualitative methods. Methodological guidance in this field, such as MRC guidance on process evaluations, highlighted the value of qualitative methods and alternatives to the positivist paradigm, such as the value of realist RCTs. In other, more clinical areas, positivist paradigms had greater dominance. Established practices and methodological traditions across different funders also influenced the ease of obtaining funding to include qualitative research within studies. For drug trials (CTIMPs), the influence of regulatory frameworks on study design, data collection and the allocation of staff resources may have played a role. Over time, teams gained repeated experience of embedding qualitative research (and researchers) within their work and took this learning with them to subsequent studies.
For example, the senior clinician quoted within the PUMA case study (Table  3 below) described how they had gained an appreciation of the rigour of qualitative research and an understanding of its language. Through these repeated interactions, embedding of qualitative research within studies started to become the norm rather than the exception.

Collective action—operationalising qualitative research

Collective action concerns the operationalisation of new practices within organisations—the allocation and management of the work, how individuals interact with each other, and the work itself. In CTR, the formation of a Qualitative Research Group helped to allocate and organise the work of building a portfolio of studies. Researchers across the Centre were called upon to interact with qualitative research in new ways. Presentations at staff meetings and the inclusion of qualitative research methods in portfolio study adoption forms were examples of this ( interactional workability ). It was operationalised by encouraging study teams to liaise with the qualitative research lead. Development of standard operating procedures, templates for costing qualitative research and methodological guidance (e.g. on analysis plans) also helped encourage researchers to interact with these methods in new ways. For some qualitative researchers who had been trained in the social sciences, working within a trials unit meant that they needed to interact in new and sometimes unfamiliar ways with standard operating procedures, risk assessments, and other trial-based systems. Thus, training needs and capacity-building efforts were multidirectional.

Whereas there had been a tendency for qualitative research to be ‘bolted on’ to proposals for RCTs, the systems described above were designed to embed thinking about the value and design of the qualitative component from the outset. They were also intended to integrate members of the qualitative team with trial teams from an early stage to promote effective integration of qualitative methods within larger trials and build relationships over time.

Standard Operating Procedures (SOPs), formal and informal training, and interaction between the qualitative team and other researchers increased the relational integration of qualitative methods within the Centre—the confidence individuals felt in including these methods within their studies, and their accountability for doing so. For instance, study adoption forms prompted researchers to interact routinely with the qualitative team at an early stage, whilst guidance on costing grants provided clear expectations about the resources needed to deliver a proposed set of qualitative data collection.

Formation of the Qualitative Research Group, comprising methodological specialists, created new roles and skillsets ( skill set workability ). Research teams were encouraged to draw on these when writing funding applications for projects that included a qualitative component. Capacity-building initiatives were used to increase the number of researchers with the skills needed to undertake qualitative research, and for these individuals to develop their expertise over time. This was achieved through formal training courses, academic seminars, mentoring from experienced colleagues, and informal knowledge exchange. Links with external collaborators and centres engaged in building qualitative research supported these efforts. Within the Centre, the co-location of qualitative researchers with other methodological and trial teams facilitated knowledge exchange and building of collaborative relationships, whilst grouping of the qualitative team within a dedicated office space supported a collective identity and opportunities for informal peer support.

Some aspects of the context in which qualitative research was being developed created challenges to operationalisation. Dependence on project grants to fund qualitative methodologists meant that there was a continuing need to write further grant applications whilst limiting the amount of time available to do so. Similarly, researchers within the team whose role was funded largely by specific research projects could sometimes find it hard to create sufficient time to develop their personal methodological interests. However, the cultivation of a methodologically varied portfolio of work enabled members of the team to build significant expertise in different approaches (e.g. ethnography, discourse analysis) that connected individual studies.

Reflexive monitoring—evaluating the impact of qualitative research

Inclusion of questions/fields relating to qualitative research within the Centre’s study portfolio database was a key way in which information was collected ( systematisation ). It captured numbers of funding applications and funded studies, research design, and income generation. Alongside this database, a qualitative resource planner spreadsheet was used to link individual members of the qualitative team with projects and facilitate resource planning, further reinforcing the core responsibilities and roles of qualitative researchers within CTR. As with all staff in the Centre, members of the qualitative team were placed on ongoing rather than fixed-term contracts, reflecting their core role within CTR. Planning and strategy meetings used the database and resource planner to assess the integration of qualitative research within Centre research, identify opportunities for increasing involvement, and manage staff recruitment and sustainability of researcher posts. Academic meetings and day-to-day interaction provided informal appraisal of the group’s development and its position within the Centre. Individual appraisal was also important, with members of the qualitative team given opportunities to shape their role, reflect on progress, identify training needs, and further develop their skillset, particularly through line management systems.

These forms of systematisation and appraisal were used to reconfigure the development of qualitative research and its integration within the Centre. For example, group strategies considered how to achieve long-term integration of qualitative research from its initial embedding through further promoting the belief that it formed a core part of the Centre’s business. The visibility and legitimacy of qualitative research were promoted through initiatives such as greater prominence on the Centre’s website. Ongoing review of the qualitative portfolio and discussion at academic meetings enabled the identification of areas where increased capacity would be helpful, both for qualitative staff, and more broadly within the Centre. This prompted the qualitative group to develop an introductory course in qualitative methods, open to all Centre staff and PhD students, aimed at increasing understanding and awareness. As the qualitative team built its expertise and experience it also sought to develop new and innovative approaches to conducting qualitative research. This included the use of visual and diary-based methods [ 11 ] and the adoption of ethnography to evaluate system-level clinical interventions [ 12 ]. Restrictions on conventional face-to-face qualitative data collection due to the COVID-19 pandemic prompted rapid adoption of virtual/online methods for interviews, observation, and use of new internet platforms such as Padlet—a form of digital note board.

In this paper, we have described the work undertaken by one CTU to integrate qualitative research within its studies and organisational culture. The parallel efforts of many trials units to achieve these goals arguably come at an opportune time. The traditional designs of RCTs have been challenged and re-imagined by the increasing influence of realist evaluation [ 6 , 18 ] and the widespread acceptance that trials need to understand implementation and intervention theory as well as assess outcomes [ 17 ]. Hence the widespread adoption of embedded mixed methods process evaluations within RCTs. These broad shifts in methodological orthodoxies, the production of high-profile methodological guidance, and the expectations of research funders all create fertile ground for the continued expansion of qualitative methods within trials units. However, whilst much has been written about the importance of developing qualitative research and the possible approaches to integrating qualitative and quantitative methods within studies, much less has been published on how to operationalise this within trials units. Filling this lacuna is important. Our paper highlights how the integration of a new set of practices within an organisation can become embedded as part of its ‘normal’ everyday work whilst also shaping the practices being integrated. In the case of CTR, it could be argued that the integration of qualitative research helped shape how this work was done (e.g. systems to assess progress and innovation).

In our trials unit, the presence of a dedicated research group of methodological specialists was a key action that helped realise the development of a portfolio of qualitative research and was perhaps the most visible evidence of a commitment to do so. However, our experience demonstrates that to fully realise the goal of developing qualitative research, much work focuses on the interaction between this ‘new’ set of methods and the organisation into which it is introduced. Whilst the team of methodological specialists was tasked with, and ‘able’ to do, the work, the ‘work’ itself needed to be integrated and embedded within the existing system. Thus, alongside the creation of a team and methodological capacity, promoting the legitimacy of qualitative research was important to communicate to others that it was both a distinctive and different entity, yet similar and equivalent to more established groups and practices (e.g. trial management, statistics, data management). The framing of qualitative research within strategies, the messages given out by senior leaders (formally and informally) and the general visibility of qualitative research within the system all helped to achieve this.

Normalisation Process Theory draws our attention to the concepts of embedding (making a new practice routine, normal within an organisation) and integration —the long-term sustaining of these processes. An important process through which embedding took place in our centre concerned the creation of messages and systems that called upon individuals and research teams to interact with qualitative research. Research teams were encouraged to think about qualitative research and consider its potential value for their studies. Critically, they were asked to do so at specific points, and in particular ways. Early consideration of qualitative methods to maximise and optimise their inclusion within studies was emphasised, with timely input from the qualitative team. Study adoption systems, centre-level processes for managing financial and human resources, creation of a qualitative resource planner, and awareness raising among staff, helped to reinforce this. These processes of embedding and integration were complex and they varied in intensity and speed across different areas of the Centre’s work. In part this depended on existing research traditions, the extent of prior experience of working with qualitative researchers and methods, and the priorities of subject areas and funders. Centre-wide systems, sometimes linked to CTR’s operation as a CTU, also helped to legitimise and embed qualitative research, lending it equivalence with other research activity. For example, like all CTUs, CTR was required to conform with the principles of Good Clinical Practice, necessitating the creation of a quality management system, operationalised through standard operating procedures for all areas of its work. Qualitative research was included, and became embedded, within these systems, with SOPs produced to guide activities such as qualitative analysis.

NPT provides a helpful way of understanding how trials units might integrate qualitative research within their work. It highlights how new practices interact with existing organisational systems and the work needed to promote effective interaction. That is, alongside the creation of a team or programme of qualitative research, much of the work concerns how members of an organisation understand it, engage with it, and create systems to sustain it. Embedding a new set of practices may be just as important as the quality or characteristics of the practices themselves. High-quality qualitative research is of little value, for instance, if it is not recognised and drawn upon within new studies. NPT also offers a helpful lens with which to understand how integration and embedding occur, and the mechanisms through which they operate. For example, promoting the legitimacy of a new set of practices, or creating systems that embed it, can help sustain these practices by creating an organisational ambition and encouraging (or requiring) individuals to interact with them in certain ways, redefining their roles accordingly. NPT highlights the ways in which integration of new practices involves bi-directional exchanges with the organisation’s existing practices, with each having the potential to re-shape the other as interaction takes place. For instance, in CTR, qualitative researchers needed to integrate and apply their methods within the quality management and other systems of a CTU, such as the formalisation of key processes within standard operating procedures, something less likely to occur outside trials units. Equally, project teams (including those led by externally based chief investigators) increased the integration of qualitative methods within their overall study design, providing opportunities for new insights on intervention theory, implementation and the experiences of practitioners and participants.

We note two aspects of the normalisation processes within CTR that are slightly less well conceptualised by NPT. The first concerns the emphasis within coherence on identifying the distinctiveness of new practices, and how they differ from existing activities. Whilst differentiation was an important aspect of the integration of qualitative research in CTR, such integration could be seen as operating partly through processes of de-differentiation, or at least equivalence. That is, part of the integration of qualitative research was to see it as similar in terms of rigour, coherence, and importance to other forms of research within the Centre. To be viewed as similar, or at least comparable to existing practices, was to be legitimised.

Second, whilst NPT focuses mainly on the interaction between a new set of practices and the organisational context into which it is introduced, our own experience of introducing qualitative research into a trials unit was shaped by broader organisational and methodological contexts. For example, the increasing emphasis placed upon understanding implementation processes and the experiences of research participants in the field of clinical trials (e.g. by funders), created an environment conducive to the development of qualitative research methods within our Centre. Attempts to integrate qualitative research within studies were also cross-organisational, given that many of the studies managed within the CTR drew together multi-institutional teams. This provided important opportunities to integrate qualitative research within a portfolio of studies that extended beyond CTR and build a network of collaborators who increasingly included qualitative methods within their funding proposals. The work of growing and integrating qualitative research within a trials unit is ongoing: ever-shifting macro-level influences can help or hinder, and the organisations within which we work are never static in terms of barriers and facilitators.

The importance of utilising qualitative methods within RCTs is now widely recognised. Increased emphasis on the evaluation of complex interventions, the influence of realist methods directing greater attention to complexity and the widespread adoption of mixed methods process evaluations are key drivers of this shift. The inclusion of qualitative methods within individual trials is important and previous research has explored approaches to their incorporation and some of the challenges encountered. Our paper highlights that the integration of qualitative methods at the organisational level of the CTU can shape how they are taken up by individual trials. Within CTR, it can be argued that qualitative research achieved high levels of integration, as conceptualised by Normalisation Process Theory. Thus, qualitative research became recognised as a coherent and valuable set of practices, secured legitimisation as an appropriate focus of individual and organisational activity and benefitted from forms of collective action which operationalised these organisational processes. Crucially, the routinisation of qualitative research appeared to be sustained, something which NPT suggests helps define integration (as opposed to initial embedding). However, our analysis suggested that the degree of integration varied by trial area. This variation reflected a complex mix of factors including disciplinary traditions, methodological guidance, existing (un)familiarity with qualitative research, and the influence of regulatory frameworks for certain clinical trials.

NPT provides a valuable framework with which to understand how these processes of embedding and integration occur. Our use of NPT draws attention to the importance of sense-making and legitimisation as important steps in introducing a new set of practices within the work of an organisation. Integration also depends, across each mechanism of NPT, on the building of effective relationships, which allow individuals and teams to work together in new ways. By reflecting on our experiences and the decisions taken within CTR we have made explicit one such process for embedding qualitative research within a trials unit, whilst acknowledging that approaches may differ across trials units. Mindful of this fact, and the focus of the current paper on one trials unit’s experience, we do not propose a set of recommendations for others who are working to achieve similar goals. Rather, we offer three overarching reflections (framed by NPT) which may act as a useful starting point for trials units (and other infrastructures) seeking to promote the adoption of qualitative research.

First, whilst research organisations such as trials units are highly heterogeneous, processes of embedding and integration, which we have foregrounded in this paper, are likely to be important across different contexts in sustaining the use of qualitative research. Second, developing a plan for the integration of qualitative research will benefit from mapping out the characteristics of the extant system. For example, it is valuable to know how familiar staff are with qualitative research and any variations across teams within an organisation. Third, NPT frames integration as a process of implementation which operates through key generative mechanisms: coherence, cognitive participation, collective action and reflexive monitoring. These mechanisms can help guide understanding of which actions help achieve embedding and integration. Importantly, they span multiple aspects of how organisations, and the individuals within them, work. The ways in which people make sense of a new set of practices (coherence), their commitment towards it (cognitive participation), how it is operationalised (collective action) and the evaluation of its introduction (reflexive monitoring) are all important. Thus, for example, qualitative research, even when well organised and operationalised within an organisation, is unlikely to be sustained if appreciation of its value is limited, or people are not committed to it.

We present our experience of engaging with the processes described above to open dialogue with other trials units on ways to operationalise and optimise qualitative research in trials. Understanding how best to integrate qualitative research within these settings may help to fully realise the significant contribution which it makes to the design and conduct of trials.

Availability of data and materials

Some documents cited in this paper are either freely available from the Centre for Trials Research website or can be requested from the corresponding author.


Acknowledgements

Members of the Centre for Trials Research (CTR) Qualitative Research Group were collaborating authors: C Drew (Senior Research Fellow—Senior Trial Manager, Brain Health and Mental Wellbeing Division), D Gillespie (Director, Infection, Inflammation and Immunity Trials, Principal Research Fellow), R Hale (now Research Associate, School of Social Sciences, Cardiff University), J Latchem-Hastings (now Lecturer and Postdoctoral Fellow, School of Healthcare Sciences, Cardiff University), R Milton (Research Associate—Trial Manager), B Pell (now PhD student, DECIPHer Centre, Cardiff University), H Prout (Research Associate—Qualitative), V Shepherd (Senior Research Fellow), K Smallman (Research Associate), H Stanton (Research Associate—Senior Data Manager). Thanks are due to Kerry Hood and Aimee Grant for their involvement in developing processes and systems for qualitative research within CTR.

No specific grant was received to support the writing of this paper.

Author information

Authors and affiliations

Centre for Trials Research, DECIPHer Centre, Cardiff University, Neuadd Meirionnydd, Heath Park, Cardiff, CF14 4YS, UK

Jeremy Segrott

Centre for Trials Research, Cardiff University, Neuadd Meirionnydd, Heath Park, Cardiff, CF14 4YS, UK

Sue Channon, Eleni Glarou, Jacqueline Hughes, Nina Jacob, Sarah Milosevic, Yvonne Moriarty, Mike Robling, Heather Strange, Julia Townson & Lucy Brookes-Howell

Division of Population Medicine, School of Medicine, Cardiff University, Neuadd Meirionnydd, Heath Park, Cardiff, CF14 4YS, UK

Eleni Glarou

Wales Centre for Public Policy, Cardiff University, Sbarc I Spark, Maindy Road, Cardiff, CF24 4HQ, UK

School of Social Sciences, Cardiff University, King Edward VII Avenue, Cardiff, CF10 3WA, UK

Josie Henley

DECIPHer Centre, School of Social Sciences, Cardiff University, Sbarc I Spark, Maindy Road, Cardiff, CF24 4HQ, UK

Bethan Pell


Qualitative Research Group

D. Gillespie, J. Latchem-Hastings, R. Milton, V. Shepherd, K. Smallman & H. Stanton

Contributions

JS contributed to the design of the work and interpretation of data and was responsible for leading the drafting and revision of the paper. SC contributed to the design of the work, the acquisition of data and the drafting and revision of the paper. AL contributed to the design of the work, the acquisition of data and the drafting and revision of the paper. EG contributed to a critical review of the manuscript and provided additional relevant references. JH provided feedback on initial drafts of the paper and contributed to subsequent revisions. JHu provided feedback on initial drafts of the paper and contributed to subsequent revisions. NG provided feedback on initial drafts of the paper and contributed to subsequent revisions. SM was involved in the acquisition and analysis of data and provided a critical review of the manuscript. YM was involved in the acquisition and analysis of data and provided a critical review of the manuscript. MR was involved in the interpretation of data and critical review and revision of the paper. HS contributed to the conception and design of the work, the acquisition and analysis of data, and the revision of the manuscript. JT provided feedback on initial drafts of the paper and contributed to subsequent revisions. LB-H made a substantial contribution to the design and conception of the work, led the acquisition and analysis of data, and contributed to the drafting and revision of the paper.

Corresponding author

Correspondence to Jeremy Segrott.

Ethics declarations

Ethics approval and consent to participate

Ethical approval was not sought as no personal or identifiable data was collected.

Consent for publication

Competing interests

All authors are or were members of staff or students in the Centre for Trials Research. JS is an associate editor of Trials.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Segrott, J., Channon, S., Lloyd, A. et al. Integrating qualitative research within a clinical trials unit: developing strategies and understanding their implementation in contexts. Trials 25, 323 (2024). https://doi.org/10.1186/s13063-024-08124-7


Received: 20 October 2023

Accepted: 17 April 2024

Published: 16 May 2024

DOI: https://doi.org/10.1186/s13063-024-08124-7


  • Qualitative research
  • Qualitative methods
  • Trials units
  • Normalisation Process Theory
  • Randomised controlled trials

ISSN: 1745-6215




Project implementation


Research Design and Analysis scientists can be active members of your research team, or can serve as consultants with a wide array of skills.

  • Develop new instruments and evaluate their psychometric properties
  • Develop and manage Database Systems
  • Robust database architecture design using enterprise-level database systems
  • Access to HIPAA-compliant servers 
  • Custom web and software development
  • Assistance with third-party survey providers such as Qualtrics
  • Custom data collection tools – surveys for participants or data entry tools for researchers
  • Access to the LSI instance of REDCap, and support of other REDCap instances, including KUMC
  • Multi-site data collection support, collaborative application solutions between sites
  • Offline data collection with mobile device applications (e.g., tablets and smart phones)
  • MOSIO text messaging for intervention delivery and/or data collection
  • Randomize individuals to conditions using stratification
  • Assist with clinical trial registration and analysis plan pre-registration  
  • 785-864-0742
  • [email protected]
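One service listed above, randomising individuals to conditions using stratification, is commonly implemented as permuted-block randomisation within strata. The sketch below is a minimal illustration under assumed parameters (two arms, block size 4, hypothetical site strata), not a description of this unit's actual procedure:

```python
import random

def stratified_block_randomise(participants, block_size=4, seed=42):
    """Permuted-block randomisation within strata: two arms ("A"/"B"),
    equal allocation, drawing a fresh shuffled block per stratum as needed."""
    rng = random.Random(seed)
    assignments = {}
    blocks = {}  # per-stratum queue of remaining assignments in the current block
    for pid, stratum in participants:
        if not blocks.get(stratum):
            block = ["A", "B"] * (block_size // 2)
            rng.shuffle(block)
            blocks[stratum] = block
        assignments[pid] = blocks[stratum].pop()
    return assignments

# Hypothetical participants stratified by recruiting site
participants = [(f"P{i:02d}", "site-1" if i % 2 else "site-2") for i in range(12)]
alloc = stratified_block_randomise(participants)
print(alloc)
```

Blocking guarantees that within each stratum the arms stay balanced to within half a block at any point during recruitment.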
  • Open access
  • Published: 14 May 2024

Non-pharmacological interventions to prevent PICS in critically ill adult patients: a protocol for a systematic review and network meta-analysis

  • Xiaoying Sun 1, 2,
  • Qian Tao 2,
  • Qing Cui 3,
  • Yaqiong Liu 4 &
  • Shouzhen Cheng ORCID: orcid.org/0000-0002-5063-9473 2

Systematic Reviews volume 13, Article number: 132 (2024)


Postintensive care syndrome (PICS) is common in critically ill adults who were treated in the intensive care unit (ICU). Although comparative analyses between types of non-pharmacological measures and usual care to prevent PICS have been performed, it remains unclear which of these potential treatments is the most effective for prevention.

To obtain the best evidence for non-pharmaceutical interventions in preventing PICS, a systematic review and Bayesian network meta-analyses (NMAs) will be conducted by searching nine electronic databases for randomized controlled trials (RCTs). Two reviewers will carefully screen the titles, abstracts, and full-text papers to identify and extract relevant data. Furthermore, the research team will meticulously check the bibliographic references of the selected studies and related reviews to discover any articles pertinent to this research. The primary focus of the study is to examine the prevalence and severity of PICS among critically ill patients admitted to the ICU. The additional outcomes encompass patient satisfaction and adverse effects related to the preventive intervention. The Cochrane Collaboration’s risk-of-bias assessment tool will be utilized to evaluate the risk of bias in the included RCTs. To assess the efficacy of various preventative measures, traditional pairwise meta-analysis and Bayesian NMA will be used. To gauge the confidence in the evidence supporting the results, we will utilize the Confidence in NMA tool.

There are multiple non-pharmacological interventions available for preventing the occurrence and development of PICS. However, most approaches have only been directly compared to standard care, lacking comprehensive evidence and clinical balance. Although the most effective care methods are still unknown, our research will provide valuable evidence for further non-pharmacological interventions and clinical practices aimed at preventing PICS. The research is expected to offer useful data to help healthcare workers and those creating guidelines decide on the most effective path of action for preventing PICS in adult ICU patients.

Systematic review registration

PROSPERO CRD42023439343.

Graphical Abstract


Postintensive care syndrome (PICS) is an umbrella term used to define the general influence of severe disease on individuals who were treated in the intensive care unit (ICU), encompassing various physical (such as neuromuscular weakness and limitations in daily activities), psychological (such as anxiety, sadness, and post-traumatic stress disorder [PTSD]), and cognitive dysfunction [ 1 , 2 , 3 ]. These ailments impair everyday living and quality of life. A majority of adult patients who received treatment in the ICU encounter such impairments [ 4 , 5 , 6 ]. The significant progress made in the medical, scientific, and technological domains has led to a notable increase in survival among people admitted to the ICU in recent years [ 7 ]. However, although adults treated in the ICU have increased survival, their quality of life can be negatively affected by their time in the ICU.

Intensive care is the medical care provided to critically ill patients during a medical emergency or crisis, managing severe conditions of all disease types [ 8 ]. Infectious and noninfectious illnesses and injuries contribute significantly to the global burden of disease, with an increasing trend over the years. The Global Burden of Disease project does not provide specific information on the burden of critical illness or its global variation [ 9 , 10 , 11 ]. Figure 1 describes the burden of critical illness based on global overall expenditure, the aging trend, and the number of ICU beds. These data come from Our World in Data [ 12 ], the China Health Statistics Yearbook [ 13 ], and United Nations Aging data [ 14 ].

Figure 1. The burden of PICS is increasing. PICS, postintensive care syndrome

In the past 50 years, the number of patients admitted to the ICU has continuously increased, especially after the beginning of the COVID-19 pandemic [ 15 , 16 ]. This trend is evident from Fig. 1A, which shows the growth of ICU bed capacity in China. The percentage of public health spending as a part of the gross domestic product for each country in 2019 is shown in Fig. 1C. Developed countries invest more in the healthcare sector [ 17 ], which is likely closely related to their aging populations and advancements in medical technology [ 18 ]. Figure 1B illustrates the projected future extent of global aging, indicating that the global population of individuals aged 65 years or older is expected to double within the next three decades, reaching an estimated 1.6 billion by 2050. Concurrently, the number of people aged 80 and older is anticipated to reach 459 million. Population aging has led to a higher risk of critical illness, as older adults bear a heavier load of chronic diseases [ 19 ]. However, the spectrum of medical conditions managed in the ICU includes not only the exacerbation of chronic diseases but also burns, trauma, and infectious diseases, as detailed in Fig. 1D. Moreover, our enhanced ability to treat formerly fatal conditions has led to higher demand for critical care services [ 20 ]. Consequently, PICS is also likely to increase with the growing number of adults treated in and discharged from the ICU.

Considering the substantial public health concerns arising from the consequences of PICS on quality of life, healthcare expenditures, and hospital readmissions, it is imperative to offer effective and feasible interventions to address this issue [ 21 ]. Assistance and support for patients in critical condition are potential interventions for improving outcomes related to PICS [ 22 ]. A recent study showed that administration of dexmedetomidine during the night as a preventive measure led to a substantial decrease in the incidence of PICS, as evidenced by a substantial reduction in psychological impairment during the 6-month monitoring period [ 23 ]. However, pharmacological treatments are often expensive and can pose a certain economic burden. Further, the use of sedative and anxiolytic drugs to treat patient symptoms is linked to delirium and negative physical and mental health consequences [ 24 ]. Consequently, there is an increasing focus on employing non-pharmacological approaches and establishing a more person-centered atmosphere within the ICU, aiming to benefit both patients and their families [ 25 ].

The current interventions for PICS that show the most potential involve non-pharmacological strategies [ 22 ]. The efficacy of early rehabilitation treatment, which consists of all physiotherapy, occupational therapy, and palliative care-related support, in managing PICS was explored through a systematic review [ 7 ], which showed that such treatment can lead to an improvement in short-term physical functioning but does not have any impact on mental or cognitive aspects. ICU diaries can reduce ICU-related psychological complications, such as ICU-related PTSD, depression, and anxiety [ 26 ]. However, results obtained from a randomized controlled trial (RCT) indicate that the use of ICU diaries alone does not provide any advantage over bedside education in reducing the symptoms of PTSD related to the ICU stay [ 27 ]. Hence, it is still uncertain which non-pharmacological interventions are the most effective and preferred for preventing depression, anxiety, and cognitive disorder and for preserving physical function in adults with critical illness.

Despite the potential deleterious effects of PICS in terms of healthcare usage and caregiver burden and the increasing population of adults treated in and discharged from the ICU, there is a lack of evidence-based practices for this specific group [ 28 ]. Because they incorporate indirect evidence when evaluating the confidence of treatment comparisons, network meta-analyses (NMAs) [ 29 ] have substantial advantages over conventional pairwise meta-analyses: NMAs allow for the evaluation of comparative effects that have not been directly compared in RCTs, potentially yielding more reliable and conclusive outcomes [ 30 ]. Hence, the study’s main goal is to use NMA to examine several non-pharmacological preventative treatments that address PICS in individuals treated in the ICU.

Methods/design

Criteria for eligibility

Studies conducted during the ICU stay, as well as those extending from the ICU admission through to the post-discharge period, will be eligible for inclusion.

Participants

Adults (aged > 18 years) admitted to the ICU will be included in the study. Gender, ethnicity, and nationality of participants will not be further restricted.

Type of study

Only RCTs providing comparisons of preventative strategies and other strategies or standard treatment for adult patients in ICUs with full-text publications will be included.

Intervention

Any non-pharmacological interventions to prevent PICS in critically ill patients will be included. The potential interventions may encompass, but are not limited to, the following:

Psychosocial programs

Follow-up service

Patient instructions

Exercise (e.g., strength and cardiovascular exercise)

Diary therapy

Environment control

Integrated therapy

Comparators

These are different types of non-pharmacological interventions or a control group; a control group is defined as a waiting list, usual/standard care, or a control condition that provided a brief educational leaflet.

Outcome measures

Studies must have assessed depression symptoms, anxiety symptoms, PTSD, cognitive status, sleep quality, pain, physical functioning, or quality of life, with detailed data available. Additionally, the evaluation of primary outcomes must use a comprehensive and specific scale, including but not limited to the following:

Primary outcomes

Depression: Hospital Anxiety and Depression Scales [ 31 ] and Hamilton Depression Rating Scale [ 32 ]

Anxiety: Hospital Anxiety and Depression Scales [ 31 ]

PTSD: The Impact of Event Scale-Revised [ 33 ] and the Davidson Trauma Scale [ 34 ]

Cognitive: The Confusion Assessment Method for the ICU [ 35 ] and Montreal Cognitive Assessment [ 36 ]

Sleep: Richards Campbell Sleep Questionnaire [ 37 ] and Pittsburgh Sleep Quality Index [ 38 ]

Pain: Numeric rating scale [ 39 ] and visual analog scale [ 40 ]

Physical functioning: The occurrence rate of ICU-acquired weakness and the evaluation through Medical Research Council scale scores [ 41 ] and activities of daily living [ 42 , 43 ]

Quality of life: Medical Outcomes Study 36-item short-form health survey [ 44 ] and European Quality of Life-5 Dimensions questionnaire [ 45 ]

Secondary outcomes

Any harms associated with the prevention intervention

Participant satisfaction

Search strategy

“Critical care,” “intensive care units,” “syndrome,” “symptom assessment,” “depression symptom,” “depression,” “anxiety,” “anxiety symptom,” “mental health,” “posttraumatic stress disorder,” “cognitive dysfunction,” “delirium,” “sleep,” “sleep wake disorder,” “sleep quality,” “pain,” “intensive care unit acquired weakness,” and “physical functioning” will be utilized as MeSH terms or keywords. The following electronic databases will be searched from inception to June 25, 2023: PubMed, Embase, CINAHL, Cochrane Central Register of Controlled Trials, Web of Science, PsycINFO, SinoMED, CNKI, and Wanfang. An example search of PubMed can be found in Table 1. Moreover, we will perform thorough backward citation searches on all included studies and pertinent reviews to find any previously missed references. Additionally, to find recent articles that have cited the pertinent literature, we will perform forward citation searching on Google Scholar. Finally, we will contact the authors of those studies for more information if the full text of certain sources is unavailable.
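The Boolean structure of such a search (synonym blocks joined by OR, blocks joined by AND) can be illustrated with a small helper; the term lists below are an abbreviated, hypothetical subset of the full strategy:

```python
# Hypothetical, abbreviated term blocks mirroring the population AND outcome
# structure of the search strategy (not the complete published strategy).
population = ['"critical care"', '"intensive care units"']
outcomes = ['"depression"', '"anxiety"', '"posttraumatic stress disorder"',
            '"cognitive dysfunction"', '"sleep wake disorder"', '"pain"']

def or_block(terms):
    """Join synonyms with OR inside parentheses."""
    return "(" + " OR ".join(terms) + ")"

# Combine the blocks with AND to form the final Boolean query string
query = " AND ".join([or_block(population), or_block(outcomes)])
print(query)
```

The same pattern extends to additional blocks (e.g. a study-design filter for RCTs) by appending further OR-blocks to the AND join.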

Study selection

This study will follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) criteria, and the PRISMA flow diagram [ 46 ], shown in Fig. 2, demonstrates the proposed study selection methods. The identified studies will be imported into the online Rayyan literature management tool ( https://rayyan.qcri.org ) for further analysis. Two reviewers will independently screen the papers’ titles and abstracts. If either reviewer determines that an article meets the inclusion criteria, the full text will be obtained. Subsequently, both reviewers will independently assess the eligibility of each reference through a thorough examination of the full text. Any differences that cannot be settled via discussion will be brought to the attention of a third reviewer who will act as a mediator. Cohen’s kappa coefficient will be calculated to measure inter-rater reliability. The reasons for excluding any studies will be carefully documented.

Figure 2. PRISMA flow diagram of the proposed study selection process
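Cohen's kappa, the inter-rater reliability measure named above, can be computed directly from the two reviewers' include/exclude decisions; a minimal sketch with made-up screening data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement under independence of the two raters
    expected = sum(freq_a[lab] * freq_b[lab] for lab in freq_a | freq_b) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions on ten abstracts (1 = include, 0 = exclude)
a = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
b = [1, 0, 0, 0, 1, 0, 0, 1, 0, 1]
print(round(cohens_kappa(a, b), 3))  # 0.583
```

Here raw agreement is 80%, but kappa of about 0.58 corrects for the agreement expected by chance alone, which is why it is preferred for reporting screening reliability.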

Data extraction

A standardized data extraction form is available as a supplemental file . Each member of the team will have the opportunity to test the form before its actual use. Two reviewers will independently perform data extraction. In the case of any inconsistencies, a third arbiter will be consulted to facilitate a discussion and achieve a consensus. The data extracted will cover various aspects of each study, such as background data (first author and year of publication), research design (setting, methods of sampling, randomization, allocation, and blinding), sample characteristics (inclusion and exclusion criteria, sample size, age, sex, educational background, and rates or severity of PICS), intervention details (type, content, frequency, duration, provider, and control group), and primary and secondary outcomes (including measurement time points, assessment tools, and any negative effects connected to preventative measures). In cases where information is missing or requires further clarification, we will reach out to the corresponding author for additional details.

Risk of bias

Two reviewers will independently assess the risk of bias. If a dispute or discrepancy cannot be settled via discussion, a third reviewer will help reach an agreement. We will appraise the methodological quality of the RCTs using the revised Cochrane risk-of-bias tool for randomized trials [ 47 ]. The five domains of this tool are as follows: (1) bias arising from the randomization process, (2) bias due to deviations from the intended interventions, (3) bias due to missing outcome data, (4) bias in measurement of the outcome, and (5) bias in selection of the reported result.

Data synthesis

Study results will be categorized and summarized based on the intervention type, detailing the methodologies and clinical attributes documented in the corresponding studies. The summary will include an exhaustive analysis of patient demographics, the reported outcomes, and a critical assessment of potential bias risks. In instances where a quantitative synthesis of research findings is infeasible, a narrative synthesis will elucidate the review’s outcomes.

Assessment of transitivity

In NMA, the transitivity assumption is crucial, allowing for indirect comparisons between interventions via a common comparator [ 48 ]. Considering the inherent clinical and methodological diversity in systematic reviews, it is essential for researchers to determine whether such variability could significantly impact the transitivity. To identify potential intransitivity, we will scrutinize the distribution of known effect modifiers across all direct comparisons before conducting the NMA [ 49 ], including variables like age, gender, disease severity, and the duration of interventions. A comparable distribution of these factors suggests that the transitivity assumption holds. Conversely, if transitivity is compromised, the NMA results may be biased, warranting a more conservative interpretation.
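As a minimal sketch of the transitivity check described above, one can tabulate the distribution of a known effect modifier (here, hypothetical mean participant age) within each direct comparison and inspect whether the distributions are broadly similar:

```python
from statistics import mean, stdev

# Hypothetical trial-level data: (direct comparison, mean participant age)
trials = [
    ("exercise vs usual care", 58.2),
    ("exercise vs usual care", 61.5),
    ("ICU diary vs usual care", 59.8),
    ("ICU diary vs usual care", 63.0),
    ("exercise vs ICU diary", 60.1),
]

# Group the modifier by direct comparison; broadly similar distributions
# across comparisons support the transitivity assumption.
by_comparison = {}
for comparison, age in trials:
    by_comparison.setdefault(comparison, []).append(age)

for comparison, ages in by_comparison.items():
    sd = stdev(ages) if len(ages) > 1 else float("nan")
    print(f"{comparison}: mean age {mean(ages):.1f} (SD {sd:.1f}, k={len(ages)})")
```

In practice this tabulation would be repeated for each pre-specified modifier (age, gender, disease severity, intervention duration) before fitting the NMA.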

Network meta-analysis

Should the assumption of transitivity be deemed met, a random-effects NMA [ 50 ] will be executed employing vague priors within a Bayesian framework.

Detection of heterogeneity

Considering the anticipated variability in participant demographics, intervention methodologies, and outcome measurements, statistical heterogeneity is expected; we will therefore implement a random-effects model to accommodate this variability across the included studies. The deviance information criterion (DIC) will serve as our comparative metric for model selection, integrating considerations of model fit with complexity.
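For intuition, the traditional pairwise random-effects synthesis mentioned in this protocol can be sketched with the frequentist DerSimonian-Laird estimator (the protocol's own NMA is Bayesian; the effect sizes and variances below are hypothetical):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling via the DerSimonian-Laird tau^2 estimator."""
    w = [1 / v for v in variances]                     # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_re = [1 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, se, tau2

# Hypothetical mean differences in anxiety scores and their variances
pooled, se, tau2 = dersimonian_laird([-1.2, -0.4, -0.9], [0.09, 0.16, 0.25])
print(f"pooled MD {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f}), "
      f"tau^2 {tau2:.3f}")
```

When tau-squared is zero the random-effects weights collapse to the fixed-effect weights, which is the sense in which the random-effects model "mitigates" heterogeneity only when it is actually present.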

To explore the sources of heterogeneity, we will conduct network meta-regression, subgroup analyses, and sensitivity analyses [ 51 ]. Network meta-regression will be carried out to examine the impact of potential effect modifiers (e.g., average age of participants, baseline symptom scores) on the primary outcomes. The duration of interventions may be a significant factor affecting efficacy, and subgroup analyses will be performed to assess the influence of different intervention durations on the primary outcomes. Additionally, if a sufficient number of studies are available, we will conduct sensitivity analyses by excluding trials assessed to be at high risk of bias to ensure the robustness of the primary study results.

Assessment of inconsistency

When closed loops are present within the NMA framework, the node-splitting approach will be employed to evaluate consistency between direct and indirect evidence; a p-value > 0.05 in the node-splitting analysis indicates agreement between the two sources [ 52 ].
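The logic behind comparing direct and indirect evidence in a closed loop A-B-C can be sketched as follows (hypothetical mean differences and variances; the actual analysis will use the Bayesian node-splitting implementation). On the mean-difference scale, the indirect A-vs-C estimate is the sum of A-vs-B and B-vs-C, with variances adding, and a two-sided z-test compares it against the direct A-vs-C estimate.

```python
import math

# Direct vs. indirect evidence in a closed loop A-B-C (hypothetical numbers).
md_ab, var_ab = -1.2, 0.10                 # A vs B
md_bc, var_bc = -0.5, 0.08                 # B vs C
md_ac_direct, var_ac_direct = -1.9, 0.12   # A vs C, direct evidence

# Indirect estimate via the common comparator B; variances add.
md_ac_indirect = md_ab + md_bc
var_ac_indirect = var_ab + var_bc

# Two-sided z-test for disagreement between the two sources.
z = (md_ac_direct - md_ac_indirect) / math.sqrt(var_ac_direct + var_ac_indirect)
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p-value
print(f"direct={md_ac_direct:.2f} indirect={md_ac_indirect:.2f} "
      f"z={z:.2f} p={p:.3f}")
```

Here p > 0.05, so the direct and indirect estimates would be judged consistent for this loop.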

Assessment of publication bias

Where a treatment comparison includes more than 10 studies, we will use a comparison-adjusted funnel plot to evaluate potential small-study effects and the likelihood of publication bias [ 53 ]. The symmetry of these plots will be assessed systematically via Egger’s test.
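Egger’s test is, at its core, a regression of standardized effects on precision. The toy sketch below uses hypothetical study data and plain (unweighted) least squares rather than the exact weighted variant; an intercept far from zero signals funnel-plot asymmetry.

```python
# Toy sketch of the idea behind Egger's regression test (hypothetical data,
# unweighted least squares): regress standardized effects (effect / SE) on
# precision (1 / SE); an intercept far from zero signals asymmetry.

effects = [1.0, 0.75, 0.625, 0.5625]   # hypothetical effect estimates
ses = [0.5, 0.25, 0.125, 0.0625]       # their standard errors

y = [e / s for e, s in zip(effects, ses)]  # standardized effects
x = [1 / s for s in ses]                   # precisions

# Ordinary least-squares fit of y on x.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx
print(f"Egger intercept = {intercept:.3f}")
```

In a real analysis the intercept’s standard error would also be computed so that a formal significance test of the intercept can be reported.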

The overall strength of the evidence will be assessed while accounting for research limitations, imprecision, heterogeneity, indirectness, and publication bias using the Confidence in Network Meta-Analysis (CINeMA) method. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework is the foundation of CINeMA [ 54 ] and contains the following six dimensions: within-study bias, reporting bias, indirectness, imprecision, heterogeneity, and incoherence. The adoption of CINeMA boosts transparency and prevents the selective use of evidence in making judgments, thereby reducing the level of subjectivity.

Statistical analyses

All analyses will be performed using R [ 55 ] version 4.3.0 and the gemtc package [ 56 ] version 1.0–1, which interfaces with JAGS [ 57 ] version 4.3.2 to perform Markov chain Monte Carlo (MCMC) simulation [ 58 ]. We will configure 4 Markov chains, each running a minimum of 20,000 iterations. The concordance between direct and indirect evidence will be ascertained through the node-splitting technique. Model convergence will be gauged using convergence diagnostics and trace density plots, with the potential scale reduction factor (PSRF) providing a metric for convergence adequacy; a PSRF close to 1 suggests satisfactory convergence. For continuous outcomes, the mean difference (MD) will be used as the measure of effect; for binary outcomes, the risk ratio (RR), each with its 95% confidence interval (CI). The area under the cumulative ranking curve (SUCRA), determined from the ranking probability matrix generated by R, will be calculated and the corresponding SUCRA curves plotted; a greater SUCRA value indicates a higher likelihood of a superior outcome ranking.
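How SUCRA values follow from a ranking-probability matrix can be sketched as below (intervention names and probabilities are hypothetical; in the actual analysis the matrix comes from the MCMC samples). With a treatments, SUCRA is the sum of cumulative ranking probabilities over ranks 1 to a-1, divided by a-1.

```python
# Hedged sketch: SUCRA from a hypothetical ranking-probability matrix.
# Rows: treatments; columns: P(rank 1), P(rank 2), P(rank 3).
rank_probs = {
    "ICU diary":     [0.60, 0.30, 0.10],
    "psychotherapy": [0.30, 0.50, 0.20],
    "usual care":    [0.10, 0.20, 0.70],
}

a = len(rank_probs)  # number of treatments
sucras = {}
for treatment, probs in rank_probs.items():
    cum = 0.0
    total = 0.0
    for p in probs[: a - 1]:  # cumulative probabilities for ranks 1 .. a-1
        cum += p
        total += cum
    sucras[treatment] = total / (a - 1)
    print(f"{treatment}: SUCRA = {sucras[treatment]:.2f}")
```

A SUCRA of 1 would mean a treatment is certain to rank first; 0, certain to rank last.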

A network diagram will be created to visualize the relationships between interventions [ 59 ]. Data processing will be carried out using the network suite of commands, after which network evidence graphs will be generated [ 58 ]. In these graphs, the size of each node will be proportional to the sample size of the corresponding intervention, and the thickness of each edge will represent the number of RCTs linking the two interventions.
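The bookkeeping behind such a graph can be sketched as follows, using hypothetical trials and intervention names: node size is proportional to the total sample randomized to each intervention, and edge thickness to the number of RCTs providing each direct comparison.

```python
from collections import Counter

# Hypothetical trials: (arm A, arm B, participants in A, participants in B)
trials = [
    ("usual care", "ICU diary", 40, 42),
    ("usual care", "ICU diary", 55, 53),
    ("usual care", "psychotherapy", 30, 31),
]

node_size = Counter()  # intervention -> total randomized sample
edge_rcts = Counter()  # comparison  -> number of RCTs
for a, b, n_a, n_b in trials:
    node_size[a] += n_a
    node_size[b] += n_b
    edge_rcts[tuple(sorted((a, b)))] += 1

print("node sizes:", dict(node_size))
print("edge widths (RCT counts):", dict(edge_rcts))
```

These counts would then be passed to a plotting routine that scales node radii and edge widths accordingly.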

The ICU is a specialized hospital department dedicated to the intensive care and treatment of seriously ill patients. The recovery of patients treated in the ICU is crucial for their well-being, as well as for their families and society [ 60 ]. However, ICU patients experience a decline in immunological response and hormonal disruption owing to the nature of their illnesses and the risk factors during ICU treatment [ 61 ]. This can lead to various symptoms, including sleep disturbance, anxiety, depression, cognitive impairment, and PTSD. Individuals can exhibit one or multiple symptoms of PICS [ 21 ], and these symptoms significantly impact patients’ quality of life and impose additional economic and caregiving burdens on society. Current preventive measures for PICS in ICU patients mainly comprise pharmacological and non-pharmacological interventions. Non-pharmacological interventions primarily involve physical activity, ICU diaries, psychotherapy, health education, and comprehensive treatment [ 62 , 63 ]. However, no research has evaluated which non-pharmacological preventive measures are most effective. Therefore, this proposed study aims to compare the occurrence of PICS using an NMA approach to assess the effectiveness of various intervention measures.

The proposed systematic review and NMA aim to address the effectiveness of intervention measures in preventing PICS in adults treated in the ICU. Developing effective preventive interventions can help alleviate the social and economic burden of PICS by reducing new cases or alleviating symptoms in affected individuals. This systematic review will employ NMA to compare all non-pharmacological measures aimed at preventing PICS. The primary outcomes will include the incidence or relief of various PICS symptoms, such as depression, anxiety, PTSD, cognitive impairment, sleep disturbance, physical functional impairment, and pain. Secondary outcomes include participant satisfaction and the frequency of adverse events.

To the best of our knowledge, this will be the first systematic review and NMA to evaluate currently available non-pharmacological therapies for preventing PICS. The research findings will provide rankings of treatment effectiveness and acceptability, which will contribute to evidence-based decision-making in the rehabilitation of ICU patients and to the further development of other non-pharmacological interventions. Furthermore, the methodology of this protocol is based on the Cochrane Handbook for Systematic Reviews of Interventions [ 64 ], the PRISMA statement [ 46 , 65 ], and GRADE assessment [ 66 ], taking into account the risks of random and systematic errors.

The ability of our systematic review and NMA to draw conclusions about non-pharmacological interventions for PICS in individuals treated in the ICU may be limited by the available data, which could be considered a limitation of this study. However, despite this limitation, identifying the best available evidence from current research is still valuable. Additionally, we will search only Chinese and English databases and will not analyze articles in other languages, which may be another limitation. However, it is worth noting that the majority of high-quality studies are usually published in English and included in English databases, so our analysis is unlikely to omit important studies.

Some current trials may not have included patient preferences [ 67 ]; our study builds on previous research and uses existing outcome data for statistical analysis. We therefore hope that individuals treated in the ICU can combine clinicians’ evidence-based prevention recommendations with their own circumstances when making choices.

Availability of data and materials

The study is a systematic review.

Abbreviations

CINeMA: Confidence in Network Meta-Analysis

GRADE: Grading of Recommendations Assessment, Development and Evaluation

ICU: Intensive care unit

PICS: Post-intensive care syndrome

PTSD: Post-traumatic stress disorder

RCT: Randomized controlled trial

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

DIC: Deviance information criterion

MCMC: Markov chain Monte Carlo simulation

PSRF: Potential scale reduction factor

MD: Mean difference

CI: Confidence interval

SUCRA: Area under the cumulative ranking curve

Kawakami D, Fujitani S, Morimoto T, Dote H, Takita M, Takaba A, et al. Prevalence of post-intensive care syndrome among Japanese intensive care unit patients: a prospective, multicenter, observational J-PICS study. Crit Care. 2021;25(1):69. https://doi.org/10.1186/s13054-021-03501-z .

Meddick-Dyson SA, Boland JW, Pearson M, Greenley S, Gambe R, Budding JR, et al. Implementation lessons learnt when trialling palliative care interventions in the intensive care unit: relationships between determinants, implementation strategies, and models of delivery—a systematic review protocol. Syst Rev. 2022;11(1):186. https://doi.org/10.1186/s13643-022-02054-8 .

Needham DM, Davidson J, Cohen H, Hopkins RO, Weinert C, Wunsch H, et al. Improving long-term outcomes after discharge from intensive care unit: report from a stakeholders’ conference. Crit Care Med. 2012;40(2):502–9. https://doi.org/10.1097/CCM.0b013e318232da75 .

Hatch R, Young D, Barber V, Griffiths J, Harrison DA, Watkinson P. Anxiety, depression and post traumatic stress disorder after critical illness: a UK-wide prospective cohort study. Crit Care. 2018;22(1):310. https://doi.org/10.1186/s13054-018-2223-6 .

Jackson JC, Pandharipande PP, Girard TD, Brummel NE, Thompson JL, Hughes CG, et al. Depression, post-traumatic stress disorder, and functional disability in survivors of critical illness in the BRAIN-ICU study: a longitudinal cohort study. Lancet Respir Med. 2014;2(5):369–79. https://doi.org/10.1016/s2213-2600(14)70051-7 .

Marra A, Pandharipande PP, Girard TD, Patel MB, Hughes CG, Jackson JC, et al. Co-occurrence of post-intensive care syndrome problems among 406 survivors of critical illness. Crit Care Med. 2018;46(9):1393–401. https://doi.org/10.1097/ccm.0000000000003218 .

Fuke R, Hifumi T, Kondo Y, Hatakeyama J, Takei T, Yamakawa K, et al. Early rehabilitation to prevent postintensive care syndrome in patients with critical illness: a systematic review and meta-analysis. BMJ Open. 2018;8(5): e019998. https://doi.org/10.1136/bmjopen-2017-019998 .

Allum L, Apps C, Hart N, Pattison N, Connolly B, Rose L. Standardising care in the ICU: a protocol for a scoping review of tools used to improve care delivery. Syst Rev. 2020;9(1):164. https://doi.org/10.1186/s13643-020-01414-6 .

GBD 2016 Disease and Injury Incidence and Prevalence Collaborators. Global, regional, and national incidence, prevalence, and years lived with disability for 328 diseases and injuries for 195 countries, 1990–2016: a systematic analysis for the Global Burden of Disease Study 2016. Lancet. 2017;390(10100):1211–59. https://doi.org/10.1016/s0140-6736(17)32154-2 .

GBD 2017 Disease and Injury Incidence and Prevalence Collaborators. Global, regional, and national incidence, prevalence, and years lived with disability for 354 diseases and injuries for 195 countries and territories, 1990–2017: a systematic analysis for the Global Burden of Disease Study 2017. Lancet. 2018;392(10159):1789–858. https://doi.org/10.1016/s0140-6736(18)32279-7 .

GBD 2019 Diseases and Injuries Collaborators. Global burden of 369 diseases and injuries in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet. 2020;396(10258):1204–22. https://doi.org/10.1016/s0140-6736(20)30925-9 .

Ortiz-Ospina E, Roser M. “Healthcare spending”. Published online at OurWorldInData.org. 2017. https://ourworldindata.org/financing-healthcare . Accessed 17 July 2023.

Statistical Information Center. China Health Statistics Yearbook. 2022. http://www.nhc.gov.cn/mohwsbwstjxxzx/new_index.shtml . Accessed 17 July 2023.

United Nations. World Social Report 2023. 2023. https://www.un.org/development/desa/pd/content/launch-world-social-report-2023 . Accessed 17 July 2023.

Grasselli G, Greco M, Zanella A, Albano G, Antonelli M, Bellani G, et al. Risk factors associated with mortality among patients with COVID-19 in intensive care units in Lombardy, Italy. JAMA Intern Med. 2020;180(10):1345–55. https://doi.org/10.1001/jamainternmed.2020.3539 .

Pallanch O, Ortalda A, Pelosi P, Latronico N, Sartini C, Lombardi G, et al. Effects on health-related quality of life of interventions affecting survival in critically ill patients: a systematic review. Crit Care. 2022;26(1):126. https://doi.org/10.1186/s13054-022-03993-3 .

Sen-Crowe B, Sutherland M, McKenney M, Elkbuli A. A closer look into global hospital beds capacity and resource shortages during the COVID-19 pandemic. J Surg Res. 2021;260:56–63. https://doi.org/10.1016/j.jss.2020.11.062 .

Sevin CM, Bloom SL, Jackson JC, Wang L, Ely EW, Stollings JL. Comprehensive care of ICU survivors: development and implementation of an ICU recovery center. J Crit Care. 2018;46:141–8. https://doi.org/10.1016/j.jcrc.2018.02.011 .

Soltani SA, Ingolfsson A, Zygun DA, Stelfox HT, Hartling L, Featherstone R, et al. Quality and performance measures of strain on intensive care capacity: a protocol for a systematic review. Syst Rev. 2015;4(1):158. https://doi.org/10.1186/s13643-015-0145-9 .

Mooi NM, Ncama BP. Evidence on nutritional therapy practice guidelines and implementation in adult critically ill patients: a scoping review protocol. Syst Rev. 2019;8(1):291. https://doi.org/10.1186/s13643-019-1194-2 .

Rousseau AF, Prescott HC, Brett SJ, Weiss B, Azoulay E, Creteur J, et al. Long-term outcomes after critical illness: recent insights. Crit Care. 2021;25(1):108. https://doi.org/10.1186/s13054-021-03535-3 .

Brown SM, Bose S, Banner-Goodspeed V, Beesley SJ, Dinglas VD, Hopkins RO, et al. Approaches to addressing post-intensive care syndrome among intensive care unit survivors. A narrative review. Ann Am Thorac Soc. 2019;16(8):947–56. https://doi.org/10.1513/AnnalsATS.201812-913FR .

Dong CH, Gao CN, An XH, Li N, Yang L, Li DC, et al. Nocturnal dexmedetomidine alleviates post-intensive care syndrome following cardiac surgery: a prospective randomized controlled clinical trial. BMC Med. 2021;19(1):306. https://doi.org/10.1186/s12916-021-02175-2 .

Reade MC, Finfer S. Sedation and delirium in the intensive care unit. N Engl J Med. 2014;370(5):444–54. https://doi.org/10.1056/NEJMra1208705 .

Hosey MM, Jaskulski J, Wegener ST, Chlan LL, Needham DM. Animal-assisted intervention in the ICU: a tool for humanization. Crit Care. 2018;22(1):22. https://doi.org/10.1186/s13054-018-1946-8 .

McIlroy PA, King RS, Garrouste-Orgeas M, Tabah A, Ramanan M. The effect of ICU diaries on psychological outcomes and quality of life of survivors of critical illness and their relatives: a systematic review and meta-analysis. Crit Care Med. 2019;47(2):273–9. https://doi.org/10.1097/ccm.0000000000003547 .

Sayde GE, Stefanescu A, Conrad E, Nielsen N, Hammer R. Implementing an intensive care unit (ICU) diary program at a large academic medical center: results from a randomized control trial evaluating psychological morbidity associated with critical illness. Gen Hosp Psychiatry. 2020;66:96–102. https://doi.org/10.1016/j.genhosppsych.2020.06.017 .

Schofield-Robinson OJ, Lewis SR, Smith AF, McPeake J, Alderson P. Follow-up services for improving long-term outcomes in intensive care unit (ICU) survivors. Cochrane Database Syst Rev. 2018;11(11):Cd012701. https://doi.org/10.1002/14651858.CD012701.pub2 .

Thorlund K, Mills EJ. Sample size and power considerations in network meta-analysis. Syst Rev. 2012;1(1):41. https://doi.org/10.1186/2046-4053-1-41 .

Tian J, Gao Y, Zhang J, Yang Z, Dong S, Zhang T, et al. Progress and challenges of network meta-analysis. J Evid Based Med. 2021;14(3):218–31. https://doi.org/10.1111/jebm.12443 .

Beekman E, Verhagen A. Clinimetrics: Hospital Anxiety and Depression Scale. J Physiother. 2018;64(3):198. https://doi.org/10.1016/j.jphys.2018.04.003 .

Hamilton M. A rating scale for depression. J Neurol Neurosurg Psychiatry. 1960;23(1):56–62. https://doi.org/10.1136/jnnp.23.1.56 .

Motlagh H. Impact of Event Scale-Revised. J Physiother. 2010;56(3):203. https://doi.org/10.1016/s1836-9553(10)70029-1 .

Davidson JR, Book SW, Colket JT, Tupler LA, Roth S, David D, et al. Assessment of a new self-rating scale for post-traumatic stress disorder. Psychol Med. 1997;27(1):153–60. https://doi.org/10.1017/s0033291796004229 .

Ely EW, Margolin R, Francis J, May L, Truman B, Dittus R, et al. Evaluation of delirium in critically ill patients: validation of the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU). Crit Care Med. 2001;29(7):1370–9. https://doi.org/10.1097/00003246-200107000-00012 .

Nasreddine ZS, Phillips NA, Bédirian V, Charbonneau S, Whitehead V, Collin I, et al. The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc. 2005;53(4):695–9. https://doi.org/10.1111/j.1532-5415.2005.53221.x .

Richards KC, O’Sullivan PS, Phillips RL. Measurement of sleep in critically ill patients. J Nurs Meas. 2000;8(2):131–44.

Buysse DJ, Reynolds CF 3rd, Monk TH, Berman SR, Kupfer DJ. The Pittsburgh Sleep Quality Index: a new instrument for psychiatric practice and research. Psychiatry Res. 1989;28(2):193–213. https://doi.org/10.1016/0165-1781(89)90047-4 .

Hartrick CT, Kovan JP, Shapiro S. The numeric rating scale for clinical pain measurement: a ratio measure? Pain Pract. 2003;3(4):310–6. https://doi.org/10.1111/j.1530-7085.2003.03034.x .

McCormack HM, Horne DJ, Sheather S. Clinical applications of visual analogue scales: a critical review. Psychol Med. 1988;18(4):1007–19. https://doi.org/10.1017/s0033291700009934 .

Kleyweg RP, van der Meché FG, Schmitz PI. Interobserver agreement in the assessment of muscle strength and functional abilities in Guillain-Barré syndrome. Muscle Nerve. 1991;14(11):1103–9. https://doi.org/10.1002/mus.880141111 .

Mahoney FI, Barthel DW. Functional evaluation: the Barthel index. Md State Med J. 1965;14:61–5.

Katz S, Ford AB, Moskowitz RW, Jackson BA, Jaffe MW. Studies of illness in the aged. The index of ADL: a standardized measure of biological and psychosocial function. Jama. 1963;185:914–9. https://doi.org/10.1001/jama.1963.03060120024016 .

Ware JE Jr., Sherbourne CD. The MOS 36-item short-form health survey (SF-36). I. Conceptual framework and item selection. Med Care. 1992;30(6):473–83.

The EuroQol Group. EuroQol--a new facility for the measurement of health-related quality of life. Health Policy. 1990;16(3):199–208. https://doi.org/10.1016/0168-8510(90)90421-9 .

Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1):1. https://doi.org/10.1186/2046-4053-4-1 .

Sterne JAC, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366: l4898. https://doi.org/10.1136/bmj.l4898 .

Salanti G. Indirect and mixed-treatment comparison, network, or multiple-treatments meta-analysis: many names, many benefits, many concerns for the next generation evidence synthesis tool. Res Synth Methods. 2012;3(2):80–97. https://doi.org/10.1002/jrsm.1037 .

Chaimani A, Caldwell DM, Li T, Higgins JPT, Salanti G. Additional considerations are required when preparing a protocol for a systematic review with multiple interventions. J Clin Epidemiol. 2017;83:65–74. https://doi.org/10.1016/j.jclinepi.2016.11.015 .

Chaimani A, Higgins JP, Mavridis D, Spyridonos P, Salanti G. Graphical tools for network meta-analysis in STATA. PLoS One. 2013;8(10):e76654. https://doi.org/10.1371/journal.pone.0076654 .

Dias S, Sutton AJ, Welton NJ, Ades AE. Evidence synthesis for decision making 3: heterogeneity–subgroups, meta-regression, bias, and bias-adjustment. Med Decis Making. 2013;33(5):618–40. https://doi.org/10.1177/0272989x13485157 .

Dias S, Welton NJ, Sutton AJ, Caldwell DM, Lu G, Ades AE. Evidence synthesis for decision making 4: inconsistency in networks of evidence based on randomized controlled trials. Med Decis Making. 2013;33(5):641–56. https://doi.org/10.1177/0272989x12455847 .

Chaimani A, Salanti G. Using network meta-analysis to evaluate the existence of small-study effects in a network of interventions. Res Synth Methods. 2012;3(2):161–76. https://doi.org/10.1002/jrsm.57 .

Nikolakopoulou A, Higgins JPT, Papakonstantinou T, Chaimani A, Del Giovane C, Egger M, et al. CINeMA: an approach for assessing confidence in the results of a network meta-analysis. PLoS Med. 2020;17(4): e1003082. https://doi.org/10.1371/journal.pmed.1003082 .

RStudio Team. RStudio: integrated development for R. https://www.rstudio.com/ . Accessed 12 June 2023.

gemtc: network meta-analysis using Bayesian methods. http://cran.r-project.org/web/packages/gemtc/index.html . Accessed 12 June 2023.

JAGS: just another Gibbs sampler. https://sourceforge.net/projects/mcmc-jags/ . Accessed 12 June 2023.

Shim SR, Kim SJ, Lee J, Rücker G. Network meta-analysis: application and practice using R software. Epidemiol Health. 2019;41: e2019013. https://doi.org/10.4178/epih.e2019013 .

Watt J, Tricco AC, Straus S, Veroniki AA, Naglie G, Drucker AM. Research techniques made simple: network meta-analysis. J Invest Dermatol. 2019;139(1):4-12.e1. https://doi.org/10.1016/j.jid.2018.10.028 .

Spadaro S, Capuzzo M, Volta CA. Fatigue of ICU survivors, no longer to be neglected. Chest. 2020;158(3):848–9. https://doi.org/10.1016/j.chest.2020.05.521 .

Gayat E, Cariou A, Deye N, Vieillard-Baron A, Jaber S, Damoisel C, et al. Determinants of long-term outcome in ICU survivors: results from the FROG-ICU study. Crit Care. 2018;22(1):8. https://doi.org/10.1186/s13054-017-1922-8 .

Bannon L, McGaughey J, Clarke M, McAuley DF, Blackwood B. Impact of non-pharmacological interventions on prevention and treatment of delirium in critically ill patients: protocol for a systematic review of quantitative and qualitative research. Syst Rev. 2016;5(1):75. https://doi.org/10.1186/s13643-016-0254-0 .

Howard AF, Currie L, Bungay V, Meloche M, McDermid R, Crowe S, et al. Health solutions to improve post-intensive care outcomes: a realist review protocol. Syst Rev. 2019;8(1):11. https://doi.org/10.1186/s13643-018-0939-7 .

Cumpston M, Li T, Page MJ, Chandler J, Welch VA, Higgins JP, et al. Updated guidance for trusted systematic reviews: a new edition of the Cochrane Handbook for Systematic Reviews of Interventions. Cochrane Database Syst Rev. 2019;10(10):Ed000142. https://doi.org/10.1002/14651858.Ed000142 .

Hutton B, Salanti G, Caldwell DM, Chaimani A, Schmid CH, Cameron C, et al. The PRISMA extension statement for reporting of systematic reviews incorporating network meta-analyses of health care interventions: checklist and explanations. Ann Intern Med. 2015;162(11):777–84. https://doi.org/10.7326/m14-2385 .

Meader N, King K, Llewellyn A, Norman G, Brown J, Rodgers M, et al. A checklist designed to aid consistency and reproducibility of GRADE assessments: development and pilot validation. Syst Rev. 2014;3(1):82. https://doi.org/10.1186/2046-4053-3-82 .

Xyrichis A, Fletcher S, Brearley S, Philippou J, Purssell E, Terblanche M, et al. Interventions to promote patients and families’ involvement in adult intensive care settings: a protocol for a mixed-method systematic review. Syst Rev. 2019;8(1):185. https://doi.org/10.1186/s13643-019-1102-9 .

Acknowledgements

The visual abstract and icons in Figure 1B and D are from the following authors on icon font: Yang, Wang xiaoxia453, Ziji, Shenseng, Saori1994, Bafenzhongdewennuan, Shenxiawuyan, Viki-wei, Zhejiushixiaowang, Wendy-qinzi, Anniebaby11, and Xiaojiage. We sincerely appreciate their creativity and generosity in sharing their creations, which provided crucial elements for the design of the figures in this paper. Special thanks also go to the unDraw platform for offering free illustrations, which enhanced the clarity and visual appeal of the abstract. Figure 1A was created using genescloud tools, a free online platform for data analysis (URL: https://www.genescloud.cn ). We extend our gratitude to the genescloud team for providing such exceptional tools, which greatly facilitated the analysis of data and the production of figures in this study. Figure 1C is adapted from Esteban Ortiz-Ospina and Max Roser’s work “Healthcare Spending” (2017). The visualization was originally published online at OurWorldInData.org and retrieved from: https://ourworldindata.org/financing-healthcare . We appreciate the staff at OurWorldInData.org for providing this insightful chart.

Author information

Authors and affiliations

School of Nursing, Sun Yat-Sen University, Guangzhou, 510080, China

Xiaoying Sun

The First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, 510080, China

Xiaoying Sun, Qian Tao & Shouzhen Cheng

Department of Respiratory and Intensive Care Medicine, Shenzhen People’s Hospital, Shenzhen, 518020, China

School of Nursing, Suzhou Medical College, Soochow University, Suzhou, 215006, China

Yaqiong Liu

Contributions

XS and SC designed this study. Each author made a contribution to the protocol’s planning and creation. XS designed the graphics and wrote the initial draft, and all authors participated in revising the manuscript and approving and contributing to the final written manuscript. SC acted as the guarantor of the review.

Corresponding author

Correspondence to Shouzhen Cheng .

Ethics declarations

Ethics approval and consent to participate

This study does not require ethical review or clearance as it solely comprises a meta-analysis of information from previously published studies. The study was not designed or planned with the involvement of patients or the general public.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

PRISMA-P 2015 Checklist. Data extraction form

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Sun, X., Tao, Q., Cui, Q. et al. Non-pharmacological interventions to prevent PICS in critically ill adult patients: a protocol for a systematic review and network meta-analysis. Syst Rev 13 , 132 (2024). https://doi.org/10.1186/s13643-024-02542-z

Received : 26 July 2023

Accepted : 23 April 2024

Published : 14 May 2024

DOI : https://doi.org/10.1186/s13643-024-02542-z


  • Non-pharmacological intervention
  • Systematic review

Systematic Reviews

ISSN: 2046-4053

  • Submission enquiries: Access here and click Contact Us
  • General enquiries: [email protected]

research design analysis unit

IMAGES

  1. PPT

    research design analysis unit

  2. Research Design Analysis Unit

    research design analysis unit

  3. 25 Types of Research Designs (2024)

    research design analysis unit

  4. Unit of Analysis in Research

    research design analysis unit

  5. unit of analysis with example

    research design analysis unit

  6. How to Write a Research Design

    research design analysis unit

VIDEO

  1. Design and Analysis of Algorithms Important Questions AD3351 February 2024 Anna University Exam

  2. Enhancing Community Safety through Collaborative Crime Prevention Strategies

  3. What is research design? #how to design a research advantages of research design

  4. Research Design என்றால் என்ன? ‌I தமிழில் I NTA NET Research Aptitude

  5. Types of Research Design

  6. Research Design; a simple guide

COMMENTS

  1. What is a Unit of Analysis? Overview & Examples

    A unit of analysis is an object of study within a research project. It is the smallest unit a researcher can use to identify and describe a phenomenon—the 'what' or 'who' the researcher wants to study. For example, suppose a consultancy firm is hired to train the sales team in a solar company that is struggling to meet its targets.

  2. Unit of Analysis: Definition, Types & Examples

    A unit of analysis is the thing you want to discuss after your research, probably what you would regard to be the primary emphasis of your research. The researcher plans to comment on the primary topic or object in the research as a unit of analysis. The research question plays a significant role in determining it.

  3. Units of Analysis and Methodologies for Qualitative Studies

    Units of Analysis and Methodologies for Qualitative Studies. By Janet Salmons, PhD Manager, Sage Research Methods Community. Selecting the methodology is an essential piece of research design. This post is excerpted and adapted from Chapter 2 of Doing Qualitative Research Online (2022). Use the code COMMUNITY3 for a 20% discount on the book ...

  4. Qualitative Data Analysis: The Unit of Analysis

    The following is a modified excerpt from Applied Qualitative Research Design: A Total Quality Framework Approach (Roller & Lavrakas, 2015, pp. 262-263).. As discussed in two earlier articles in Research Design Review (see "The Important Role of 'Buckets' in Qualitative Data Analysis" and "Finding Connections & Making Sense of Qualitative Data"), the selection of the unit of ...

  5. Unit of Analysis: Definition, Types & Examples

    The unit of analysis is a way to understand and study a phenomenon. There are four main types of unit of analysis: individuals, groups, artifacts (books, photos, newspapers), and geographical units (towns, census tracts, states). Individuals are the smallest level of analysis. For example, an individual may be a person or an animal.

  6. 4.4 Units of Analysis and Units of Observation

    A unit of observation is the item (or items) that you actually observe, measure, or collect in the course of trying to learn something about your unit of analysis. In a given study, the unit of observation might be the same as the unit of analysis, but that is not always the case. Further, units of analysis are not required to be the same as ...

  7. Choosing the Right Unit of Analysis for Your Research Project

    The unit of analysis in research refers to the level at which data is collected and analyzed. It is essential for researchers to understand the different types of units of analysis, as well as their significance in shaping the research process and outcomes. ... The theoretical framework and research design establish the structure for a study ...

  8. The Unit of Analysis Explained

    The unit of analysis is named as such because the unit type is determined based on the actual data analysis that you perform in your project or study. For example, if your research is based around data on exam grades for students at two different universities, then the unit of analysis is the data for the individual student due to each student ...

  9. Research Design

    Step 1: Consider your aims and approach. Step 2: Choose a type of research design. Step 3: Identify your population and sampling method. Step 4: Choose your data collection methods. Step 5: Plan your data collection procedures. Step 6: Decide on your data analysis strategies. Frequently asked questions.

  10. 2.1: Unit of Analysis

    The unit of analysis refers to the person, collective, or object that is the target of the investigation. Typical unit of analysis include individuals, groups, organizations, countries, technologies, objects, and such. For instance, if we are interested in studying people's shopping behavior, their learning outcomes, or their attitudes to new ...

- The main focus of your study

Another point to consider when designing a research project, one that may differ slightly between qualitative and quantitative studies, is the unit of analysis. A unit of analysis is the entity that you wish to be able to say something about at the end of your study, likely what you'd consider to be its main focus.

- Units of analysis in learning research

Defining the unit of analysis is not purely mechanical. The unit(s) of analysis are defined "in part by the study's object, in part by the researcher's focus, in part by the audience of the research and in part by the research participants (as distinct from the research object)" (Matusov, 2007, p. 325). This opens an array of possibilities for analyses that take these perspectives into account.

- Macro, meso, and micro levels

The unit of analysis is the entity that frames what is being looked at in a study, or the entity being studied as a whole. In social science research, the most commonly referenced unit of analysis at the macro level, considered to be a society, is the state (polity), i.e., the country. At the meso level, common units include groups, organizations, and institutions; at the micro level, the typical unit is the individual.

- Why the distinction matters in qualitative research

The unit of analysis and the unit of observation matter because they set important boundaries for your study, particularly for your data analysis. In qualitative research, researchers can struggle to identify what is germane and what is not; by clearly identifying the level at which you are working and what you are studying, you will be better able to focus your analysis.

- Units of analysis in content analysis

In content analysis, the unit of analysis refers to the portion of content that forms the basis for decisions made during the development of codes. In textual content analyses, for example, the unit of analysis may be a word, a sentence (Milne & Adler, 1999), a paragraph, an article or chapter, or an entire edition or volume.
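
As a minimal sketch, the choice of coding unit can be shown by segmenting the same text three ways. The sample text and the `segment` helper are invented for illustration; real content-analysis tooling would be more careful about sentence boundaries.

```python
import re

# Hypothetical sample text, segmented at three candidate coding units.
text = ("Units of analysis matter. They set boundaries.\n\n"
        "Choose them before coding begins.")

def segment(text, unit):
    """Split text into coding units: 'word', 'sentence', or 'paragraph'."""
    if unit == "word":
        return re.findall(r"\w+", text)
    if unit == "sentence":
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    if unit == "paragraph":
        return [p.strip() for p in text.split("\n\n") if p.strip()]
    raise ValueError(f"unknown unit: {unit}")

print(len(segment(text, "word")))       # 12
print(len(segment(text, "sentence")))   # 3
print(len(segment(text, "paragraph")))  # 2
```

The same passage yields twelve word-level units, three sentence-level units, or two paragraph-level units, which is why the coding unit must be fixed before coding begins.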
