What Is a Research Design | Types, Guide & Examples

Published on June 7, 2021 by Shona McCombes. Revised on November 20, 2023 by Pritha Bhandari.

A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:

  • Your overall research objectives and approach
  • Whether you’ll rely on primary research or secondary research
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research objectives and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions about research design

Step 1: Consider your aims and approach

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities—start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed-methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.

Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types.

  • Experimental and quasi-experimental designs allow you to test cause-and-effect relationships.
  • Descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

Common types of qualitative design include case studies, ethnography, grounded theory, and phenomenological research. They often have similar approaches in terms of data collection, but focus on different aspects when analyzing the data.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study—plants, animals, organizations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalize your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
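
To make the distinction concrete, here is a minimal Python sketch comparing the two approaches; the population of student IDs and the sample size of 100 are invented purely for illustration.

```python
import random

# Hypothetical sampling frame: ID numbers for a population of 5,000 students
population = [f"student_{i:04d}" for i in range(5000)]

random.seed(42)  # fixed seed so the illustration is reproducible

# Probability sampling: simple random sampling gives every student
# a known, equal chance of selection.
probability_sample = random.sample(population, k=100)

# Non-probability sampling: a convenience sample takes whoever is easiest
# to reach (here, simply the first 100 IDs on the list), so selection
# chances are unknown and bias is more likely.
convenience_sample = population[:100]

print(len(probability_sample), len(convenience_sample))  # 100 100
```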

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study , your aim is to deeply understand a specific context, not to generalize to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question .

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviors, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews .

Observation methods

Observational studies allow you to collect data unobtrusively, observing characteristics, behaviors or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what kinds of data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected—for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are high in reliability and validity.

Operationalization

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalization means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in—for example, questionnaires or inventories whose reliability and validity have already been established.
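
As a toy illustration of operationalization, the sketch below scores an abstract concept (satisfaction) as the average of three hypothetical 5-point Likert items; the item names and scoring rule are assumptions made up for this example, not a validated instrument.

```python
# Toy operationalization: "satisfaction" defined as the mean of three
# hypothetical 5-point Likert items (1 = strongly disagree, 5 = strongly agree).
# Illustrative only: not a validated scale.
SATISFACTION_ITEMS = ["enjoys_product", "would_recommend", "meets_expectations"]

def satisfaction_score(responses):
    """Average the Likert item responses into one measurable indicator."""
    return sum(responses[item] for item in SATISFACTION_ITEMS) / len(SATISFACTION_ITEMS)

participant = {"enjoys_product": 4, "would_recommend": 5, "meets_expectations": 3}
print(satisfaction_score(participant))  # 4.0
```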

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method , you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample—by mail, online, by phone, or in person?

If you’re using a probability sampling method , it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method , how will you avoid research bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organizing and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymize and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well-organized will save time when it comes to analyzing it. It can also help other researchers validate and add to your findings (high replicability ).

Step 6: Decide on your data analysis strategies

On its own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyze the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarize your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarize your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
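
As a minimal illustration, the three kinds of descriptive summary can be computed with Python's standard library on a small set of made-up test scores:

```python
from collections import Counter
import statistics

scores = [72, 85, 85, 90, 64, 78, 85, 70]  # made-up test scores

# Distribution: how often each score occurs
print(Counter(scores))            # e.g. a score of 85 appears three times

# Central tendency: the mean score
print(statistics.mean(scores))    # 78.625

# Variability: the sample standard deviation
print(statistics.stdev(scores))   # how spread out the scores are around the mean
```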

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
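
For instance, assuming SciPy is available, a comparison test (an independent-samples t test) and an association test (a Pearson correlation) might look like the sketch below; the group names and numbers are invented purely to illustrate the two families of tests.

```python
from scipy import stats

# Comparison test: do exam scores differ between two teaching methods?
group_a = [78, 85, 90, 72, 88, 80]   # made-up scores, method A
group_b = [70, 75, 68, 74, 79, 72]   # made-up scores, method B
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Association test: is hours studied related to exam score?
hours = [2, 4, 6, 8, 10, 12]
score = [65, 70, 74, 80, 85, 91]
r, p = stats.pearsonr(hours, score)
print(f"r = {r:.2f}, p = {p:.3f}")
```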

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

There are many other ways of analyzing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

Frequently asked questions about research design

A research design is a strategy for answering your research question. It defines your overall approach and determines how you will collect and analyze data.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data from credible sources, and that you use the right kind of analysis to answer your questions. This allows you to draw valid, trustworthy conclusions.

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

A research project is an academic, scientific, or professional undertaking to answer a research question . Research projects can take many forms, such as qualitative or quantitative , descriptive , longitudinal , experimental , or correlational . What kind of research approach you choose will depend on your topic.

Grad Coach

Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023

Research design for qualitative and quantitative studies

Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “ research design ”. Here, we’ll guide you through the basics using practical examples , so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?
  • Research design types for quantitative studies
  • Video explainer : quantitative research design
  • Research design types for qualitative studies
  • Video explainer : qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project , from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential, as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of making misaligned choices in terms of your methodology – especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods , which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology . Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.

Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive , correlational , experimental , and quasi-experimental . 

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.

The key defining attribute of this type of research design is that it purely describes the situation . In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics . By doing so, it can provide valuable insights and is often used as a precursor to other research design types.

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them . In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).
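
As a rough sketch of that last step (assuming SciPy is available, and using invented numbers purely for illustration), the statistical test could be a rank correlation between exercise frequency and one health indicator, such as resting heart rate:

```python
from scipy import stats

# Invented records: weekly exercise sessions and resting heart rate (bpm)
exercise_per_week  = [0, 1, 2, 3, 4, 5, 6, 7]
resting_heart_rate = [78, 76, 74, 72, 70, 69, 66, 64]

# Spearman's rank correlation: is more exercise associated with a lower
# resting heart rate? (This measures association only, not causation.)
rho, p_value = stats.spearmanr(exercise_per_week, resting_heart_rate)
print(f"rho = {rho:.2f}, p = {p_value:.4f}")
```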

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful in terms of developing predictions, and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which we’ll look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality . In other words, correlation does not equal causation . To establish causality, you’ll need to move into the realm of experimental design, coming up next…

Experimental Research Design

Experimental research design is used to determine if there is a causal relationship between two or more variables. With this type of research design, you, as the researcher, manipulate one variable (the independent variable) while holding other variables constant, and measure the effect on the outcome (the dependent variable). Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.
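
One way to run that comparison is a one-way ANOVA across the fertiliser conditions. The sketch below assumes SciPy is available and uses invented growth measurements for illustration only:

```python
from scipy import stats

# Invented growth (cm) after four weeks for each group of plants
fertiliser_a  = [12.1, 13.4, 12.8, 13.0]
fertiliser_b  = [14.2, 15.1, 14.8, 15.5]
no_fertiliser = [10.2, 9.8, 10.5, 10.1]

# One-way ANOVA: do the mean growths differ more than chance alone would explain?
f_stat, p_value = stats.f_oneway(fertiliser_a, fertiliser_b, no_fertiliser)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant result would suggest that at least one group's mean growth differs from the others; post hoc comparisons would then be needed to pin down which fertiliser made the difference.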

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes , which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment . This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling ). Doing so helps reduce the potential for bias and confounding variables . This need for random assignment can lead to ethics-related issues . For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.
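
Random assignment itself is simple to implement. The sketch below uses hypothetical participant IDs: it shuffles the list and splits it in half, so every participant has the same chance of landing in either condition.

```python
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participant IDs

random.shuffle(participants)          # randomise the order
treatment_group = participants[:10]   # first half goes to the treatment condition
control_group   = participants[10:]   # second half goes to the control condition
```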

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations , but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables .

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives , emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed .

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation , especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive , given the need for multiple rounds of data collection and analysis.

Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes .

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities . All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context .

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design, multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design, a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “ But how do I decide which research design to use? ”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve testing cause-and-effect relationships between variables, an experimental or quasi-experimental design would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.

Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological , grounded theory , ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.

An introduction to different types of study design

Posted on 6th April 2021 by Hadi Abbas

""

Study designs are the set of methods and procedures used to collect and analyze data in a study.

Broadly speaking, there are 2 types of study designs: descriptive studies and analytical studies.

Descriptive studies

  • Describes specific characteristics in a population of interest
  • The most common forms are case reports and case series
  • In a case report, we discuss our experience with the patient’s symptoms, signs, diagnosis, and treatment
  • In a case series, several patients with similar experiences are grouped.

Analytical Studies

Analytical studies are of 2 types: observational and experimental.

Observational studies are studies that we conduct without any intervention or experiment. In those studies, we purely observe the outcomes.  On the other hand, in experimental studies, we conduct experiments and interventions.

Observational studies

Observational studies include many subtypes. Below, I will discuss the most common designs.

Cross-sectional study:

  • This design is transverse: we take a specific sample at a specific point in time, without any follow-up
  • It allows us to calculate the frequency of a disease (prevalence) or the frequency of a risk factor
  • This design is easy to conduct
  • For example – if we want to know the prevalence of migraine in a population, we can conduct a cross-sectional study whereby we take a sample from the population and calculate the number of patients with migraine headaches.
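
As a quick illustration with made-up data, prevalence is simply the proportion of people in the cross-sectional sample who have the condition at the time of the survey:

```python
# Made-up cross-sectional sample: 1 = has migraine, 0 = does not
sample = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]

prevalence = sum(sample) / len(sample)
print(f"Prevalence: {prevalence:.0%}")  # 30% in this toy sample
```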

Cohort study:

  • We conduct this study by comparing two samples from the population: one sample with a risk factor while the other lacks this risk factor
  • It shows us the risk of developing the disease in individuals with the risk factor compared to those without the risk factor (relative risk, RR)
  • Prospective : we follow the individuals in the future to know who will develop the disease
  • Retrospective : we look to the past to know who developed the disease (e.g. using medical records)
  • This design is the strongest among the observational studies
  • For example – to find out the relative risk of developing chronic obstructive pulmonary disease (COPD) among smokers, we take a sample including smokers and non-smokers. Then, we calculate the number of individuals with COPD among both.
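
Using invented counts for that smoking example, the relative risk is the risk of COPD in the exposed group divided by the risk in the unexposed group:

```python
# Invented cohort counts (COPD cases / group size)
copd_smokers, total_smokers = 40, 200        # exposed group
copd_nonsmokers, total_nonsmokers = 10, 200  # unexposed group

risk_exposed   = copd_smokers / total_smokers        # 0.20
risk_unexposed = copd_nonsmokers / total_nonsmokers  # 0.05

relative_risk = risk_exposed / risk_unexposed
print(f"RR = {relative_risk:.1f}")  # 4.0: smokers had four times the risk in this toy cohort
```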

Case-Control Study:

  • We conduct this study by comparing 2 groups: one group with the disease (cases) and another group without the disease (controls)
  • This design is always retrospective
  • We aim to find out the odds of having a risk factor or an exposure if an individual has a specific disease (odds ratio, OR)
  • Relatively easy to conduct
  • For example – we want to study the odds of being a smoker among hypertensive patients compared to normotensive ones. To do so, we choose a group of patients diagnosed with hypertension and another group that serves as the control (normal blood pressure). Then we study their smoking history to find out if there is a correlation.
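The odds ratio from a case-control study can be sketched the same way; the numbers below are hypothetical and serve only to show the calculation (odds of exposure among cases divided by odds of exposure among controls):

```python
# Hypothetical case-control counts (illustrative only)
cases_exposed, cases_unexposed = 60, 40        # hypertensive patients: smokers / non-smokers
controls_exposed, controls_unexposed = 30, 70  # normotensive controls: smokers / non-smokers

odds_cases = cases_exposed / cases_unexposed           # 1.50
odds_controls = controls_exposed / controls_unexposed  # ~0.43

odds_ratio = odds_cases / odds_controls
print(f"Odds ratio (OR): {odds_ratio:.2f}")            # OR = 3.50 in this sketch
```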

Experimental Studies

  • Also known as interventional studies
  • Can involve animals and humans
  • Pre-clinical trials involve animals
  • Clinical trials are experimental studies involving humans
  • In clinical trials, we study the effect of an intervention compared to another intervention or placebo. As an example, I have listed the four phases of a drug trial:

I: We aim to assess the safety of the drug (is it safe?)

II: We aim to assess the efficacy of the drug (does it work?)

III: We want to know if this drug is better than the old treatment (is it better?)

IV: We follow up to detect long-term side effects (can it stay on the market?)

  • In randomized controlled trials, one group of participants receives the control, while the other receives the tested drug/intervention. Those studies are the best way to evaluate the efficacy of a treatment.
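The core of a randomized controlled trial is the random allocation itself. The sketch below shows simple 1:1 randomization with hypothetical participant IDs; real trials typically use concealed and often blocked or stratified allocation, so treat this purely as an illustration.

```python
import random

# Hypothetical participant IDs (allocation in a real trial would be concealed)
participants = [f"P{i:03d}" for i in range(1, 21)]

random.shuffle(participants)
half = len(participants) // 2
intervention_group = participants[:half]
control_group = participants[half:]

print("Intervention:", intervention_group)
print("Control:     ", control_group)
```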

Finally, the figure below (described here in text) summarizes the different types of study design.

Two broad types of epidemiological studies are descriptive and analytical. Descriptive studies include case reports, case series, and descriptive surveys. Analytical studies are either observational (cross-sectional, case-control, or cohort studies) or experimental (lab trials or field trials).


You may also be interested in the following blogs for further reading:

An introduction to randomized controlled trials

Case-control and cohort studies: a brief overview

Cohort studies: prospective and retrospective designs

Prevalence vs Incidence: what is the difference?




Organizing Your Social Sciences Research Paper

Types of Research Designs

Introduction

Before beginning your paper, you need to decide how you plan to design the study.

The research design refers to the overall strategy and analytical approach that you have chosen in order to integrate, in a coherent and logical way, the different components of the study, thus ensuring that the research problem will be thoroughly investigated. It constitutes the blueprint for the collection, measurement, and interpretation of information and data. Note that the research problem determines the type of design you choose, not the other way around!

De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Trochim, William M.K. Research Methods Knowledge Base. 2006.

General Structure and Writing Style

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem logically and as unambiguously as possible. In social sciences research, obtaining information relevant to the research problem generally entails specifying the type of evidence needed to test the underlying assumptions of a theory, to evaluate a program, or to accurately describe and assess meaning related to an observable phenomenon.

With this in mind, a common mistake made by researchers is that they begin their investigations before they have thought critically about what information is required to address the research problem. Without attending to these design issues beforehand, the overall research problem will not be adequately addressed and any conclusions drawn will run the risk of being weak and unconvincing. As a consequence, the overall validity of the study will be undermined.

The length and complexity of describing the research design in your paper can vary considerably, but any well-developed description will achieve the following:

  • Identify the research problem clearly and justify its selection, particularly in relation to any valid alternative designs that could have been used,
  • Review and synthesize previously published literature associated with the research problem,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem,
  • Effectively describe the information and/or data which will be necessary for an adequate testing of the hypotheses and explain how such information and/or data will be obtained, and
  • Describe the methods of analysis to be applied to the data in determining whether or not the hypotheses are true or false.

The research design is usually incorporated into the introduction of your paper. You can obtain an overall sense of what to do by reviewing studies that have utilized the same research design [e.g., using a case study approach]. This can help you develop an outline to follow for your own paper.

NOTE: Use the SAGE Research Methods Online and Cases and the SAGE Research Methods Videos databases to search for scholarly resources on how to apply specific research designs and methods. The Research Methods Online database contains links to more than 175,000 pages of SAGE book, journal, and reference content on quantitative, qualitative, and mixed research methodologies. Also included is a collection of case studies of social research projects that can be used to help you better understand abstract or complex methodological concepts. The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process.

Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 5th edition. Thousand Oaks, CA: Sage, 2018; De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Leedy, Paul D. and Jeanne Ellis Ormrod. Practical Research: Planning and Design . Tenth edition. Boston, MA: Pearson, 2013; Vogt, W. Paul, Dianna C. Gardner, and Lynne M. Haeffele. When to Use What Research Design . New York: Guilford, 2012.

Action Research Design

Definition and Purpose

The essentials of action research design follow a characteristic cycle whereby initially an exploratory stance is adopted, where an understanding of a problem is developed and plans are made for some form of interventionary strategy. Then the intervention is carried out [the "action" in action research] during which time, pertinent observations are collected in various forms. The new interventional strategies are carried out, and this cyclic process repeats, continuing until a sufficient understanding of [or a valid implementation solution for] the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  • This is a collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research outcomes rather than testing theories.
  • When practitioners use action research, it has the potential to increase the amount they learn consciously from their experience; the action research cycle can be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to improving practice and advocating for change.
  • There are no hidden controls or preemption of direction by the researcher.

What these studies don't tell you

  • It is harder to do than conducting conventional research because the researcher takes on responsibilities of advocating for change as well as for researching the topic.
  • Action research is much harder to write up because it is less likely that you can use a standard format to report your findings effectively [i.e., data is often in the form of stories or observation].
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action [e.g. change] and research [e.g. understanding] is time-consuming and complex to conduct.
  • Advocating for change usually requires buy-in from study participants.

Coghlan, David and Mary Brydon-Miller. The Sage Encyclopedia of Action Research . Thousand Oaks, CA:  Sage, 2014; Efron, Sara Efrat and Ruth Ravid. Action Research in Education: A Practical Guide . New York: Guilford, 2013; Gall, Meredith. Educational Research: An Introduction . Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research . Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; McNiff, Jean. Writing and Doing Action Research . London: Sage, 2014; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice . Thousand Oaks, CA: SAGE, 2001.

Case Study Design

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey or comprehensive comparative inquiry. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real world. It is a useful design when not much is known about an issue or phenomenon.

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and the extension of methodologies.
  • The design can provide detailed descriptions of specific and rare cases.
  • A single or small number of cases offers little basis for establishing reliability or to generalize the findings to a wider population of people, places, or things.
  • Intense exposure to the study of a case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem for study, then your interpretation of the findings can only apply to that particular case.

Case Studies. Writing@CSU. Colorado State University; Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Gerring, John. “What Is a Case Study and What Is It Good for?” American Political Science Review 98 (May 2004): 341-354; Greenhalgh, Trisha, editor. Case Study Evaluation: Past, Present and Future Challenges . Bingley, UK: Emerald Group Publishing, 2015; Mills, Albert J., Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research . Thousand Oaks, CA: SAGE Publications, 2010; Stake, Robert E. The Art of Case Study Research . Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Methods . Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causal Design

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association -- a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order -- to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness -- a relationship between two variables that is not due to variation in a third variable.
  • Causality research designs assist researchers in understanding why the world works the way it does through the process of proving a causal link between variables and by the process of eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.
  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven; the sketch after this list illustrates how a confounding variable can produce a spurious association.
  • Even if two variables are correlated, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and, therefore, to establish which variable is the actual cause and which is the actual effect.
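The following minimal sketch, using simulated (not real) data, shows how a confounding variable can make two otherwise unrelated variables appear strongly associated, which is why nonspuriousness has to be established before inferring causation.

```python
import random

random.seed(42)

# Z is a confounder that drives both X and Y; X does not cause Y and vice versa
z = [random.gauss(0, 1) for _ in range(1000)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

def corr(a, b):
    """Pearson correlation coefficient."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((ai - mean_a) * (bi - mean_b) for ai, bi in zip(a, b)) / n
    var_a = sum((ai - mean_a) ** 2 for ai in a) / n
    var_b = sum((bi - mean_b) ** 2 for bi in b) / n
    return cov / (var_a * var_b) ** 0.5

# High correlation despite no causal link between X and Y
print(f"Correlation between X and Y: {corr(x, y):.2f}")
```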

Beach, Derek and Rasmus Brun Pedersen. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing . Ann Arbor, MI: University of Michigan Press, 2016; Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice . Chapter 5, Causation and Research Designs. 3rd ed. Thousand Oaks, CA: Pine Forge Press, 2007; Brewer, Ernest W. and Jennifer Kubn. “Causal-Comparative Design.” In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 125-132; Causal Research Design: Experimentation. Anonymous SlideShare Presentation; Gall, Meredith. Educational Research: An Introduction . Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base. 2006.

Cohort Design

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population from which the subjects are drawn, and who are united by some commonality or similarity. Using a quantitative framework, a cohort study makes note of statistical occurrence within a specialized subgroup, united by same or similar characteristics that are relevant to the research problem being investigated, rather than studying statistical occurrence within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined simply by being part of the study in question (and being monitored for the outcome). Dates of entry into and exit from the study are individually defined, so the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof; a small worked sketch follows this list.
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
  • The use of cohorts is often mandatory because a randomized controlled study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors often relies upon cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Due to the lack of randomization in the cohort design, its external validity is lower than that of study designs where the researcher randomly assigns participants.
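For open cohorts, where entry and exit dates vary, rate-based measures are calculated against person-time rather than a fixed denominator. A minimal sketch with hypothetical follow-up records:

```python
# Hypothetical open-cohort records: (years of follow-up, developed the outcome?)
follow_up = [(2.0, False), (3.5, True), (1.0, False), (4.0, True), (2.5, False)]

person_years = sum(years for years, _ in follow_up)
events = sum(1 for _, outcome in follow_up if outcome)

incidence_rate = events / person_years
print(f"Incidence rate: {incidence_rate:.2f} cases per person-year")  # 0.15 here
```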

Healy P, Devane D. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Glenn, Norval D, editor. Cohort Analysis . 2nd edition. Thousand Oaks, CA: Sage, 2005; Levin, Kate Ann. Study Design IV: Cohort Studies. Evidence-Based Dentistry 7 (2003): 51–52; Payne, Geoff. “Cohort Study.” In The SAGE Dictionary of Social Research Methods . Victor Jupp, editor. (Thousand Oaks, CA: Sage, 2006), pp. 31-33; Study Design 101. Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study. Wikipedia.

Cross-Sectional Design

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and, groups are selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or from among a variety of people, subjects, or phenomena rather than a process of change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

  • Cross-sectional studies provide a clear 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike an experimental design, where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies are capable of using data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population.
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical or temporal contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • This design only provides a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow up to the findings.

Bethlehem, Jelke. "7: Cross-sectional Research." In Research Methodology in the Social, Behavioural and Life Sciences . Herman J Adèr and Gideon J Mellenbergh, editors. (London, England: Sage, 1999), pp. 110-43; Bourque, Linda B. “Cross-Sectional Design.” In  The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao. (Thousand Oaks, CA: 2004), pp. 230-231; Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Helen Barratt, Maria Kirwan. Cross-Sectional Studies: Design Application, Strengths and Weaknesses of Cross-Sectional Studies. Healthknowledge, 2009. Cross-Sectional Study. Wikipedia.

Descriptive Design

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

  • The subject is being observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject [a.k.a., the Heisenberg effect whereby measurements of certain systems cannot be made without affecting the systems].
  • Descriptive research is often used as a pre-cursor to more quantitative research designs with the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, they can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations in practice.
  • Approach collects a large amount of data for detailed analysis.
  • The results from descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999; Given, Lisa M. "Descriptive Research." In Encyclopedia of Measurement and Statistics . Neil J. Salkind and Kristin Rasmussen, editors. (Thousand Oaks, CA: Sage, 2007), pp. 251-254; McNabb, Connie. Descriptive Research Methodologies. Powerpoint Presentation; Shuttleworth, Martyn. Descriptive Research Design, September 26, 2008; Erickson, G. Scott. "Descriptive Research Design." In New Methods of Market Research and Analysis . (Northampton, MA: Edward Elgar Publishing, 2017), pp. 51-77; Sahin, Sagufta, and Jayanta Mete. "A Brief Study on Descriptive Research: Its Nature and Application in Social Science." International Journal of Research and Analysis in Humanities 1 (2021): 11; K. Swatzell and P. Jennings. “Descriptive Research: The Nuts and Bolts.” Journal of the American Academy of Physician Assistants 20 (2007), pp. 55-56; Kane, E. Doing Your Own Research: Basic Descriptive Research in the Social Sciences and Humanities . London: Marion Boyars, 1985.

Experimental Design

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “What causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter the behaviors or responses of participants.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed studies.
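As a minimal illustration of the classic two-group comparison described above, the sketch below estimates a treatment effect as the difference between group means; the outcome scores are hypothetical, and a real analysis would also include a significance test appropriate to the randomization design.

```python
# Hypothetical post-intervention outcome scores (illustrative only)
treatment = [12.1, 14.3, 13.8, 15.0, 12.9, 14.6]
control = [11.0, 10.4, 12.2, 11.5, 10.9, 11.8]

mean_treatment = sum(treatment) / len(treatment)
mean_control = sum(control) / len(control)

# With random assignment, the difference in means estimates the treatment effect
effect_estimate = mean_treatment - mean_control
print(f"Estimated treatment effect: {effect_estimate:.2f} points")
```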

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs. School of Psychology, University of New England, 2000; Chow, Siu L. "Experimental Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 448-453; "Experimental Design." In Social Research Methods . Nicholas Walliman, editor. (London, England: Sage, 2006), pp, 101-110; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Kirk, Roger E. Experimental Design: Procedures for the Behavioral Sciences . 4th edition. Thousand Oaks, CA: Sage, 2013; Trochim, William M.K. Experimental Design. Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research. Slideshare presentation.

Exploratory Design

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome. The focus is on gaining insights and familiarity for later investigation, or it is undertaken when research problems are in a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue.

The goals of exploratory research are intended to produce the following possible insights:

  • Familiarity with basic details, settings, and concerns.
  • Well grounded picture of the situation being developed.
  • Generation of new ideas and assumptions.
  • Development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • In the policy arena or applied to practice, exploratory studies help establish research priorities and where resources should be allocated.
  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits an ability to make definitive conclusions about the findings; exploratory studies provide insight but not definitive answers.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value to decision-makers.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Streb, Christoph K. "Exploratory Case Study." In Encyclopedia of Case Study Research . Albert J. Mills, Gabrielle Durepos and Eiden Wiebe, editors. (Thousand Oaks, CA: Sage, 2010), pp. 372-374; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research. Wikipedia.

Field Research Design

Sometimes referred to as ethnography or participant observation, designs around field research encompass a variety of interpretative procedures [e.g., observation and interviews] rooted in qualitative approaches to studying people individually or in groups while inhabiting their natural environment as opposed to using survey instruments or other forms of impersonal methods of data gathering. Information acquired from observational research takes the form of “field notes” that involve documenting what the researcher actually sees and hears while in the field. Findings do not consist of conclusive statements derived from numbers and statistics because field research involves analysis of words and observations of behavior. Conclusions, therefore, are developed from an interpretation of findings that reveal overriding themes, concepts, and ideas.

  • Field research is often necessary to fill gaps in understanding the research problem applied to local conditions or to specific groups of people that cannot be ascertained from existing data.
  • The research helps contextualize already known information about a research problem, thereby facilitating ways to assess the origins, scope, and scale of a problem and to gauge the causes, consequences, and means to resolve an issue based on deliberate interaction with people in their natural inhabited spaces.
  • Enables the researcher to corroborate or confirm data by gathering additional information that supports or refutes findings reported in prior studies of the topic.
  • Because the researcher is embedded in the field, they are better able to make observations or ask questions that reflect the specific cultural context of the setting being investigated.
  • Observing the local reality offers the opportunity to gain new perspectives or obtain unique data that challenges existing theoretical propositions or long-standing assumptions found in the literature.

What these studies don't tell you

  • A field research study requires extensive time and resources to carry out the multiple steps involved with preparing for the gathering of information, including for example, examining background information about the study site, obtaining permission to access the study site, and building trust and rapport with subjects.
  • Requires a commitment to staying engaged in the field to ensure that you can adequately document events and behaviors as they unfold.
  • The unpredictable nature of fieldwork means that researchers can never fully control the process of data gathering. They must maintain a flexible approach to studying the setting because events and circumstances can change quickly or unexpectedly.
  • Findings can be difficult to interpret and verify without access to documents and other source materials that help to enhance the credibility of information obtained from the field  [i.e., the act of triangulating the data].
  • Linking the research problem to the selection of study participants inhabiting their natural environment is critical. However, this specificity limits the ability to generalize findings to different situations or in other contexts or to infer courses of action applied to other settings or groups of people.
  • The reporting of findings must take into account how the researcher themselves may have inadvertently affected respondents and their behaviors.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute a hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as, diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is often no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research are directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access. This may be especially challenging for digital or online-only sources.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation, therefore, gaps need to be acknowledged.

Howell, Martha C. and Walter Prevenier. From Reliable Sources: An Introduction to Historical Methods . Ithaca, NY: Cornell University Press, 2001; Lundy, Karen Saucier. "Historical Research." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 396-400; Marius, Richard. and Melvin E. Page. A Short Guide to Writing about History . 9th edition. Boston, MA: Pearson, 2015; Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58;  Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. For example, with longitudinal surveys, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study sometimes referred to as a panel study.

  • Longitudinal data facilitate the analysis of the duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research data to explain fluctuations in the results.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • There is a need to have a large sample size and accurate sampling to reach representativeness.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Forgues, Bernard, and Isabelle Vandangeon-Derumez. "Longitudinal Analyses." In Doing Management Research . Raymond-Alain Thiétart and Samantha Wauchope, editors. (London, England: Sage, 2001), pp. 332-351; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Menard, Scott, editor. Longitudinal Research . Thousand Oaks, CA: Sage, 2002; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study. Wikipedia.

Meta-Analysis Design

Meta-analysis is an analytical methodology designed to systematically evaluate and summarize the results from a number of individual studies, thereby increasing the overall sample size and the ability of the researcher to study effects of interest. The purpose is not simply to summarize existing knowledge, but to develop a new understanding of a research problem using synoptic reasoning. The main objectives of meta-analysis include analyzing differences in the results among studies and increasing the precision by which effects are estimated. A well-designed meta-analysis depends upon strict adherence to the criteria used for selecting studies and the availability of information in each study to properly analyze their findings. Lack of information can severely limit the type of analyses and conclusions that can be reached. In addition, the more dissimilarity there is in the results among individual studies [heterogeneity], the more difficult it is to justify interpretations that govern a valid synopsis of results. A meta-analysis needs to fulfill the following requirements to ensure the validity of your findings:

  • Clearly defined description of objectives, including precise definitions of the variables and outcomes that are being evaluated;
  • A well-reasoned and well-documented justification for identification and selection of the studies;
  • Assessment and explicit acknowledgment of any researcher bias in the identification and selection of those studies;
  • Description and evaluation of the degree of heterogeneity among the sample size of studies reviewed; and,
  • Justification of the techniques used to evaluate the studies.
  • Can be an effective strategy for determining gaps in the literature.
  • Provides a means of reviewing research published about a particular topic over an extended period of time and from a variety of sources.
  • Is useful in clarifying what policy or programmatic actions can be justified on the basis of analyzing research results from multiple studies.
  • Provides a method for overcoming small sample sizes in individual studies that previously may have had little relationship to each other.
  • Can be used to generate new hypotheses or highlight research problems for future studies.
  • Small violations in defining the criteria used for content analysis can lead to difficult to interpret and/or meaningless findings.
  • A large sample size can yield reliable, but not necessarily valid, results.
  • A lack of uniformity regarding, for example, the type of literature reviewed, how methods are applied, and how findings are measured within the sample of studies you are analyzing, can make the process of synthesis difficult to perform.
  • Depending on the sample size, the process of reviewing and synthesizing multiple studies can be very time consuming.
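To show what "increasing the precision by which effects are estimated" looks like in practice, here is a minimal fixed-effect (inverse-variance) pooling sketch with hypothetical study results; a full meta-analysis would also assess heterogeneity and often fit a random-effects model.

```python
# Hypothetical study effect sizes (e.g., log risk ratios) and their variances
studies = [
    ("Study A", 0.40, 0.04),
    ("Study B", 0.25, 0.02),
    ("Study C", 0.10, 0.05),
]

# Fixed-effect pooling: weight each study by the inverse of its variance
weights = [1 / var for _, _, var in studies]
pooled = sum(w * eff for (_, eff, _), w in zip(studies, weights)) / sum(weights)

print(f"Pooled effect estimate: {pooled:.3f}")  # ~0.258 with these made-up values
```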

Beck, Lewis W. "The Synoptic Method." The Journal of Philosophy 36 (1939): 337-345; Cooper, Harris, Larry V. Hedges, and Jeffrey C. Valentine, eds. The Handbook of Research Synthesis and Meta-Analysis . 2nd edition. New York: Russell Sage Foundation, 2009; Guzzo, Richard A., Susan E. Jackson and Raymond A. Katzell. “Meta-Analysis Analysis.” In Research in Organizational Behavior , Volume 9. (Greenwich, CT: JAI Press, 1987), pp 407-442; Lipsey, Mark W. and David B. Wilson. Practical Meta-Analysis . Thousand Oaks, CA: Sage Publications, 2001; Study Design 101. Meta-Analysis. The Himmelfarb Health Sciences Library, George Washington University; Timulak, Ladislav. “Qualitative Meta-Analysis.” In The SAGE Handbook of Qualitative Data Analysis . Uwe Flick, editor. (Los Angeles, CA: Sage, 2013), pp. 481-495; Walker, Esteban, Adrian V. Hernandez, and Michael W. Kattan. "Meta-Analysis: Its Strengths and Limitations." Cleveland Clinic Journal of Medicine 75 (June 2008): 431-439.

Mixed-Method Design

  • Narrative and non-textual information can add meaning to numeric data, while numeric data can add precision to narrative and non-textual information.
  • Can utilize existing data while at the same time generating and testing a grounded theory approach to describe and explain the phenomenon under study.
  • A broader, more complex research problem can be investigated because the researcher is not constrained by using only one method.
  • The strengths of one method can be used to overcome the inherent weaknesses of another method.
  • Can provide stronger, more robust evidence to support a conclusion or set of recommendations.
  • May generate new insights or uncover hidden patterns and relationships that a single methodological approach might not reveal.
  • Produces more complete knowledge and understanding of the research problem that can be used to increase the generalizability of findings applied to theory or practice.
  • A researcher must be proficient in understanding how to apply multiple methods to investigating a research problem as well as be proficient in optimizing how to design a study that coherently melds them together.
  • Can increase the likelihood of conflicting results or ambiguous findings that inhibit drawing a valid conclusion or setting forth a recommended course of action [e.g., sample interview responses do not support existing statistical data].
  • Because the research design can be very complex, reporting the findings requires a well-organized narrative, clear writing style, and precise word choice.
  • Design invites collaboration among experts. However, merging different investigative approaches and writing styles requires more attention to the overall research process than studies conducted using only one methodological paradigm.
  • Concurrent merging of quantitative and qualitative research requires greater attention to having adequate sample sizes, using comparable samples, and applying a consistent unit of analysis. For sequential designs where one phase of qualitative research builds on the quantitative phase or vice versa, decisions about what results from the first phase to use in the next phase, the choice of samples and estimating reasonable sample sizes for both phases, and the interpretation of results from both phases can be difficult.
  • Due to multiple forms of data being collected and analyzed, this design requires extensive time and resources to carry out the multiple steps involved in data gathering and interpretation.

Burch, Patricia and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation . Thousand Oaks, CA: Sage, 2016; Creswell, John W. et al. Best Practices for Mixed Methods Research in the Health Sciences . Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health, 2010; Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 4th edition. Thousand Oaks, CA: Sage Publications, 2014; Domínguez, Silvia, editor. Mixed Methods Social Networks Research . Cambridge, UK: Cambridge University Press, 2014; Hesse-Biber, Sharlene Nagy. Mixed Methods Research: Merging Theory with Practice . New York: Guilford Press, 2010; Niglas, Katrin. “How the Novice Researcher Can Make Sense of Mixed Methods Designs.” International Journal of Multiple Research Approaches 3 (2009): 34-46; Onwuegbuzie, Anthony J. and Nancy L. Leech. “Linking Research Questions to Mixed Methods Data Analysis Procedures.” The Qualitative Report 11 (September 2006): 474-498; Tashakorri, Abbas and John W. Creswell. “The New Era of Mixed Methods.” Journal of Mixed Methods Research 1 (January 2007): 3-7; Zhanga, Wanqing. “Mixed Methods Application in Health Intervention Research: A Multiple Case Study.” International Journal of Multiple Research Approaches 8 (2014): 24-35.

Observational Design

This type of research design draws a conclusion by comparing subjects against a control group, in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows a useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe [data is emergent rather than pre-existing].
  • The researcher is able to collect in-depth information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observation research designs account for the complexity of group behaviors.
  • Reliability of data is low because observing behaviors over and over again is a time-consuming task, and such observations are difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is knowingly studied is altered to some degree by the presence of the researcher, therefore, potentially skewing any data collected.

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research . Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods . Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Payne, Geoff and Judy Payne. "Observation." In Key Concepts in Social Research . The SAGE Key Concepts series. (London, England: Sage, 2004), pp. 158-162; Rosenbaum, Paul R. Design of Observational Studies . New York: Springer, 2010; Williams, J. Patrick. "Nonparticipant Observation." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 562-563.

Philosophical Design

Understood more as a broad approach to examining a research problem than a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, on what do knowledge and understanding depend, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refine concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Burton, Dawn. "Part I, Philosophy of the Social Sciences." In Research Training for Social Scientists . (London, England: Sage, 2000), pp. 1-5; Chapter 4, Research Methodology and Design. Unisa Institutional Repository (UnisaIR), University of South Africa; Jarvie, Ian C., and Jesús Zamora-Bonilla, editors. The SAGE Handbook of the Philosophy of Social Sciences . London: Sage, 2011; Labaree, Robert V. and Ross Scimeca. “The Philosophical Problem of Truth in Librarianship.” The Library Quarterly 78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide . Washington, DC: Falmer Press, 1994; McLaughlin, Hugh. "The Philosophy of Social Research." In Understanding Social Work Research . 2nd edition. (London: SAGE Publications Ltd., 2012), pp. 24-47; Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, CSLI, Stanford University, 2013.

Sequential Design

  • The researcher has limitless options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be done during the initial parts of the study to correct and hone the research method.
  • This is a useful design for exploratory studies.
  • There is very little effort on the part of the researcher when performing this technique. It is generally not expensive, time consuming, or workforce intensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed. This provides opportunities for continuous improvement of sampling and methods of analysis.
  • The sampling method is not representative of the entire population. The only way to approach representativeness is to use a sample large enough to represent a significant portion of the entire population. In this case, moving on to study a second or more specific sample can be difficult.
  • The design cannot be used to create conclusions and interpretations that pertain to an entire population because the sampling technique is not randomized. Generalizability from findings is, therefore, limited.
  • Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.

Betensky, Rebecca. Harvard University, Course Lecture Note slides; Bovaird, James A. and Kevin A. Kupzyk. "Sequential Design." In Encyclopedia of Research Design. Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 1347-1352; Creswell, John W. et al. “Advanced Mixed-Methods Research Designs.” In Handbook of Mixed Methods in Social and Behavioral Research. Abbas Tashakkori and Charles Teddle, eds. (Thousand Oaks, CA: Sage, 2003), pp. 209-240; Henry, Gary T. "Sequential Sampling." In The SAGE Encyclopedia of Social Science Research Methods. Michael S. Lewis-Beck, Alan Bryman and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 1027-1028; Nataliya V. Ivankova. “Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice.” Field Methods 18 (February 2006): 3-20; Sequential Analysis. Wikipedia.

Systematic Review

  • A systematic review synthesizes the findings of multiple studies related to each other by incorporating strategies of analysis and interpretation intended to reduce biases and random errors.
  • The application of critical exploration, evaluation, and synthesis methods separates insignificant, unsound, or redundant research from the most salient and relevant studies worthy of reflection.
  • They can be used to identify, justify, and refine hypotheses, recognize and avoid hidden problems in prior studies, and explain data inconsistencies and conflicts in data.
  • Systematic reviews can be used to help policy makers formulate evidence-based guidelines and regulations.
  • The use of strict, explicit, and pre-determined methods of synthesis, when applied appropriately, provides reliable estimates about the effects of interventions, evaluations, and effects related to the overarching research problem investigated by each study under review.
  • Systematic reviews illuminate where knowledge or thorough understanding of a research problem is lacking and, therefore, can then be used to guide future research.
  • The accepted inclusion of unpublished studies [i.e., grey literature] ensures the broadest possible way to analyze and interpret research on a topic.
  • Results of the synthesis can be generalized and the findings extrapolated into the general population with more validity than most other types of studies.
  • Systematic reviews do not create new knowledge per se; they are a method for synthesizing existing studies about a research problem in order to gain new insights and determine gaps in the literature.
  • The way researchers have carried out their investigations [e.g., the period of time covered, number of participants, sources of data analyzed, etc.] can make it difficult to effectively synthesize studies.
  • The inclusion of unpublished studies can introduce bias into the review because they may not have undergone a rigorous peer-review process prior to publication. Examples may include conference presentations or proceedings, publications from government agencies, white papers, working papers, and internal documents from organizations, and doctoral dissertations and Master's theses.

Denyer, David and David Tranfield. "Producing a Systematic Review." In The Sage Handbook of Organizational Research Methods .  David A. Buchanan and Alan Bryman, editors. ( Thousand Oaks, CA: Sage Publications, 2009), pp. 671-689; Foster, Margaret J. and Sarah T. Jewell, editors. Assembling the Pieces of a Systematic Review: A Guide for Librarians . Lanham, MD: Rowman and Littlefield, 2017; Gough, David, Sandy Oliver, James Thomas, editors. Introduction to Systematic Reviews . 2nd edition. Los Angeles, CA: Sage Publications, 2017; Gopalakrishnan, S. and P. Ganeshkumar. “Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare.” Journal of Family Medicine and Primary Care 2 (2013): 9-14; Gough, David, James Thomas, and Sandy Oliver. "Clarifying Differences between Review Designs and Methods." Systematic Reviews 1 (2012): 1-9; Khan, Khalid S., Regina Kunz, Jos Kleijnen, and Gerd Antes. “Five Steps to Conducting a Systematic Review.” Journal of the Royal Society of Medicine 96 (2003): 118-121; Mulrow, C. D. “Systematic Reviews: Rationale for Systematic Reviews.” BMJ 309:597 (September 1994); O'Dwyer, Linda C., and Q. Eileen Wafford. "Addressing Challenges with Systematic Review Teams through Effective Communication: A Case Report." Journal of the Medical Library Association 109 (October 2021): 643-647; Okoli, Chitu, and Kira Schabram. "A Guide to Conducting a Systematic Literature Review of Information Systems Research."  Sprouts: Working Papers on Information Systems 10 (2010); Siddaway, Andy P., Alex M. Wood, and Larry V. Hedges. "How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-analyses, and Meta-syntheses." Annual Review of Psychology 70 (2019): 747-770; Torgerson, Carole J. “Publication Bias: The Achilles’ Heel of Systematic Reviews?” British Journal of Educational Studies 54 (March 2006): 89-102; Torgerson, Carole. Systematic Reviews . New York: Continuum, 2003.


Research Design – Types, Methods and Examples


Research Design

Definition:

Research design refers to the overall strategy or plan for conducting a research study. It outlines the methods and procedures that will be used to collect and analyze data, as well as the goals and objectives of the study. Research design is important because it guides the entire research process and ensures that the study is conducted in a systematic and rigorous manner.

Types of Research Design

Types of Research Design are as follows:

Descriptive Research Design

This type of research design is used to describe a phenomenon or situation. It involves collecting data through surveys, questionnaires, interviews, and observations. The aim of descriptive research is to provide an accurate and detailed portrayal of a particular group, event, or situation. It can be useful in identifying patterns, trends, and relationships in the data.

Correlational Research Design

Correlational research design is used to determine if there is a relationship between two or more variables. This type of research design involves collecting data from participants and analyzing the relationship between the variables using statistical methods. The aim of correlational research is to identify the strength and direction of the relationship between the variables.
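For instance, the strength and direction of the relationship between two numeric variables is often summarized with a correlation coefficient. Below is a minimal sketch in Python using SciPy; the variables (study hours and exam scores) and their values are hypothetical and purely illustrative.

```python
# Minimal sketch of a correlational analysis (hypothetical data).
from scipy import stats

study_hours = [1.0, 2.5, 3.0, 4.5, 5.0, 6.5, 7.0, 8.0]   # hypothetical variable 1
exam_scores = [52, 58, 61, 70, 68, 80, 83, 88]            # hypothetical variable 2

r, p_value = stats.pearsonr(study_hours, exam_scores)

# r ranges from -1 to +1: its sign gives the direction of the relationship and
# its magnitude the strength; p_value tests the null hypothesis of no correlation.
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
```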

Experimental Research Design

Experimental research design is used to investigate cause-and-effect relationships between variables. This type of research design involves manipulating one variable and measuring the effect on another variable. It usually involves randomly assigning participants to groups and manipulating an independent variable to determine its effect on a dependent variable. The aim of experimental research is to establish causality.
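Random assignment itself is straightforward to implement, for example by shuffling the participant list and splitting it into groups. Below is a minimal sketch in Python; the participant IDs, group sizes, and seed are hypothetical.

```python
# Minimal sketch of simple random assignment to two groups (hypothetical IDs).
import numpy as np

participants = [f"P{i:03d}" for i in range(1, 41)]   # 40 hypothetical participants
rng = np.random.default_rng(seed=42)                 # fixed seed for reproducibility

shuffled = rng.permutation(participants)
treatment_group = list(shuffled[:20])                # first half receives the treatment
control_group = list(shuffled[20:])                  # second half serves as the control

print("Treatment group:", treatment_group[:5], "...")
print("Control group:  ", control_group[:5], "...")
```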

Quasi-experimental Research Design

Quasi-experimental research design is similar to experimental research design, but it lacks one or more of the features of a true experiment. For example, there may not be random assignment to groups or a control group. This type of research design is used when it is not feasible or ethical to conduct a true experiment.

Case Study Research Design

Case study research design is used to investigate a single case or a small number of cases in depth. It involves collecting data through various methods, such as interviews, observations, and document analysis. The aim of case study research is to provide an in-depth understanding of a particular case or situation.

Longitudinal Research Design

Longitudinal research design is used to study changes in a particular phenomenon over time. It involves collecting data at multiple time points and analyzing the changes that occur. The aim of longitudinal research is to provide insights into the development, growth, or decline of a particular phenomenon over time.

Structure of Research Design

The format of a research design typically includes the following sections:

  • Introduction : This section provides an overview of the research problem, the research questions, and the importance of the study. It also includes a brief literature review that summarizes previous research on the topic and identifies gaps in the existing knowledge.
  • Research Questions or Hypotheses: This section identifies the specific research questions or hypotheses that the study will address. These questions should be clear, specific, and testable.
  • Research Methods : This section describes the methods that will be used to collect and analyze data. It includes details about the study design, the sampling strategy, the data collection instruments, and the data analysis techniques.
  • Data Collection: This section describes how the data will be collected, including the sample size, data collection procedures, and any ethical considerations.
  • Data Analysis: This section describes how the data will be analyzed, including the statistical techniques that will be used to test the research questions or hypotheses.
  • Results : This section presents the findings of the study, including descriptive statistics and statistical tests.
  • Discussion and Conclusion : This section summarizes the key findings of the study, interprets the results, and discusses the implications of the findings. It also includes recommendations for future research.
  • References : This section lists the sources cited in the research design.

Example of Research Design

An example of a research design could be:

Research question: Does the use of social media affect the academic performance of high school students?

Research design:

  • Research approach : The research approach will be quantitative as it involves collecting numerical data to test the hypothesis.
  • Research design : The research design will be a quasi-experimental design, with a pretest-posttest control group design.
  • Sample : The sample will be 200 high school students from two schools, with 100 students in the experimental group and 100 students in the control group.
  • Data collection : The data will be collected through surveys administered to the students at the beginning and end of the academic year. The surveys will include questions about their social media usage and academic performance.
  • Data analysis : The data collected will be analyzed using statistical software. The mean scores of the experimental and control groups will be compared to determine whether there is a significant difference in academic performance between the two groups (a minimal sketch of this comparison appears after this list).
  • Limitations : The limitations of the study will be acknowledged, including the fact that social media usage can vary greatly among individuals, and the study only focuses on two schools, which may not be representative of the entire population.
  • Ethical considerations: Ethical considerations will be taken into account, such as obtaining informed consent from the participants and ensuring their anonymity and confidentiality.
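As a minimal illustration of the data-analysis step above, the mean scores of the two groups could be compared with an independent-samples t-test. The sketch below uses SciPy; the score values are hypothetical stand-ins for the survey-based performance measure.

```python
# Minimal sketch: comparing mean academic-performance scores of two groups
# (hypothetical values standing in for the survey results).
from scipy import stats

experimental = [72, 68, 75, 70, 66, 74, 69, 71, 73, 67]  # hypothetical scores, group 1
control      = [78, 74, 80, 76, 73, 79, 75, 77, 81, 72]  # hypothetical scores, group 2

t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)

# A small p-value (e.g., below 0.05) would suggest a statistically significant
# difference in mean academic performance between the two groups.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```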

How to Write Research Design

Writing a research design involves planning and outlining the methodology and approach that will be used to answer a research question or hypothesis. Here are some steps to help you write a research design:

  • Define the research question or hypothesis : Before beginning your research design, you should clearly define your research question or hypothesis. This will guide your research design and help you select appropriate methods.
  • Select a research design: There are many different research designs to choose from, including experimental, survey, case study, and qualitative designs. Choose a design that best fits your research question and objectives.
  • Develop a sampling plan : If your research involves collecting data from a sample, you will need to develop a sampling plan. This should outline how you will select participants and how many participants you will include.
  • Define variables: Clearly define the variables you will be measuring or manipulating in your study. This will help ensure that your results are meaningful and relevant to your research question.
  • Choose data collection methods : Decide on the data collection methods you will use to gather information. This may include surveys, interviews, observations, experiments, or secondary data sources.
  • Create a data analysis plan: Develop a plan for analyzing your data, including the statistical or qualitative techniques you will use.
  • Consider ethical concerns : Finally, be sure to consider any ethical concerns related to your research, such as participant confidentiality or potential harm.

When to Write Research Design

Research design should be written before conducting any research study. It is an important planning phase that outlines the research methodology, data collection methods, and data analysis techniques that will be used to investigate a research question or problem. The research design helps to ensure that the research is conducted in a systematic and logical manner, and that the data collected is relevant and reliable.

Ideally, the research design should be developed as early as possible in the research process, before any data is collected. This allows the researcher to carefully consider the research question, identify the most appropriate research methodology, and plan the data collection and analysis procedures in advance. By doing so, the research can be conducted in a more efficient and effective manner, and the results are more likely to be valid and reliable.

Purpose of Research Design

The purpose of research design is to plan and structure a research study in a way that enables the researcher to achieve the desired research goals with accuracy, validity, and reliability. Research design is the blueprint or the framework for conducting a study that outlines the methods, procedures, techniques, and tools for data collection and analysis.

Some of the key purposes of research design include:

  • Providing a clear and concise plan of action for the research study.
  • Ensuring that the research is conducted ethically and with rigor.
  • Maximizing the accuracy and reliability of the research findings.
  • Minimizing the possibility of errors, biases, or confounding variables.
  • Ensuring that the research is feasible, practical, and cost-effective.
  • Determining the appropriate research methodology to answer the research question(s).
  • Identifying the sample size, sampling method, and data collection techniques.
  • Determining the data analysis method and statistical tests to be used.
  • Facilitating the replication of the study by other researchers.
  • Enhancing the validity and generalizability of the research findings.

Applications of Research Design

There are numerous applications of research design in various fields, some of which are:

  • Social sciences: In fields such as psychology, sociology, and anthropology, research design is used to investigate human behavior and social phenomena. Researchers use various research designs, such as experimental, quasi-experimental, and correlational designs, to study different aspects of social behavior.
  • Education : Research design is essential in the field of education to investigate the effectiveness of different teaching methods and learning strategies. Researchers use various designs such as experimental, quasi-experimental, and case study designs to understand how students learn and how to improve teaching practices.
  • Health sciences : In the health sciences, research design is used to investigate the causes, prevention, and treatment of diseases. Researchers use various designs, such as randomized controlled trials, cohort studies, and case-control studies, to study different aspects of health and healthcare.
  • Business : Research design is used in the field of business to investigate consumer behavior, marketing strategies, and the impact of different business practices. Researchers use various designs, such as survey research, experimental research, and case studies, to study different aspects of the business world.
  • Engineering : In the field of engineering, research design is used to investigate the development and implementation of new technologies. Researchers use various designs, such as experimental research and case studies, to study the effectiveness of new technologies and to identify areas for improvement.

Advantages of Research Design

Here are some advantages of research design:

  • Systematic and organized approach : A well-designed research plan ensures that the research is conducted in a systematic and organized manner, which makes it easier to manage and analyze the data.
  • Clear objectives: The research design helps to clarify the objectives of the study, which makes it easier to identify the variables that need to be measured, and the methods that need to be used to collect and analyze data.
  • Minimizes bias: A well-designed research plan minimizes the chances of bias, by ensuring that the data is collected and analyzed objectively, and that the results are not influenced by the researcher’s personal biases or preferences.
  • Efficient use of resources: A well-designed research plan helps to ensure that the resources (time, money, and personnel) are used efficiently and effectively, by focusing on the most important variables and methods.
  • Replicability: A well-designed research plan makes it easier for other researchers to replicate the study, which enhances the credibility and reliability of the findings.
  • Validity: A well-designed research plan helps to ensure that the findings are valid, by ensuring that the methods used to collect and analyze data are appropriate for the research question.
  • Generalizability : A well-designed research plan helps to ensure that the findings can be generalized to other populations, settings, or situations, which increases the external validity of the study.


How to Write a Research Design – Guide with Examples

Published by Alaxendra Bets on August 14th, 2021; revised on October 3, 2023

A research design is a structure that combines different components of research. It involves the use of different data collection and data analysis techniques logically to answer the  research questions .

Before starting the research process, it is best to decide how you will address the research questions; the research design captures these decisions.

Below are the key aspects of the decision-making process:

  • Data type required for research
  • Research resources
  • Participants required for research
  • Hypothesis based upon research question(s)
  • Data analysis  methodologies
  • Variables (Independent, dependent, and confounding)
  • The location and timescale for collecting the data
  • The time period required for research

The research design provides the strategy of investigation for your project. Furthermore, it defines the parameters and criteria to compile the data to evaluate results and conclude.

Your project’s validity depends on the data collection and  interpretation techniques.  A strong research design reflects a strong  dissertation , scientific paper, or research proposal .

Steps of research design

Step 1: Establish Priorities for Research Design

Before conducting any research study, you must address an important question: “how to create a research design.”

The research design depends on the researcher’s priorities and choices because every research has different priorities. For a complex research study involving multiple methods, you may choose to have more than one research design.

Multimethodology or multimethod research involves using more than one data collection method or research design in a research study or set of related studies.

If one research design is weak in one area, then another research design can cover that weakness. For instance, a  dissertation analyzing different situations or cases will have more than one research design.

For example:

  • Experimental research involves experimental investigation and laboratory experience, but it does not accurately investigate the real world.
  • Quantitative research is good for the  statistical part of the project, but it may not provide an in-depth understanding of the  topic .
  • Also, correlational research will not provide experimental results because it is a technique that assesses the statistical relationship between two variables.

While scientific considerations are a fundamental aspect of the research design, it is equally important that the researcher think practically before deciding on its structure. Here are some questions that you should think about:

  • Do you have enough time to gather data and complete the write-up?
  • Will you be able to collect the necessary data by interviewing a specific person or visiting a specific location?
  • Do you have in-depth knowledge about the  different statistical analysis and data collection techniques to address the research questions  or test the  hypothesis ?

If you think that the chosen research design cannot answer the research questions properly, you can refine your research questions to gain better insight.

Step 2: Data Type you Need for Research

Decide on the type of data you need for your research. The type of data you need to collect depends on your research questions or research hypothesis. Two types of research data can be used to answer the research questions:

Primary Data vs. Secondary Data

Qualitative Data vs. Quantitative Data

Also see: Research Methods, Design, and Analysis.


Step 3: Data Collection Techniques

Once you have selected the type of research to answer your research question, you need to decide where and how to collect the data.

It is time to determine your research method to address the  research problem . Research methods involve procedures, techniques, materials, and tools used for the study.

For instance, a dissertation research design includes the different resources and data collection techniques and helps establish your  dissertation’s structure .

The following table shows the characteristics of the most popularly employed research methods.

Research Methods

Step 4: Procedure of Data Analysis

Use of the correct data and statistical analysis technique is necessary for the validity of your research. Therefore, you need to be certain about the data type that would best address the research problem. Choosing an appropriate analysis method is the final step of the research design. It can be split into two main categories:

Quantitative Data Analysis

The quantitative data analysis technique involves analyzing numerical data with the help of applications such as SPSS, Stata, Excel, and OriginLab.

This data analysis strategy examines measures such as distributions, frequencies, and averages. The research question and the hypothesis must be established to identify the variables for testing.
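As a minimal illustration, the same kinds of frequencies and averages can also be produced with a general-purpose language such as Python; the variable names and values below are hypothetical.

```python
# Minimal sketch of descriptive quantitative analysis (hypothetical data).
import pandas as pd

df = pd.DataFrame({
    "gender":       ["F", "M", "F", "F", "M", "M", "F", "M"],
    "hours_online": [2.0, 3.5, 1.0, 4.0, 2.5, 5.0, 3.0, 4.5],
    "exam_score":   [78, 65, 82, 60, 70, 55, 74, 58],
})

print(df["gender"].value_counts())                     # frequencies of a categorical variable
print(df[["hours_online", "exam_score"]].describe())   # averages, spread, and quartiles
print(df.groupby("gender")["exam_score"].mean())       # group averages
```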

Qualitative Data Analysis

Qualitative data analysis of figures, themes, and words allows for flexibility and the researcher’s subjective opinions. This means that the researcher’s primary focus will be interpreting patterns, tendencies, and accounts and understanding the implications and social framework.

You should be clear about your research objectives before starting to analyze the data. For example, you should ask yourself whether you need to explain respondents’ experiences and insights, or whether you also need to evaluate their responses with reference to a certain social framework.

Step 5: Write your Research Proposal

The research design is an important component of a research proposal because it plans the project’s execution. You can share it with your supervisor, who can evaluate the feasibility of the project and the soundness of the expected results and conclusion.

Read our guidelines to write a research proposal  if you have already formulated your research design. The research proposal is written in the future tense because you are writing your proposal before conducting research.

The  research methodology  or research design, on the other hand, is generally written in the past tense.

How to Write a Research Design – Conclusion

A research design is the plan, structure, and strategy of investigation conceived to answer the research question and test the hypothesis. The dissertation research design can be classified based on the type of data and the type of analysis.

The five steps outlined above explain how to write a research design. Follow them to formulate the perfect research design for your dissertation.

ResearchProspect writers have years of experience creating research designs that align with the dissertation’s aim and objectives. If you are struggling with your dissertation methodology chapter, you might want to look at our dissertation part-writing service.

Our dissertation writers can also help you with the full dissertation paper . No matter how urgent or complex your need may be, ResearchProspect can help. We also offer PhD level research paper writing services.

Frequently Asked Questions

What is research design?

Research design is a systematic plan that guides the research process, outlining the methodology and procedures for collecting and analysing data. It determines the structure of the study, ensuring the research question is answered effectively, reliably, and validly. It serves as the blueprint for the entire research project.

How to write a research design?

To write a research design, define your research question, identify the research method (qualitative, quantitative, or mixed), choose data collection techniques (e.g., surveys, interviews), determine the sample size and sampling method, outline data analysis procedures, and highlight potential limitations and ethical considerations for the study.

How to write the design section of a research paper?

In the design section of a research paper, describe the research methodology chosen and justify its selection. Outline the data collection methods, participants or samples, instruments used, and procedures followed. Detail any experimental controls, if applicable. Ensure clarity and precision to enable replication of the study by other researchers.

How to write a research design in methodology?

To write a research design in methodology, clearly outline the research strategy (e.g., experimental, survey, case study). Describe the sampling technique, participants, and data collection methods. Detail the procedures for data collection and analysis. Justify choices by linking them to research objectives, addressing reliability and validity.


Understanding Research Study Designs


In order to find the best possible evidence, it helps to understand the basic designs of research studies. The following basic definitions and examples of clinical research designs follow the “levels of evidence.”

Case Series and Case Reports


These consist either of collections of reports on the treatment of individual patients with the same condition, or of reports on a single patient.

  • Case series/reports are used to illustrate an aspect of a condition, the treatment or the adverse reaction to treatment.
  • Example : You have a patient that has a condition that you are unfamiliar with. You would search for case reports that could help you decide on a direction of treatment or to assist on a diagnosis.
  • Case series/reports have no control group (one to compare outcomes), so they have no statistical validity.
  • The benefits of case series/reports are that they are easy to understand and can be written up in a very short period of time.

Case Control Studies

Patients who already have a certain condition are compared with people who do not.

  • Case control studies are generally designed to estimate the odds (using an odds ratio) of developing the studied condition/disease. They can determine if there is an associational relationship between the condition and a risk factor; a minimal odds-ratio calculation is sketched after this list.
  • Example: A study in which colon cancer patients are asked what kinds of food they have eaten in the past and the answers are compared with a selected control group.
  • Case control studies are less reliable than either randomized controlled trials or cohort studies.
  • A major drawback to case control studies is that one cannot directly obtain absolute risk (i.e. incidence) of a bad outcome.
  • The advantages of case control studies are they can be done quickly and are very efficient for conditions/diseases with rare outcomes.
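A minimal sketch of the odds-ratio calculation referenced in the list above, using a hypothetical 2 x 2 table of exposure (e.g., a dietary factor) by case/control status; the counts are invented for illustration.

```python
# Minimal sketch: odds ratio from a hypothetical 2x2 case-control table.
import math

# Hypothetical counts of exposed/unexposed cases and controls.
exposed_cases, exposed_controls = 40, 20
unexposed_cases, unexposed_controls = 60, 80

odds_ratio = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Approximate 95% confidence interval on the log-odds scale (Woolf method).
se_log_or = math.sqrt(1 / exposed_cases + 1 / exposed_controls +
                      1 / unexposed_cases + 1 / unexposed_controls)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
```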

Cohort Studies

Cohort studies, also called longitudinal studies, involve a defined population who presently have a certain exposure and/or receive a particular treatment and who are followed over time and compared with another group who are not affected by the exposure under investigation.

  • Cohort studies may be either prospective (i.e., exposure factors are identified at the beginning of a study and a defined population is followed into the future), or historical/retrospective (i.e., past medical records for the defined population are used to identify exposure factors).
  • Cohort studies are used to establish causation of a disease or to evaluate the outcome/impact of treatment, when randomized controlled clinical trials are not possible.
  • Example: One of the more well-known examples of a cohort study is the Framingham Heart Study, which followed generations of residents of Framingham, Massachusetts.
  • Cohort studies are not as reliable as randomized controlled studies, since the two groups may differ in ways other than the variable under study.
  • Other problems with cohort studies are that they require a large sample size, are inefficient for rare outcomes, and can take long periods of time. 

Randomized Controlled Studies

This is a study in which: 1) there are two groups, one treatment group and one control group, where the treatment group receives the treatment under investigation and the control group receives either no treatment (placebo) or standard treatment; and 2) patients are randomly assigned to the groups.

  • Randomized controlled trials are considered the “gold standard” in medical research. They lend themselves best to answering questions about the effectiveness of different therapies or interventions.
  • Randomization helps avoid the bias in choice of patients-to-treatment that a physician might be subject to. It also increases the probability that differences between the groups can be attributed to the treatment(s) under study.
  • Having a control group allows for a comparison of treatments – e.g., treatment A produced favorable results 56% of the time versus treatment B in which only 25% of patients had favorable results (a minimal sketch of such a comparison appears after this list).
  • There are certain types of questions on which randomized controlled studies cannot be done for ethical reasons, for instance, if patients were asked to undertake harmful experiences (like smoking) or denied any treatment beyond a placebo when there are known effective treatments.
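As a minimal sketch of the treatment comparison mentioned in the list above (56% vs. 25% favorable outcomes), a chi-square test on the 2 x 2 table of treatment by outcome indicates whether the difference is larger than chance alone would plausibly explain; the group sizes below are hypothetical.

```python
# Minimal sketch: comparing favorable-outcome rates between two treatment arms
# (hypothetical group sizes chosen to match the 56% vs. 25% example).
from scipy.stats import chi2_contingency

#              favorable, not favorable
treatment_a = [56, 44]   # 56 of 100 hypothetical patients improved
treatment_b = [25, 75]   # 25 of 100 hypothetical patients improved

chi2, p_value, dof, expected = chi2_contingency([treatment_a, treatment_b])

# A small p-value suggests the difference between 56% and 25% is unlikely
# to be due to chance alone.
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
```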

Double-Blind Method

A type of randomized controlled clinical trial/study in which neither medical staff/physician nor the patient knows which of several possible treatments/therapies the patient is receiving.

  • Example : Studies of treatments that consist essentially of taking pills are very easy to do double blind – the patient takes one of two pills of identical size, shape, and color, and neither the patient nor the physician needs to know which is which.
  • A double blind study is the most rigorous clinical research design because, in addition to the randomization of subjects, which reduces the risk of bias, it can eliminate or minimize the placebo effect which is a further challenge to the validity of a study.

Meta-Analyses

Meta-analysis is a systematic, objective way to combine data from many studies, usually from randomized controlled clinical trials, and arrive at a pooled estimate of treatment effectiveness and statistical significance.

  • Meta-analysis can also combine data from case/control and cohort studies. The advantage to merging these data is that it increases sample size and allows for analyses that would not otherwise be possible.
  • They should not be confused with reviews of the literature or systematic reviews. 
  • Two problems with meta-analysis are publication bias (studies showing no effect or little effect are often not published and just “filed” away) and the quality of the design of the studies from which data is pulled. This can lead to misleading results when all the data on the subject from “published” literature are summarized.
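As a minimal sketch of how a pooled estimate can be formed, the fixed-effect (inverse-variance) approach weights each study's effect estimate by the inverse of its variance; the per-study effect sizes and standard errors below are hypothetical.

```python
# Minimal sketch: fixed-effect (inverse-variance) pooling of hypothetical study results.
import math

# Each tuple is (effect estimate, standard error) for one study,
# e.g., log odds ratios from several randomized trials (hypothetical values).
studies = [(0.30, 0.12), (0.45, 0.20), (0.25, 0.10), (0.40, 0.15)]

weights = [1 / se ** 2 for _, se in studies]                      # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lower, upper = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect = {pooled:.3f}, 95% CI [{lower:.3f}, {upper:.3f}]")
```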

Systematic Reviews

A systematic review is a comprehensive survey of a topic that takes great care to find all relevant studies of the highest level of evidence, published and unpublished, assess each study, synthesize the findings from individual studies in an unbiased, explicit and reproducible way and present a balanced and impartial summary of the findings with due consideration of any flaws in the evidence. In this way it can be used for the evaluation of either existing or new technologies and practices.   

A systematic review is more rigorous than a traditional literature review and attempts to reduce the influence of bias. In order to do this, a systematic review follows a formal process:

  • Clearly formulated research question
  • Published & unpublished (conferences, company reports, “file drawer reports”, etc.) literature is carefully searched for relevant research
  • Identified research is assessed according to an explicit methodology
  • Results of the critical assessment of the individual studies are combined
  • Final results are placed in context, addressing such issues as the quality of the included studies, the impact of bias, and the applicability of the findings
  • The difference between a systematic review and a meta-analysis is that a systematic review looks at the whole picture (qualitative view), while a meta-analysis looks for the specific statistical picture (quantitative view). 


Research Process in the Health Sciences (35:37 min): Overview of the scientific research process in the health sciences. Follows the seven steps: defining the problem, reviewing the literature, formulating a hypothesis, choosing a research design, collecting data, analyzing the data, and interpretation and report writing. Includes a set of additional readings and library resources.

Research Study Designs in the Health Sciences  (29:36 min): An overview of research study designs used by health sciences researchers. Covers case reports/case series, case control studies, cohort studies, correlational studies, cross-sectional studies, experimental studies (including randomized control trials), systematic reviews and meta-analysis.  Additional readings and library resources are also provided.


Research Study Design

This course provides learners with an understanding of how to improve study design, collect and analyze data, and promote reproducible research.

About this Course

The Research Study Design course provides learners with an introduction to research study design, a detailed overview of scientific inquiry, examples of various research designs, a discussion of data management methods, an introduction to statistical analysis, and sound approaches to optimize the reproducibility of research results. This course is valuable to university undergraduate and graduate students who are taking a classroom research study design course or who need a refresher on a specific aspect of research design. Research team members and Institutional Review Board members who may need an overview or refresher on research design concepts will also find the course meaningful.

This course was authored and peer-reviewed by experts.

Language Availability: English

Suggested Audiences: IRB Members and Administrators, Research Staff, Undergraduate and Graduate Students

Organizational Subscription Price: $675 per year/per site for government and non-profit organizations; $750 per year/per site for for-profit organizations

Independent Learner Price: $99 per person

Course Content

Introduction to Scientific Research

Provides an introduction to the steps involved in scientific research, including how to formulate a research question and the steps associated with developing a hypothesis. The module concludes with an overview of the Institutional Review Board (IRB) and other committees that may be involved in the review of research.

Recommended Use: Required ID (Language): 17581 (English) Author(s): Michael Belotto, PhD, MPH, CCRA, CCRC - BRANY; Christina Ventura-DiPersia, MPH - Hofstra University

Observational Research

Presents different types of observational research designs and includes a discussion on determining the best designs to fit with intended research activities. The module concludes with a discussion on the strengths and limitations of the designs.

Recommended Use: Required ID (Language): 17582 (English) Author(s): Michael Belotto, PhD, MPH, CCRA, CCRC - BRANY; Christina Ventura-DiPersia, MPH - Hofstra University

Interventional Research

Identifies the different types of interventional studies and designs, including special considerations associated with interventional research designs.

Recommended Use: Required ID (Language): 17583 (English) Author(s): Michael Belotto, PhD, MPH, CCRA, CCRC - BRANY; Christina Ventura-DiPersia, MPH - Hofstra University

Quantitative Research (Statistical Reasoning and Hypothesis Testing) - Part 1

Provides an overview of statistical reasoning, hypothesis testing, and research design. Explores how researchers develop research questions, generate research hypotheses, understand variability, and develop methods for explaining variability to the extent possible.

Recommended Use: Required ID (Language): 17584 (English) Author(s): Dee Andrews, PhD

Quantitative Research (Statistical Reasoning and Hypothesis Testing) - Part 2

Expands on the fundamentals of statistics and explores how statistics are used to make research decisions.

Recommended Use: Required ID (Language): 17585 (English) Author(s): Dee Andrews, PhD

Survey Research: Designing the Instrument

Provides an overview of survey research design, with a focus on developing and pilot testing the survey instrument.

Recommended Use: Required ID (Language): 17586 (English) Author(s): Seth J. Schwartz, PhD - University of Miami

Survey Research: Conducting the Research

Discusses key areas associated with conducting survey-based research, including ways to adapt surveys for new populations, different samples and sampling techniques, and ways to administer surveys.

Recommended Use: Required ID (Language): 17587 (English) Author(s): Seth J. Schwartz, PhD - University of Miami

Qualitative Research Methods

Provides an overview of qualitative research and differences among the major qualitative research designs. Highlights critical issues to consider when designing a qualitative study.

Recommended Use: Required ID (Language): 19101 (English) Author(s): Moin Syed, PhD - University of Minnesota

Mixed Methods Research

Describes mixed methods research and the rationale for using a mixed methods design. Includes an overview of the different mixed methods designs and the major design decisions to consider.

Recommended Use: Required ID (Language): 17588 (English) Author(s): Moin Syed, PhD - University of Minnesota

Data Management

Identifies the steps, concepts, and importance of data management throughout a research study. The module also discusses institutional support services that can help manage research data; methodological, technological, and regulatory considerations that affect data management practices; documentation needed to facilitate the accessibility and reproducibility of research findings; and ethical and compliance issues relating to data ownership, sharing, and protection.

Note: This module is part of the CITI Program’s Responsible Conduct of Research (RCR) series, but is recommended as part of this course. For organizations with a “Make Your Own” custom subscription, use of this module requires adding  Responsible Conduct of Research (RCR) to your organization’s subscription.

Recommended Use: Required ID (Language): 20896 (English) Author(s): Julie Goldman, BS, MLIS - Harvard Medical School, Countway Library

" role="button"> Reproducibility of Research Results

Discusses factors that contribute to the lack of reproducibility and the resulting problems that can emerge. The module also describes the stakeholders affected by reproducibility problems, a collection of reproducibility initiatives, and strategies that can mitigate or prevent irreproducibility.

Recommended Use: Supplemental ID (Language): 17756 (English) Author(s): Teri A. Hamill, PhD - Nova Southeastern University

How is the Research Study Design course structured?

It consists of modules that contain detailed content, supporting visual elements, and supplemental materials (such as case studies, examples, and resources), and a quiz.  Learners may complete the modules at their own pace.

Note: The Data Management and Reproducibility of Research Results modules are part of the Responsible Conduct of Research course but are included with the purchase of the Research Study Design course.

When should learners and organizations consider subscribing to the Research Study Design course?

Learners should consider this course if they need a review of scientific research and different research designs. Organizations may subscribe to the course and provide access to learners who need such training.

Who might benefit from the Research Study Design course?

This course is suitable for individuals who would benefit from an overview of the scientific method and different research designs. Individuals new to scientific inquiry or unfamiliar with some elements of scientific inquiry might find the course meaningful. In addition, individuals involved in conducting or overseeing research, such as research team members, IRB members and administrators, or non-scientific community members, might find the Research Study Design course helpful.

What are the advantages of the Research Study Design course?

The Research Study Design course was developed and reviewed by experts to provide individuals with a succinct overview of scientific methods and various research designs. The course is self-paced, convenient, and available to learners around the world.

How long will a course take me to complete?

Although completion time will vary from one learner to the next, we estimate that each module will take about 25-35 minutes to complete. The modules are designed so that learners can complete them in one sitting or in multiple sittings, and there is no time limit for course activities.


Study designs: Part 1 - An overview and classification

Affiliations.

  • 1 Department of Anaesthesiology, Tata Memorial Centre, Mumbai, Maharashtra, India.
  • 2 Department of Gastroenterology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, Uttar Pradesh, India.
  • PMID: 30319950
  • PMCID: PMC6176693
  • DOI: 10.4103/picr.PICR_124_18

There are several types of research study designs, each with its inherent strengths and flaws. The study design used to answer a particular research question depends on the nature of the question and the availability of resources. In this article, which is the first part of a series on "study designs," we provide an overview of research study designs and their classification. The subsequent articles will focus on individual designs.

Keywords: Epidemiologic methods; research design; research methodology.

FDA: New Guidance for Non-interventional Studies of Drug Safety and Effectiveness


The U.S. Food and Drug Administration (FDA) has recently issued guidance for sponsors and investigators interested in submitting a non-interventional study, commonly known as an observational study, to contribute evidence of a drug’s safety and effectiveness. These drug studies aim to distinguish the drug’s effects from other factors like disease progression, placebo responses, or observer bias. Non-interventional studies, such as those using electronic health record data in cohort designs, may yield inaccurate conclusions because of confounding factors such as noncomparable treatment groups, selection bias in how participants are chosen, or missing data. Addressing these issues is essential for obtaining accurate data and results. FDA’s guidance assists sponsors in recognizing and mitigating these challenges before finalizing study protocols and statistical analysis plans (SAP).

Understanding Non-Interventional Studies: Background

Non-interventional studies are invaluable for gaining insights into the actual safety and effectiveness of drugs in real-world settings. These studies involve patients receiving approved drugs as part of routine medical care, without specific interventions being dictated by a protocol.

There are various types of non-interventional study designs, including observational cohort studies, case-control studies, and self-controlled studies. These designs enable researchers to observe outcomes of interest without disrupting the natural course of treatment.

The reliability and relevance of real-world data (RWD) are crucial for the success of non-interventional studies. RWD further encompasses a wide range of information on patient health status and healthcare delivery from diverse sources. In these studies, the quality and consistency of RWD from different sources are pivotal for ensuring accurate conclusions are drawn.

Overview of Research Guidance

FDA has provided guidance to researchers regarding the:  

  • Proposed approach;
  • Elements of a study design;
  • Selection of data sources; and
  • Analytic approach to reviewing data.

Summary of the Proposed Approach

Sponsors should first finalize their protocol by outlining the detailed plan for conducting the study. This protocol should encompass the study’s roadmap, covering aspects such as the research question, hypothesis, rationale for study design choice, selection of data sources, results of preliminary studies, approach to causal inference, and ethical considerations. Engaging with the FDA early is important for assuring alignment with expectations.

Study Design

Crafting a robust study design is the cornerstone of any successful research endeavor. The study design elements of the proposed research should include a schema outlining the overall study design and a causal diagram specifying the theorized causal relationship. Additionally, critical components such as the source population, eligibility criteria, conceptual and operational definitions for key variables, relevant covariates, index date determination, and start and end of follow-up periods should be concisely described in the protocol.

Data Sources

Selecting appropriate data sources is key for the success of a non-interventional study. The description of the proposed data source(s) should include details on how the data were originally collected and the rationale for choosing the data source(s). The description of the sources should also address the relevance to the drug-outcome association, information on confounding factors, data reliability, common data models used, timing and completeness of key data elements, appropriateness of coding based on operational definitions, alignment with the target patient population, and quality assurance activities. Additionally, plans for data quality assurance and potential links to other data sources should be outlined.

Analytic Approach

Once the study protocol is established, the focus shifts to the analytic approach. Sponsors must craft a detailed SAP that aligns with study objectives. This includes specifying the statistical approach for evaluating treatment effects, handling intercurrent events and censoring rules, and accounting for potential confounding factors, including assessment of unmeasured confounding factors.

The SAP also considers potential overadjustment of intermediate variables on the causal pathway, outlines approaches for subgroup analyses, addresses unequal detection of outcomes across compared groups, evaluates the possibility of reverse causality, describes methods for handling missing or misclassified data, and outlines strategies for managing multiplicity, such as analyzing multiple exposures or outcomes.

Non-interventional studies play a vital role in the evaluation of drug safety and effectiveness. FDA’s guidance offers recommendations to sponsors considering the submission of non-interventional studies to the FDA to support the demonstration of a drug’s safety and effectiveness in connection with RWD. The development of this draft guidance reflects the increasing interest among stakeholders in leveraging non-interventional studies to provide evidence of a drug’s safety and efficacy. By carefully considering the outlined recommendations at each stage of the research process, researchers can enhance the quality and reliability of their study outcomes.

Foley is here to help you address the short- and long-term impacts in the wake of regulatory changes. We have the resources to help you navigate these and other important legal considerations related to business operations and industry-specific issues. Please reach out to the authors, your Foley relationship partner, our Health Care & Life Science Sector , or to our Health Care Practice Group with any questions.


Jordan H. Smiley


Kyle Y. Faget


  • Open access
  • Published: 02 April 2024

Adopting, implementing and assimilating coproduced health and social care innovations involving structurally vulnerable populations: findings from a longitudinal, multiple case study design in Canada, Scotland and Sweden

  • Gillian Mulvale   ORCID: orcid.org/0000-0003-0546-6910 1 ,
  • Jenn Green 1 ,
  • Glenn Robert 2 , 5 ,
  • Michael Larkin 3 ,
  • Nicoline Vackerberg 4 , 5 ,
  • Sofia Kjellström 5 ,
  • Puspita Hossain 6 ,
  • Sandra Moll 7 ,
  • Esther Lim 8 , 9 &
  • Shioma-Lei Craythorne 3  

Health Research Policy and Systems, volume 22, Article number: 42 (2024)


Innovations in coproduction are shaping public service reform in diverse contexts around the world. Although many innovations are local, others have expanded and evolved over time. We know very little, however, about the process of implementation and evolution of coproduction. The purpose of this study was to explore the adoption, implementation and assimilation of three approaches to the coproduction of public services with structurally vulnerable groups.

We conducted a 4 year longitudinal multiple case study (2019–2023) of three coproduced public service innovations involving vulnerable populations: ESTHER in Jönköping Region, Sweden involving people with multiple complex needs (Case 1); Making Recovery Real in Dundee, Scotland with people who have serious mental illness (Case 2); and Learning Centres in Manitoba, Canada (Case 3), also involving people with serious mental illness. Data sources included 14 interviews with strategic decision-makers and a document analysis to understand the history and contextual factors relating to each case. Three frameworks informed the case study protocol, semi-structured interview guides, data extraction, deductive coding and analysis: the Consolidated Framework for Implementation Research, the Diffusion of Innovation model and Lozeau’s Compatibility Gaps to understand assimilation.

The adoption of coproduction involving structurally vulnerable populations was a notable evolution of existing improvement efforts in Cases 1 and 3, while impetus from an external change agency, existing collaborative efforts among community organizations, and the opportunity to inform a new municipal mental health policy sparked adoption in Case 2. In all cases, coproduced innovation centred on a central philosophy that valued lived experience on an equal basis with professional knowledge in coproduction processes. This philosophical orientation offered flexibility and adaptability to local contexts, thereby facilitating implementation when compared with more defined programming. According to the informants, efforts to avoid co-optation risks were successful, resulting in the assimilation of new mindsets and coproduction processes, with examples of how this had led to transformative change.

Conclusions

In exploring innovations in coproduction with structurally vulnerable groups, our findings suggest several additional considerations when applying existing theoretical frameworks. These include the philosophical nature of the innovation, the need to study the evolution of the innovation itself as it emerges over time, greater attention to partnered processes as disruptors to existing power structures and an emphasis on driving transformational change in organizational cultures.


Growing recognition by governments internationally of the need to involve the perspectives of people using public services when designing, delivering and improving those services has been described as a Participatory Zeitgeist reflecting the “spirit of our time” [ 1 , 2 (p247)]. Researchers and designers have developed various approaches drawn from different disciplines and using different labels (for example, codesign, cocreation, coproduction) that align with principles in the citizen engagement literature [ 3 ]. These approaches recognize that service users have experiences and assets and can contribute to service design and delivery along with professional expertise, rather than simply being passive recipients of services designed and delivered by others [ 3 , 4 ]. While these approaches can be used with anyone, they have been increasingly applied to promote the inclusion of structurally vulnerable populations in the design and delivery of innovative health and social care services that seek to support them.

While coproduction has the potential to reform inequitable structures and social processes, excluding vulnerable groups or involving them in a tokenistic manner may unintentionally reinforce existing power imbalances [ 4 , 5 ]. For example, gaps have been noted between the rhetoric of service user involvement in international mental health policy and the readiness to adopt such policies in practice [ 6 ]. Challenges have also been noted in incorporating the voices of individuals with complex needs in improving care coordination across health and social services [ 7 ].

Despite increasing attention to coproduction in the literature and practice, knowledge gaps exist with respect to the implementation of coproduction involving vulnerable populations in different contexts [ 8 , 9 , 10 ]. An international symposium of coproduction researchers and people with lived experience held in Birmingham, England in 2017 identified the need for research to understand how exemplary coproduction innovations involving structurally vulnerable groups originated and their assimilation into routine practice [ 11 ]. To our knowledge, established implementation science models have yet to be applied to coproduction, where service users and service providers are cocreating innovations during the process of implementation [ 12 ].

In this paper, we present findings from a longitudinal case study exploring the factors and processes that influence the adoption, implementation and assimilation of three diverse coproduced public service innovations involving structurally vulnerable groups. We explored the perspectives of strategic leaders involved in advancing coproduction processes involving vulnerable groups. Our analysis proceeds through the lens of existing frameworks from the literature to discuss the outer context (economic, social, political, geographical), inner context (organizational and community considerations), individual factors, innovation features and process considerations [ 13 ].

Conceptual foundations: coproduction, structural vulnerability and implementation processes

Coproduction: Coproduction has been defined as “… involvement of public service users in the design, management, delivery and/or evaluation of public services” [ 4 ]. A core feature of coproduction approaches is that they are applied in a flexible manner, dynamically and innovatively responding to local needs and context [ 14 ].

Structural vulnerability: We adopt the term structurally vulnerable populations to recognize that vulnerability is not inherent in these populations but rather in the social, economic and political systems in which they are embedded [ 15 , 16 ]. Examples include individuals who may require multiple health and/or other public services, including people with complex and intersecting health needs (for example, heart failure and dementia) along with poverty, homelessness and/or being members of newcomer or racialized groups. Structural barriers (for example, lack of trust, language, cultural, scheduling, financial) and power relations may prevent them from engaging in coproduction.

Adoption, implementation and assimilation: We draw on and combine elements from three theoretical frameworks to guide this research. The first is the Diffusion Of Innovation (DOI) model [ 17 ], which identifies how political, social, economic, cultural, and organizational factors and processes affect fidelity and adoption during the diffusion of service innovation. The second is the Consolidated Framework for Implementation Research (CFIR) [ 18 , 19 ], which demonstrates the importance of contextual factors at multiple levels (external context, internal context, innovation features, processes and individual characteristics) in shaping the implementation of service improvements. The third is Lozeau et al.’s (2002) compatibility gaps [ 20 ], which characterize different forms of assimilation of innovations into routine practice [ 20 , 21 ]. Based on these frameworks, we define innovation as a novel set of behaviours, routines and ways of working that are directed at improving health outcomes, administrative efficiency, cost effectiveness or users’ experiences, and that are implemented by planned and coordinated action [ 20 ]. We define adoption as the incremental considerations and progressive individual and collective decision-making from pre-contemplation through exploration by which organizations ultimately decide to adopt the innovation (programme/model/process). Implementation describes the formal strategies to promote the integration of innovations into existing practices. Assimilation is the informal process by which, over time, innovations become part of routine ways of doing things. Assimilation can be characterized as (a) transformation when there is high fidelity to the model and the organization adjusts its functioning to fit the assumptions of the model; (b) customization when the model is adapted to the context and the organization adjusts its practices; (c) loose coupling whereby the innovation is adopted only superficially, while the functioning of the organization remains largely unaffected; or (d) co-optation whereby the innovation becomes captured and distorted to reinforce existing organizational roles and power structures [ 21 ].

Study aim and design

We adopt a longitudinal multiple case study approach to understand the dynamic nature by which three coproduced innovations intended to address the needs of vulnerable populations were adopted, implemented and assimilated [ 22 ]. Case study research is well suited to studying contemporary phenomena in their real-life contexts, and theory is often adopted to focus the analysis, allowing the theory to be augmented or revised based on emerging findings [ 22 ]. To meet the criteria of being a ‘case’, an innovation had to be underpinned by a coproduction model involving structurally vulnerable populations in the design, management, delivery and/or evaluation of a public service that has advanced through these phases. Concepts from the CFIR, DOI and assimilation frameworks described above informed the case study protocol, semi-structured interview questions, data extraction and coding.

Case selection

The three cases were selected through the networks of the investigators to illustrate how coproduction involving vulnerable populations can be advanced in different contexts: the region of Jönköping, Sweden striving for better patient outcomes and experiences by tailoring care to the needs of people with multiple complex needs (Case 1 – ESTHER); the city of Dundee, Scotland aiming to advance the recovery of people with mental illness through greater collaboration with those with lived experience and among service organizations (Case 2 – Making Recovery Real [MRR]); and a rural and an urban branch of a national community mental health organization in a Canadian province that adapted the English Recovery College model of coproduced educational programming to support the recovery of people with serious mental illness (Case 3 – Canadian Mental Health Association [CMHA] Manitoba and Winnipeg and CMHA Central branches’ Learning Centres in Manitoba, Canada) (see Tables  1 , 2 and 3 ).

The study team were familiar with each of these cases and were confident in having good access to them over time. Additionally, their different national contexts offered the opportunity to consider macro-level factors. While each of these countries’ health and social care systems are largely publicly funded, funding is the responsibility of different levels of government (municipal, provincial and/or national) and services are administered and delivered primarily by local governments and/or designated authorities (see Table  4 ).

Data sources and collection

Data sources included relevant academic and grey literature, identified through electronic searches and/or recommended or shared by local gatekeepers and key informants, which informed the background case context for the individual case analyses and the interview guides (see Table 5, and Table S1 in Additional file 1 for more details). Research team members (GM, JG, GR, NV, PH, SC, SS) conducted 45–60 minute semi-structured interviews in person or online between November 2019 and August 2021. To help understand the history and context of each case, key informants were strategic decision-makers and programme managers affiliated at the time with the organizations leading, participating in or supporting the local initiatives, who were familiar with the history of how the coproduced innovations emerged, their developmental timeline and coproduction's role in the overall system.

The interview guide questions probed about this history with a focus on the contextual factors that influenced adoption and implementation and the extent to which coproduction has been assimilated into routine practice. Data were gathered through investigator field notes, the audio-recording and transcription of interviews, timelines, hand-written notes and/or audio-recordings of team meetings to capture member checking with local collaborators, and case team memos of decision points.

To maintain participant anonymity, participant codes are used in the text, identified by a location code (for Case 1, JKG = Jönköping, Sweden; for Case 2, DND = Dundee, Scotland; for Case 3, OTH = Other [for example, national, international informants], PLP = Portage la Prairie, Manitoba, Canada; WPG = Winnipeg, Manitoba, Canada), and a participant number (that is, 01, 02, 03 and so on). For example, an informant from Dundee could be DND-03. Note that the perspectives of service providers and people with lived experience of structural vulnerability were not the focus here but are considered in subsequent waves of our data collection to understand their experiences of coproduction in practice.
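As a small illustration of the anonymization scheme described above, the snippet below builds participant identifiers from a location code and a sequence number; the helper function and its mapping are hypothetical and are not the authors' actual tooling.

```python
# Hypothetical sketch of the participant-code scheme described in the text.
LOCATION_CODES = {
    "Jönköping, Sweden": "JKG",
    "Dundee, Scotland": "DND",
    "Portage la Prairie, Canada": "PLP",
    "Winnipeg, Canada": "WPG",
    "Other": "OTH",
}

def participant_code(location: str, number: int) -> str:
    """Build an anonymized identifier such as 'DND-03' from a location and sequence number."""
    return f"{LOCATION_CODES[location]}-{number:02d}"

print(participant_code("Dundee, Scotland", 3))  # -> DND-03
```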

Data analysis

A common coding framework was developed iteratively to capture factors and processes influencing adoption, implementation and assimilation by combining elements of the theoretical frameworks to remove overlap and promote consistency of understanding when coding and interpreting the data. Table S2 presents this in more detail (see Additional file 2 ).

The initial data extraction was performed by the research team affiliated with each case, and the project research coordinator worked with the local research coordinator for each case to ensure consistency across cases. Documentary evidence analysis primarily informed our understanding of the historical context and overview of each case. All data were coded and analysed deductively, applying a common coding scheme and thematic analysis based on the theoretical propositions and concepts in the CFIR and DOI models, while allowing for emergent themes, particularly in relation to the coproduction context [ 22 ]. A visual timeline was created to understand the initiation and growth of coproduction in each case. Interview data were triangulated with documentary evidence and field notes. Analysis proceeded on a case-by-case basis, followed by a cross-case analysis.
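For readers who manage such analyses computationally, the following is a deliberately simplified, hypothetical sketch of how a deductive codebook combining framework constructs might be represented and applied to interview excerpts; the constructs shown and the keyword rules are illustrative only and do not reproduce the study's actual codebook.

```python
# Hypothetical sketch: a deductive codebook keyed on framework constructs,
# applied to excerpts with a crude keyword pass (illustration only).
CODEBOOK = {
    "outer_context": ["policy", "funding", "national"],
    "inner_context": ["leadership", "culture", "staff"],
    "innovation_features": ["philosophy", "flexibility", "model"],
    "process": ["network", "partnership", "training"],
}

def code_excerpt(excerpt: str) -> list[str]:
    """Return the constructs whose keywords appear in an excerpt."""
    text = excerpt.lower()
    return [construct for construct, keywords in CODEBOOK.items()
            if any(k in text for k in keywords)]

example = "Regional leadership created a network of coaches supported by national funding."
print(code_excerpt(example))  # e.g., ['outer_context', 'inner_context', 'process']
```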

Qualitative validity and reliability

The research team comprised four members who were familiar with one of the three cases prior to the study (the ESTHER case), as well as eight members who were not familiar with any of the cases. One member of the team had been closely involved with the development of the ESTHER case over a long period of time. The use of a common and detailed case study protocol and data management system, central and local research coordination by case, monthly investigator meetings and tri-annual full team meetings including collaborating organization representatives were strategies used to enhance qualitative validity. The common coding framework and frequent team discussions helped to ensure consistency and enhanced reliability. Data were triangulated across sources, the analysis was triangulated across investigators and theories, and findings were member-checked at various stages with the full team of investigators and collaborators [ 23 ].

Ethical considerations

Research ethics clearance was obtained from the relevant academic research ethics boards (McMaster University Research Ethics Board [MREB Project ID 2066], Aston University Ethics Committee [Rec Ref #1611]; King’s College London Research Ethics Office [Reference Number MOD-19/20-17350]; SingHealth Centralized Institutional Review Board [CIRB Ref# 2020/2341]; and Swedish Ethical Review Authority [Etikprövningsmyndigheten, Dnr 2019-06373]), and in light of this, ethics review was waived by the boards of the collaborating organizations (Canadian Mental Health Association, Manitoba & Winnipeg branch, the East of Scotland Research Ethics Service). Participants received letters of information outlining the study objectives, protocol and risks prior to consenting in writing. Data were collected and stored locally and shared across sites as anonymized, encrypted and password-protected files.

Results

We outline the historical context and analysis of contextual factors influencing adoption and implementation, discuss assimilation by case and then present a cross-case analysis. Tables 1, 2 and 3 above capture the key features of each case, Figs. 1, 2 and 3 summarize the adoption, implementation and assimilation timelines, and Tables 6, 7 and 8 summarize the cross-case analysis.

Historical context: ESTHER is a complex system of public health and social care services run by 13 municipal councils in Region Jönköping County, Sweden that has brought intersectoral health and social care providers together since the 1990s to increase coordination and to redefine service experiences around the needs of the person receiving the services. In a context of restricted public sector funding, ESTHER began in 1997, initially for 2 years, with the aim of finding ways to meet population health needs using approaches other than increased hospital bed capacity. Hospital leaders in Region Jönköping County aimed to transform ways of working and to prevent hospital admissions through what informants called “radical customization”, which considered the needs of individual patients using a bottom-up change process referred to as health process re-engineering. This approach 'shadowed' a patient with complex needs through their health service experience journey and included interviews and surveys with patients, staff and government officials and observations of care encounters and processes to gain new insights into what was needed to improve the system from the patient perspectives. Storytelling of the experience of 'Esther', a persona of an elderly person with complex health needs, actualized this process, pointing out what needed to be done differently by demonstrating the importance of focusing on the experience of the person receiving care. The lessons learned from ESTHER fuelled health and social service-wide change, including coproduction with patients beginning in 2006 through patient roles on advisory committees and councils, and has expanded to include initiatives such as ESTHER cafes, ESTHER coach training and ESTHER family meetings, among others.

Adoption: In the ESTHER case, 'adoption' of coproduction was an emergent phenomenon that took place over a 10 year period as ongoing improvement efforts, aimed from the outset at better capturing the lived experience of people with complex needs, evolved in terms of how their perspectives were incorporated in design and decision-making. This began with interviews, shadowing patients and bringing staff on board with this approach, until by 2006, Esthers became more directly involved in coproducing system improvements. In the internal context, healthcare process re-engineering efforts since the 1990s centred on the question "What is best for Esther?" and demonstrated the importance of person-centred care and of emphasizing the experiences of the person in need of complex care, laying the foundation for a coproduction approach to emerge. In the external context, system-wide efforts by health and social leaders to create a system map led to ESTHER becoming not just a health quality improvement project but a health and social systems-wide movement. From a process perspective, the initial project's evaluation results indicated a 20% reduction in hospital beds, an achievement that earned recognition in the external context through two national awards. As project funding ended, the benefits of the ESTHER philosophy were recognized, and ESTHER transitioned from a project to a 'network' without funding. Over the next few years, the ESTHER Network further developed as 'cousins' emerged across Sweden, and the approach was adopted in other countries, including Italy, England, Scotland and France.

By 2006, ESTHER in Sweden transitioned toward adopting coproduction approaches that actively invited participation of people with lived experience expertise (Esthers) in coproducing ongoing innovations; however, this process was emergent and not uniform. The flexibility of a guiding philosophy was a key feature that enabled this emergence of innovation in the coproduction approach. By this time, some individual system leaders had come to recognize that keeping the focus on value and what is best for the person being treated in their daily lives would lead to better results than a preoccupation with resources and cost cutting. ESTHER had transformed relationships internally in hospitals to team-based (doctor‒nurse) coleadership and externally across the region via interorganizational collaboration between hospitals, primary care, community care and social care to improve Esthers’ care journeys. These collaborative ways of working were preparation for collaboration with Esthers, helping to create receptivity among senior leaders to coproduction. Nonetheless, at this stage of adoption there was still some internal resistance, particularly at middle management and staff levels, as Esthers began attending and sharing stories about their experiences at leadership meetings.

“I think one of the most important decisions was to take patient in the room. In addition, there was a lot of resistance”. [JKG-01]

Implementation: Once the decision to work directly with Esthers was taken, the implementation of coproduction has continued to unfold, albeit unevenly and opportunistically. Around this time, factors in the outer context shaped ESTHER’s continued development, as Esthers became increasingly present in local patient committees and began to participate in and influence the ESTHER steering committee. While ongoing primary care reform was a distraction for many health service managers, an external network of Esthers developed from different programmes across municipalities, and annual ESTHER 'family' meetings were held, where Esthers could convene to share experiences and ideas, strengthening the grassroots support. ESTHER was again gaining international recognition, becoming the subject of a BBC documentary film and being declared “one of the coolest innovations in the world” by CNN.

In the internal context, further developments included the creation of funded internal structures to support greater involvement of patients with multiple vulnerabilities in coproduction activities: the ESTHER Competence Center, which trains healthcare teams to follow the ESTHER philosophy, and the ESTHER Coach quality improvement training programmes, through which approximately 30 health and social service providers become new ESTHER Coaches each year, with growing numbers of Esthers as faculty. Key features of the approach were supportive of grassroots growth. Coaches developed innovations on an ongoing basis with input from Esthers, and health and social service providers remarked that the ESTHER philosophy takes them back to the reasons they entered their professions. At the same time, the bottom-up way in which innovation was driven continued to be threatening to some individuals in senior leadership positions who were more distanced from observing the benefits.

“ESTHER is very much bottom-up. So, you are very close to ESTHER … you see what’s going on and what you can do better. The steering is from the bottom, and then the managers got a bit threatened. I think there was suddenly too much; the movement was suddenly too big. So, people were reacting to that. …That still is a challenge”. [JKG-01]

Creative approaches have been used to foster growth despite this resistance. Small changes such as renaming committees have enabled participation by Esthers.

“We had our ESTHER Strategy Days. It was once a year that we had a really big gathering about what we are going to focus on. And we invited managers, we invited the coaches, we invited Esthers. So, one-third of the group [of 30] were Esthers and the other were working in health and social care. And, for me, that was a very big success, but it also became a threat. So, they took it away because they said you can’t have strategy day because you are not a manager. So, we changed the name. Now we have the ESTHER Inspiration Day”. [JKG-01]

The implementation process has been incremental and iterative to balance the grassroots pressure for innovation with the internal resistance to patients as equal partners, while ensuring real change results. As an example, in 2007, ESTHER cafes were introduced to connect Esthers and to identify the improvement possibilities most important to Esther. These cafes continue to be held four times per year and have attracted a wide audience, including clinicians and politicians. Esthers share their stories to help leaders and practitioners understand individual experiences, but the process also builds credibility: it requires a check-in with leaders and service providers about what they heard and whether that is consistent with what the storyteller feels is most important, and agreements are reached before the meeting ends about specific action(s) that will be taken to address what is important to Esthers.

“When we listen to a story, we ask the group, ‘What did you hear?’ And we are trying to confirm whether we are hearing different things than [what] Esthers really mean. So, the staff sometimes think, ‘This is very important’. But when we give that back to Esther, she says, ‘Well, that’s not so important for me. For me, this is important’. So, the ESTHER cafe is an activity to identify improvement possibilities. That’s one of the activities”. [JKG-02]

Assimilation: By 2016, ESTHER had evolved from being a network to becoming assimilated as a mindset – the central concept driving innovation in the system in the Jönköping Region. By this time, the decision was made to withdraw funding specific to ESTHER other than to support coach education and to have no single person responsible as leader, as it is intended to be fully assimilated as part of the normal way of working. At the same time, without dedicated funding and leadership, questions remain about sustainability.

“As I said, it is a mindset. Now it is implemented in these programs – the question: ‘What’s best for Esther?’– you will find you can’t find one person who is responsible for ESTHER in Sweden, but there is a programme group and the programme group is trying to find out ways how to spread it in the whole region, because we have some difficulties there. It’s a mindset and it should be part of the daily work. And we are getting there. I think it’s very much dependent who is leading all these kind of leadership programmes, and do they really take the ESTHER philosophy to heart?” [JKG-02]

At this point, all steering groups were removed, being seen as no longer necessary. This removal of infrastructure (formal structures, funding) initially concerned committed leaders, with a risk of co-optation of the ESTHER concept without true adherence in practice. However, there was a widespread sense among interviewees that the ESTHER philosophy has been assimilated as a core value that continues to influence all activities, permeating the culture to become the routine practice in Jönköping.

“It’s a very normal mindset in one of our hospitals to ask the question, ‘What’s best for Esther?’ That’s just a normal way of working and people are just using that word and that question”. [JKG-FL-01]

See Fig. 1 for a summary of the Case 1 adoption, implementation and assimilation timeline.

figure 1

Case 1 ESTHER coproduction adoption, implementation and assimilation timeline

Historical context: Making Recovery Real gives people with lived experience of mental health difficulties the opportunity to be at the centre of decision-making, service design and practice development in the community of Dundee, Scotland by changing the terms of the dialogue about recovery, mental health and well-being. It began in 2015 as a collaboration of 10 local public, voluntary and community organizations who responded to a call from the Scottish Recovery Network (SRN) to work together to take a new approach to improve the experience and outcomes for people living with mental illness. Initially, the partner organizations endeavoured to develop and deliver more recovery-focused policies and practice by centring lived experience in answering the question: “How can we make recovery real in Dundee?” They brought together interested people, including those with lived experience, at collaborative cafes; a series of events where priorities and accompanying actions were identified, and where participants were equal contributors to the process and its outcomes. To foster the integration of lived experience into system design and practice, the priorities identified were to (i) collect and share recovery stories so that lived experience is at the core of service design, delivery and practice; (ii) develop peer support roles and training; and (iii) celebrate recovery [ 24 ].

Adoption: In the external context, the mental health system remained dominated by the medical model, with little system innovation and acute services prioritized over community services. Yet recent Scottish health and social care system integration had supported partnership working. Simultaneously, SRN, a national voluntary organization established in 2004 to promote recovery principles within the mental health system, was shifting from working with the National Health Service towards building coalitions of change within communities and a whole-systems approach to promoting recovery. SRN solicited proposals from local groups and organizations, offering its support for community-based collaborations that would involve people with lived experience in developing local initiatives to support mental health.

Factors in Dundee’s internal context also converged to support a proposal put forward to SRN for an innovative approach. First, the Dundee Third Sector Interface (TSI), which supports the representation of third sector organizations in local authority planning, had been working to better involve people with lived experience in mental health system planning, and meetings with their network members were becoming more recovery focused. A recent inquiry into mental health services and a fairness commission on poverty (a longstanding local issue) also motivated the local council and Health and Social Care Partnership (HSCP) to take innovative action focused on prevention versus mitigation.

“And I think the Health and Social Care Partnership realized that they needed to do more than mitigation … they have been really, really clear on the need for new ways of doing things for about the last 10, 15 years”. [DND-02]

Furthermore, Dundee City was preparing to develop a new mental health strategic plan and, in the hope of influencing the strategic priorities and the future approach to engagement locally, the TSI brought together partners from across community services, the local authority and representative groups who had been attempting to make change in the system to submit a proposal for SRN's support. Individual leaders from within the partner organizations, motivated by their own lived or professional experience, were drawn by the innovation's features: to support any concerned citizen to contribute their inherent resources through meaningful involvement and an asset-based approach:

“… So lived experience is essential, bringing people together, involving everybody who wants to be involved in each aspect of the process; so, firstly in agreeing what it is they want to achieve, then in making sure that it is carried out, also in having an actual role in actively carrying it out, so not just identifying things other people should do but having a vested interest and an active contribution to the activities that are going to be – whatever it is that’s going to be done differently, basically”. [DND-04]

SRN acted as a change agency, helping to alleviate tensions among the coalition and supporting their process of exploring the opportunity and submitting a successful proposal.

Implementation: First, SRN helped to bring the individuals involved together to establish a shared vision for the process among the local integration bodies (TSI and HSCP) and a TSI-supported service user network, reducing competition among the service provider partners. Within the inner context of the partnership, there was a commitment to coproduction processes and peer support as a critical opportunity to incorporate more lived experience into the mental health system. Despite these efforts, some of the original partners could not align themselves with the experience-led approach and discontinued their involvement knowing they could return at any time. Undaunted, the remaining partners proceeded by working with the “willing”, beginning with increasing local knowledge of recovery approaches and exploring what recovery meant to local citizens.

“… at the very start, it was a case of, ‘Right. We don’t really know where we want this to go. And actually, are we the ones to be dictating where this should go? No, we’re not. What’s most important is that we’re listening to people with lived experience, people on the ground, and they should be the ones that are telling us what needs to be changing’. So from the beginning, the sort of first step was looking at how we can engage with local people. And we were really keen to make sure that it was meaningful … And we thought this involvement can’t be tokenistic. People need to be on board, and it needs to be collaborative from the start”. [DND-05]

To build connection and trust between participants while shifting to a peer-led approach, the implementation process involved facilitating a series of coproduced, discussion-based events where people with lived experience were invited to be involved in all stages from planning and executing the events, to identifying and achieving priorities. The role of professionals shifted to “being on tap, not on top” [DND-02]. SRN provided developmental support to the Dundee partners to deliver the events, the features of which were welcoming and inclusive, avoiding formal presentations in favour of fun, health-promoting activities that allowed community members to feel heard, and demonstrated alignment with their own ideas and values.

“… what we did—and I would say I think that really set the tone – was rather than have lots of presentations, what we did was, at the event, we welcomed everybody, but we invited lots of the groups to run taster sessions of the things they did. So, that actually brought a lot of people with lived experience because they were coming along to demonstrate their finger painting. There was hula-hooping. There was wellness action planning. There was how to sleep well [sessions]. And in every corner of this venue, there was little groups of people who were painting pebbles, things like that. And then in the afternoon, we had a big conversation happen, world café style. And the sort of comments we got from people were, ‘I felt this was my event. This was for me. It wasn’t for them, the professionals’”. [DND-02]

From these discussions, it emerged that understanding local experiences of personal recovery was the most preferred and effective conveyor of local knowledge and motivator for change for the range of stakeholders. Storytelling became the primary vehicle for relationship building. Peoples’ stories were compiled into a film that premiered at a well-attended, prestigious 'red-carpet' event at a local cinema house, and subsequently became a tool to foster collaborative conversations at engagement events.

“And the film galvanised things and I think because we’d moved beyond that individual telling their story to having a 20 minute film of people reflecting on recovery, which is quite different from telling a story, say, of illness”. [DND-02]

The film drew strategic attention to MRR. This culminated in a consensus to embed recovery, and backing for continued peer support and recovery work, in the new Dundee Mental Health Strategy and accompanying action plan.

Assimilation: The MRR partner organizations have adopted a peer-led approach to their efforts to promote mental health recovery going forwards. Partners are also now far more involved in collectively determining the distribution of funding through the HSCP and in designing new mental health services.

Locally, the MRR approach has also been included in the Dundee Mental Health Strategy, granting the third sector more influence and collective power in local health and social care planning. The adoption of the MRR approach by the Dundee HSCP has strengthened the importance of mental health locally, dovetailing with the recommendations of the independent inquiry on poverty. At the national level, a Scottish government funding programme to increase the number of mental health workers in community-based services provided an opportunity for the HSCP to fund additional peer support roles, a key initiative within MRR.

Overall, the MRR partnership can be said to have had a transformative effect locally. It has led to better working relationships between providers and continues to drive progress. Furthermore, lived experience is being built into the system infrastructure through actions prioritized in experience-centred collaborative conversations: expansion of the local peer recovery network, development of peer support roles, implementation of peer-led services, peer support training provision and building recovery awareness. A key feature of ongoing progress has been that lived experience partners have been able to move in and out of active participation roles throughout the process, as their recovery journeys and contexts have allowed.

“There was that sense of collaboration that continued ... We kind of all came together to discuss how we felt our organizations could contribute to that bigger picture and the strategic objectives moving forward, and not just the strategic objectives in relation to Making Recovery Real but the wider kind of city and what they were looking for in relation to the local mental health strategy and the city plan”. [DND-05]

Participants describe the process as a difficult yet joyful and rewarding journey. For some organizations, the introduction of the MRR approach has motivated significant recovery-oriented change in their values and structure, further cementing system-level impact.

“Making Recovery Real has really been – I suppose we’ve adopted the principles and approaches … We try to adopt those as far as possible in all of our work. And we don’t badge it all Making Recovery Real, but we use the learning from it, I would say, in everything we do now, everything in the programme”. [DND-04]

See Fig. 2 for a summary of the Case 2 adoption, implementation and assimilation timeline.

figure 2

Case 2 Making Recovery Real coproduction adoption, implementation and assimilation timeline

Historical context: CMHA Learning Centres began development in Manitoba in 2015 as a coproduced adaptation and renaming of Recovery Colleges, which originated in England in 2009 with a focus on people with lived/living experience of serious mental illness. The aim of Recovery Colleges is to bring the lived experience of people with mental illness and other community members together with professional expertise to locally plan, develop and deliver educational courses about mental health and recovery, with the aim of empowering people to support their mental health and well-being. The concept of recovery education originated in the USA [ 25 , 26 ], and before adopting the Recovery College model, CMHA Winnipeg had offered psychosocial rehabilitation (PSR)-based recovery education since the early 1990s. In 2015, the CMHA Winnipeg branch leader conducted an internal evaluation of this programming, which suggested that improvement was needed to meet the psychosocial health and well-being needs of the community. Around the same time, the new leader of the CMHA Central branch in Portage la Prairie, Manitoba sought a fresh approach to its clubhouse programme, a mutual support drop-in centre, in response to member feedback. Leaders and service users of both branches embraced the Recovery College and coproduction approach to better meet client needs. CMHA Learning Centres build on the Recovery College principles, with the programming and the target audience expanded to promote living well among the broader population, as well as recovery education for people with lived experience of mental illness. The CMHA Central branch’s Thrive Learning Centre and the CMHA Winnipeg and Manitoba branch’s Well-being Learning Centre opened in September 2017 and January 2018, respectively.

Adoption: In the external context , the national policy context was supportive of a recovery and well-being approach; it was the focus of consultations over the 2008–2012 period prior to the release of Canada’s mental health strategy [ 27 ]. This enabled Manitoba bureaucrats to pressure provincial government leaders to cosponsor a 'Recovery Days in Mental Health' conference held in Winnipeg in June 2015. An English Recovery College champion was a keynote speaker and sparked interest in the model among CMHA branches in Manitoba. The Winnipeg Regional Health Authority (RHA), the major funder of the Winnipeg CMHA branch, also supported recovery and mental health promotion approaches. Informants reported that Manitoba’s culture of innovation and solidarity, with its many small rural communities, also aligned with the coproduction philosophy of inclusive innovation.

In the internal context , the Recovery College model resonated with existing branch cultures of deep commitment to recovery-oriented work and strong peer support foundations. CMHA’s federated structure allowed each branch autonomy to develop its own programming, with support from a national office. Attractive innovation features were the existing evidence base, emphasis on lived experience through coproduction in course development and facilitation, opportunity for student skill building, and flexibility to accommodate local needs and strengths. The instructional climate was also appealing, as it could offer people with lived experience a sense of community and could promote their self-efficacy and confidence while reducing the power imbalance and fostering relationships between staff and students. The Recovery College model could also offer a more immediate response in terms of educational support to people needing care and facing long wait times for traditional services.

“I would say there’s probably many other things besides instruction. I think there’s relationship-building that happens so there are connections between students and between the facilitators and the learners. It’s the development of a space that allows for people to develop skills that are unrelated to the content. So, people also learn skills like sharing in a group context, so confidence-building, self-efficacy. When you can cultivate a skill in one area, you build confidence, and you start to believe that you have the ability to learn and to develop new skills. So that sense of self-efficacy is very integral to the recovery and well-being journey”. [WPG-02]

The importance of individual characteristics was evident: passionate leaders in the Winnipeg and Central branches who were committed to advancing upstream mental health promotion and PSR were impressed by the model and together researched it further to inform adoption decisions. The coproduction process aligned with CMHA's "nothing about us without us" approach and could foster a sense of ownership. In both branches, the name Recovery College was changed to Learning Centre during the adoption process, as the new name better resonated with community and agency participants.

Implementation: In the external context , in early 2017, CMHA Winnipeg and Central branches met with CMHA National to implement Learning Centres. Although no new funding was made available by the RHAs, philosophical support enabled the repurposing of existing funding for recovery education and peer support. In 2018, CMHA National and CMHA Winnipeg leadership visited England to meet recovery-focused mental health services experts and to see the model in action. This visit was crucial in fostering strong relationships between the model initiators and CMHA leaders who discovered common visions to widen the target audience to anyone in the community interested in mental health issues, thereby making mental health a universal concern and promoting a living well approach. Collaboration with an Ontario-based psychiatric hospital, with similar values and interest in Recovery Colleges, supported programme evaluation to produce evidence of effectiveness.

Internally , the Winnipeg and Central branches collaborated on initial model and course development, and took a staged approach to opening their Learning Centres. In the Central branch, where resources were tighter and there was a large geographic area to serve, creative approaches to leverage local support and assets were used. Health professional placement students supported the small branch to prepare for launch and in doing so, encouraged staff buy-in. Another peer service organization provided funding support and this, along with community grants, covered staffing, technology, social marketing and other costs that are traditionally not eligible for provincial funding.

“[A] critical moment would be the establishment of a partnership. I think that was a critical moment. I walked away and I know my staff did, too, with an immense sense of relief after I could tell them that [a peer Manitoban mental health community organization] was on board to help make this a reality”. [PLP-22]

The Winnipeg branch also leveraged internal resources, including an existing peer support group whose members assisted in developing the first five courses.

“And so we actually relied on some communities that existed within our CMHA. So we had a group of individuals who are peer supporters to one another. They had taken our workshops in the past. And then they created, on their own, their own support group, and designed that support group based on their needs and on an educational focus. So we actually asked them if they would be our initial coproduction group”. [WPG-04]

The passion of individual CMHA staff and leaders, many with their own lived experience, made them champions who demonstrated their commitment to valuing expertise derived from lived experience. These individuals also helped build the external linkages with organizations and key people both nationally and internationally. Innovation features allowed for initial small-scale implementation, leveraging local assets and community strengths before expanding further. The flexibility to offer “something for everyone” and promote “living well in your community” garnered broad interest and unanimous buy-in from community members. The flexibility of the model also allowed the Winnipeg branch to retain PSR influences from their colleagues at Boston College.

The collaborative coproduction process fostered a sense of ownership, friendship building, balance across perspectives and acceptance within the classroom. This affirming process allowed room for creative input and for trial and error, with the process itself evolving to become more effective over time. It also facilitated the expansion of course offerings, as students were encouraged to lead future course development. Accompanying changes to the physical space and staff roles helped in welcoming the whole community, meeting the needs of vulnerable groups in society and addressing access barriers.

Assimilation: The Central branch has been unable to coproduce new Learning Centre material during the COVID-19 pandemic, yet it continues to offer its existing content. The Winnipeg Learning Centre was able to shift to virtual and then hybrid online and in-person coproduction activities, while ensuring fidelity to the core Recovery College principles.

“And some of the other things that are in the fidelity assessment are: Are you recovery-focused? Are you community-focused? Are you collaborating with the people who are consuming your services? So, it’s a really easy fidelity to conform to but also have room to be kind of creative because they’re not dictating what courses you should have. The fidelity is that you provide courses”. [WPG-04]

In Winnipeg, the Learning Centre continues to expand and evolve, and is reported to have had a gradual but transformative impact on organizational context and values within the branch, by providing a universally accessible platform that demonstrates the value of engaging people with lived experience at every step. The coproduction approach to course development has ensured that content remains current and relevant through creativity, diversity and responsiveness to people’s needs. Leaders’ commitment to the model and ongoing evaluation to ensure it is meeting local needs have supported wider assimilation of coproduction approaches in other branch programming as well. New leadership in the Central branch has expressed the desire to revive the Learning Centre’s coproduction activities.

See Fig. 3 for a summary of the Case 3 adoption, implementation and assimilation timeline.

figure 3

Case 3 CMHA Learning Centres coproduction adoption, implementation and assimilation timeline

Cross-case comparison

Adoption: Shifting ideas in the public policy realm and supportive external change agents created a conducive external context. In Cases 1 and 2, the shifting ideas pertained to interprofessional and intersectoral collaboration, while in Case 3, national and provincial discussions about a recovery and well-being orientation were important precursors to coproduction with people with lived experience. Internally, tension for change was evident in all cases; however, the process by which this unfolded differed, emerging as a natural progression of ongoing improvement efforts in Cases 1 and 3 and as a deliberate response to an opportunity created by an external change agent for local system-wide transformative change in Case 2. In all cases, passionate individuals, many with their own lived experience, and a philosophical approach that resonated deeply and widely were core features leading to adoption (see Table 6).

Implementation: In all three cases, building local partnerships and/or networks in the external context was integral to implementation. These partnerships and networks helped to overcome internal resistance within existing power structures (Case 1), created a community coalition that could move forwards in the face of resistance within traditional mental health services (Case 2), and offered material support and expertise to support implementation (Case 3). In Cases 1 and 2, there was no 'programme' per se, rather a philosophy steered by guiding questions, and in Case 3, the Recovery College model itself was designed to realize its embedded philosophy through coproduced educational programming. These features drove a micro-level movement for change (all cases) that was locally adapted, for example, to become “something for everyone” (in Case 3). Philosophical alignment also helped in building trust across collaborating organizations to support implementation and as a shared foundation for overcoming differences during implementation. Implementation proceeded incrementally at the grassroots level in all cases and by working with the willing (see Table  7 ).

Assimilation: There have been different forms of assimilation across the three cases, with transformative impacts not only on the organizations involved but also extending to the broader organizational and political context. A widely embraced mindset in the region, new structures and a growing international network (Case 1); impact on the local mental health strategy and continuing transformative effects on partnerships among community agencies (Case 2); and assimilation into other programmes and branches (Case 3) are some of the ongoing transformative impacts.

In Case 3, assimilation was characterized by customization, as both branches have changed the name and broadened the reach of Recovery Colleges, while maintaining fidelity to core principles. At the same time, challenges to sustaining such transformative change going forward were a concern without targeted leadership and funding (see Table  8 ).

Discussion

The analysis of these cases of adoption, implementation and assimilation of innovation demonstrates a range of factors from existing frameworks that shaped the stories of these coproduced innovations. The analysis also suggests additional considerations beyond established frameworks when aiming to engage structurally vulnerable people in coproduction activities, considerations that can help to overcome structural barriers and address power differentials in legacy systems.

Existing frameworks and models were very helpful in pointing to the interplay between the many factors operating at different levels in each context. These comprehensive frameworks provided a wide lens that was useful for thoroughly investigating different contextual elements. However, at times, this comprehensiveness made it difficult to tease out the essential causal story from our data to understand how each set of coproduced innovations emerged [ 28 ]. In our analysis, existing frameworks were most helpful when comparing across cases to identify overarching patterns, such as the influence of shifting policy ideas and external change agents in the external context during adoption and the role of community partners and network building in the implementation phase.

At the same time, particularly compelling considerations involving structurally vulnerable groups identified here were less evident in existing frameworks. Notably, there were two important differences in the nature of the ‘programme’ in this context. First, existing frameworks suggest a predefined 'programme' to adopt; however, there was no predefined programme per se in two of our cases. Instead, change was more ideological/philosophical in nature, captured simply by a set of guiding questions (two cases) or embedded as a central feature of an existing program with lots of room for customization (one case). The central philosophy in these cases corresponded to efforts to raise the profile of traditionally marginalized voices by shifting normative paradigms about what types of knowledge (for example, lived experience) and whose voices (for example, structurally vulnerable service users) should be heard in traditional systems. Second, the process (coproduction) could not be disentangled from this essential philosophy and, in some cases, it was met with considerable resistance. Including vulnerable people as genuine partners in coproducing innovations was perceived as a 'threat' to some managers (Case 1) or to the prevailing orthodoxy of 'Quality Improvement' (Case 2).

These 'programme' features suggest a second consideration in terms of implementation processes. The clear intention to shift the existing power balance in systems and within organizations needed a set of resources that went beyond the capacity of any one organization. While high-level leaders with their own lived experience were instrumental in providing vision and support, the implementation process relied heavily on relationship building across partner organizations and networking at the grassroots level rather than on top-down directives. Meaningful service user involvement was considered critical in making transformative service and system culture change, often disrupting traditional structures, networks and communication. Shared values, the development of a group-based belief system, core activities and a different relational environment and leadership [ 29 , 30 ] are central to social movement theories. Furthermore, the definitive objective of stepping outside organizations within the formal healthcare system to instead derive a new way of working across many community organizations led by people with lived experiences is not clearly captured in existing frameworks, which typically speak to innovation within existing structures of power in organizations and systems.

Finally, the cases analysed here suggest important differences in temporal dynamics at play that were not elaborated in existing models. Consistent with concepts of change in complex adaptive systems and theories of policy path dependence and agenda setting, adoption could occur through a slow internal tension for change that built over time and culminated in coproduction as a natural evolution of ongoing improvement efforts or through seemingly sudden 'transformative' reform where a confluence of interested groups came together in the face of an opportunity to do something differently. Ideas about change in complex adaptive systems such as emergence, self-organization, adaptation, change over time, distributed control and tipping points [31], and from policy literature such as path dependence [32], multiple streams theory [33] and distributed control could be informative in this respect [34]. Our participants suggested that because each case relates to a set of concepts and principles that were collectively generated over time, there was a need to better understand this process as it unfolded.

While existing models were helpful in identifying a wide range of factors to consider, and recent updates suggest a movement away from concepts such as 'programme' to 'innovation' [19], the temporal, relational and power dimensions discussed here were validated by our collaborators as equally important considerations. Exploring these dimensions will be the focus of future work.

Limitations and future work

This work is subject to several limitations. First, it is based on a case study of three examples of coproduction of health and social care innovations in different national contexts in the northern hemisphere. The findings may not be transferable elsewhere. Furthermore, when considering our findings in relation to the CFIR, DOI and assimilation frameworks, it is important to note that these frameworks were not specifically developed for an innovation process involving service users at all stages of innovation adoption, implementation and assimilation. However, the limitations in adopting and applying these frameworks here have led to a careful examination of what is unique to coproduction processes involving vulnerable populations. A forthcoming contribution will try to capture these unique elements and position them within the innovation, power, and social movement literatures. Finally, the analysis here is primarily based on our 'wave 1' home site findings from this longitudinal case study, and new insights may be gained from a deeper evaluation of our wave 2 and wave 3 findings. The latter pertain to processes of ongoing coproduction in practice and diffusion to other contexts, respectively, and will be analysed in forthcoming work.

Conclusions

While our case study was extremely helpful in identifying core considerations for the factors influencing the adoption, implementation and assimilation of three coproduced health and social care innovations, several nuanced considerations for applying existing theoretical frameworks in the coproduction context emerged: the nature of the 'intervention' being a philosophy rather than a concrete set of steps; the intertwining of intervention and process, and the need to study the evolution of the intervention itself as it emerges over time; greater attention to partnered processes as disruptors of existing power structures; and an emphasis on driving transformational change in organizational cultures. Future work will explore these considerations further.

Availability of data and materials

The datasets generated and/or analysed during the current study are not publicly available due to the study’s small sample size and the key informants’ roles as leaders within small organizations, making it difficult to deidentify their data. However, the datasets are available from the corresponding author upon reasonable request.

In some cases, these individuals also had lived experience of vulnerability, which further motivated their work, but this was not a specific requirement for study participation.

Abbreviations

  • Consolidated Framework for Implementation Research
  • (SingHealth) Centralized Institutional Review Board
  • Canadian Mental Health Association
  • Diffusion of innovation
  • Health and Social Care Partnership
  • McMaster University Research Ethics Board
  • Making Recovery Real
  • Portage la Prairie
  • Psychosocial rehabilitation
  • Regional health authority
  • Scottish Recovery Network
  • Third sector interface
  • United States

References

1. Robert G, Locock L, Williams O, Cornwell J, Donetto S, Goodrich J. Co-producing and co-designing. In: Dixon-Woods M, Martin G, editors. Elements of improving quality and safety in healthcare. Cambridge: Cambridge University Press; 2022.
2. Palmer V, Weavell W, Callander R, Piper D, Richard L, Maher L, et al. The Participatory Zeitgeist: an explanatory theoretical model of change in an era of coproduction and codesign in healthcare improvement. Med Humanit. 2018;45:247–57.
3. McGeachie M, Power G. Co-production in Scotland—a policy overview. 2017.
4. Osborne S, Radnor Z, Stokosch K. Co-production and the co-creation of value in public services. Public Manag Rev. 2016;18(5):645–59.
5. Iedema R, Merrick E, Piper D, Britton K, Gray J, Verma R, et al. Codesigning as a discursive practice in emergency health services: the architecture of deliberation. J Appl Behav Sci. 2010;46(1):73–91.
6. Sandhu S, Priebe S, Leavey G, Harrison I, Krotofil J, McPherson P, et al. Intentions and experiences of effective practice in mental health specific supported accommodation services: a qualitative interview study. BMC Health Serv Res. 2017;17(1):1–13.
7. Vackerberg N, Andersson A-C, Peterson A, Karltun A. What is best for Esther? A simple question that moves mindsets and improves care. BMC Health Serv Res. 2023;23(1):873.
8. Brown PR, Head BW. Navigating tensions in co-production: a missing link in leadership for public value. Public Adm. 2019;97(2):250–63.
9. First Nations Information Governance Centre. The First Nations principles of OCAP®. Akwesasne (ON): First Nations Information Governance Centre (FNIGC); [cited 2023 Nov 12]. Available from: https://fnigc.ca/ocap-training/
10. Fotaki M. Co-production under the financial crisis and austerity: a means of democratizing public services or a race to the bottom? J Manag Inq. 2015;24(4):433–8.
11. Mulvale G, Robert G. Special issue: engaging vulnerable populations in the co-production of public services. Int J Public Adm. 2021;44(9):711–4.
12. Williamson V, Larkin M, Reardon T, Pearcey S, Button R, Green I, et al. School-based screening for childhood anxiety problems and intervention delivery: a codesign approach. BMJ Open. 2022;12(6):e058089.
13. Nelson EC, Batalden PB, Godfrey MM. Quality by design: a clinical microsystems approach. 1st ed. San Francisco: Jossey-Bass; 2007.
14. Batalden M, Batalden P, Margolis P, Seid M, Armstrong G, Opipari-Arrigan L, et al. Coproduction of healthcare service. BMJ Qual Saf. 2016;25(7):509–17.
15. Grabovschi C, Loignon C, Fortin M. Mapping the concept of vulnerability related to health care disparities: a scoping review. BMC Health Serv Res. 2013;13(94):1–11.
16. Katz A, Hardy B-J, Firestone M, Lofters A, Morton-Ninomiya ME. Vagueness, power and public health: use of 'vulnerable' in public health literature. Crit Public Health. 2020;30(5):601–11.
17. Greenhalgh T, Robert G, MacFarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629.
18. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
19. Damschroder LJ, Reardon CM, Opra Widerquist MA, Lowery J. The updated Consolidated Framework for Implementation Research based on user feedback. Implement Sci. 2022;17(1):1–16.
20. Lozeau D, Langley A, Denis J-L. The corruption of managerial techniques by organizations. Hum Relat. 2002;55(5):537–64.
21. Robert G, Sarre S, Maben J, Griffiths P, Chable R. Exploring the sustainability of quality improvement interventions in healthcare organisations: a multiple methods study of the 10-year impact of the 'Productive Ward: Releasing Time to Care' programme in English acute hospitals. BMJ Qual Saf. 2020;29:31–40.
22. Yin RK. Case study research and applications: design and methods. 6th ed. Thousand Oaks (CA): SAGE Publications; 2018.
23. Maxwell JA. Designing a qualitative study. In: Bickman L, Rog DJ, editors. The SAGE handbook of applied social research methods. 2nd ed. Thousand Oaks (CA): SAGE Publications; 2009. p. 214–53.
24. Sharp C. Making Recovery Real in Dundee: a review with the Scottish Recovery Network. Glasgow: Scottish Recovery Network; 2018.
25. Crowther A, Taylor A, Toney R, Meddings S, Whale T, Jennings H, et al. The impact of Recovery Colleges on mental health staff, services and society. Epidemiol Psychiatr Sci. 2019;28(5):481–8.
26. Anfossi A. The current state of Recovery Colleges in the UK: final report. Nottingham: Implementing Recovery through Organisational Change (ImROC) and Nottinghamshire Healthcare NHS Foundation Trust; 2017.
27. Mental Health Commission of Canada. Changing directions, changing lives: the mental health strategy for Canada. Calgary: Mental Health Commission of Canada; 2012.
28. Giacomini M, Bourgeault I, Dingwall R, De Vries R. Theory matters in qualitative health research. In: The SAGE handbook of qualitative methods in health research. London: SAGE Publications; 2010. p. 125–56.
29. Tremblay M-C, Martin DH, Macaulay AC, Pluye P. Can we build on social movement theories to develop and improve community-based participatory research? A framework synthesis review. Am J Community Psychol. 2017;59(3–4):333–62.
30. Maton K. Empowering community settings: agents of individual development, community betterment, and positive social change. Am J Community Psychol. 2008;41:4–21.
31. Boehnert J. The visual representation of complexity: sixteen key characteristics of complex systems. In: Proceedings of RSD7, Relating Systems Thinking and Design 7; 2018 Oct 23–26; Turin, Italy. Available from: http://openresearch.ocadu.ca/id/eprint/2737/
32. Pierson P. When effect becomes cause: policy feedback and political change. World Politics. 1993;45(4):595–628.
33. Kingdon JW. Agendas, alternatives and public policies. 2nd ed. New York: Longman; 1995.
34. Greenhalgh T, Jackson C, Shaw S, Janamian T. Achieving research impact through co-creation in community-based health services: literature review and case study. Milbank Q. 2016;94(2):392–429.


Acknowledgements

We wish to thank all participants in this study for giving their time and for sharing their experiences. We also thank the study’s collaborators who provided important background to the cases contributing to the research design/direction, acted as local gatekeepers to the cases and/or who helped to interpret the data. Over the life of the research project, the collaborators have been: Louise Christie (Scottish Recovery Network), Marion Cooper (CMHA Manitoba & Winnipeg), Olivia Hanley (formerly of the Scottish Community Development Centre), Greg Kyllo (formerly of CMHA National), Erica McDiarmid (formerly of CMHA National), Susan Paxton (Scottish Community Development Centre), Denise Silverstone (CMHA National), Stephanie Skakun (CMHA Manitoba & Winnipeg) and Nicoline Vackerberg (Region Jönköping County). Without their involvement, this study would not have been possible. Finally, we thank Sophie Sarre for her contributions to wave 1 interviewing and early coding framework development and data coding.

Funding

This manuscript draws on research supported by the Social Sciences and Humanities Research Council Partnership Development grant no. 890-2018-0116. The funders had no role in the design of the study; in the collection, analysis and interpretation of data; or in writing the manuscript.

Author information

Authors and affiliations

DeGroote School of Business, McMaster University, 4350 South Service Road, Suite 421, Burlington, ON, L7L 5R8, Canada

Gillian Mulvale & Jenn Green

Florence Nightingale Faculty of Nursing, Midwifery and Palliative Care, King’s College London, London, United Kingdom

Glenn Robert

Institute of Health and Neurodevelopment, Aston University, Birmingham, United Kingdom

Michael Larkin & Shioma-Lei Craythorne

Region Jönköping County, Jönköping, Sweden

Nicoline Vackerberg

The Jönköping Academy for Improvement of Health and Welfare, School of Health and Welfare, Jönköping University, Jönköping, Sweden

Glenn Robert, Nicoline Vackerberg & Sofia Kjellström

Department of Health Research Methods, Evidence, and Impact (HEI), Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada

Puspita Hossain

School of Rehabilitation Science, McMaster University, Hamilton, ON, Canada

Sandra Moll

School of Health and Welfare, Jönköping University, Jönköping, Sweden

SingHealth Office of Regional Health, Singapore Health Services, Singapore, Singapore


Contributions

GM, GR, ML, SK and SM conceived of and designed the study. GM, JG, NV, EL and SC collected and analysed the data under the guidance of GM, GR, ML and SK. GM, JG and PH interpreted the data. GM, GR, JG and PH drafted the manuscript. ML, NV, SK, SM, EL and SC reviewed and commented on different versions of the paper. GR, GM and JG revised the manuscript following peer review, in consultation with the other authors. All the authors have read and approved the final manuscript.

Corresponding author

Correspondence to Gillian Mulvale.

Ethics declarations

Ethics approval and consent to participate

Research ethics clearance was obtained from the relevant academic research ethics boards (McMaster University Research Ethics Board [MREB Project ID 2066], Aston University Ethics Committee [Rec Ref #1611]; King’s College London Research Ethics Office [Reference Number MOD-19/20-17350]; SingHealth Centralised Institutional Review Board [CIRB Ref# 2020/2341]; and Swedish Ethical Review Authority [Etikprövningsmyndigheten, Dnr 2019-06373]), and in light of this, ethics review was waived by the boards of the collaborating organizations (Canadian Mental Health Association, Manitoba & Winnipeg branch, the East of Scotland Research Ethics Service). Participants received letters of information outlining the study objectives, protocol and risks prior to consenting in writing.

Consent for publication

Consent for publication was received from all key informants.

Competing interests

The authors declare that they have no competing interests. All authors approved the final version of the paper.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Provides additional details about the sampling frame (that is, the organizations the interviewees are associated with, the document titles and types).

Additional file 2.

Demonstrates how concepts from the CFIR, DOI, and compatibility gaps frameworks were incorporated into the coding framework.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Mulvale, G., Green, J., Robert, G. et al. Adopting, implementing and assimilating coproduced health and social care innovations involving structurally vulnerable populations: findings from a longitudinal, multiple case study design in Canada, Scotland and Sweden. Health Res Policy Sys 22 , 42 (2024). https://doi.org/10.1186/s12961-024-01130-w

Download citation

Received: 12 November 2023

Accepted: 05 March 2024

Published: 02 April 2024

DOI: https://doi.org/10.1186/s12961-024-01130-w


Keywords

  • Coproduction
  • Structurally vulnerable populations
  • Implementation
  • Assimilation
  • Transformation



Types of studies and research design

Mukul Chandra Kapoor

Department of Anesthesiology, Max Smart Super Specialty Hospital, New Delhi, India

Medical research has evolved from individual experts describing their opinions and techniques to scientifically designed, methodology-based studies. Evidence-based medicine (EBM) was established to re-evaluate medical facts and remove various myths in clinical practice. Research methodology is now protocol based, with predefined steps. Studies are classified based on the method of collection and evaluation of data. Clinical study methodology now needs to comply with strict ethical, moral, truth and transparency standards, ensuring that no conflict of interest is involved. A medical research pyramid has been designed to grade the quality of evidence and help physicians determine the value of the research. Randomised controlled trials (RCTs) have become the gold standard for quality research. EBM now places systematic reviews and meta-analyses at a level higher than RCTs, to overcome deficiencies in randomised trials arising from errors in methodology and analysis.

INTRODUCTION

Expert opinion, experience and authoritarian judgement were once the norm in clinical medical practice. At scientific meetings, one often heard senior professionals emphatically declare, 'In my experience, … what I have said is correct!' In 1981, articles published by Sackett et al. introduced 'critical appraisal', as the authors felt a need to teach methods of understanding scientific literature and applying it at the bedside.[1] To improve clinical outcomes, clinical expertise must be complemented by the best external evidence.[2] Conversely, without clinical expertise, good external evidence may be used inappropriately [Figure 1]. Practice becomes outdated if it is not updated with current evidence, depriving patients of the best available therapy.

[Figure 1: Triad of evidence-based medicine]

EVIDENCE-BASED MEDICINE

In 1971, in his book 'Effectiveness and Efficiency', Archibald Cochrane highlighted the lack of reliable evidence behind many accepted health-care interventions.[3] This triggered re-evaluation of many established 'supposed' scientific facts and awakened physicians to the need for evidence in medicine. Evidence-based medicine (EBM) thus evolved and was defined as 'the conscientious, explicit and judicious use of the current best evidence in making decisions about the care of individual patients'.[2]

The goal of EBM was to bring scientific rigour to clinical practice so as to achieve consistency, efficiency, effectiveness, quality and safety, reduce dilemmas and limit idiosyncrasies.[4] EBM required the physician to diligently assess therapy, make clinical adjustments using the best available external evidence, stay aware of current research and identify clinical pathways that ensure the best patient outcomes.[5]

With widespread internet use, a phenomenally large number of publications, training resources and media resources are available, but determining the quality of this literature is difficult for a busy physician. Abstracts are freely available on the internet, but full-text articles often require a subscription. To complicate matters, contradictory studies are published, making decision-making difficult.[6] Publication bias, especially against negative studies, makes matters worse.

In 1993, the Cochrane Collaboration was founded by Ian Chalmers and others to create and disseminate up-to-date reviews of randomised controlled trials (RCTs) to help health-care professionals make informed decisions.[7] In 1995, the American College of Physicians and the British Medical Journal Publishing Group collaborated to publish the journal 'Evidence-Based Medicine', leading to the evolution of EBM in all spheres of medicine.

MEDICAL RESEARCH

Medical research needs to be conducted to increase knowledge about the human species and its social/natural environment and to combat disease and infirmity in humans. Research should be conducted in a manner conducive to, and consistent with, the dignity and well-being of the participant; in a professional and transparent manner; and with minimal risk.[8] Research must therefore be subjected to careful evaluation at all stages: research design and experimentation; results and their implications; the objective of the research; anticipated benefits and dangers; potential uses and abuses of the experiment and its results; and the safety of human life. Table 1 lists the principles any research should follow.[8]

[Table 1: General principles of medical research]

Types of study design

Medical research is classified into primary and secondary research. Clinical/experimental studies are performed in primary research, whereas secondary research consolidates available studies as reviews, systematic reviews and meta-analyses. Three main areas in primary research are basic medical research, clinical research and epidemiological research [ Figure 2 ]. Basic research includes fundamental research in fields shown in Figure 2 . In almost all studies, at least one independent variable is varied, whereas the effects on the dependent variables are investigated. Clinical studies include observational studies and interventional studies and are subclassified as in Figure 2 .

[Figure 2: Classification of types of medical research]

An interventional clinical study is performed with the purpose of studying or demonstrating the clinical or pharmacological properties of drugs/devices and their side effects, and to establish their efficacy or safety. Such studies also include those in which surgical, physical or psychotherapeutic procedures are examined.[9] Studies on drugs/devices are subject to legal and ethical requirements, including the Drugs Controller General of India (DCGI) directives. They require the approval of a DCGI-recognized Ethics Committee and must be performed in accordance with the rules of 'Good Clinical Practice'.[10] Further details are available under the 'Methodology for research II' section in this issue of IJA. In 2004, the World Health Organization advised registration of all clinical trials in a public registry. In India, the Clinical Trials Registry of India was launched in 2007 (www.ctri.nic.in). The International Committee of Medical Journal Editors (ICMJE) mandates its member journals to publish only registered trials.[11]

An observational clinical study is one in which knowledge gained from treating persons with drugs is analysed using epidemiological methods. In these studies, diagnosis, treatment and monitoring are performed exclusively according to medical practice and not according to a specified study protocol.[9] They are subclassified as per Figure 2.

Epidemiological studies have two basic approaches, the interventional and observational. Clinicians are more familiar with interventional research, whereas epidemiologists usually perform observational research.

Interventional studies are experimental in character and are subdivided into field and group studies, for example, iodine supplementation of cooking salt to prevent hypothyroidism. Many interventions are unsuitable for RCTs, as the exposure may be harmful to the subjects.

Observational studies can be subdivided into cohort, case–control, cross-sectional and ecological studies.

  • Cohort studies are suited to detecting associations between exposure and the development of disease. They are normally prospective studies of two groups of healthy subjects observed over time, in which one group is exposed to a specific substance whereas the other is not. The occurrence of the disease can then be compared between the two groups. Cohort studies can also be retrospective.
  • Case–control studies are retrospective analyses that compare how frequently people with a disease (cases) and people without it (controls) were exposed to a factor of interest. An incidence rate cannot be calculated, and there is also a risk of selection bias and faulty recall. A worked example of the effect measures typically derived from each design is sketched after this list.
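To make the distinction concrete, the following is a minimal sketch, using hypothetical counts, of how a relative risk would be computed from a cohort study and an odds ratio from a case–control study. The numbers and variable names are illustrative only and are not drawn from any study cited in this article.

```python
# Hypothetical 2x2 tables; all counts are invented for illustration.

# Cohort study: follow exposed and unexposed groups forward in time.
exposed_cases, exposed_total = 30, 1000        # disease events among the exposed
unexposed_cases, unexposed_total = 10, 1000    # disease events among the unexposed

risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
relative_risk = risk_exposed / risk_unexposed  # incidence can be estimated, so RR is valid
print(f"Cohort study relative risk: {relative_risk:.2f}")

# Case-control study: start from cases and controls, look back at exposure.
cases_exposed, cases_unexposed = 40, 60        # exposure history among cases
controls_exposed, controls_unexposed = 20, 80  # exposure history among controls

odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)
print(f"Case-control study odds ratio: {odds_ratio:.2f}")  # incidence is unknown, so only OR is reported
```

Because a case–control study samples on disease status rather than exposure, only the odds ratio is directly estimable; the relative risk requires the incidence information that a cohort design provides.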

Secondary research

Narrative review

An expert senior author writes about a particular field, condition or treatment, providing an overview fortified by his or her experience. The article is in a narrative format. Its limitation is that one cannot tell whether the recommendations are based on the author's clinical experience or the available literature, or why some studies were given more emphasis than others. It can be biased, with selective citation of reports that reinforce the author's views on a topic.[12]

Systematic review

Systematic reviews methodically and comprehensively identify studies focused on a specified topic, appraise their methodology, summate the results, identify key findings and reasons for differences across studies, and cite limitations of current knowledge.[13] They adhere to reproducible methods and recommended guidelines.[14] The methods used to compile data are explicit and transparent, allowing the reader to gauge the quality of the review and the potential for bias.[15]

A systematic review can be presented in text or graphic form. In graphic form, the data from different trials can be plotted with the point estimate and 95% confidence interval for each study presented on an individual line (a forest plot). A properly conducted systematic review presents the best available research evidence for a focused clinical question. The review team may obtain information not available in the original reports from the primary authors. This helps establish whether findings are consistent and generalisable across populations, environments, therapies and groups.[12] A systematic review attempts to reduce bias in the identification and selection of studies for review by using a comprehensive search strategy and specifying inclusion criteria. The strength of a systematic review lies in the transparency of each phase and in highlighting the merits of each decision made while compiling information.
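As an illustration of the graphic form described above, the following is a minimal sketch of a forest-style plot built with matplotlib. The study names, point estimates and confidence intervals are invented for illustration and are not taken from any review.

```python
import matplotlib.pyplot as plt

# Hypothetical study-level results: (name, point estimate, lower CI, upper CI).
studies = [
    ("Study A", 0.80, 0.60, 1.05),
    ("Study B", 0.72, 0.55, 0.95),
    ("Study C", 0.90, 0.70, 1.15),
    ("Pooled",  0.79, 0.68, 0.92),
]

names = [s[0] for s in studies]
positions = list(range(len(studies), 0, -1))     # plot the first study at the top

fig, ax = plt.subplots(figsize=(6, 3))
for (name, est, lo, hi), y in zip(studies, positions):
    ax.plot([lo, hi], [y, y], color="black")     # 95% confidence interval as a horizontal line
    ax.plot(est, y, "s", color="black")          # point estimate as a square marker

ax.axvline(1.0, linestyle="--", color="grey")    # line of no effect for a ratio measure
ax.set_yticks(positions)
ax.set_yticklabels(names)
ax.set_xlabel("Risk ratio (95% CI)")
plt.tight_layout()
plt.show()
```

Each study occupies one line, with the pooled estimate conventionally shown at the bottom, mirroring the graphic presentation the paragraph above describes.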

Meta-analysis

A review team compiles aggregate-level data from each primary study and, in some cases, solicits individual data from the authors of the primary studies.[16, 17] Although more difficult to perform, individual patient meta-analyses offer advantages over aggregate-level analyses.[18] These mathematically pooled results are referred to as a meta-analysis. Combining data from well-conducted primary studies provides a more precise estimate of the 'true effect'.[19] Pooling the samples of individual studies increases the overall sample size, enhances statistical power, narrows the confidence interval and thereby improves statistical value.
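The pooling step can be illustrated with a minimal fixed-effect, inverse-variance sketch. The effect estimates and standard errors below are invented for illustration; a real meta-analysis would also assess heterogeneity and consider random-effects models.

```python
import math

# Hypothetical per-study effect estimates (log risk ratios) and standard errors.
log_effects = [-0.22, -0.33, -0.11]
std_errors = [0.14, 0.12, 0.16]

# Fixed-effect inverse-variance weights: more precise studies get more weight.
weights = [1 / se**2 for se in std_errors]
pooled_log = sum(w * y for w, y in zip(weights, log_effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Back-transform to a risk ratio with a 95% confidence interval.
pooled_rr = math.exp(pooled_log)
ci_low = math.exp(pooled_log - 1.96 * pooled_se)
ci_high = math.exp(pooled_log + 1.96 * pooled_se)
print(f"Pooled risk ratio: {pooled_rr:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

Note how the pooled interval is narrower than that of any single hypothetical study, which is the statistical gain from pooling described above.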

The structured process of Cochrane Collaboration systematic reviews has contributed to the improvement of their quality. For a meta-analysis to be definitive, the primary RCTs should have been conducted methodically. When the existing studies have important scientific and methodological limitations, such as small sample sizes, the systematic review may identify where gaps exist in the available literature.[20] RCTs and systematic reviews of several randomised trials are less likely to mislead us, and thereby help us judge whether an intervention is better.[2] Practice guidelines supported by large RCTs and meta-analyses are considered the 'gold standard' in EBM. This issue of IJA is accompanied by an editorial on the importance of EBM in research and practice (Guyat and Sriganesh 471_16).[21] The EBM pyramid grading the value of different types of research studies is shown in Figure 3.

[Figure 3: The evidence-based medicine pyramid]

In the last decade, a number of studies and guidelines brought about path-breaking changes in anaesthesiology and critical care. Some guidelines, such as the 'Surviving Sepsis Guidelines-2004',[22] were later found to be flawed and biased. A number of large RCTs were rejected because their findings were erroneous. Another classic example is that of ENIGMA-I (Evaluation of Nitrous oxide In the Gas Mixture for Anaesthesia),[23] which implicated nitrous oxide in poor outcomes, whereas ENIGMA-II,[24, 25] conducted later by the same investigators, declared it safe. The rise and fall of the 'tight glucose control' regimen followed a similar course.[26]

Although RCTs are considered the 'gold standard' in research, their status is at a crossroads today. RCTs may be compromised by conflicts of interest and must therefore be evaluated with careful scrutiny. EBM can promote evidence reflected in RCTs and meta-analyses, but it cannot promulgate evidence that is not reflected in RCTs. Flawed RCTs and meta-analyses may bring forth erroneous recommendations. EBM should therefore not be restricted to RCTs and meta-analyses but must involve tracking down the best external evidence with which to answer our clinical questions.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.



Collaborative University of Cincinnati Cancer Center team opens Phase 2 brain tumor trial

Falk catalyst award funds next phase of research seeking new treatments for glioblastomas.


A multidisciplinary team of University of Cincinnati Cancer Center researchers has opened a Phase 2 clinical trial to test a new combination treatment for glioblastomas (GBM), the most deadly form of brain tumor.

The team, led by UC’s Pankaj Desai, PhD, and Trisha Wise-Draper, MD, PhD, has been awarded a Catalyst Research Award from the Dr. Ralph and Marian Falk Medical Research Trust to move the trial forward.

Study background

Difficult to diagnose at early stages, GBMs are aggressive brain tumors that become symptomatic once the tumor is substantial. Current treatments include immediate surgery to safely remove as much tumor as possible, radiation and chemotherapy, but the tumor often recurs or becomes resistant to treatments. The average patient survives no more than 15 months after diagnosis. 

Drug-based treatments for GBMs face an additional challenge known as the blood-brain barrier, which only allows certain compounds into the brain based on their physical and chemical properties.  

The research team is focused on the use of a drug called letrozole that has been used for more than 20 years as a treatment for breast cancer. The drug targets an enzyme called aromatase that is present in breast cancer cells and helps the cells grow.

Early research in Desai’s lab found that aromatase was present in brain tumor cells, making letrozole a potential new treatment for GBMs.


Phase 0/1 trial results

To bring letrozole from Desai’s lab to patients’ bedsides, he collaborated with Wise-Draper and neuro-oncologists and neurosurgeons at  UC’s Brain Tumor Center  to launch a Phase 0/1 clinical trial. 

“In the academic setting, we are very good at doing molecular research that enhances our  understanding of the mechanism of disease and preclinical characterization of efficacy, safety and other aspects of drug development research,” said Desai, professor and chair of the Pharmaceutical Sciences Division and director of the drug development graduate program in UC’s James L. Winkle College of Pharmacy. “But you can’t translate this into a clinical trial without a Phase 1 clinical trial expert like Dr. Wise-Draper and the experts at the Brain Tumor Center.” 

The researchers published the results of the Phase 0/1 trial March 26 in Clinical Cancer Research , a journal of the American Association for Cancer Research.


“Letrozole was safe up to the highest dose, and there were no safety concerns in the Phase 0/1 trial,” said Wise-Draper, section head of Medical Oncology and professor in the Division of Hematology/Oncology in UC’s College of Medicine. “The biggest conclusion is that it was safe and that we could reach what we felt was going to be the effective dose based on Dr. Desai’s preclinical work.” 

The research team collected tumor tissues from patients enrolled in the Phase 0/1 trial and found that letrozole was crossing the blood-brain barrier when they analyzed the samples in Desai’s lab. 

“We can categorically show that in humans the drug actually crosses and reaches the brain tumor at concentrations that we believe are likely to be most efficacious,” Desai said. 

Phase 2 trial design


Since GBMs are aggressive and complicated tumors, Desai said most likely new effective treatments will be combinations of drugs instead of one single drug. 

In the Phase 2 trial, patients will be given letrozole in combination with a chemotherapy drug called temozolomide that is already approved as a GBM treatment. Desai said preclinical research in his lab and input from Brain Tumor Center collaborators, including neuro-oncologist and former UC faculty member Soma Sengupta, suggested this combination treatment could be more effective than letrozole alone. 

A total of 19 patients with recurrent GBM who are no longer eligible for additional surgery will be  enrolled in the first stage of the trial. The results from this trial will guide the design of future larger Phase 2 trials. 

The team estimates it will complete enrollment within two years, and two patients have already been enrolled. 

Collaboration and funding support

Wise-Draper and Desai have worked together on various research projects for nearly 15 years and said this project would not be moving forward without the varied expertise each team member brings. 

“I think collaboration with multidisciplinary teams is critical to be able to have the expertise and all the components you need, including biostatistics, pharmacokinetics, clinical, basic science and neuro-oncology expertise,” Wise-Draper said. “The future of all science is team science. No one really can do everything on their own anymore because we’re all too specialized.” 

“Only academic centers with integrated scientific and clinical expertise are able to move their molecules from the research bench to clinical trials,” Desai added. “It takes a lot of persistence, ups and downs, highs and lows of funding, but we have been supported by a very strong team of people. It’s a journey that has taken a while and a lot of hard work by a number of people, and we’re in a very exciting stage.”

Early-stage support for the preclinical and clinical trial studies was provided by the UC Brain Tumor Center, where investigators from UC’s colleges of Medicine, Pharmacy, Engineering and Applied Science and Cincinnati Children’s Hospital collaborate on brain tumor research. 

UC’s Brain Tumor Center provided direct support for the completion of the Phase 0/1 trial and some of the correlative mechanistic studies that will continue during the Phase 2 trials using funds raised in the annual Walk Ahead for a Brain Tumor Discoveries fundraiser. 

The Falk Catalyst Award provides up to $350,000 in seed funding to support translational research projects, which the researchers said was crucial in opening the new trial.  

“Oftentimes the funding is somewhat limited for initial clinical trial development compared to many other more early-stage studies that you can do,” Desai said. “So that gap is filled by foundations like the Falk Medical Research Trust, and that really is very helpful and plays a critical role in accelerating clinical development.”

“It would not be possible if we didn’t have the funding to be able to bring this combination into patients that desperately need new treatment options,” Wise-Draper said.  

As the clinical trial progresses, the team is also collaborating to find other drugs to combine with letrozole to treat GBMs, funded by a $1.19 million  National Institutes of Health/National Institute of Neurological Disorders and Stroke grant . The team is already preparing a proposal for larger confirmatory Phase 2 studies and expanding the opportunities for cutting-edge brain tumor clinical trials in Cincinnati.

Desai said the ongoing research includes additional collaboration from experts including David Plas, PhD, Biplab DasGupta, PhD, and Tim Phoenix, PhD (molecular/cancer biology); Gary Gudelsky, PhD (neuro-pharmacology); Rekha Chaudhary, MD, and Lalanthica Yogendran, MD (neuro-oncology); Mario Medvedovic, PhD (bioinformatics and genomics); and Shesh Rai, PhD (biostatistics). Many graduate students, postdoctoral fellows and the clinical trials support staff also provide essential support for the project.


For more information on the trial, please call 513-584-7698 or email  [email protected] .

The team also recently published research in the International Journal of Molecular Sciences reviewing approaches to overcome treatment resistance to temozolomide to treat GBMs.



