
Chapter 5. Sampling

Introduction

Most Americans will experience unemployment at some point in their lives. Sarah Damaske ( 2021 ) was interested in learning about how men and women experience unemployment differently. To answer this question, she interviewed unemployed people. After conducting a “pilot study” with twenty interviewees, she realized she was also interested in finding out how working-class and middle-class persons experienced unemployment differently. She found one hundred persons through local unemployment offices. She purposefully selected a roughly equal number of men and women and working-class and middle-class persons for the study. This would allow her to make the kinds of comparisons she was interested in. She further refined her selection of persons to interview:

I decided that I needed to be able to focus my attention on gender and class; therefore, I interviewed only people born between 1962 and 1987 (ages 28–52, the prime working and child-rearing years), those who worked full-time before their job loss, those who experienced an involuntary job loss during the past year, and those who did not lose a job for cause (e.g., were not fired because of their behavior at work). ( 244 )

The people she ultimately interviewed compose her sample. They represent (“sample”) the larger population of the involuntarily unemployed. This “theoretically informed stratified sampling design” allowed Damaske “to achieve relatively equal distribution of participation across gender and class,” but it came with some limitations. For one, the unemployment centers were located in primarily White areas of the country, so there were very few persons of color interviewed. Qualitative researchers must make these kinds of decisions all the time—who to include and who not to include. There is never an absolutely correct decision, as the choice is linked to the particular research question posed by the particular researcher, although some sampling choices are more compelling than others. In this case, Damaske made the choice to foreground both gender and class rather than compare all middle-class men and women or women of color from different class positions or just talk to White men. She leaves the door open for other researchers to sample differently. Because science is a collective enterprise, it is quite likely that someone will be inspired to conduct a study similar to Damaske’s but with an entirely different sample.

This chapter is all about sampling. After you have developed a research question and have a general idea of how you will collect data (observations or interviews), how do you go about actually finding people and sites to study? Although there is no “correct number” of people to interview, the sample should follow the research question and research design. You might remember studying sampling in a quantitative research course. Sampling is important here too, but it works a bit differently. Unlike quantitative research, qualitative research involves nonprobability sampling. This chapter explains why this is so and what qualities instead make a good sample for qualitative research.

Quick Terms Refresher

  • The population is the entire group that you want to draw conclusions about.
  • The sample is the specific group of individuals that you will collect data from.
  • The sampling frame is the actual list of individuals that the sample will be drawn from. Ideally, it should include the entire target population (and nobody who is not part of that population).
  • The sample size is how many individuals (or units) are included in your sample.

The “Who” of Your Research Study

After you have turned your general research interest into an actual research question and identified an approach you want to take to answer that question, you will need to specify the people you will be interviewing or observing. In most qualitative research, the objects of your study will indeed be people. In some cases, however, your objects might be content left by people (e.g., diaries, yearbooks, photographs) or documents (official or unofficial) or even institutions (e.g., schools, medical centers) and locations (e.g., nation-states, cities). Chances are, whatever “people, places, or things” are the objects of your study, you will not really be able to talk to, observe, or follow every single individual/object of the entire population of interest. You will need to create a sample of the population. Sampling in qualitative research has different purposes and goals than sampling in quantitative research. Sampling in both allows you to say something of interest about a population without having to include the entire population in your sample.

We begin this chapter with the case of a population of interest composed of actual people. After we have a better understanding of populations and samples that involve real people, we’ll discuss sampling in other types of qualitative research, such as archival research, content analysis, and case studies. We’ll then move to a larger discussion about the difference between sampling in qualitative research generally versus quantitative research, then we’ll move on to the idea of “theoretical” generalizability, and finally, we’ll conclude with some practical tips on the correct “number” to include in one’s sample.

Sampling People

To help think through samples, let’s imagine we want to know more about “vaccine hesitancy.” We’ve all lived through 2020 and 2021, and we know that a sizable number of people in the United States (and elsewhere) were slow to accept vaccines, even when these were freely available. By some accounts, about one-third of Americans initially refused vaccination. Why is this so? Well, as I write this in the summer of 2021, we know that some people actively refused the vaccination, thinking it was harmful or part of a government plot. Others were simply lazy or dismissed the necessity. And still others were worried about harmful side effects. The general population of interest here (all adult Americans who were not vaccinated by August 2021) may be as many as eighty million people. We clearly cannot talk to all of them. So we will have to narrow the number to something manageable. How can we do this?


First, we have to think about our actual research question and the form of research we are conducting. I am going to begin with a quantitative research question. Quantitative research questions tend to be simpler to visualize, at least when we are first starting out doing social science research. So let us say we want to know what percentage of each kind of resistance is out there and how race or class or gender affects vaccine hesitancy. Again, we don’t have the ability to talk to everyone. But harnessing what we know about normal probability distributions (see quantitative methods for more on this), we can find this out through a sample that represents the general population. We can’t really address these particular questions if we only talk to White women who go to college with us. And if you are really trying to generalize the specific findings of your sample to the larger population, you will have to employ probability sampling, a sampling technique in which the researcher sets a few selection criteria and then chooses members of the population at random. Why randomly? If selection is truly random, all members have an equal opportunity to be part of the sample, and thus we avoid the problem of having only our friends and neighbors (who may be very different from other people in the population) in the study. Mathematically, there is going to be a certain number that will be large enough to allow us to generalize our particular findings from our sample population to the population at large. It might surprise you how small that number can be. Election polls of no more than one thousand people are routinely used to predict actual election outcomes of millions of people. Below that number, however, you will not be able to make generalizations. Talking to five people at random is simply not enough people to predict a presidential election.
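For readers who find a quick sketch in code helpful, here is a minimal illustration of the logic of probability sampling in Python. The population size and ID numbers are invented placeholders; in a real poll, assembling the sampling frame is the hard part.

```python
import random

# Hypothetical sampling frame: ID numbers standing in for every member of
# the population of interest (e.g., roughly eighty million unvaccinated adults).
population_size = 80_000_000
frame = range(population_size)  # a stand-in for a real roster of people

# Simple random sample: every member of the frame has an equal chance of
# selection, which is what allows generalization (with a known margin of
# error) from about one thousand respondents to millions of people.
sample_ids = random.sample(frame, k=1000)
print(len(sample_ids), sample_ids[:5])
```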

In order to answer quantitative research questions of causality, one must employ probability sampling. Quantitative researchers try to generalize their findings to a larger population. Samples are designed with that in mind. Qualitative researchers ask very different questions, though. Qualitative research questions are not about “how many” of a certain group do X (in this case, what percentage of the unvaccinated hesitate out of concern about safety rather than reject vaccination on political grounds). Qualitative research employs nonprobability sampling. By definition, not everyone has an equal opportunity to be included in the sample. The researcher might select White women they go to college with to provide insight into racial and gender dynamics at play. Whatever is found by doing so will not be generalizable to everyone who has not been vaccinated, or even all White women who have not been vaccinated, or even all White women who have not been vaccinated at this particular college. That is not the point of qualitative research at all. This is a really important distinction, so I will repeat in bold: Qualitative researchers are not trying to statistically generalize specific findings to a larger population. They have not failed when their sample cannot be generalized, as that is not the point at all.

In the previous paragraph, I said it would be perfectly acceptable for a qualitative researcher to interview five White women with whom she goes to college about their vaccine hesitancy “to provide insight into racial and gender dynamics at play.” The key word here is “insight.” Rather than use a sample as a stand-in for the general population, as quantitative researchers do, the qualitative researcher uses the sample to gain insight into a process or phenomenon. The qualitative researcher is not going to be content with simply asking each of the women to state her reason for not being vaccinated and then draw conclusions that, because one in five of these women were concerned about their health, one in five of all people were also concerned about their health. That would be, frankly, a very poor study indeed. Rather, the qualitative researcher might sit down with each of the women and conduct a lengthy interview about what the vaccine means to her, why she is hesitant, how she manages her hesitancy (how she explains it to her friends), what she thinks about others who are unvaccinated, what she thinks of those who have been vaccinated, and what she knows or thinks she knows about COVID-19. The researcher might include specific interview questions about the college context, about their status as White women, about the political beliefs they hold about racism in the US, and about how their own political affiliations may or may not provide narrative scripts about “protective whiteness.” There are many interesting things to ask and learn about and many things to discover. Where a quantitative researcher begins with clear parameters to set their population and guide their sample selection process, the qualitative researcher is discovering new parameters, making it impossible to engage in probability sampling.

Looking at it this way, sampling for qualitative researchers needs to be more strategic. More theoretically informed. What persons can be interviewed or observed who would provide maximum insight into what is still unknown? In other words, qualitative researchers think through what cases they could learn the most from, and those are the cases selected to study: “What would be ‘bias’ in statistical sampling, and therefore a weakness, becomes intended focus in qualitative sampling, and therefore a strength. The logic and power of purposeful sampling lie in selecting information-rich cases for study in depth. Information-rich cases are those from which one can learn a great deal about issues of central importance to the purpose of the inquiry, thus the term purposeful sampling” (Patton 2002:230; emphases in the original).

Before selecting your sample, though, it is important to clearly identify the general population of interest. You need to know this before you can determine the sample. In our example case, it is “adult Americans who have not yet been vaccinated.” Depending on the specific qualitative research question, however, it might be “adult Americans who have not been vaccinated for political reasons” or even “college students who have not been vaccinated.” What insights are you seeking? Do you want to know how politics is affecting vaccination? Or do you want to understand how people manage being an outlier in a particular setting (unvaccinated where vaccinations are heavily encouraged if not required)? More clearly stated, your population should align with your research question. Think back to the opening story about Damaske’s work studying the unemployed. She drew her sample narrowly to address the particular questions she was interested in pursuing. Knowing your questions or, at a minimum, why you are interested in the topic will allow you to draw the best sample possible to achieve insight.

Once you have your population in mind, how do you go about getting people to agree to be in your sample? In qualitative research, it is permissible to find people by convenience. Just ask for people who fit your sample criteria and see who shows up. Or reach out to friends and colleagues and see if they know anyone who fits. Don’t let the name convenience sampling mislead you; this is not exactly “easy,” and it is certainly a valid form of sampling in qualitative research. The more unknowns you have about what you will find, the more convenience sampling makes sense. If you don’t know how race or class or political affiliation might matter, and your population is unvaccinated college students, you can construct a sample of college students by placing an advertisement in the student paper or posting a flyer on a notice board. Whoever answers is your sample. That is what is meant by a convenience sample. A common variation of convenience sampling is snowball sampling. This is particularly useful if your target population is hard to find. Let’s say you posted a flyer about your study and only two college students responded. You could then ask those two students for referrals. They tell their friends, and those friends tell other friends, and, like a snowball, your sample gets bigger and bigger.
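Here is a toy sketch of how snowball recruitment grows, written in Python. Everything in it is invented for illustration: in a real study, referrals come from your actual participants, not from a simulated function.

```python
import random

def simulated_referrals(participant, max_referrals=3):
    """Stand-in for asking a participant to name friends who also fit the criteria."""
    return [f"{participant}-referral-{i}"
            for i in range(random.randint(0, max_referrals))]

def snowball_recruit(seeds, target_size):
    """Recruit outward from seed participants until the target size is reached."""
    recruited, queue = [], list(seeds)
    while queue and len(recruited) < target_size:
        person = queue.pop(0)
        recruited.append(person)
        queue.extend(simulated_referrals(person))  # friends tell friends
    return recruited

# Two students answered the flyer; their referrals "snowball" the sample.
sample = snowball_recruit(seeds=["student_A", "student_B"], target_size=15)
print(len(sample), sample)
```

Note that the loop can stall if early participants offer no referrals, a practical reminder that a snowball sample depends entirely on participants' networks and is never random.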

Researcher Note

Gaining Access: When Your Friend Is Your Research Subject

My early experience with qualitative research was rather unique. At that time, I needed to do a project that required me to interview first-generation college students, and my friends, with whom I had been sharing a dorm for two years, just perfectly fell into the sample category. Thus, I just asked them and easily “gained my access” to the research subject; I know them, we are friends, and I am part of them. I am an insider. I also thought, “Well, since I am part of the group, I can easily understand their language and norms, I can capture their honesty, read their nonverbal cues well, and will get more information, as they will be more open with me because they trust me.” All in all, easy access with rich information. But, gosh, I did not realize that my status as an insider came with a price! When structuring the interview questions, I began to realize that rather than focusing on the unique experiences of my friends, I mostly based the questions on my own experiences, assuming we had similar, if not the same, experiences. I began to struggle with my objectivity and even questioned my role; am I doing this as part of the group or as a researcher? I came to know later that my status as an insider or my “positionality” may impact my research. It not only shapes the process of data collection but might heavily influence my interpretation of the data. I came to realize that although my insider status came with a lot of benefits (especially for access), it could also bring some drawbacks.

—Dede Setiono, PhD student focusing on international development and environmental policy, Oregon State University

The more you know about what you might find, the more strategic you can be. If you wanted to compare how politically conservative and politically liberal college students explained their vaccine hesitancy, for example, you might construct a sample purposively, finding an equal number of both types of students so that you can make those comparisons in your analysis. This is what Damaske (2021) did. You could still use convenience or snowball sampling as a way of recruitment. Post a flyer at the conservative student club and then ask for referrals from the one student who agrees to be interviewed. As with convenience sampling, there are variations of purposive sampling as well as other names used (e.g., judgment, quota, stratified, criterion, theoretical). Try not to get bogged down in the nomenclature; instead, focus on identifying the general population that matches your research question and then using a sampling method that is most likely to provide insight, given the types of questions you have.
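A rough sketch of what purposive (quota-style) selection looks like in code follows; the volunteer list and the two-per-group quota are invented purely for illustration.

```python
# Invented pool of volunteers who answered recruitment flyers.
volunteers = [
    {"name": "A", "politics": "conservative"},
    {"name": "B", "politics": "liberal"},
    {"name": "C", "politics": "liberal"},
    {"name": "D", "politics": "conservative"},
    {"name": "E", "politics": "liberal"},
]

# Fill equal-sized "buckets" so the planned comparison is possible.
quota_per_group = 2
sample = []
for group in ("conservative", "liberal"):
    matches = [v for v in volunteers if v["politics"] == group]
    sample.extend(matches[:quota_per_group])  # stop once this group's quota is met

print([v["name"] for v in sample])  # ['A', 'D', 'B', 'C']
```

The selection here is driven by the researcher's own criteria (political affiliation) rather than by chance, which is exactly what makes it purposive rather than probabilistic.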

There are all kinds of ways of being strategic with sampling in qualitative research. Here are a few of my favorite techniques for maximizing insight:

  • Consider using “extreme” or “deviant” cases. Maybe your college houses a prominent anti-vaxxer who has written about and demonstrated against the college’s policy on vaccines. You could learn a lot from that single case (depending on your research question, of course).
  • Consider “intensity”: people and cases and circumstances where your questions are more likely to feature prominently (but not extremely or deviantly). For example, you could compare those who volunteer at local Republican and Democratic election headquarters during an election season in a study on why party matters. Those who volunteer are more likely to have something to say than those who are more apathetic.
  • Maximize variation, as with the case of “politically liberal” versus “politically conservative,” or include an array of social locations (young vs. old; Northwest vs. Southeast region). This kind of heterogeneity sampling can capture and describe the central themes that cut across the variations: any common patterns that emerge, even in this wildly mismatched sample, are probably important to note!
  • Rather than maximize the variation, you could select a small homogeneous sample to describe some particular subgroup in depth. Focus groups are often the best form of data collection for homogeneity sampling.
  • Think about which cases are “critical” or politically important—ones that “if it happens here, it would happen anywhere” or a case that is politically sensitive, as with the single “blue” (Democratic) county in a “red” (Republican) state. In both, you are choosing a site that would yield the most information and have the greatest impact on the development of knowledge.
  • On the other hand, sometimes you want to select the “typical”—the typical college student, for example. You are not trying to generalize from the typical case but to illustrate aspects that may be typical of this case or group. When selecting for typicality, be clear with yourself about why the typical matches your research questions (and who might be excluded or marginalized in doing so).
  • Finally, it is often a good idea to look for disconfirming cases : if you are at the stage where you have a hypothesis (of sorts), you might select those who do not fit your hypothesis—you will surely learn something important there. They may be “exceptions that prove the rule” or exceptions that force you to alter your findings in order to make sense of these additional cases.

In addition to all these sampling variations, there is the theoretical approach taken by grounded theorists in which the researcher samples comparative people (or events) on the basis of their potential to represent important theoretical constructs. The sample, one can say, is by definition representative of the phenomenon of interest. It accompanies the constant comparative method of analysis. In the words of the founders of Grounded Theory, “Theoretical sampling is sampling on the basis of the emerging concepts, with the aim being to explore the dimensional range or varied conditions along which the properties of the concepts vary” (Strauss and Corbin 1998:73).

When Your Population is Not Composed of People

I think it is easiest for most people to think of populations and samples in terms of people, but sometimes our units of analysis are not actually people. They could be places or institutions. Even so, you might still want to talk to people or observe the actions of people to understand those places or institutions. Or not! In the case of content analyses (see chapter 17), you won’t even have people involved at all but rather documents or films or photographs or news clippings. Everything we have covered about sampling applies to other units of analysis too. Let’s work through some examples.

Case Studies

When constructing a case study, it is helpful to think of your cases as sample populations in the same way that we considered people above. If, for example, you are comparing campus climates for diversity, your overall population may be “four-year college campuses in the US,” and from there you might decide to study three college campuses as your sample. Which three? Will you use purposeful sampling (perhaps [1] selecting three colleges in Oregon that are different sizes or [2] selecting three colleges across the US located in different political cultures or [3] varying the three colleges by racial makeup of the student body)? Or will you select three colleges at random or simply out of convenience? There are justifiable reasons for all approaches.

As with people, there are different ways of maximizing insight in your sample selection. Think about the following rationales: typical, diverse, extreme, deviant, influential, crucial, or even embodying a particular “pathway” ( Gerring 2008 ). When choosing a case or particular research site, Rubin ( 2021 ) suggests you bear in mind, first, what you are leaving out by selecting this particular case/site; second, what you might be overemphasizing by studying this case/site and not another; and, finally, whether you truly need to worry about either of those things—“that is, what are the sources of bias and how bad are they for what you are trying to do?” ( 89 ).

Once you have selected your cases, you may still want to include interviews with specific people or observations at particular sites within those cases. Then you go through possible sampling approaches all over again to determine which people will be contacted.

Content: Documents, Narrative Accounts, And So On

Although not often discussed as sampling, your selection of documents and other units to use in various content/historical analyses is subject to similar considerations. When you are asking quantitative-type questions (percentages and proportions of a general population), you will want to follow probabilistic sampling. For example, I created a random sample of accounts posted on the website studentloanjustice.org to delineate the types of problems people were having with student debt (Hurst 2007). Even though my data was qualitative (narratives of student debt), I was actually asking a quantitative-type research question, so it was important that my sample was representative of the larger population (debtors who posted on the website). On the other hand, when you are asking qualitative-type questions, the selection process should be very different. In that case, use nonprobabilistic techniques, either convenience (where you are really new to this data and do not have the ability to set comparative criteria or even know what a deviant case would be) or some variant of purposive sampling. Let’s say you were interested in the visual representation of women in media published in the 1950s. You could select a national magazine like Time for a “typical” representation (and for its convenience, as all issues are freely available on the web and easy to search). Or you could compare one magazine known for its feminist content with one known for antifeminist content. The point is, sample selection is important even when you are not interviewing or observing people.

Goals of Qualitative Sampling versus Goals of Quantitative Sampling

We have already discussed some of the differences in the goals of quantitative and qualitative sampling above, but it is worth further discussion. The quantitative researcher seeks a sample that is representative of the population of interest so that they may properly generalize the results (e.g., if 80 percent of first-gen students in the sample were concerned with costs of college, then we can say there is a strong likelihood that 80 percent of first-gen students nationally are concerned with costs of college). The qualitative researcher does not seek to generalize in this way . They may want a representative sample because they are interested in typical responses or behaviors of the population of interest, but they may very well not want a representative sample at all. They might want an “extreme” or deviant case to highlight what could go wrong with a particular situation, or maybe they want to examine just one case as a way of understanding what elements might be of interest in further research. When thinking of your sample, you will have to know why you are selecting the units, and this relates back to your research question or sets of questions. It has nothing to do with having a representative sample to generalize results. You may be tempted—or it may be suggested to you by a quantitatively minded member of your committee—to create as large and representative a sample as you possibly can to earn credibility from quantitative researchers. Ignore this temptation or suggestion. The only thing you should be considering is what sample will best bring insight into the questions guiding your research. This has implications for the number of people (or units) in your study as well, which is the topic of the next section.

What is the Correct “Number” to Sample?

Because we are not trying to create a generalizable representative sample, the guidelines for the “number” of people to interview or news stories to code are also a bit more nebulous. There are some brilliant, insightful studies out there with an n of 1 (meaning one person or one account used as the entire set of data). This is particularly so in the case of autoethnography, a variation of ethnographic research that uses the researcher’s own subject position and experiences as the basis of data collection and analysis. But it is true for all forms of qualitative research. There are no hard-and-fast rules here. The number to include is what is relevant and insightful to your particular study.

That said, humans do not thrive under such ambiguity, and there are a few helpful suggestions that can be made. First, many qualitative researchers talk about “saturation” as the end point for data collection. You stop adding participants when you are no longer getting any new information (or so very little that the cost of adding another interview subject or spending another day in the field exceeds any likely benefits to the research). The term saturation was first used in this context by Glaser and Strauss (1967), the founders of Grounded Theory. Here is their explanation: “The criterion for judging when to stop sampling the different groups pertinent to a category is the category’s theoretical saturation. Saturation means that no additional data are being found whereby the sociologist can develop properties of the category. As he [or she] sees similar instances over and over again, the researcher becomes empirically confident that a category is saturated. [They go] out of [their] way to look for groups that stretch diversity of data as far as possible, just to make certain that saturation is based on the widest possible range of data on the category” (61).

It makes sense that the term was developed by grounded theorists, since this approach is rather more open-ended than other approaches used by qualitative researchers. With so much left open, having a guideline of “stop collecting data when you don’t find anything new” is reasonable. However, saturation can’t help much when first setting out your sample. How do you know how many people to contact to interview? What number will you put down in your institutional review board (IRB) protocol (see chapter 8)? You may guess how many people or units it will take to reach saturation, but there really is no way to know in advance. The best you can do is think about your population and your questions and look at what others have done with similar populations and questions.
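There is no formula for saturation, but some researchers track it informally as they code. Here is one simplified way to operationalize the idea in Python; the interview codes and the three-interview "patience" threshold are invented for illustration, not a rule from the literature.

```python
def reached_saturation(coded_interviews, patience=3):
    """Return True once several consecutive interviews add no new codes."""
    seen, run_without_new = set(), 0
    for codes in coded_interviews:
        new_codes = set(codes) - seen
        seen |= new_codes
        run_without_new = 0 if new_codes else run_without_new + 1
        if run_without_new >= patience:
            return True
    return False

# Invented codes assigned to five interviews during analysis.
interviews = [
    {"cost fears", "distrust of government"},
    {"side effects", "distrust of government"},
    {"cost fears"},
    {"distrust of government"},
    {"side effects"},
]
print(reached_saturation(interviews))  # True: the last three interviews added nothing new
```

In practice, of course, the judgment is qualitative: what counts as a "new" code, and how wide a range of groups you have sampled, matter far more than any counter.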

Here are some suggestions to use as a starting point: For phenomenological studies, try to interview at least ten people for each major category or group of people. If you are comparing male-identified, female-identified, and gender-neutral college students in a study on gender regimes in social clubs, that means you might want to design a sample of thirty students, ten from each group. This is the minimum suggested number. Damaske’s (2021) sample of one hundred allows room for up to twenty-five participants in each of four “buckets” (e.g., working-class*female, working-class*male, middle-class*female, middle-class*male). If there is more than one comparative group (e.g., you are comparing students attending three different colleges, and you are comparing White and Black students in each), you can sometimes reduce the number for each group in your sample to five for, in this case, thirty total students. But that is really the bare minimum below which you will not want to go. A lot of people will not trust you with only “five” cases in a bucket. Lareau (2021:24) advises a minimum of seven or nine for each bucket (or “cell,” in her words). The point is to think about what your analyses might look like and how comfortable you will be with a certain number of persons fitting each category.
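The arithmetic behind these suggestions is simple: multiply the number of comparison "buckets" by the minimum you want in each. A quick sketch, using the group counts from the examples above:

```python
def minimum_sample(groups_per_dimension, per_bucket):
    """Multiply out the comparison buckets, then multiply by the per-bucket minimum."""
    buckets = 1
    for n in groups_per_dimension:
        buckets *= n
    return buckets, buckets * per_bucket

# Three gender groups, ten students each -> 30 students.
print(minimum_sample([3], per_bucket=10))    # (3, 30)
# Three colleges crossed with two racial groups, five students each -> 30 students.
print(minimum_sample([3, 2], per_bucket=5))  # (6, 30)
```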

Because qualitative research takes so much time and effort, it is rare for a beginning researcher to include more than thirty to fifty people or units in the study. You may not be able to conduct all the comparisons you might want simply because you cannot manage a larger sample. In that case, the limits of who you can reach or what you can include may influence you to rethink an originally overcomplicated research design. Rather than include students from every racial group on a campus, for example, you might want to sample strategically, thinking about where the most insightful contrasts lie, possibly excluding majority-race (White) students entirely, and simply using previous literature to fill in gaps in our understanding. For example, one of my former students was interested in discovering how race and class worked at a predominantly White institution (PWI). Due to time constraints, she simplified her study from an original sample frame of middle-class and working-class domestic Black and international African students (four buckets) to a sample frame of domestic Black and international African students (two buckets), allowing the complexities of class to come through individual accounts rather than being built into the sample frame. She wisely decided not to include White students in the sample, as her focus was on how minoritized students navigated the PWI. She was able to successfully complete her project and develop insights from the data with fewer than twenty interviewees. [1]

But what if you had unlimited time and resources? Would it always be better to interview more people or include more accounts, documents, and units of analysis? No! Your sample size should reflect your research question and the goals you have set yourself. Larger numbers can sometimes work against your goals. If, for example, you want to help bring out individual stories of success against the odds, adding more people to the analysis can end up drowning out those individual stories. Sometimes, the perfect size really is one (or three, or five). It really depends on what you are trying to discover and achieve in your study. Furthermore, studies of one hundred or more (people, documents, accounts, etc.) can sometimes be mistaken for quantitative research. Inevitably, the large sample size will push the researcher into simplifying the data numerically. And readers will begin to expect generalizability from such a large sample.

To summarize, “There are no rules for sample size in qualitative inquiry. Sample size depends on what you want to know, the purpose of the inquiry, what’s at stake, what will be useful, what will have credibility, and what can be done with available time and resources” ( Patton 2002:244 ).

Researcher Note

How Did You Find/Construct a Sample?

Since qualitative researchers work with comparatively small sample sizes, getting your sample right is rather important. Yet it is also difficult to accomplish. For instance, a key question you need to ask yourself is whether you want a homogeneous or heterogeneous sample. In other words, do you want to include people in your study who are by and large the same, or do you want to have diversity in your sample?

For many years, I have studied the experiences of students who were the first in their families to attend university. There is a rather large number of sampling decisions I need to consider before starting the study. (1) Should I only talk to first-in-family students, or should I have a comparison group of students who are not first-in-family? (2) Do I need to strive for a gender distribution that matches undergraduate enrollment patterns? (3) Should I include participants that reflect diversity in gender identity and sexuality? (4) How about racial diversity? First-in-family status is strongly related to some ethnic or racial identity. (5) And how about areas of study?

As you can see, if I wanted to accommodate all these differences and get enough study participants in each category, I would quickly end up with a sample size of hundreds, which is not feasible in most qualitative research. In the end, for me, the most important decision was to maximize the voices of first-in-family students, which meant that I only included them in my sample. As for the other categories, I figured it was going to be hard enough to find first-in-family students, so I started recruiting with an open mind and an understanding that I may have to accept a lack of gender, sexuality, or racial diversity and then not be able to say anything about these issues. But I would definitely be able to speak about the experiences of being first-in-family.

—Wolfgang Lehmann, author of “Habitus Transformation and Hidden Injuries”

Examples of “Sample” Sections in Journal Articles

Think about some of the studies you have read in college, especially those with rich stories and accounts about people’s lives. Do you know how the people were selected to be the focus of those stories? If the account was published by an academic press (e.g., University of California Press or Princeton University Press) or in an academic journal, chances are that the author included a description of their sample selection. You can usually find these in a methodological appendix (book) or a section on “research methods” (article).

Here are two examples from recent books and one example from a recent article:

Example 1 . In It’s Not like I’m Poor: How Working Families Make Ends Meet in a Post-welfare World , the research team employed a mixed methods approach to understand how parents use the earned income tax credit, a refundable tax credit designed to provide relief for low- to moderate-income working people ( Halpern-Meekin et al. 2015 ). At the end of their book, their first appendix is “Introduction to Boston and the Research Project.” After describing the context of the study, they include the following description of their sample selection:

In June 2007, we drew 120 names at random from the roughly 332 surveys we gathered between February and April. Within each racial and ethnic group, we aimed for one-third married couples with children and two-thirds unmarried parents. We sent each of these families a letter informing them of the opportunity to participate in the in-depth portion of our study and then began calling the home and cell phone numbers they provided us on the surveys and knocking on the doors of the addresses they provided.…In the end, we interviewed 115 of the 120 families originally selected for the in-depth interview sample (the remaining five families declined to participate). ( 22 )

Was their sample selection based on convenience or purpose? Why do you think it was important for them to tell you that five families declined to be interviewed? There is actually a trick here, as the names were pulled randomly from a survey whose sample design was probabilistic. Why is this important to know? What can we say about the representativeness or the uniqueness of whatever findings are reported here?
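For those curious what a draw like this looks like mechanically, here is a rough sketch of random selection within strata. The survey records, stratum labels, and draw counts below are invented placeholders, not the study's actual design.

```python
import random

def draw_within_strata(survey_records, draws_per_stratum):
    """Randomly select a fixed number of records from each stratum."""
    selected = []
    for stratum, n in draws_per_stratum.items():
        members = [r for r in survey_records if r["stratum"] == stratum]
        selected.extend(random.sample(members, k=min(n, len(members))))
    return selected

# Invented survey records standing in for the roughly 332 surveys gathered.
strata = ["group1-married", "group1-unmarried", "group2-married", "group2-unmarried"]
records = [{"id": i, "stratum": random.choice(strata)} for i in range(332)]

# Aim for one-third married, two-thirds unmarried within each (invented) group.
picks = draw_within_strata(records, {"group1-married": 20, "group1-unmarried": 40,
                                     "group2-married": 20, "group2-unmarried": 40})
print(len(picks))  # 120 names, drawn at random within strata
```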

Example 2 . In When Diversity Drops , Park ( 2013 ) examines the impact of decreasing campus diversity on the lives of college students. She does this through a case study of one student club, the InterVarsity Christian Fellowship (IVCF), at one university (“California University,” a pseudonym). Here is her description:

I supplemented participant observation with individual in-depth interviews with sixty IVCF associates, including thirty-four current students, eight former and current staff members, eleven alumni, and seven regional or national staff members. The racial/ethnic breakdown was twenty-five Asian Americans (41.6 percent), one Armenian (1.6 percent), twelve people who were black (20.0 percent), eight Latino/as (13.3 percent), three South Asian Americans (5.0 percent), and eleven people who were white (18.3 percent). Twenty-nine were men, and thirty-one were women. Looking back, I note that the higher number of Asian Americans reflected both the group’s racial/ethnic composition and my relative ease about approaching them for interviews. ( 156 )

How can you tell this is a convenience sample? What else do you note about the sample selection from this description?

Example 3. The last example is taken from an article published in the journal Research in Higher Education . Published articles tend to be more formal than books, at least when it comes to the presentation of qualitative research. In this article, Lawson ( 2021 ) is seeking to understand why female-identified college students drop out of majors that are dominated by male-identified students (e.g., engineering, computer science, music theory). Here is the entire relevant section of the article:

Method

Participants

Data were collected as part of a larger study designed to better understand the daily experiences of women in MDMs [male-dominated majors].…Participants included 120 students from a midsize, Midwestern University. This sample included 40 women and 40 men from MDMs—defined as any major where at least 2/3 of students are men at both the university and nationally—and 40 women from GNMs—defined as any major where 40–60% of students are women at both the university and nationally.…

Procedure

A multi-faceted approach was used to recruit participants; participants were sent targeted emails (obtained based on participants’ reported gender and major listings), campus-wide emails sent through the University’s Communication Center, flyers, and in-class presentations. Recruitment materials stated that the research focused on the daily experiences of college students, including classroom experiences, stressors, positive experiences, departmental contexts, and career aspirations. Interested participants were directed to email the study coordinator to verify eligibility (at least 18 years old, man/woman in MDM or woman in GNM, access to a smartphone). Sixteen interested individuals were not eligible for the study due to the gender/major combination. (482ff.)

What method of sample selection was used by Lawson? Why is it important to define “MDM” at the outset? How does this definition relate to sampling? Why were interested participants directed to the study coordinator to verify eligibility?

Final Words

I have found that students often find it difficult to be specific enough when defining and choosing their sample. It might help to think about your sample design and sample recruitment like a cookbook. You want all the details there so that someone else can pick up your study and conduct it as you intended. That person could be yourself, but this analogy might work better if you have someone else in mind. When I am writing down recipes, I often think of my sister and try to convey the details she would need to duplicate the dish. We share a grandmother whose recipes are full of handwritten notes in the margins, in spidery ink, that tell us what bowl to use when or where things could go wrong. Describe your sample clearly, convey the steps required accurately, and then add any other details that will help keep you on track and remind you why you have chosen to limit possible interviewees to those of a certain age or class or location. Imagine actually going out and getting your sample (making your dish). Do you have all the necessary details to get started?

Table 5.1. Sampling Type and Strategies

Probabilistic sampling (used primarily in quantitative research)

  • Simple random: each member of the population has an equal chance of being selected.
  • Stratified: the sample is split into strata; members of each stratum are selected in proportion to the population at large.

Non-probabilistic sampling (used primarily in qualitative research)

  • Convenience: simply includes the individuals who happen to be most accessible to the researcher.
  • Snowball: used to recruit participants via other participants; the number of people you have access to “snowballs” as you get in contact with more people.
  • Purposive: the researcher uses their expertise to select a sample that is most useful to the purposes of the research; an effective purposive sample must have clear criteria and a rationale for inclusion.
  • Quota: set quotas to ensure that the sample represents certain characteristics in proportion to their prevalence in the population.

Further Readings

Fusch, Patricia I., and Lawrence R. Ness. 2015. “Are We There Yet? Data Saturation in Qualitative Research.” Qualitative Report 20(9):1408–1416.

Saunders, Benjamin, Julius Sim, Tom Kingstone, Shula Baker, Jackie Waterfield, Bernadette Bartlam, Heather Burroughs, and Clare Jinks. 2018. “Saturation in Qualitative Research: Exploring Its Conceptualization and Operationalization.” Quality & Quantity 52(4):1893–1907.

[1] Rubin (2021) suggests a minimum of twenty interviews (but safer with thirty) for an interview-based study and a minimum of three to six months in the field for ethnographic studies. For a content-based study, she suggests between five hundred and one thousand documents, although some will be “very small” (243–244).

Glossary

Sampling: The process of selecting people or other units of analysis to represent a larger population. In quantitative research, this representation is taken quite literally, as statistically representative. In qualitative research, in contrast, sample selection is often made based on potential to generate insight about a particular topic or phenomenon.

Sampling frame: The actual list of individuals that the sample will be drawn from. Ideally, it should include the entire target population (and nobody who is not part of that population). Sampling frames can differ from the larger population when specific exclusions are inherent, as in the case of pulling names randomly from voter registration rolls where not everyone is a registered voter. This difference in frame and population can undercut the generalizability of quantitative results.

Sample: The specific group of individuals that you will collect data from. Contrast population.

Population: The large group of interest to the researcher. Although it will likely be impossible to design a study that incorporates or reaches all members of the population of interest, this should be clearly defined at the outset of a study so that a reasonable sample of the population can be taken. For example, if one is studying working-class college students, the sample may include twenty such students attending a particular college, while the population is “working-class college students.” In quantitative research, clearly defining the general population of interest is a necessary step in generalizing results from a sample. In qualitative research, defining the population is conceptually important for clarity.

Probability sampling: A sampling strategy in which the sample is chosen to represent (numerically) the larger population from which it is drawn by random selection. Each person in the population has an equal chance of making it into the sample. This is often done through a lottery or other chance mechanisms (e.g., a random selection of every twelfth name on an alphabetical list of voters). Also known as random sampling.

Convenience sample: The selection of research participants or other data sources based on availability or accessibility, in contrast to purposive sampling.

Snowball sample: A sample generated non-randomly by asking participants to help recruit more participants, the idea being that a person who fits your sampling criteria probably knows other people with similar criteria.

Themes: Broad codes that are assigned to the main issues emerging in the data; identifying themes is often part of initial coding.

Disconfirming case: A form of case selection focusing on examples that do not fit the emerging patterns. This allows the researcher to evaluate rival explanations or to define the limitations of their research findings. While disconfirming cases are found (not sought out), researchers should expand their analysis or rethink their theories to include/explain them.

Grounded Theory: A methodological tradition of inquiry and approach to analyzing qualitative data in which theories emerge from a rigorous and systematic process of induction. This approach was pioneered by the sociologists Glaser and Strauss (1967). The elements of theory generated from comparative analysis of data are, first, conceptual categories and their properties and, second, hypotheses or generalized relations among the categories and their properties: “The constant comparing of many groups draws the [researcher’s] attention to their many similarities and differences. Considering these leads [the researcher] to generate abstract categories and their properties, which, since they emerge from the data, will clearly be important to a theory explaining the kind of behavior under observation” (36).

Random sample: The result of probability sampling, in which a sample is chosen to represent (numerically) the larger population from which it is drawn by random selection. Each person in the population has an equal chance of making it into the random sample. This is often done through a lottery or other chance mechanisms (e.g., the random selection of every twelfth name on an alphabetical list of voters). This is typically not required in qualitative research but rather essential for the generalizability of quantitative research.

Deviant case: A form of case selection or purposeful sampling in which cases that are unusual or special in some way are chosen to highlight processes or to illuminate gaps in our knowledge of a phenomenon. See also extreme case.

Saturation: The point at which you can conclude data collection because every person you are interviewing, the interaction you are observing, or content you are analyzing merely confirms what you have already noted. Achieving saturation is often used as the justification for the final sample size.

Generalizability: The accuracy with which results or findings can be transferred to situations or people other than those originally studied. Qualitative studies generally are unable to use (and are uninterested in) statistical generalizability, where the sample population is said to be able to predict or stand in for a larger population of interest. Instead, qualitative researchers often discuss “theoretical generalizability,” in which the findings of a particular study can shed light on processes and mechanisms that may be at play in other settings. See also statistical generalization and theoretical generalization.

Recruitment materials: A term used by IRBs to denote all materials aimed at recruiting participants into a research study (including printed advertisements, scripts, audio or video tapes, or websites). Copies of this material are required in research protocols submitted to the IRB.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.

Sampling in Qualitative Research

The chapter discusses different types of sampling methods used in qualitative research to select information-rich cases. Two types of sampling techniques are discussed in the past qualitative studies—the theoretical and the purposeful sampling techniques. The chapter illustrates these two types of sampling techniques relevant examples. The sample size estimation and the point of data saturation and data sufficiency are also discussed in the chapter. The chapter will help the scholars and researchers in selecting the right technique for their qualitative study.

  • Related Documents

Sample size estimation and sampling techniques for selecting a representative sample

Qualitative sampling methods.

Qualitative sampling methods differ from quantitative sampling methods. It is important that one understands those differences, as well as, appropriate qualitative sampling techniques. Appropriate sampling choices enhance the rigor of qualitative research studies. These types of sampling strategies are presented, along with the pros and cons of each. Sample size and data saturation are discussed.

A systematic review of the quality of conduct and reporting of survival analyses of tuberculosis outcomes in Africa

Abstract Background Survival analyses methods (SAMs) are central to analysing time-to-event outcomes. Appropriate application and reporting of such methods are important to ensure correct interpretation of the data. In this study, we systematically review the application and reporting of SAMs in studies of tuberculosis (TB) patients in Africa. It is the first review to assess the application and reporting of SAMs in this context. Methods Systematic review of studies involving TB patients from Africa published between January 2010 and April 2020 in English language. Studies were eligible if they reported use of SAMs. Application and reporting of SAMs were evaluated based on seven author-defined criteria. Results Seventy-six studies were included with patient numbers ranging from 56 to 182,890. Forty-three (57%) studies involved a statistician/epidemiologist. The number of published papers per year applying SAMs increased from two in 2010 to 18 in 2019 (P = 0.004). Sample size estimation was not reported by 67 (88%) studies. A total of 22 (29%) studies did not report summary follow-up time. The survival function was commonly presented using Kaplan-Meier survival curves (n = 51, (67%) studies) and group comparisons were performed using log-rank tests (n = 44, (58%) studies). Sixty seven (91%), 3 (4.1%) and 4 (5.4%) studies reported Cox proportional hazard, competing risk and parametric survival regression models, respectively. A total of 37 (49%) studies had hierarchical clustering, of which 28 (76%) did not adjust for the clustering in the analysis. Reporting was adequate among 4.0, 1.3 and 6.6% studies for sample size estimation, plotting of survival curves and test of survival regression underlying assumptions, respectively. Forty-five (59%), 52 (68%) and 73 (96%) studies adequately reported comparison of survival curves, follow-up time and measures of effect, respectively. Conclusion The quality of reporting survival analyses remains inadequate despite its increasing application. Because similar reporting deficiencies may be common in other diseases in low- and middle-income countries, reporting guidelines, additional training, and more capacity building are needed along with more vigilance by reviewers and journal editors.

R2: A computer program for interval estimation, power Calculations, sample size estimation, and hypothesis testing in multiple regression

Barriers to self-care in elderly people with hypertension: a qualitative study.

Purpose Hypertension is the most common chronic disease throughout the world. Self-care is the key criteria in determining the final course of the disease. However, the majority of elderly people do not observe self-care behaviors. The purpose of this paper is to analyze the experiences of elderly people with hypertension in order to understand the barriers of their self-care behaviors. Design/methodology/approach This is a qualitative study with a conventional content analysis approach conducted in Tehran, Iran in 2017. Data collection was done among 23 participants – 14 elderly people; 6 cardiologists, geriatric physicians and nurses working in the cardiovascular ward; and 3 caregivers – who were selected by purposeful sampling. Using semi-structured, face-to-face interviews, data collection was continued until data saturation. Findings Three main categories, including attitude limitations, inefficient supportive network and desperation, all showed barriers to self-care by the experiences of elderly people with hypertension. Originality/value Lack of knowledge of the disease and its treatment process is one of the main barriers to self-care in elderly people with hypertension. Deficient supportive resources along with economic and family problems exacerbate the failure to do self-care behaviors.

Sample Size Estimation

Tamaño óptimo de la muestra.

Key words: Bias, estimation, population, sampleAbstract. The basics of sample size estimation process are described. Assuming the normal distribution, the procedures for estimation of sample size for the mean; with and without knowledge of the population variance, and population proportion are noted. Sample size for more than one population feature is also given.Palabras clave: Estimación, muestra, población, sesgoResumen. Se describen los fundamentos del proceso de la estimación del tamaño óptimo de la muestra. Suponiendo una distribución normal para una población, se notan los procedimientos de la estimación del tamaño óptimo de la muestra para la media muestral con y sin el conocimiento de la varianza poblacional. Se presenta el tamaño óptimo de la muestra con más de una característica poblacional.

P90 The problem with a heuristic approach to sample size estimation for time-to-failure endpoints involving three or more treatment groups

Sample size estimation for negative binomial regression comparing rates of recurrent events with unequal follow-up time, export citation format, share document.

Sampling Techniques for Qualitative Research

  • First Online: 27 October 2022

Cite this chapter

types of qualitative research sampling methods

  • Heather Douglas 4  

3829 Accesses

6 Citations

This chapter explains how to design suitable sampling strategies for qualitative research. The focus of this chapter is purposive (or theoretical) sampling to produce credible and trustworthy explanations of a phenomenon (a specific aspect of society). A specific research question (RQ) guides the methodology (the study design or approach ). It defines the participants, location, and actions to be used to answer the question. Qualitative studies use specific tools and techniques ( methods ) to sample people, organizations, or whatever is to be examined. The methodology guides the selection of tools and techniques for sampling, data analysis, quality assurance, etc. These all vary according to the purpose and design of the study and the RQ. In this chapter, a fake example is used to demonstrate how to apply your sampling strategy in a developing country.




About this chapter

Douglas, H. (2022). Sampling Techniques for Qualitative Research. In: Islam, M.R., Khan, N.A., Baikady, R. (eds) Principles of Social Research Methodology. Springer, Singapore. https://doi.org/10.1007/978-981-19-5441-2_29




Qualitative, Quantitative, and Mixed Methods Research Sampling Strategies

by Timothy C. Guetterman. Last reviewed: 26 February 2020. Last modified: 26 February 2020. DOI: 10.1093/obo/9780199756810-0241

Sampling is a critical, often overlooked aspect of the research process. The importance of sampling extends to the ability to draw accurate inferences, and it is an integral part of qualitative guidelines across research methods. Sampling considerations are important in quantitative and qualitative research when considering a target population and when drawing a sample that will either allow us to generalize (i.e., quantitatively) or go into sufficient depth (i.e., qualitatively). While quantitative research is generally concerned with probability-based approaches, qualitative research typically uses nonprobability purposeful sampling approaches. Scholars generally focus on two major sampling topics: sampling strategies and sample sizes. Or simply, researchers should think about who to include and how many; both of these concerns are key. Mixed methods studies have both qualitative and quantitative sampling considerations. However, mixed methods studies also have unique considerations based on the relationship of quantitative and qualitative research within the study.

Sampling in Qualitative Research

Sampling in qualitative research may be divided into two major areas: overall sampling strategies and issues around sample size. Sampling strategy refers to the process of sampling and how the sampling plan is designed. Qualitative sampling typically follows a nonprobability-based approach, such as purposive or purposeful sampling, where participants or other units of analysis are selected intentionally for their ability to provide information that addresses the research questions. Sample size refers to how many participants or other units are needed to address the research questions. The methodological literature about sampling tends to fall into these two broad categories, though some articles, chapters, and books cover both concepts. Others have connected sampling to the type of qualitative design that is employed. Additionally, researchers might consider discipline-specific sampling issues, as much research operates within disciplinary views and constraints. Scholars in many disciplines have examined sampling around specific topics, research problems, or disciplines and provide guidance for making sampling decisions, such as choosing appropriate strategies and sample sizes.




Different Types of Sampling Techniques in Qualitative Research


Key Takeaways:

  • Sampling techniques in qualitative research include purposive, convenience, snowball, and theoretical sampling.
  • Choosing the right sampling technique significantly impacts the accuracy and reliability of the research results.
  • It’s crucial to consider the potential impact on bias, sample diversity, and generalizability when choosing a sampling technique for your qualitative research.

Qualitative research seeks to understand social phenomena from the perspective of those experiencing them. It involves collecting non-numerical data such as interviews, observations, and written documents to gain insights into human experiences, attitudes, and behaviors. While qualitative research can provide rich and nuanced insights, the accuracy and generalizability of findings depend on the quality of the sampling process. Sampling techniques are a critical component of qualitative research, as they involve selecting a group of participants who can provide valuable insights into the research questions.

This article explores different types of sampling techniques in qualitative research. First, we’ll provide a comprehensive overview of four standard sampling techniques in qualitative research, and then compare and contrast these techniques to provide guidance on choosing the most appropriate method for a particular study. Additionally, you’ll find best practices for sampling and learn about the ethical considerations researchers need to keep in mind when selecting a sample. Overall, this article aims to help researchers conduct effective and high-quality sampling in qualitative research.



4 Types of Sampling Techniques and Their Applications

Sampling is a crucial aspect of qualitative research as it determines the representativeness and credibility of the data collected. Several sampling techniques are used in qualitative research, each with strengths and weaknesses. In this section, let’s explore four standard sampling techniques in qualitative research: purposive sampling, convenience sampling, snowball sampling, and theoretical sampling. We’ll break down the definition of each technique, when to use it, and its advantages and disadvantages.

1. Purposive Sampling

Purposive sampling, or judgmental sampling, is a non-probability sampling technique commonly used in qualitative research. In purposive sampling, researchers intentionally select participants with specific characteristics or unique experiences related to the research question. The goal is to identify and recruit participants who can provide rich and diverse data to enhance the research findings.

Purposive sampling is used when researchers seek to identify individuals or groups with particular knowledge, skills, or experiences relevant to the research question. For instance, in a study examining the experiences of cancer patients undergoing chemotherapy, purposive sampling may be used to recruit participants who have undergone chemotherapy in the past year. Researchers can better understand the phenomenon under investigation by selecting individuals with relevant backgrounds.

Purposive Sampling: Strengths and Weaknesses

Purposive sampling is a powerful tool for researchers seeking to select participants who can provide valuable insight into their research question. This method is particularly advantageous when studying groups with specific characteristics or experiences, where a random selection of participants would be unlikely to yield enough information-rich cases.

One of the main advantages of purposive sampling is the ability to improve the quality and accuracy of data collected by selecting participants most relevant to the research question. This approach also enables researchers to collect data from diverse participants with unique perspectives and experiences related to the research question.

However, researchers should also be aware of potential bias when using purposive sampling. The researcher’s judgment may influence the selection of participants, resulting in a biased sample that does not accurately represent the broader population. Another disadvantage is that purposive sampling may not be representative of the more general population, which limits the generalizability of the findings. To guarantee the accuracy and dependability of data obtained through purposive sampling, researchers must provide a clear and transparent justification of their selection criteria and sampling approach. This entails outlining the specific characteristics or experiences required for participants to be included in the study and explaining the rationale behind these criteria. This level of transparency not only helps readers to evaluate the validity of the findings, but also enhances the replicability of the research.

2. Convenience Sampling  

When time and resources are limited, researchers may opt for convenience sampling as a quick and cost-effective way to recruit participants. In this non-probability sampling technique, participants are selected based on their accessibility and willingness to participate rather than their suitability for the research question. Qualitative research often uses this approach to generate various perspectives and experiences.

During the COVID-19 pandemic, convenience sampling was a valuable method for researchers to collect data quickly and efficiently from participants who were easily accessible and willing to participate. For example, in a study examining the experiences of university students during the pandemic, convenience sampling allowed researchers to quickly recruit students who were available and willing to share their experiences. Although the pandemic has passed, its use during that period highlights the value of convenience sampling in urgent situations where time and resources are limited.

Convenience Sampling: Strengths and Weaknesses

Convenience sampling offers several advantages to researchers, including its ease of implementation and cost-effectiveness. This technique allows researchers to quickly and efficiently recruit participants without spending time and resources identifying and contacting potential participants. Furthermore, convenience sampling can result in a diverse pool of participants, as individuals from various backgrounds and experiences may be more likely to participate.

While convenience sampling has the advantage of being efficient, researchers need to acknowledge its limitations. One of the primary drawbacks of convenience sampling is that it is susceptible to selection bias. Participants who are more easily accessible may not be representative of the broader population, which can limit the generalizability of the findings. Furthermore, convenience sampling may lead to issues with the reliability of the results, as it may not be possible to replicate the study using the same sample or a similar one.

To mitigate these limitations, researchers should carefully define the population of interest and ensure the sample is drawn from that population. For instance, if a study is investigating the experiences of individuals with a particular medical condition, researchers can recruit participants from specialized clinics or support groups for that condition. Researchers can also use statistical techniques such as stratified sampling or weighting to adjust for potential biases in the sample.

3. Snowball Sampling

Snowball sampling, also called referral sampling, is a unique approach researchers use to recruit participants in qualitative research. The technique involves identifying a few initial participants who meet the eligibility criteria and asking them to refer others they know who also fit the requirements. The sample size grows as referrals are added, creating a chain-like structure.
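To make the chain-like structure concrete, here is a minimal sketch of wave-based snowball recruitment in Python. It is only an illustration: the participant identifiers, the get_referrals and is_eligible callables, and the target size are hypothetical placeholders, not procedures from this article.

```python
from collections import deque

def snowball_sample(seeds, get_referrals, is_eligible, target_size):
    """Grow a sample in waves: screen the initial seed participants, then follow
    referrals from each recruited participant until target_size is reached.
    Participants are represented by hashable IDs (e.g., anonymized codes)."""
    sample = []
    queue = deque(seeds)   # people waiting to be screened
    seen = set(seeds)      # avoid contacting the same person twice
    while queue and len(sample) < target_size:
        person = queue.popleft()
        if not is_eligible(person):
            continue                      # ineligible contacts are not asked for referrals
        sample.append(person)
        for referral in get_referrals(person):   # "Who else do you know who...?"
            if referral not in seen:
                seen.add(referral)
                queue.append(referral)
    return sample
```

The resulting sample necessarily reflects the seeds and their networks, which is exactly why the technique can reach hidden populations but can also limit diversity.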

Snowball sampling enables researchers to reach out to individuals who may be hard to locate through traditional sampling methods, such as members of marginalized or hidden communities. For instance, in a study examining the experiences of undocumented immigrants, snowball sampling may be used to identify and recruit participants through referrals from other undocumented immigrants.

Snowball Sampling: Strengths and Weaknesses

Snowball sampling can produce in-depth and detailed data from participants with common characteristics or experiences. Since referrals are made within a network of individuals who share similarities, researchers can gain deep insights into a specific group’s attitudes, behaviors, and perspectives. The flip side of this strength is that referral chains tend to stay within tightly connected networks, so snowball samples can lack diversity and over-represent certain perspectives, which limits the generalizability of the findings.

4. Theoretical Sampling

Theoretical sampling is a sophisticated and strategic technique that can help researchers develop more in-depth and nuanced theories from their data. Instead of selecting participants based on convenience or accessibility, researchers using theoretical sampling choose participants based on their potential to contribute to the emerging themes and concepts in the data. This approach allows researchers to refine their research question and theory based on the data they collect rather than forcing their data to fit a preconceived idea.

Theoretical sampling is used when researchers conduct grounded theory research and have developed an initial theory or conceptual framework. In a study examining cancer survivors’ experiences, for example, theoretical sampling may be used to identify and recruit participants who can provide new insights into the coping strategies of survivors.

Theoretical Sampling: Strengths and Weaknesses

One of the significant advantages of theoretical sampling is that it allows researchers to refine their research question and theory based on emerging data. This means the research can be highly targeted and focused, leading to a deeper understanding of the phenomenon being studied. Additionally, theoretical sampling can generate rich and in-depth data, as participants are selected based on their potential to provide new insights into the research question.

However, theoretical sampling also carries a risk of bias: participants are selected based on their perceived ability to offer new perspectives on the research question. This means specific perspectives or experiences may be overrepresented in the sample, leading to an incomplete understanding of the phenomenon being studied. Additionally, theoretical sampling can be time-consuming and resource-intensive, as researchers must continuously analyze the data and recruit new participants.

To mitigate the potential for bias, researchers can take several steps. One way to reduce bias is to use a diverse team of researchers to analyze the data and make participant selection decisions. Having multiple perspectives and backgrounds can help prevent researchers from unconsciously selecting participants who fit their preconceived notions or biases.

Another solution would be to use reflexive sampling. Reflexive sampling involves selecting participants who are aware of the research process and can provide insight into how their own biases and experiences may influence their perspectives. By including participants who are reflexive about their subjectivity, researchers can generate more nuanced and self-aware findings.

Factors to Consider When Choosing a Sampling Technique

Choosing the proper sampling technique in qualitative research is one of the most critical decisions a researcher makes when conducting a study. The chosen method can significantly impact the accuracy and reliability of the research results.

For instance, purposive sampling provides a more targeted and specific sample, which helps to answer research questions related to that particular population or phenomenon. However, this approach may also introduce bias by limiting the diversity of the sample.

Conversely, convenience sampling may offer a more diverse sample regarding demographics and backgrounds but may also introduce bias by selecting more willing or available participants.

Snowball sampling may help study hard-to-reach populations, but it can also limit the sample’s diversity as participants are selected based on their connections to existing participants.

Theoretical sampling may offer an opportunity to refine the research question and theory based on emerging data, but it can also be time-consuming and resource-intensive.

Additionally, the choice of sampling technique can impact the generalizability of the research findings. Therefore, it’s crucial to consider the potential impact on bias, sample diversity, and generalizability when choosing a sampling technique. By doing so, researchers can select the most appropriate method for their research question and ensure the validity and reliability of their findings.

Tips for Selecting Participants

When selecting participants for a qualitative research study, it is crucial to consider the research question and the purpose of the study. In addition, researchers should identify the specific characteristics or criteria they seek in their sample and select participants accordingly.

One helpful tip for selecting participants is to use a pre-screening process to ensure potential participants meet the criteria for inclusion in the study. Another technique is using multiple recruitment methods to ensure the sample is diverse and representative of the studied population.
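As one way to picture the pre-screening tip above, the sketch below filters a pool of volunteers against explicit inclusion criteria. The field names and criteria are hypothetical examples invented for illustration, not requirements taken from this article.

```python
# Hypothetical inclusion criteria for a study of adults who recently completed chemotherapy.
INCLUSION_CRITERIA = {
    "chemotherapy_within_12_months": lambda p: p.get("months_since_chemo", 999) <= 12,
    "adult": lambda p: p.get("age", 0) >= 18,
    "agreed_to_be_contacted": lambda p: p.get("consented_to_contact", False),
}

def prescreen(volunteers):
    """Split volunteers into those who meet every inclusion criterion
    and those who fail at least one, keeping a record of the failed criteria."""
    eligible, excluded = [], []
    for person in volunteers:
        failed = [name for name, rule in INCLUSION_CRITERIA.items() if not rule(person)]
        if failed:
            excluded.append((person, failed))  # record why they were screened out
        else:
            eligible.append(person)
    return eligible, excluded
```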

Ensuring Diversity in Samples

Diversity in the sample is important to ensure the study’s findings apply to a wide range of individuals and situations. One way to ensure diversity is to use stratified sampling, which involves dividing the population into subgroups and selecting participants from each subgroup. This helps ensure the sample is representative of the larger population.
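A minimal sketch of that idea in Python is shown below, assuming each candidate can be assigned to a stratum (for example, by age group or region); the function names and the fixed per-stratum quota are illustrative assumptions, not part of the original text.

```python
import random
from collections import defaultdict

def stratified_selection(candidates, stratum_of, per_stratum, seed=0):
    """Divide candidates into subgroups (strata) and select up to per_stratum
    people from each, so every subgroup is represented in the final sample."""
    rng = random.Random(seed)               # fixed seed for a reproducible selection
    strata = defaultdict(list)
    for person in candidates:
        strata[stratum_of(person)].append(person)
    sample = []
    for members in strata.values():
        rng.shuffle(members)
        sample.extend(members[:per_stratum])  # take at most per_stratum from this subgroup
    return sample
```

In a purposive design the within-stratum choice would typically be deliberate rather than random; the shuffle here simply stands in for whatever selection rule the researcher applies.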

Maintaining Ethical Considerations

When selecting participants for a qualitative research study, it is essential to ensure ethical considerations are taken into account. Researchers must ensure participants are fully informed about the study and provide their voluntary consent to participate. They must also ensure participants understand their rights and that their confidentiality and privacy will be protected.

Final Thoughts

The success of a qualitative research study hinges on the effectiveness of its sampling technique. The choice of sampling technique must be guided by the research question, the population being studied, and the purpose of the study. Whether purposive, convenience, snowball, or theoretical sampling, the primary goal is to ensure the validity and reliability of the study’s findings.

By thoughtfully weighing the pros and cons of each sampling technique in qualitative research, researchers can make informed decisions that lead to more reliable and accurate results. In conclusion, carefully selecting a sampling technique is integral to the success of a qualitative research study, and a thorough understanding of the available options can make all the difference in achieving high-quality research outcomes.




Purposeful sampling for qualitative data collection and analysis in mixed method implementation research

Lawrence A. Palinkas (1), Sarah M. Horwitz (2), Carla A. Green (3), Jennifer P. Wisdom (4), Naihua Duan (5), and Kimberly Hoagwood

1 School of Social Work, University of Southern California, Los Angeles, CA 90089-0411
2 Department of Child and Adolescent Psychiatry, New York University, New York, NY
3 Center for Health Research, Kaiser Permanente Northwest, Portland, OR
4 George Washington University, Washington DC
5 New York State Neuropsychiatric Institute and Department of Psychiatry, Columbia University, New York, NY

Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.

Recently there have been several calls for the use of mixed method designs in implementation research ( Proctor et al., 2009 ; Landsverk et al., 2012 ; Palinkas et al. 2011 ; Aarons et al., 2012). This has been precipitated by the realization that the challenges of implementing evidence-based and other innovative practices, treatments, interventions and programs are sufficiently complex that a single methodological approach is often inadequate. This is particularly true of efforts to implement evidence-based practices (EBPs) in statewide systems where relationships among key stakeholders extend both vertically (from state to local organizations) and horizontally (between organizations located in different parts of a state). As in other areas of research, mixed method designs are viewed as preferable in implementation research because they provide a better understanding of research issues than either qualitative or quantitative approaches alone ( Palinkas et al., 2011 ). In such designs, qualitative methods are used to explore and obtain depth of understanding as to the reasons for success or failure to implement evidence-based practice or to identify strategies for facilitating implementation while quantitative methods are used to test and confirm hypotheses based on an existing conceptual model and obtain breadth of understanding of predictors of successful implementation ( Teddlie & Tashakkori, 2003 ).

Sampling strategies for quantitative methods used in mixed methods designs in implementation research are generally well-established and based on probability theory. In contrast, sampling strategies for qualitative methods in implementation studies are less explicit and often less evident. Although the samples for qualitative inquiry are generally assumed to be selected purposefully to yield cases that are “information rich” (Patton, 2001), there are no clear guidelines for conducting purposeful sampling in mixed methods implementation studies, particularly when studies have more than one specific objective. Moreover, it is not entirely clear what forms of purposeful sampling are most appropriate for the challenges of using both quantitative and qualitative methods in the mixed methods designs used in implementation research. Such a consideration requires a determination of the objectives of each methodology and the potential impact of selecting one strategy to achieve one objective on the selection of other strategies to achieve additional objectives.

In this paper, we present different approaches to the use of purposeful sampling strategies in implementation research. We begin with a review of the principles and practice of purposeful sampling in implementation research, a summary of the types and categories of purposeful sampling strategies, and a set of recommendations for matching the appropriate single strategy or multistage strategy to study aims and quantitative method designs.

Principles of Purposeful Sampling

Purposeful sampling is a technique widely used in qualitative research for the identification and selection of information-rich cases for the most effective use of limited resources ( Patton, 2002 ). This involves identifying and selecting individuals or groups of individuals that are especially knowledgeable about or experienced with a phenomenon of interest ( Cresswell & Plano Clark, 2011 ). In addition to knowledge and experience, Bernard (2002) and Spradley (1979) note the importance of availability and willingness to participate, and the ability to communicate experiences and opinions in an articulate, expressive, and reflective manner. In contrast, probabilistic or random sampling is used to ensure the generalizability of findings by minimizing the potential for bias in selection and to control for the potential influence of known and unknown confounders.

As Morse and Niehaus (2009) observe, whether the methodology employed is quantitative or qualitative, sampling methods are intended to maximize efficiency and validity. Nevertheless, sampling must be consistent with the aims and assumptions inherent in the use of either method. Qualitative methods are, for the most part, intended to achieve depth of understanding while quantitative methods are intended to achieve breadth of understanding ( Patton, 2002 ). Qualitative methods place primary emphasis on saturation (i.e., obtaining a comprehensive understanding by continuing to sample until no new substantive information is acquired) ( Miles & Huberman, 1994 ). Quantitative methods place primary emphasis on generalizability (i.e., ensuring that the knowledge gained is representative of the population from which the sample was drawn). Each methodology, in turn, has different expectations and standards for determining the number of participants required to achieve its aims. Quantitative methods rely on established formulae for avoiding Type I and Type II errors, while qualitative methods often rely on precedents for determining number of participants based on type of analysis proposed (e.g., 3-6 participants interviewed multiple times in a phenomenological study versus 20-30 participants interviewed once or twice in a grounded theory study), level of detail required, and emphasis of homogeneity (requiring smaller samples) versus heterogeneity (requiring larger samples) ( Guest, Bunce & Johnson., 2006 ; Morse & Niehaus, 2009 ; Padgett, 2008 ).
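For readers who want to see what the "established formulae" on the quantitative side look like, one standard example is the approximate per-group sample size for comparing two means. This formula is a textbook result included here only for contrast with the qualitative precedents; it is not quoted from the paper.

```latex
% Approximate sample size per group for detecting a true difference in means \Delta,
% with two-sided significance level \alpha, power 1-\beta, and common standard deviation \sigma:
n \;\approx\; \frac{2\, (z_{1-\alpha/2} + z_{1-\beta})^{2}\, \sigma^{2}}{\Delta^{2}}
```

For instance, with alpha = 0.05, power of 0.80, and an effect of half a standard deviation, this gives roughly 63 participants per group, an order of magnitude larger than the 3 to 6 or 20 to 30 participants cited above for phenomenological and grounded theory studies.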

Types of purposeful sampling designs

There exist numerous purposeful sampling designs. Examples include the selection of extreme or deviant (outlier) cases for the purpose of learning from unusual manifestations of the phenomena of interest; the selection of cases with maximum variation for the purpose of documenting unique or diverse variations that have emerged in adapting to different conditions, and to identify important common patterns that cut across variations; and the selection of homogeneous cases for the purpose of reducing variation, simplifying analysis, and facilitating group interviewing. A list of some of these strategies and examples of their use in implementation research is provided in Table 1.

Table 1. Purposeful sampling strategies in implementation research

Emphasis on similarity

Criterion-i
  • Objective: To identify and select all cases that meet some predetermined criterion of importance.
  • Example: Selection of consultant trainers and program leaders at study sites to identify facilitators of and barriers to EBP implementation.
  • Considerations: Can be used to identify cases from standardized questionnaires for in-depth follow-up.

Criterion-e
  • Objective: To identify and select all cases that exceed or fall outside a specified criterion.
  • Example: Selection of directors of agencies that failed to move to the next stage of implementation within the expected period of time.

Typical case
  • Objective: To illustrate or highlight what is typical, normal, or average.
  • Example: A child undergoing treatment for trauma.
  • Considerations: The purpose is to describe and illustrate what is typical to those unfamiliar with the setting, not to make generalized statements about the experiences of all participants.

Homogeneity
  • Objective: To describe a particular subgroup in depth, reduce variation, simplify analysis, and facilitate group interviewing.
  • Example: Selecting Latino/a directors of mental health services agencies to discuss challenges of implementing evidence-based treatments for mental health problems with Latino/a clients.
  • Considerations: Often used for selecting focus group participants.

Snowball
  • Objective: To identify cases of interest by sampling people who know people with generally similar characteristics who, in turn, know other people with similar characteristics.
  • Example: Asking recruited program managers to identify clinicians, administrative support staff, and consumers for project recruitment.
  • Considerations: Begins by asking key informants or well-situated people “Who knows a lot about…” (Patton, 2001).

Emphasis on variation

Extreme or deviant case
  • Objective: To illuminate both the unusual and the typical.
  • Example: Selecting clinicians from state mental health agencies with the best and worst performance records or implementation outcomes.
  • Considerations: Extreme successes or failures may be discredited as being too extreme or unusual to yield useful information, leading one to select cases that manifest sufficient intensity to illuminate the nature of success or failure, but not in the extreme.

Intensity
  • Objective: Same objective as extreme case sampling but with less emphasis on extremes.
  • Example: Clinicians providing usual care and clinicians who dropped out of a study prior to consent, to contrast with clinicians who provided the intervention under investigation.
  • Considerations: Requires the researcher to do some exploratory work to determine the nature of the variation in the situation under study, then sampling intense examples of the phenomenon of interest.

Maximum variation
  • Objective: To identify important shared patterns that cut across cases and derive their significance from having emerged out of heterogeneity.
  • Example: Sampling mental health services programs in urban and rural areas in different parts of the state (north, central, south) to capture maximum variation in location.
  • Considerations: Can be used to document unique or diverse variations that have emerged in adapting to different conditions.

Critical case
  • Objective: To permit logical generalization and maximum application of information because if it is true in this one case, it is likely to be true of all other cases.
  • Example: Investigation of a group of agencies that decided to stop using an evidence-based practice to identify reasons for lack of EBP sustainment.
  • Considerations: Depends on recognition of the key dimensions that make for a critical case. Particularly important when resources may limit the study to only one site (program, community, population).

Theory-based
  • Objective: To find manifestations of a theoretical construct so as to elaborate and examine the construct and its variations.
  • Example: Sampling therapists based on academic training to understand the impact of CBT training versus psychodynamic training in graduate school on acceptance of EBPs.
  • Considerations: Sample on the basis of potential manifestation or representation of important theoretical constructs, or on the basis of emerging concepts, with the aim of exploring the dimensional range or varied conditions along which the properties of concepts vary.

Confirming and disconfirming case
  • Objective: To confirm the importance and meaning of possible patterns and check the viability of emergent findings with new data and additional cases.
  • Example: Once trends are identified, deliberately seeking examples that run counter to the trend.
  • Considerations: Usually employed in later phases of data collection. Confirmatory cases are additional examples that fit already emergent patterns and add richness, depth, and credibility; disconfirming cases are a source of rival interpretations as well as a means of placing boundaries around confirmed findings.

Stratified purposeful
  • Objective: To capture major variations rather than identify a common core, although the latter may emerge in the analysis.
  • Example: Combining typical case sampling with maximum variation sampling by taking a stratified purposeful sample of above-average, average, and below-average cases of health care expenditures for a particular problem.
  • Considerations: Represents less than the full maximum variation sample, but more than simple typical case sampling.

Purposeful random
  • Objective: To increase the credibility of results.
  • Example: Selecting for interviews a random sample of providers to describe experiences with EBP implementation.
  • Considerations: Not as representative of the population as a probability random sample.

Nonspecific emphasis

Opportunistic or emergent
  • Objective: To take advantage of circumstances, events, and opportunities for additional data collection as they arise.
  • Considerations: Usually employed when it is impossible to identify the sample, or the population from which a sample should be drawn, at the outset of a study. Used primarily in conducting ethnographic fieldwork.

Convenience
  • Objective: To collect information from participants who are easily accessible to the researcher.
  • Example: Recruiting providers attending a staff meeting for study participation.
  • Considerations: Although commonly used, it is neither purposeful nor strategic.

Embedded in each strategy is the ability to compare and contrast, to identify similarities and differences in the phenomenon of interest. Nevertheless, some of these strategies (e.g., maximum variation sampling, extreme case sampling, intensity sampling, and purposeful random sampling) are used to identify and expand the range of variation or differences, similar to the use of quantitative measures to describe the variability or dispersion of values for a particular variable or variables, while other strategies (e.g., homogeneous sampling, typical case sampling, criterion sampling, and snowball sampling) are used to narrow the range of variation and focus on similarities. The latter are similar to the use of quantitative central tendency measures (e.g., mean, median, and mode). Moreover, certain strategies, like stratified purposeful sampling or opportunistic or emergent sampling, are designed to achieve both goals. As Patton (2002 , p. 240) explains, “the purpose of a stratified purposeful sample is to capture major variations rather than to identify a common core, although the latter may also emerge in the analysis. Each of the strata would constitute a fairly homogeneous sample.”

Challenges to use of purposeful sampling

Despite its wide use, there are numerous challenges in identifying and applying the appropriate purposeful sampling strategy in any study. For instance, the range of variation in the population from which a purposive sample is to be taken is often not really known at the outset of a study. To set as the goal the sampling of information-rich informants that cover the range of variation assumes one knows that range of variation. Consequently, an iterative approach of sampling and re-sampling to draw an appropriate sample is usually recommended to make certain that theoretical saturation occurs ( Miles & Huberman, 1994 ). However, that saturation may be determined a priori on the basis of an existing theory or conceptual framework, or it may emerge from the data themselves, as in a grounded theory approach ( Glaser & Strauss, 1967 ). Second, a not insignificant number of researchers in the qualitative methods field resist or refuse systematic sampling of any kind and reject the limiting nature of such realist, systematic, or positivist approaches. This includes critics of interventions and “bottom up” case studies and critiques. However, even those who equate purposeful sampling with systematic sampling must offer a rationale for selecting study participants that is linked with the aims of the investigation (i.e., why recruit these individuals for this particular study? What qualifies them to address the aims of the study?). While systematic sampling may be associated with a post-positivist tradition of qualitative data collection and analysis, such sampling is not inherently limited to such analyses and the need for such sampling is not inherently limited to post-positivist qualitative approaches ( Patton, 2002 ).

Purposeful Sampling in Implementation Research

Characteristics of implementation research.

In implementation research, quantitative and qualitative methods often play important roles, either simultaneously or sequentially: answering the same question through convergence of results from different sources; answering related questions in a complementary fashion; using one set of methods to expand or explain the results obtained from the other set; using one set of methods to develop questionnaires or conceptual models that inform the use of the other set; and using one set of methods to identify the sample for analysis with the other set (Palinkas et al., 2011). A review of mixed method designs in implementation research conducted by Palinkas and colleagues (2011) revealed seven different sequential and simultaneous structural arrangements, five different functions of mixed methods, and three different ways of linking quantitative and qualitative data together. However, this review did not consider the sampling strategies involved in the types of quantitative and qualitative methods common to implementation research, nor did it consider the consequences of the sampling strategy selected for one method or set of methods on the choice of sampling strategy for the other method or set of methods. For instance, one of the most significant challenges to sampling in sequential mixed method designs lies in the limitations the initial method may place on sampling for the subsequent method. As Morse and Niehaus (2009) observe, when the initial method is qualitative, the sample selected may be too small and lack the randomization necessary to fulfill the assumptions of a subsequent quantitative analysis. On the other hand, when the initial method is quantitative, the sample selected may be too large for each individual to be included in qualitative inquiry and lack the purposeful selection needed to reduce the sample to a size more appropriate for qualitative research. The fact that potential participants were recruited and selected at random does not necessarily make them information rich.

A re-examination of the 22 studies and an additional 6 studies published since 2009 revealed that only 5 studies (Aarons & Palinkas, 2007; Bachman et al., 2009; Palinkas et al., 2011; Palinkas et al., 2012; Slade et al., 2003) made a specific reference to purposeful sampling. An additional three studies (Henke et al., 2008; Proctor et al., 2007; Swain et al., 2010) did not make explicit reference to purposeful sampling but did provide a rationale for sample selection. The remaining 20 studies provided no description of the sampling strategy used to identify participants for qualitative data collection and analysis; however, a rationale could be inferred based on a description of who was recruited and selected for participation. Of the 28 studies, 3 used more than one sampling strategy. Twenty-one of the 28 studies (75%) used some form of criterion sampling. In most instances, the criterion used related to the individual's role, either in the research project (e.g., trainer, team leader) or in the agency (e.g., program director, clinical supervisor, clinician); in other words, a criterion of inclusion in a certain category (criterion-i), in contrast to cases that are external to a specific criterion (criterion-e). For instance, in a series of studies based on the National Implementing Evidence-Based Practices Project, semi-structured interviews were conducted with consultant trainers and program leaders at each study site (Brunette et al., 2008; Marshall et al., 2008; Marty et al., 2007; Rapp et al., 2010; Woltmann et al., 2008). Six studies used some form of maximum variation sampling to ensure representativeness and diversity of organizations and individual practitioners. Two studies used intensity sampling to make contrasts. Aarons and Palinkas (2007), for example, purposefully selected 15 child welfare case managers representing those having the most positive and those having the most negative views of SafeCare, an evidence-based prevention intervention, based on results of a web-based quantitative survey asking about the perceived value and usefulness of SafeCare. Kramer and Burns (2008) recruited and interviewed clinicians providing usual care and clinicians who dropped out of a study prior to consent to contrast with clinicians who provided the intervention under investigation. One study (Hoagwood et al., 2007) used a typical case approach to identify participants for a qualitative assessment of the challenges faced in implementing a trauma-focused intervention for youth. One study (Green & Aarons, 2011) used a combined snowball sampling/criterion-i strategy by asking recruited program managers to identify clinicians, administrative support staff, and consumers for project recruitment. County mental health directors, agency directors, and program managers were recruited to represent the policy interests of implementation, while clinicians, administrative support staff, and consumers were recruited to represent the direct practice perspectives of EBP implementation.
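The selection rule behind this kind of intensity sampling can be made explicit in a few lines of code. The sketch below is only an illustration of the general idea, not the procedure used by Aarons and Palinkas (2007): the respondent IDs, the attitude scores, and the number of cases drawn from each end of the distribution are all invented.

    import pandas as pd

    # Hypothetical survey results: one row per respondent, with an attitude score (1-5).
    survey = pd.DataFrame({
        "respondent_id": [f"cm_{i:02d}" for i in range(1, 41)],
        "perceived_value": [((i * 7) % 5) + 1 for i in range(1, 41)],   # made-up scores
    })

    ranked = survey.sort_values("perceived_value")

    # Intensity sampling: take information-rich cases from both ends of the distribution.
    most_negative = ranked.head(8)      # least favorable views
    most_positive = ranked.tail(7)      # most favorable views

    qualitative_sample = pd.concat([most_negative, most_positive])
    print(qualitative_sample["respondent_id"].tolist())

In a real study the scores would come from the quantitative survey itself, and the cut-offs would be justified in terms of the contrast the qualitative interviews are meant to draw.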

Table 2 below provides a description of the use of different purposeful sampling strategies in mixed methods implementation studies. Criterion-i sampling was most frequently used in mixed methods implementation studies that employed a simultaneous design where the qualitative method was secondary to the quantitative method or studies that employed a simultaneous structure where the qualitative and quantitative methods were assigned equal priority. These mixed method designs were used to complement the depth of understanding afforded by the qualitative methods with the breadth of understanding afforded by the quantitative methods (n = 13), to explain or elaborate upon the findings of one set of methods (usually quantitative) with the findings from the other set of methods (n = 10), or to seek convergence through triangulation of results or quantifying qualitative data (n = 8). The process of mixing methods in the large majority (n = 18) of these studies involved embedding the qualitative study within the larger quantitative study. In one study (Gioia & Dziadosz, 2008), criterion sampling was used in a simultaneous design where quantitative and qualitative data were merged together in a complementary fashion, and in two studies (Aarons et al., 2012; Zazzali et al., 2008), quantitative and qualitative data were connected together, one in a sequential design for the purpose of developing a conceptual model (Zazzali et al., 2008) and one in a simultaneous design for the purpose of complementing one another (Aarons et al., 2012). Three of the six studies that used maximum variation sampling used a simultaneous structure with quantitative methods taking priority over qualitative methods and a process of embedding the qualitative methods in a larger quantitative study (Henke et al., 2008; Palinkas et al., 2010; Slade et al., 2008). Two of the six studies used maximum variation sampling in a sequential design (Aarons et al., 2009; Zazzali et al., 2008) and one in a simultaneous design (Henke et al., 2010) for the purpose of development, and three used it in a simultaneous design for complementarity (Bachman et al., 2009; Henke et al., 2008; Palinkas, Ell, Hansen, Cabassa, & Wells, 2011). The two studies relying upon intensity sampling used a simultaneous structure for the purpose of either convergence or expansion, and both studies involved a qualitative study embedded in a larger quantitative study (Aarons & Palinkas, 2007; Kramer & Burns, 2008). The single typical case study involved a simultaneous design where the qualitative study was embedded in a larger quantitative study for the purpose of complementarity (Hoagwood et al., 2007). The snowball/maximum variation study involved a sequential design where the qualitative study was merged into the quantitative data for the purpose of convergence and conceptual model development (Green & Aarons, 2011). Although not used in any of the 28 implementation studies examined here, another common sequential sampling strategy is criterion sampling of the larger quantitative sample to produce a second-stage qualitative sample, in a manner similar to maximum variation sampling except that the former narrows the range of variation while the latter expands it.

Purposeful sampling strategies and mixed method designs in implementation research

Sampling strategy | Structure | Design | Function

Single stage sampling (n = 22)

Criterion (n = 18). Structure: Simultaneous (n = 17), Sequential (n = 6). Design: Merged (n = 9), Connected (n = 9), Embedded (n = 14). Function: Convergence (n = 6), Complementarity (n = 12), Expansion (n = 10), Development (n = 3), Sampling (n = 4).

Maximum variation (n = 4). Structure: Simultaneous (n = 3), Sequential (n = 1). Design: Merged (n = 1), Connected (n = 1), Embedded (n = 2). Function: Convergence (n = 1), Complementarity (n = 2), Expansion (n = 1), Development (n = 2).

Intensity (n = 1). Structure: Simultaneous, Sequential. Design: Merged, Connected, Embedded. Function: Convergence, Complementarity, Expansion, Development.

Typical case (n = 1). Structure: Simultaneous. Design: Embedded. Function: Complementarity.

Multistage sampling (n = 4)

Criterion/maximum variation (n = 2). Structure: Simultaneous, Sequential. Design: Embedded, Connected. Function: Complementarity, Development.

Criterion/intensity (n = 1). Structure: Simultaneous. Design: Embedded. Function: Convergence, Complementarity, Expansion.

Criterion/snowball (n = 1). Structure: Sequential. Design: Connected. Function: Convergence, Development.

Criterion-i sampling as a purposeful sampling strategy shares many characteristics with random probability sampling, despite having different aims and different procedures for identifying and selecting potential participants. In both instances, study participants are drawn from agencies, organizations or systems involved in the implementation process. Individuals are selected based on the assumption that they possess knowledge and experience with the phenomenon of interest (i.e., the implementation of an EBP) and thus will be able to provide information that is both detailed (depth) and generalizable (breadth). Participants for a qualitative study, usually service providers, consumers, agency directors, or state policy-makers, are drawn from the larger sample of participants in the quantitative study. They are selected from the larger sample because they meet the same criteria, in this case, playing a specific role in the organization and/or implementation process. To some extent, they are assumed to be “representative” of that role, although implementation studies rarely explain the rationale for selecting only some and not all of the available role representatives (i.e., recruiting 15 providers from an agency for semi-structured interviews out of an available sample of 25 providers). From the perspective of qualitative methodology, participants who meet or exceed a specific criterion or criteria possess intimate (or, at the very least, greater) knowledge of the phenomenon of interest by virtue of their experience, making them information-rich cases.
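In code terms, criterion-i selection from a larger quantitative sample is essentially a filter plus an explicit rule for choosing among eligible cases. The sketch below is a minimal illustration of that logic, not a procedure taken from any of the studies reviewed here: the roster, the role criterion, and the rule of keeping the most experienced providers are all hypothetical.

    import pandas as pd

    # Hypothetical roster of everyone enrolled in the larger quantitative study.
    quant_sample = pd.DataFrame({
        "participant_id": [f"p_{i:03d}" for i in range(1, 61)],
        "role": ["clinician"] * 30 + ["clinical supervisor"] * 20 + ["program director"] * 10,
        "years_experience": [(i * 3) % 20 for i in range(1, 61)],   # made-up values
    })

    # Criterion-i: keep only those who occupy the role of interest.
    meets_criterion = quant_sample[quant_sample["role"] == "clinical supervisor"]

    # If more people meet the criterion than can be interviewed, the rule for choosing
    # among them (here, the most experienced) should be stated explicitly in the write-up.
    qualitative_sample = meets_criterion.nlargest(15, "years_experience")
    print(qualitative_sample["participant_id"].tolist())

Making the selection rule explicit in this way addresses the reporting gap noted above, namely that implementation studies rarely explain why only some of the available role representatives were interviewed.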

However, criterion sampling may not be the most appropriate strategy for implementation research because, by attempting to capture both breadth and depth of understanding, it may actually be inadequate to the task of accomplishing either. Although qualitative methods are often contrasted with quantitative methods on the basis of depth versus breadth, they actually require elements of both in order to provide a comprehensive understanding of the phenomenon of interest. Ideally, the goal of achieving theoretical saturation by providing as much detail as possible involves selection of individuals or cases that can ensure that all aspects of the phenomenon are included in the examination and that any one aspect is thoroughly examined. This goal, therefore, requires an approach that sequentially or simultaneously expands and narrows the field of view, respectively. By selecting only individuals who meet a specific criterion defined on the basis of their role in the implementation process or who have a specific experience (e.g., engaged only in an implementation defined as successful or only in one defined as unsuccessful), one may fail to capture the experiences or activities of other groups playing other roles in the process. For instance, a focus only on practitioners may fail to capture the insights, experiences, and activities of consumers, family members, agency directors, administrative staff, or state policy leaders in the implementation process, thus limiting the breadth of understanding of that process. On the other hand, selecting participants on the basis of whether they are a practitioner, consumer, director, staff member, or any of the above may fail to identify those with the greatest experience, the most knowledge, or the greatest ability to communicate what they know and have experienced, thus limiting the depth of understanding of the implementation process.

To address the potential limitations of criterion sampling, other purposeful sampling strategies should be considered and possibly adopted in implementation research (Figure 1). For instance, strategies placing greater emphasis on breadth and variation, such as maximum variation, extreme case, and confirming and disconfirming case sampling, are better suited for an examination of differences, while strategies placing greater emphasis on depth and similarity, such as homogeneous, snowball, and typical case sampling, are better suited for an examination of commonalities or similarities, even though both types of sampling strategies include a focus on both differences and similarities. Alternatives to criterion sampling may be more appropriate to the specific functions of mixed methods, however. For instance, using qualitative methods for the purpose of complementarity may require that a sampling strategy emphasize similarity if it is to achieve depth of understanding or explore and develop hypotheses that complement a quantitative probability sampling strategy achieving breadth of understanding and testing hypotheses (Kemper et al., 2003). Similarly, mixed methods that address related questions for the purpose of expanding or explaining results or developing new measures or conceptual models may require a purposeful sampling strategy aiming for similarity that complements probability sampling aiming for variation or dispersion. A narrowly focused purposeful sampling strategy for qualitative analysis that "complements" a more broadly focused probability sample for quantitative analysis may help to achieve a balance between increasing inference quality/trustworthiness (internal validity) and generalizability/transferability (external validity); a single method that focuses only on a broad view may gain external validity at the expense of internal validity (Kemper et al., 2003). On the other hand, the aim of convergence (answering the same question with either method) may suggest use of a purposeful sampling strategy that aims for breadth and parallels the quantitative probability sampling strategy.


Purposeful and Random Sampling Strategies for Mixed Method Implementation Studies

  • (1) Priority and sequencing of Qualitative (QUAL) and Quantitative (QUAN) can be reversed.
  • (2) Refers to emphasis of sampling strategy.


Furthermore, the specific nature of implementation research suggests that a multistage purposeful sampling strategy be used. Three different multistage sampling strategies are illustrated in Figure 1 below. Several qualitative methodologists recommend sampling for variation (breadth) before sampling for commonalities (depth) (Glaser, 1978; Bernard, 2002) (Multistage I). Also known as a "funnel approach," this strategy is often recommended when conducting semi-structured interviews (Spradley, 1979) or focus groups (Morgan, 1997). The approach begins with a broad view of the topic and then proceeds to narrow the conversation down to very specific components of the topic. However, as noted earlier, the lack of a clear understanding of the range of variation may require an iterative approach in which each stage of data analysis helps to determine subsequent means of data collection and analysis (Denzin, 1978; Patton, 2001) (Multistage II). Similarly, multistage purposeful sampling designs, like opportunistic or emergent sampling, allow the option of adding to a sample to take advantage of unforeseen opportunities after data collection has been initiated (Patton, 2001, p. 240) (Multistage III). Multistage I models generally involve two stages, while a Multistage II model requires a minimum of three stages, alternating from sampling for variation to sampling for similarity. A Multistage III model begins with sampling for variation and ends with sampling for similarity, but may involve one or more intervening stages of sampling for variation or similarity as the need or opportunity arises.
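To make the funnel idea concrete, the toy sketch below walks through a two-stage Multistage I design: sample sites for maximum variation first, then recruit a homogeneous group within each selected site. All names, attributes, and numbers are invented for illustration; a real design would base both stages on substantive criteria rather than on a lookup table.

    import random

    random.seed(1)

    # Stage 0: a hypothetical pool of candidate sites with two attributes of interest.
    sites = [{"name": f"site_{i}",
              "setting": random.choice(["urban", "rural"]),
              "size": random.choice(["small", "large"])} for i in range(20)]

    # Stage 1 (breadth): keep one site per combination of attributes (maximum variation).
    by_profile = {}
    for site in sites:
        by_profile.setdefault((site["setting"], site["size"]), site)
    selected_sites = list(by_profile.values())

    # Stage 2 (depth): within each selected site, recruit a homogeneous group,
    # e.g., only front-line clinicians, to focus on shared experiences.
    def recruit_clinicians(site, k=5):
        return [f"{site['name']}_clinician_{j}" for j in range(k)]

    sample = [p for site in selected_sites for p in recruit_clinicians(site)]
    print(len(selected_sites), "sites,", len(sample), "participants")

An iterative Multistage II or III design would simply repeat or interleave these two steps as the emerging analysis reveals where more variation or more depth is needed.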

Multistage purposeful sampling is also consistent with the use of hybrid designs to simultaneously examine intervention effectiveness and implementation. An extension of the concept of "practical clinical trials" (Tunis, Stryer, & Clancy, 2003), effectiveness-implementation hybrid designs provide benefits such as more rapid translational gains in clinical intervention uptake, more effective implementation strategies, and more useful information for researchers and decision makers (Curran et al., 2012). Such designs may give equal priority to the testing of clinical treatments and implementation strategies (Hybrid Type 2) or give priority to the testing of treatment effectiveness (Hybrid Type 1) or implementation strategy (Hybrid Type 3). Curran and colleagues (2012) suggest that evaluation of the intervention's effectiveness will require or involve use of quantitative measures, while evaluation of the implementation process will require or involve use of mixed methods. When conducting a Hybrid Type 1 design (conducting a process evaluation of implementation in the context of a clinical effectiveness trial), the qualitative data could be used to inform the findings of the effectiveness trial. Thus, an effectiveness trial that finds substantial variation might purposefully select participants using a broader strategy, like sampling for disconfirming cases, to account for the variation. For instance, group randomized trials require knowledge of the contexts and circumstances that are similar and different across sites to account for inevitable site differences in interventions and to assist local implementations of an intervention (Bloom & Michalopoulos, 2013; Raudenbush & Liu, 2000). Alternatively, a narrow strategy may be used to account for the lack of variation. In either instance, the choice of a purposeful sampling strategy is determined by the outcomes of the quantitative analysis that is based on a probability sampling strategy. In Hybrid Type 2 and Type 3 designs, where the implementation process is given equal or greater priority than the effectiveness trial, the purposeful sampling strategy must be first and foremost consistent with the aims of the implementation study, which may be to understand variation, central tendencies, or both. In all three instances, the sampling strategy employed for the implementation study may vary based on the priority assigned to that study relative to the effectiveness trial. For instance, purposeful sampling for a Hybrid Type 1 design may give higher priority to variation and comparison to understand the parameters of implementation processes or context as a contribution to an understanding of effectiveness outcomes (i.e., using qualitative data to expand upon or explain the results of the effectiveness trial). In effect, these process measures could be seen as modifiers of innovation/EBP outcome. In contrast, purposeful sampling for a Hybrid Type 3 design may give higher priority to similarity and depth to understand the core features of successful outcomes only.

Finally, multistage sampling strategies may be more consistent with innovations in experimental designs representing alternatives to the classic randomized controlled trial in community-based settings that have greater feasibility, acceptability, and external validity. While RCT designs provide the highest level of evidence, “in many clinical and community settings, and especially in studies with underserved populations and low resource settings, randomization may not be feasible or acceptable” ( Glasgow, et al., 2005 , p. 554). Randomized trials are also “relatively poor in assessing the benefit from complex public health or medical interventions that account for individual preferences for or against certain interventions, differential adherence or attrition, or varying dosage or tailoring of an intervention to individual needs” ( Brown et al., 2009 , p. 2). Several alternatives to the randomized design have been proposed, such as “interrupted time series,” “multiple baseline across settings” or “regression-discontinuity” designs. Optimal designs represent one such alternative to the classic RCT and are addressed in detail by Duan and colleagues (this issue) . Like purposeful sampling, optimal designs are intended to capture information-rich cases, usually identified as individuals most likely to benefit from the experimental intervention. The goal here is not to identify the typical or average patient, but patients who represent one end of the variation in an extreme case, intensity sampling, or criterion sampling strategy. Hence, a sampling strategy that begins by sampling for variation at the first stage and then sampling for homogeneity within a specific parameter of that variation (i.e., one end or the other of the distribution) at the second stage would seem the best approach for identifying an “optimal” sample for the clinical trial.

Another alternative to the classic RCT is the set of adaptive designs proposed by Brown and colleagues (Brown et al., 2006; Brown et al., 2008; Brown et al., 2009). Adaptive designs are a sequence of trials that draw on the results of existing studies to determine the next stage of evaluation research. They use cumulative knowledge of current treatment successes or failures to change qualities of the ongoing trial. An adaptive intervention modifies what an individual subject (or community, for a group-based trial) receives in response to his or her preferences or initial responses to an intervention. Consistent with multistage sampling in qualitative research, the design is somewhat iterative in nature, in the sense that information gained from analysis of data collected at the first stage influences the nature of the data collected, and the way they are collected, at subsequent stages (Denzin, 1978). Furthermore, many of these adaptive designs may benefit from a multistage purposeful sampling strategy at early phases of the clinical trial to identify the range of variation and core characteristics of study participants. This information can then be used for the purposes of identifying the optimal dose of treatment, limiting sample size, randomizing participants into different enrollment procedures, determining who should be eligible for random assignment (as in the optimal design) to maximize treatment adherence and minimize dropout, or identifying incentives and motives that may be used to encourage participation in the trial itself.

Alternatives to the classic RCT design may also be desirable in studies that adopt a community-based participatory research framework (Minkler & Wallerstein, 2003), considered to be an important tool in conducting implementation research (Palinkas & Soydan, 2012). Such frameworks suggest that identification and recruitment of potential study participants will place greater emphasis on the priorities and "local knowledge" of community partners than on the need to sample for variation or uniformity. In this instance, the first stage of sampling may approximate the strategy of sampling politically important cases (Patton, 2002), followed by other sampling strategies intended to maximize variation in stakeholder opinions or experience.

On the basis of this review, the following recommendations are offered for the use of purposeful sampling in mixed method implementation research. First, many mixed methods studies in health services research and implementation science do not clearly identify or provide a rationale for the sampling procedure for either quantitative or qualitative components of the study ( Wisdom et al., 2011 ), so a primary recommendation is for researchers to clearly describe their sampling strategies and provide the rationale for the strategy.

Second, use of a single stage strategy for purposeful sampling for qualitative portions of a mixed methods implementation study should adhere to the same general principles that govern all forms of sampling, qualitative or quantitative. Kemper and colleagues (2003) identify seven such principles: 1) the sampling strategy should stem logically from the conceptual framework as well as the research questions being addressed by the study; 2) the sample should be able to generate a thorough database on the type of phenomenon under study; 3) the sample should at least allow the possibility of drawing clear inferences and credible explanations from the data; 4) the sampling strategy must be ethical; 5) the sampling plan should be feasible; 6) the sampling plan should allow the researcher to transfer/generalize the conclusions of the study to other settings or populations; and 7) the sampling scheme should be as efficient as practical.

Third, the field of implementation research is itself at a stage where qualitative methods are intended primarily to explore the barriers and facilitators of EBP implementation and to develop new conceptual models of implementation process and outcomes. This is especially important in state implementation research, where fiscal necessities are driving policy reforms for which knowledge about EBP implementation barriers and facilitators is urgently needed. Thus, a multistage strategy for purposeful sampling should begin with a broad view, with an emphasis on variation or dispersion, and then move to a narrow view, with an emphasis on similarity or central tendencies. Such a strategy is necessary for the task of finding the optimal balance between internal and external validity.

Fourth, if we assume that probability sampling will be the preferred strategy for the quantitative components of most implementation research, the selection of a single or multistage purposeful sampling strategy should be based, in part, on how it relates to the probability sample: either answering the same question (in which case a strategy emphasizing variation and dispersion is preferred) or answering related questions (in which case a strategy emphasizing similarity and central tendencies is preferred).

Fifth, it should be kept in mind that all sampling procedures, whether purposeful or probability, are designed to capture elements of both similarity and differences, of both centrality and dispersion, because both elements are essential to the task of generating new knowledge through the processes of comparison and contrast. Selecting a strategy that gives emphasis to one does not mean that it cannot be used for the other. Having said that, our analysis has assumed at least some degree of concordance between breadth of understanding associated with quantitative probability sampling and purposeful sampling strategies that emphasize variation on the one hand, and between the depth of understanding and purposeful sampling strategies that emphasize similarity on the other hand. While there may be some merit to that assumption, depth of understanding requires both an understanding of variation and common elements.

Finally, it should also be kept in mind that quantitative data can be generated from a purposeful sampling strategy and qualitative data can be generated from a probability sampling strategy. Each set of data is suited to a specific objective and each must adhere to a specific set of assumptions and requirements. Nevertheless, the promise of mixed methods, like the promise of implementation science, lies in its ability to move beyond the confines of existing methodological approaches and develop innovative solutions to important and complex problems. For states engaged in EBP implementation, the need for these solutions is urgent.


Multistage Purposeful Sampling Strategies

Acknowledgments

This study was funded through a grant from the National Institute of Mental Health (P30-MH090322: K. Hoagwood, PI).


Sampling Methods | Types, Techniques & Examples

Published on September 19, 2019 by Shona McCombes. Revised on June 22, 2023.

When you conduct research about a group of people, it’s rarely possible to collect data from every person in that group. Instead, you select a sample . The sample is the group of individuals who will actually participate in the research.

To draw valid conclusions from your results, you have to carefully decide how you will select a sample that is representative of the group as a whole. This is called a sampling method . There are two primary types of sampling methods that you can use in your research:

  • Probability sampling involves random selection, allowing you to make strong statistical inferences about the whole group.
  • Non-probability sampling involves non-random selection based on convenience or other criteria, allowing you to easily collect data.

You should clearly explain how you selected your sample in the methodology section of your paper or thesis, as well as how you approached minimizing research bias in your work.


First, you need to understand the difference between a population and a sample , and identify the target population of your research.

  • The population is the entire group that you want to draw conclusions about.
  • The sample is the specific group of individuals that you will collect data from.

The population can be defined in terms of geographical location, age, income, or many other characteristics.


It is important to carefully define your target population according to the purpose and practicalities of your project.

If the population is very large, demographically mixed, and geographically dispersed, it might be difficult to gain access to a representative sample. A lack of a representative sample affects the validity of your results, and can lead to several research biases , particularly sampling bias .

Sampling frame

The sampling frame is the actual list of individuals that the sample will be drawn from. Ideally, it should include the entire target population (and nobody who is not part of that population).

Sample size

The number of individuals you should include in your sample depends on various factors, including the size and variability of the population and your research design. There are different sample size calculators and formulas depending on what you want to achieve with statistical analysis .
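As one illustration of such a formula (there are many, and the right one depends on the analysis you plan), Cochran's formula gives the sample size needed to estimate a population proportion at a given confidence level and margin of error. The sketch below is a generic example, not tied to any particular study.

    import math

    def cochran_sample_size(z=1.96, p=0.5, e=0.05):
        """Cochran's formula n = z^2 * p * (1 - p) / e^2, where z is the z-score for
        the desired confidence level, p the expected proportion (0.5 is the most
        conservative guess), and e the acceptable margin of error."""
        return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

    print(cochran_sample_size())   # about 385 respondents for 95% confidence, +/-5% margin

For small populations the result can be adjusted downward with a finite population correction; for qualitative designs, sample size is usually judged by saturation rather than by a formula.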


Probability sampling means that every member of the population has a chance of being selected. It is mainly used in quantitative research . If you want to produce results that are representative of the whole population, probability sampling techniques are the most valid choice.

There are four main types of probability sample.


1. Simple random sampling

In a simple random sample, every member of the population has an equal chance of being selected. Your sampling frame should include the whole population.

To conduct this type of sampling, you can use tools like random number generators or other techniques that are based entirely on chance.
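In practice, a simple random draw is a one-line operation once the sampling frame exists as a list. The sketch below uses an invented frame of student email addresses and an arbitrary sample size of 100.

    import random

    # Hypothetical sampling frame: a list covering the whole target population.
    sampling_frame = [f"student_{i}@university.example" for i in range(1, 5001)]

    random.seed(42)                                  # fixed seed so the draw can be reproduced
    sample = random.sample(sampling_frame, k=100)    # every member has an equal chance
    print(sample[:5])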

2. Systematic sampling

Systematic sampling is similar to simple random sampling, but it is usually slightly easier to conduct. Every member of the population is listed with a number, but instead of randomly generating numbers, individuals are chosen at regular intervals.

If you use this technique, it is important to make sure that there is no hidden pattern in the list that might skew the sample. For example, if the HR database groups employees by team, and team members are listed in order of seniority, there is a risk that your interval might skip over people in junior roles, resulting in a sample that is skewed towards senior employees.
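The interval-based selection can be scripted as follows; the employee list, sample size, and random starting point are illustrative. Shuffling or re-ordering the list first is one way to guard against the hidden-pattern problem described above.

    import random

    population = [f"employee_{i}" for i in range(1, 1001)]   # hypothetical numbered list
    sample_size = 100
    interval = len(population) // sample_size                # select every k-th person

    start = random.randint(0, interval - 1)                  # random starting point
    sample = population[start::interval][:sample_size]
    print(len(sample), sample[:3])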

3. Stratified sampling

Stratified sampling involves dividing the population into subpopulations that may differ in important ways. It allows you to draw more precise conclusions by ensuring that every subgroup is properly represented in the sample.

To use this sampling method, you divide the population into subgroups (called strata) based on the relevant characteristic (e.g., gender identity, age range, income bracket, job role).

Based on the overall proportions of the population, you calculate how many people should be sampled from each subgroup. Then you use random or systematic sampling to select a sample from each subgroup.
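A minimal sketch of proportional stratified sampling is shown below, assuming a made-up population in which 60% of people hold junior roles and 40% senior roles; the stratum labels and overall sample size are illustrative.

    import random

    random.seed(0)

    # Hypothetical population records, each tagged with its stratum (job role).
    population = ([{"id": i, "role": "junior"} for i in range(600)]
                  + [{"id": i, "role": "senior"} for i in range(600, 1000)])
    total_sample = 100

    # Group the sampling frame by stratum.
    strata = {}
    for person in population:
        strata.setdefault(person["role"], []).append(person)

    # Allocate the sample proportionally, then draw at random within each stratum.
    sample = []
    for role, members in strata.items():
        n = round(total_sample * len(members) / len(population))
        sample.extend(random.sample(members, n))

    print({role: sum(p["role"] == role for p in sample) for role in strata})   # {'junior': 60, 'senior': 40}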

4. Cluster sampling

Cluster sampling also involves dividing the population into subgroups, but each subgroup should have similar characteristics to the whole sample. Instead of sampling individuals from each subgroup, you randomly select entire subgroups.

If it is practically possible, you might include every individual from each sampled cluster. If the clusters themselves are large, you can also sample individuals from within each cluster using one of the techniques above. This is called multistage sampling .

This method is good for dealing with large and dispersed populations, but there is more risk of error in the sample, as there could be substantial differences between clusters. It’s difficult to guarantee that the sampled clusters are really representative of the whole population.
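The sketch below illustrates the difference between single-stage cluster sampling and multistage sampling, using made-up schools as clusters; the numbers of schools and pupils are arbitrary.

    import random

    random.seed(7)

    # Hypothetical clusters: 50 schools, each with 200 pupils.
    schools = {f"school_{s}": [f"pupil_{s}_{p}" for p in range(200)] for s in range(50)}

    # Stage 1: randomly select whole clusters.
    chosen_schools = random.sample(list(schools), k=5)

    # Single-stage cluster sampling would include every pupil in the chosen schools;
    # multistage sampling draws a further random sample within each chosen cluster.
    sample = [pupil for school in chosen_schools
              for pupil in random.sample(schools[school], k=20)]
    print(len(sample))   # 5 schools x 20 pupils = 100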

In a non-probability sample, individuals are selected based on non-random criteria, and not every individual has a chance of being included.

This type of sample is easier and cheaper to access, but it has a higher risk of sampling bias . That means the inferences you can make about the population are weaker than with probability samples, and your conclusions may be more limited. If you use a non-probability sample, you should still aim to make it as representative of the population as possible.

Non-probability sampling techniques are often used in exploratory and qualitative research . In these types of research, the aim is not to test a hypothesis about a broad population, but to develop an initial understanding of a small or under-researched population.


1. Convenience sampling

A convenience sample simply includes the individuals who happen to be most accessible to the researcher.

This is an easy and inexpensive way to gather initial data, but there is no way to tell if the sample is representative of the population, so it can’t produce generalizable results. Convenience samples are at risk for both sampling bias and selection bias .

2. Voluntary response sampling

Similar to a convenience sample, a voluntary response sample is mainly based on ease of access. Instead of the researcher choosing participants and directly contacting them, people volunteer themselves (e.g. by responding to a public online survey).

Voluntary response samples are always at least somewhat biased , as some people will inherently be more likely to volunteer than others, leading to self-selection bias .

3. Purposive sampling

This type of sampling, also known as judgement sampling, involves the researcher using their expertise to select a sample that is most useful to the purposes of the research.

It is often used in qualitative research , where the researcher wants to gain detailed knowledge about a specific phenomenon rather than make statistical inferences, or where the population is very small and specific. An effective purposive sample must have clear criteria and rationale for inclusion. Always make sure to describe your inclusion and exclusion criteria and beware of observer bias affecting your arguments.

4. Snowball sampling

If the population is hard to access, snowball sampling can be used to recruit participants via other participants. The number of people you have access to “snowballs” as you get in contact with more people. The downside here is also representativeness, as you have no way of knowing how representative your sample is due to the reliance on participants recruiting others. This can lead to sampling bias .
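Record-keeping is the main practical task in snowball sampling: tracking who referred whom and stopping once the target size is reached. The toy sketch below treats referrals as a lookup table purely for illustration; in a real study they would come out of each interview.

    # Hypothetical referral chains starting from one seed participant.
    referrals = {
        "seed_1": ["p2", "p3"],
        "p2": ["p4"],
        "p3": ["p4", "p5"],
        "p4": [],
        "p5": ["p6"],
    }

    recruited, to_contact = [], ["seed_1"]
    while to_contact and len(recruited) < 5:          # stop at the target sample size
        person = to_contact.pop(0)
        if person not in recruited:
            recruited.append(person)
            to_contact.extend(referrals.get(person, []))

    print(recruited)   # the order shows how the sample "snowballed" out from the seed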

5. Quota sampling

Quota sampling relies on the non-random selection of a predetermined number or proportion of units. This is called a quota.

You first divide the population into mutually exclusive subgroups (called strata) and then recruit sample units until you reach your quota. These units share specific characteristics, determined by you prior to forming your strata. The aim of quota sampling is to control what or who makes up your sample.
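Operationally, quota sampling amounts to accepting volunteers until each predefined quota is full. The quotas and volunteer IDs below are invented for illustration.

    # Hypothetical quotas per age group and a register of who has been recruited.
    quotas = {"18-34": 20, "35-54": 20, "55+": 10}
    recruited = {group: [] for group in quotas}

    def try_recruit(person_id, age_group):
        """Accept the person only if their age group's quota is not yet full."""
        if len(recruited[age_group]) < quotas[age_group]:
            recruited[age_group].append(person_id)
            return True
        return False

    # Volunteers are processed as they arrive (a non-random selection).
    try_recruit("v001", "18-34")
    try_recruit("v002", "55+")
    print({group: len(people) for group, people in recruited.items()})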



A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling, and quota sampling .

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.

Sampling bias occurs when some members of a population are systematically more likely to be selected in a sample than others.


McCombes, S. (2023, June 22). Sampling Methods | Types, Techniques & Examples. Scribbr. Retrieved August 26, 2024, from https://www.scribbr.com/methodology/sampling-methods/



Qualitative Sampling Methods

Affiliation: School of Nursing, University of Texas Health Science Center, San Antonio, TX, USA. PMID: 32813616. DOI: 10.1177/0890334420949218

Qualitative sampling methods differ from quantitative sampling methods. It is important that one understands those differences, as well as appropriate qualitative sampling techniques. Appropriate sampling choices enhance the rigor of qualitative research studies. These types of sampling strategies are presented, along with the pros and cons of each. Sample size and data saturation are discussed.

Keywords: breastfeeding; qualitative methods; sampling; sampling methods.





Sampling Methods in Qualitative Research: Definition, Types with Examples


What is Sampling in Qualitative Research?

Sampling in qualitative research is defined as a process, carried out at an early stage of a study, of deliberately selecting individuals or cases from a broader population to participate in the research.

Unlike quantitative research, where the emphasis is often on achieving statistical generalizability, qualitative research seeks to obtain depth and richness of information. 

In qualitative research sampling, the focus is not on achieving statistical representation of the population but rather on gaining a profound understanding of the subject under investigation. Researchers carefully consider the appropriateness of each sampling method based on the research question, objectives, and the nature of the study population, ensuring alignment with the qualitative approach and the desired richness of data.

Key Methods for Qualitative Research Sampling

Various sampling methods are employed to select participants or cases that can provide meaningful insights and contribute to a rich understanding of the research question. Here, we’ll explore four common types of sampling methods in qualitative research, along with explanations and examples:

  • Purposeful Sampling:

Purposeful sampling involves intentionally selecting participants or cases based on specific criteria relevant to the research question. The goal is to gather in-depth information from individuals who can provide rich insights into the phenomenon under investigation. Researchers may use different purposeful sampling strategies, such as maximum variation (selecting diverse cases) or typical case (choosing a representative example).

Example: In a study exploring the experiences of cancer survivors, purposeful sampling might involve selecting participants with a variety of cancer types, treatment histories, and socio-demographic backgrounds to capture diverse perspectives.

  • Snowball Sampling:

Snowball sampling, or chain referral sampling, is used when studying populations that are challenging to reach through traditional methods. The researcher starts with a small number of participants and asks them to refer others who share similar characteristics or experiences. This method is particularly useful for studying hidden populations or subcultures.

Example: When researching illicit drug users, a researcher might start by interviewing a few individuals and then ask them to refer others in their social network who have similar experiences with drug use.

  • Theoretical Sampling:

Theoretical sampling is associated with grounded theory methodology. Unlike other sampling methods, theoretical sampling involves an ongoing and iterative process. Sampling decisions are made based on emerging themes and theoretical insights uncovered during data analysis. The goal is to gather data that help develop and refine emerging theories.

Example: In a study exploring the experiences of individuals transitioning between careers, theoretical sampling might involve selecting participants who can provide insights into specific aspects of the transition process as the study progresses.

  • Quota Sampling:

Quota sampling involves setting specific quotas based on predetermined characteristics such as age, gender, or socio-economic status. The researcher aims to ensure that the sample reflects the diversity present in the larger population. Quota sampling provides a structured way to achieve a balanced sample.

Example: In a study on consumer preferences for a new product, quota sampling might involve ensuring that the sample includes a proportional representation of different age groups and income levels to capture a range of perspectives.

These sampling methods are selected based on the nature of the research question, the goals of the study, and the characteristics of the population under investigation. Researchers often choose a method that aligns with the qualitative approach and allows for the collection of rich, context-specific data.

  • Convenience Sampling:

Convenience sampling involves selecting participants who are readily available and easily accessible to the researcher. This method is often pragmatic and efficient, but it may introduce bias since participants are not chosen based on specific criteria related to the research question. Convenience sampling is common in exploratory or pilot studies.

Example: If a researcher is studying the use of mobile banking apps, they might approach individuals in a public space, such as a coffee shop, and ask them about their experiences with mobile banking for a quick and accessible sample.

  • Criterion Sampling:

Criterion sampling involves selecting participants who meet specific criteria relevant to the research question. The criteria are predetermined and guide the researcher in choosing individuals who possess certain characteristics or have experienced particular events. This method ensures that the sample aligns closely with the study’s objectives.

Example: In a study on the impact of a specific educational intervention, criterion sampling might involve selecting participants who have completed the intervention program, ensuring that the sample includes individuals directly affected by the educational initiative.

Each of these qualitative sampling methods has its advantages and limitations. Researchers carefully consider the appropriateness of the method based on the research question, the study’s objectives, and the characteristics of the population being studied. The goal is to select a sampling strategy that aligns with the qualitative research approach, allowing for a nuanced exploration of the phenomenon under investigation.

Qualitative Research Sampling: Key Best Practices 

Using sampling methods in qualitative research requires thoughtful consideration and adherence to best practices to ensure the study’s validity, reliability, and relevance. Here are some best practices for employing sampling methods in qualitative research:

1. Clearly Define Research Objectives:

Begin by clearly defining the research objectives and the specific goals of the study. This clarity will guide the selection of an appropriate sampling method aligned with the research questions.

2. Select a Sampling Method Aligned with Research Goals:

Choose a sampling method that aligns with the nature of the research question and the study’s objectives. Consider the strengths and limitations of each method, and select the one that best serves the research purpose.

3. Use Multiple Sampling Strategies:

Consider employing multiple sampling strategies within the same study. This can enhance the richness and diversity of the data by capturing various perspectives and experiences related to the research question.

4. Establish Inclusion and Exclusion Criteria:

Clearly define inclusion and exclusion criteria based on the study’s objectives. This helps ensure that participants or cases selected contribute directly to the research question and provide relevant insights.

5. Document Sampling Decisions:

Document the rationale behind sampling decisions, including the criteria used and any adjustments made during the study. Transparent documentation enhances the study’s transparency, replicability, and credibility.

6. Consider Saturation:

Monitor data saturation throughout the study. Once saturation is reached (that is, when additional data collection no longer yields new themes or insights), data collection can cease, ensuring that the study has sufficiently explored the research question.

7. Strive for Diversity within the Sample:

Aim for diversity within the sample to capture a range of perspectives. Diversity can include variations in age, gender, socio-economic status, or other relevant characteristics, depending on the research question.

8. Ethical Considerations:

Prioritize ethical considerations in participant selection. Obtain informed consent, safeguard participant confidentiality, and ensure that vulnerable populations are treated with sensitivity and respect.

9. Adapt Sampling Strategies as Needed:

Be open to adapting sampling strategies based on emerging insights. Theoretical sampling, in particular, allows for adjustments in the sampling plan as the study progresses and new themes emerge.

10. Member Checking:

Consider implementing member checking, where preliminary findings are shared with participants to validate or refine the interpretations. This enhances the trustworthiness and credibility of the study.

11. Reflect on Researcher Bias:

Acknowledge and reflect on the potential biases introduced by the researcher during the sampling process. Reflexivity ensures transparency and helps mitigate bias in participant selection and interpretation of data.

By adhering to these best practices, researchers can enhance the rigor and quality of qualitative research. These practices contribute to the trustworthiness of the study and ensure that the selected sampling method aligns effectively with the research objectives.


An overview of sampling methods


When researching perceptions or attributes of a product, service, or people, you have two options:

Survey every person in your chosen group (the target market, or population), collate your responses, and reach your conclusions.

Select a smaller group from within your target market and use their answers to represent everyone. This option is sampling .

Sampling saves you time and money. When you use the sampling method, the list of everyone in the target population from whom the sample can be drawn is called the sampling frame .

The sample you choose should represent your target market, or the sampling frame, well enough to do one of the following:

Generalize your findings across the sampling frame and use them as though you had surveyed everyone

Use the findings to decide on your next step, which might involve more in-depth sampling


How was sampling developed?

Valery Glivenko and Francesco Cantelli, two mathematicians studying probability theory in the early 1900s, laid the mathematical groundwork for modern sampling. Their work showed that a properly chosen sample of people would reflect the larger group’s status, opinions, decisions, and decision-making steps.

They proved you don't need to survey the entire target market, thereby saving the rest of us a lot of time and money.

  • Why is sampling important?

We’ve already touched on the fact that sampling saves you time and money. When you get reliable results quickly, you can act on them sooner. And the money you save can pay for something else.

It’s often easier to survey a sample than a whole population. Sample inferences can be more reliable than those you get from a very large group because you can choose your samples carefully and scientifically.

Sampling is also useful because it is often impossible to survey the entire population. You probably have no choice but to collect only a sample in the first place.

Because you’re working with fewer people, you can collect richer data, which makes your research more accurate. You can:

Ask more questions

Go into more detail

Seek opinions instead of just collecting facts

Observe user behaviors

Double-check your findings if you need to

In short, sampling works! Let's take a look at the most common sampling methods.

  • Types of sampling methods

There are two main sampling methods: probability sampling and non-probability sampling. These can be further refined, which we'll cover shortly. You can then decide which approach best suits your research project.

Probability sampling method

Probability sampling is used in quantitative research, so it provides data on the survey topic in terms of numbers. Because the answers can be counted and analyzed statistically, this kind of research is called quantitative. Subjects are asked questions like:

How many boxes of candy do you buy at one time?

How often do you shop for candy?

How much would you pay for a box of candy?

This method is also called random sampling because everyone in the target market has an equal chance of being chosen for the survey. It is designed to reduce sampling error for the most important variables. You should, therefore, get results that fairly reflect the larger population.

Non-probability sampling method

In this method, not everyone has an equal chance of being part of the sample. It's usually easier (and cheaper) to select people for the sample group. You choose people who are more likely to be involved in or know more about the topic you’re researching.

Non-probability sampling is used for qualitative research. Qualitative data is generated by questions like:

Where do you usually shop for candy (supermarket, gas station, etc.)?

Which candy brand do you usually buy?

Why do you like that brand?

  • Probability sampling methods

Here are five ways of doing probability sampling:

Simple random sampling (basic probability sampling)

Systematic sampling

Stratified sampling

Cluster sampling

Multi-stage sampling

Simple random sampling

There are three basic steps to simple random sampling:

Choose your sampling frame.

Decide on your sample size. Make sure it is large enough to give you reliable data.

Randomly choose your sample participants.

You could put all their names in a hat, shake the hat to mix the names, and pull out however many names you want in your sample (without looking!)

You could be more scientific by giving each participant a number and then using a random number generator program to choose the numbers.
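As a rough illustration of that random-selection step, here is a minimal Python sketch (the sampling frame and sample size are hypothetical placeholders, not data from any real survey):

```python
import random

# Hypothetical sampling frame: the full list of people you could survey.
sampling_frame = [f"Person {i}" for i in range(1, 501)]

sample_size = 50  # decided beforehand; large enough to give reliable data

# random.sample draws without replacement, so nobody can be picked twice --
# the programmatic equivalent of pulling names out of a hat.
simple_random_sample = random.sample(sampling_frame, sample_size)
print(simple_random_sample[:5])
```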

Systematic sampling

Instead of choosing names at random, you decide beforehand on a selection rule. For example, collect all the names in your sampling frame, start at the fifth person on the list, and then choose every fourth name or every tenth name. Alternatively, you could choose everyone whose last name begins with randomly selected initials, such as A, G, or W.

Choose your system of selecting names, and away you go.
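A minimal sketch of the selection rule just described (start at the fifth name, then take every fourth), again using a hypothetical list of names:

```python
# Hypothetical sampling frame.
sampling_frame = [f"Person {i}" for i in range(1, 101)]

def systematic_sample(frame, start_index, step):
    """Return every `step`-th member of the frame, beginning at `start_index` (0-based)."""
    return frame[start_index::step]

# Start at the fifth person (index 4) and take every fourth name after that.
chosen = systematic_sample(sampling_frame, start_index=4, step=4)
print(chosen)
```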

Stratified sampling

This is a more sophisticated way to choose your sample. You break the sampling frame down into important subgroups, or strata . Then, decide how many people you want in your sample, and choose an equal number (or a proportionate number) from each subgroup.

For example, you want to survey how many people in a geographic area buy candy, so you compile a list of everyone in that area. You then break that list down into, for example, males and females, then into pre-teens, teenagers, young adults, senior citizens, etc. who are male or female.

So, if there are 1,000 young male adults and 2,000 young female adults in the whole sampling frame, you may want to choose 100 males and 200 females to keep the proportions balanced. You then choose the individual survey participants through the systematic sampling method.
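A minimal sketch of that proportionate selection, assuming two hypothetical strata of 1,000 young men and 2,000 young women and a 10% sampling fraction:

```python
import random

# Hypothetical strata: subgroup name -> members of that subgroup.
strata = {
    "young adult men": [f"M{i}" for i in range(1000)],
    "young adult women": [f"F{i}" for i in range(2000)],
}

sampling_fraction = 0.10  # keeps proportions balanced: 100 men, 200 women

# Draw the same fraction from every stratum, without replacement.
stratified_sample = {
    name: random.sample(members, int(len(members) * sampling_fraction))
    for name, members in strata.items()
}
print({name: len(chosen) for name, chosen in stratified_sample.items()})
```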

Cluster sampling

This method is used when you want to subdivide a sample into smaller groups or clusters that are geographically or organizationally related.

Let’s say you’re doing quantitative research into candy sales. You could choose your sample participants from urban, suburban, or rural populations. This would give you three geographic clusters from which to select your participants.

Multi-stage sampling

This is a more refined way of doing cluster sampling. Let’s say you have your urban cluster, which is your primary sampling unit. You can subdivide this into a secondary sampling unit, say, participants who typically buy their candy in supermarkets. You could then further subdivide this group into your ultimate sampling unit. Finally, you select the actual survey participants from that unit.
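A minimal sketch of the multi-stage idea, with hypothetical clusters (geographic areas) and sub-clusters (typical candy-buying locations); the participant IDs are placeholders:

```python
import random

# Hypothetical two-stage structure: primary sampling units (areas) containing
# secondary sampling units (typical candy-buying locations) with participants.
population = {
    "urban":    {"supermarket": ["U-S1", "U-S2", "U-S3", "U-S4"], "gas station": ["U-G1", "U-G2"]},
    "suburban": {"supermarket": ["S-S1", "S-S2"],                 "gas station": ["S-G1", "S-G2", "S-G3"]},
    "rural":    {"supermarket": ["R-S1", "R-S2"],                 "gas station": ["R-G1"]},
}

# Stage 1: choose a primary sampling unit (a geographic cluster).
primary = random.choice(list(population))
# Stage 2: choose a secondary sampling unit within it.
secondary = random.choice(list(population[primary]))
# Final stage: draw the actual survey participants from that ultimate sampling unit.
unit = population[primary][secondary]
participants = random.sample(unit, k=min(2, len(unit)))
print(primary, secondary, participants)
```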

  • Uses of probability sampling

Probability sampling has three main advantages:

It helps minimize the likelihood of sampling bias. How you choose your sample determines the quality of your results. Probability sampling gives you an unbiased, randomly selected sample of your target market.

It allows you to create representative samples and subgroups within a sample out of a large or diverse target market.

It lets you use sophisticated statistical methods to select as close to perfect samples as possible.

  • Non-probability sampling methods

To recap, with non-probability sampling, you choose people for your sample in a non-random way, so not everyone in your sampling frame has an equal chance of being chosen. Your research findings, therefore, may not be as representative as those from probability sampling, but you may not want them to be.

Sampling bias is not a concern if all potential survey participants share similar traits. For example, you may want to specifically focus on young male adults who spend more than others on candy. In addition, it is usually a cheaper and quicker method because you don't have to work out a complex selection system that represents the entire population in that community.

Researchers do need to carefully consider the strengths and limitations of each method before selecting a sampling technique.

Non-probability sampling is best for exploratory research , such as at the beginning of a research project.

There are five main types of non-probability sampling methods:

Convenience sampling

Purposive sampling

Voluntary response sampling

Snowball sampling

Quota sampling

Convenience sampling

The strategy of convenience sampling is to choose your sample quickly and efficiently, using the least effort, usually to save money.

Let's say you want to survey the opinions of 100 millennials about a particular topic. You could send out a questionnaire over the social media platforms millennials use. Ask respondents to confirm their birth year at the top of their response sheet and, when you have your 100 responses, begin your analysis. Or you could visit restaurants and bars where millennials spend their evenings and sign people up.

A drawback of convenience sampling is that it may not yield results that apply to a broader population.

Purposive sampling

This method relies on your judgment to choose the sample most likely to deliver the most useful results. You must know enough about the survey goals and the sampling frame to choose the most appropriate respondents.

Your knowledge and experience save you time because you know your ideal sample candidates, so you should get high-quality results.

Voluntary response sampling

This method is similar to convenience sampling, but it is based on potential sample members volunteering rather than you looking for people.

You make it known you want to do a survey on a particular topic for a particular reason and wait until enough people volunteer. Then you give them the questionnaire or arrange interviews to ask your questions directly.

Snowball sampling

Snowball sampling involves asking selected participants to refer others who may qualify for the survey. This method is best used when there is no sampling frame available. It is also useful when the researcher doesn’t know much about the target population.

Let's say you want to research a niche topic that involves people who may be difficult to locate. For our candy example, this could be young males who buy a lot of candy, go rock climbing during the day, and watch adventure movies at night. You ask each participant to name others they know who do the same things, so you can contact them. As you make contact with more people, your sample 'snowballs' until you have all the names you need.

Quota sampling

This sampling method involves collecting a specific number of units (quotas) from each of your predetermined subpopulations. Quota sampling is a way of ensuring that your sample accurately represents the sampling frame.
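A minimal sketch of quota sampling, assuming hypothetical age-group quotas; respondents are accepted one by one (for example, as they are approached in a public place) until each quota is filled:

```python
from collections import defaultdict

# Hypothetical quotas for predetermined subpopulations.
quotas = {"18-29": 30, "30-49": 40, "50+": 30}
collected = defaultdict(list)

def try_add(respondent):
    """Accept a respondent only if their subgroup still has room under its quota."""
    group = respondent["age_group"]
    if group in quotas and len(collected[group]) < quotas[group]:
        collected[group].append(respondent)
        return True
    return False  # quota already filled, or group not targeted

def quotas_full():
    """True once every subgroup quota has been met."""
    return all(len(collected[g]) >= q for g, q in quotas.items())

# Example: one respondent arriving.
try_add({"name": "A", "age_group": "18-29"})
```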

  • Uses of non-probability sampling

You can use non-probability sampling when you:

Want to do a quick test to see if a more detailed and sophisticated survey may be worthwhile

Want to explore an idea to see if it 'has legs'

Launch a pilot study

Do some initial qualitative research

Have little time or money available (half a loaf is better than no bread at all)

Want to see if the initial results will help you justify a longer, more detailed, and more expensive research project

  • The main types of sampling bias, and how to avoid them

Sampling bias can distort or limit your research results. This will have an impact when you generalize your results across the whole target market. The two main causes of sampling bias are faulty research design and poor data collection or recording. They can affect both probability and non-probability sampling.

Faulty research

If a surveyor chooses participants inappropriately, the results will not reflect the population as a whole.

A famous example is the 1948 US presidential race. Pollsters relied on quota sampling and left interviewers free to choose which people to approach within each quota; the respondents they reached skewed toward groups whose opinions differed from voters as a whole (a related failure had already occurred in 1936, when a poll based on telephone and automobile-owner lists over-represented wealthier households). The polls implied Dewey would win, but it was Truman who became president.

Poor data collection or recording

This problem speaks for itself. The survey may be well structured, the sample groups appropriate, the questions clear and easy to understand, and the cluster sizes suitable. But if surveyors check the wrong boxes when they record an answer, or if an entire subgroup's results are lost, the survey results will be biased.

How do you minimize bias in sampling?

 To get results you can rely on, you must:

Know enough about your target market

Choose one or more sample surveys to cover the whole target market properly

Choose enough people in each sample so your results mirror your target market

Have content validity . This means your questions must actually cover the topic you intend to measure, and they should be directly and efficiently worded. If they aren’t, the validity of your survey could be questioned. That would also be a waste of time and money, so make the wording of your questions a top focus.

If using probability sampling, make sure your sampling frame includes everyone it should and that your random sampling selection process includes the right proportion of the subgroups

If using non-probability sampling, focus on fairness, equality, and completeness in identifying your samples and subgroups. Then balance those criteria against simple convenience or other relevant factors.

What are the five types of sampling bias?

Self-selection bias. If you mass-mail questionnaires to everyone in the sample, you’re more likely to get results from people with extrovert or activist personalities and not from introverts or pragmatists. So if your convenience sampling focuses on getting your quota responses quickly, it may be skewed.

Non-response bias. Unhappy customers, stressed-out employees, or other sub-groups may not want to cooperate or they may pull out early.

Undercoverage bias. If your survey is done, say, via email or social media platforms, it will miss people without internet access, such as those living in rural areas, the elderly, or lower-income groups.

Survivorship bias. Unsuccessful people are less likely to take part. Another example may be a researcher excluding results that don’t support the overall goal. If the CEO wants to tell the shareholders about a successful product or project at the AGM, some less positive survey results may go “missing” (to take an extreme example.) The result is that your data will reflect an overly optimistic representation of the truth.

Pre-screening bias. If the researcher, whose experience and knowledge are being used to pre-select respondents in a judgmental sampling, focuses more on convenience than judgment, the results may be compromised.

How do you minimize sampling bias?

Focus on the bullet points listed above and:

Make survey questionnaires as direct, easy, short, and available as possible, so participants are more likely to complete them accurately and send them back

Follow up with the people who have been selected but have not returned their responses

Ignore any pressure that may produce bias

  • How do you decide on the type of sampling to use?

Use the ideas you've gleaned from this article to give yourself a platform, then choose the best method to meet your goals while staying within your time and cost limits.

If it isn't obvious which method you should choose, use this strategy:

Clarify your research goals

Clarify how accurate your research results must be to reach your goals

Evaluate your goals against time and budget

List the two or three most obvious sampling methods that will work for you

Confirm the availability of your resources (researchers, computer time, etc.)

Compare each of the possible methods with your goals, accuracy, precision, resource, time, and cost constraints

Make your decision

  • The takeaway

Effective market research is the basis of successful marketing, advertising, and future productivity. By selecting the most appropriate sampling methods, you will collect the most useful market data and make the most effective decisions.


  • Open access
  • Published: 27 August 2024

Experience Sampling as a dietary assessment method: a scoping review towards implementation

Joke Verbeke & Christophe Matthys

International Journal of Behavioral Nutrition and Physical Activity, volume 21, Article number: 94 (2024)


Accurate and feasible assessment of dietary intake remains challenging for research and healthcare. Experience Sampling Methodology (ESM) is a real-time real-life data capturing method with low burden and good feasibility not yet fully explored as alternative dietary assessment method.

This scoping review is the first to explore the implementation of ESM as an alternative to traditional dietary assessment methods by mapping the methodological considerations to apply ESM and formulating recommendations to develop an Experience Sampling-based Dietary Assessment Method (ESDAM). The scoping review methodology framework was followed by searching PubMed (including OVID) and Web of Science from 2012 until 2024.

Screening of 646 articles resulted in 39 included articles describing 24 studies. ESM was mostly applied for qualitative dietary assessment (i.e. type of consumed foods) ( n  = 12), next to semi-quantitative dietary assessment (i.e. frequency of consumption, no portion size) ( n  = 7), and quantitative dietary assessment (i.e. type and portion size of consumed foods) ( n  = 5). Most studies used ESM to assess the intake of selected foods. Two studies applied ESM as an alternative to traditional dietary assessment methods assessing total dietary intake quantitatively (i.e. all food groups). ESM duration ranged from 4 to 30 days and most studies applied ESM for 7 days ( n  = 15). Sampling schedules were mostly semi-random ( n  = 12) or fixed ( n  = 9) with prompts starting at 8–10 AM and ending at 8–12 PM. ESM questionnaires were adapted from existing questionnaires, based on food consumption data or focus group discussions, and response options were mostly presented as multiple-choice. The recall period to report dietary intake in ESM prompts varied from 15 min to 3.5 h.

Conclusions

Most studies used ESM for 7 days with fixed or semi-random sampling during waking hours and 2-h recall periods. An ESDAM can be developed starting from a food record approach (actual intake) or a validated food frequency questionnaire (long-term or habitual intake). Actual dietary intake can be measured by ESM through short intensive fixed sampling schedules while habitual dietary intake measurement by ESM allows for longer less frequent semi-random sampling schedules. ESM sampling protocols should be developed carefully to optimize feasibility and accuracy of dietary data.

Research on health and nutrition relies on accurate assessment of dietary intake [ 1 ]. However, dietary intake is a complex exposure variable with high inter- and intra-individual variability, consisting of different components ranging from micronutrients, macronutrients, food groups, and meals to the dietary pattern as a whole. Therefore, measuring dietary intake accurately and feasibly is challenging for both researchers and healthcare professionals [ 2 , 3 , 4 ]. Only a few established nutritional biomarkers are available and, therefore, no objective method exists to reflect true dietary intake or the dietary pattern as a whole in epidemiological research [ 2 , 3 ]. Instead, most dietary assessment methods rely on self-report. Food records, referred to as the “gold standard”, together with 24-h dietary recalls provide the most detailed dietary data, while Food Frequency Questionnaires (FFQs) reflect habitual (i.e. long-term usual) dietary intake, which is the variable of interest in most diet-disease research [ 4 , 5 , 6 ]. Food records, 24-h dietary recalls, and FFQs have known limitations and challenges, including recall bias, social-desirability bias, misreporting, and burdensomeness, contributing to inherent measurement error in dietary intake data [ 2 , 6 ]. A review by Kirkpatrick et al. showed that feasibility, including cost-effectiveness and ease of use, rather than appropriateness for the study design and purpose, is the main determinant for researchers selecting a dietary assessment method, at the expense of data quality and accuracy [ 7 ]. To advance nutritional research and enhance the quality of dietary data, exploring the implementation of new methodologies is warranted to improve feasibility and overcome the limitations of current dietary assessment methods.

Experience Sampling Methodology (ESM), an umbrella term including Ecological Momentary Assessment (EMA), ambulatory assessment, and the structured diary method, refers to intensive longitudinal assessment and real-time data-capturing methods [ 8 ]. Participants are asked to respond to short questions sent through smartphone prompt messages or beeps at random moments during the day to assess experiences or behaviors and moment-to-moment changes in daily life [ 9 ]. Originating from the field of psychology and behavioral sciences, ESM typically assesses current mood, cognitions, perceptions, or behaviors and descriptors of the momentary context (i.e. location, company) [ 9 ]. Usually, assessments are collected under a random time-sampling protocol, yet they can also be triggered by an event (event-contingent sampling), collected at fixed time points, or collected at random within fixed time intervals (semi-random). ESM questionnaires are usually designed to be completed in under 2 min and consist of open-ended questions, visual analogue scales, checklists, or self-report Likert scales. Several ESM survey applications (i.e. m-Path, PsyMate, PocketQ) are currently available in which the sampling protocol and questionnaires can be customized to the study design and aim [ 10 , 11 ]. ESM has been shown to reduce recall bias, reactivity bias, and misreporting in psychology and behavioral research by its design of unannounced, rapid, real-life, real-time repeated assessments [ 12 ]. For this reason, Experience Sampling might be an interesting new methodology to explore as an alternative dietary assessment methodology. The design of ESM could overcome the recall bias, reactivity bias, social-desirability bias, and misreporting seen in traditional dietary assessment methods. However, the application of ESM for dietary assessment is new. Defining and balancing ESM methodological considerations, i.e. study duration, frequency and timing of sampling (signaling technique), and formulation of questions and answer options, is a delicate matter and crucial in balancing feasibility with data accuracy [ 13 ].

The application of ESM in the field of dietary assessment has not been fully explored yet. Schembre et al . reviewed ESM for dietary behavior for the first time [ 12 ]. However, it has not yet been assessed how ESM could be implemented as an alternative dietary assessment method aiming to estimate daily energy, nutrient, and food group intake quantitatively.

Therefore, this scoping review investigates how Experience Sampling Methodology can be implemented to develop an Experience Sampling-based dietary assessment method as an alternative to traditional dietary assessment methods to measure daily energy, nutrient, and food group intake quantitatively. This review aims to map ESM sampling protocols and questionnaire designs used to assess dietary intake. Additionally, the findings of this review will be combined with best practices to develop ESMs and dietary assessment methods to formulate key recommendations for the development of an Experience Sampling-based Dietary Assessment Method (ESDAM). The following questions will be answered:

How is ESM applied in literature to assess dietary intake - focusing on methodological considerations (i.e. development and formulation of questions and answers, selection and consideration of prompting schedule (timing and frequency))?

How can ESM specifically be applied for quantitative assessment of total dietary intake (i.e. as an alternative to traditional dietary assessment method)?

This scoping review followed the methodological framework for scoping reviews of Arksey and O’Malley which was further developed by Levac et al. [ 14 , 15 ]. A scoping review approach was chosen to explore and map the design aspects and considerations for developing experience sampling methods to assess dietary intake as an alternative to traditional dietary assessment methods, which is novel. Moreover, this review will formulate design recommendations to apply ESM as a dietary assessment method and will serve as starting point to develop an ESDAM. An a priori protocol was developed based on the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) and the Joanna Briggs Institute Scoping Review protocol template (Supplementary Material) [ 16 , 17 ]. According to Arksey and O’Malley methodological framework, the iterative nature of scoping reviews may include further refinement of the search strategy and the inclusion and exclusion criteria during the initial review process due to the unknown breadth of the topic [ 14 ]. Therefore, adaptations made to the methodology described in the a priori protocol based on initial searches are described below. This scoping review was reported according to the PRISMA extension for scoping reviews (PRISMA-ScR) [ 18 ].

Search strategy and screening

The search strategy was developed based on key words and MeSH terms for “dietary assessment” and “experience sampling” (Supplementary Material). The term “ecological momentary assessment” was included as a synonym of ESM. The electronic databases PubMed (including MEDLINE) and Web of Science were searched for relevant literature published between January 2012 and February 9th 2024. The year 2012 was chosen as the lower limit for inclusion since this review focuses on the use of ESM via digital tools (i.e. smartphones, web-based or mobile applications), which have emerged especially since the introduction of smartphone applications in 2008. Therefore, the time frame of this review is focused on literature published in the last 12 years. The reference lists of all included articles were screened for additional studies.

The initial search strategy described in the protocol was developed based on the assumption that research using ESM as an alternative to traditional dietary assessment was limited. Therefore, initially, research using ESM in the broader field of health research was included to obtain more evidence on methodological considerations of application of ESM. In line with the Arksey and O’Malley methodological framework, inclusion criteria were adapted following initial searches along with discussion and consensus between the reviewer (JV) and principal investigator (CM). Therefore, inclusion criteria were adapted to research applying ESM to measure dietary intake quantitatively or qualitatively since literature was also available in the field of dietary behaviour in relation to contextual factors (Table  1 ). Studies measuring dietary behaviour (i.e. cravings, hunger, eating disorder behaviour, dietary lapses) only, without assessing dietary intake, were excluded. Event-based ESM as dietary assessment method was excluded since this was deemed a similar methodology as the food record and, therefore, not serving the purpose of this review to explore a new methodology for dietary assessment to overcome limitations of traditional dietary assessment methods. All inclusion and exclusion criteria are presented in Table  1 .

All records were exported and uploaded into the review software Rayyan. Duplicates were identified through the software followed by a manual screening of the reviewer for confirmation and removal of duplicates. One reviewer (JV) screened the retrieved articles first by title and abstract followed by a full text screening [ 19 , 20 , 21 ]. In case of hesitancy on inclusion of articles, the reviewer (JV) consulted the principal investigator (CM) to reach consensus. In line with established scoping review methods, methodological quality assessment was not performed [ 14 , 18 ]. Since this review aims to shed light on design aspects and considerations of ESM and, thus focuses on the application of the methodology used in the articles rather than the study outcome, quality assessment was considered not relevant for this purpose.

Data extraction

Data were extracted in an Excel table describing the authors, title, year of publication, signaling technique, timing of prompts, study duration, dietary variables measured, answer window, (formulation of) questions, response options, notification method, indication of qualitative or quantitative dietary assessment, delivery method, population and study name. All data were described qualitatively. Studies applying ESM for dietary assessment were categorized in separate tables for ESM used for qualitative dietary assessment (i.e. assessment of type of foods consumed without portion size, not allowing estimation of nutrient intake), ESM used for semi-quantitative dietary assessment (i.e. assessment of type of foods or frequency of consumption of foods, not allowing estimation of nutrient intake), and ESM used for quantitative dietary assessment (i.e. assessment of type of foods consumed and portion size, allowing estimation of nutrient intake).

Literature search and study characteristics

The electronic databases search resulted in 701 articles of which 55 duplicates were identified and removed. Next, 646 articles were screened by title and abstract of which 591 were excluded according to the exclusion criteria (Fig.  1 ). The remaining 55 articles were screened by full text. After exclusion of 16 articles following full text screening, 39 articles were selected for inclusion (Table  2 ). The included articles describe 24 individual studies of which the Mother’s and Their Children’s Health (MATCH) study was described most frequently ( n  = 12, 25%). Most studies were published in 2018 ( n  = 7), followed by 2020 ( n  = 6) and 2022 ( n  = 6). Students, including both high school and higher education students, were the study population in most EMA or ESM studies included ( n  = 10, 43%). Two studies applied the ESM methodology to assess dietary behaviour including dietary variables of children with mothers as proxy. Five studies referred to their methodology using the terminology ‘ESM’ while the other studies used ‘EMA’ as terminology.

Figure 1. PRISMA flow diagram of the screening and selection process

Application of ESM for dietary assessment in literature

Dietary variables measured through ESM

Most studies assessed consumption of specific foods only [ 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 42 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 ]. Tables 2, 3 and 4 provide an overview of the included studies described in the manuscripts, with a description of specific ESM methodology characteristics according to qualitative, semi-quantitative and quantitative dietary assessment respectively. Four studies used ESM to assess snack consumption [ 45 , 46 , 47 , 48 , 49 , 50 , 51 ]. Four studies focused on snack and sugar sweetened beverage (SSB) consumption only [ 22 , 36 , 44 , 52 , 53 ]. Piontak et al. applied ESM to assess unhealthy food consumption, including fast food, caffeinated drinks and not consuming any fruit or vegetables [ 35 ]. Two studies focused on palatable food consumption, of which the study of Cummings et al. assessed palatable food consumption together with highly processed food intake [ 37 , 54 ]. Lin et al. applied ESM to measure empty calorie food and beverage consumption, while Boronat et al. assessed Mediterranean diet food consumption [ 39 , 55 ]. Two studies assessed the occurrence of food consumption only, without assessing the type of foods consumed [ 40 , 41 ]. The study of de Rivaz et al. assessed the largest type of meal consumed in between signals [ 56 ]. Three studies aimed to assess total dietary intake, of which the study of Lucassen et al. evaluated approaches to assess both actual and habitual dietary intake using ESM [ 43 , 57 , 58 , 59 ].

Qualitative versus quantitative dietary assessment through ESM

As shown in Table 2, twelve studies performed qualitative dietary assessment (i.e. assessing type of foods consumed without quantification). Seven studies performed semi-quantitative dietary assessment (i.e. assessing frequency of meals/eating occasions or number of servings of food categories, not allowing nutrient calculation) [ 44 , 49 , 50 , 52 , 53 , 54 , 55 , 56 ] (Table 3). Quantitative dietary assessment, in line with the aim of traditional dietary assessment methods (i.e. assessment of both type and quantity of foods consumed, allowing estimation of nutrient intake), was performed in four studies, of which Wouters et al. and Richard et al. assessed snack intake only while Jeffers et al. and Lucassen et al. assessed overall dietary intake (i.e. all food groups) [ 45 , 46 , 47 , 48 , 51 , 57 , 58 ] (Table 4).

Study duration, ESM timing and signaling technique

Study duration of ESM dietary assessment varied from four to thirty days, and most studies (n = 15) used a duration of seven days. The study of Piontak et al. had the longest duration of 30 days of ESM assessment [ 35 ]. The semi-random sampling scheme (i.e. random sampling within multiple fixed time intervals) was applied most frequently (n = 12), followed by the fixed sampling scheme (i.e. sampling at fixed times) (n = 9). Random sampling (i.e. completely random sampling) was chosen in three studies [ 34 , 36 , 55 ]. A mixed sampling approach was applied in three studies, of which Lucassen et al. tested and compared both a fixed sampling and a semi-random sampling approach to assess overall dietary intake [ 22 , 42 , 57 , 59 ]. Two studies applied different sampling schemes during the weekend compared to weekdays [ 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 ]. Sampling time windows were adapted to the daily structure of the study population, i.e. shifts of shift-workers, school hours of students or (self-reported) waking hours (Table 2). The sampling time window of the included studies started between 6 and 10 AM and ended between 8 PM and midnight. One study applied a 24-h sampling time window since the study population were nurses working in shifts [ 39 ].

Formulation of ESM questions

Different types of questions and phrasings can be identified in the studies using ESM for dietary assessment. Two studies used indirect phrasing (i.e. ‘What were you doing?’) followed by multiple-choice answer options such as physical activity, eating, or rest [ 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 ]. Seven studies used direct phrasing (i.e. ‘Did you eat?’), applied both as real-time prompts (i.e. ‘Were you eating or drinking anything – in this moment?’) and as retrospective prompts (i.e. ‘Did you eat anything since the last signal?’) without specifying particular foods [ 22 , 38 , 40 , 41 , 45 , 46 , 47 , 48 , 56 , 58 ]. Thirteen studies used direct and specific phrasing regarding consumption of specified foods (i.e. ‘Did you eat any snacks or sugar sweetened beverages since the last signal?’) [ 35 , 36 , 37 , 39 , 43 , 44 , 50 , 51 , 52 , 53 , 54 , 55 , 57 ]. The time period in retrospective prompts with direct phrasing varied. Ten studies assessed consumption since the last signal, three studies assessed the past 2 h, and one study each assessed the preceding 15 min, 1 h, 2.5 h, 3 h, and 3.5 h [ 41 , 42 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 56 ]. The MATCH study used two different retrospective time periods: the first prompt of the day asked participants to report intake since waking up, and the following prompts asked about the last 2 h [ 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 ]. Forman et al. used prompts that requested snack intake between the last prompt of the previous day and falling asleep, and between waking up and receiving the first prompt [ 49 ]. The study of Bruening et al. combined real-time prompts, asking participants to report what they were doing the moment before receiving the prompt, with retrospective prompts asking what they were doing during the past 3 h [ 34 ].

Formulation of ESM response options

Binary (i.e. yes or no) response options were provided in eleven studies, and five studies followed these with an open field, a built-in search function, or multiple-choice bullets to specify the type of food or drinks consumed [ 22 , 35 , 37 , 38 , 40 , 41 , 42 , 45 , 46 , 47 , 48 , 52 , 53 , 56 , 58 ]. Food lists shown as response options to indicate food consumption were based on National Health Surveys, validated Food Frequency Questionnaires, other validated questionnaires, the National Food Composition Database, or results from focus group discussions. Eight studies asked participants to indicate quantities of the foods consumed via an open field (i.e. in grams or milliliters), Visual Analog Scale (VAS) sliders (i.e. from zero to 100), or multiple-choice options (i.e. small, medium, large) [ 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 54 , 56 , 57 ].

This review reveals that ESM has been applied to assess dietary intake in various research settings using different design approaches. However, most studies assessed consumption of specific foods only, focusing on the foods of interest related to the research question. Snack consumption and, more generally, unhealthy foods were the foods whose consumption ESM was most often used to measure. Due to its momentary nature, ESM may be especially suitable for measuring these specific foods, which are often (unconsciously) missed or underreported with traditional dietary assessment methods. Findings from our review show that ESM applied to assess dietary intake shares features of both 24-h dietary recalls (24HRs) and food frequency questionnaires (FFQs). Aside from the recall-based reporting and multiple-choice assessment of specific foods, found in 24HRs and FFQs respectively, ESM is a new methodology compared to traditional dietary assessment methods. Our review indicates that ESM also lends itself well to assessing total dietary intake quantitatively, although this application has been less explored so far. Moreover, most studies using ESM for dietary assessment were behavioral science research (i.e. psychological aspects of eating behavior), which highlights the novelty of, and need for, ESM specifically designed for dietary assessment and research on diet-health associations.

Recommendations to develop an Experience Sampling-based Dietary Assessment Method

The implementation of ESM will differ depending on which health behavior is being measured and in which research field it is being applied [ 13 , 60 ]. This section describes recommendations for the methodological implementation of ESM as an alternative dietary assessment methodology to measure total dietary intake quantitatively, based on the findings of this review, the recommendations of the open handbook for ESM by Myin-Germeys et al., and practices in traditional dietary assessment development [ 13 ].

Recommendations for study duration, ESM timing and frequency

All ESM study characteristics (study duration, sampling frequency, timing, recall period) are interrelated and cannot be evaluated individually.

ESM study duration (i.e. number of days) and sampling frequency (i.e. number of prompts per day) should be reconciled and should be inversely adapted to one another (i.e. short study duration allows for higher sampling frequency per day and vice versa) to maintain low burden and good feasibility.

Our review showed that an ESM study duration of 7 days is most common; however, reporting fatigue might arise from day 4 onwards in the case of a high sampling frequency (i.e. fixed sampling every 2 h), similar to what is experienced with food records [ 61 ].

Frequency and timing of ESM prompts should be adapted to waking hours covering the typical eating episodes of the target study population. Typically, studies used waking hours from around 7 AM to 10 PM; however, a short preliminary survey can identify the actual waking hours of the target study population and allow the schedule to be adapted accordingly.

Waking hours, and consequently sampling frequency, could differ on weekend days (i.e. more frequent prompts, longer waking hours), as seen in some studies in our review. Short recall periods (i.e. the last hours or the previous day) are suggested to be better than longer recalls of weeks or months [ 62 ]. Obtaining more accurate dietary intake data, with lower recall bias and social-desirability bias because participants are less aware of being measured, requires short recall periods of 1 up to 3.5 h, with a 2-h recall most commonly applied, as demonstrated by our review. In this way, ESM allows for near real-time measurements of dietary intake.

Furthermore, study duration, sampling frequency, and timing differ depending on whether the aim is to measure actual dietary intake or habitual dietary intake, and should be adapted accordingly.

Recommendations ESM signaling technique for actual versus habitual dietary intake

Measuring actual dietary intake using an intensive prompting schedule can only be performed for short periods, preferably three to four days, due to the risk of response fatigue, as is similarly seen with food records. As demonstrated by Lucassen et al., actual intake can be measured by ESM using a fixed sampling approach that samples every time window during waking hours (i.e. sampling every 2 h between 7 AM and 10 PM on dietary intake during the past 2 h) [ 58 ].

Habitual dietary intake can be measured by ESM using a semi-random sampling approach that samples every time window during waking hours multiple times over a longer period (i.e. sampling three time windows per day on dietary intake during the past 2 h for two weeks, until every time window has been sampled three times) [ 58 ]. Measuring habitual dietary intake by ESM with a less intensive sampling frequency allows for a longer study duration (i.e. multiple weeks). Lastly, a combination of fixed and (semi-)random sampling schedules can be applied. Whether measuring actual or habitual dietary intake, it is recommended to compose a sampling schedule with time windows covering all waking hours so that all eating occasions can be sampled [ 12 ]. Additionally, the sampling schedule should cover weekend days as well as weekdays to capture the variability in dietary intake. To capture this variability further, several waves of ESM measurement periods could be implemented, alternated with no-measurement periods. On the other hand, the use of multiple waves is associated with higher dropout rates, especially with increased time between waves [ 13 ].
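As a minimal sketch of how such prompting schedules could be generated (the 7 AM to 10 PM waking window, 2-hour intervals, and 7-day duration below are illustrative assumptions taken from the patterns described above, not a prescribed protocol):

```python
import random
from datetime import date, datetime, timedelta

def prompt_times(day, start_hour=7, end_hour=22, window_hours=2, semi_random=True):
    """One day of ESM prompts: fixed sampling places a prompt at the end of every
    window; semi-random sampling places it at a random moment within each window."""
    prompts = []
    window_start = datetime(day.year, day.month, day.day, start_hour)
    day_end = datetime(day.year, day.month, day.day, end_hour)
    while window_start < day_end:
        window_end = min(window_start + timedelta(hours=window_hours), day_end)
        if semi_random:
            offset = random.uniform(0, (window_end - window_start).total_seconds())
            prompts.append(window_start + timedelta(seconds=offset))
        else:
            prompts.append(window_end)  # fixed sampling
        window_start = window_end
    return prompts

# Example: a hypothetical 7-day semi-random schedule covering all waking-hour windows.
first_day = date(2024, 1, 1)
schedule = {first_day + timedelta(days=d): prompt_times(first_day + timedelta(days=d))
            for d in range(7)}
```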

In conclusion, the ESM signaling technique, frequency, timing, recall period, and duration of sampling should be carefully adapted to one another to ensure accurate dietary intake data, low burden, and optimal feasibility. As recommended by Myin-Germeys et al., a pilot study allows all ESM design characteristics to be evaluated so that optimal data quality is obtained while the protocol remains feasible [ 13 ].

Recommendations for ESM questions and response options

Questionnaires for ESM should be carefully developed and require methodological rigor [ 63 ]. As stated by Myin-Germeys et al., there are currently no specific guidelines on how to develop questionnaires for ESM [ 63 ]. However, according to our review, most studies adapt existing questionnaires for use in ESM research. Still, few studies in our review describe methodologically which adaptations were made, or how, to fit the ESM format. First, the time frame to which each question refers should be chosen. Although ESM ideally consists of questions on momentary variables, this is less suitable for measuring dietary intake. As dietary intake does not take place continuously, momentary questions (i.e. What are you eating in this moment?) would lead to a large amount of missing data and, consequently, large measurement error in daily dietary intake estimations. Instead, time intervals lend themselves better to assessing dietary intake with ESM. The time interval to which the question refers should be clearly stated (i.e. What did you eat during the last two hours?). As mentioned previously, in the case of an interval-contingent (semi-random) ESM approach, contiguous time intervals that cover the complete waking-hour time frame (i.e. waking hours between 7 AM and 10 PM with semi-random ESM sampling in two-hour intervals) are recommended to reduce the risk of missing eating occasions [ 12 ]. Therefore, following the latter approach, it is most feasible to make the time frame to which the question refers the same as the time interval between prompts (i.e. semi-random sampling in two-hour intervals with the question ‘What did you eat since the last signal?’). The time frame to which the question refers should be chosen based on expected dietary intake events (i.e. every two or three hours) and depends on the dietary habits of the target population, which are culture specific. Myin-Germeys et al. recommend keeping questions short and to the point so that they fit the screen of the mobile device and allow for a quick response [ 63 ]. Furthermore, implicit assessments (i.e. Have you eaten since the last signal?) are recommended over explicit assessments (i.e. Did you eat fast food since the last signal?) to inhibit reactivity bias. Questionnaire length is important to consider, as it is recommended to maintain a completion time of at most three minutes to keep the burden low [ 63 ]. Although questionnaires of up to 30 items are accepted in traditional ESM research, in the field of dietary assessment this would be equivalent to a short FFQ and can be considered too burdensome when presented all at once at every prompt, reducing compliance. Moreover, ESM research in the field of psychology, where the method originated, most often uses scales (i.e. Likert scales, visual scales) as response options. Unlike many psychological variables (i.e. mood, emotions), dietary intake can be assessed quantitatively and precisely, which allows for more specific response options.

Recommendations to develop ESM sampling scheme based on FFQ or food record

Questions and response options for ESM dietary assessment could be adapted from existing questionnaires, as demonstrated in the studies of our review. In the field of dietary assessment, ESM could therefore be applied to validated dietary assessment questionnaires such as validated Food Frequency Questionnaires (FFQs) or (web-based) food records, as proposed in Fig. 2.

Figure 2. Recommendations to implement experience sampling for actual and habitual dietary assessment

Starting from the food record approach, a general open question (i.e. Did you eat anything since the last signal?) could be followed by a question to specify the consumed foods via an open-field text box or via food groups originating from a National Food Consumption Database. Portion sizes of consumed foods could be provided via an open-field text box with standard units (i.e. milliliters, grams) or common household measures (i.e. tablespoons, glasses).

Starting from the FFQ approach, food groups assessed in FFQs could be regrouped into a limited number of categories and questions reformulated to assess dietary intake in near real time when designing ESM questionnaires. Consumption of all food groups could be assessed at each prompt, or a different set of food groups could be assessed at each prompt. In the latter case, the study needs to be designed so that consumption of each food group is assessed at each interval multiple times, to account for unanswered prompts with missing data. Moreover, the ordering of questions on food group consumption needs to be considered, as specific food groups might need to be assessed in the same prompt to reduce ambiguity (i.e. fried food consumption needs to be assessed before fast food consumption to avoid response overlap). Asking the same set of questions at each prompt may feel repetitive but might reduce burden [ 63 ]. A control question can be added to detect careless responding.
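As a minimal sketch of what an FFQ-derived ESM prompt questionnaire could look like (the gate question, regrouped food-group list, portion options, and control item below are hypothetical illustrations, not items from any validated FFQ):

```python
# Hypothetical structure for a single ESM prompt; an ESM survey app would render
# these questions in order and store the timestamped responses.
esm_prompt = {
    "gate_question": {
        "text": "Have you eaten or drunk anything since the last signal?",
        "options": ["yes", "no"],  # implicit, binary gate question to limit burden
    },
    "follow_up_if_yes": [
        {
            "text": "Which of these did you consume since the last signal?",
            # Regrouped food-group list; order matters (e.g. fried food asked
            # before fast food to reduce response overlap).
            "options": ["fruit", "vegetables", "fried food", "fast food",
                        "sugar-sweetened beverages", "snacks", "other"],
            "multiple": True,
        },
        {
            "text": "How much did you consume?",
            "options": ["small", "medium", "large"],  # or an open field in grams/milliliters
        },
    ],
    # Control item to detect careless responding.
    "control_question": {
        "text": "Please select 'medium' for this item.",
        "options": ["small", "medium", "large"],
    },
}
print(esm_prompt["gate_question"]["text"])
```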

Application of ESM as alternative dietary assessment method in literature

Most studies used ESM to measure food consumption qualitatively (i.e. type of foods consumed) or semi-quantitatively (i.e. frequency of consumption of specific foods) rather than quantitatively (i.e. type and quantity of foods consumed), which would serve the same purpose as traditional dietary assessment methods. Questions were most often formulated using direct phrasing and asked about consumption of specific foods since the last signal. Answers were most often binary (i.e. yes/no indicating consumption of specific foods since the last signal), combined with options to specify the type and/or frequency or amount of foods consumed. Only the studies of Jeffers et al. and Lucassen et al. applied ESM to measure total dietary intake quantitatively, of which Lucassen et al. evaluated ESM specifically as an alternative methodology for dietary assessment [ 57 , 58 ].

Although both event-contingent and signal-contingent approaches are being used for dietary assessment, signal-contingent ESM approaches might provide promising opportunities to overcome the limitations and biases of traditional dietary assessment methods [ 12 ]. The near real-time data collection combined with (semi-)random sampling shows potential to reduce the burden for the participant, both through its low registration intensity and through its shorter questions with easy response options. Moreover, the (semi-)random sampling technique might make the participant less aware of being measured, possibly resulting in lower social-desirability bias and, together with the short recall period, more accurate data. In combination with modern technology such as mobile applications, feasibility could be enhanced as well. Adapting questions and response options from either a validated FFQ or a food record allows for relatively easy implementation of ESM as an alternative dietary assessment method for total dietary intake (i.e. all food groups). However, validity and reliability need to be evaluated in the target population, as for traditional dietary assessment methods.

The systematic review and meta-analysis of Perski et al. reviewed the use of ESM to assess five key health behaviors, including dietary behavior [ 60 ]. Similar to our findings, all four studies on dietary behavior described by Perski et al. assessed specific foods only through ESM rather than the total dietary pattern (i.e. all food groups). Moreover, Perski et al. also included event-contingent sampling approaches (i.e. registering dietary intake as it occurs). As highlighted by Schembre et al., event-contingent sampling entails limitations and biases, such as social-desirability bias and burden, similar to those of traditional dietary assessment methods [ 27 ]. This is not surprising, as event-contingent sampling can be seen as an approach similar to the traditional food record and therefore does not serve the purpose of this review, which is to define a new methodology that overcomes the limitations of current traditional dietary assessment methods. Similarly, photo-based methodologies (i.e. using images as a food diary with event-based sampling) are unlikely to overcome the limitations of traditional dietary assessment methods due to the large measurement error in estimating portion sizes and types of foods, and were for this reason excluded from our review [ 3 ]. Most importantly, the four included studies on dietary behavior in the meta-analysis of Perski et al. lacked specific details on ESM design characteristics or on the methodological implications of ESM as an alternative dietary assessment method. Still, the potential of ESM to obtain more accurate and reliable dietary data is highlighted, together with the need for proper validation.

Altogether, the lack of detail on important methodological aspects of ESM hinders drawing conclusions about common practices for implementing ESM for quantitative dietary assessment. Nevertheless, Perski et al. emphasize the need for more elaboration on methodological aspects in order to provide a summary of best practices for implementing ESM for specific health behaviors, including dietary behavior [ 60 ]. Our scoping review meets this need with key methodological recommendations for developing an experience sampling dietary assessment method for total dietary intake, alongside an elaboration of commonly applied ESM design characteristics.

Limitations and strengths

An important limitation of this scoping review, inherent to scoping reviews, is the less rigorous search strategy and screening process. This may have resulted in an incomplete overview of studies describing ESM for dietary assessment. Still, this review does not aim to assess study outcomes but rather to evaluate how ESM can be applied methodologically for dietary assessment. Therefore, its strength lies in the assessment and description of ESM approaches specifically, providing insight into their use for quantitative dietary assessment as an alternative to traditional dietary assessment methods. To our knowledge, this has only been done previously by Schembre et al. [ 12 ]. However, our scoping review is, to our knowledge, the first to describe practical recommendations for developing an ESM for total dietary assessment (i.e. all food groups). Additionally, only two studies were identified that applied ESM for total dietary assessment. Consequently, limited evidence-based information was available in the literature on the development of ESM characteristics (prompting schedule, duration, questionnaire design) for quantitative dietary assessment of total dietary intake. Nevertheless, studies on qualitative and semi-quantitative dietary assessment using ESM were described and form, together with the guidelines of Myin-Germeys et al., the basis of practical guidelines for designing an ESM protocol for quantitative dietary assessment of total dietary intake. To our knowledge, this review is the first to discuss recommendations on the implementation of ESM for quantitative dietary assessment as an alternative to traditional dietary assessment methods.

Conclusion

This review shows that ESM is increasingly being applied in research to measure dietary intake. However, few studies have applied ESM to assess total dietary intake quantitatively for the same purpose as traditional dietary assessment methods. Still, the methodological characteristics of ESM show promising possibilities for overcoming the limitations of the classic dietary assessment methods. Based on recent literature and theoretical background, this paper provides guidance and a starting point for the development of an Experience Sampling Dietary Assessment Method to assess total dietary intake quantitatively. Thorough evaluation and validation studies are needed to test the full potential of ESM as a feasible and accurate alternative to traditional dietary assessment methods.

Availability of data and materials

The data that support the findings of this manuscript are available from the corresponding author upon reasonable request. The review protocol can be downloaded from the KU Leuven repository.

Abbreviations

  • Ecological Momentary Assessment
  • Experience Sampling-based Dietary Assessment Method
  • Experience Sampling Method
  • Food Frequency Questionnaire
  • Mothers' and Their Children's Health
  • Preferred Reporting Items for Systematic review and Meta-Analysis Protocols
  • Preferred Reporting Items for Systematic review and Meta-Analysis extension for scoping reviews
  • Sugar Sweetened Beverages
  • Visual Analog Scale

References

Hebert JR, Hurley TG, Steck SE, Miller DR, Tabung FK, Peterson KE, et al. Considering the value of dietary assessment data in informing nutrition-related health policy. Adv Nutr. 2014;5(4):447–55.

Liang S, Nasir RF, Bell-Anderson KS, Toniutti CA, O’Leary FM, Skilton MR. Biomarkers of dietary patterns: a systematic review of randomized controlled trials. Nutr Rev. 2022;80(8):1856–95.

Bingham S, Carroll RJ, Day NE, Ferrari P, Freedman L, Kipnis V, et al. Bias in dietary-report instruments and its implications for nutritional epidemiology. Public Health Nutr. 2002;5(6a):915–23.

Kirkpatrick SI, Baranowski T, Subar AF, Tooze JA, Frongillo EA. Best Practices for Conducting and Interpreting Studies to Validate Self-Report Dietary Assessment Methods. J Acad Nutr Diet. 2019;119(11):1801–16.

Bennett DA, Landry D, Little J, Minelli C. Systematic review of statistical approaches to quantify, or correct for, measurement error in a continuous exposure in nutritional epidemiology. BMC Med Res Methodol. 2017;17(1):146.

Satija A, Yu E, Willett WC, Hu FB. Understanding nutritional epidemiology and its role in policy. Adv Nutr. 2015;6(1):5–18.

Kirkpatrick SI, Reedy J, Butler EN, Dodd KW, Subar AF, Thompson FE, et al. Dietary assessment in food environment research: a systematic review. Am J Prev Med. 2014;46(1):94–102.

Stone AA, Shiffman S, Atienza AA, Nebeling L, editors. The Science of Real-Time Data Capture: Self-Reports in Health Research. Oxford University Press; 2007. Available from: https://doi.org/10.1093/oso/9780195178715.001.0001.

Verhagen SJ, Hasmi L, Drukker M, van Os J, Delespaul PA. Use of the experience sampling method in the context of clinical trials. Evid Based Ment Health. 2016;19(3):86–9.

Csikszentmihalyi M. Handbook of research methods for studying daily life: Guilford Press; 2011.

Mestdagh M, Verdonck S, Piot M, Niemeijer K, Kilani G, Tuerlinckx F, et al. m-Path: an easy-to-use and highly tailorable platform for ecological momentary assessment and intervention in behavioral research and clinical practice. Front Digit Health. 2023;5:1182175.

Schembre SM, Liao Y, O’Connor SG, Hingle MD, Shen SE, Hamoy KG, et al. Mobile Ecological Momentary Diet Assessment Methods for Behavioral Research: Systematic Review. JMIR Mhealth Uhealth. 2018;6(11): e11170.

Dejonckheere E, Erbas Y. Designing an experience sampling study. In: Myin-Germeys I, Kuppens P, editors. The open handbook of experience sampling methodology: a step-by-step guide to designing, conducting, and analyzing ESM studies. Center for Research on Experience Sampling and Ambulatory Methods Leuven; 2021. p. 33–70.

Arksey H, O’Malley L. Scoping Studies: Towards a Methodological Framework. International Journal of Social Research Methodology: Theory & Practice. 2005;8:19–32.

Levac D, Colquhoun H, O’Brien KK. Scoping studies: advancing the methodology. Implement Sci. 2010;5(1):69.

Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1):1.

JBI Scoping Review Network. Resources [cited 2022 Oct 28]. Available from: https://jbi.global/scoping-review-network/resources.

Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann Intern Med. 2018;169(7):467–73.

Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev. 2012;1(1):10.

Khangura S, Polisena J, Clifford TJ, Farrah K, Kamel C. Rapid review: an emerging approach to evidence synthesis in health technology assessment. Int J Technol Assess Health Care. 2014;30(1):20–7.

Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5(1):56.

Grenard JL, Stacy AW, Shiffman S, Baraldi AN, MacKinnon DP, Lockhart G, et al. Sweetened drink and snacking cues in adolescents: a study using ecological momentary assessment. Appetite. 2013;67:61–73.

Dunton GF, Dzubur E, Huh J, Belcher BR, Maher JP, O’Connor S, et al. Daily Associations of Stress and Eating in Mother-Child Dyads. Health Educ Behav. 2017;44(3):365–9.

Dunton GF, Liao Y, Dzubur E, Leventhal AM, Huh J, Gruenewald T, et al. Investigating within-day and longitudinal effects of maternal stress on children’s physical activity, dietary intake, and body composition: Protocol for the MATCH study. Contemp Clin Trials. 2015;43:142–54.

O’Connor SG, Ke W, Dzubur E, Schembre S, Dunton GF. Concordance and predictors of concordance of children’s dietary intake as reported via ecological momentary assessment and 24 h recall. Public Health Nutr. 2018;21(6):1019–27.

O’Connor SG, Koprowski C, Dzubur E, Leventhal AM, Huh J, Dunton GF. Differences in Mothers’ and Children’s Dietary Intake during Physical and Sedentary Activities: An Ecological Momentary Assessment Study. J Acad Nutr Diet. 2017;117(8):1265–71.

Liao Y, Schembre SM, O’Connor SG, Belcher BR, Maher JP, Dzubur E, et al. An Electronic Ecological Momentary Assessment Study to Examine the Consumption of High-Fat/High-Sugar Foods, Fruits/Vegetables, and Affective States Among Women. J Nutr Educ Behav. 2018;50(6):626–31.

Mason TB, Naya CH, Schembre SM, Smith KE, Dunton GF. Internalizing symptoms modulate real-world affective response to sweet food and drinks in children. Behav Res Ther. 2020;135: 103753.

Mason TB, O’Connor SG, Schembre SM, Huh J, Chu D, Dunton GF. Momentary affect, stress coping, and food intake in mother-child dyads. Health Psychol. 2019;38(3):238–47.

Mason TB, Smith KE, Dunton GF. Maternal parenting styles and ecological momentary assessment of maternal feeding practices and child food intake across middle childhood to early adolescence. Pediatr Obes. 2020;15(10): e12683.

Do B, Yang CH, Lopez NV, Mason TB, Margolin G, Dunton GF. Investigating the momentary association between maternal support and children’s fruit and vegetable consumption using ecological momentary assessment. Appetite. 2020;150: 104667.

Naya CH, Chu D, Wang WL, Nicolo M, Dunton GF, Mason TB. Children’s Daily Negative Affect Patterns and Food Consumption on Weekends: An Ecological Momentary Assessment Study. J Nutr Educ Behav. 2022;54(7):600–9.

Lopez NV, Lai MH, Yang CH, Dunton GF, Belcher BR. Associations of Maternal and Paternal Parenting Practices With Children’s Fruit and Vegetable Intake and Physical Activity: Preliminary Findings From an Ecological Momentary Study. JMIR Form Res. 2022;6(8): e38326.

Bruening M, van Woerden I, Todd M, Brennhofer S, Laska MN, Dunton G. A Mobile Ecological Momentary Assessment Tool (devilSPARC) for Nutrition and Physical Activity Behaviors in College Students: A Validation Study. J Med Internet Res. 2016;18(7): e209.

Piontak JR, Russell MA, Danese A, Copeland WE, Hoyle RH, Odgers CL. Violence exposure and adolescents’ same-day obesogenic behaviors: New findings and a replication. Soc Sci Med. 2017;189:145–51.

Campbell KL, Babiarz A, Wang Y, Tilton NA, Black MM, Hager ER. Factors in the home environment associated with toddler diet: an ecological momentary assessment study. Public Health Nutr. 2018;21(10):1855–64.

Cummings JR, Mamtora T, Tomiyama AJ. Non-food rewards and highly processed food intake in everyday life. Appetite. 2019;142: 104355.

Maher JP, Harduk M, Hevel DJ, Adams WM, McGuirt JT. Momentary Physical Activity Co-Occurs with Healthy and Unhealthy Dietary Intake in African American College Freshmen. Nutrients. 2020;12(5):1360.

Lin TT, Park C, Kapella MC, Martyn-Nemeth P, Tussing-Humphreys L, Rospenda KM, et al. Shift work relationships with same- and subsequent-day empty calorie food and beverage consumption. Scand J Work Environ Health. 2020;46(6):579–88.

Yong JYY, Tong EMW, Liu JCJ. When the camera eats first: Exploring how meal-time cell phone photography affects eating behaviours. Appetite. 2020;154:104787.

Goldstein SP, Hoover A, Evans EW, Thomas JG. Combining ecological momentary assessment, wrist-based eating detection, and dietary assessment to characterize dietary lapse: A multi-method study protocol. Digit Health. 2021;7:2055207620988212.

Chmurzynska A, Mlodzik-Czyzewska MA, Malinowska AM, Radziejewska A, Mikołajczyk-Stecyna J, Bulczak E, et al. Greater self-reported preference for fat taste and lower fat restraint are associated with more frequent intake of high-fat food. Appetite. 2021;159:105053.

Barchitta M, Maugeri A, Favara G, Magnano San Lio R, Riela PM, Guarnera L, et al. Development of a Web-App for the Ecological Momentary Assessment of Dietary Habits among College Students: The HEALTHY-UNICT Project. Nutrients. 2022;14(2):330.

Spook JE, Paulussen T, Kok G, Van Empelen P. Monitoring dietary intake and physical activity electronically: feasibility, usability, and ecological validity of a mobile-based Ecological Momentary Assessment tool. J Med Internet Res. 2013;15(9): e214.

Wouters S, Jacobs N, Duif M, Lechner L, Thewissen V. Affect and between-meal snacking in daily life: the moderating role of gender and age. Psychol Health. 2018;33(4):555–72.

Wouters S, Jacobs N, Duif M, Lechner L, Thewissen V. Negative affective stress reactivity: The dampening effect of snacking. Stress Health. 2018;34(2):286–95.

Wouters S, Thewissen V, Duif M, Lechner L, Jacobs N. Assessing Energy Intake in Daily Life: Signal-Contingent Smartphone Application Versus Event-Contingent Paper and Pencil Estimated Diet Diary. Psychol Belg. 2016;56(4):357–69.

Wouters S, Thewissen V, Duif M, van Bree RJ, Lechner L, Jacobs N. Habit strength and between-meal snacking in daily life: the moderating role of level of education. Public Health Nutr. 2018;21(14):2595–605.

Forman EM, Shaw JA, Goldstein SP, Butryn ML, Martin LM, Meiran N, et al. Mindful decision making and inhibitory control training as complementary means to decrease snack consumption. Appetite. 2016;103:176–83.

Richard A, Meule A, Reichenberger J, Blechert J. Food cravings in everyday life: An EMA study on snack-related thoughts, cravings, and consumption. Appetite. 2017;113:215–23.

Richard A, Meule A, Blechert J. Implicit evaluation of chocolate and motivational need states interact in predicting chocolate intake in everyday life. Eat Behav. 2019;33:1–6.

Zenk SN, Horoi I, McDonald A, Corte C, Riley B, Odoms-Young AM. Ecological momentary assessment of environmental and personal factors and snack food intake in African American women. Appetite. 2014;83:333–41.

Ghosh Roy P, Jones KK, Martyn-Nemeth P, Zenk SN. Contextual correlates of energy-dense snack food and sweetened beverage intake across the day in African American women: An application of ecological momentary assessment. Appetite. 2019;132:73–81.

Ortega A, Bejarano CM, Hesse DR, Reed D, Cushing CC. Temporal discounting modifies the effect of microtemporal hedonic hunger on food consumption: An ecological momentary assessment study. Eat Behav. 2022;48: 101697.

Boronat A, Clivillé-Pérez J, Soldevila-Domenech N, Forcano L, Pizarro N, Fitó M, et al. Mobile Device-assisted Dietary Ecological Momentary Assessments for the Evaluation of the Adherence to the Mediterranean Diet in a Continuous Manner. J Vis Exp. 2021(175).

de Rivaz R, Swendsen J, Berthoz S, Husky M, Merikangas K, Marques-Vidal P. Associations between Hunger and Psychological Outcomes: A Large-Scale Ecological Momentary Assessment Study. Nutrients. 2022;14(23).

Lucassen DA, Brouwer-Brolsma EM, Slotegraaf AI, Kok E, Feskens EJM. DIetary ASSessment (DIASS) Study: Design of an Evaluation Study to Assess Validity, Usability and Perceived Burden of an Innovative Dietary Assessment Methodology. Nutrients. 2022;14(6). https://doi.org/10.3390/nu14061156 .

Jeffers AJ, Mason TB, Benotsch EG. Psychological eating factors, affect, and ecological momentary assessed diet quality. Eat Weight Disord. 2020;25(5):1151–9.

Lucassen DA, Brouwer-Brolsma EM, Boshuizen HC, Mars M, de Vogel-Van den Bosch J, Feskens EJ. Validation of the smartphone-based dietary assessment tool “Traqq” for assessing actual dietary intake by repeated 2-h recalls in adults: comparison with 24-h recalls and urinary biomarkers. Am J Clin Nutr. 2023;117(6):1278–87.

Article   PubMed   CAS   Google Scholar  

Perski O, Keller J, Kale D, Asare BY, Schneider V, Powell D, et al. Understanding health behaviours in context: A systematic review and meta-analysis of ecological momentary assessment studies of five key health behaviours. Health Psychol Rev. 2022;16(4):576–601.

Thompson FE, Subar AF. Chapter 1 - Dietary Assessment Methodology. In: Coulston AM, Boushey CJ, Ferruzzi MG, Delahanty LM, editors. Nutrition in the Prevention and Treatment of Disease (Fourth Edition): Academic Press; 2017. p. 5–48.

Shiffman S, Balabanis MH, Gwaltney CJ, Paty JA, Gnys M, Kassel JD, et al. Prediction of lapse from associations between smoking and situational antecedents assessed by ecological momentary assessment. Drug Alcohol Depend. 2007;91(2-3):159–68.

Eisele G, Kasanova Z, Houben M. Questionnaire design and evaluation. In: Myin-Germeys I, Kuppens P, editors. The open handbook of experience sampling methodology: a step-by-step guide to designing, conducting, and analyzing ESM studies. Center for Research on Experience Sampling and Ambulatory Methods Leuven; 2021. p. 71–90.

Acknowledgements

Not applicable.

Funding

This work was supported by a PhD fellowship Strategic Basic research grant (1S96721N) of Research Foundation Flanders (FWO) and KU Leuven Internal Funds (C3/22/50). The funders had no role in the conceptualization, design, data collection, analysis, decision to publish, or preparation of the manuscript.

Author information

Authors and affiliations

Clinical and Experimental Endocrinology, Department of Chronic Diseases and Metabolism, KU Leuven, Leuven, Belgium

Joke Verbeke & Christophe Matthys

Department of Endocrinology, University Hospitals Leuven, Leuven, Belgium

Christophe Matthys

Contributions

JV conducted the review and screened the articles. CM acted as the second reviewer when there was doubt about the inclusion of articles during the screening process. JV extracted the data and wrote the manuscript. CM revised the manuscript and supervised the research.

Corresponding author

Correspondence to Christophe Matthys.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Verbeke, J., Matthys, C. Experience Sampling as a dietary assessment method: a scoping review towards implementation. Int J Behav Nutr Phys Act 21, 94 (2024). https://doi.org/10.1186/s12966-024-01643-1

Received: 23 February 2024

Accepted: 14 August 2024

Published: 27 August 2024

DOI: https://doi.org/10.1186/s12966-024-01643-1

Keywords

  • Nutrition Assessment
  • Mobile Health
  • Epidemiology

