cases that meet some predetermined criterion of importance
Embedded in each strategy is the ability to compare and contrast, to identify similarities and differences in the phenomenon of interest. Nevertheless, some of these strategies (e.g., maximum variation sampling, extreme case sampling, intensity sampling, and purposeful random sampling) are used to identify and expand the range of variation or differences, similar to the use of quantitative measures to describe the variability or dispersion of values for a particular variable or variables, while other strategies (e.g., homogeneous sampling, typical case sampling, criterion sampling, and snowball sampling) are used to narrow the range of variation and focus on similarities. The latter are similar to the use of quantitative central tendency measures (e.g., mean, median, and mode). Moreover, certain strategies, like stratified purposeful sampling or opportunistic or emergent sampling, are designed to achieve both goals. As Patton (2002, p. 240) explains, "the purpose of a stratified purposeful sample is to capture major variations rather than to identify a common core, although the latter may also emerge in the analysis. Each of the strata would constitute a fairly homogeneous sample."
Despite its wide use, there are numerous challenges in identifying and applying the appropriate purposeful sampling strategy in any study. For instance, the range of variation in the population from which a purposive sample is to be drawn is often not known at the outset of a study. To set as the goal the sampling of information-rich informants who cover the range of variation assumes one already knows that range. Consequently, an iterative approach of sampling and re-sampling is usually recommended to ensure that theoretical saturation occurs (Miles & Huberman, 1994). That saturation point may be determined a priori on the basis of an existing theory or conceptual framework, or it may emerge from the data themselves, as in a grounded theory approach (Glaser & Strauss, 1967). Second, a not-insignificant number of qualitative methodologists resist or refuse systematic sampling of any kind, rejecting the limiting nature of such realist, systematic, or positivist approaches; this includes critics of interventions and proponents of "bottom-up" case studies and critiques. However, even those who equate purposeful sampling with systematic sampling must offer a rationale for selecting study participants that is linked to the aims of the investigation (i.e., why recruit these individuals for this particular study? What qualifies them to address its aims?). While systematic sampling may be associated with a post-positivist tradition of qualitative data collection and analysis, such sampling, and the need for it, are not inherently limited to post-positivist qualitative approaches (Patton, 2002).
Characteristics of implementation research.
In implementation research, quantitative and qualitative methods often play important roles, either simultaneously or sequentially. They may be combined to answer the same question through convergence of results from different sources; to answer related questions in a complementary fashion; to use one set of methods to expand or explain the results obtained with the other; to use one set of methods to develop questionnaires or conceptual models that inform the use of the other; or to use one set of methods to identify the sample for analysis with the other (Palinkas et al., 2011). A review of mixed method designs in implementation research conducted by Palinkas and colleagues (2011) revealed seven different sequential and simultaneous structural arrangements, five different functions of mixed methods, and three different ways of linking quantitative and qualitative data. However, this review considered neither the sampling strategies involved in the types of quantitative and qualitative methods common to implementation research, nor the consequences of the sampling strategy selected for one method or set of methods for the choice of sampling strategy for the other. For instance, one of the most significant challenges to sampling in sequential mixed method designs lies in the limitations the initial method may place on sampling for the subsequent method. As Morse and Niehaus (2009) observe, when the initial method is qualitative, the sample selected may be too small and lack the randomization necessary to fulfill the assumptions of a subsequent quantitative analysis. On the other hand, when the initial method is quantitative, the sample may be too large for every individual to be included in qualitative inquiry and may lack the purposeful selection needed to reduce it to a size more appropriate for qualitative research.
The fact that potential participants were recruited and selected at random does not necessarily make them information rich.
A re-examination of the 22 studies, plus an additional 6 studies published since 2009, revealed that only 5 studies (Aarons & Palinkas, 2007; Bachman et al., 2009; Palinkas et al., 2011; Palinkas et al., 2012; Slade et al., 2003) made specific reference to purposeful sampling. An additional three studies (Henke et al., 2008; Proctor et al., 2007; Swain et al., 2010) did not make explicit reference to purposeful sampling but did provide a rationale for sample selection. The remaining 20 studies provided no description of the sampling strategy used to identify participants for qualitative data collection and analysis; however, a rationale could be inferred from the description of who was recruited and selected for participation. Of the 28 studies, 3 used more than one sampling strategy. Twenty-one of the 28 studies (75%) used some form of criterion sampling. In most instances, the criterion used related to the individual's role, either in the research project (i.e., trainer, team leader) or in the agency (program director, clinical supervisor, clinician); in other words, a criterion of inclusion in a certain category (criterion-i), in contrast to sampling cases external to a specific criterion (criterion-e). For instance, in a series of studies based on the National Implementing Evidence-Based Practices Project, semi-structured interviews were conducted with consultant trainers and program leaders at each study site (Brunette et al., 2008; Marshall et al., 2008; Marty et al., 2007; Rapp et al., 2010; Woltmann et al., 2008). Six studies used some form of maximum variation sampling to ensure representativeness and diversity of organizations and individual practitioners. Two studies used intensity sampling to make contrasts.
Aarons and Palinkas (2007), for example, purposefully selected the 15 child welfare case managers with the most positive and the most negative views of SafeCare, an evidence-based prevention intervention, based on results of a web-based quantitative survey about SafeCare's perceived value and usefulness. Kramer and Burns (2008) recruited and interviewed clinicians providing usual care and clinicians who dropped out of a study prior to consent, to contrast with clinicians who provided the intervention under investigation. One study (Hoagwood et al., 2007) used a typical case approach to identify participants for a qualitative assessment of the challenges faced in implementing a trauma-focused intervention for youth. One study (Green & Aarons, 2011) used a combined snowball sampling/criterion-i strategy by asking recruited program managers to identify clinicians, administrative support staff, and consumers for project recruitment. County mental health directors, agency directors, and program managers were recruited to represent the policy perspective on implementation, while clinicians, administrative support staff, and consumers were recruited to represent the direct-practice perspective on EBP implementation.
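The two-tailed selection just described, using quantitative survey scores to identify extreme cases for qualitative follow-up, can be sketched in a few lines. This is a minimal illustration only; the field names, scores, and sample sizes are hypothetical, not drawn from the cited studies.

```python
# Extreme/intensity sampling sketch: rank respondents by a quantitative
# attitude score, then recruit the lowest and highest scorers for
# qualitative interviews. All identifiers and scores are hypothetical.

def extreme_case_sample(respondents, score_key, k):
    """Return the k lowest- and k highest-scoring respondents."""
    ranked = sorted(respondents, key=lambda r: r[score_key])
    return ranked[:k], ranked[-k:]

survey = [
    {"id": "cm01", "attitude": 1.2},
    {"id": "cm02", "attitude": 4.8},
    {"id": "cm03", "attitude": 2.9},
    {"id": "cm04", "attitude": 4.1},
    {"id": "cm05", "attitude": 1.7},
    {"id": "cm06", "attitude": 3.6},
]

most_negative, most_positive = extreme_case_sample(survey, "attitude", 2)
print([r["id"] for r in most_negative])  # the two lowest scorers
print([r["id"] for r in most_positive])  # the two highest scorers
```

In practice the cut point (how many cases from each tail) would be set by the study's resources and by how quickly theoretical saturation is reached, not by a fixed constant.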
Table 2 below describes the use of different purposeful sampling strategies in mixed methods implementation studies. Criterion-i sampling was most frequently used in studies that employed a simultaneous structure in which the qualitative method was secondary to the quantitative method, or in which the qualitative and quantitative methods were assigned equal priority. These mixed method designs were used to complement the depth of understanding afforded by the qualitative methods with the breadth of understanding afforded by the quantitative methods (n = 13), to explain or elaborate upon the findings of one set of methods (usually quantitative) with the findings from the other set (n = 10), or to seek convergence through triangulation of results or quantification of qualitative data (n = 8). In the large majority (n = 18) of these studies, the process of mixing methods involved embedding the qualitative study within the larger quantitative study. In one study (Gioia & Dziadosz, 2008), criterion sampling was used in a simultaneous design in which quantitative and qualitative data were merged in a complementary fashion; in two studies (Aarons et al., 2012; Zazzali et al., 2008), quantitative and qualitative data were connected, one in a sequential design for the purpose of developing a conceptual model (Zazzali et al., 2008) and one in a simultaneous design for the purpose of complementing one another (Aarons et al., 2012). Three of the six studies that used maximum variation sampling used a simultaneous structure, with quantitative methods taking priority over qualitative methods and the qualitative methods embedded in a larger quantitative study (Henke et al., 2008; Palinkas et al., 2010; Slade et al., 2008).
Two of the six studies used maximum variation sampling in a sequential design (Aarons et al., 2009; Zazzali et al., 2008) and one in a simultaneous design (Henke et al., 2010) for the purpose of development, and three used it in a simultaneous design for complementarity (Bachman et al., 2009; Henke et al., 2008; Palinkas, Ell, Hansen, Cabassa, & Wells, 2011). The two studies relying upon intensity sampling used a simultaneous structure for the purpose of either convergence or expansion, and both embedded a qualitative study within a larger quantitative study (Aarons & Palinkas, 2007; Kramer & Burns, 2008). The single typical case study involved a simultaneous design in which the qualitative study was embedded in a larger quantitative study for the purpose of complementarity (Hoagwood et al., 2007). The snowball/criterion-i study involved a sequential design in which the qualitative data were merged with the quantitative data for the purposes of convergence and conceptual model development (Green & Aarons, 2011). Although not used in any of the 28 implementation studies examined here, another common sequential strategy is criterion sampling of the larger quantitative sample to produce a second-stage qualitative sample, in a manner similar to maximum variation sampling except that criterion sampling narrows the range of variation while maximum variation sampling expands it.
Table 2. Purposeful sampling strategies and mixed method designs in implementation research

| Sampling strategy | Structure | Design | Function |
|---|---|---|---|
| Single-stage sampling (n = 22) | | | |
| Criterion (n = 18) | Simultaneous (n = 17); Sequential (n = 6) | Merged (n = 9); Connected (n = 9); Embedded (n = 14) | Convergence (n = 6); Complementarity (n = 12); Expansion (n = 10); Development (n = 3); Sampling (n = 4) |
| Maximum variation (n = 4) | Simultaneous (n = 3); Sequential (n = 1) | Merged (n = 1); Connected (n = 1); Embedded (n = 2) | Convergence (n = 1); Complementarity (n = 2); Expansion (n = 1); Development (n = 2) |
| Intensity (n = 1) | Simultaneous; Sequential | Merged; Connected; Embedded | Convergence; Complementarity; Expansion; Development |
| Typical case (n = 1) | Simultaneous | Embedded | Complementarity |
| Multistage sampling (n = 4) | | | |
| Criterion/maximum variation (n = 2) | Simultaneous; Sequential | Embedded; Connected | Complementarity; Development |
| Criterion/intensity (n = 1) | Simultaneous | Embedded | Convergence; Complementarity; Expansion |
| Criterion/snowball (n = 1) | Sequential | Connected | Convergence; Development |
Criterion-i sampling as a purposeful sampling strategy shares many characteristics with random probability sampling, despite having different aims and different procedures for identifying and selecting potential participants. In both instances, study participants are drawn from agencies, organizations or systems involved in the implementation process. Individuals are selected based on the assumption that they possess knowledge and experience with the phenomenon of interest (i.e., the implementation of an EBP) and thus will be able to provide information that is both detailed (depth) and generalizable (breadth). Participants for a qualitative study, usually service providers, consumers, agency directors, or state policy-makers, are drawn from the larger sample of participants in the quantitative study. They are selected from the larger sample because they meet the same criteria, in this case, playing a specific role in the organization and/or implementation process. To some extent, they are assumed to be “representative” of that role, although implementation studies rarely explain the rationale for selecting only some and not all of the available role representatives (i.e., recruiting 15 providers from an agency for semi-structured interviews out of an available sample of 25 providers). From the perspective of qualitative methodology, participants who meet or exceed a specific criterion or criteria possess intimate (or, at the very least, greater) knowledge of the phenomenon of interest by virtue of their experience, making them information-rich cases.
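The criterion-i selection process described above, filtering the larger quantitative sample down to participants who hold specified roles, can be sketched as follows. The roles and records are invented for illustration and do not come from any of the cited studies.

```python
# Criterion-i sampling sketch: from the larger quantitative sample, keep
# only participants who meet an inclusion criterion (here, holding a
# specific role in the implementation process). All data are hypothetical.

def criterion_i_sample(participants, criterion):
    """Select participants who satisfy the inclusion criterion."""
    return [p for p in participants if criterion(p)]

quant_sample = [
    {"id": 1, "role": "clinician"},
    {"id": 2, "role": "consumer"},
    {"id": 3, "role": "program director"},
    {"id": 4, "role": "clinician"},
    {"id": 5, "role": "clinical supervisor"},
]

target_roles = {"program director", "clinical supervisor"}
qual_sample = criterion_i_sample(quant_sample, lambda p: p["role"] in target_roles)
print([p["id"] for p in qual_sample])  # participants holding a target role
```

Note that, as the text observes, such a filter guarantees only that the criterion is met; it says nothing about which of the qualifying participants are the most information-rich, which is why the rationale for taking some rather than all role representatives still needs to be stated.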
However, criterion sampling may not be the most appropriate strategy for implementation research because, by attempting to capture both breadth and depth of understanding, it may accomplish neither adequately. Although qualitative methods are often contrasted with quantitative methods on the basis of depth versus breadth, they actually require elements of both in order to provide a comprehensive understanding of the phenomenon of interest. Ideally, the goal of achieving theoretical saturation by providing as much detail as possible involves selection of individuals or cases that can ensure all aspects of the phenomenon are included in the examination and that each aspect is thoroughly examined. This goal therefore requires an approach that sequentially or simultaneously expands and narrows the field of view. By selecting only individuals who meet a specific criterion defined on the basis of their role in the implementation process, or who have had a specific experience (e.g., engaged only in an implementation defined as successful or only in one defined as unsuccessful), one may fail to capture the experiences or activities of other groups playing other roles in the process. For instance, a focus only on practitioners may fail to capture the insights, experiences, and activities of consumers, family members, agency directors, administrative staff, or state policy leaders, thus limiting the breadth of understanding of the implementation process. On the other hand, selecting participants on the basis of role alone (practitioner, consumer, director, or staff member) may fail to identify those with the greatest experience, the most knowledge, or the greatest ability to communicate what they know and have experienced, thus limiting the depth of understanding of the implementation process.
To address the potential limitations of criterion sampling, other purposeful sampling strategies should be considered and possibly adopted in implementation research (Figure 1). For instance, strategies placing greater emphasis on breadth and variation, such as maximum variation, extreme case, and confirming and disconfirming case sampling, are better suited to an examination of differences, while strategies placing greater emphasis on depth and similarity, such as homogeneous, snowball, and typical case sampling, are better suited to an examination of commonalities, even though both types of strategies attend to both differences and similarities. Alternatives to criterion sampling may also be better matched to the specific functions of mixed methods. For instance, using qualitative methods for the purpose of complementarity may require a sampling strategy that emphasizes similarity if it is to achieve depth of understanding or to explore and develop hypotheses that complement a quantitative probability sampling strategy achieving breadth of understanding and testing hypotheses (Kemper et al., 2003). Similarly, mixed methods that address related questions for the purpose of expanding or explaining results or developing new measures or conceptual models may require a purposeful sampling strategy aiming for similarity that complements probability sampling aiming for variation or dispersion. A narrowly focused purposeful sampling strategy for qualitative analysis that complements a broader probability sample for quantitative analysis may help to balance inference quality/trustworthiness (internal validity) against generalizability/transferability (external validity); a single method that focuses only on a broad view may achieve external validity at the expense of internal validity (Kemper et al., 2003).
On the other hand, the aim of convergence (answering the same question with either method) may suggest use of a purposeful sampling strategy that aims for breadth that parallels the quantitative probability sampling strategy.
Figure 1. Purposeful and Random Sampling Strategies for Mixed Method Implementation Studies
Furthermore, the specific nature of implementation research suggests that a multistage purposeful sampling strategy be used. Three different multistage sampling strategies are illustrated in Figure 1 below. Several qualitative methodologists recommend sampling for variation (breadth) before sampling for commonalities (depth) (Glaser, 1978; Bernard, 2002) (Multistage I). Also known as a "funnel approach," this strategy is often recommended when conducting semi-structured interviews (Spradley, 1979) or focus groups (Morgan, 1997). It begins with a broad view of the topic and then narrows the conversation to very specific components of the topic. However, as noted earlier, the lack of a clear understanding of the range of variation may require an iterative approach in which each stage of data analysis helps to determine subsequent means of data collection and analysis (Denzin, 1978; Patton, 2002) (Multistage II). Similarly, multistage purposeful sampling designs such as opportunistic or emergent sampling allow the option of adding to a sample to take advantage of unforeseen opportunities after data collection has begun (Patton, 2002, p. 240) (Multistage III). Multistage I models generally involve two stages, while a Multistage II model requires a minimum of three stages, alternating between sampling for variation and sampling for similarity. A Multistage III model begins with sampling for variation and ends with sampling for similarity, but may involve one or more intervening stages of sampling for variation or similarity as the need or opportunity arises.
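A Multistage I ("funnel") design can be sketched as two steps: first sampling for variation across strata, then narrowing to a homogeneous subsample within one stratum. The strata and cases below are hypothetical, chosen only to make the two stages concrete.

```python
# Multistage ("funnel") purposeful sampling sketch: stage 1 samples for
# variation (one case per agency type, spanning the range), stage 2
# narrows to a homogeneous subsample within a single stratum.
from collections import defaultdict

def sample_for_variation(cases, stratum_key):
    """Stage 1: take one case from each stratum to maximize variation."""
    by_stratum = defaultdict(list)
    for c in cases:
        by_stratum[c[stratum_key]].append(c)
    return [group[0] for group in by_stratum.values()]

def sample_for_similarity(cases, stratum_key, stratum_value):
    """Stage 2: keep only cases sharing one stratum value (homogeneous)."""
    return [c for c in cases if c[stratum_key] == stratum_value]

cases = [
    {"id": "a", "agency_type": "public"},
    {"id": "b", "agency_type": "private"},
    {"id": "c", "agency_type": "public"},
    {"id": "d", "agency_type": "nonprofit"},
]

stage1 = sample_for_variation(cases, "agency_type")             # breadth
stage2 = sample_for_similarity(cases, "agency_type", "public")  # depth
print(len(stage1), len(stage2))  # 3 2
```

A Multistage II or III design would interleave further calls to these two functions, with each stage informed by analysis of the data collected at the previous one, rather than fixing the sequence in advance.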
Multistage purposeful sampling is also consistent with the use of hybrid designs to simultaneously examine intervention effectiveness and implementation. An extension of the concept of "practical clinical trials" (Tunis, Stryer, & Clancy, 2003), effectiveness-implementation hybrid designs provide benefits such as more rapid translational gains in clinical intervention uptake, more effective implementation strategies, and more useful information for researchers and decision makers (Curran et al., 2012). Such designs may give equal priority to the testing of clinical treatments and implementation strategies (Hybrid Type 2), give priority to the testing of treatment effectiveness (Hybrid Type 1), or give priority to the testing of the implementation strategy (Hybrid Type 3). Curran and colleagues (2012) suggest that evaluation of an intervention's effectiveness will require or involve quantitative measures, while evaluation of the implementation process will require or involve mixed methods. When conducting a Hybrid Type 1 design (a process evaluation of implementation in the context of a clinical effectiveness trial), the qualitative data can be used to inform the findings of the effectiveness trial. Thus, an effectiveness trial that finds substantial variation might purposefully select participants using a broad strategy, such as sampling for disconfirming cases, to account for that variation. For instance, group randomized trials require knowledge of the contexts and circumstances that are similar and different across sites in order to account for inevitable site differences in interventions and to assist local implementations of an intervention (Bloom & Michalopoulos, 2013; Raudenbush & Liu, 2000). Alternatively, a narrow strategy may be used to account for a lack of variation. In either instance, the choice of a purposeful sampling strategy is determined by the outcomes of the quantitative analysis, which is based on a probability sampling strategy.
In Hybrid Type 2 and Type 3 designs, where the implementation process is given equal or greater priority than the effectiveness trial, the purposeful sampling strategy must be first and foremost consistent with the aims of the implementation study, which may be to understand variation, central tendencies, or both. In all three instances, the sampling strategy employed for the implementation study may vary based on the priority assigned to that study relative to the effectiveness trial. For instance, purposeful sampling for a Hybrid Type 1 design may give higher priority to variation and comparison to understand the parameters of implementation processes or context as a contribution to an understanding of effectiveness outcomes (i.e., using qualitative data to expand upon or explain the results of the effectiveness trial). In effect, these process measures can be seen as modifiers of innovation/EBP outcomes. In contrast, purposeful sampling for a Hybrid Type 3 design may give higher priority to similarity and depth to understand the core features of successful outcomes only.
Finally, multistage sampling strategies may be more consistent with innovations in experimental design that serve as alternatives to the classic randomized controlled trial in community-based settings and offer greater feasibility, acceptability, and external validity. While RCT designs provide the highest level of evidence, "in many clinical and community settings, and especially in studies with underserved populations and low resource settings, randomization may not be feasible or acceptable" (Glasgow et al., 2005, p. 554). Randomized trials are also "relatively poor in assessing the benefit from complex public health or medical interventions that account for individual preferences for or against certain interventions, differential adherence or attrition, or varying dosage or tailoring of an intervention to individual needs" (Brown et al., 2009, p. 2). Several alternatives to the randomized design have been proposed, such as "interrupted time series," "multiple baseline across settings," and "regression-discontinuity" designs. Optimal designs represent one such alternative and are addressed in detail by Duan and colleagues (this issue). Like purposeful sampling, optimal designs are intended to capture information-rich cases, usually identified as the individuals most likely to benefit from the experimental intervention. The goal here is not to identify the typical or average patient, but patients who represent one end of the variation, as in an extreme case, intensity, or criterion sampling strategy. Hence, a strategy that begins by sampling for variation at the first stage and then samples for homogeneity within a specific parameter of that variation (i.e., one end or the other of the distribution) at the second stage would seem the best approach for identifying an "optimal" sample for the clinical trial.
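The two-stage logic just described, first characterizing the variation in a benefit-related measure and then sampling for homogeneity within one tail of that distribution, might be sketched as follows. The benefit score and the proportion retained are illustrative assumptions, not parameters from any cited design.

```python
# Sketch of assembling an "optimal" trial sample: rank candidates by a
# (hypothetical) expected-benefit score capturing the observed variation,
# then keep only the upper tail, i.e., those most likely to benefit.

def upper_tail(cases, score_key, proportion):
    """Keep the top `proportion` of cases ranked by score."""
    ranked = sorted(cases, key=lambda c: c[score_key], reverse=True)
    k = max(1, int(len(ranked) * proportion))
    return ranked[:k]

cohort = [{"id": i, "expected_benefit": b}
          for i, b in enumerate([0.2, 0.9, 0.4, 0.7, 0.1, 0.8, 0.3, 0.6])]

optimal = upper_tail(cohort, "expected_benefit", 0.25)  # top quarter
print(sorted(c["expected_benefit"] for c in optimal))  # [0.8, 0.9]
```

Sampling the lower tail instead (reversing the sort) would correspond to selecting the cases least likely to benefit, which some designs use to probe the boundary conditions of an intervention.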
Another alternative to the classic RCT is the set of adaptive designs proposed by Brown and colleagues (Brown et al., 2006; Brown et al., 2008; Brown et al., 2009). Adaptive designs are a sequence of trials that draw on the results of existing studies to determine the next stage of evaluation research. They use cumulative knowledge of current treatment successes or failures to change qualities of the ongoing trial. An adaptive intervention modifies what an individual subject (or community, for a group-based trial) receives in response to his or her preferences or initial responses to an intervention. Consistent with multistage sampling in qualitative research, the design is somewhat iterative, in the sense that information gained from analysis of data collected at the first stage influences the nature of the data collected, and the way they are collected, at subsequent stages (Denzin, 1978). Furthermore, many adaptive designs may benefit from a multistage purposeful sampling strategy at early phases of the clinical trial to identify the range of variation and core characteristics of study participants. This information can then be used to identify the optimal dose of treatment, limit sample size, randomize participants into different enrollment procedures, determine who should be eligible for random assignment (as in the optimal design) to maximize treatment adherence and minimize dropout, or identify incentives and motives that may encourage participation in the trial itself.
Alternatives to the classic RCT design may also be desirable in studies that adopt a community-based participatory research framework (Minkler & Wallerstein, 2003), considered an important tool in conducting implementation research (Palinkas & Soydan, 2012). Such frameworks suggest that identification and recruitment of potential study participants will place greater emphasis on the priorities and "local knowledge" of community partners than on the need to sample for variation or uniformity. In this instance, the first stage of sampling may approximate the strategy of sampling politically important cases (Patton, 2002), followed by other sampling strategies intended to maximize variation in stakeholder opinions or experience.
On the basis of this review, the following recommendations are offered for the use of purposeful sampling in mixed method implementation research. First, many mixed methods studies in health services research and implementation science do not clearly identify or provide a rationale for the sampling procedure for either quantitative or qualitative components of the study (Wisdom et al., 2011), so a primary recommendation is for researchers to clearly describe their sampling strategies and provide the rationale for the strategy.
Second, use of a single stage strategy for purposeful sampling for qualitative portions of a mixed methods implementation study should adhere to the same general principles that govern all forms of sampling, qualitative or quantitative. Kemper and colleagues (2003) identify seven such principles: 1) the sampling strategy should stem logically from the conceptual framework as well as the research questions being addressed by the study; 2) the sample should be able to generate a thorough database on the type of phenomenon under study; 3) the sample should at least allow the possibility of drawing clear inferences and credible explanations from the data; 4) the sampling strategy must be ethical; 5) the sampling plan should be feasible; 6) the sampling plan should allow the researcher to transfer/generalize the conclusions of the study to other settings or populations; and 7) the sampling scheme should be as efficient as practical.
Third, the field of implementation research is itself at a stage where qualitative methods are intended primarily to explore the barriers and facilitators of EBP implementation and to develop new conceptual models of implementation process and outcomes. This is especially important in state implementation research, where fiscal necessities are driving policy reforms for which knowledge about EBP implementation barriers and facilitators is urgently needed. Thus, a multistage purposeful sampling strategy should begin with a broad view emphasizing variation or dispersion and move to a narrow view emphasizing similarity or central tendencies. Such a strategy is necessary for finding the optimal balance between internal and external validity.
Fourth, if we assume that probability sampling will be the preferred strategy for the quantitative components of most implementation research, the selection of a single-stage or multistage purposeful sampling strategy should be based, in part, on how it relates to the probability sample: either for answering the same question (in which case a strategy emphasizing variation and dispersion is preferred) or for answering related questions (in which case a strategy emphasizing similarity and central tendencies is preferred).
Fifth, it should be kept in mind that all sampling procedures, whether purposeful or probability, are designed to capture elements of both similarity and difference, of both centrality and dispersion, because both elements are essential to the task of generating new knowledge through comparison and contrast. Selecting a strategy that emphasizes one does not mean it cannot serve the other. That said, our analysis has assumed at least some degree of concordance between the breadth of understanding associated with quantitative probability sampling and purposeful sampling strategies that emphasize variation on the one hand, and between depth of understanding and purposeful sampling strategies that emphasize similarity on the other. While there may be some merit to that assumption, depth of understanding requires an understanding of both variation and common elements.
Finally, it should also be kept in mind that quantitative data can be generated from a purposeful sampling strategy and qualitative data can be generated from a probability sampling strategy. Each set of data is suited to a specific objective and each must adhere to a specific set of assumptions and requirements. Nevertheless, the promise of mixed methods, like the promise of implementation science, lies in its ability to move beyond the confines of existing methodological approaches and develop innovative solutions to important and complex problems. For states engaged in EBP implementation, the need for these solutions is urgent.
Multistage Purposeful Sampling Strategies
This study was funded through a grant from the National Institute of Mental Health (P30-MH090322: K. Hoagwood, PI).
Published on September 19, 2019 by Shona McCombes. Revised on June 22, 2023.
When you conduct research about a group of people, it's rarely possible to collect data from every person in that group. Instead, you select a sample. The sample is the group of individuals who will actually participate in the research.
To draw valid conclusions from your results, you have to carefully decide how you will select a sample that is representative of the group as a whole. This is called a sampling method. There are two primary types of sampling methods that you can use in your research: probability sampling and non-probability sampling.
You should clearly explain how you selected your sample in the methodology section of your paper or thesis, as well as how you approached minimizing research bias in your work.
First, you need to understand the difference between a population and a sample , and identify the target population of your research.
The population can be defined in terms of geographical location, age, income, or many other characteristics.
It is important to carefully define your target population according to the purpose and practicalities of your project.
If the population is very large, demographically mixed, and geographically dispersed, it might be difficult to gain access to a representative sample. A lack of a representative sample affects the validity of your results, and can lead to several research biases , particularly sampling bias .
The sampling frame is the actual list of individuals that the sample will be drawn from. Ideally, it should include the entire target population (and nobody who is not part of that population).
The number of individuals you should include in your sample depends on various factors, including the size and variability of the population and your research design. There are different sample size calculators and formulas depending on what you want to achieve with statistical analysis .
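As a rough illustration of what such sample size calculators compute, here is a sketch of Cochran's formula for estimating a proportion, with a finite-population correction. The default values (z = 1.96 for 95% confidence, p = 0.5 as the most conservative guess, e = 0.05 for a ±5% margin of error) are common conventions, not requirements:

```python
import math

def cochran_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Cochran's formula: n0 = z^2 * p * (1 - p) / e^2."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

def finite_population_correction(n0: int, population: int) -> int:
    """Adjust n0 downward when the population itself is small."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

n0 = cochran_sample_size()                    # 95% confidence, +/-5% margin
print(n0)                                     # 385
print(finite_population_correction(n0, 2000)) # 323 for a population of 2,000
```

The exact number you need still depends on your research design and the statistical analysis you plan to run; treat formulas like this as a starting point, not a verdict.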
Probability sampling means that every member of the population has a chance of being selected. It is mainly used in quantitative research . If you want to produce results that are representative of the whole population, probability sampling techniques are the most valid choice.
There are four main types of probability sampling.
In a simple random sample, every member of the population has an equal chance of being selected. Your sampling frame should include the whole population.
To conduct this type of sampling, you can use tools like random number generators or other techniques that are based entirely on chance.
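A minimal sketch of a simple random draw, using a hypothetical sampling frame of 500 names. The fixed seed is only there to make the example reproducible; in a real study you would let the generator run unseeded:

```python
import random

# Hypothetical sampling frame: a list of every member of the target population.
frame = [f"person_{i:03d}" for i in range(1, 501)]  # 500 people

rng = random.Random(42)            # seeded only for reproducibility
sample = rng.sample(frame, k=20)   # every member has an equal chance of selection
print(sample[:5])
```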
Systematic sampling is similar to simple random sampling, but it is usually slightly easier to conduct. Every member of the population is listed with a number, but instead of randomly generating numbers, individuals are chosen at regular intervals.
If you use this technique, it is important to make sure that there is no hidden pattern in the list that might skew the sample. For example, if the HR database groups employees by team, and team members are listed in order of seniority, there is a risk that your interval might skip over people in junior roles, resulting in a sample that is skewed towards senior employees.
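The interval-based selection described above can be sketched as follows. The frame and sample size are hypothetical; note that if the list has a hidden ordering (like the seniority example), you would want to shuffle it or re-order it before applying the interval:

```python
import random

def systematic_sample(frame, n, seed=None):
    """Pick every k-th member after a random start, where k = len(frame) // n."""
    rng = random.Random(seed)
    k = len(frame) // n           # sampling interval
    start = rng.randrange(k)      # random starting point within the first interval
    return frame[start::k][:n]

# Hypothetical list of 200 employees; shuffle first if the list order hides a pattern.
frame = [f"employee_{i}" for i in range(200)]
picked = systematic_sample(frame, 20, seed=7)
```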
Stratified sampling involves dividing the population into subpopulations that may differ in important ways. It allows you to draw more precise conclusions by ensuring that every subgroup is properly represented in the sample.
To use this sampling method, you divide the population into subgroups (called strata) based on the relevant characteristic (e.g., gender identity, age range, income bracket, job role).
Based on the overall proportions of the population, you calculate how many people should be sampled from each subgroup. Then you use random or systematic sampling to select a sample from each subgroup.
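The two steps above, proportional allocation followed by a random draw within each stratum, can be sketched like this. The population, strata, and sizes are hypothetical, and simple rounding is used for the allocation (in practice you may need to adjust so the allocations sum exactly to the target):

```python
import random
from collections import defaultdict

def stratified_sample(people, stratum_of, total_n, seed=0):
    """Allocate total_n proportionally across strata, then draw randomly within each."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for p in people:
        strata[stratum_of(p)].append(p)
    sample = []
    for members in strata.values():
        n = round(total_n * len(members) / len(people))  # proportional allocation
        sample.extend(rng.sample(members, n))
    return sample

# Hypothetical population: 300 people in income bracket "low", 700 in "high".
people = [("low", i) for i in range(300)] + [("high", i) for i in range(700)]
sample = stratified_sample(people, stratum_of=lambda p: p[0], total_n=100)
```

With these numbers, 30 of the 100 sampled people come from the "low" stratum and 70 from "high", matching the 30/70 split in the population.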
Cluster sampling also involves dividing the population into subgroups, but each subgroup should have similar characteristics to the whole sample. Instead of sampling individuals from each subgroup, you randomly select entire subgroups.
If it is practically possible, you might include every individual from each sampled cluster. If the clusters themselves are large, you can also sample individuals from within each cluster using one of the techniques above. This is called multistage sampling .
This method is good for dealing with large and dispersed populations, but there is more risk of error in the sample, as there could be substantial differences between clusters. It’s difficult to guarantee that the sampled clusters are really representative of the whole population.
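The two-stage version (multistage sampling) can be sketched as follows, with hypothetical schools as the clusters: stage one randomly selects whole clusters, stage two samples individuals within each selected cluster:

```python
import random

def multistage_sample(clusters, n_clusters, n_per_cluster, seed=1):
    """Stage 1: randomly pick whole clusters; stage 2: sample individuals within each."""
    rng = random.Random(seed)
    chosen = rng.sample(list(clusters), n_clusters)
    sample = []
    for name in chosen:
        sample.extend(rng.sample(clusters[name], n_per_cluster))
    return sample

# Hypothetical clusters: eight schools, each with 50 pupils.
clusters = {f"school_{c}": [f"s{c}_pupil_{i}" for i in range(50)] for c in "ABCDEFGH"}
sample = multistage_sample(clusters, n_clusters=3, n_per_cluster=10)
```

The risk mentioned above shows up directly here: if the three chosen schools happen to be atypical, nothing in the second stage can correct for it.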
In a non-probability sample, individuals are selected based on non-random criteria, and not every individual has a chance of being included.
This type of sample is easier and cheaper to access, but it has a higher risk of sampling bias . That means the inferences you can make about the population are weaker than with probability samples, and your conclusions may be more limited. If you use a non-probability sample, you should still aim to make it as representative of the population as possible.
Non-probability sampling techniques are often used in exploratory and qualitative research . In these types of research, the aim is not to test a hypothesis about a broad population, but to develop an initial understanding of a small or under-researched population.
A convenience sample simply includes the individuals who happen to be most accessible to the researcher.
This is an easy and inexpensive way to gather initial data, but there is no way to tell if the sample is representative of the population, so it can’t produce generalizable results. Convenience samples are at risk for both sampling bias and selection bias .
Similar to a convenience sample, a voluntary response sample is mainly based on ease of access. Instead of the researcher choosing participants and directly contacting them, people volunteer themselves (e.g. by responding to a public online survey).
Voluntary response samples are always at least somewhat biased , as some people will inherently be more likely to volunteer than others, leading to self-selection bias .
This type of sampling, also known as judgement sampling, involves the researcher using their expertise to select a sample that is most useful to the purposes of the research.
It is often used in qualitative research , where the researcher wants to gain detailed knowledge about a specific phenomenon rather than make statistical inferences, or where the population is very small and specific. An effective purposive sample must have clear criteria and rationale for inclusion. Always make sure to describe your inclusion and exclusion criteria and beware of observer bias affecting your arguments.
If the population is hard to access, snowball sampling can be used to recruit participants via other participants. The number of people you have access to “snowballs” as you get in contact with more people. The downside here is also representativeness, as you have no way of knowing how representative your sample is due to the reliance on participants recruiting others. This can lead to sampling bias .
Quota sampling relies on the non-random selection of a predetermined number or proportion of units. This is called a quota.
You first divide the population into mutually exclusive subgroups (called strata) and then recruit sample units until you reach your quota. These units share specific characteristics, determined by you prior to forming your strata. The aim of quota sampling is to control what or who makes up your sample.
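Quota filling is naturally sequential: you recruit from whoever arrives until each subgroup's quota is met. A sketch, with a hypothetical stream of respondents tagged by age group:

```python
def quota_sample(stream, quotas, key):
    """Recruit from an incoming stream until every subgroup's quota is filled."""
    counts = {g: 0 for g in quotas}
    sample = []
    for unit in stream:
        g = key(unit)
        if g in counts and counts[g] < quotas[g]:
            sample.append(unit)
            counts[g] += 1
        if counts == quotas:   # all quotas met; stop recruiting
            break
    return sample

# Hypothetical respondents: roughly two-thirds aged 18-34, one-third 35+.
respondents = [("18-34", i) if i % 3 else ("35+", i) for i in range(100)]
sample = quota_sample(respondents, quotas={"18-34": 10, "35+": 5}, key=lambda r: r[0])
```

Note that the selection within each quota is still non-random: whoever happens to appear first in the stream gets in, which is exactly why quota sampling is a non-probability method.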
A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.
In statistics, sampling allows you to test a hypothesis about the characteristics of a population.
Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.
Probability sampling means that every member of the target population has a known chance of being included in the sample.
Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .
In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.
Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling, and quota sampling .
In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.
This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.
Sampling bias occurs when some members of a population are systematically more likely to be selected in a sample than others.
Qualitative sampling methods differ from quantitative sampling methods. It is important that one understands those differences, as well as appropriate qualitative sampling techniques. Appropriate sampling choices enhance the rigor of qualitative research studies. These types of sampling strategies are presented, along with the pros and cons of each. Sample size and data saturation are discussed.
Keywords: breastfeeding; qualitative methods; sampling; sampling methods.
Sampling in qualitative research is defined as an initial stage process involving the deliberate selection of individuals or cases from a broader population to participate in a study.
Unlike quantitative research, where the emphasis is often on achieving statistical generalizability, qualitative research seeks to obtain depth and richness of information.
In qualitative research sampling, the focus is not on achieving statistical representation of population but rather on gaining a profound understanding of the subject under investigation. Researchers carefully consider the appropriateness of each sampling method based on the research question, objectives, and the nature of the study population, ensuring alignment with the qualitative approach and the desired richness of data.
Various sampling methods are employed to select participants or cases that can provide meaningful insights and contribute to a rich understanding of the research question. Here, we’ll explore several common types of sampling methods in qualitative research, along with explanations and examples:
Purposeful sampling involves intentionally selecting participants or cases based on specific criteria relevant to the research question. The goal is to gather in-depth information from individuals who can provide rich insights into the phenomenon under investigation. Researchers may use different purposeful sampling strategies, such as maximum variation (selecting diverse cases) or typical case (choosing a representative example).
Example: In a study exploring the experiences of cancer survivors, purposeful sampling might involve selecting participants with a variety of cancer types, treatment histories, and socio-demographic backgrounds to capture diverse perspectives.
Snowball sampling, or chain referral sampling, is used when studying populations that are challenging to reach through traditional methods. The researcher starts with a small number of participants and asks them to refer others who share similar characteristics or experiences. This method is particularly useful for studying hidden populations or subcultures.
Example: When researching illicit drug users, a researcher might start by interviewing a few individuals and then ask them to refer others in their social network who have similar experiences with drug use.
Theoretical sampling is associated with grounded theory methodology. Unlike other sampling methods, theoretical sampling involves an ongoing and iterative process. Sampling decisions are made based on emerging themes and theoretical insights uncovered during data analysis. The goal is to gather data that help develop and refine emerging theories.
Example: In a study exploring the experiences of individuals transitioning between careers, theoretical sampling might involve selecting participants who can provide insights into specific aspects of the transition process as the study progresses.
Quota sampling involves setting specific quotas based on predetermined characteristics such as age, gender, or socio-economic status. The researcher aims to ensure that the sample reflects the diversity present in the larger population. Quota sampling provides a structured way to achieve a balanced sample.
Example: In a study on consumer preferences for a new product, quota sampling might involve ensuring that the sample includes a proportional representation of different age groups and income levels to capture a range of perspectives.
These sampling methods are selected based on the nature of the research question, the goals of the study, and the characteristics of the population under investigation. Researchers often choose a method that aligns with the qualitative approach and allows for the collection of rich, context-specific data.
Convenience sampling involves selecting participants who are readily available and easily accessible to the researcher. This method is often pragmatic and efficient, but it may introduce bias since participants are not chosen based on specific criteria related to the research question. Convenience sampling is common in exploratory or pilot studies.
Example: If a researcher is studying the use of mobile banking apps, they might approach individuals in a public space, such as a coffee shop, and ask them about their experiences with mobile banking for a quick and accessible sample.
Criterion sampling involves selecting participants who meet specific criteria relevant to the research question. The criteria are predetermined and guide the researcher in choosing individuals who possess certain characteristics or have experienced particular events. This method ensures that the sample aligns closely with the study’s objectives.
Example: In a study on the impact of a specific educational intervention, criterion sampling might involve selecting participants who have completed the intervention program, ensuring that the sample includes individuals directly affected by the educational initiative.
Each of these qualitative sampling methods has its advantages and limitations. Researchers carefully consider the appropriateness of the method based on the research question, the study’s objectives, and the characteristics of the population being studied. The goal is to select a sampling strategy that aligns with the qualitative research approach, allowing for a nuanced exploration of the phenomenon under investigation.
Using sampling methods in qualitative research requires thoughtful consideration and adherence to best practices to ensure the study’s validity, reliability, and relevance. Here are some best practices for employing sampling methods in qualitative research:
1. Clearly Define Research Objectives:
Begin by clearly defining the research objectives and the specific goals of the study. This clarity will guide the selection of an appropriate sampling method aligned with the research questions.
2. Select a Sampling Method Aligned with Research Goals:
Choose a sampling method that aligns with the nature of the research question and the study’s objectives. Consider the strengths and limitations of each method, and select the one that best serves the research purpose.
3. Use Multiple Sampling Strategies:
Consider employing multiple sampling strategies within the same study. This can enhance the richness and diversity of the data by capturing various perspectives and experiences related to the research question.
4. Establish Inclusion and Exclusion Criteria:
Clearly define inclusion and exclusion criteria based on the study’s objectives. This helps ensure that participants or cases selected contribute directly to the research question and provide relevant insights.
5. Document Sampling Decisions:
Document the rationale behind sampling decisions, including the criteria used and any adjustments made during the study. Transparent documentation enhances the study’s transparency, replicability, and credibility.
6. Consider Saturation:
Monitor data saturation throughout the study. Once saturation is reached, meaning that additional data yield no new themes or insights, data collection can cease, ensuring that the study has sufficiently explored the research question.
7. Strive for Diversity within the Sample:
Aim for diversity within the sample to capture a range of perspectives. Diversity can include variations in age, gender, socio-economic status, or other relevant characteristics, depending on the research question.
8. Ethical Considerations:
Prioritize ethical considerations in participant selection. Obtain informed consent, safeguard participant confidentiality, and ensure that vulnerable populations are treated with sensitivity and respect.
9. Adapt Sampling Strategies as Needed:
Be open to adapting sampling strategies based on emerging insights. Theoretical sampling, in particular, allows for adjustments in the sampling plan as the study progresses and new themes emerge.
10. Member Checking:
Consider implementing member checking, where preliminary findings are shared with participants to validate or refine the interpretations. This enhances the trustworthiness and credibility of the study.
11. Reflect on Researcher Bias:
Acknowledge and reflect on the potential biases introduced by the researcher during the sampling process. Reflexivity ensures transparency and helps mitigate bias in participant selection and interpretation of data.
By adhering to these best practices, researchers can enhance the rigor and quality of qualitative research. These practices contribute to the trustworthiness of the study and ensure that the selected sampling method aligns effectively with the research objectives.
Last updated 27 February 2023. Reviewed by Cathy Heath.
When researching perceptions or attributes of a product, service, or people, you have two options:
Survey every person in your chosen group (the target market, or population), collate your responses, and reach your conclusions.
Select a smaller group from within your target market and use their answers to represent everyone. This option is sampling .
Sampling saves you time and money. When you use the sampling method, the full list of everyone in the population being studied, from which you draw your sample, is called the sampling frame.
The sample you choose should represent your target market, or the sampling frame, well enough to do one of the following:
Generalize your findings across the sampling frame and use them as though you had surveyed everyone
Use the findings to decide on your next step, which might involve more in-depth sampling
Valery Glivenko and Francesco Cantelli, two mathematicians studying probability theory in the early 1900s, put sampling on a firm theoretical footing. Their result, now known as the Glivenko–Cantelli theorem, showed that a properly chosen random sample of people would reflect the larger group’s status, opinions, decisions, and decision-making steps.
They proved you don't need to survey the entire target market, thereby saving the rest of us a lot of time and money.
We’ve already touched on the fact that sampling saves you time and money. When you get reliable results quickly, you can act on them sooner. And the money you save can pay for something else.
It’s often easier to survey a sample than a whole population. Sample inferences can be more reliable than those you get from a very large group because you can choose your samples carefully and scientifically.
Sampling is also useful because it is often impossible to survey the entire population. You probably have no choice but to collect only a sample in the first place.
Because you’re working with fewer people, you can collect richer data, which makes your research more accurate. You can:
Ask more questions
Go into more detail
Seek opinions instead of just collecting facts
Observe user behaviors
Double-check your findings if you need to
In short, sampling works! Let's take a look at the most common sampling methods.
There are two main sampling methods: probability sampling and non-probability sampling. These can be further refined, which we'll cover shortly. You can then decide which approach best suits your research project.
Probability sampling is used in quantitative research, so it provides data on the survey topic in terms of numbers. Because the answers are counted and analyzed statistically, this approach is called quantitative research. Subjects are asked questions like:
How many boxes of candy do you buy at one time?
How often do you shop for candy?
How much would you pay for a box of candy?
This method is also called random sampling because everyone in the target market has an equal chance of being chosen for the survey. It is designed to reduce sampling error for the most important variables. You should, therefore, get results that fairly reflect the larger population.
In this method, not everyone has an equal chance of being part of the sample. It's usually easier (and cheaper) to select people for the sample group. You choose people who are more likely to be involved in or know more about the topic you’re researching.
Non-probability sampling is used for qualitative research. Qualitative data is generated by questions like:
Where do you usually shop for candy (supermarket, gas station, etc.?)
Which candy brand do you usually buy?
Why do you like that brand?
Here are five ways of doing probability sampling:
Simple random sampling
Systematic sampling
Stratified sampling
Cluster sampling
Multistage sampling
Simple random sampling.
There are three basic steps to simple random sampling:
Choose your sampling frame.
Decide on your sample size. Make sure it is large enough to give you reliable data.
Randomly choose your sample participants.
You could put all their names in a hat, shake the hat to mix the names, and pull out however many names you want in your sample (without looking!)
You could be more scientific by giving each participant a number and then using a random number generator program to choose the numbers.
Systematic sampling.
Instead of choosing names or numbers, you decide beforehand on a selection method. For example, collect all the names in your sampling frame, start at the fifth person on the list, then choose every fourth name or every tenth name. Alternatively, you could choose everyone whose last name begins with randomly selected initials, such as A, G, or W.
Choose your system of selecting names, and away you go.
Stratified sampling.
This is a more sophisticated way to choose your sample. You break the sampling frame down into important subgroups, or strata. Then, decide how many you want in your sample, and choose an equal number (or a proportionate number) from each subgroup.
For example, you want to survey how many people in a geographic area buy candy, so you compile a list of everyone in that area. You then break that list down into, for example, males and females, then into pre-teens, teenagers, young adults, senior citizens, etc. who are male or female.
So, if there are 1,000 young male adults and 2,000 young female adults in the whole sampling frame, you may want to choose 100 males and 200 females to keep the proportions balanced. You then choose the individual survey participants through the systematic sampling method.
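The proportional allocation in that example can be written as a small helper. The stratum names and sizes are taken from the worked example above, with a hypothetical total sample of 300:

```python
def proportional_allocation(strata_sizes, total_n):
    """Allocate total_n across strata in proportion to each stratum's size."""
    population = sum(strata_sizes.values())
    return {name: round(total_n * size / population)
            for name, size in strata_sizes.items()}

# The worked example above: 1,000 young males and 2,000 young females.
alloc = proportional_allocation({"male": 1000, "female": 2000}, total_n=300)
print(alloc)   # {'male': 100, 'female': 200}
```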
Cluster sampling.
This method is used when you want to subdivide a sample into smaller groups or clusters that are geographically or organizationally related.
Let’s say you’re doing quantitative research into candy sales. You could choose your sample participants from urban, suburban, or rural populations. This would give you three geographic clusters from which to select your participants.
Multistage sampling.
This is a more refined way of doing cluster sampling. Let’s say you have your urban cluster, which is your primary sampling unit. You can subdivide this into a secondary sampling unit, say, participants who typically buy their candy in supermarkets. You could then further subdivide this group into your ultimate sampling unit. Finally, you select the actual survey participants from this unit.
Probability sampling has three main advantages:
It helps minimize the likelihood of sampling bias. How you choose your sample determines the quality of your results. Probability sampling gives you an unbiased, randomly selected sample of your target market.
It allows you to create representative samples and subgroups within a sample out of a large or diverse target market.
It lets you use sophisticated statistical methods to select as close to perfect samples as possible.
To recap, with non-probability sampling, you choose people for your sample in a non-random way, so not everyone in your sampling frame has an equal chance of being chosen. Your research findings, therefore, may not be as representative overall as probability sampling, but you may not want them to be.
Sampling bias is not a concern if all potential survey participants share similar traits. For example, you may want to specifically focus on young male adults who spend more than others on candy. In addition, it is usually a cheaper and quicker method because you don't have to work out a complex selection system that represents the entire population in that community.
Researchers do need to carefully consider the strengths and limitations of each method before selecting a sampling technique.
Non-probability sampling is best for exploratory research , such as at the beginning of a research project.
There are five main types of non-probability sampling methods:
Convenience sampling
Purposive sampling
Voluntary response sampling
Snowball sampling
Quota sampling
The strategy of convenience sampling is to choose your sample quickly and efficiently, using the least effort, usually to save money.
Let's say you want to survey the opinions of 100 millennials about a particular topic. You could send out a questionnaire over the social media platforms millennials use. Ask respondents to confirm their birth year at the top of their response sheet and, when you have your 100 responses, begin your analysis. Or you could visit restaurants and bars where millennials spend their evenings and sign people up.
A drawback of convenience sampling is that it may not yield results that apply to a broader population.
Purposive sampling.
This method relies on your judgment to choose the most likely sample to deliver the most useful results. You must know enough about the survey goals and the sampling frame to choose the most appropriate sample respondents.
Your knowledge and experience save you time because you know your ideal sample candidates, so you should get high-quality results.
Voluntary response sampling.
This method is similar to convenience sampling, but it is based on potential sample members volunteering rather than you looking for people.
You make it known you want to do a survey on a particular topic for a particular reason and wait until enough people volunteer. Then you give them the questionnaire or arrange interviews to ask your questions directly.
Snowball sampling involves asking selected participants to refer others who may qualify for the survey. This method is best used when there is no sampling frame available. It is also useful when the researcher doesn’t know much about the target population.
Let's say you want to research a niche topic that involves people who may be difficult to locate. For our candy example, this could be young males who buy a lot of candy, go rock climbing during the day, and watch adventure movies at night. You ask each participant to name others they know who do the same things, so you can contact them. As you make contact with more people, your sample 'snowballs' until you have all the names you need.
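The "snowballing" of contacts is essentially a breadth-first walk through a referral network. A sketch, with a hypothetical network of participants and the people they refer:

```python
from collections import deque

def snowball_sample(seeds, referrals, max_n):
    """Start from seed participants; follow referrals breadth-first until max_n reached."""
    seen, queue = set(seeds), deque(seeds)
    sample = []
    while queue and len(sample) < max_n:
        person = queue.popleft()
        sample.append(person)
        for friend in referrals.get(person, []):
            if friend not in seen:   # avoid contacting the same person twice
                seen.add(friend)
                queue.append(friend)
    return sample

# Hypothetical referral network for a hard-to-reach group.
referrals = {"ann": ["bob", "cam"], "bob": ["dee"], "cam": ["dee", "eli"]}
sample = snowball_sample(["ann"], referrals, max_n=4)
print(sample)   # ['ann', 'bob', 'cam', 'dee']
```

The structure also makes the representativeness problem concrete: everyone in the sample is reachable from the seeds, so people outside the seeds' social circles can never be selected.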
This sampling method involves collecting the specific number of units (quotas) from your predetermined subpopulations. Quota sampling is a way of ensuring that your sample accurately represents the sampling frame.
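As a rough sketch of the quota idea, the code below draws units from a pool until each subgroup's quota is filled. The `quota_sample` function and the cohort labels are illustrative assumptions, not a standard library API.

```python
import random

def quota_sample(population, quotas, key):
    """Draw units from `population` until each subgroup quota is met.
    `quotas` maps a subgroup label to the number of units wanted;
    `key` extracts the subgroup label from a unit."""
    remaining = dict(quotas)
    sample = []
    # Shuffle so units within each subgroup are picked in random order
    for unit in random.sample(population, len(population)):
        group = key(unit)
        if remaining.get(group, 0) > 0:
            sample.append(unit)
            remaining[group] -= 1
        if not any(remaining.values()):   # every quota filled
            break
    return sample

# Example: fill quotas of 2 millennials and 1 boomer from a mixed pool
people = [
    {"name": "A", "cohort": "millennial"},
    {"name": "B", "cohort": "boomer"},
    {"name": "C", "cohort": "millennial"},
    {"name": "D", "cohort": "boomer"},
    {"name": "E", "cohort": "millennial"},
]
chosen = quota_sample(people, {"millennial": 2, "boomer": 1},
                      key=lambda p: p["cohort"])
```

Whatever the shuffle order, the result contains exactly the quota counts per subgroup, which is the property that makes the sample mirror the predetermined subpopulation proportions.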
You can use non-probability sampling when you:
Want to do a quick test to see if a more detailed and sophisticated survey may be worthwhile
Want to explore an idea to see if it 'has legs'
Launch a pilot study
Do some initial qualitative research
Have little time or money available (half a loaf is better than no bread at all)
Want to see if the initial results will help you justify a longer, more detailed, and more expensive research project
Sampling bias can distort or limit your research results, which matters when you generalize those results across the whole target market. The two main causes of sampling bias are faulty research design and poor data collection or recording. Both can affect probability and non-probability sampling.
If a surveyor chooses participants inappropriately, the results will not reflect the population as a whole.
A famous example is the 1936 US presidential race. The Literary Digest surveyed people drawn largely from telephone directories and car registrations to see which candidate had more support. The problem with the research design was that, in 1936, most people with telephones and cars were relatively wealthy, and their opinions differed from those of voters as a whole. The poll predicted Landon would win, but Roosevelt was re-elected in a landslide.
This problem speaks for itself. The survey may be well structured, the sample groups appropriate, the questions clear and easy to understand, and the cluster sizes appropriate. But if surveyors check the wrong boxes when they get an answer or if the entire subgroup results are lost, the survey results will be biased.
To get results you can rely on, you must:
Know enough about your target market
Choose one or more sample surveys to cover the whole target market properly
Choose enough people in each sample so your results mirror your target market
Have content validity . This means your questions must actually measure what you intend and be directly and efficiently worded. If they aren't, the validity of your survey can be questioned, wasting time and money, so make the wording of your questions a top priority.
If using probability sampling, make sure your sampling frame includes everyone it should and that your random sampling selection process includes the right proportion of the subgroups
If using non-probability sampling, focus on fairness, equality, and completeness in identifying your samples and subgroups. Then balance those criteria against simple convenience or other relevant factors.
Self-selection bias. If you mass-mail questionnaires to everyone in the sample, you’re more likely to get results from people with extrovert or activist personalities and not from introverts or pragmatists. So if your convenience sampling focuses on getting your quota responses quickly, it may be skewed.
Non-response bias. Unhappy customers, stressed-out employees, or other sub-groups may not want to cooperate or they may pull out early.
Undercoverage bias. If your survey is done, say, via email or social media platforms, it will miss people without internet access, such as those living in rural areas, the elderly, or lower-income groups.
Survivorship bias. Unsuccessful people are less likely to take part. A researcher may also exclude results that don’t support the overall goal: if the CEO wants to tell shareholders about a successful product or project at the AGM, some less positive survey results may go “missing” (to take an extreme example). The result is that your data paint an overly optimistic picture of the truth.
Pre-screening bias. If the researcher, whose experience and knowledge are being used to pre-select respondents in a judgmental sampling, focuses more on convenience than judgment, the results may be compromised.
To reduce these biases:
Make survey questionnaires as direct, easy, short, and available as possible, so participants are more likely to complete them accurately and send them back
Follow up with the people who have been selected but have not returned their responses
Ignore any pressure that may produce bias
Use the ideas you've gleaned from this article to give yourself a platform, then choose the best method to meet your goals while staying within your time and cost limits.
If it isn't obvious which method you should choose, use this strategy:
Clarify your research goals
Clarify how accurate your research results must be to reach your goals
Evaluate your goals against time and budget
List the two or three most obvious sampling methods that will work for you
Confirm the availability of your resources (researchers, computer time, etc.)
Compare each of the possible methods with your goals, accuracy, precision, resource, time, and cost constraints
Make your decision
Effective market research is the basis of successful marketing, advertising, and future productivity. By selecting the most appropriate sampling methods, you will collect the most useful market data and make the most effective decisions.
International Journal of Behavioral Nutrition and Physical Activity volume 21 , Article number: 94 ( 2024 ) Cite this article
Accurate and feasible assessment of dietary intake remains challenging for research and healthcare. Experience Sampling Methodology (ESM) is a real-time, real-life data-capturing method with low burden and good feasibility that has not yet been fully explored as an alternative dietary assessment method.
This scoping review is the first to explore the implementation of ESM as an alternative to traditional dietary assessment methods by mapping the methodological considerations for applying ESM and formulating recommendations to develop an Experience Sampling-based Dietary Assessment Method (ESDAM). The scoping review methodological framework was followed; PubMed (including OVID) and Web of Science were searched from 2012 until 2024.
Screening of 646 articles resulted in 39 included articles describing 24 studies. ESM was mostly applied for qualitative dietary assessment (i.e. type of consumed foods) ( n = 12), next to semi-quantitative dietary assessment (i.e. frequency of consumption, no portion size) ( n = 7), and quantitative dietary assessment (i.e. type and portion size of consumed foods) ( n = 5). Most studies used ESM to assess the intake of selected foods. Two studies applied ESM as an alternative to traditional dietary assessment methods, assessing total dietary intake quantitatively (i.e. all food groups). ESM duration ranged from 4 to 30 days, and most studies applied ESM for 7 days ( n = 15). Sampling schedules were mostly semi-random ( n = 12) or fixed ( n = 9), with prompts starting at 8–10 AM and ending at 8–12 PM. ESM questionnaires were adapted from existing questionnaires or based on food consumption data or focus group discussions, and response options were mostly presented as multiple-choice. The recall period for reporting dietary intake in ESM prompts varied from 15 min to 3.5 h.
Most studies used ESM for 7 days with fixed or semi-random sampling during waking hours and 2-h recall periods. An ESDAM can be developed starting from a food record approach (actual intake) or a validated food frequency questionnaire (long-term or habitual intake). Actual dietary intake can be measured by ESM through short, intensive, fixed sampling schedules, while measuring habitual dietary intake by ESM allows for longer, less frequent, semi-random sampling schedules. ESM sampling protocols should be developed carefully to optimize the feasibility and accuracy of dietary data.
Research on health and nutrition relies on accurate assessment of dietary intake [ 1 ]. However, dietary intake is a complex exposure variable with high inter- and intra-individual variability, consisting of components ranging from micronutrients, macronutrients, food groups, and meals to the dietary pattern as a whole. Therefore, measuring dietary intake accurately and feasibly is challenging for both researchers and healthcare professionals [ 2 , 3 , 4 ]. Only a few established nutritional biomarkers are available and, therefore, no objective method exists to reflect true dietary intake or the dietary pattern as a whole in epidemiological research [ 2 , 3 ]. Instead, most dietary assessment methods rely on self-report. Food records, referred to as the “gold standard”, together with 24-h dietary recalls provide the most detailed dietary data, while Food Frequency Questionnaires (FFQs) reflect habitual (i.e. long-term usual) dietary intake, which is the variable of interest in most diet-disease research [ 4 , 5 , 6 ]. Food records, 24-h dietary recalls, and FFQs have known limitations and challenges, including recall bias, social-desirability bias, misreporting, and burdensomeness, contributing to inherent measurement error in dietary intake data [ 2 , 6 ]. A review by Kirkpatrick et al . showed that feasibility, including cost-effectiveness and ease of use, rather than appropriateness for the study design and purpose, is the main determinant for researchers selecting a dietary assessment method, at the expense of data quality and accuracy [ 7 ]. To advance nutritional research and enhance the quality of dietary data, exploring the implementation of new methodologies is warranted to improve feasibility and overcome the limitations of current dietary assessment methods.
Experience Sampling Methodology (ESM), an umbrella term including Ecological Momentary Assessment (EMA), ambulatory assessment, and the structured diary method, refers to intensive longitudinal assessment and real-time data-capturing methods [ 8 ]. Participants are asked to respond to short questions sent through smartphone prompt messages or beeps at random moments during the day to assess experiences or behaviors and moment-to-moment changes in daily life [ 9 ]. Originating from the field of psychology and behavioral sciences, ESM typically assesses current mood, cognitions, perceptions, or behaviors and descriptors of the momentary context (i.e. location, company) [ 9 ]. Usually, assessments are collected in a random time sampling protocol; yet assessments can also be triggered by an event (event-contingent sampling), at fixed time points, or randomly within fixed time intervals (semi-random). ESM questionnaires are usually designed to be completed in under 2 min and consist of open-ended questions, visual analogue scales, checklists, or self-report Likert scales. Several ESM survey applications (i.e. m-Path, PsyMate, PocketQ) are currently available in which the sampling protocol and questionnaires can be customized to the study design and aim [ 10 , 11 ]. ESM has been shown to reduce recall bias, reactivity bias, and misreporting in psychology and behavioral research through its design of unannounced, rapid, real-life, real-time repeated assessments [ 12 ]. For this reason, Experience Sampling might be an interesting new methodology to explore as an alternative dietary assessment methodology. The design of ESM could overcome the recall bias, reactivity bias, social-desirability bias, and misreporting seen in traditional dietary assessment methods. However, the application of ESM for dietary assessment is new. Defining and balancing ESM methodological considerations (i.e. study duration, frequency and timing of sampling (signaling technique), and formulation of questions and answer options) is a delicate matter and crucial in balancing feasibility with data accuracy [ 13 ].
The application of ESM in the field of dietary assessment has not been fully explored yet. Schembre et al . reviewed ESM for dietary behavior for the first time [ 12 ]. However, it has not yet been assessed how ESM could be implemented as an alternative dietary assessment method aiming to estimate daily energy, nutrient, and food group intake quantitatively.
Therefore, this scoping review investigates how Experience Sampling Methodology can be implemented to develop an Experience Sampling-based dietary assessment method as an alternative to traditional dietary assessment methods to measure daily energy, nutrient, and food group intake quantitatively. This review aims to map ESM sampling protocols and questionnaire designs used to assess dietary intake. Additionally, the findings of this review will be combined with best practices to develop ESMs and dietary assessment methods to formulate key recommendations for the development of an Experience Sampling-based Dietary Assessment Method (ESDAM). The following questions will be answered:
How is ESM applied in the literature to assess dietary intake, focusing on methodological considerations (i.e. development and formulation of questions and answers, selection and consideration of the prompting schedule (timing and frequency))?
How can ESM specifically be applied for quantitative assessment of total dietary intake (i.e. as an alternative to traditional dietary assessment methods)?
This scoping review followed the methodological framework for scoping reviews of Arksey and O’Malley, which was further developed by Levac et al. [ 14 , 15 ]. A scoping review approach was chosen to explore and map the design aspects and considerations for developing experience sampling methods to assess dietary intake as an alternative to traditional dietary assessment methods, an application which is novel. Moreover, this review formulates design recommendations for applying ESM as a dietary assessment method and will serve as a starting point to develop an ESDAM. An a priori protocol was developed based on the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) and the Joanna Briggs Institute Scoping Review protocol template (Supplementary Material) [ 16 , 17 ]. According to the Arksey and O’Malley methodological framework, the iterative nature of scoping reviews may include further refinement of the search strategy and the inclusion and exclusion criteria during the initial review process due to the unknown breadth of the topic [ 14 ]. Therefore, adaptations made to the methodology described in the a priori protocol based on initial searches are described below. This scoping review was reported according to the PRISMA extension for scoping reviews (PRISMA-ScR) [ 18 ].
The search strategy was developed based on key words and MeSH terms for “dietary assessment” and “experience sampling” (Supplementary Material). The term “ecological momentary assessment” was included as a synonym of ESM. The electronic databases PubMed (including MEDLINE) and Web of Science were searched for relevant literature published between January 2012 and February 9th, 2024. The year 2012 was chosen as the lower limit for inclusion since this review focuses on the use of ESM through digital tools (i.e. smartphones, web-based or mobile applications), which has emerged especially since the introduction of smartphone applications around 2008. The time frame of this review is therefore focused on literature published in the last 12 years. The reference lists of all included articles were screened for additional studies.
The initial search strategy described in the protocol was developed based on the assumption that research using ESM as an alternative to traditional dietary assessment methods was limited. Therefore, initially, research using ESM in the broader field of health research was included to obtain more evidence on the methodological considerations of applying ESM. In line with the Arksey and O’Malley methodological framework, inclusion criteria were adapted following initial searches, with discussion and consensus between the reviewer (JV) and principal investigator (CM). The inclusion criteria were narrowed to research applying ESM to measure dietary intake quantitatively or qualitatively, since literature was also available in the field of dietary behaviour in relation to contextual factors (Table 1 ). Studies measuring dietary behaviour only (i.e. cravings, hunger, eating disorder behaviour, dietary lapses), without assessing dietary intake, were excluded. Event-based ESM as a dietary assessment method was excluded since this was deemed a methodology similar to the food record and, therefore, did not serve the purpose of this review: to explore a new methodology for dietary assessment that overcomes the limitations of traditional dietary assessment methods. All inclusion and exclusion criteria are presented in Table 1 .
All records were exported and uploaded into the review software Rayyan. Duplicates were identified through the software, followed by the reviewer's manual screening to confirm and remove duplicates. One reviewer (JV) screened the retrieved articles first by title and abstract, followed by a full-text screening [ 19 , 20 , 21 ]. In case of hesitancy about the inclusion of articles, the reviewer (JV) consulted the principal investigator (CM) to reach consensus. In line with established scoping review methods, methodological quality assessment was not performed [ 14 , 18 ]. Since this review aims to shed light on the design aspects and considerations of ESM and thus focuses on the application of the methodology used in the articles rather than the study outcomes, quality assessment was not considered relevant for this purpose.
Data were extracted into an Excel table describing the authors, title, year of publication, signalling technique, timing of prompts, study duration, dietary variables measured, answer window, (formulation of) questions, response options, notification method, indication of qualitative or quantitative dietary assessment, delivery method, population, and study name. All data were described qualitatively. Studies applying ESM for dietary assessment were categorized in separate tables for ESM used for qualitative dietary assessment (i.e. assessment of type of foods consumed without portion size, not allowing estimation of nutrient intake), ESM used for semi-quantitative dietary assessment (i.e. assessment of type of foods or frequency of consumption of foods, not allowing estimation of nutrient intake), and ESM used for quantitative dietary assessment (i.e. assessment of type of foods consumed and portion size, allowing estimation of nutrient intake).
The electronic databases search resulted in 701 articles of which 55 duplicates were identified and removed. Next, 646 articles were screened by title and abstract of which 591 were excluded according to the exclusion criteria (Fig. 1 ). The remaining 55 articles were screened by full text. After exclusion of 16 articles following full text screening, 39 articles were selected for inclusion (Table 2 ). The included articles describe 24 individual studies of which the Mother’s and Their Children’s Health (MATCH) study was described most frequently ( n = 12, 25%). Most studies were published in 2018 ( n = 7), followed by 2020 ( n = 6) and 2022 ( n = 6). Students, including both high school and higher education students, were the study population in most EMA or ESM studies included ( n = 10, 43%). Two studies applied the ESM methodology to assess dietary behaviour including dietary variables of children with mothers as proxy. Five studies referred to their methodology using the terminology ‘ESM’ while the other studies used ‘EMA’ as terminology.
PRISMA flow diagram of the screening and selection process
Dietary variables measured through ESM
Most studies assessed consumption of specific foods only [ 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 42 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 ]. Tables 2 , 3 and 4 provide an overview of the included studies described in the manuscripts, with a description of specific ESM methodology characteristics for qualitative, semi-quantitative and quantitative dietary assessment, respectively. Four studies used ESM to assess snack consumption [ 45 , 46 , 47 , 48 , 49 , 50 , 51 ]. Four studies focused on snack and sugar-sweetened beverage (SSB) consumption only [ 22 , 36 , 44 , 52 , 53 ]. Piontak et al . applied ESM to assess unhealthy food consumption, including fast food, caffeinated drinks, and not consuming any fruit or vegetables [ 35 ]. Two studies focused on palatable food consumption, of which the study of Cummings et al . assessed palatable food consumption together with highly processed food intake [ 37 , 54 ]. Lin et al . applied ESM to measure empty-calorie food and beverage consumption, while Boronat et al . assessed Mediterranean diet food consumption [ 39 , 55 ]. Two studies assessed the occurrence of food consumption only, without assessing the type of foods consumed [ 40 , 41 ]. The study of de Rivaz et al . assessed the largest type of meal consumed in between signals [ 56 ]. Three studies aimed to assess total dietary intake, of which the study of Lucassen et al . evaluated approaches to assess both actual and habitual dietary intake using ESM [ 43 , 57 , 58 , 59 ].
As shown in Table 2 , twelve studies performed qualitative dietary assessment (i.e. assessing the type of foods consumed without quantification). Seven studies performed semi-quantitative dietary assessment (i.e. assessing the frequency of meals/eating occasions or the number of servings of food categories, not allowing nutrient calculation) [ 44 , 49 , 50 , 52 , 53 , 54 , 55 , 56 ] (Table 3 ). Quantitative dietary assessment, in line with the aim of traditional dietary assessment methods (i.e. assessment of both type and quantity of foods consumed, allowing estimation of nutrient intake), was performed in four studies, of which Wouters et al . and Richard et al . assessed snack intake only while Jeffers et al . and Lucassen et al . assessed overall dietary intake (i.e. all food groups) [ 45 , 46 , 47 , 48 , 51 , 57 , 58 ] (Table 4 ).
The study duration of ESM dietary assessment varied from four to thirty days; most studies ( n = 15) applied ESM dietary assessment for seven days. The study of Piontak et al . had the longest duration, with 30 days of ESM assessment [ 35 ]. The semi-random sampling scheme (i.e. random sampling within multiple fixed time intervals) was applied most frequently ( n = 12), followed by the fixed sampling scheme (i.e. sampling at fixed times) ( n = 9). Random sampling (i.e. completely random sampling) was chosen in three studies [ 34 , 36 , 55 ]. A mixed sampling approach was applied in three studies, of which Lucassen et al . tested and compared both a fixed sampling and a semi-random sampling approach to assess overall dietary intake [ 22 , 42 , 57 , 59 ]. Two studies applied different sampling schemes during the weekend compared to weekdays [ 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 ]. Sampling time windows were adapted to the daily structure of the study population, i.e. the shifts of shift workers, the school hours of students, or (self-reported) waking hours (Table 2 ). The sampling time window of the included studies started between 6 and 10 AM and ended between 8 PM and midnight. One study applied a 24-h sampling time window since the study population consisted of nurses working in shifts [ 39 ].
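To make the semi-random scheme concrete (random sampling within multiple fixed time intervals), the sketch below generates one day of prompt times. This is an illustrative sketch only; the `semi_random_schedule` function, window length, and waking hours are assumptions for the example, not the scheduler of any ESM application named in this review.

```python
import random

def semi_random_schedule(start_hour, end_hour, window_hours, seed=None):
    """One day of semi-random ESM prompts: split the waking hours into
    fixed windows and draw one random prompt time (in minutes since
    midnight) within each window."""
    rng = random.Random(seed)
    prompts = []
    for window_start in range(start_hour, end_hour, window_hours):
        window_end = min(window_start + window_hours, end_hour)
        minute = rng.randrange(window_start * 60, window_end * 60)
        prompts.append(minute)
    return prompts

# Waking hours 7 AM-10 PM with 3-hour windows -> five prompts per day,
# one random moment inside each window
day = semi_random_schedule(7, 22, 3, seed=42)
```

Setting `window_hours` equal to the recall period asked about in each prompt (e.g. 2- or 3-hour windows with a "since the last signal" question) makes the windows contiguous, so no eating occasion falls outside every sampled interval.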
Different types of questions and phrasings can be identified in the studies using ESM for dietary assessment. Two studies used indirect phrasing (e.g. ‘What were you doing?’) followed by multiple-choice answer options including, for example, physical activity, eating, or rest [ 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 ]. Seven studies used direct phrasing (e.g. ‘Did you eat?’), applied both in real-time prompts (e.g. ‘Were you eating or drinking anything – in this moment?’) and in retrospective prompts (e.g. ‘Did you eat anything since the last signal?’), without specifying particular foods [ 22 , 38 , 40 , 41 , 45 , 46 , 47 , 48 , 56 , 58 ]. Thirteen studies used direct and specific phrasing regarding the consumption of specified foods (e.g. ‘Did you eat any snacks or sugar-sweetened beverages since the last signal?’) [ 35 , 36 , 37 , 39 , 43 , 44 , 50 , 51 , 52 , 53 , 54 , 55 , 57 ]. The time period covered by retrospective prompts with direct phrasing varied. Ten studies assessed consumption since the last signal, three studies assessed the past 2 h, and one study each assessed the preceding 15 min, 1 h, 2.5 h, 3 h, and 3.5 h, respectively [ 41 , 42 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 56 ]. The MATCH study used two different retrospective time periods: the first prompt of the day asked participants to report intake since waking up, and the following prompts covered the last 2 h [ 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 ]. Forman et al . used prompts that asked participants to report snack intake between the last prompt of the previous day and falling asleep, and between waking up and receiving the first prompt [ 49 ]. The study of Bruening et al . combined real-time prompts, to report what participants were doing the moment before receiving the prompt, and retrospective prompts, to report what they were doing the past 3 h [ 34 ].
Binary (i.e. yes or no) response options were provided in eleven studies, followed by an open field, a built-in search function, or multiple-choice bullets to specify the type of food or drinks consumed in five studies [ 22 , 35 , 37 , 38 , 40 , 41 , 42 , 45 , 46 , 47 , 48 , 52 , 53 , 56 , 58 ]. Food lists shown as response options to indicate food consumption were based on national health surveys, validated Food Frequency Questionnaires, other validated questionnaires, the national food composition database, or results from focus group discussions. Eight studies asked participants to indicate the quantities of the foods consumed by open field (i.e. in grams or milliliters), Visual Analogue Scale (VAS) sliders (i.e. from zero to 100), or multiple-choice options (i.e. small, medium, large) [ 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 54 , 56 , 57 ].
This review reveals that ESM has been applied to assess dietary intake in various research settings using different design approaches. However, most studies assessed the consumption of specific foods only, focusing on the foods of interest related to the research question. Snack consumption and, in general, unhealthy foods were the foods for which ESM was most often used. Due to its momentary nature, ESM may be especially suitable for measuring these specific foods, which are often (unconsciously) missed or underreported with traditional dietary assessment methods. Findings from our review show that ESM applied to assess dietary intake shows features of both 24-h dietary recalls (24HRs) and food frequency questionnaires (FFQs). Aside from the recall-based reporting and multiple-choice assessment of specific foods, found in 24HRs and FFQs respectively, ESM is a new methodology compared to traditional dietary assessment methods. ESM also lends itself well to assessing total dietary intake quantitatively, although our review shows this application remains less explored. Moreover, most studies using ESM for dietary assessment were behavioral science research (i.e. psychological aspects of eating behavior), which highlights the novelty of, and need for, ESM specifically designed for dietary assessment and research on diet-health associations.
The implementation of ESM will differ depending on which health behavior is being measured and in which research field it is being applied [ 13 , 60 ]. This section describes recommendations for the methodological implementation of ESM as an alternative dietary assessment methodology to measure total dietary intake quantitatively, based on the findings of this review, the recommendations of the open handbook for ESM by Myin-Germeys et al., and practices in traditional dietary assessment development [ 13 ].
All ESM study characteristics (study duration, sampling frequency, timing, recall period) are interrelated and cannot be evaluated individually.
ESM study duration (i.e. number of days) and sampling frequency (i.e. number of prompts per day) should be reconciled and should be inversely adapted to one another (i.e. short study duration allows for higher sampling frequency per day and vice versa) to maintain low burden and good feasibility.
Our review showed that an ESM study duration of 7 days is most common; however, reporting fatigue may arise from day 4 onwards in the case of a high sampling frequency (i.e. fixed sampling every 2 h), similar to what is experienced with food records [ 61 ].
The frequency and timing of ESM prompts should be adapted to waking hours covering the typical eating episodes of the target study population. Typically, studies used waking hours from around 7 AM to 10 PM; however, a short preliminary survey can identify the actual waking hours of the target study population and allow the schedule to be adapted accordingly.
Waking hours, and consequently sampling frequency, could differ on weekend days (i.e. more frequent prompts, longer waking hours), as seen in some studies in our review. Short recall periods (i.e. the last hours or the previous day) are suggested to be better than longer recalls of weeks or months [ 62 ]. Obtaining more accurate dietary intake data with lower recall bias and social-desirability bias (by reducing the awareness of being measured) requires short recall periods of 1 to 3.5 h, with a 2-h recall most commonly applied, as demonstrated by our review. In this way, ESM allows for near real-time measurement of dietary intake.
Furthermore, study duration, sampling frequency, and timing should be adapted depending on whether the aim is to measure actual dietary intake or habitual dietary intake.
Measuring actual dietary intake using an intensive prompting schedule can only be performed for short periods, preferably three to four days, due to the risk of response fatigue, as similarly seen with food records. As demonstrated by Lucassen et al., actual intake can be measured by ESM using a fixed sampling approach that samples every time window during waking hours (i.e. sampling every 2 h between 7 AM and 10 PM on dietary intake during the past 2 h) [ 58 ].
Habitual dietary intake can be measured by ESM using a semi-random sampling approach that samples every time window during waking hours multiple times over a longer period (i.e. sampling three time windows per day on dietary intake during the past 2 h, for two weeks, until every time window has been sampled three times) [ 58 ]. Measuring habitual dietary intake by ESM with a less intensive sampling frequency allows for a longer study duration (i.e. multiple weeks). Lastly, a combination of fixed and (semi-)random sampling schedules can be applied. Both when measuring actual and habitual dietary intake, it is recommended to compose a sampling schedule with time windows covering all waking hours to ensure all eating occasions can be sampled [ 12 ]. Additionally, the sampling schedule should cover weekend days next to weekdays to capture the variability in dietary intake. Moreover, to capture this variability, several waves of ESM measurement periods could be implemented, alternated with no-measurement periods. On the other hand, the application of multiple waves is associated with higher dropout rates, especially with increased time in between waves [ 13 ].
In conclusion, the ESM signaling technique, frequency, timing, recall period and sampling duration should be carefully adapted to one another to ensure accurate dietary intake data, low burden and optimal feasibility. As recommended by Myin-Germeys et al., a pilot study allows all ESM design characteristics to be evaluated so that data quality is optimized while the protocol remains feasible [ 13 ].
Questionnaires for ESM should be carefully developed and require methodological rigor [ 63 ]. As stated by Myin-Germeys et al., there are currently no specific guidelines on how to develop questionnaires for ESM [ 63 ]. However, according to our review, most studies adapt existing questionnaires for use in ESM research. Still, few studies in our review describe which adaptations were made, or how, to fit the ESM format. First, a timeframe should be chosen for the question to reflect on. Although ESM ideally consists of questions on momentary variables, this is less suitable for measuring dietary intake. As dietary intake does not take place continuously, momentary questions (i.e. 'What are you eating at this moment?') would lead to a large amount of missing data and, consequently, large measurement error in daily dietary intake estimates. Instead, time intervals lend themselves better to assessing dietary intake with ESM. The time interval the question reflects on should be clearly stated (i.e. 'What did you eat during the last two hours?'). As mentioned previously, in the case of an interval-contingent (semi-random) ESM approach, composing contiguous time intervals that cover the complete waking-hour time frame (i.e. waking hours between 7 AM and 10 PM with semi-random sampling in two-hour intervals) is recommended to reduce the risk of missing eating occasions [ 12 ]. Following this approach, it is most feasible to match the time frame the question reflects on to the time intervals of the prompts (i.e. semi-random sampling in two-hour intervals with the question 'What did you eat since the last signal?'). The time frame should be chosen based on expected dietary intake events (i.e. every two or three hours) and depends on the dietary habits of the target population, which are culture specific. Myin-Germeys et al. recommend keeping questions short and to the point, so they fit the screen of a mobile device and allow a quick response [ 63 ]. Furthermore, implicit assessments (i.e. 'Have you eaten since the last signal?') are recommended over explicit assessments (i.e. 'Did you eat fast food since the last signal?') to inhibit reactivity bias. Questionnaire length is also important to consider: a completion time of at most three minutes is recommended to keep the burden low [ 63 ]. Although questionnaires of up to 30 items are accepted in traditional ESM research, in the field of dietary assessment this would be equivalent to a short FFQ and can be considered too burdensome when presented in full at every prompt, reducing compliance. Moreover, ESM research in psychology, the field where the method originated, most often uses scales (i.e. Likert scales, visual scales) as response options. Unlike many psychological variables (i.e. mood, emotions), dietary intake can be assessed quantitatively and precisely, which allows for more specific response options.
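To make these questionnaire recommendations concrete, the sketch below builds a short prompt whose recall window matches the sampling interval and opens with an implicit gate question. The item texts, field names and response types are illustrative assumptions, not taken from any reviewed study:

```python
from dataclasses import dataclass

@dataclass
class EsmItem:
    text: str
    response_type: str  # e.g. "yes_no", "open_text", "quantity"

def build_prompt(interval_hours: int = 2) -> list:
    """Sketch of a short ESM dietary prompt: an implicit gate question
    first (no specific foods named, to limit reactivity bias), then
    brief follow-ups scoped to the same interval as the prompting
    schedule."""
    return [
        EsmItem("Have you eaten or drunk anything since the last signal?",
                "yes_no"),
        EsmItem("What did you eat or drink since the last signal?",
                "open_text"),
        EsmItem(f"How much of each item did you consume in the past "
                f"{interval_hours} h?", "quantity"),
    ]

items = build_prompt()
```

Keeping the prompt to a handful of items respects the recommended three-minute completion ceiling, and the quantity item illustrates how dietary intake admits precise response options rather than scales.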
Questions and response options for ESM dietary assessment could be adapted from existing questionnaires, as demonstrated in the studies in our review. In the field of dietary assessment, ESM could therefore be applied to validated dietary assessment questionnaires such as validated Food Frequency Questionnaires (FFQs) or (web-based) food records, as proposed in Fig. 2 .
Recommendations to implement experience sampling for actual and habitual dietary assessment
Starting from the food record approach, a general open question (i.e. 'Did you eat anything since the last signal?') could be followed by a question specifying the consumed foods, via an open text field or via food groups originating from a national food consumption database. Portion sizes of consumed foods could be reported in an open text field with standard units (i.e. milliliters, grams) or common household measures (i.e. tablespoons, glasses).
Starting from the FFQ approach, ESM questionnaires could be designed by regrouping the food groups assessed in FFQs into a limited number and reformulating the questions to assess dietary intake in near real time. Consumption of all food groups could be assessed at each prompt, or a different set of food groups could be assessed at each prompt. In the latter case, the study needs to be designed so that consumption of each food group is assessed multiple times, to compensate for unanswered prompts with missing data. Moreover, the ordering of questions needs to be considered, as consumption of specific food groups may need to be assessed at the same prompt to reduce ambiguity (i.e. fried food consumption needs to be assessed before fast food consumption to avoid response overlap). Asking the same set of questions at each prompt may feel repetitive but might reduce burden [ 63 ]. A control question can be added to detect careless responding.
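One way to realize the "different set of food groups per prompt" design is to cycle subsets of groups over prompts, so that every group recurs evenly and redundancy compensates for unanswered prompts. A minimal sketch follows; the food-group names are illustrative, not the review's taxonomy. Groups prone to response overlap (fried food, fast food) are kept adjacent in the master list so that, with a set size that divides the list evenly, they always land in the same prompt:

```python
import itertools

FOOD_GROUPS = ["fruit", "vegetables", "grains", "dairy",
               "meat/fish", "snacks", "fried food", "fast food"]

def rotating_sets(groups, set_size, n_prompts):
    """Assign the next `set_size` food groups to each prompt, cycling
    through the list so every group is asked repeatedly across the
    study (redundancy against missed prompts)."""
    cycle = itertools.cycle(groups)
    return [[next(cycle) for _ in range(set_size)] for _ in range(n_prompts)]

sets = rotating_sets(FOOD_GROUPS, set_size=4, n_prompts=6)
# 6 prompts x 4 questions = 24 question slots over 8 groups,
# so each food group is assessed exactly three times.
```

A per-group coverage count like this can be checked at design time to verify that each group would still be observed often enough under a plausible non-response rate.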
Most studies used ESM to measure food consumption qualitatively (i.e. types of foods consumed) or semi-quantitatively (i.e. frequency of consumption of specific foods), as opposed to quantitatively (i.e. type and quantity of foods consumed), which would be needed to serve the same purpose as traditional dietary assessment methods. Questions were most often formulated directly, asking about consumption of specific foods since the last signal. Answers were most often binary (i.e. yes/no, indicating consumption of specific foods since the last signal), combined with options to specify the type, frequency or amount of foods consumed. Only the studies of Jeffers et al. and Lucassen et al. applied ESM to measure total dietary intake quantitatively, of which Lucassen et al. specifically evaluated ESM as an alternative dietary assessment methodology [ 57 , 58 ].
Although both event-contingent and signal-contingent approaches are used for dietary assessment, signal-contingent ESM approaches might provide promising opportunities to overcome the limitations and biases of traditional dietary assessment methods [ 12 ]. Near real-time data collection combined with (semi-)random sampling shows potential to reduce participant burden, both through its low registration intensity and through its shorter questions with simple response options. Moreover, the (semi-)random sampling technique might make participants less aware of being measured, possibly lowering social desirability bias and, together with the short recall period, yielding more accurate data. In combination with modern technology such as mobile applications, feasibility could be enhanced as well. Adapting questions and response options from either a validated FFQ or a food record allows relatively easy implementation of ESM as an alternative dietary assessment method for total dietary intake (i.e. all food groups). However, validity and reliability need to be evaluated in the target population, as for traditional dietary assessment methods.
The systematic review and meta-analysis by Perski et al. reviewed the use of ESM to assess five key health behaviors, including dietary behavior [ 60 ]. Similar to our findings, all four studies described by Perski et al. assessed intake of specific foods only, rather than the total dietary pattern (i.e. all food groups). Moreover, Perski et al. also included event-contingent sampling approaches (i.e. registering dietary intake as it occurs). As highlighted by Schembre et al., event-contingent sampling entails limitations and biases, such as social desirability bias and burden, similar to those of traditional dietary assessment methods [ 27 ]. This is not surprising, as event-contingent sampling can be seen as an approach akin to the traditional food record; it therefore does not serve the purpose of this review, which is to define a new methodology that overcomes the limitations of current traditional dietary assessment methods. Similarly, photo-based methodologies (i.e. using images as a food diary with event-based sampling) are unlikely to overcome these limitations, due to the large measurement error in estimating portion sizes and types of foods, and were therefore excluded from our review [ 3 ]. Most importantly, the four included studies on dietary behavior in the meta-analysis of Perski et al. lacked specific details on ESM design characteristics or the methodological implications of ESM as an alternative dietary assessment method. Still, the potential of ESM to obtain more accurate and reliable dietary data is highlighted, together with the need for proper validation.
Altogether, the lack of detail on important methodological aspects of ESM hinders drawing conclusions on common practices for implementing ESM for quantitative dietary assessment. Nevertheless, Perski et al. emphasize the need for more elaboration on methodological aspects in order to provide a summary of best practices for implementing ESM for specific health behaviors, including dietary behavior [ 60 ]. Our scoping review meets this need with key methodological recommendations for developing an experience sampling dietary assessment method for total dietary intake, next to an elaboration of commonly applied ESM design characteristics.
An important limitation of this scoping review, inherent to scoping reviews, is its less rigorous search strategy and screening process. This may have resulted in an incomplete overview of studies describing ESM for dietary assessment. Still, the aim of this review is not to assess study outcomes but to evaluate methodologically how ESM can be applied for dietary assessment. Its strength therefore lies in the assessment and description of ESM approaches specifically, providing insight into their use for quantitative dietary assessment as an alternative to the traditional dietary assessment methods. To our knowledge, this has previously been done only by Schembre et al. [ 12 ], and our scoping review is the first to describe practical recommendations for developing an ESM for total dietary assessment (i.e. all food groups). Additionally, only two studies were identified that applied ESM for total dietary assessment. Consequently, limited evidence-based information was available in the literature on the development of ESM characteristics (prompting schedule, duration, questionnaire design) for quantitative assessment of total dietary intake. Nevertheless, studies on qualitative and semi-quantitative dietary assessment using ESM were described and form, together with the guidelines of Myin-Germeys et al., the basis of practical guidelines for designing an ESM protocol for quantitative assessment of total dietary intake.
This review shows that ESM is increasingly applied in research to measure dietary intake. However, few studies have applied ESM to assess total dietary intake quantitatively with the same purpose as traditional dietary assessment methods. Still, the methodological characteristics of ESM show promising possibilities for overcoming the limitations of classic dietary assessment methods. This paper provides guidance and a starting point for the development of an Experience Sampling Dietary Assessment Method to assess total dietary intake quantitatively, based on recent literature and theoretical background. Thorough evaluation and validation studies are needed to test the full potential of ESM as a feasible and accurate alternative to traditional dietary assessment methods.
The data that support the findings of this manuscript are available from the corresponding author upon reasonable request. The review protocol can be downloaded at: KU Leuven repository.
Experience Sampling-based Dietary Assessment Method
Experience Sampling Method
Mother’s and Their Children’s Health
Preferred Reporting Items for Systematic review and Meta-Analysis Protocols
Preferred Reporting Items for Systematic review and Meta-Analysis extension for scoping reviews
Sugar Sweetened Beverages
Visual Analog Scale
Hebert JR, Hurley TG, Steck SE, Miller DR, Tabung FK, Peterson KE, et al. Considering the value of dietary assessment data in informing nutrition-related health policy. Adv Nutr. 2014;5(4):447–55.
Liang S, Nasir RF, Bell-Anderson KS, Toniutti CA, O’Leary FM, Skilton MR. Biomarkers of dietary patterns: a systematic review of randomized controlled trials. Nutr Rev. 2022;80(8):1856–95.
Bingham S, Carroll RJ, Day NE, Ferrari P, Freedman L, Kipnis V, et al. Bias in dietary-report instruments and its implications for nutritional epidemiology. Public Health Nutr. 2002;5(6a):915–23.
Kirkpatrick SI, Baranowski T, Subar AF, Tooze JA, Frongillo EA. Best Practices for Conducting and Interpreting Studies to Validate Self-Report Dietary Assessment Methods. J Acad Nutr Diet. 2019;119(11):1801–16.
Bennett DA, Landry D, Little J, Minelli C. Systematic review of statistical approaches to quantify, or correct for, measurement error in a continuous exposure in nutritional epidemiology. BMC Med Res Methodol. 2017;17(1):146.
Satija A, Yu E, Willett WC, Hu FB. Understanding nutritional epidemiology and its role in policy. Adv Nutr. 2015;6(1):5–18.
Kirkpatrick SI, Reedy J, Butler EN, Dodd KW, Subar AF, Thompson FE, et al. Dietary assessment in food environment research: a systematic review. Am J Prev Med. 2014;46(1):94–102.
The Science of Real-Time Data Capture: Self-Reports in Health Research: Oxford University Press; 2007. Available from: https://doi.org/10.1093/oso/9780195178715.001.0001 .
Verhagen SJ, Hasmi L, Drukker M, van Os J, Delespaul PA. Use of the experience sampling method in the context of clinical trials. Evid Based Ment Health. 2016;19(3):86–9.
Csikszentmihalyi M. Handbook of research methods for studying daily life: Guilford Press; 2011.
Mestdagh M, Verdonck S, Piot M, Niemeijer K, Kilani G, Tuerlinckx F, et al. m-Path: an easy-to-use and highly tailorable platform for ecological momentary assessment and intervention in behavioral research and clinical practice. Front Digit Health. 2023;5:1182175.
Schembre SM, Liao Y, O’Connor SG, Hingle MD, Shen SE, Hamoy KG, et al. Mobile Ecological Momentary Diet Assessment Methods for Behavioral Research: Systematic Review. JMIR Mhealth Uhealth. 2018;6(11): e11170.
Dejonckheere E, Erbas, Y. Designing an experience sampling study. In: Myin-Germeys I, Kuppens, P., editor. The open handbook of experience sampling methodology: A step-by-step guide to designing, conducting, and analyzing ESM studies: Center for Research on Experience Sampling and Ambulatory Methods Leuven; 2021. p. 33–70.
Arksey H, O’Malley L. Scoping Studies: Towards a Methodological Framework. International Journal of Social Research Methodology: Theory & Practice. 2005;8:19–32.
Levac D, Colquhoun H, O’Brien KK. Scoping studies: advancing the methodology. Implement Sci. 2010;5(1):69.
Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1):1.
JBI. Scoping Review Network resources [cited 2022 October 28]. Available from: https://jbi.global/scoping-review-network/resources .
Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann Intern Med. 2018;169(7):467–73.
Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev. 2012;1(1):10.
Khangura S, Polisena J, Clifford TJ, Farrah K, Kamel C. Rapid review: an emerging approach to evidence synthesis in health technology assessment. Int J Technol Assess Health Care. 2014;30(1):20–7.
Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5(1):56.
Grenard JL, Stacy AW, Shiffman S, Baraldi AN, MacKinnon DP, Lockhart G, et al. Sweetened drink and snacking cues in adolescents: a study using ecological momentary assessment. Appetite. 2013;67:61–73.
Dunton GF, Dzubur E, Huh J, Belcher BR, Maher JP, O’Connor S, et al. Daily Associations of Stress and Eating in Mother-Child Dyads. Health Educ Behav. 2017;44(3):365–9.
Dunton GF, Liao Y, Dzubur E, Leventhal AM, Huh J, Gruenewald T, et al. Investigating within-day and longitudinal effects of maternal stress on children’s physical activity, dietary intake, and body composition: Protocol for the MATCH study. Contemp Clin Trials. 2015;43:142–54.
O’Connor SG, Ke W, Dzubur E, Schembre S, Dunton GF. Concordance and predictors of concordance of children’s dietary intake as reported via ecological momentary assessment and 24 h recall. Public Health Nutr. 2018;21(6):1019–27.
O’Connor SG, Koprowski C, Dzubur E, Leventhal AM, Huh J, Dunton GF. Differences in Mothers’ and Children’s Dietary Intake during Physical and Sedentary Activities: An Ecological Momentary Assessment Study. J Acad Nutr Diet. 2017;117(8):1265–71.
Liao Y, Schembre SM, O’Connor SG, Belcher BR, Maher JP, Dzubur E, et al. An Electronic Ecological Momentary Assessment Study to Examine the Consumption of High-Fat/High-Sugar Foods, Fruits/Vegetables, and Affective States Among Women. J Nutr Educ Behav. 2018;50(6):626–31.
Mason TB, Naya CH, Schembre SM, Smith KE, Dunton GF. Internalizing symptoms modulate real-world affective response to sweet food and drinks in children. Behav Res Ther. 2020;135: 103753.
Mason TB, O’Connor SG, Schembre SM, Huh J, Chu D, Dunton GF. Momentary affect, stress coping, and food intake in mother-child dyads. Health Psychol. 2019;38(3):238–47.
Mason TB, Smith KE, Dunton GF. Maternal parenting styles and ecological momentary assessment of maternal feeding practices and child food intake across middle childhood to early adolescence. Pediatr Obes. 2020;15(10): e12683.
Do B, Yang CH, Lopez NV, Mason TB, Margolin G, Dunton GF. Investigating the momentary association between maternal support and children’s fruit and vegetable consumption using ecological momentary assessment. Appetite. 2020;150: 104667.
Naya CH, Chu D, Wang WL, Nicolo M, Dunton GF, Mason TB. Children’s Daily Negative Affect Patterns and Food Consumption on Weekends: An Ecological Momentary Assessment Study. J Nutr Educ Behav. 2022;54(7):600–9.
Lopez NV, Lai MH, Yang CH, Dunton GF, Belcher BR. Associations of Maternal and Paternal Parenting Practices With Children’s Fruit and Vegetable Intake and Physical Activity: Preliminary Findings From an Ecological Momentary Study. JMIR Form Res. 2022;6(8): e38326.
Bruening M, van Woerden I, Todd M, Brennhofer S, Laska MN, Dunton G. A Mobile Ecological Momentary Assessment Tool (devilSPARC) for Nutrition and Physical Activity Behaviors in College Students: A Validation Study. J Med Internet Res. 2016;18(7): e209.
Piontak JR, Russell MA, Danese A, Copeland WE, Hoyle RH, Odgers CL. Violence exposure and adolescents’ same-day obesogenic behaviors: New findings and a replication. Soc Sci Med. 2017;189:145–51.
Campbell KL, Babiarz A, Wang Y, Tilton NA, Black MM, Hager ER. Factors in the home environment associated with toddler diet: an ecological momentary assessment study. Public Health Nutr. 2018;21(10):1855–64.
Cummings JR, Mamtora T, Tomiyama AJ. Non-food rewards and highly processed food intake in everyday life. Appetite. 2019;142: 104355.
Maher JP, Harduk M, Hevel DJ, Adams WM, McGuirt JT. Momentary Physical Activity Co-Occurs with Healthy and Unhealthy Dietary Intake in African American College Freshmen. Nutrients. 2020;12(5):1360.
Lin TT, Park C, Kapella MC, Martyn-Nemeth P, Tussing-Humphreys L, Rospenda KM, et al. Shift work relationships with same- and subsequent-day empty calorie food and beverage consumption. Scand J Work Environ Health. 2020;46(6):579–88.
Yong JYY, Tong EMW, Liu JCJ. When the camera eats first: Exploring how meal-time cell phone photography affects eating behaviours. Appetite. 2020;154:104787.
Goldstein SP, Hoover A, Evans EW, Thomas JG. Combining ecological momentary assessment, wrist-based eating detection, and dietary assessment to characterize dietary lapse: A multi-method study protocol. Digit Health. 2021;7:2055207620988212.
Chmurzynska A, Mlodzik-Czyzewska MA, Malinowska AM, Radziejewska A, Mikołajczyk-Stecyna J, Bulczak E, et al. Greater self-reported preference for fat taste and lower fat restraint are associated with more frequent intake of high-fat food. Appetite. 2021;159:105053.
Barchitta M, Maugeri A, Favara G, Magnano San Lio R, Riela PM, Guarnera L, et al. Development of a Web-App for the Ecological Momentary Assessment of Dietary Habits among College Students: The HEALTHY-UNICT Project. Nutrients. 2022;14(2):330.
Spook JE, Paulussen T, Kok G, Van Empelen P. Monitoring dietary intake and physical activity electronically: feasibility, usability, and ecological validity of a mobile-based Ecological Momentary Assessment tool. J Med Internet Res. 2013;15(9): e214.
Wouters S, Jacobs N, Duif M, Lechner L, Thewissen V. Affect and between-meal snacking in daily life: the moderating role of gender and age. Psychol Health. 2018;33(4):555–72.
Wouters S, Jacobs N, Duif M, Lechner L, Thewissen V. Negative affective stress reactivity: The dampening effect of snacking. Stress Health. 2018;34(2):286–95.
Wouters S, Thewissen V, Duif M, Lechner L, Jacobs N. Assessing Energy Intake in Daily Life: Signal-Contingent Smartphone Application Versus Event-Contingent Paper and Pencil Estimated Diet Diary. Psychol Belg. 2016;56(4):357–69.
Wouters S, Thewissen V, Duif M, van Bree RJ, Lechner L, Jacobs N. Habit strength and between-meal snacking in daily life: the moderating role of level of education. Public Health Nutr. 2018;21(14):2595–605.
Forman EM, Shaw JA, Goldstein SP, Butryn ML, Martin LM, Meiran N, et al. Mindful decision making and inhibitory control training as complementary means to decrease snack consumption. Appetite. 2016;103:176–83.
Richard A, Meule A, Reichenberger J, Blechert J. Food cravings in everyday life: An EMA study on snack-related thoughts, cravings, and consumption. Appetite. 2017;113:215–23.
Richard A, Meule A, Blechert J. Implicit evaluation of chocolate and motivational need states interact in predicting chocolate intake in everyday life. Eat Behav. 2019;33:1–6.
Zenk SN, Horoi I, McDonald A, Corte C, Riley B, Odoms-Young AM. Ecological momentary assessment of environmental and personal factors and snack food intake in African American women. Appetite. 2014;83:333–41.
Ghosh Roy P, Jones KK, Martyn-Nemeth P, Zenk SN. Contextual correlates of energy-dense snack food and sweetened beverage intake across the day in African American women: An application of ecological momentary assessment. Appetite. 2019;132:73–81.
Ortega A, Bejarano CM, Hesse DR, Reed D, Cushing CC. Temporal discounting modifies the effect of microtemporal hedonic hunger on food consumption: An ecological momentary assessment study. Eat Behav. 2022;48: 101697.
Boronat A, Clivillé-Pérez J, Soldevila-Domenech N, Forcano L, Pizarro N, Fitó M, et al. Mobile Device-assisted Dietary Ecological Momentary Assessments for the Evaluation of the Adherence to the Mediterranean Diet in a Continuous Manner. J Vis Exp. 2021(175).
de Rivaz R, Swendsen J, Berthoz S, Husky M, Merikangas K, Marques-Vidal P. Associations between Hunger and Psychological Outcomes: A Large-Scale Ecological Momentary Assessment Study. Nutrients. 2022;14(23).
Lucassen DA, Brouwer-Brolsma EM, Slotegraaf AI, Kok E, Feskens EJM. DIetary ASSessment (DIASS) Study: Design of an Evaluation Study to Assess Validity, Usability and Perceived Burden of an Innovative Dietary Assessment Methodology. Nutrients. 2022;14(6). https://doi.org/10.3390/nu14061156 .
Jeffers AJ, Mason TB, Benotsch EG. Psychological eating factors, affect, and ecological momentary assessed diet quality. Eat Weight Disord. 2020;25(5):1151–9.
Lucassen DA, Brouwer-Brolsma EM, Boshuizen HC, Mars M, de Vogel-Van den Bosch J, Feskens EJ. Validation of the smartphone-based dietary assessment tool “Traqq” for assessing actual dietary intake by repeated 2-h recalls in adults: comparison with 24-h recalls and urinary biomarkers. Am J Clin Nutr. 2023;117(6):1278–87.
Perski O, Keller J, Kale D, Asare BY, Schneider V, Powell D, et al. Understanding health behaviours in context: A systematic review and meta-analysis of ecological momentary assessment studies of five key health behaviours. Health Psychol Rev. 2022;16(4):576–601.
Thompson FE, Subar AF. Chapter 1 - Dietary Assessment Methodology. In: Coulston AM, Boushey CJ, Ferruzzi MG, Delahanty LM, editors. Nutrition in the Prevention and Treatment of Disease (Fourth Edition): Academic Press; 2017. p. 5–48.
Shiffman S, Balabanis MH, Gwaltney CJ, Paty JA, Gnys M, Kassel JD, et al. Prediction of lapse from associations between smoking and situational antecedents assessed by ecological momentary assessment. Drug Alcohol Depend. 2007;91(2-3):159–68.
Eisele G, Kasanova Z, Houben M. Questionnaire design and evaluation. In: Myin-Germeys I, Kuppens, P., editor. The open handbook of Experience Sampling Methodology: A step-by-step guide to designing, conducting, and analyzing ESM studies. Center for Research on Experience Sampling and Ambulatory Methods Leuven; 2021. p. 71–90.
This work was supported by a PhD fellowship Strategic Basic research grant (1S96721N) of Research Foundation Flanders (FWO) and KU Leuven Internal Funds (C3/22/50). The funders had no role in the conceptualization, design, data collection, analysis, decision to publish, or preparation of the manuscript.
Authors and affiliations.
Clinical and Experimental Endocrinology, Department of Chronic Diseases and Metabolism, KU Leuven, Leuven, Belgium
Joke Verbeke & Christophe Matthys
Department of Endocrinology, University Hospitals Leuven, Leuven, Belgium
Christophe Matthys
JV conducted the review and screened the articles. CM was the second reviewer in case of hesitancy on inclusion of articles in the screening process. JV extracted the data and wrote the manuscript. CM revised the manuscript and supervised the research.
Correspondence to Christophe Matthys .
Competing interests.
The authors declare that they have no competing interests.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Material 1.
Rights and permissions.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Cite this article.
Verbeke, J., Matthys, C. Experience Sampling as a dietary assessment method: a scoping review towards implementation. Int J Behav Nutr Phys Act 21 , 94 (2024). https://doi.org/10.1186/s12966-024-01643-1
Received : 23 February 2024
Accepted : 14 August 2024
Published : 27 August 2024
DOI : https://doi.org/10.1186/s12966-024-01643-1
ISSN: 1479-5868
In the course of our supervisory work over the years, we have noticed that qualitative research tends to evoke a lot of questions and worries, so-called frequently asked questions (FAQs). This series of four articles intends to provide novice researchers with practical guidance for conducting high-quality qualitative research in primary care.
Types of sampling methods. There are two main sampling methods: probability sampling and non-probability sampling. These can be further refined, which we'll cover shortly. You can then decide which approach best suits your research project.
Introduction Considerations of sampling are fundamental to any empirical study. However, in studies based on qualitative research interviews, sampling issues are rarely discussed. Possible reasons include a lack of universal 'rules of thumb' governing sampling considerations and the diversity of approaches to qualitative inquiry.
In this section, we briefly describe three of the most common sampling methods used in qualitative research: purposive sampling, quota sampling, and snowball sampling.
We looked at the probability and non-probability types of sampling, the reasons for choosing them, and their advantages and disadvantages.
Accurate and feasible assessment of dietary intake remains challenging for research and healthcare. Experience Sampling Methodology (ESM) is a real-time real-life data capturing method with low burden and good feasibility not yet fully explored as alternative dietary assessment method. This scoping review is the first to explore the implementation of ESM as an alternative to traditional ...