Pilot Study in Research: Definition & Examples

Julia Simkus

Editor at Simply Psychology

BA (Hons) Psychology, Princeton University

Julia Simkus is a graduate of Princeton University with a Bachelor of Arts in Psychology. She is currently studying for a Master's Degree in Counseling for Mental Health and Wellness in September 2023. Julia's research has been published in peer reviewed journals.

Learn about our Editorial Process

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

On This Page:

A pilot study, also known as a feasibility study, is a small-scale preliminary study conducted before the main research to check the feasibility or improve the research design.

Pilot studies can be very important before conducting a full-scale research project, helping design the research methods and protocol.

How Does it Work?

Pilot studies are a fundamental stage of the research process. They can help identify design issues and evaluate a study’s feasibility, practicality, resources, time, and cost before the main research is conducted.

It involves selecting a few people and trying out the study on them. It is possible to save time and, in some cases, money by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities (i.e., unusual things), confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect because none of the participants can score at all or can complete the task – all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling.”

This enables researchers to predict an appropriate sample size, budget accordingly, and improve the study design before performing a full-scale project.

Pilot studies also provide researchers with preliminary data to gain insight into the potential results of their proposed experiment.

However, pilot studies should not be used to test hypotheses since the appropriate power and sample size are not calculated. Rather, pilot studies should be used to assess the feasibility of participant recruitment or study design.

By conducting a pilot study, researchers will be better prepared to face the challenges that might arise in the larger study. They will be more confident with the instruments they will use for data collection.

Multiple pilot studies may be needed in some studies, and qualitative and/or quantitative methods may be used.

To avoid bias, pilot studies are usually carried out on individuals who are as similar as possible to the target population but not on those who will be a part of the final sample.

Feedback from participants in the pilot study can be used to improve the experience for participants in the main study. This might include reducing the burden on participants, improving instructions, or identifying potential ethical issues.

Experiment Pilot Study

In a pilot study with an experimental design , you would want to ensure that your measures of these variables are reliable and valid.

You would also want to check that you can effectively manipulate your independent variables and that you can control for potential confounding variables.

A pilot study allows the research team to gain experience and training, which can be particularly beneficial if new experimental techniques or procedures are used.

Questionnaire Pilot Study

It is important to conduct a questionnaire pilot study for the following reasons:
  • Check that respondents understand the terminology used in the questionnaire.
  • Check that emotive questions are not used, as they make people defensive and could invalidate their answers.
  • Check that leading questions have not been used as they could bias the respondent’s answer.
  • Ensure that the questionnaire can be completed in a reasonable amount of time. If it’s too long, respondents may lose interest or not have enough time to complete it, which could affect the response rate and the data quality.

By identifying and addressing issues in the pilot study, researchers can reduce errors and risks in the main study. This increases the reliability and validity of the main study’s results.

Assessing the practicality and feasibility of the main study

Testing the efficacy of research instruments

Identifying and addressing any weaknesses or logistical problems

Collecting preliminary data

Estimating the time and costs required for the project

Determining what resources are needed for the study

Identifying the necessity to modify procedures that do not elicit useful data

Adding credibility and dependability to the study

Pretesting the interview format

Enabling researchers to develop consistent practices and familiarize themselves with the procedures in the protocol

Addressing safety issues and management problems

Limitations

Require extra costs, time, and resources.

Do not guarantee the success of the main study.

Contamination (ie: if data from the pilot study or pilot participants are included in the main study results).

Funding bodies may be reluctant to fund a further study if the pilot study results are published.

Do not have the power to assess treatment effects due to small sample size.

  • Viscocanalostomy: A Pilot Study (Carassa, Bettin, Fiori, & Brancato, 1998)
  • WHO International Pilot Study of Schizophrenia (Sartorius, Shapiro, Kimura, & Barrett, 1972)
  • Stephen LaBerge of Stanford University ran a series of experiments in the 80s that investigated lucid dreaming. In 1985, he performed a pilot study that demonstrated that time perception is the same as during wakefulness. Specifically, he had participants go into a state of lucid dreaming and count out ten seconds, signaling the start and end with pre-determined eye movements measured with the EOG.
  • Negative Word-of-Mouth by Dissatisfied Consumers: A Pilot Study (Richins, 1983)
  • A pilot study and randomized controlled trial of the mindful self‐compassion program (Neff & Germer, 2013)
  • Pilot study of secondary prevention of posttraumatic stress disorder with propranolol (Pitman et al., 2002)
  • In unstructured observations, the researcher records all relevant behavior without a system. There may be too much to record, and the behaviors recorded may not necessarily be the most important, so the approach is usually used as a pilot study to see what type of behaviors would be recorded.
  • Perspectives of the use of smartphones in travel behavior studies: Findings from a literature review and a pilot study (Gadziński, 2018)

Further Information

  • Lancaster, G. A., Dodd, S., & Williamson, P. R. (2004). Design and analysis of pilot studies: recommendations for good practice. Journal of evaluation in clinical practice, 10 (2), 307-312.
  • Thabane, L., Ma, J., Chu, R., Cheng, J., Ismaila, A., Rios, L. P., … & Goldsmith, C. H. (2010). A tutorial on pilot studies: the what, why and how. BMC Medical Research Methodology, 10 (1), 1-10.
  • Moore, C. G., Carter, R. E., Nietert, P. J., & Stewart, P. W. (2011). Recommendations for planning pilot studies in clinical and translational research. Clinical and translational science, 4 (5), 332-337.

Carassa, R. G., Bettin, P., Fiori, M., & Brancato, R. (1998). Viscocanalostomy: a pilot study. European journal of ophthalmology, 8 (2), 57-61.

Gadziński, J. (2018). Perspectives of the use of smartphones in travel behaviour studies: Findings from a literature review and a pilot study. Transportation Research Part C: Emerging Technologies, 88 , 74-86.

In J. (2017). Introduction of a pilot study. Korean Journal of Anesthesiology, 70 (6), 601–605. https://doi.org/10.4097/kjae.2017.70.6.601

LaBerge, S., LaMarca, K., & Baird, B. (2018). Pre-sleep treatment with galantamine stimulates lucid dreaming: A double-blind, placebo-controlled, crossover study. PLoS One, 13 (8), e0201246.

Leon, A. C., Davis, L. L., & Kraemer, H. C. (2011). The role and interpretation of pilot studies in clinical research. Journal of psychiatric research, 45 (5), 626–629. https://doi.org/10.1016/j.jpsychires.2010.10.008

Malmqvist, J., Hellberg, K., Möllås, G., Rose, R., & Shevlin, M. (2019). Conducting the Pilot Study: A Neglected Part of the Research Process? Methodological Findings Supporting the Importance of Piloting in Qualitative Research Studies. International Journal of Qualitative Methods. https://doi.org/10.1177/1609406919878341

Neff, K. D., & Germer, C. K. (2013). A pilot study and randomized controlled trial of the mindful self‐compassion program. Journal of Clinical Psychology, 69 (1), 28-44.

Pitman, R. K., Sanders, K. M., Zusman, R. M., Healy, A. R., Cheema, F., Lasko, N. B., … & Orr, S. P. (2002). Pilot study of secondary prevention of posttraumatic stress disorder with propranolol. Biological psychiatry, 51 (2), 189-192.

Richins, M. L. (1983). Negative word-of-mouth by dissatisfied consumers: A pilot study. Journal of Marketing, 47 (1), 68-78.

Sartorius, N., Shapiro, R., Kimura, M., & Barrett, K. (1972). WHO International Pilot Study of Schizophrenia1. Psychological medicine, 2 (4), 422-425.

Teijlingen, E. R; V. Hundley (2001). The importance of pilot studies, Social research UPDATE, (35)

Print Friendly, PDF & Email

  • En español – ExME
  • Em português – EME

What is a pilot study?

Posted on 31st July 2017 by Luiz Cadete

research methodology pilot study

Pilot studies can play a very important role prior to conducting a full-scale research project

Pilot studies are small-scale, preliminary studies which aim to investigate whether crucial components of a main study – usually a randomized controlled trial (RCT) – will be feasible. For example, they may be used in attempt to predict an appropriate sample size for the full-scale project and/or to improve upon various aspects of the study design. Often RCTs require a lot of time and money to be carried out, so it is crucial that the researchers have confidence in the key steps they will take when conducting this type of study to avoid wasting time and resources.

Thus, a pilot study must answer a simple question: “Can the full-scale study be conducted in the way that has been planned or should some component(s) be altered?”

The reporting of pilot studies must be of high quality to allow readers to interpret the results and implications correctly. This blog will highlight some key things for readers to consider when they are appraising a pilot study.

What are the main reasons to conduct a pilot study?

Pilot studies are conducted to evaluate the feasibility of some crucial component(s) of the full-scale study. Typically, these can be divided into 4 main aspects:

  • Process : where the feasibility of the key steps in the main study is assessed (e.g. recruitment rate; retention levels and eligibility criteria)
  • Resources: assessing problems with time and resources that may occur during the main study (e.g. how much time the main study will take to be completed; whether use of some equipment will be feasible or whether the form(s) of evaluation selected for the main study are as good as possible)
  • Management: problems with data management and with the team involved in the study (e.g. whether there were problems with collecting all the data needed for future analysis; whether the collected data are highly variable and whether data from different institutions can be analyzed together).

Reasons for not conducting a pilot study

A study should not simply be labelled a ‘pilot study’ by researchers hoping to justify a small sample size. Pilot studies should always have their objectives linked with feasibility and should inform researchers about the best way to conduct the future, full-scale project.

How to interpret a pilot study

Readers must interpret pilot studies carefully. Below are some key things to consider when assessing a pilot study:

  • The objectives of pilot studies must always be linked with feasibility and the crucial component that will be tested must always be stated.
  • The method section must present the criteria for success. For example: “the main study will be feasible if the retention rate of the pilot study exceeds 90%”. Sample size may vary in pilot studies (different articles present different sample size calculations) but the pilot study population, from which the sample is formed, must be the same as the main study. However, the participants in the pilot study should not be entered into the full-scale study. This is because participants may change their later behaviour if they had previously been involved in the research.
  • The pilot study may or may not be a randomized trial (depending on the nature of the study). If the researchers do randomize the sample in the pilot study, it is important that the process for randomization is kept the same in the full-scale project. If the authors decide to test the randomization feasibility through a pilot study, different kinds of randomization procedures could be used.
  • As well as the method section, the results of the pilot studies should be read carefully. Although pilot studies often present results related to the effectiveness of the interventions, these results should be interpreted as “potential effectiveness”. The focus in the results of pilot studies should always be on feasibility, rather than statistical significance. However, results of the pilot studies should nonetheless be provided with measures of variability (such as confidence intervals), particularly as the sample size of these studies is usually relatively small, and this might produce biased results.

After an interpretation of results, pilot studies should conclude with one of the following:

(1) the main study is not feasible;

(2) the main study is feasible, with changes to the protocol;

(3) the main study is feasible without changes to the protocol OR

(4) the main study is feasible with close monitoring.

Any recommended changes to the protocol should be clearly outlined.

Take home message

  • A pilot study must provide information about whether a full-scale study is feasible and list any recommended amendments to the design of the future study.

Thabane L, Ma J, Chu R, et al. A tutorial on pilot studies: what, why and how? BMC Med Res Methodol. 2010; 10: 1.

Cocks K and Torgerson DJ. Sample Size Calculations for Randomized Pilot Trials: A Confidence Interval approach . Journal of Clinical Epidemiology. 2013.

Lancaster GA, Dodd S, Williamson PR. Design and analysis of pilot studies: recommendations for good practice. J Eval Clin Pract. 2004; 10 (2): 307-12.

Moore et al. Recommendations for Planning Pilot Studies in Clinical and Translational Research. Clin Transl Sci. 2011 October ; 4(5): 332–337.

' src=

Luiz Cadete

Leave a reply cancel reply.

Your email address will not be published. Required fields are marked *

Save my name, email, and website in this browser for the next time I comment.

No Comments on What is a pilot study?

' src=

I want to do pilot study what can I do. My age is 30 .

' src=

Dear Khushbu, were you wanting to get involved in research? If so, what type of research were you interested in. There are lots of resources we can point you towards.

' src=

very intersesting

' src=

How can I study pilot and how can I start at first step? What should I do at the first time.

' src=

Informative.Thank you

' src=

If i am conducting a RCT then is it necessary to give interventions before conducting pilot study???

' src=

This fantastic. I am a doctoral student preparing do a pilot study on my main study.

Subscribe to our newsletter

You will receive our monthly newsletter and free access to Trip Premium.

Popular Articles

student tutorial

A 20 Minute Introduction to Cluster Randomised Trials

Another 20 minute tutorial from Tim.

Student Tutorial badge

A beginner’s guide to interpreting odds ratios, confidence intervals and p-values

The nuts and bolts 20 minute tutorial from Tim.

""

Free information and resources for pupils and students interested in evidence-based health care

This new webpage from Cochrane UK is aimed at students of all ages. What is evidence-based practice? What is ‘best available research evidence’? Which resources will help you understand evidence and evidence-based practice, and search for evidence?

Pilot Study in Research

  • Key Concepts
  • Major Sociologists
  • News & Issues
  • Research, Samples, and Statistics
  • Recommended Reading
  • Archaeology

A pilot study is a preliminary small-scale study that researchers conduct in order to help them decide how best to conduct a large-scale research project. Using a pilot study, a researcher can identify or refine a research question, figure out what methods are best for pursuing it, and estimate how much time and resources will be necessary to complete the larger version, among other things.

Key Takeaways: Pilot Studies

  • Before running a larger study, researchers can conduct a pilot study : a small-scale study that helps them refine their research topic and study methods.
  • Pilot studies can be useful for determining the best research methods to use, troubleshooting unforeseen issues in the project, and determining whether a research project is feasible.
  • Pilot studies can be used in both quantitative and qualitative social science research.

Large-scale research projects tend to be complex, take a lot of time to design and execute, and typically require quite a bit of funding. Conducting a pilot study beforehand allows a researcher to design and execute a large-scale project in as methodologically rigorous a way as possible, and can save time and costs by reducing the risk of errors or problems. For these reasons, pilot studies are used by both quantitative and qualitative researchers in the social sciences.

Advantages of Conducting a Pilot Study

Pilot studies are useful for a number of reasons, including:

  • Identifying or refining a research question or set of questions
  • Identifying or refining a hypothesis or set of hypotheses
  • Identifying and evaluating a sample population, research field site , or data set
  • Testing research instruments like survey questionnaires , interview, discussion guides, or statistical formulas
  • Evaluating and deciding upon research methods
  • Identifying and resolving as many potential problems or issues as possible
  • Estimating the time and costs required for the project
  • Gauging whether the research goals and design are realistic
  • Producing preliminary results that can help secure funding and other forms of institutional investment

After conducting a pilot study and taking the steps listed above, a researcher will know what to do in order to proceed in a way that will make the study a success. 

Example: Quantitative Survey Research

Say you want to conduct a large-scale quantitative research project using survey data to study the relationship between race and political party affiliation . To best design and execute this research, you would first want to select a data set to use, such as the General Social Survey , for example, download one of their data sets, and then use a statistical analysis program to examine this relationship. In the process of analyzing the relationship, you are likely to realize the importance of other variables that may have an impact on political party affiliation. For example, place of residence, age, education level, socioeconomic status, and gender may impact party affiliation (either on their own or in interaction with race). You might also realize that the data set you chose does not offer you all the information that you need to best answer this question, so you might choose to use another data set, or combine another with the original that you selected. Going through this pilot study process will allow you to work out the kinks in your research design and then execute high-quality research.

Example: Qualitative Interview Studies

Pilot studies can also be useful for qualitative research studies, such as interview-based studies. For example, imagine that a researcher is interested in studying the relationship that Apple consumers have to the company's brand and products . The researcher might choose to first do a pilot study consisting of a couple of focus groups in order to identify questions and thematic areas that would be useful to pursue in-depth, one-on-one interviews. A focus group can be useful to this kind of study because while a researcher will have a notion of what questions to ask and topics to raise, she may find that other topics and questions arise when members of the target group talk among themselves. After a focus group pilot study, the researcher will have a better idea of how to craft an effective interview guide for a larger research project.

  • Definition of Idiographic and Nomothetic
  • Definition and Overview of Grounded Theory
  • Social Surveys: Questionnaires, Interviews, and Telephone Polls
  • Macro- and Microsociology
  • The Different Types of Sampling Designs in Sociology
  • Introduction to Sociology
  • An Overview of Qualitative Research Methods
  • Conducting Case Study Research in Sociology
  • Understanding Purposive Sampling
  • Abstract Writing for Sociology
  • Understanding Path Analysis
  • Definition of a Hypothesis
  • What Is a Research Paper?
  • How to Conduct a Sociology Research Interview
  • Understanding Secondary Data and How to Use It in Research
  • The Difference Between Descriptive and Inferential Statistics

10 Things to Know About Pilot Studies

1 what is a pilot, and what is it good for.

During the process of planning an experiment, researchers often face questions regarding their study’s theoretical and conceptual underpinnings, its measurement approach, and associated logistics. Pilot studies can help you to consider and improve these elements of your research whether you are running a survey, lab, or field experiment. In particular, a pilot study is a smaller scale preliminary test or trial run, used to assist with the preparation of a more comprehensive investigation . Pilots are typically administered before a research design is finalized in order to evaluate and improve the feasibility, reliability, and validity of the proposed study.

While it may be tempting to think about a pilot as simply a miniature version of one’s final study, helpful for doing an initial test of one’s hypotheses, pilot studies are neither especially appropriate for hypothesis testing , nor are they limited to it. Given smaller sample sizes, pilots are typically underpowered for evaluating hypotheses. Besides, deciding whether to continue a study based on initial results contributes to the “file drawer” problem where important studies and results—including null results —are never published, leading to misrepresentative bodies of published research ( Franco, Malhotra, and Simonovits 2014 ) .

Fortunately, as depicted in the table below, pilots can be useful for a wide range of research purposes including theory development, research design, improving measurement, sampling considerations, evaluating logistics, pre-planning analysis, weighing ethical considerations, and communicating one’s research ( Teijilngen et al. ( 2001 ) ; Thabane et al. ( 2010 ) ). Each of these activities can help improve the quality of one’s main study and render it more compelling and ultimately successful.

In the sections below, we review these many benefits. First, we consider how pilots can assist with a study’s theory and measurement approach, evaluate its logistical feasibility, and provide information about the sample size needed to test hypotheses. In addition, we discuss how piloting may help you secure research funding and institutional support, gather feedback, and incorporate best practices in research ethics. Finally, we offer recommendations on how to use pilots to inform your main study design, reveal important unknowns, and contribute to your broader research agenda.

2 Pilots are useful for improving your study’s measurement approach in relation to your theory

Fewer things are more frustrating for a researcher than investing significant resources and time into a study only to find that one’s outcome measures lack reliability to accurately assess the concepts of interest, or that participants did not receive treatments in the way that was anticipated. Unexpected results are common, even when one approaches measurement carefully, as operationalizing concepts into an effective measurement strategy in the social sciences is a complex endeavor.

As a remedy, pilots are great for testing different versions of treatments and outcomes to see which of them “work” or whether any changes may be needed to increase validity, reliability, and clarity. For example, you can include a larger set of outcome measures in a pilot than is feasible in the main study, and perform simple factor analysis to identify a preferred subset of measures. Pilot results can also inform whether it makes sense to create indices or scales in order to improve reliability and decrease variance. Subtle variation in the strength, nature, timing, or number of treatments can also significantly alter study findings. Pilots offer researchers the opportunity to evaluate multiple possibilities for one’s treatment design, and to assess how these options influence participant compliance, uptake of treatment, attrition, and more.

When refining one’s approach to measuring treatments, outcomes, and covariates, it is especially important to keep in mind how these elements of one’s research design speak to the broader concepts and theory under study. Will the data you receive provide the necessary information regarding the theoretical elements and causal mechanisms under study? Are there other causal channels that may be in play, or heterogeneous effects within subgroups that you hadn’t thought about previously? These kinds of considerations can inform changes to your research design, such as alterations to your randomization strategy , the introduction of new dimensions in your treatments, and decisions about which aspects of your study are core and which can be saved for a later date.

When you begin a pilot study, you may have an initial conception of your research questions, hypotheses, and measurement approach; but with a careful pilot, you have the opportunity to refine all of these aspects in a way that can increase your (and others’) confidence in the overall quality of the study.

3 Pilots can also help you to prepare for the logistics of running your experiment

Perhaps the most often emphasized purpose of pilots is to work out any logistical kinks that might impede the main study ( Thabane et al. 2010 ) . Logistical considerations include those implicating a project’s overall resources, the study team, the participants, and the administration of study instruments.

In terms of participant logistics, is important to establish whether your study participants understand the research tools and procedures, are able to receive treatment, and feel comfortable answering questions or performing tasks. Based on how successful the pilot is, you may find ways to improve your recruitment strategy, adjust your eligibility criteria, and improve the clarity of your research instruments. It is also important to ensure that basic elements like randomization and data collection are working as anticipated. Data simulations and pilots with very small samples can also be used to test certain study elements.

Similar considerations apply to your study team and partners. Do they understand the protocols or might they require additional training, for example, to promote reliability in procedures such as data collection? Are there any asymmetries in delivery or measurement of treatment or outcomes of which you were not aware? A pilot can also be very helpful for determining the resources needed to conduct the main study. For example, how much time does it take—for both members of the study team and participants—and what might this imply for the size of the sample and complexity of the research design that is ultimately feasible for the main study?

While the specific logistical considerations will vary depending on whether your research design is centered around a field experiment, survey experiment, lab experiment, or something else, pilots will help you ensure that your experiment goes as planned. Thus, when constructing a pilot study, consider making a list of the logistical elements you want to evaluate and designing the pilot to facilitate answering associated questions. Asking your participants whether they were able to “hear the video” or “understood the instructions” can make your research easier down the road.

4 You can use pilots for power analysis or for calculating minimum detectable effects

Another common purpose of pilot studies, in light of limited resources, is assessing statistical power , which helps to avoid the risk of false negatives (or false positives) from underpowered studies. As described in the EGAP methods guide about power analysis , a researcher’s goal is to answer the following: “supposing there truly is a treatment effect and you were to run your experiment a huge number of times, how often will you get a statistically significant result?” To improve the likelihood that your main study will achieve a typical target power value of 80%, you can use a pilot study combined with careful simulations. These results are helpful for determining appropriate sample sizes, the extent to which you can subset your sample for various analyses, and whether adjustments may be necessary for increasing power.

Traditionally, a pilot study is used to obtain an estimate of the effect size, which becomes the presumed estimate for simulations used to determine power and sample size in one’s main study. However, the DeclareDesign cautions that effect size estimates are very noisy in small pilots, especially when true effect sizes are small (for example, under 0.2 sd). As an alternative, they recommend using pilot studies to estimate the standard deviation of the outcome variable. Using this estimate, one can easily obtain an unbiased estimate of a main study’s minimum detectable effect (MDE) for a given outcome and with 80% power as 2.8 times the estimated standard error of the associated outcome variable ( Gelman and Hill 2006 ) . The goal is to ensure that the MDE is small enough that the study would capture any substantively meaningful effect.

Using the recommendations from DeclareDesign, we provide sample code based on a hypothetical pilot study. In the code below, we assume we have already conducted a pilot study and have calculated the standard deviation of a key outcome measure for both the control and treatment groups. Next, we use these estimates to calculate MDEs for different possible sample sizes, in order to inform our target sample size for a future main study.

research methodology pilot study

Based on the hypothetical standard deviations in the control and treatment groups from the pilot study, the main study would need a sample size of approximately 1,500 to ensure that effect sizes as small as 0.2 sd are detectable. While calculating the MDE based on pilot results is straightforward, determining how small an MDE should be is more subjective, and should be informed by theory and prior work.

MDE calculations are based on the design of the pilot and the specific outcome measures. Thus, you may need to perform MDE calculations for each estimand of interest to determine what sample sizes are needed and which hypotheses can be addressed with sufficient power given your experimental design. Keep in mind that as you use pilot results to refine your study design, including treatments and outcomes, the relevant standard deviations of the outcome measures and the resultant MDE calculations may change. Another quick pilot is an option, though it is important to take resource constraints into account.

5 Piloting may help you secure funding and support for your research

Pilots are not only helpful for improving the quality of your main study, they may also help to ensure you have the support and funding to enable your study to go forward. Given a general trend of tightening of per-researcher funding, particularly for smaller projects ( Bloch and Sørensen 2015 ) , and a movement towards evidence-based decision-making, you may wish to draw on pilots to provide initial evidence that your study is worthwhile. For example, the National Science Foundation (NSF) recognizes the need for more early-stage exploratory studies that can provide a basis for future larger-scale studies, and Time-sharing Experiments for the Social Sciences (TESS) notes that “Proposals that report trial runs of novel and focal ideas will be viewed as more credible.”

This does not entail that one needs to show that effect sizes are large enough nor that hypotheses are likely to be confirmed. Instead, a pilot can demonstrate that your research project is feasible in terms of time and resources, that your study design is adequate for answering the research questions proposed, and that your research team has the expertise and capacity to administer the study, perform analyses, and even present results in a compelling fashion ( Teijilngen et al. 2001 ) . In a similar fashion, piloting can help you to recruit study team members and organizational partners, or solicit institutional support.

6 You can use a pilot to get feedback on your study design

Sharing initial findings, successes, and challenges is a great way to help you prepare for your main study. In light of the growth of the open science movement ( Christensen et al. 2020 ) , conferences and workshops are increasingly open to accepting submissions based on pre-analysis plans and pilot results. Whether through these more formal venues, or by reaching out to colleagues or experts, you can use pilot results to receive feedback about your study design, such as strategies to address possible challenges and unexplored theoretical or empirical directions that you can incorporate in your main study.

Further, it can take a long time to complete an experimental study and publish results. Sharing intermediate findings allows you to coordinate with other researchers in the field, helping you to align your work and incorporate recent theoretical and empirical innovations relevant to your study.

Ultimately, the design and piloting stage is the best time to receive feedback, as you still have time to make improvements. In contrast, most key research design decisions will already have been finalized by the time your study undergoes formal peer review for publication.

7 Keep an eye out for ethical considerations when piloting

When you design your study initially, all of the relevant ethical considerations and risks may not be immediately apparent. The piloting stage is thus a good opportunity to review whether any risks or harms you anticipated may come into play and whether still other ethical considerations should be incorporated into your main study.

You can use your pilot to evaluate whether your procedures around informed consent are adequate and to assess the extent of burdens such as time required of participants. You may find, for example, that certain topics—such as mental health and personal identity—are more sensitive than anticipated, or that survey questions (even those based on validated and popular measures) use outdated and offensive language. The best way to find out is to ask. Consider including open-ended survey questions or talking to participants directly to determine what participants think of the study in terms of its normative and cultural acceptability in a given context.

In addition, the piloting process is a good time to practice your procedures around privacy, confidentiality, and security of data and other materials. You may find that other procedures for obtaining consent, ensuring participant safety and well-being, and promoting privacy and anonymity are necessary to improve the ethical dimensions of your main study. This can include decreasing risks as well as increasing benefits to participants, such as by providing helpful resources and information that may help to mitigate possible harms.

Note that this doesn’t imply any less ethical consideration should be given to your pilot itself. All appropriate safeguards, including IRB review, apply to human subjects research for a pilot study as well.

8 Be transparent about how your pilot informs the design of your main study

As noted, it’s helpful to pre-identify a set of questions about logistics, measurement, or other features of your study that you believe a pilot can help to answer. Putting these questions into writing, designing your pilot to facilitate answering them, and reporting on how the answers shape any modifications to your main study is a way in which you can promote transparency in research. This includes transparency within one’s study team about the purposes of a pilot, as well as for external audiences such as funders or peer reviewers. Transparency helps to alleviate concerns such as the file drawer problem, for example by demonstrating that a pilot study does not function as a method for cherry-picking statistically significant results. It also facilitates understanding and receipt of feedback.

The table below is a simple illustrative example of how one could transparently present lessons learned from a hypothetical pilot, for example, in a pre-analysis plan or research proposal. The first column indicates the question the pilot is intended to help answer, in this case questions related to logistical adequacy, manipulation checks, delivery of treatments, and measurement of outcomes. The second column presents the associated findings from the pilot, and the third column discusses how the lessons learned will inform design choices for the main study.

9 Explore unknowns

While you may begin your study with a set of preestablished questions—or “known unknowns”— that your pilot can help to address, keep in mind that there is a whole universe of “unknown unknowns” left to be explored. Of course, not all of these will be relevant to your study, but some are likely to be. For example, you may find that the participants have vastly different interpretations of an informational treatment, understand survey scales differently, or are reluctant to share responses given a perceived political bias.

There are a few ways to explore these unknowns in your pilot study. You might wish to include additional exploratory outcome questions, collect extra covariate data, or use a variety of treatments drawing on innovative or untested ideas. Another great way to identify unknowns is through open-ended questions of study participants. Regardless of your study design, you might consider surveying or interviewing participants to ask “what they think about topic X,” or “what comes to mind when they hear the term Y.” The world is often more complex than researchers model in their study designs, and study participants often have more diverse perspectives and relationships with the issues at hand than researchers expect.

Through this process, you may identify novel research questions, potential theoretical dimensions or causal mechanisms, and hypotheses that you had not originally formulated when you began studying the topic. Exploring unknowns can therefore lead to refining the ideas you had originally conceived of, as you are unlikely to have determined the best version of your study from the beginning. It can also lead to entirely novel considerations and ideas altogether, some of which you may be able to incorporate into the current study or future studies.

Overall, pilots afford a wonderful opportunity for open-ended exploration and idea generation.

10 A pilot is part of a broader sequence of research activities

While we’ve talked about research in terms of “pilots” and “main studies,” this is an oversimplification of the research process. Indeed one can (and often should) employ multiple pilot studies, perhaps with different sample sizes, to evaluate different questions and to continue refining one’s research design as new questions and possibilities emerge. Pilot studies certainly do set the stage for main studies as well as subsequent “follow-up” studies, but there may not be a sharp demarcation between these kinds of studies. As depicted in the graphic, pilot studies do not merely lead to main studies. They can also help one develop new ideas and they may serve as a venue through which one can contribute to or coordinate with the broader research community around topics of shared interest.

research methodology pilot study

As pilots are part of this broader sequence of research activities, an important consideration is what portion of one’s funding and resources should be allocated to a pilot study versus one’s main study. Researchers may be discouraged from devoting research resources to pilots due to a perception that pilots only provide preliminary data that cannot be used for final research products. While there is no simple “rule of thumb” about the share of the budget that researchers should apply to pilots, we believe that pilots can pay off when they empower researchers to improve their study designs and make more grounded decisions about their research.

Administering a second pilot study is especially prudent when pilot results or procedures deviate significantly from your expectations, or if you make substantial alterations to your design. Piloting one or more times is particularly beneficial for field experiments, as researchers often have just one opportunity to successfully implement their full study. However, when resources are constrained, you may opt instead to use simulations to evaluate alternative research designs based on results from your single pilot study. The DeclareDesign package in R is a helpful resource.

In short, pilots are neither pre-tests of hypotheses nor merely checks on basic study logistics that deplete one’s research funds. Instead, pilots are living and breathing elements of the broader research process that provide value in their own right.

11 References

Enago Academy

Why Is a Pilot Study Important in Research?

' src=

Are you working on a new research project ? We know that you are excited to start, but before you dive in, make sure your study is feasible. You don’t want to end up having to process too many samples at once or realize you forgot to add an essential question to your questionnaire.

What is a Pilot Study?

You can determine the feasibility of your research design, with a pilot study before you start. This is a preliminary, small-scale “rehearsal” in which you test the methods you plan to use for your research project. You will use the results to guide the methodology of your large-scale investigation. Pilot studies should be performed for both qualitative and quantitative studies. Here, we discuss the importance of the pilot study and how it will save you time, frustration and resources.

“ You never test the depth of a river with both feet ” – African proverb

Components of a Pilot Study

Whether your research is a clinical trial of a medical treatment or a survey in the form of a questionnaire, you want your study to be informative and add value to your research field. Things to consider in your pilot study include:

  • Sample size and selection. Your data needs to be representative of the target study population. You should use statistical methods to estimate the feasibility of your sample size.
  • Determine the criteria for a successful pilot study based on the objectives of your study. How will your pilot study address these criteria?
  • When recruiting subjects or collecting samples ensure that the process is practical and manageable.
  • Always test the measurement instrument . This could be a questionnaire, equipment, or methods used. Is it realistic and workable? How can it be improved?
  • Data entry and analysis . Run the trial data through your proposed statistical analysis to see whether your proposed analysis is appropriate for your data set.
  • Create a flow chart of the process.

How to Conduct a Pilot Study

Conducting a pilot study is an essential step in many research projects. Here’s a general guide on how to conduct a pilot study:

Step 1: Define Objectives

Inspect what specific aspects of your main study do you want to test or evaluate in your pilot study.

Step 2: Evaluate Sample Size

Decide on an appropriate sample size for your pilot study. This can be smaller than your main study but should still be large enough to provide meaningful feedback.

Step 3: Select Participants

Choose participants who are similar to those you’ll include in the main study. Ensure they match the demographics and characteristics of your target population.

Step 4: Prepare Materials

Develop or gather all the materials needed for the study, such as surveys, questionnaires, protocols, etc.

Step 5: Explain the Purpose of the Study

Briefly explain the purpose and implementation method of the pilot study to participants. Pay attention to the study duration to help you refine your timeline for the main study.

Step 6: Gather Feedback

Gather feedback from participants through surveys, interviews, or discussions. Ask about their understanding of the questions, clarity of instructions, time taken, etc.

 Step 7: Analyze Results

Analyze the collected data and identify any trends or patterns. Take note of any unexpected issues, confusion, or problems that arise during the pilot.

Step 8: Report Findings

Write a brief report detailing the process, results, and any changes made.

Based on the results observed in the pilot study, make necessary adjustments to your study design, materials, procedures, etc. Furthermore, ensure you are following ethical guidelines for research, even in a pilot study.

Ready to test your understanding on conducting a pilot study? Take our short quiz today!

Fill the Details to Check Your Score

clock.png

Importance of Pilot Study in Research

Pilot studies should be routinely incorporated into research design s because they:

  • Help define the research question
  • Test the proposed study design and process. This could alert you to issues which may negatively affect your project.
  • Educate yourself on different techniques related to your study.
  • Test the safety of the medical treatment in preclinical trials on a small number of participants. This is an essential step in clinical trials.
  • Determine the feasibility of your study, so you don’t waste resources and time.
  • Provide preliminary data that you can use to improve your chances for funding and convince stakeholders that you have the necessary skills and expertise to successfully carry out the research.

Are Pilot Studies Always Necessary?

We recommend pilot studies for all research. Scientific research does not always go as planned; therefore, you should optimize the process to minimize unforeseen events. Why risk disastrous and expensive mistakes that could have been discovered and corrected in a pilot study?

An Essential Component for Good Research Design

Pilot work not only gives you a chance to determine whether your project is feasible but also an opportunity to publish its results. You have an ethical and scientific obligation to get your information out to assist other researchers in making the most of their resources.

A successful pilot study does not ensure the success of a research project. However, it does help you assess your approach and practice the necessary techniques required for your project. It will give you an indication of whether your project will work. Would you start a research project without a pilot study? Let us know in the comments section below.

' src=

But it depends on the nature of the research, I suppose.

Awesome document

Good document

I totally agree with this article that pilot study helps the researcher be sure how feasible his research idea is. And is well worth the time, as it saves future time wastage.

Great article, it is always wise to carry out that test before putting out the Main stuff. It saves you time and future embarrasment.

Rate this article Cancel Reply

Your email address will not be published.

research methodology pilot study

Enago Academy's Most Popular Articles

Content Analysis vs Thematic Analysis: What's the difference?

  • Reporting Research

Choosing the Right Analytical Approach: Thematic analysis vs. content analysis for data interpretation

In research, choosing the right approach to understand data is crucial for deriving meaningful insights.…

Cross-sectional and Longitudinal Study Design

Comparing Cross Sectional and Longitudinal Studies: 5 steps for choosing the right approach

The process of choosing the right research design can put ourselves at the crossroads of…

Networking in Academic Conferences

  • Career Corner

Unlocking the Power of Networking in Academic Conferences

Embarking on your first academic conference experience? Fear not, we got you covered! Academic conferences…

Research recommendation

Research Recommendations – Guiding policy-makers for evidence-based decision making

Research recommendations play a crucial role in guiding scholars and researchers toward fruitful avenues of…

research methodology pilot study

  • AI in Academia

Disclosing the Use of Generative AI: Best practices for authors in manuscript preparation

The rapid proliferation of generative and other AI-based tools in research writing has ignited an…

Intersectionality in Academia: Dealing with diverse perspectives

Meritocracy and Diversity in Science: Increasing inclusivity in STEM education

Avoiding the AI Trap: Pitfalls of relying on ChatGPT for PhD applications

research methodology pilot study

Sign-up to read more

Subscribe for free to get unrestricted access to all our resources on research writing and academic publishing including:

  • 2000+ blog articles
  • 50+ Webinars
  • 10+ Expert podcasts
  • 50+ Infographics
  • 10+ Checklists
  • Research Guides

We hate spam too. We promise to protect your privacy and never spam you.

I am looking for Editing/ Proofreading services for my manuscript Tentative date of next journal submission:

research methodology pilot study

What should universities' stance be on AI tools in research and academic writing?

  • Correspondence
  • Open access
  • Published: 16 July 2010

What is a pilot or feasibility study? A review of current practice and editorial policy

  • Mubashir Arain 1 ,
  • Michael J Campbell 1 ,
  • Cindy L Cooper 1 &
  • Gillian A Lancaster 2  

BMC Medical Research Methodology volume  10 , Article number:  67 ( 2010 ) Cite this article

186k Accesses

787 Citations

30 Altmetric

Metrics details

In 2004, a review of pilot studies published in seven major medical journals during 2000-01 recommended that the statistical analysis of such studies should be either mainly descriptive or focus on sample size estimation, while results from hypothesis testing must be interpreted with caution. We revisited these journals to see whether the subsequent recommendations have changed the practice of reporting pilot studies. We also conducted a survey to identify the methodological components in registered research studies which are described as 'pilot' or 'feasibility' studies. We extended this survey to grant-awarding bodies and editors of medical journals to discover their policies regarding the function and reporting of pilot studies.

Papers from 2007-08 in seven medical journals were screened to retrieve published pilot studies. Reports of registered and completed studies on the UK Clinical Research Network (UKCRN) Portfolio database were retrieved and scrutinized. Guidance on the conduct and reporting of pilot studies was retrieved from the websites of three grant giving bodies and seven journal editors were canvassed.

54 pilot or feasibility studies published in 2007-8 were found, of which 26 (48%) were pilot studies of interventions and the remainder feasibility studies. The majority incorporated hypothesis-testing (81%), a control arm (69%) and a randomization procedure (62%). Most (81%) pointed towards the need for further research. Only 8 out of 90 pilot studies identified by the earlier review led to subsequent main studies. Twelve studies which were interventional pilot/feasibility studies and which included testing of some component of the research process were identified through the UKCRN Portfolio database. There was no clear distinction in use of the terms 'pilot' and 'feasibility'. Five journal editors replied to our entreaty. In general they were loathe to publish studies described as 'pilot'.

Pilot studies are still poorly reported, with inappropriate emphasis on hypothesis-testing. Authors should be aware of the different requirements of pilot studies, feasibility studies and main studies and report them appropriately. Authors should be explicit as to the purpose of a pilot study. The definitions of feasibility and pilot studies vary and we make proposals here to clarify terminology.

Peer Review reports

A brief definition is that a pilot study is a 'small study for helping to design a further confirmatory study'[ 1 ]. A very useful discussion of exactly what is a pilot study has been given by Thabane et al. [ 2 ] Such kinds of study may have various purposes such as testing study procedures, validity of tools, estimation of the recruitment rate, and estimation of parameters such as the variance of the outcome variable to calculate sample size etc. In pharmacological trials they may be referred to as 'proof of concept' or Phase I or Phase II studies. It has become apparent to us when reviewing research proposals that small studies with all the trappings of a major study, such as randomization and hypothesis testing may be labeled a 'pilot' because they do not have the power to test clinically meaningful hypotheses. The authors of such studies perhaps hope that reviewers will regard a 'pilot' more favourably than a small clinical trial. This lead us to ask when it is legitimate to label a study as a 'pilot' or 'feasibility' study, and what features should be included in these types of studies.

Lancaster et al [ 3 ] conducted a review of seven major medical journals in 2000-1 to produce evidence regarding the components of pilot studies for randomized controlled trials. Their search included both 'pilot' and 'feasibility' studies as keywords. They reported certain recommendations: having clear objectives in a pilot study, inappropriateness of mixing pilot data with main research study, using mainly descriptive statistics obtained and caution regarding the use of hypothesis testing for conclusions. Arnold et al [ 1 ] recently reviewed pilot studies particularly related to critical care medicine by searching the literature from 1997 to 2007. They provided narrative descriptions of some pilot papers particularly those describing critical care medicine procedures. They pointed out that few pilot trials later evolved into subsequent published major trials. They made useful distinctions between: pilot work which is any background research to inform a future study, a pilot study which has specific hypotheses, objectives and methodology and a pilot trial which is a stand-alone pilot study and includes a randomization procedure. They excluded feasibility studies from their consideration.

Thabane et al [ 2 ] gave a checklist of what they think should be included in a pilot study. They included 'feasibility' or 'vanguard' studies but did not distinguish them from pilot studies. They provided a good discussion on how to interpret a pilot study. They stress that not only the outcome or surrogate outcome for the subsequent main study should be described but also that a pilot study should have feasibility outcomes which should be clearly defined and described. Their article was opinion based and not supported by a review of current practice.

The objective of this paper is to provide writers and reviewers of research proposals with evidence from a variety of sources for which components they should expect, and which are unnecessary or unhelpful, in a study which is labeled as a pilot or feasibility study. To do this we repeated Lancaster et al's [ 3 ] review for current papers see if there has been any change in how pilot studies were reported since their study. As many pilot studies are never published we also identified pilot studies which were registered with the UK Clinical Research Network (UKCRN) Portfolio Database. This aims to be a "complete picture of the clinical research which is currently taking place across the UK". All studies included have to have been peer reviewed through a formal independent process. We examined the websites of some grant giving bodies to find their definition of a pilot study and their funding policy toward them. Finally we contacted editors of leading medical journals to discover their policy of accepting studies described as 'pilot' or 'feasibility'.

Literature survey

MEDLINE, Web of Science and university library data bases were searched for the years 2007-8 using the same key words "Pilot" or "Feasibility" as used by Lancaster et al. [ 3 ]. We reviewed the same four general medicine journals: the British Medical Journal (BMJ), Lancet, the New England Journal of Medicine (NEJM) and the Journal of American Medical Association (JAMA) and the same three specialist journals: British Journal of Surgery (BJS), British Journal of Cancer (BJC), British Journal of Obstetrics and Gynecology (BJOG). We excluded review papers. The full text of the relevant papers was obtained. GL reviewed 20 papers and classified them into groups as described in her original paper [ 3 ]. Subsequently MA, in discussion with MC, designed a data extraction form to classify the papers. We changed one category from GL's original paper. We separated the category 'Phase I/II trials' from the 'Piloting new treatment, technique, combination of treatments' category. We then classified the remaining paper into the categories described in Table 1 . The total number of research papers by journal was obtained by searching journal article with abstracts (excluding reviews) using Pubmed. We searched citations to see whether the pilot studies identified by Lancaster et al [ 3 ] eventually led to main trials.

Portfolio database review

The (UKCRN) Portfolio Database was searched for the terms 'feasibility' or 'pilot' in the title or research summary. Duplicate cases and studies classified as 'observational' were omitted. From the remaining studies those classified as 'closed' were selected to exclude studies which may not have started or progressed. Data were extracted directly from the research summary of the database or where that was insufficient the principle investigator was contacted for related publications or study protocols.

Editor and funding agency survey

We wrote to the seven medical journal editors of the same journals used by Lancaster et al. [ 3 ], (BMJ, Lancet, NEJM, JAMA. BJS, BJC and BJOG) and looked at the policies of three funding agencies (British Medical Research Council, Research for Patient Benefit and NETSCC (National Institute for Health Research Trials and Studies Coordinating Centre). We wished to explore whether there was any specified policy of the journal for publishing pilot trials and how the editors defined a pilot study. We also wished to see if there was funding for pilot studies.

Initially 77 papers were found in the target journals for 2007-8 but 23 were review papers or commentaries or indirectly referred to the word "pilot" or "feasibility" and were not actually pilot studies leaving a total of 54 papers. Table 1 shows the results by journal and by type of study and also shows the numbers reported by Lancaster et al. [ 3 ] for 2000-01 in the same medical journals. There was a decrease in the proportion of pilot studies published over the period of time, however the difference was not statistically significant (2.0% vs 1.6%; X 2 = 1.6, P = 0.2). It is noticeable that the Phase I or Phase II studies are largely confined to the cancer journals.

Lancaster et al [ 3 ] found that 50% of pilot studies reported the intention of further work yet we identified only 8 (8.8%) which were followed up by a major study. Of these 2 (25%) were published in the same journal as the pilot.

Twenty-six of the studies found in 2007-8 were described as pilot or feasibility studies for randomized clinical trials (RCTs) including Phase II studies. Table 2 gives the numbers of studies which describe specific components of RCTs. Sample size calculations were performed and reported in 9 (36%) of the studies. Hypothesis testing and performing inferential statistics to report significant results was observed in 21 (81%) of pilot studies. The processes of blinding was observed in only 5 (20%) although the randomization procedure was applied or tested in 16 (62%) studies. Similarly a control group was assigned in most of the studies (n = 18; 69%). As many as 21 (81%) of pilot studies suggested the need for further investigation of the tested drug or procedure and did not report conclusive results on the basis of their pilot data. The median number of participants was 76, inter-quartile range (42, 216).

Of the 54 studies in 2007-8, a total of 20 were described as 'pilot' and 34 were described as 'feasibility' studies. Table 3 contrasts those which were identified by the keyword 'pilot' with those identified by 'feasibility'. Those using 'pilot' were more likely to have a pre-study sample size estimate, to use randomization and to use a control group. In the 'pilot' group 16(80%) suggested further study, in contrast to 15 (44%) in the 'feasibility' group.

A total of 34 studies were identified using the term 'feasibility' or 'pilot' in the title or research summary which were prospective interventional studies and were closed, i.e. not currently running and available for analysis. Only 12 studies were interventional pilot/feasibility studies which included testing of some component of the research process. Of these 5 were referred to as 'feasibility', 6 as 'pilot' and 1 as both 'feasibility' and 'pilot' (Table 4 ).

The methodological components tested within these studies were: estimation of sample size; number of subjects eligible; resources (e.g. cost), time scale; population-related (e.g. exclusion criteria), randomisation process/acceptability; data collection systems/forms; outcome measures; follow-up (response rates, adherence); overall design; whole trial feasibility. In addition to one or more of these, some studies also looked at clinical outcomes including: feasibility/acceptability of intervention; dose, efficacy and safety of intervention.

The results are shown in Table 4 . Pilot studies alone included estimation of sample size for a future bigger study and tested a greater number of components in each study. The majority of the pilots and the feasibility studies ran the whole study 'in miniature' as it would be in the full study, with or without randomization.

As an example of a pilot study consider 'CHOICES: A pilot patient preference randomised controlled trial of admission to a Women's Crisis House compared with psychiatric hospital admissions' http://www.iop.kcl.ac.uk/projects/default.aspx?id=10290 . This study looked at multiple components of a potential bigger study. It aimed to determine the proportion of women unwilling to be randomised, the feasibility of a patient preference RCT design, the outcome and cost measures to determine which outcome measures to use, the recruitment and drop out rates; and to estimate the levels of outcome variability to calculate sample sizes for the main study. It also intended to develop a user focused and designed instrument which is the outcome from the study. The sample size was 70.

The editors of five (out of seven) medical journals responded to our request for information regarding publishing policy for pilot studies. Four of the journals did not have a specified policy about publishing pilot studies and mostly reported that pilot trials cannot be published if the standard is lower than a full clinical trial requirement. The Lancet has started creating space for preliminary phase I trials and set a different standard for preliminary studies. Most of the other journals do not encourage the publication of pilot studies because they consider them less rigorous than main studies. Nevertheless some editors accepted pilot studies for publication by compromising only on the requirement for a pre-study sample size calculation. All other methodological issued were considered as important as for the full trials, such as trial registration, randomization, hypothesis testing, statistical analysis and reporting according to the CONSORT guidelines.

All three funding bodies made a point to note that pilot and feasibility studies would be considered for funding. Thabane et al [ 2 ] provided a list of websites which define pilot or feasibility studies. We considered the NETSCC definition to be most helpful and to most closely mirror what investigators are doing and it is given below.

NETSCC definition of pilot and feasibility studies http://www.netscc.ac.uk/glossary/

Feasibility Studies

Feasibility Studies are pieces of research done before a main study. They are used to estimate important parameters that are needed to design the main study. For instance:

standard deviation of the outcome measure, which is needed in some cases to estimate sample size,

willingness of participants to be randomised,

willingness of clinicians to recruit participants,

number of eligible patients,

characteristics of the proposed outcome measure and in some cases feasibility studies might involve designing a suitable outcome measure,

follow-up rates, response rates to questionnaires, adherence/compliance rates, ICCs in cluster trials, etc.

Feasibility studies for randomised controlled trials may not themselves be randomised. Crucially, feasibility studies do not evaluate the outcome of interest; that is left to the main study.

If a feasibility study is a small randomised controlled trial, it need not have a primary outcome and the usual sort of power calculation is not normally undertaken. Instead the sample size should be adequate to estimate the critical parameters (e.g. recruitment rate) to the necessary degree of precision.

Pilot studies

A Pilot Study is a version of the main study that is run in miniature to test whether the components of the main study can all work together. It is focused on the processes of the main study, for example to ensure recruitment, randomisation, treatment, and follow-up assessments all run smoothly. It will therefore resemble the main study in many respects. In some cases this will be the first phase of the substantive study and data from the pilot phase may contribute to the final analysis; this can be referred to as an internal pilot. Alternatively at the end of the pilot study the data may be analysed and set aside, a so-called external pilot.

In our repeat of Lancaster et al's study [ 3 ] we found that the reporting of pilot studies was still poor. It is generally accepted that small, underpowered clinical trials are unethical [ 4 ]. Thus it is not an excuse to label such a study as a pilot and hope to make it ethical. We have shown that pilot studies have different objectives to RCTs and these should be clearly described. Participants in such studies should be informed that they are in a pilot study and that there may not be a further larger study.

It is helpful to make a more formal distinction between a 'pilot' and a 'feasibility' study. We found that studies labeled 'feasibility' were conducted with more flexible methodology compared to those labeled 'pilot'. For example the term 'feasibility' has been used for large scale studies such as a screening programme applied at a population level to determine the initial feasibility of the programme. On the other hand 'pilot' studies were reported with more rigorous methodological components like sample size estimation, randomization and control group selection than studies labeled 'feasibility'. We found the NETSCC definition to be the most helpful since it distinguishes between these types of study.

In addition it was observed that most of the pilot studies report their results as inconclusive, with the intention of conducting a further, larger study. In contrast, several of the feasibility studies did not admit such an intention. On the basis of their intention one would have expected about 45 of the studies identified by Lancaster et al in 2000/1 to have been followed by a bigger study whereas we only found 8. This would reflect the opinion of most of the journal editors and experts who responded to our survey, who felt that pilot studies rarely act as a precursor for a bigger study. The main reason given was that if the pilot shows significant results then researchers may not find it necessary to conduct the main trial. In addition if the results are unfavorable or the authors find an unfeasible procedure, the main study is less likely to be considered useful. Our limited review of funding bodies was encouraging. Certainly when reviewing grant applications, we have found it helpful to have the results of a pilot study included in the bid. We think that authors of pilots studies should be explicit as to their purpose, e.g. to test a new procedure in preparation for a clinical trial. We also think that authors of proposals for pilot studies should be more explicit as to the criteria which lead to further studies being abandoned, and that this should be an important part of the proposal.

In the Portfolio Database review, only pilot studies cited an intention to estimate sample size calculations for future studies and the majority of pilot studies were full studies run with smaller sample sizes to test out a number of methodological components and clinical outcomes simultaneously. In comparison the feasibility studies tended to focus on fewer methodological components within individual studies. For example, the 6 pilot studies reported the intention to evaluate a total of 17 methodological components whereas in the 5 feasibility studies a total of only 6 methodological components were specifically identified as being under investigation (Table 4 ). However, both pilot and feasibility studies included trials run as complete studies, including randomization, but with sample sizes smaller than would be intended in the full study and the distinction between the two terms was not clear-cut.

Another reason for conducting a pilot study is to provide information to enable a sample size calculation in a subsequent main study. However since pilot studies tend to be small, the results should be interpreted with caution [ 5 ]. Only a small proportion of published pilot studies reported pre-study sample size calculations. Most journal editors reported that a sample size calculation is not a mandatory criterion for publishing pilot studies and suggested that it should not be done.

Some authors suggest that analysis of pilot studies should mainly be descriptive,[ 3 , 6 ] as hypothesis testing requires a powered sample size which is usually not available in pilot studies. In addition, inferential statistics and testing hypothesis for effectiveness require a control arm which may not be present in all pilot studies. However most of the pilot interventional studies in this review contained a control group and the authors performed and reported hypothesis testing for one or more variables. Some tested the effectiveness of an intervention and others just performed statistical testing to discover any important associations in the study variables. Observed practice is not necessarily good practice and we concur with Thabane et al [ 2 ] that any testing of an intervention needs to be reported cautiously.

The views of the journal editors, albeit from a small sample, were not particularly encouraging and reflected the experience of Lancaster et al [ 3 ]. Pilot studies, by their nature, will not produce 'significant' (i.e P < 0.05) results. We believe that publishing the results of well conducted pilot or feasibility studies is important for research, irrespective of outcome.. There is an increasing awareness that publishing only 'significant' results can lead to considerably error [ 7 ]. The journals we considered were all established, paper journals and perhaps the newer electronic journals will be more willing to consider the publication of the results from these types of studies.

We may expect that trials will increasingly be used to evaluate 'complex interventions'[ 8 , 9 ]. The MRC guidelines [ 8 ] explicitly suggest that preliminary studies, including pilots, be used prior to any major trial which seeks to evaluate a package of interventions (such as an educational course), rather than a single intervention (such as a drug). Thus it is likely that reviewers will be increasingly asked to pronounce on these and will require guidance as to how to review them.

Conclusions

We conclude that pilot studies are still poorly reported, with inappropriate emphasis on hypothesis-testing. We believe authors should be aware of the different requirements of pilot studies and feasibility studies and report them appropriately. We found that in practice the definitions of feasibility and pilot studies are not distinct and vary between health research funding bodies and we suggest use of the NETSCC definition to clarify terminology.

Arnold DM, Burns KE, Adhikari NK, Kho ME, Meade MO, Cook DJ: McMaster Critical Care Interest Group. The design and interpretation of pilot trials in clinical research in critical care. Crit Care Med. 2009, 37 (Suppl 1): S69-74. 10.1097/CCM.0b013e3181920e33.

Article   PubMed   Google Scholar  

Thabane L, Ma J, Chu R, Cheng J, Ismaila A, Rios LP, Robson R, Thabane M, Goldsmith CH: A tutorial on pilot studies: The what, why and How. BMC Medical Research Methodology. 2010, 10: 1-10.1186/1471-2288-10-1.

Article   PubMed   PubMed Central   Google Scholar  

Lancaster GA, Dodd S, Williamson PR: Design and analysis of pilot studies: recommendations for good practice. J Eval Clin Pract. 2004, 10: 307-12. 10.1111/j..2002.384.doc.x.

Halpern SD, Karlawish JH, Berlin JA: The continuing unethical conduct of underpowered clinical trials. JAMA. 2002, 288: 358-62. 10.1001/jama.288.3.358.

Kraemer HC, Mintz J, Noda A, Tinklenberg J, Yesavage JA: Caution regarding the use of pilot studies to guide power calculations for study proposals. Arch Gen Psychiatry. 2006, 63: 484-9. 10.1001/archpsyc.63.5.484.

Grimes DA, Schulz KF: Descriptive studies: what they can and cannot do. Lancet. 2002, 359: 145-9. 10.1016/S0140-6736(02)07373-7.

Ioannidis JPA: Why Most Published Research Findings Are False. PLoS Med. 2005, 2 (8): e124-10.1371/journal.pmed.0020124.

Lancaster GA, Campbell MJ, Eldridge S, Farrin A, Marchant M, Muller S, Perera R, Peters TJ, Prevost AT, Rait G: Trials in Primary Care: issues in the design of complex intervention. Statistical Methods in Medical Research. 2010, to appear

Google Scholar  

Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M: Developing and evaluating complex interventions: new guidance. 2008, Medical Research Council, [ http://www.mrc.ac.uk/Utilities/Documentrecord/index.htm?d=MRC004871 ]

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/10/67/prepub

Download references

Author information

Authors and affiliations.

Health Services Research, ScHARR, University of Sheffield, Regent Court, Regent St Sheffield, S1 4DA, UK

Mubashir Arain, Michael J Campbell & Cindy L Cooper

Department of Mathematics and Statistics, University of Lancaster, LA1 4YF, UK

Gillian A Lancaster

You can also search for this author in PubMed   Google Scholar

Corresponding author

Correspondence to Michael J Campbell .

Additional information

Competing interests.

The authors declare that they have no competing interests.

Authors' contributions

MA reviewed the papers of 2000/1 and those of 2007/8 under the supervision of MC and helped to draft the manuscript. MC conceived of the study, and participated in its design and coordination and drafted the manuscript. CC conducted the portfolio database study and commented on the manuscript. GA conducted the original study, reviewed 20 papers and commented on the manuscript. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article is distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Reprints and permissions

About this article

Cite this article.

Arain, M., Campbell, M.J., Cooper, C.L. et al. What is a pilot or feasibility study? A review of current practice and editorial policy. BMC Med Res Methodol 10 , 67 (2010). https://doi.org/10.1186/1471-2288-10-67

Download citation

Received : 20 May 2010

Accepted : 16 July 2010

Published : 16 July 2010

DOI : https://doi.org/10.1186/1471-2288-10-67

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Pilot Study
  • Feasibility Study
  • Journal Editor
  • Methodological Component

BMC Medical Research Methodology

ISSN: 1471-2288

research methodology pilot study

A tutorial on pilot studies: the what, why and how

Affiliation.

  • 1 Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton ON, Canada. [email protected]
  • PMID: 20053272
  • PMCID: PMC2824145
  • DOI: 10.1186/1471-2288-10-1

Pilot studies for phase III trials - which are comparative randomized trials designed to provide preliminary evidence on the clinical efficacy of a drug or intervention - are routinely performed in many clinical areas. Also commonly know as "feasibility" or "vanguard" studies, they are designed to assess the safety of treatment or interventions; to assess recruitment potential; to assess the feasibility of international collaboration or coordination for multicentre trials; to increase clinical experience with the study medication or intervention for the phase III trials. They are the best way to assess feasibility of a large, expensive full-scale study, and in fact are an almost essential pre-requisite. Conducting a pilot prior to the main study can enhance the likelihood of success of the main study and potentially help to avoid doomed main studies. The objective of this paper is to provide a detailed examination of the key aspects of pilot studies for phase III trials including: 1) the general reasons for conducting a pilot study; 2) the relationships between pilot studies, proof-of-concept studies, and adaptive designs; 3) the challenges of and misconceptions about pilot studies; 4) the criteria for evaluating the success of a pilot study; 5) frequently asked questions about pilot studies; 7) some ethical aspects related to pilot studies; and 8) some suggestions on how to report the results of pilot investigations using the CONSORT format.

  • Clinical Trials, Phase III as Topic / methods
  • Pilot Projects*
  • Randomized Controlled Trials as Topic
  • Research Design* / standards
  • Open access
  • Published: 31 October 2020

Guidance for conducting feasibility and pilot studies for implementation trials

  • Nicole Pearson   ORCID: orcid.org/0000-0003-2677-2327 1 , 2 ,
  • Patti-Jean Naylor 3 ,
  • Maureen C. Ashe 5 ,
  • Maria Fernandez 4 ,
  • Sze Lin Yoong 1 , 2 &
  • Luke Wolfenden 1 , 2  

Pilot and Feasibility Studies volume  6 , Article number:  167 ( 2020 ) Cite this article

80k Accesses

116 Citations

24 Altmetric

Metrics details

Implementation trials aim to test the effects of implementation strategies on the adoption, integration or uptake of an evidence-based intervention within organisations or settings. Feasibility and pilot studies can assist with building and testing effective implementation strategies by helping to address uncertainties around design and methods, assessing potential implementation strategy effects and identifying potential causal mechanisms. This paper aims to provide broad guidance for the conduct of feasibility and pilot studies for implementation trials.

We convened a group with a mutual interest in the use of feasibility and pilot trials in implementation science including implementation and behavioural science experts and public health researchers. We conducted a literature review to identify existing recommendations for feasibility and pilot studies, as well as publications describing formative processes for implementation trials. In the absence of previous explicit guidance for the conduct of feasibility or pilot implementation trials specifically, we used the effectiveness-implementation hybrid trial design typology proposed by Curran and colleagues as a framework for conceptualising the application of feasibility and pilot testing of implementation interventions. We discuss and offer guidance regarding the aims, methods, design, measures, progression criteria and reporting for implementation feasibility and pilot studies.

Conclusions

This paper provides a resource for those undertaking preliminary work to enrich and inform larger scale implementation trials.

Peer Review reports

The failure to translate effective interventions for improving population and patient outcomes into policy and routine health service practice denies the community the benefits of investment in such research [ 1 ]. Improving the implementation of effective interventions has therefore been identified as a priority of health systems and research agencies internationally [ 2 , 3 , 4 , 5 , 6 ]. The increased emphasis on research translation has resulted in the rapid emergence of implementation science as a scientific discipline, with the goal of integrating effective medical and public health interventions into health care systems, policies and practice [ 1 ]. Implementation research aims to do this via the generation of new knowledge, including the evaluation of the effectiveness of implementation strategies [ 7 ]. The term “implementation strategies” is used to describe the methods or techniques (e.g. training, performance feedback, communities of practice) used to enhance the adoption, implementation and/or sustainability of evidence-based interventions (Fig. 1 ) [ 8 , 9 ].

figure 1

Conceptual role of implementation strategies in improving intervention implementation and patient and public health outcomes

While there has been a rapid increase in the number of implementation trials over the past decade, the quality of trials has been criticised, and the effects of the strategies for such trials on implementation, patient or public health outcomes have been modest [ 11 , 12 , 13 ]. To improve the likelihood of impact, factors that may impede intervention implementation should be considered during intervention development and across each phase of the research translation process [ 2 ]. Feasibility and pilot studies play an important role in improving the conduct and quality of a definitive randomised controlled trial (RCT) for both intervention and implementation trials [ 10 ]. For clinical or public health interventions, pilot and feasibility studies may serve to identify potential refinements to the intervention, address uncertainties around the feasibility of intervention trial methods, or test preliminary effects of the intervention [ 10 ]. In implementation research, feasibility and pilot studies perform the same functions as those for intervention trials, however with a focus on developing or refining implementation strategies, refining research methods for an implementation intervention trial, or undertake preliminary testing of implementation strategies [ 14 , 15 ]. Despite this, reviews of implementation studies appear to suggest that few full implementation randomised controlled trials have undertaken feasibility and pilot work in advance of a larger trial [ 16 ].

A range of publications provides guidance for the conduct of feasibility and pilot studies for conventional clinical or public health efficacy trials including Guidance for Exploratory Studies of complex public health interventions [ 17 ] and the Consolidated Standards of Reporting Trials (CONSORT 2010) for Pilot and Feasibility trials [ 18 ]. However, given the differences between implementation trials and conventional clinical or public health efficacy trials, the field of implementation science has identified the need for nuanced guidance [ 14 , 15 , 16 , 19 , 20 ]. Specifically, unlike traditional feasibility and pilot studies that may include the preliminary testing of interventions on individual clinical or public health outcomes, implementation feasibility and pilot studies that explore strategies to improve intervention implementation often require assessing changes across multiple levels including individuals (e.g. service providers or clinicians) and organisational systems [ 21 ]. Due to the complexity of influencing behaviour change, the role of feasibility and pilot studies of implementation may also extend to identifying potential causal mechanisms of change and facilitate an iterative process of refining intervention strategies and optimising their impact [ 16 , 17 ]. In addition, where conventional clinical or public health efficacy trials are typically conducted under controlled conditions and directed mostly by researchers, implementation trials are more pragmatic [ 15 ]. As is the case for well conducted effectiveness trials, implementation trials often require partnerships with end-users and at times, the prioritisation of end-user needs over methods (e.g. random assignment) that seek to maximise internal validity [ 15 , 22 ]. These factors pose additional challenges for implementation researchers and underscore the need for guidance on conducting feasibility and pilot implementation studies.

Given the importance of feasibility and pilot studies in improving implementation strategies and the quality of full-scale trials of those implementation strategies, our aim is to provide practice guidance for those undertaking formative feasibility or pilot studies in the field of implementation science. Specifically, we seek to provide guidance pertaining to the three possible purposes of undertaking pilot and feasibility studies, namely (i) to inform implementation strategy development, (ii) to assess potential implementation strategy effects and (iii) to assess the feasibility of study methods.

A series of three facilitated group discussions were conducted with a group comprising of the 6 members from Canada, the U.S. and Australia (authors of the manuscript) that were mutually interested in the use of feasibility and pilot trials in implementation science. Members included international experts in implementation and behavioural science, public health and trial methods, and had considerable experience in conducting feasibility, pilot and/ or implementation trials. The group was responsible for developing the guidance document, including identification and synthesis of pertinent literature, and approving the final guidance.

To inform guidance development, a literature review was undertaken in electronic bibliographic databases and google, to identify and compile existing recommendations and guidelines for feasibility and pilot studies broadly. Through this process, we identified 30 such guidelines and recommendations relevant to our aim [ 2 , 10 , 14 , 15 , 17 , 18 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 ]. In addition, seminal methods and implementation science texts recommended by the group were examined. These included the CONSORT 2010 Statement: extension to randomised pilot and feasibility trials [ 18 ], the Medical Research Council’s framework for development and evaluation of randomised controlled trials for complex interventions to improve health [ 2 ], the National Institute of Health Research (NIHR) definitions [ 39 ] and the Quality Enhancement Research Initiative (QUERI) Implementation Guide [ 4 ]. A summary of feasibility and pilot study guidelines and recommendations, and that of seminal methods and implementation science texts, was compiled by two authors. This document served as the primary discussion document in meetings of the group. Additional targeted searches of the literature were undertaken in circumstances where the identified literature did not provide sufficient guidance. The manuscript was developed iteratively over 9 months via electronic circulation and comment by the group. Any differences in views between reviewers was discussed and resolved via consensus during scheduled international video-conference calls. All members of the group supported and approved the content of the final document.

The broad guidance provided is intended to be used as supplementary resources to existing seminal feasibility and pilot study resources. We used the definitions of feasibility and pilot studies as proposed by Eldridge and colleagues [ 10 ]. These definitions propose that any type of study relating to the preparation for a main study may be classified as a “feasibility study”, and that the term “pilot” study represents a subset of feasibility studies that specifically look at a design feature proposed for the main trial, whether in part of in full, that is being conducted on a smaller scale [ 10 ]. In addition, when referring to pilot studies, unless explicitly stated otherwise, we will primarily focus on pilot trials using a randomised design. We focus on randomised trials as such designs are the most common trial design in implementation research, and randomised designs may provide the most robust estimates of the potential effect of implementation strategies [ 46 ]. Those undertaking pilot studies that employ non-randomised designs need to interpret the guidance provided in this context. We acknowledge, however, that using randomised designs can prove particularly challenging in the field of implementation science, where research is often undertaken in real-world contexts with pragmatic constraints.

We used the effectiveness-implementation hybrid trial design typology proposed by Curran and colleagues as the framework for conceptualising the application of feasibility testing of implementation interventions [ 47 ]. The typology makes an explicit distinction between the purpose and methods of implementation and conventional clinical (or public health efficacy) trials. Specifically, the first two of the three hybrid designs may be relevant for implementation feasibility or pilot studies. Hybrid Type 1 trials are those designed to test the effectiveness of an intervention on clinical or public health outcomes (primary aim) while conducting a feasibility or pilot study for future implementation via observing and gathering information regarding implementation in a real-world setting/situation (secondary aim) [ 47 ]. Hybrid Type 2 trials involve the simultaneous testing of both the clinical intervention and the testing or feasibility of a formed implementation intervention/strategy as co-primary aims. For this design, “testing” is inclusive of pilot studies with an outcome measure and related hypothesis [ 47 ]. Hybrid Type 3 trials are definitive implementation trials designed to test the effectiveness of an implementation strategy whilst also collecting secondary outcome data on clinical or public health outcomes on a population of interest [ 47 ]. As the implementation aim of the trial is a definitively powered trial, it was not considered relevant to the conduct of feasibility and pilot studies in the field and will not be discussed.

Embedding of feasibility and pilot studies within Type 1 and Type 2 effectiveness-implementation hybrid trials has been recommended as an efficient way to increase the availability of information and evidence to accelerate the field of implementation science and the development and testing of implementation strategies [ 4 ]. However, implementation feasibility and pilot studies are also undertaken as stand-alone exploratory studies and do not include effectiveness measures in terms of the patient or public health outcomes. As such, in addition to discussing feasibility and pilot trials embedded in hybrid trial designs, we will also refer to stand-alone implementation feasibility and pilot studies.

An overview of guidance (aims, design, measures, sample size and power, progression criteria and reporting) for feasibility and pilot implementation studies can be found in Table 1 .

Purpose (aims)

The primary objective of hybrid type 1 trial is to assess the effectiveness of a clinical or public health intervention (rather than an implementation strategy) on the patient or population health outcomes [ 47 ]. Implementation strategies employed in these trials are often designed to maximise the likelihood of an intervention effect [ 51 ], and may not be intended to represent the strategy that would (or could feasibly), be used to support implementation in more “real world” contexts. Specific aims of implementation feasibility or pilot studies undertaken as part of Hybrid Type 1 trials are therefore formative and descriptive as the implementation strategy has not been fully formed nor will be tested. Thus, the purpose of a Hybrid Type 1 feasibility study is generally to inform the development or refinement of the implementation strategy rather than to test potential effects or mechanisms [ 22 , 47 ]. An example of a Hybrid Type 1 trial by Cabassa and colleagues is provided in Additional file 1 [ 52 ].

In Hybrid Type 2 trial designs, there is a dual purpose to test: (i) the clinical or public health effectiveness of the intervention on clinical or public health outcomes (e.g. measure of disease or health behaviour) and (ii) test or measure the impact of the implementation strategy on implementation outcomes (e.g. adoption of health policy in a community setting) [ 53 ]. However, testing the implementation strategy on implementation outcomes may be a secondary aim in these trials and positioned as a pilot [ 22 ]. In Hybrid Type 2 trial designs, the implementation strategy is more developed than in Hybrid Type 1 trials, resembling that intended for future testing in a definitive implementation randomised controlled trial. The dual testing of the evidence-based intervention and implementation interventions or strategies in Hybrid Type 2 trial designs allows for direct assessment of potential effects of an implementation strategy and exploration of components of the strategy to further refine logic models. Additionally, such trials allow for assessments of the feasibility, utility, acceptability or quality of research methods for use in a planned definitive trial. An example of a Hybrid Type 2 trial design by Barnes and colleagues [ 54 ] is included in Additional file 2 .

Non-hybrid pilot implementation studies are undertaken in the absence of a broader effectiveness trial. Such studies typically occur when the effectiveness of a clinical or public health intervention is well established, but robust strategies to promote its broader uptake and integration into clinical or public health services remain untested [ 15 ]. In these situations, implementation pilot studies may test or explore specific trial methods for a future definitive randomised implementation trial. Similarly, a pilot implementation study may also be undertaken in a way that provides a more rigorous formative evaluation of hypothesised implementation strategy mechanisms [ 55 ], or potential impact of implementation strategies [ 56 ], using similar approaches to that employed in Hybrid Type 2 trials. Examples of potential aims for feasibility and pilot studies are outlined in Table 2 .

For implementation feasibility or pilot studies, as is the case for these types of studies in general, the selection of research design should be guided by the specific research question that the study is seeking to address [ 57 ]. Although almost any study design may be used, researchers should review the merits and potential threats to internal and external validity to help guide the selection of research design for feasibility/pilot testing [ 15 ].

As Hybrid Type 1 trials are primarily concerned with testing the effectiveness of an intervention (rather than implementation strategy), the research design will typically employ power calculations and randomisation procedures at the health outcome level to measure the effect on behaviour, symptoms, functional and/or other clinical or public health outcomes. Hybrid Type 1 feasibility studies may employ a variety of designs usually nested within the experimental group (those receiving the intervention and any form of an implementation support strategy) of the broader efficacy trial [ 47 ]. Consistent with the aims of Hybrid Type 1 feasibility and pilot studies, the research designs employed are likely to be non-comparative. Cross-sectional surveys, interviews or document review, qualitative research or mix methods approaches may be used to assess implementation contextual factors, such as barriers and enablers to implementation and/or the acceptability, perceived feasibility or utility of implementation strategies or research methods [ 47 ].

Pilot implementation studies as part of Hybrid Type 2 designs can make use of the comparative design of the broader effectiveness trial to examine the potential effects of the implementation strategy [ 47 ] and more robustly assess the implementation mechanisms, determinants and influence of broader contextual factors [ 53 ]. In this trial type, mixed method and qualitative methods may complement the findings of between group (implementation strategy arm versus comparison) quantitative comparisons, enable triangulation and provide more comprehensive evidence to inform implementation strategy development and assessment. Stand-alone implementation feasibility and pilot implementation studies are free from the constraints and opportunities of research embedded in broader effectiveness trials. As such, research can be designed in a way that best addresses the explicit implementation objectives of the study. Specifically, non-hybrid pilot studies can maximise the applicability of study findings for future definitive trials by employing methods to directly test trial methods such as recruitment or retention strategies [ 17 ], enabling estimates of implementation strategies effects [ 56 ] or capturing data to explicitly test logic models or strategy mechanisms.

The selection of outcome measures should be linked directly to the objectives of the feasibility or pilot study. Where appropriate, measures should be objective or have suitable psychometric properties, such as evidence of reliability and validity [ 58 , 59 ]. Public health evaluation frameworks often guide the choice of outcome measure in feasibility and pilot implementation work and include RE_AIM [ 60 ], PRECEDE_PROCEED [ 61 ], Proctor and colleagues framework on outcomes for implementation research [ 62 ] and more recently, the “Implementation Mapping” framework [ 63 ]. Recent work by McKay and colleagues suggests a minimum data set of implementation outcomes that includes measures of adoption, reach, dose, fidelity and sustainability [ 46 ]. We discuss selected measures below and provide a summary in Table 3 [ 46 ]. Such measures could be assessed using quantitative or qualitative or mixed methods [ 46 ].

Measures to assess potential implementation strategy effects

In addition to assessing the effects of an intervention on individual clinical or public health outcomes, Hybrid Type 2 trials (and some non-hybrid pilot studies) are interested in measures of the potential effects of an implementation strategy on desired organisational or clinician practice change such as adherence to a guideline, process, clinical standard or delivery of a program [ 62 ]. A range of potential outcomes that could be used to assess implementation strategy effects has been identified, including measures of adoption, reach, fidelity and sustainability [ 46 ]. These outcomes are described in Table 2 , including definitions and examples of how they may be applied to the implementation component of innovation being piloted. Standardised tools to assess these outcomes are often unavailable due to the unique nature of interventions being implemented and the variable (and changing) implementation context in which the research is undertaken [ 64 ]. Researchers may collect outcome data for these measures as part of environmental observations, self-completed checklists or administrative records, audio recording of client sessions or other methods suited to their study and context [ 62 ]. The limitations of such methods, however, need to be considered.

Measures to inform the design or development of the implementation strategy

Measures informing the design or development of the implementation strategy are potentially part of all types of feasibility and pilot implementation studies. An understanding of the determinants of implementation is critical to implementation strategy development. A range of theoretical determinant frameworks have been published which describe factors that may influence intervention implementation [ 65 ], and systematic reviews have been undertaken describing the psychometric properties of many of these measures [ 64 , 66 ]. McKay and colleagues have also identified a priority set of determinants for implementation trials that could be considered for use in implementation feasibility and pilot studies, including measures of context, acceptability, adaptability, feasibility, compatibility, cost, culture, dose, complexity and self-efficacy [ 46 ]. These determinants are described in Table 3 , including definitions and how such measures may be applied to an implementation feasibility or pilot study. Researchers should consider, however, the application of such measures to assess both the intervention that is being implemented (as in a conventional intervention feasibility and pilot study) and the strategy that is being employed to facilitate its implementation, given the importance of the interaction between these factors and implementation success [ 46 ]. Examples of the potential application of measures to both the intervention and its implementation strategies have been outlined elsewhere [ 46 ]. Although a range of quantitative tools could be used to measure such determinants [ 58 , 66 ], qualitative or mixed methods are generally recommended given the capacity of qualitative measures to provide depth to the interpretation of such evaluations [ 40 ].

Measures of potential implementation determinants may be included to build or enhance logic models (Hybrid Type 1 and 2 feasibility and pilot studies) and explore implementation strategy mechanisms (Hybrid Type 2 pilot studies and non-hybrid pilot studies) [ 67 ]. If exploring strategy mechanisms, a hypothesized logic model underpinning the implementation strategy should be articulated including strategy-mechanism linkages, which are required to guide the measurement of key determinants [ 55 , 63 ]. An important determinant which can complicate logic model specification and measurement is the process of adaptation—modifications to the intervention or its delivery (implementation), through the input of service providers or implementers [ 68 ]. Logic models should specify components of implementation strategies thought to be “core” to their effects and those which are thought to be “non-core” where adaptation may occur without adversely impacting on effects. Stirman and colleagues propose a method for assessing adaptations that could be considered for use in pilot and feasibility studies of implementation trials [ 69 ]. Figure 2 provides an example of some of the implementation logic model components that may be developed or refined as part of feasibility or pilot studies of implementation [ 15 , 63 ].

figure 2

Example of components of an Implementation logic model

Measures to assess the feasibility of study methods

Measures of implementation feasibility and pilot study methods are similar to those of conventional studies for clinical or public health interventions. For example, standard measures of study participation and thresholds for study attrition (e.g. >20%) rates [ 73 ] can be employed in implementation studies [ 67 ]. Previous studies have also surveyed study data collectors to assess the success of blinding strategies [ 74 ]. Researchers may also consider assessing participation or adherence to implementation data collection procedures, the comprehension of survey items, data management strategies or other measures of feasibility of study methods [ 15 ].

Pilot study sample size and power

In effectiveness trials, power calculations and sample size decisions are primarily based on the detection of a clinically meaningful difference in measures of the effects of the intervention on the patient or public health outcomes such as behaviour, disease, symptomatology or functional outcomes [ 24 ]. In this context, the available study sample for implementation measures included in Hybrid Type 1 or 2 feasibility and pilot studies may be constrained by the sample and power calculations of the broader effectiveness trial in which they are embedded [ 47 ]. Nonetheless, a justification for the anticipated sample size for all implementation feasibility or pilot studies (hybrid or stand-alone) is recommended [ 18 ], to ensure that implementation measures and outcomes achieve sufficient estimates of precision to be useful. For Hybrid type 2 and relevant stand-alone implementation pilot studies, sample size calculations for implementation outcomes should seek to achieve adequate estimates of precision deemed sufficient to inform progression to a fully powered trial [ 18 ].

Progression criteria

Stating progression criteria when reporting feasibility and pilot studies is recommended as part of the CONSORT 2010 extension to randomised pilot and feasibility trials guidelines [ 18 ]. Generally, it is recommended that progression criteria should be set a priori and be specific to the feasibility measures, components and/or outcomes assessed in the study [ 18 ]. While little guidance is available, ideas around suitable progression criteria include assessment of uncertainties around feasibility, meeting recruitment targets, cost-effectiveness and refining causal hypotheses to be tested in future trials [ 17 ]. When developing progression criteria, the use of guidelines is suggested rather than strict thresholds [ 18 ], in order to allow for appropriate interpretation and exploration of potential solutions, for example, the use of a traffic light system with varying levels of acceptability [ 17 , 24 ]. For example, Thabane and colleagues recommend that, in general, the outcome of a pilot study can be one of the following: (i) stop—main study not feasible (red); (ii) continue, but modify protocol—feasible with modifications (yellow); (iii) continue without modifications, but monitor closely—feasible with close monitoring and (iv) continue without modifications (green) (44)p5.

As the goal of Hybrid Type 1 implementation component is usually formative, it may not be necessary to set additional progression criteria in terms of the implementation outcomes and measures examined. As Hybrid Type 2 trials test an intervention and can pilot an implementation strategy, criteria for these and non-hybrid pilot studies may set progression criteria based on evidence of potential effects but may also consider the feasibility of trial methods, service provider, organisational or patient (or community) acceptability, fit with organisational systems and cost-effectiveness [ 17 ]. In many instances, the progression of implementation pilot studies will often require the input and agreement of stakeholders [ 27 ]. As such, the establishment of progression criteria and the interpretation of pilot and feasibility study findings in the context of such criteria require stakeholder input [ 27 ].

Reporting suggestions

As formal reporting guidelines do not exist for hybrid trial designs, we would recommend that feasibility and pilot studies as part of hybrid designs draw upon best practice recommendations from relevant reporting standards such as the CONSORT extension for randomised pilot and feasibility trials, the Standards for Reporting Implementation Studies (STaRI) guidelines and the Template for Intervention Description and Replication (TIDieR) guide as well as any other design relevant reporting standards [ 48 , 50 , 75 ]. These, and further reporting guidelines, specific to the particular research design chosen, can be accessed as part of the EQUATOR (Enhancing the QUAility and Transparency Of health Research) network—a repository for reporting guidance [ 76 ]. In addition, researchers should specify the type of implementation feasibility or pilot study being undertaken using accepted definitions. If applicable, specification and justification behind the choice of hybrid trial design should also be stated. In line with existing recommendations for reporting of implementation trials generally, reporting on the referent of outcomes (e.g. specifying if the measure in relation to the specific intervention or the implementation strategy) [ 62 ], is also particularly pertinent when reporting hybrid trial designs.

Concerns are often raised regarding the quality of implementation trials and their capacity to contribute to the collective evidence base [ 3 ]. Although there have been many recent developments in the standardisation of guidance for implementation trials, information on the conduct of feasibility and pilot studies for implementation interventions remains limited, potentially contributing to a lack of exploratory work in this area and a limited evidence base to inform effective implementation intervention design and conduct [ 15 ]. To address this, we synthesised the existing literature and provide commentary and guidance for the conduct of implementation feasibility and pilot studies. To our knowledge, this work is the first to do so and is an important first step to the development of standardised guidelines for implementation-related feasibility and pilot studies.

Availability of data and materials

Not applicable.

Abbreviations

Randomised controlled trial

Consolidated Standards of Reporting Trials

Enhancing the QUAility and Transparency Of health Research

Standards for Reporting Implementation Studies

Strengthening the Reporting of Observational Studies in Epidemiology

Template for Intervention Description and Replication

National Institute of Health Research

Quality Enhancement Research Initiative

Bauer MS, Damschroder L, Hagedorn H, Smith J, Kilbourne AM. An introduction to implementation science for the non-specialist. BMC Psychol. 2015;3:32.

Article   Google Scholar  

Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M, et al. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.

Eccles MP, Armstrong D, Baker R, Cleary K, Davies H, Davies S, et al. An implementation research agenda. Implement Sci. 2009;4:18.

Department of Veterans Health Administration. Implementation Guide. Health Services Research & Development, Quality Enhancement Research Initiative. Updated 2013.

Peters DH, Nhan TT, Adam T. Implementation research: a practical guide; 2013.

Google Scholar  

Neta G, Sanchez MA, Chambers DA, Phillips SM, Leyva B, Cynkin L, et al. Implementation science in cancer prevention and control: a decade of grant funding by the National Cancer Institute and future directions. Implement Sci. 2015;10:4.

Foy R, Sales A, Wensing M, Aarons GA, Flottorp S, Kent B, et al. Implementation science: a reappraisal of our journal mission and scope. Implement Sci. 2015;10:51.

Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.

Leeman J, Birken SA, Powell BJ, Rohweder C, Shea CM. Beyond "implementation strategies": classifying the full range of strategies used in implementation science and practice. Implement Sci. 2017;12(1):125.

Eldridge SM, Lancaster GA, Campbell MJ, Thabane L, Hopewell S, Coleman CL, et al. Defining feasibility and pilot studies in preparation for randomised controlled trials: development of a conceptual framework. PLoS One. 2016;11(3):e0150205.

Article   CAS   Google Scholar  

Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012;69(2):123–57.

Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the expert recommendations for implementing change (ERIC) project. Implement Sci. 2015;10:21.

Lewis CC, Stanick C, Lyon A, Darnell D, Locke J, Puspitasari A, et al. Proceedings of the fourth biennial conference of the Society for Implementation Research Collaboration (SIRC) 2017: implementation mechanisms: what makes implementation work and why? Part 1. Implement Sci. 2018;13(Suppl 2):30.

Levati S, Campbell P, Frost R, Dougall N, Wells M, Donaldson C, et al. Optimisation of complex health interventions prior to a randomised controlled trial: a scoping review of strategies used. Pilot Feasibility Stud. 2016;2:17.

Bowen DJ, Kreuter M, Spring B, Cofta-Woerpel L, Linnan L, Weiner D, et al. How we design feasibility studies. Am J Prev Med. 2009;36(5):452–7.

Eccles M, Grimshaw J, Walker A, Johnston M, Pitts N. Changing the behavior of healthcare professionals: the use of theory in promoting the uptake of research findings. J Clin Epidemiol. 2005;58(2):107–12.

Hallingberg B, Turley R, Segrott J, Wight D, Craig P, Moore L, et al. Exploratory studies to decide whether and how to proceed with full-scale evaluations of public health interventions: a systematic review of guidance. Pilot Feasibility Stud. 2018;4:104.

Eldridge SM, Chan CL, Campbell MJ, Bond CM, Hopewell S, Thabane L, et al. CONSORT 2010 statement: extension to randomised pilot and feasibility trials. Pilot Feasibility Stud. 2016;2:64.

Proctor EK, Powell BJ, Baumann AA, Hamilton AM, Santens RL. Writing implementation research grant proposals: ten key ingredients. Implement Sci. 2012;7:96.

Stetler CB, Legro MW, Wallace CM, Bowman C, Guihan M, Hagedorn H, et al. The role of formative evaluation in implementation research and the QUERI experience. J Gen Intern Med. 2006;21(Suppl 2):S1–8.

Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Admin Pol Ment Health. 2011;38(1):4–23.

Johnson AL, Ecker AH, Fletcher TL, Hundt N, Kauth MR, Martin LA, et al. Increasing the impact of randomized controlled trials: an example of a hybrid effectiveness-implementation design in psychotherapy research. Transl Behav Med. 2018.

Arain M, Campbell MJ, Cooper CL, Lancaster GA. What is a pilot or feasibility study? A review of current practice and editorial policy. BMC Med Res Methodol. 2010;10(1):67.

Avery KN, Williamson PR, Gamble C, O’Connell Francischetto E, Metcalfe C, Davidson P, et al. Informing efficient randomised controlled trials: exploration of challenges in developing progression criteria for internal pilot studies. BMJ Open. 2017;7(2):e013537.

Bell ML, Whitehead AL, Julious SA. Guidance for using pilot studies to inform the design of intervention trials with continuous outcomes. J Clin Epidemiol. 2018;10:153–7.

Billingham SAM, Whitehead AL, Julious SA. An audit of sample sizes for pilot and feasibility trials being undertaken in the United Kingdom registered in the United Kingdom clinical research Network database. BMC Med Res Methodol. 2013;13(1):104.

Bugge C, Williams B, Hagen S, Logan J, Glazener C, Pringle S, et al. A process for decision-making after pilot and feasibility trials (ADePT): development following a feasibility study of a complex intervention for pelvic organ prolapse. Trials. 2013;14:353.

Charlesworth G, Burnell K, Hoe J, Orrell M, Russell I. Acceptance checklist for clinical effectiveness pilot trials: a systematic approach. BMC Med Res Methodol. 2013;13(1):78.

Eldridge SM, Costelloe CE, Kahan BC, Lancaster GA, Kerry SM. How big should the pilot study for my cluster randomised trial be? Stat Methods Med Res. 2016;25(3):1039–56.

Fletcher A, Jamal F, Moore G, Evans RE, Murphy S, Bonell C. Realist complex intervention science: applying realist principles across all phases of the Medical Research Council framework for developing and evaluating complex interventions. Evaluation (Lond). 2016;22(3):286–303.

Hampson LV, Williamson PR, Wilby MJ, Jaki T. A framework for prospectively defining progression rules for internal pilot studies monitoring recruitment. Stat Methods Med Res. 2018;27(12):3612–27.

Kraemer HC, Mintz J, Noda A, Tinklenberg J, Yesavage JA. Caution regarding the use of pilot studies to guide power calculations for study proposals. Arch Gen Psychiatry. 2006;63(5):484–9.

Smith LJ, Harrison MB. Framework for planning and conducting pilot studies. Ostomy Wound Manage. 2009;55(12):34–48.

Lancaster GA, Dodd S, Williamson PR. Design and analysis of pilot studies: recommendations for good practice. J Eval Clin Pract. 2004;10(2):307–12.

Leon AC, Davis LL, Kraemer HC. The role and interpretation of pilot studies in clinical research. J Psychiatr Res. 2011;45(5):626–9.

Medical Research Council. A framework for development and evaluation of RCTs for complex interventions to improve health. London: Medical Research Council; 2000.

Möhler R, Bartoszek G, Meyer G. Quality of reporting of complex healthcare interventions and applicability of the CReDECI list - a survey of publications indexed in PubMed. BMC Med Res Methodol. 2013;13(1):125.

Möhler R, Köpke S, Meyer G. Criteria for reporting the development and evaluation of complex interventions in healthcare: revised guideline (CReDECI 2). Trials. 2015;16(1):204.

National Institute for Health Research. Definitions of feasibility vs pilot stuides [Available from: https://www.nihr.ac.uk/documents/guidance-on-applying-for-feasibility-studies/20474 ].

O'Cathain A, Hoddinott P, Lewin S, Thomas KJ, Young B, Adamson J, et al. Maximising the impact of qualitative research in feasibility studies for randomised controlled trials: guidance for researchers. Pilot Feasibility Stud. 2015;1:32.

Shanyinde M, Pickering RM, Weatherall M. Questions asked and answered in pilot and feasibility randomized controlled trials. BMC Med Res Methodol. 2011;11(1):117.

Teare MD, Dimairo M, Shephard N, Hayman A, Whitehead A, Walters SJ. Sample size requirements to estimate key design parameters from external pilot randomised controlled trials: a simulation study. Trials. 2014;15(1):264.

Thabane L, Lancaster G. Improving the efficiency of trials using innovative pilot designs: the next phase in the conduct and reporting of pilot and feasibility studies. Pilot Feasibility Stud. 2017;4(1):14.

Thabane L, Ma J, Chu R, Cheng J, Ismaila A, Rios LP, et al. A tutorial on pilot studies: the what, why and how. BMC Med Res Methodol. 2010;10:1.

Westlund E. E.a. S. The nonuse, misuse, and proper use of pilot studies in experimental evaluation research. Am J Eval. 2016;38(2):246–61.

McKay H, Naylor PJ, Lau E, Gray SM, Wolfenden L, Milat A, et al. Implementation and scale-up of physical activity and behavioural nutrition interventions: an evaluation roadmap. Int J Behav Nutr Phys Act. 2019;16(1):102.

Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26.

Equator Network. Standards for reporting implementation studies (StaRI) statement 2017 [Available from: http://www.equator-network.org/reporting-guidelines/stari-statement/ ].

Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD, Pocock SJ, et al. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration. PLoS Med. 2007;4(10):e297–e.

Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687.

Schliep ME, Alonzo CN, Morris MA. Beyond RCTs: innovations in research design and methods to advance implementation science. Evid Based Commun Assess Inter. 2017;11(3-4):82–98.

Cabassa LJ, Stefancic A, O'Hara K, El-Bassel N, Lewis-Fernández R, Luchsinger JA, et al. Peer-led healthy lifestyle program in supportive housing: study protocol for a randomized controlled trial. Trials. 2015;16:388.

Landes SJ, McBain SA, Curran GM. Reprint of: An introduction to effectiveness-implementation hybrid designs. J Psychiatr Res. 2020;283:112630.

Barnes C, Grady A, Nathan N, Wolfenden L, Pond N, McFayden T, Ward DS, Vaughn AE, Yoong SL. A pilot randomised controlled trial of a web-based implementation intervention to increase child intake of fruit and vegetables within childcare centres. Pilot and Feasibility Studies. 2020. https://doi.org/10.1186/s40814-020-00707-w .

Lewis CC, Klasnja P, Powell BJ, Lyon AR, Tuzzio L, Jones S, et al. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6:136.

Department of Veterans Health Affairs. Implementation Guide. Health Services Research & Development, Quality Enhancement Research Initiative. 2013.

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258.

Weiner BJ, Lewis CC, Stanick C, Powell BJ, Dorsey CN, Clary AS, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12(1):108.

Lewis CC, Mettert KD, Dorsey CN, Martinez RG, Weiner BJ, Nolen E, et al. An updated protocol for a systematic review of implementation-related measures. Syst Rev. 2018;7(1):66.

Glasgow RE, Klesges LM, Dzewaltowski DA, Estabrooks PA, Vogt TM. Evaluating the impact of health promotion programs: using the RE-AIM framework to form summary measures for decision making involving complex issues. Health Educ Res. 2006;21(5):688–94.

Green L, Kreuter M. Health promotion planning: an educational and ecological approach. Mountain View: Mayfield Publishing; 1999.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Pol Ment Health. 2011;38(2):65–76.

Fernandez ME, Ten Hoor GA, van Lieshout S, Rodriguez SA, Beidas RS, Parcel G, et al. Implementation mapping: using intervention mapping to develop implementation strategies. Front Public Health. 2019;7:158.

Lewis CC, Weiner BJ, Stanick C, Fischer SM. Advancing implementation science through measure development and evaluation: a study protocol. Implement Sci. 2015;10:102.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

Clinton-McHarg T, Yoong SL, Tzelepis F, Regan T, Fielding A, Skelton E, et al. Psychometric properties of implementation measures for public health and community settings and mapping of constructs against the consolidated framework for implementation research: a systematic review. Implement Sci. 2016;11(1):148.

Moore CG, Carter RE, Nietert PJ, Stewart PW. Recommendations for planning pilot studies in clinical and translational research. Clin Transl Sci. 2011;4(5):332–7.

Pérez D, Van der Stuyft P, Zabala MC, Castro M, Lefèvre P. A modified theoretical framework to assess implementation fidelity of adaptive public health interventions. Implement Sci. 2016;11(1):91.

Stirman SW, Miller CJ, Toder K, Calloway A. Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implement Sci. 2013;8:65.

Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implement Sci. 2007;2:40.

Durlak JA, DuPre EP. Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. Am J Community Psychol. 2008;41(3-4):327–50.

Saunders RP, Evans MH, Joshi P. Developing a process-evaluation plan for assessing health promotion program implementation: a how-to guide. Health Promot Pract. 2005;6(2):134–47.

Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Wyse RJ, Wolfenden L, Campbell E, Brennan L, Campbell KJ, Fletcher A, et al. A cluster randomised trial of a telephone-based intervention for parents to increase fruit and vegetable consumption in their 3- to 5-year-old children: study protocol. BMC Public Health. 2010;10:216.

Consort Transparent Reporting of Trials. Pilot and Feasibility Trials 2016 [Available from: http://www.consort-statement.org/extensions/overview/pilotandfeasibility ].

Equator Network. Ehancing the QUAlity and Transparency Of health Research. [Avaliable from: https://www.equator-network.org/ ].

Download references

Acknowledgements

Associate Professor Luke Wolfenden receives salary support from a NHMRC Career Development Fellowship (grant ID: APP1128348) and Heart Foundation Future Leader Fellowship (grant ID: 101175). Dr Sze Lin Yoong is a postdoctoral research fellow funded by the National Heart Foundation. A/Prof Maureen C. Ashe is supported by the Canada Research Chairs program.

Author information

Authors and affiliations.

School of Medicine and Public Health, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia

Nicole Pearson, Sze Lin Yoong & Luke Wolfenden

Hunter New England Population Health, Locked Bag 10, Wallsend, NSW 2287, Australia

School of Exercise Science, Physical and Health Education, Faculty of Education, University of Victoria, PO Box 3015 STN CSC, Victoria, BC, V8W 3P1, Canada

Patti-Jean Naylor

Center for Health Promotion and Prevention Research, University of Texas Health Science Center at Houston School of Public Health, Houston, TX, 77204, USA

Maria Fernandez

Department of Family Practice, University of British Columbia (UBC) and Centre for Hip Health and Mobility, University Boulevard, Vancouver, BC, V6T 1Z3, Canada

Maureen C. Ashe

You can also search for this author in PubMed   Google Scholar

Contributions

NP and LW led the development of the manuscript. NP, LW, NP, MCA, PN, MF and SY contributed to the drafting and final approval of the manuscript.

Corresponding author

Correspondence to Nicole Pearson .

Ethics declarations

Ethics approval and consent to participate, consent for publication, competing interests.

The authors have no financial or non-financial interests to declare .

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1..

Example of a Hybrid Type 1 trial. Summary of publication by Cabassa et al.

Additional file 2.

Example of a Hybrid Type 2 trial. Summary of publication by Barnes et al.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Pearson, N., Naylor, PJ., Ashe, M.C. et al. Guidance for conducting feasibility and pilot studies for implementation trials. Pilot Feasibility Stud 6 , 167 (2020). https://doi.org/10.1186/s40814-020-00634-w

Download citation

Received : 08 January 2020

Accepted : 18 June 2020

Published : 31 October 2020

DOI : https://doi.org/10.1186/s40814-020-00634-w

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Feasibility
  • Hybrid trial designs
  • Implementation science

Pilot and Feasibility Studies

ISSN: 2055-5784

  • Submission enquiries: Access here and click Contact Us
  • General enquiries: [email protected]

research methodology pilot study

research methodology pilot study

How to Conduct Effective Pilot Tests: Tips and Tricks

research methodology pilot study

Introduction

What is the main purpose of pilot testing, what qualitative research approaches rely on a pilot test, what are the benefits of a pilot study, how do you conduct a pilot study, steps after evaluation of pilot testing.

Successful qualitative research projects often begin with an essential step: pilot testing. Similar to beta testing for computer programs and online services, a pilot test, sometimes referred to as a small-scale preliminary study or pilot study, is to trial a design on a smaller scale before embarking on the main study.

Whether the focus is psychiatric research, a randomized controlled trial, or any other project, conducting a pilot test provides invaluable data, allowing research teams to refine their approach, optimize their evaluation criteria, and better predict the outcomes of a full-scale project.

research methodology pilot study

Pilot testing, or the act of conducting a pilot study, is a crucial phase in the process, especially in qualitative and social science research . It serves as a preparatory step , a preliminary test, allowing researchers to evaluate, refine, and if necessary, redesign aspects of their study before full implementation, as well as determine the cost of a full study.

Pilot studies for assessing feasibility

One of the most significant purposes of a pilot test is to assess the feasibility of and identify potential design issues in the main study. It provides insights into whether a study's design is practical and achievable.

For instance, a research team might find that the originally planned method of interviewing is too time-consuming for a larger study or that participants may not be as forthcoming as hoped. Such insights from a feasibility study can save time, effort, and resources in the long run.

During pilot testing, a researcher can also determine how many or what kinds of participants might be needed for the main study to achieve meaningful results. It helps in ensuring that the target population is adequately represented without overwhelming the team with excessive data .

research methodology pilot study

Refining research methods

A pilot study with a small sample size offers a testing ground for the instruments, tools, or techniques that the researchers plan to use.

For example, suppose a project involves using a new interview technique. In that case, the pilot group can provide feedback on the clarity of questions, the flow of the interview, or even the comfort level of the interaction. This feedback from a carefully selected group is vital in refining the tools to ensure that the main study captures the richest insights possible.

No design is perfect from the outset. Pilot testing acts as a litmus test, highlighting any potential challenges or issues that might arise during the full-scale project.

By identifying these hurdles in advance, researchers can preemptively devise solutions, ensuring smoother execution when the full study is conducted.

research methodology pilot study

Gathering preliminary data

While the primary aim of pilot testing is not necessarily data collection for the main study, the knowledge garnered can be incredibly valuable for improving the current study or building toward a future study.

During the pilot phase of a research project, patterns, anomalies, or unexpected results can emerge. These can lead researchers to refine their propositions or research objectives, adjusting them to better align with observed realities.

Beyond its direct application to the design of the research, the initial findings from a pilot study can have broader, more strategic uses. When seeking funding for a full-scale project, having tangible results, even if they're preliminary, can lend credibility and weight to a research proposal.

Demonstrating that a concept has been tested, even on a small scale, and has yielded insightful data can make a compelling case to potential sponsors or stakeholders.

Pilot studies are a foundational component of many approaches in qualitative research . The exploratory and interpretative nature of qualitative methodologies means that research tools and strategies often benefit from preliminary testing to ensure their effectiveness.

In ethnographic research , where the goal is to study cultures and communities in-depth, pilot studies help researchers become familiar with the environment and its people. A brief preliminary visit can aid in understanding local dynamics, forging initial relationships, and refining methods to respect cultural sensitivities.

Grounded theory research , which seeks to develop theories grounded in empirical data, often starts with pilot studies. These preliminary tests aid in refining the interview protocols and sampling strategies, ensuring that the main study captures data that genuinely represents and informs the emerging theory.

Narrative research relies on the collection of stories from individuals about their experiences. Given the depth and nuance of personal narratives, a pilot test can be instrumental in determining the most effective ways to prompt and capture these stories while ensuring participants feel comfortable and understood.

Phenomenological research , which endeavors to understand the essence of participants' lived experiences around a phenomenon, often employs pilot testing to refine interview questions. It ensures that these questions elicit detailed, rich descriptions of experiences without leading or influencing the participants' responses.

In the field of case study research , where a particular case (or a few cases) is studied in-depth, pilot studies can help in delineating the boundaries of the case, deciding on the data collection methods , and anticipating potential challenges in data gathering or interpretation.

Lastly, psychiatric research, which delves into understanding mental processes, behaviors, and disorders, frequently employs pilot studies, especially when introducing new therapeutic techniques or interventions. A small-scale preliminary study can help identify any potential risks or issues before applying a new method or tool more broadly.

research methodology pilot study

ATLAS.ti handles all research projects big and small

Qualitative analysis made easy with our powerful interface. See how with a free trial.

A pilot study, being a precursor to the main research, is not merely a preliminary step; it's a vital one for the researcher. These initial investigations through pilot studies, while smaller in scale, can prove consequential in terms of benefits they offer to the research process.

Ensuring methodological rigor

At its core, a pilot study is a testing ground for the tools, techniques, and strategies that will be employed in the main study. By test-driving these elements, you can identify weaknesses or areas of improvement in the methodology.

This helps in ensuring that when the full future study is conducted, the methods used are sound, reliable, and capable of yielding meaningful results. For instance, if an interview question consistently confuses participants during the pilot phase, you can revise it for clarity in the main study.

Optimizing resource allocation

One of the significant advantages of pilot testing is the potential for resource optimization. You can gain insights into the time, effort, and funds required for various activities, allowing for more accurate budgeting and scheduling.

Moreover, by preempting potential challenges or obstacles, a pilot study can prevent costly mistakes or oversights when scaling up to the full research. For example, discovering that a particular method is inefficient during the pilot phase can save countless hours and resources in the larger study.

Enhancing participant experience and ethical considerations

The qualitative researcher often delves deep into participants' personal experiences, emotions, and perceptions. A pilot study provides an opportunity to ensure that the research process is respectful, sensitive, and ethically sound.

By trialing interactions with a smaller group, those who conduct the study can refine their approach to ensure participants feel valued, understood, and comfortable. This not only enhances the quality of the insights collected but also fosters trust and rapport with the research subjects.

In sum, the benefits of conducting a pilot study extend far beyond mere preliminary testing. They fortify the research process, ensuring studies are rigorous, efficient, and ethically sound.

As such, pilot studies remain a cornerstone of robust qualitative research , laying the groundwork for meaningful and impactful insights.

A pilot study is an integral phase in the process , acting as a bridge between initial study design and the full-scale project by providing information for future guidance. To generate actionable insights and pave the way for a successful full study, there are key steps researchers need to follow.

Defining objectives and scope

Before diving into the pilot study, it's essential to clearly define its objectives. What specific aspects of the main study are you testing? Is it the data collection methods , the feasibility of the research design of the study, or the clarity of the interview questions?

When researchers answer these questions, they can gain insight on whether the pilot study remains manageable and yields specific, actionable insights from a completed pilot.

Selecting a representative sample

For a pilot study to be effective, the sample chosen should be a good representation of the target population. This doesn't mean it needs to be large; after all, it's a small-scale preliminary study.

However, it should capture the diversity and characteristics of the population to provide a realistic preview of how the research might unfold. Think about how your selected group addresses the needs of your study and evaluate whether their contributions to the research can help you answer your research questions.

Collecting and analyzing data

Once the objectives are set and the participants are chosen, the next step is data collection . Employ the same tools, methods, or interventions you plan to use in the research. Finally, analyze what you have collected with a keen eye for patterns, anomalies, or unexpected outcomes.

This phase isn't just about collecting preliminary insights for the main study but about gauging the effectiveness of your methods and drawing insights to refine your approach.

research methodology pilot study

Reflections on the design of your study should follow pilot testing. The final design that you decide on should be comprehensively informed by any useful insight you gather from your pilot study.

Adjust study methods

Pilot studies are especially useful when they help identify design issues. You can adjust aspects of your study if you found they did not prove effective in collecting insights during your pilot study.

Identify opportunities for richer collection

Pilot testing is not merely a phase to iron out mistakes and shortcomings. A good pilot study should also allow you to identify aspects of your study that were successful and would be even more successful if fully optimized. If there are interview questions that resonated with research participants, for example, think about how those questions can be better utilized in a full-scale study.

Conduct and refine your research with ATLAS.ti

From study design to key insights, make ATLAS.ti the platform for your analysis. Download a free trial today.

research methodology pilot study

Listen-Hard

The Importance of Pilot Studies in Psychology: Exploring Their Purpose and Execution

research methodology pilot study

Pilot studies play a crucial role in the field of psychology, helping researchers test the feasibility of their research design, identify potential issues, and fine-tune their methods. By estimating sample sizes and conducting preliminary analyses, pilot studies pave the way for successful research projects.

In this article, we will delve into the purpose and execution of pilot studies in psychology, exploring their benefits, limitations, and the key steps involved in conducting them. Let’s unravel the significance of pilot studies in shaping high-quality research.

  • Pilot studies are small-scale studies conducted before a main research project to test feasibility, identify potential issues, and fine-tune research methods.
  • They help save time and resources, increase validity and reliability, and enhance research quality, making them an essential part of the research process in psychology.
  • Pilot studies have limitations such as small sample size, potential bias, and limited generalizability, but they still provide valuable insights and allow for refinement of research questions.
  • 1 What Are Pilot Studies?
  • 2.1 Testing Feasibility Of Research Design
  • 2.2 Identifying And Addressing Potential Issues
  • 2.3 Fine-tuning Research Methods
  • 2.4 Estimating Sample Size
  • 3.1 Selecting Participants
  • 3.2 Administering Measures
  • 3.3 Analyzing Data
  • 4.1 Saves Time And Resources
  • 4.2 Increases Validity And Reliability
  • 4.3 Enhances Research Quality
  • 4.4 Allows For Refinement Of Research Questions
  • 5.1 Small Sample Size
  • 5.2 Potential Bias
  • 5.3 Limited Generalizability
  • 5.4 Cannot Predict Final Results
  • 6.1 What is the purpose of pilot studies in psychology?
  • 6.2 How do pilot studies benefit the execution of a research study?
  • 6.3 Are pilot studies necessary for all research studies in psychology?
  • 6.4 What factors should be considered when designing a pilot study?
  • 6.5 Can pilot studies be used for any type of research in psychology?
  • 6.6 Is it necessary to analyze data from a pilot study?

What Are Pilot Studies?

Pilot studies are preliminary investigations conducted before the main study to assess the feasibility of the research design and intervention.

These studies play a crucial role in determining the practicality and viability of implementing a particular research approach or experimental procedure.

By conducting pilot studies, researchers can gather valuable insights into the potential challenges, limitations, and strengths of their proposed methodologies, ensuring that the main study is well-designed and optimized.

Pilot studies help refine research protocols, identify potential sources of bias, and refine the selection criteria for participants, ultimately enhancing the validity and reliability of the main study results.

What Is The Purpose Of Pilot Studies In Psychology?

The purpose of pilot studies in psychology is to test the feasibility of research design and hypothesis testing before conducting full-scale studies.

Pilot studies play a crucial role in providing researchers with valuable insights into the potential challenges and strengths of their research methodologies. By conducting a small-scale version of the study, researchers can identify any practical issues, refine data collection techniques, and determine the optimal sample size for the main study.

Moreover, pilot studies help in assessing the reliability and validity of measurement tools, ensuring that the chosen instruments accurately capture the intended variables. They also allow researchers to assess the feasibility of recruiting participants, testing procedures, and data analysis methods.

Testing Feasibility Of Research Design

One crucial aspect of pilot studies is testing the feasibility of the research design through preliminary analyses and hypothesis testing.

These initial investigations serve as vital tools in refining the research methodology before the full-scale study commences. They allow researchers to uncover any potential flaws or logistical challenges that may arise during the main research phase. With proof of concept studies, researchers can determine if their methods and procedures are suitable for the intended study population and if the data collection instruments are effective. Hypothesis testing in pilot studies provides insight into the potential outcomes, helping researchers refine their research questions and hypotheses for the main study.

Identifying And Addressing Potential Issues

Another key objective of pilot studies is to identify and address potential issues related to interventions , recruitment procedures, and data integrity.

By conducting pilot studies, researchers can pinpoint any flaws or challenges in the intervention methods being tested. This early detection allows for adjustments to be made before scaling up to larger trials, ultimately improving the efficacy and safety of the intervention. Pilot studies help in refining the recruitment strategies by revealing obstacles that may hinder participant enrollment or retention.

These studies play a crucial role in ensuring the integrity of the collected data. By testing data collection tools and procedures beforehand, researchers can enhance the accuracy and reliability of the data obtained during the main trial. This proactive approach minimizes the risk of data errors or inconsistencies, thus enhancing the overall quality of the research findings.

Fine-tuning Research Methods

Pilot studies play a crucial role in fine-tuning research methods, including assessment procedures, treatment fidelity, and overall research design.

By conducting pilot studies, researchers can identify any flaws or limitations in their planned methods and make necessary adjustments to ensure that the study procedures are effective and valid.

  • Assessment procedures can be tested and refined to ensure that data collection methods are reliable and sensitive to the research question at hand.
  • Treatment fidelity, a critical aspect in clinical trials, can be evaluated and improved to enhance the consistency and integrity of the intervention delivery.
  • Study design enhancements, such as optimizing randomization processes or refining inclusion/exclusion criteria, can be implemented to strengthen the overall research framework.

Estimating Sample Size

Pilot studies aid in estimating the sample size required for the main study through power analyses and ensuring precision in data collection.

Pilot studies play a crucial role in the initial phase of research to guide researchers in determining the appropriate number of participants for their main study.

By conducting a pilot study, researchers can assess the feasibility of their research design, methodology, and instruments, which directly impacts the accuracy and reliability of the data collected.

Through meticulous data collection and analysis during pilot studies, researchers can refine their research protocols and identify potential pitfalls that could affect the results of the main study, ultimately leading to more robust and impactful research outcomes.

How Are Pilot Studies Conducted?

Pilot studies are conducted by meticulously collecting data, analyzing the results, and assessing the feasibility of the proposed research design.

During the data collection phase of a pilot study, researchers typically utilize various methods such as surveys, interviews, observations, or experiments to gather relevant information. This data is then meticulously organized and coded for analysis using statistical software or qualitative research tools.

The next crucial step involves analyzing the findings to identify patterns, trends, or relationships within the data. This process helps researchers gain insights into potential outcomes and refine their research questions or hypotheses.

Simultaneously, evaluating the feasibility of the research design involves assessing factors like sample size adequacy, research instruments’ effectiveness, and potential limitations that may impact the main study’s success.

Selecting Participants

One of the initial steps in conducting pilot studies involves selecting participants from the target patient population using appropriate recruitment and randomization methods.

When considering participant selection for pilot studies, researchers must carefully plan their recruitment strategies to ensure a diverse and representative sample. Engaging with healthcare providers, patient advocacy groups, and community organizations can aid in reaching potential participants. Leveraging online platforms and social media can widen the reach and attract individuals who meet the study criteria.

Randomization plays a crucial role in minimizing bias and ensuring the validity of study results. By randomly assigning participants to different study groups, researchers can control for confounding variables and enhance the robustness of their findings.

Administering Measures

During pilot studies, researchers administer various measures to participants, ensuring comprehensive assessment processes using appropriate data collection tools and source documentation.

One common approach in pilot studies is to utilize a combination of quantitative and qualitative data collection methods to gather a holistic understanding of the research variables. Researchers may employ surveys, interviews, focus groups, or observational techniques to gather data efficiently.

Data triangulation is often utilized to enhance the credibility and validity of the findings by cross-validating information from multiple sources. Plus primary data collection, secondary sources such as existing literature, reports, and databases may also be referenced, providing valuable context and insights for the study.

Analyzing Data

Following data collection, pilot studies involve analyzing the data using statistical tests, conducting inferential comparisons, and performing sensitivity analyses to ensure robust results.

Data analysis in pilot studies typically begins with descriptive statistics to summarize the collected data, providing insights into central tendencies and dispersion. Subsequently, researchers often apply parametric or non-parametric statistical tests based on the nature of the data and research questions. These tests help in assessing the significance of observed differences or relationships, guiding the interpretation of findings. Inferential comparisons are conducted to draw conclusions about the population based on the data sample. Sensitivity analysis is employed to evaluate the impact of varying assumptions or methods, ensuring the reliability and validity of the study results.

What Are The Benefits Of Conducting A Pilot Study?

Conducting a pilot study offers numerous benefits such as saving time and resources, enhancing validity, improving research quality, and refining research questions.

A significant advantage of pilot studies lies in the efficient allocation of limited resources. By conducting a pilot phase before the main study, researchers can detect and address potential issues and logistical challenges early on. This early detection helps in optimizing the use of resources, preventing unnecessary expenditures, and streamlining the research process.

Saves Time And Resources

One key benefit of pilot studies is their ability to save time and resources by identifying funding limitations , improvement opportunities, and potential methodological changes early in the research process.

By recognizing funding constraints at an early stage, researchers can make informed decisions on how to allocate resources effectively. This not only ensures that the study stays within budget but also maximizes the impact of the available funds.

  • Pilot studies can highlight areas for improvement in data collection methods or study design. Making these adjustments early can prevent costly mistakes further down the line and enhance the overall quality of the research.
  • Pilot studies offer the opportunity to test different methodological enhancements before committing to a full-scale study. This allows researchers to refine their approach, streamline processes, and increase the study’s efficiency.

Increases Validity And Reliability

Pilot studies enhance the validity and reliability of research outcomes by ensuring data integrity, identifying risk factors, and validating research designs.

One key aspect of pilot studies is the thorough data integrity checks they provide. By meticulously examining the data collection methods and processes, researchers can identify any inconsistencies or errors early on, ensuring that the final results are accurate and reliable.

  • Another crucial contribution of pilot studies is the identification of potential risk factors that may affect the outcome of the research. By conducting a small-scale trial run of the study, researchers can pinpoint any factors that could bias the results or introduce unwanted variability.
  • Pilot studies play a vital role in validating research designs . By testing the feasibility and effectiveness of the chosen methodology, researchers can make necessary adjustments before proceeding to the full-scale study, thereby ensuring that the research design is robust and reliable.

Enhances Research Quality

Another advantage of conducting pilot studies is the enhancement of research quality by refining the research process, optimizing research tools, and evaluating treatment adherence.

Through refining the research process in a pilot study, researchers can identify and rectify potential flaws, errors, or inefficiencies that may arise in the actual main study, thus significantly improving the overall research quality.

Optimizing research tools during a pilot study helps in determining the most effective and accurate instruments for data collection, analysis, and measurement. Evaluating treatment adherence in a pilot study allows researchers to assess the feasibility and effectiveness of the chosen treatment protocols, aiding in making necessary adjustments for optimal outcomes.

Allows For Refinement Of Research Questions

Pilot studies enable researchers to refine research questions based on feedback from grant reviewers, optimize research funding allocation, and enhance research utilization strategies.

When researchers conduct pilot studies, they are essentially testing the feasibility and potential impact of their proposed research projects. This initial step not only helps in gauging the viability of the research but also opens up avenues for constructive criticism from grant reviewers . Incorporating suggestions from these reviewers is vital as it ensures that the research questions are clearly defined and aligned with the funding objectives.

By refining the research questions through pilot studies, researchers significantly increase their chances of receiving funding. Grant reviewers appreciate the meticulous approach taken towards refining the research design, which ultimately leads to more effective research utilization strategies .

What Are The Limitations Of Pilot Studies?

Despite their benefits, pilot studies have limitations such as small sample sizes, potential bias risks, and limited generalizability of results.

One major drawback of pilot studies is the small sample sizes they typically involve. This lack of a large and diverse sample can lead to skewed results and limit the applicability of findings to the broader population.

In addition, pilot studies are susceptible to potential bias risks , as researchers may unknowingly introduce biases through factors like participant selection, data collection methods, or analysis techniques.

The limited generalizability of pilot study results is a common concern. The findings may not accurately represent the characteristics or behaviors of the entire target population, making it challenging to extrapolate the outcomes to a broader context.

Small Sample Size

One major limitation of pilot studies is the reliance on small sample data, which can lead to publication bias and selective reporting of results.

Small sample sizes in pilot studies pose a significant challenge as they may not be representative of the larger population, which can skew the results and affect the generalizability of the findings. This limitation is intensified by the potential for publication bias, where studies with more significant results are more likely to be published, leading to an overestimation of the treatment effects. Selective reporting risks are heightened with limited sample data, as researchers may emphasize positive outcomes and downplay negative or null findings, distorting the true picture of the study’s outcomes.

Potential Bias

Another limitation of pilot studies is the potential for bias due to adverse events, psychological consequences, and adverse outcomes influencing the study findings.

Adverse events within the study population can lead to bias by affecting participant responses or behavior during the pilot phase. The psychological impacts on participants may alter their engagement levels with the study protocol, introducing variability in data collection. Adverse consequences experienced during the pilot study might skew the results by influencing decision-making processes and data interpretation. It is crucial for researchers to address these sources of bias meticulously to ensure the reliability and validity of the pilot study outcomes.

Bias can manifest in subtle ways and significantly impact the overall outcome of the research project.

Limited Generalizability

Pilot studies face limitations in generalizability due to factors like the absence of a control group and the need for large subsequent randomized controlled trials (RCTs) to validate findings.

One of the primary challenges in pilot studies is the lack of a control group, which hinders the ability to establish the causal relationship between the intervention and the outcomes observed. This absence makes it difficult to determine if the results are truly attributable to the intervention or if other factors influenced the outcomes.

However , these initial studies play a crucial role in exploring new interventions or treatments, providing valuable insights and generating hypotheses to be further examined in more extensive RCTs. Despite their significant contributions to the preliminary understanding of a concept, pilot studies alone are not sufficient to inform widespread practice.

Cannot Predict Final Results

Pilot studies cannot definitively predict final results due to uncertainties regarding statistical power, clinical significance, and methodological aspects that may impact the main study outcomes.

Statistical power, which refers to the likelihood of detecting a true effect when it exists, is often limited in pilot studies due to their small sample sizes. This limitation makes it challenging to draw accurate conclusions about the effectiveness of an intervention or treatment.

Clinical significance, which measures the practical importance of study findings in real-world applications, can be difficult to ascertain in pilot studies where outcomes may not fully represent the broader population.

Methodological considerations, such as study design, data collection methods, and biases, further complicate outcome prediction in pilot studies by introducing potential sources of error and uncertainty.

Frequently Asked Questions

What is the purpose of pilot studies in psychology.

The purpose of pilot studies in psychology is to test and refine research methods before conducting a larger study. This allows researchers to identify any issues or limitations in their methods and make necessary adjustments before moving forward.

How do pilot studies benefit the execution of a research study?

Pilot studies help to improve the execution of a research study by providing a trial run and allowing researchers to identify and address any potential problems or challenges. This can save time and resources in the long run and increase the validity of the study.

Are pilot studies necessary for all research studies in psychology?

While not required, pilot studies are highly recommended for all research studies in psychology. They can greatly improve the quality and reliability of the results and help researchers avoid potential pitfalls or biases.

What factors should be considered when designing a pilot study?

When designing a pilot study, researchers should consider the sample size, selection criteria, data collection methods, and potential confounding variables. It is also important to have a clear research question and hypothesis in mind.

Can pilot studies be used for any type of research in psychology?

Yes, pilot studies can be used for various types of research in psychology, including experimental studies, surveys, and observational studies. They can also be beneficial for both qualitative and quantitative research.

Is it necessary to analyze data from a pilot study?

Yes, it is important to analyze data from a pilot study in order to assess the feasibility and effectiveness of the proposed methods. This can also provide valuable insights for further refining the research design.

' src=

Dr. Henry Foster is a neuropsychologist with a focus on cognitive disorders and brain rehabilitation. His clinical work involves assessing and treating individuals with brain injuries and neurodegenerative diseases. Through his writing, Dr. Foster shares insights into the brain’s ability to heal and adapt, offering hope and practical advice for patients and families navigating the challenges of cognitive impairments.

Similar Posts

Signal Detection Theory in Psychology: Explanation and Applications

Signal Detection Theory in Psychology: Explanation and Applications

The article was last updated by Sofia Alvarez on February 8, 2024. Signal Detection Theory is a fundamental concept in psychology that aims to understand…

Understanding the Importance of Case Studies in Psychology

Understanding the Importance of Case Studies in Psychology

The article was last updated by Rachel Liu on February 5, 2024. Case studies play a crucial role in the field of psychology, providing researchers…

Common Characteristics of Science Shared by Psychology

Common Characteristics of Science Shared by Psychology

The article was last updated by Nicholas Reed on February 6, 2024. Curious about the relationship between psychology and science? In this article, we explore…

Decoding the Meaning of PST in Psychology

Decoding the Meaning of PST in Psychology

The article was last updated by Alicia Rhodes on February 9, 2024. Have you ever wondered what PST in psychology really means? In this article,…

Understanding the Role of ‘Bobo’ in Psychological Research

Understanding the Role of ‘Bobo’ in Psychological Research

The article was last updated by Dr. Naomi Kessler on February 5, 2024. Have you ever heard of ‘Bobo’ in psychological research? This article explores…

The Science of Psychology: Unveiling the Meaning Behind its Classification

The Science of Psychology: Unveiling the Meaning Behind its Classification

The article was last updated by Marcus Wong on February 5, 2024. Have you ever wondered what exactly psychology is and how it delves into…

Book cover

Information Systems Research pp 137–149 Cite as

Piloting and Feasibility Studies in IS Research

  • Mohammed Ali 2  
  • First Online: 16 September 2023

160 Accesses

This chapter aims to cover process of conducting a pilot study to assess the feasibility of an IS research project. This entails conducting preliminary fieldwork on a target population to test the research questions and/or hypotheses on a small scale to determine its potential feasibility on a larger scale. Therefore, the chapter emphasises the need to explore procedures of conducting an effective pilot study in contemporary IS research projects. In addition to the chapter contents, definitions, facts, tables, figures, activities, and case studies are provided to reinforce researcher and practitioner learning of the piloting procedures used in contemporary IS research.

This is a preview of subscription content, log in via an institution .

Buying options

  • Available as PDF
  • Read on any device
  • Instant download
  • Own it forever
  • Available as EPUB and PDF
  • Compact, lightweight edition
  • Dispatched in 3 to 5 business days
  • Free shipping worldwide - see info

Tax calculation will be finalised at checkout

Purchases are for personal use only

Anderson, R. (2008). New MRC guidance on evaluating complex interventions. Bmj, 337 .

Google Scholar  

Arain, M., Campbell, M. J., Cooper, C. L., & Lancaster, G. A. (2010). What is a pilot or feasibility study? A review of current practice and editorial policy. BMC Medical Research Methodology, 10 (1), 67.

Article   Google Scholar  

Arnold, D. M., Burns, K. E., Adhikari, N. K., Kho, M. E., Meade, M. O., & Cook, D. J. (2009). The design and interpretation of pilot trials in clinical research in critical care. Critical Care Medicine, 37 (1), S69–S74.

Eldridge, S. M., Lancaster, G. A., Campbell, M. J., Thabane, L., Hopewell, S., Coleman, C. L., & Bond, C. M. (2016). Defining feasibility and pilot studies in preparation for randomised controlled trials: Development of a conceptual framework. PloS One, 11 (3), e0150205.

Fisher, P. (2012). Ethics in qualitative research: ‘Vulnerability’, citizenship and human rights. Ethics and Social Welfare, 6 (1), 2–17.

Malmqvist, J., Hellberg, K., Möllås, G., Rose, R., & Shevlin, M. (2019). Conducting the pilot study: a neglected part of the research process? methodological findings supporting the importance of piloting in qualitative research studies. International Journal of Qualitative Methods, 18 .

Thabane, L., Ma, J., Chu, R., Cheng, J., Ismaila, A., Rios, L. P., … & Goldsmith, C. H. (2010). A tutorial on pilot studies: The what, why and how. BMC Medical Research Methodology, 10 (1), 1–10.

Download references

Author information

Authors and affiliations.

Salford Business School, University of Salford, Manchester, UK

Mohammed Ali

You can also search for this author in PubMed   Google Scholar

Corresponding author

Correspondence to Mohammed Ali .

Rights and permissions

Reprints and permissions

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter.

Ali, M. (2023). Piloting and Feasibility Studies in IS Research. In: Information Systems Research. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-031-25470-3_8

Download citation

DOI : https://doi.org/10.1007/978-3-031-25470-3_8

Published : 16 September 2023

Publisher Name : Palgrave Macmillan, Cham

Print ISBN : 978-3-031-25469-7

Online ISBN : 978-3-031-25470-3

eBook Packages : Computer Science Computer Science (R0)

Share this chapter

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Publish with us

Policies and ethics

  • Find a journal
  • Track your research

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • BMC Med Res Methodol

Logo of bmcmrm

What is a pilot or feasibility study? A review of current practice and editorial policy

Mubashir arain.

1 Health Services Research, ScHARR, University of Sheffield, Regent Court, Regent St Sheffield S1 4DA, UK

Michael J Campbell

Cindy l cooper, gillian a lancaster.

2 Department of Mathematics and Statistics, University of Lancaster LA1 4YF, UK

In 2004, a review of pilot studies published in seven major medical journals during 2000-01 recommended that the statistical analysis of such studies should be either mainly descriptive or focus on sample size estimation, while results from hypothesis testing must be interpreted with caution. We revisited these journals to see whether the subsequent recommendations have changed the practice of reporting pilot studies. We also conducted a survey to identify the methodological components in registered research studies which are described as 'pilot' or 'feasibility' studies. We extended this survey to grant-awarding bodies and editors of medical journals to discover their policies regarding the function and reporting of pilot studies.

Papers from 2007-08 in seven medical journals were screened to retrieve published pilot studies. Reports of registered and completed studies on the UK Clinical Research Network (UKCRN) Portfolio database were retrieved and scrutinized. Guidance on the conduct and reporting of pilot studies was retrieved from the websites of three grant giving bodies and seven journal editors were canvassed.

54 pilot or feasibility studies published in 2007-8 were found, of which 26 (48%) were pilot studies of interventions and the remainder feasibility studies. The majority incorporated hypothesis-testing (81%), a control arm (69%) and a randomization procedure (62%). Most (81%) pointed towards the need for further research. Only 8 out of 90 pilot studies identified by the earlier review led to subsequent main studies. Twelve studies which were interventional pilot/feasibility studies and which included testing of some component of the research process were identified through the UKCRN Portfolio database. There was no clear distinction in use of the terms 'pilot' and 'feasibility'. Five journal editors replied to our entreaty. In general they were loathe to publish studies described as 'pilot'.

Pilot studies are still poorly reported, with inappropriate emphasis on hypothesis-testing. Authors should be aware of the different requirements of pilot studies, feasibility studies and main studies and report them appropriately. Authors should be explicit as to the purpose of a pilot study. The definitions of feasibility and pilot studies vary and we make proposals here to clarify terminology.

A brief definition is that a pilot study is a 'small study for helping to design a further confirmatory study'[ 1 ]. A very useful discussion of exactly what is a pilot study has been given by Thabane et al. [ 2 ] Such kinds of study may have various purposes such as testing study procedures, validity of tools, estimation of the recruitment rate, and estimation of parameters such as the variance of the outcome variable to calculate sample size etc. In pharmacological trials they may be referred to as 'proof of concept' or Phase I or Phase II studies. It has become apparent to us when reviewing research proposals that small studies with all the trappings of a major study, such as randomization and hypothesis testing may be labeled a 'pilot' because they do not have the power to test clinically meaningful hypotheses. The authors of such studies perhaps hope that reviewers will regard a 'pilot' more favourably than a small clinical trial. This lead us to ask when it is legitimate to label a study as a 'pilot' or 'feasibility' study, and what features should be included in these types of studies.

Lancaster et al [ 3 ] conducted a review of seven major medical journals in 2000-1 to produce evidence regarding the components of pilot studies for randomized controlled trials. Their search included both 'pilot' and 'feasibility' studies as keywords. They reported certain recommendations: having clear objectives in a pilot study, inappropriateness of mixing pilot data with main research study, using mainly descriptive statistics obtained and caution regarding the use of hypothesis testing for conclusions. Arnold et al [ 1 ] recently reviewed pilot studies particularly related to critical care medicine by searching the literature from 1997 to 2007. They provided narrative descriptions of some pilot papers particularly those describing critical care medicine procedures. They pointed out that few pilot trials later evolved into subsequent published major trials. They made useful distinctions between: pilot work which is any background research to inform a future study, a pilot study which has specific hypotheses, objectives and methodology and a pilot trial which is a stand-alone pilot study and includes a randomization procedure. They excluded feasibility studies from their consideration.

Thabane et al [ 2 ] gave a checklist of what they think should be included in a pilot study. They included 'feasibility' or 'vanguard' studies but did not distinguish them from pilot studies. They provided a good discussion on how to interpret a pilot study. They stress that not only the outcome or surrogate outcome for the subsequent main study should be described but also that a pilot study should have feasibility outcomes which should be clearly defined and described. Their article was opinion based and not supported by a review of current practice.

The objective of this paper is to provide writers and reviewers of research proposals with evidence from a variety of sources for which components they should expect, and which are unnecessary or unhelpful, in a study which is labeled as a pilot or feasibility study. To do this we repeated Lancaster et al's [ 3 ] review for current papers see if there has been any change in how pilot studies were reported since their study. As many pilot studies are never published we also identified pilot studies which were registered with the UK Clinical Research Network (UKCRN) Portfolio Database. This aims to be a "complete picture of the clinical research which is currently taking place across the UK". All studies included have to have been peer reviewed through a formal independent process. We examined the websites of some grant giving bodies to find their definition of a pilot study and their funding policy toward them. Finally we contacted editors of leading medical journals to discover their policy of accepting studies described as 'pilot' or 'feasibility'.

Literature survey

MEDLINE, Web of Science and university library data bases were searched for the years 2007-8 using the same key words "Pilot" or "Feasibility" as used by Lancaster et al. [ 3 ]. We reviewed the same four general medicine journals: the British Medical Journal (BMJ), Lancet, the New England Journal of Medicine (NEJM) and the Journal of American Medical Association (JAMA) and the same three specialist journals: British Journal of Surgery (BJS), British Journal of Cancer (BJC), British Journal of Obstetrics and Gynecology (BJOG). We excluded review papers. The full text of the relevant papers was obtained. GL reviewed 20 papers and classified them into groups as described in her original paper [ 3 ]. Subsequently MA, in discussion with MC, designed a data extraction form to classify the papers. We changed one category from GL's original paper. We separated the category 'Phase I/II trials' from the 'Piloting new treatment, technique, combination of treatments' category. We then classified the remaining paper into the categories described in Table ​ Table1. 1 . The total number of research papers by journal was obtained by searching journal article with abstracts (excluding reviews) using Pubmed. We searched citations to see whether the pilot studies identified by Lancaster et al [ 3 ] eventually led to main trials.

Literature search using key words "Pilot" OR "Feasibility"

1 excluded Review = 8, Commentaries = 4, News = 3, Indirectly referring to previous pilot = 9

2 from Lancaster et al [ 1 ]

Portfolio database review

The (UKCRN) Portfolio Database was searched for the terms 'feasibility' or 'pilot' in the title or research summary. Duplicate cases and studies classified as 'observational' were omitted. From the remaining studies those classified as 'closed' were selected to exclude studies which may not have started or progressed. Data were extracted directly from the research summary of the database or where that was insufficient the principle investigator was contacted for related publications or study protocols.

Editor and funding agency survey

We wrote to the seven medical journal editors of the same journals used by Lancaster et al. [ 3 ], (BMJ, Lancet, NEJM, JAMA. BJS, BJC and BJOG) and looked at the policies of three funding agencies (British Medical Research Council, Research for Patient Benefit and NETSCC (National Institute for Health Research Trials and Studies Coordinating Centre). We wished to explore whether there was any specified policy of the journal for publishing pilot trials and how the editors defined a pilot study. We also wished to see if there was funding for pilot studies.

Initially 77 papers were found in the target journals for 2007-8 but 23 were review papers or commentaries or indirectly referred to the word "pilot" or "feasibility" and were not actually pilot studies leaving a total of 54 papers. Table ​ Table1 1 shows the results by journal and by type of study and also shows the numbers reported by Lancaster et al. [ 3 ] for 2000-01 in the same medical journals. There was a decrease in the proportion of pilot studies published over the period of time, however the difference was not statistically significant (2.0% vs 1.6%; X 2 = 1.6, P = 0.2). It is noticeable that the Phase I or Phase II studies are largely confined to the cancer journals.

Lancaster et al [ 3 ] found that 50% of pilot studies reported the intention of further work yet we identified only 8 (8.8%) which were followed up by a major study. Of these 2 (25%) were published in the same journal as the pilot.

Twenty-six of the studies found in 2007-8 were described as pilot or feasibility studies for randomized clinical trials (RCTs) including Phase II studies. Table ​ Table2 2 gives the numbers of studies which describe specific components of RCTs. Sample size calculations were performed and reported in 9 (36%) of the studies. Hypothesis testing and performing inferential statistics to report significant results was observed in 21 (81%) of pilot studies. The processes of blinding was observed in only 5 (20%) although the randomization procedure was applied or tested in 16 (62%) studies. Similarly a control group was assigned in most of the studies (n = 18; 69%). As many as 21 (81%) of pilot studies suggested the need for further investigation of the tested drug or procedure and did not report conclusive results on the basis of their pilot data. The median number of participants was 76, inter-quartile range (42, 216).

Literature survey: Frequency of methodological components appearing in pilot or feasibility studies of interventions (n = 26 1 ) in 2007-8

1 Pilot studies = 14 Feasibility studies = 12

Of the 54 studies in 2007-8, a total of 20 were described as 'pilot' and 34 were described as 'feasibility' studies. Table ​ Table3 3 contrasts those which were identified by the keyword 'pilot' with those identified by 'feasibility'. Those using 'pilot' were more likely to have a pre-study sample size estimate, to use randomization and to use a control group. In the 'pilot' group 16(80%) suggested further study, in contrast to 15 (44%) in the 'feasibility' group.

Literature survey: Comparison of studies (n = 54) using the key words feasibility or pilot

1 1 degree of freedom

* z-statistic (Mann-Whitney test)

A total of 34 studies were identified using the term 'feasibility' or 'pilot' in the title or research summary which were prospective interventional studies and were closed, i.e. not currently running and available for analysis. Only 12 studies were interventional pilot/feasibility studies which included testing of some component of the research process. Of these 5 were referred to as 'feasibility', 6 as 'pilot' and 1 as both 'feasibility' and 'pilot' (Table ​ (Table4 4 ).

Portfolio database survey: comparison of components in studies termed pilot or feasibility

The methodological components tested within these studies were: estimation of sample size; number of subjects eligible; resources (e.g. cost), time scale; population-related (e.g. exclusion criteria), randomisation process/acceptability; data collection systems/forms; outcome measures; follow-up (response rates, adherence); overall design; whole trial feasibility. In addition to one or more of these, some studies also looked at clinical outcomes including: feasibility/acceptability of intervention; dose, efficacy and safety of intervention.

The results are shown in Table ​ Table4. 4 . Pilot studies alone included estimation of sample size for a future bigger study and tested a greater number of components in each study. The majority of the pilots and the feasibility studies ran the whole study 'in miniature' as it would be in the full study, with or without randomization.

As an example of a pilot study consider 'CHOICES: A pilot patient preference randomised controlled trial of admission to a Women's Crisis House compared with psychiatric hospital admissions' http://www.iop.kcl.ac.uk/projects/default.aspx?id=10290 . This study looked at multiple components of a potential bigger study. It aimed to determine the proportion of women unwilling to be randomised, the feasibility of a patient preference RCT design, the outcome and cost measures to determine which outcome measures to use, the recruitment and drop out rates; and to estimate the levels of outcome variability to calculate sample sizes for the main study. It also intended to develop a user focused and designed instrument which is the outcome from the study. The sample size was 70.

The editors of five (out of seven) medical journals responded to our request for information regarding publishing policy for pilot studies. Four of the journals did not have a specified policy about publishing pilot studies and mostly reported that pilot trials cannot be published if the standard is lower than a full clinical trial requirement. The Lancet has started creating space for preliminary phase I trials and set a different standard for preliminary studies. Most of the other journals do not encourage the publication of pilot studies because they consider them less rigorous than main studies. Nevertheless some editors accepted pilot studies for publication by compromising only on the requirement for a pre-study sample size calculation. All other methodological issued were considered as important as for the full trials, such as trial registration, randomization, hypothesis testing, statistical analysis and reporting according to the CONSORT guidelines.

All three funding bodies made a point to note that pilot and feasibility studies would be considered for funding. Thabane et al [ 2 ] provided a list of websites which define pilot or feasibility studies. We considered the NETSCC definition to be most helpful and to most closely mirror what investigators are doing and it is given below.

NETSCC definition of pilot and feasibility studies http://www.netscc.ac.uk/glossary/

Feasibility Studies

Feasibility Studies are pieces of research done before a main study. They are used to estimate important parameters that are needed to design the main study. For instance:

• standard deviation of the outcome measure, which is needed in some cases to estimate sample size,

• willingness of participants to be randomised,

• willingness of clinicians to recruit participants,

• number of eligible patients,

• characteristics of the proposed outcome measure and in some cases feasibility studies might involve designing a suitable outcome measure,

• follow-up rates, response rates to questionnaires, adherence/compliance rates, ICCs in cluster trials, etc.

Feasibility studies for randomised controlled trials may not themselves be randomised. Crucially, feasibility studies do not evaluate the outcome of interest; that is left to the main study.

If a feasibility study is a small randomised controlled trial, it need not have a primary outcome and the usual sort of power calculation is not normally undertaken. Instead the sample size should be adequate to estimate the critical parameters (e.g. recruitment rate) to the necessary degree of precision.

Pilot studies

A Pilot Study is a version of the main study that is run in miniature to test whether the components of the main study can all work together. It is focused on the processes of the main study, for example to ensure recruitment, randomisation, treatment, and follow-up assessments all run smoothly. It will therefore resemble the main study in many respects. In some cases this will be the first phase of the substantive study and data from the pilot phase may contribute to the final analysis; this can be referred to as an internal pilot. Alternatively at the end of the pilot study the data may be analysed and set aside, a so-called external pilot.

In our repeat of Lancaster et al's study [ 3 ] we found that the reporting of pilot studies was still poor. It is generally accepted that small, underpowered clinical trials are unethical [ 4 ]. Thus it is not an excuse to label such a study as a pilot and hope to make it ethical. We have shown that pilot studies have different objectives to RCTs and these should be clearly described. Participants in such studies should be informed that they are in a pilot study and that there may not be a further larger study.

It is helpful to make a more formal distinction between a 'pilot' and a 'feasibility' study. We found that studies labeled 'feasibility' were conducted with more flexible methodology compared to those labeled 'pilot'. For example the term 'feasibility' has been used for large scale studies such as a screening programme applied at a population level to determine the initial feasibility of the programme. On the other hand 'pilot' studies were reported with more rigorous methodological components like sample size estimation, randomization and control group selection than studies labeled 'feasibility'. We found the NETSCC definition to be the most helpful since it distinguishes between these types of study.

In addition it was observed that most of the pilot studies report their results as inconclusive, with the intention of conducting a further, larger study. In contrast, several of the feasibility studies did not admit such an intention. On the basis of their intention one would have expected about 45 of the studies identified by Lancaster et al in 2000/1 to have been followed by a bigger study whereas we only found 8. This would reflect the opinion of most of the journal editors and experts who responded to our survey, who felt that pilot studies rarely act as a precursor for a bigger study. The main reason given was that if the pilot shows significant results then researchers may not find it necessary to conduct the main trial. In addition if the results are unfavorable or the authors find an unfeasible procedure, the main study is less likely to be considered useful. Our limited review of funding bodies was encouraging. Certainly when reviewing grant applications, we have found it helpful to have the results of a pilot study included in the bid. We think that authors of pilots studies should be explicit as to their purpose, e.g. to test a new procedure in preparation for a clinical trial. We also think that authors of proposals for pilot studies should be more explicit as to the criteria which lead to further studies being abandoned, and that this should be an important part of the proposal.

In the Portfolio Database review, only pilot studies cited an intention to estimate sample size calculations for future studies and the majority of pilot studies were full studies run with smaller sample sizes to test out a number of methodological components and clinical outcomes simultaneously. In comparison the feasibility studies tended to focus on fewer methodological components within individual studies. For example, the 6 pilot studies reported the intention to evaluate a total of 17 methodological components whereas in the 5 feasibility studies a total of only 6 methodological components were specifically identified as being under investigation (Table ​ (Table4). 4 ). However, both pilot and feasibility studies included trials run as complete studies, including randomization, but with sample sizes smaller than would be intended in the full study and the distinction between the two terms was not clear-cut.

Another reason for conducting a pilot study is to provide information to enable a sample size calculation in a subsequent main study. However since pilot studies tend to be small, the results should be interpreted with caution [ 5 ]. Only a small proportion of published pilot studies reported pre-study sample size calculations. Most journal editors reported that a sample size calculation is not a mandatory criterion for publishing pilot studies and suggested that it should not be done.

Some authors suggest that analysis of pilot studies should mainly be descriptive,[ 3 , 6 ] as hypothesis testing requires a powered sample size which is usually not available in pilot studies. In addition, inferential statistics and testing hypothesis for effectiveness require a control arm which may not be present in all pilot studies. However most of the pilot interventional studies in this review contained a control group and the authors performed and reported hypothesis testing for one or more variables. Some tested the effectiveness of an intervention and others just performed statistical testing to discover any important associations in the study variables. Observed practice is not necessarily good practice and we concur with Thabane et al [ 2 ] that any testing of an intervention needs to be reported cautiously.

The views of the journal editors, albeit from a small sample, were not particularly encouraging and reflected the experience of Lancaster et al [ 3 ]. Pilot studies, by their nature, will not produce 'significant' (i.e P < 0.05) results. We believe that publishing the results of well conducted pilot or feasibility studies is important for research, irrespective of outcome.. There is an increasing awareness that publishing only 'significant' results can lead to considerably error [ 7 ]. The journals we considered were all established, paper journals and perhaps the newer electronic journals will be more willing to consider the publication of the results from these types of studies.

We may expect that trials will increasingly be used to evaluate 'complex interventions'[ 8 , 9 ]. The MRC guidelines [ 8 ] explicitly suggest that preliminary studies, including pilots, be used prior to any major trial which seeks to evaluate a package of interventions (such as an educational course), rather than a single intervention (such as a drug). Thus it is likely that reviewers will be increasingly asked to pronounce on these and will require guidance as to how to review them.

Conclusions

We conclude that pilot studies are still poorly reported, with inappropriate emphasis on hypothesis-testing. We believe authors should be aware of the different requirements of pilot studies and feasibility studies and report them appropriately. We found that in practice the definitions of feasibility and pilot studies are not distinct and vary between health research funding bodies and we suggest use of the NETSCC definition to clarify terminology.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

MA reviewed the papers of 2000/1 and those of 2007/8 under the supervision of MC and helped to draft the manuscript. MC conceived of the study, and participated in its design and coordination and drafted the manuscript. CC conducted the portfolio database study and commented on the manuscript. GA conducted the original study, reviewed 20 papers and commented on the manuscript. All authors read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:

http://www.biomedcentral.com/1471-2288/10/67/prepub

  • Arnold DM, Burns KE, Adhikari NK, Kho ME, Meade MO, Cook DJ. McMaster Critical Care Interest Group. The design and interpretation of pilot trials in clinical research in critical care. Crit Care Med. 2009; 37 (Suppl 1):S69–74. doi: 10.1097/CCM.0b013e3181920e33. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Thabane L, Ma J, Chu R, Cheng J, Ismaila A, Rios LP, Robson R, Thabane M, Goldsmith CH. A tutorial on pilot studies: The what, why and How. BMC Medical Research Methodology. 2010; 10 :1. doi: 10.1186/1471-2288-10-1. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lancaster GA, Dodd S, Williamson PR. Design and analysis of pilot studies: recommendations for good practice. J Eval Clin Pract. 2004; 10 :307–12. doi: 10.1111/j..2002.384.doc.x. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Halpern SD, Karlawish JH, Berlin JA. The continuing unethical conduct of underpowered clinical trials. JAMA. 2002; 288 :358–62. doi: 10.1001/jama.288.3.358. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kraemer HC, Mintz J, Noda A, Tinklenberg J, Yesavage JA. Caution regarding the use of pilot studies to guide power calculations for study proposals. Arch Gen Psychiatry. 2006; 63 :484–9. doi: 10.1001/archpsyc.63.5.484. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Grimes DA, Schulz KF. Descriptive studies: what they can and cannot do. Lancet. 2002; 359 :145–9. doi: 10.1016/S0140-6736(02)07373-7. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ioannidis JPA. Why Most Published Research Findings Are False. PLoS Med. 2005; 2 (8):e124. doi: 10.1371/journal.pmed.0020124. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lancaster GA, Campbell MJ, Eldridge S, Farrin A, Marchant M, Muller S, Perera R, Peters TJ, Prevost AT, Rait G. Trials in Primary Care: issues in the design of complex intervention. Statistical Methods in Medical Research. 2010. to appear. [ PubMed ]
  • Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: new guidance. Medical Research Council; 2008. http://www.mrc.ac.uk/Utilities/Documentrecord/index.htm?d=MRC004871 [ PMC free article ] [ PubMed ] [ Google Scholar ]

IMAGES

  1. Pilot study

    research methodology pilot study

  2. Flowchart of the Research Methodology (highligted to Preliminary and...

    research methodology pilot study

  3. Pilot Studies: Common Uses and Misuses

    research methodology pilot study

  4. REL Blog

    research methodology pilot study

  5. pilot test in research methodology

    research methodology pilot study

  6. Pilot study

    research methodology pilot study

VIDEO

  1. Terminology In Research || Pilot Study || Analysis || Types Of Research By Sunil Tailor Sir|| Part 5

  2. PSY 2120: Why study research methods in psychology?

  3. Part 1: Designing the Methodology

  4. Part 6: Research Studies

  5. Pilot Testing in UX Research

  6. Mastering Research Methodology

COMMENTS

  1. Pilot Study in Research: Definition & Examples

    Advantages. Limitations. Examples. A pilot study, also known as a feasibility study, is a small-scale preliminary study conducted before the main research to check the feasibility or improve the research design. Pilot studies can be very important before conducting a full-scale research project, helping design the research methods and protocol.

  2. Conducting the Pilot Study: A Neglected Part of the Research Process

    There are several critical aspects related to the implementation such as the pilot study size, the methods, and the content of the pilot study. Sometimes, especially in large research projects, a number of pilot studies may be needed and qualitative as well as quantitative methods may be used (van Teijlingen & Hundley, 2001).

  3. A tutorial on pilot studies: the what, why and how

    Pilot studies can be conducted in both quantitative and qualitative studies. Adopting a similar approach to Lancaster et al.[], we focus on quantitative pilot studies - particularly those done prior to full-scale phase III trialsPhase I trials are non-randomized studies designed to investigate the pharmacokinetics of a drug (i.e. how a drug is distributed and metabolized in the body) including ...

  4. Guidelines for Designing and Evaluating Feasibility Pilot Studies

    Pilot studies are a necessary first step to assess the feasibility of methods and procedures to be used in a larger study. Some consider pilot studies to be a subset of feasibility studies (), while others regard feasibility studies as a subset of pilot studies.As a result, the terms have been used interchangeably ().Pilot studies have been used to estimate effect sizes to determine the sample ...

  5. Introduction of a pilot study

    A pilot study is the first step of the entire research protocol and is often a smaller-sized study assisting in planning and modification of the main study [, ]. More specifically, in large-scale clinical studies, the pilot or small-scale study often precedes the main trial to analyze its validity. Before a pilot study begins, researchers must ...

  6. A tutorial on pilot studies: the what, why and how

    Pilot studies for phase III trials - which are comparative randomized trials designed to provide preliminary evidence on the clinical efficacy of a drug or intervention - are routinely performed in many clinical areas. ... Researchers use information gathered in pilot studies to refine or modify the research methodology for a study and to ...

  7. What is a pilot study?

    After an interpretation of results, pilot studies should conclude with one of the following: (1) the main study is not feasible; (2) the main study is feasible, with changes to the protocol; (3) the main study is feasible without changes to the protocol OR. (4) the main study is feasible with close monitoring.

  8. What Pilot Studies Are and Why They Matter

    A pilot study is a preliminary small-scale study that researchers conduct in order to help them decide how best to conduct a large-scale research project. Using a pilot study, a researcher can identify or refine a research question, figure out what methods are best for pursuing it, and estimate how much time and resources will be necessary to ...

  9. Methods

    5 Piloting may help you secure funding and support for your research. 6 You can use a pilot to get feedback on your study design. 7 Keep an eye out for ethical considerations when piloting. 8 Be transparent about how your pilot informs the design of your main study. 9 Explore unknowns.

  10. Why Is a Pilot Study Important in Research?

    You can determine the feasibility of your research design, with a pilot study before you start. This is a preliminary, small-scale "rehearsal" in which you test the methods you plan to use for your research project. You will use the results to guide the methodology of your large-scale investigation. Pilot studies should be performed for ...

  11. What is a pilot or feasibility study? A review of current practice and

    A brief definition is that a pilot study is a 'small study for helping to design a further confirmatory study'[].A very useful discussion of exactly what is a pilot study has been given by Thabane et al. [] Such kinds of study may have various purposes such as testing study procedures, validity of tools, estimation of the recruitment rate, and estimation of parameters such as the variance of ...

  12. A tutorial on pilot studies: the what, why and how

    Conducting a pilot prior to the main study can enhance the likelihood of success of the main study and potentially help to avoid doomed main studies. The objective of this paper is to provide a detailed examination of the key aspects of pilot studies for phase III trials including: 1) the general reasons for conducting a pilot study; 2) the ...

  13. Guidance for conducting feasibility and pilot studies for

    Cross-sectional surveys, interviews or document review, qualitative research or mix methods approaches may be used to assess implementation contextual factors, such as barriers and enablers to implementation and/or the acceptability, perceived feasibility or utility of implementation strategies or research methods . Pilot implementation studies ...

  14. Doing A Pilot Study: Why Is It Essential?

    A pilot study is one of the essential stages in a research project. This paper aims to describe the importance of and steps involved in executing a pilot study by using an example of a descriptive study in primary care. The process of testing the feasibility of the project proposal, recruitment of subjects, research tool and data analysis was ...

  15. Full article: Guidance for using pilot studies to inform the design of

    We illustrate how to use pilot studies to inform the design of future randomized controlled trials (RCTs) so that the likelihood of answering the research question is high. We show how pilot studies can address each of the objectives listed earlier, how to optimally design a pilot trial, and how to perform sample size sensitivity analysis.

  16. How to Conduct Pilot Tests

    Refining research methods. A pilot study with a small sample size offers a testing ground for the instruments, tools, or techniques that the researchers plan to use. For example, suppose a project involves using a new interview technique. In that case, the pilot group can provide feedback on the clarity of questions, the flow of the interview ...

  17. (PDF) Introduction of a pilot study

    A pilot study is the first. step of the entire research p rotocol and is often a smaller-sized. study assisting in planning and m odification of the main study. [1,2]. More specifically, in large ...

  18. The Importance of Pilot Studies in Psychology: Exploring Their Purpose

    Pilot studies play a crucial role in fine-tuning research methods, including assessment procedures, treatment fidelity, and overall research design. By conducting pilot studies, researchers can identify any flaws or limitations in their planned methods and make necessary adjustments to ensure that the study procedures are effective and valid.

  19. Piloting and Feasibility Studies in IS Research

    Key Takeaways. 1. A pilot study is a small-scale preliminary study conducted to evaluate feasibility of the key steps in a future study. 2. Any research method can be applied to a pilot study, but some of the most frequently used are randomised control trails and surveys. 3.

  20. The Role and Interpretation of Pilot Studies in Clinical Research

    A pilot study is, "A small-scale test of the methods and procedures to be used on a larger scale …" (Porta, 2008). The fundamental purpose of conducting a pilot study is to examine the feasibility of an approach that is intended to ultimately be used in a larger scale study. This applies to all types of research studies.

  21. (PDF) Piloting for Interviews in Qualitative Research

    The study will adopt Majid et al.'s (2017) pilot study procedure, as shown in Figure 3. Experts have recommended that a pilot study sample be 10% of the sample from the primary study (Connelly, 2008).

  22. What is a pilot or feasibility study? A review of current practice and

    Pilot studies. A Pilot Study is a version of the main study that is run in miniature to test whether the components of the main study can all work together. It is focused on the processes of the main study, for example to ensure recruitment, randomisation, treatment, and follow-up assessments all run smoothly.

  23. PDF CHAPTER 5 PILOT STUDY 1. INTRODUCTION

    CHAPTER 5 PILOT STUDY. 1. INTRODUCTION. The pilot study of the current research was the first step of the practical application of the Gestalt play therapy emotional intelligence programme for primary school children. It is also the last step of the first grouping of steps of an intervention research study (De Vos, 2002:409-418) namely ...