
Neag School of Education

Educational Research Basics by Del Siegle

Single Subject Research

Single subject research (also known as single case experiments) is popular in the fields of special education and counseling. This research design is useful when the researcher is attempting to change the behavior of an individual or a small group of individuals and wishes to document that change. Unlike true experiments, where the researcher randomly assigns participants to a control and treatment group, in single subject research the participant serves as both the control and treatment group. The researcher uses line graphs to show the effects of a particular intervention or treatment. An important feature of single subject research is that only one variable is changed at a time. Single subject research designs are “weak when it comes to external validity…. Studies involving single-subject designs that show a particular treatment to be effective in changing behavior must rely on replication–across individuals rather than groups–if such results are to be found worthy of generalization” (Fraenkel & Wallen, 2006, p. 318).

Suppose a researcher wished to investigate the effect of praise on reducing disruptive behavior over many days. First she would need to establish a baseline of how frequently the disruptions occurred. She would measure how many disruptions occurred each day for several days. In the example below, the target student was disruptive seven times on the first day, six times on the second day, and seven times on the third day. Note how the sequence of time is depicted on the x-axis (horizontal axis) and the dependent variable (outcome variable) is depicted on the y-axis (vertical axis).

[Figure: line graph of the three baseline days, with days on the x-axis and the number of disruptions on the y-axis]

Once a baseline of behavior has been established (when a consistent pattern emerges with at least three data points), the intervention begins. The researcher continues to plot the frequency of behavior while implementing the intervention of praise.

[Figure: line graph showing the baseline days followed by additional days once the praise intervention begins]

In this example, we can see that the frequency of disruptions decreased once praise began. The design in this example is known as an A-B design. The baseline period is referred to as A and the intervention period is identified as B.
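For readers who would rather script these graphs than draw them by hand (the Excel instructions linked later on this page are another option), the sketch below shows one way to plot an A-B line graph in Python with matplotlib. The disruption counts are hypothetical and simply mirror the example described above.

```python
import matplotlib.pyplot as plt

# Hypothetical daily disruption counts: three baseline days, five intervention days
days = list(range(1, 9))
disruptions = [7, 6, 7, 4, 3, 3, 2, 2]
baseline_days = 3  # phase A ends after day 3

fig, ax = plt.subplots()

# Plot each phase as its own connected line so the phase change is visible
ax.plot(days[:baseline_days], disruptions[:baseline_days], marker="o", label="A: Baseline")
ax.plot(days[baseline_days:], disruptions[baseline_days:], marker="o", label="B: Praise intervention")

# Dashed vertical line marking the phase change between day 3 and day 4
ax.axvline(baseline_days + 0.5, linestyle="--", color="gray")

ax.set_xlabel("Day")                    # time on the x-axis
ax.set_ylabel("Number of disruptions")  # dependent variable on the y-axis
ax.set_ylim(0, max(disruptions) + 1)
ax.legend()
plt.show()
```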

[Figure: A-B design graph with the baseline phase labeled A and the intervention phase labeled B]

Another design is the A-B-A design. An A-B-A design (also known as a reversal design) involves discontinuing the intervention and returning to a nontreatment condition.

[Figure: A-B-A design graph showing baseline, intervention, and a return to the nontreatment condition]

Sometimes an individual’s behavior is so severe that the researcher cannot wait to establish a baseline and must begin with an intervention. In this case, a B-A-B design is used. The intervention is implemented immediately (before establishing a baseline). This is followed by a measurement without the intervention and then a repeat of the intervention.

[Figure: B-A-B design graph showing the initial intervention, its withdrawal, and the repeated intervention]

Multiple-Baseline Design

Sometimes, a researcher may be interested in addressing several issues for one student or a single issue for several students. In this case, a multiple-baseline design is used.

“In a multiple baseline across subjects design, the researcher introduces the intervention to different persons at different times. The significance of this is that if a behavior changes only after the intervention is presented, and this behavior change is seen successively in each subject’s data, the effects can more likely be credited to the intervention itself as opposed to other variables. Multiple-baseline designs do not require the intervention to be withdrawn. Instead, each subject’s own data are compared between intervention and nonintervention behaviors, resulting in each subject acting as his or her own control (Kazdin, 1982). An added benefit of this design, and all single-case designs, is the immediacy of the data. Instead of waiting until postintervention to take measures on the behavior, single-case research prescribes continuous data collection and visual monitoring of that data displayed graphically, allowing for immediate instructional decision-making. Students, therefore, do not linger in an intervention that is not working for them, making the graphic display of single-case research combined with differentiated instruction responsive to the needs of students.” (Geisler, Hessler, Gardner, & Lovelace, 2009)

[Figure: multiple-baseline graph showing the intervention introduced to each student at a different point in time]
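To illustrate the staggered introduction of the intervention, here is a minimal sketch (hypothetical data, Python/matplotlib) of a multiple-baseline-across-subjects graph for three students; the dashed phase-change line falls at a different session for each student.

```python
import matplotlib.pyplot as plt

# Hypothetical multiple-baseline data: the intervention is introduced to
# three students at staggered times (after sessions 3, 5, and 7).
sessions = list(range(1, 11))
data = {
    "Student 1": ([7, 6, 7, 3, 2, 2, 1, 2, 1, 1], 3),
    "Student 2": ([5, 6, 5, 6, 5, 2, 2, 1, 1, 1], 5),
    "Student 3": ([8, 7, 8, 7, 8, 7, 7, 3, 2, 2], 7),
}

fig, axes = plt.subplots(len(data), 1, sharex=True, figsize=(6, 8))
for ax, (name, (scores, start)) in zip(axes, data.items()):
    ax.plot(sessions, scores, marker="o")
    ax.axvline(start + 0.5, linestyle="--", color="gray")  # staggered phase change
    ax.set_ylabel("Disruptions")
    ax.set_title(name)
axes[-1].set_xlabel("Session")
plt.tight_layout()
plt.show()
```

Because the behavior changes only after each student's own phase-change line, a graph like this supports crediting the change to the intervention rather than to outside events.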

Regardless of the research design, the line graphs used to illustrate the data contain a set of common elements.

[Figure: line graph annotated with the common elements of single subject research graphs]

Generally, in single subject research we count the number of times something occurs in a given time period and see whether it occurs more or less often after an intervention is implemented. For example, we might measure how many baskets someone makes while shooting for 2 minutes. We would repeat that at least three times to establish our baseline. Next, we would test some intervention: we might play music while shooting, give encouragement while shooting, or video the person while shooting to see if the intervention influenced the number of shots made. After the three baseline measurements (three sets of 2-minute shooting), we would take several more measurements (additional sets of 2-minute shooting) with the intervention in place and plot the number of baskets made in 2 minutes at each measured time point. This works well for behaviors that are distinct and can be counted.
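As a minimal numeric sketch of this counting approach (the basket counts below are hypothetical), the baseline and intervention phases can be summarized by their means before graphing:

```python
# Hypothetical baskets made during each 2-minute shooting set
baseline = [4, 5, 4]            # three baseline sets
intervention = [6, 7, 6, 8, 7]  # sets measured after introducing music

baseline_mean = sum(baseline) / len(baseline)
intervention_mean = sum(intervention) / len(intervention)

print(f"Baseline mean:     {baseline_mean:.1f} baskets per 2 minutes")
print(f"Intervention mean: {intervention_mean:.1f} baskets per 2 minutes")
print(f"Change:            {intervention_mean - baseline_mean:+.1f} baskets per 2 minutes")
```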

Sometimes behaviors come and go over time (such as being off task in a classroom or not listening during a coaching session). One way to record these is to select a period of time (say 5 minutes) and mark down every 10 seconds whether the participant is on task. We make a minimum of three sets of 5-minute observations for a baseline, implement an intervention, and then make more sets of 5-minute observations with the intervention in place. We use this method rather than counting how many times someone is off task because a person who was continually off task would receive a count of only 1, while someone who was off task twice for 15 seconds each would receive a count of 2; yet the second person is certainly not off task twice as much as the first. Recording whether the person is off task at 10-second intervals therefore gives a more accurate picture: the person continually off task would score 30 (off task at every 10-second interval for 5 minutes), and the person off task twice for a short time would score 2 (off task during only 2 of the 10-second intervals).
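A small sketch of this interval-recording arithmetic (with hypothetical observations): a 5-minute observation checked every 10 seconds yields 30 intervals, and the score is simply the number of intervals in which the behavior was observed.

```python
# One 5-minute observation checked every 10 seconds = 30 intervals
intervals_per_observation = (5 * 60) // 10  # 30

# Hypothetical record: True means the participant was off task at that check
record = [False] * intervals_per_observation
record[4] = True    # off task at one check
record[17] = True   # off task at a second check

score = sum(record)  # 2 of 30 intervals
print(f"Off-task score: {score} of {intervals_per_observation} intervals")
print(f"A participant off task at every check would score {intervals_per_observation}")
```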

I also have additional information about how to record single-subject research data.

I hope this helps you better understand single subject research.

I have created a PowerPoint on Single Subject Research, which is also available below as a video.

I have also created instructions for creating single-subject research design graphs with Excel.

Fraenkel, J. R., & Wallen, N. E. (2006). How to design and evaluate research in education (6th ed.). Boston, MA: McGraw Hill.

Geisler, J. L., Hessler, T., Gardner, R., III, & Lovelace, T. S. (2009). Differentiated writing interventions for high-achieving urban African American elementary students. Journal of Advanced Academics, 20, 214–247.

Del Siegle, Ph.D. University of Connecticut [email protected] www.delsiegle.info

Revised 02/02/2024



Perspective
Published: 22 November 2022

Single case studies are a powerful tool for developing, testing and extending theories

  • Lyndsey Nickels (ORCID: 0000-0002-0311-3524) 1, 2,
  • Simon Fischer-Baum (ORCID: 0000-0002-6067-0538) 3 &
  • Wendy Best (ORCID: 0000-0001-8375-5916) 4

Nature Reviews Psychology, volume 1, pages 733–747 (2022)


Psychology embraces a diverse range of methodologies. However, most rely on averaging group data to draw conclusions. In this Perspective, we argue that single case methodology is a valuable tool for developing and extending psychological theories. We stress the importance of single case and case series research, drawing on classic and contemporary cases in which cognitive and perceptual deficits provide insights into typical cognitive processes in domains such as memory, delusions, reading and face perception. We unpack the key features of single case methodology, describe its strengths, its value in adjudicating between theories, and outline its benefits for a better understanding of deficits and hence more appropriate interventions. The unique insights that single case studies have provided illustrate the value of in-depth investigation within an individual. Single case methodology has an important place in the psychologist’s toolkit and it should be valued as a primary research tool.




Acknowledgements

The authors thank all of those pioneers of and advocates for single case study research who have mentored, inspired and encouraged us over the years, and the many other colleagues with whom we have discussed these issues.

Author information

Authors and Affiliations

School of Psychological Sciences & Macquarie University Centre for Reading, Macquarie University, Sydney, New South Wales, Australia

Lyndsey Nickels

NHMRC Centre of Research Excellence in Aphasia Recovery and Rehabilitation, Australia

Psychological Sciences, Rice University, Houston, TX, USA

Simon Fischer-Baum

Psychology and Language Sciences, University College London, London, UK

Wendy Best


Contributions

L.N. led and was primarily responsible for the structuring and writing of the manuscript. All authors contributed to all aspects of the article.

Corresponding author

Correspondence to Lyndsey Nickels .

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Reviews Psychology thanks Yanchao Bi, Rob McIntosh, and the other, anonymous, reviewer for their contribution to the peer review of this work.


Cite this article

Nickels, L., Fischer-Baum, S. & Best, W. Single case studies are a powerful tool for developing, testing and extending theories. Nat Rev Psychol 1, 733–747 (2022). https://doi.org/10.1038/s44159-022-00127-y


Accepted: 13 October 2022

Published: 22 November 2022

Issue Date: December 2022

DOI: https://doi.org/10.1038/s44159-022-00127-y





In This Article: Single-Case Experimental Designs

  • Introduction
  • General Overviews and Primary Textbooks
  • Textbooks in Applied Behavior Analysis
  • Types of Single-Case Experimental Designs
  • Model Building and Randomization in Single-Case Experimental Designs
  • Visual Analysis of Single-Case Experimental Designs
  • Effect Size Estimates in Single-Case Experimental Designs
  • Reporting Single-Case Design Intervention Research


Single-Case Experimental Designs

by S. Andrew Garbacz and Thomas R. Kratochwill. Last reviewed: 29 July 2020. Last modified: 29 July 2020. DOI: 10.1093/obo/9780199828340-0265

Single-case experimental designs are a family of experimental designs that are characterized by researcher manipulation of an independent variable and repeated measurement of a dependent variable before (i.e., baseline) and after (i.e., intervention phase) introducing the independent variable. In single-case experimental designs a case is the unit of intervention and analysis (e.g., a child, a school). Because measurement within each case is conducted before and after manipulation of the independent variable, the case typically serves as its own control. Experimental variants of single-case designs provide a basis for determining a causal relation by replication of the intervention through (a) introducing and withdrawing the independent variable, (b) manipulating the independent variable across different phases, and (c) introducing the independent variable in a staggered fashion across different points in time. Due to their economy of resources, single-case designs may be useful during development activities and allow for rapid replication across studies.

Several sources provide overviews of single-case experimental designs. Barlow, et al. 2009 includes an overview for the development of single-case experimental designs, describes key considerations for designing and conducting single-case experimental design research, and reviews procedural elements, assessment strategies, and replication considerations. Kazdin 2011 provides detailed coverage of single-case experimental design variants as well as approaches for evaluating data in single-case experimental designs. Kratochwill and Levin 2014 describes key methodological features that underlie single-case experimental designs, including philosophical and statistical foundations and data evaluation. Ledford and Gast 2018 covers research conceptualization and writing, design variants within single-case experimental design, definitions of variables and associated measurement, and approaches to organize and evaluate data. Riley-Tillman and Burns 2009 provides a practical orientation to single-case experimental designs to facilitate uptake and use in applied settings.
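The "Model Building and Randomization in Single-Case Experimental Designs" entry listed above concerns the statistical evaluation of these designs. As a minimal illustration of the randomization-test idea for an A-B design in which the intervention start point is chosen at random (hypothetical data, not drawn from any of the works cited here), the observed baseline-versus-intervention difference can be compared with the differences produced by every admissible start point:

```python
import numpy as np

# Hypothetical daily disruption counts for one case (A-B design);
# the intervention was actually introduced at the 6th observation.
scores = np.array([7, 6, 7, 8, 6, 4, 3, 3, 2, 2, 1, 2])
actual_start = 5  # 0-indexed: observations 0-4 are baseline, 5-11 are intervention

def effect(start):
    """Mean baseline score minus mean intervention score for a given start point."""
    return scores[:start].mean() - scores[start:].mean()

observed = effect(actual_start)

# Compare the observed effect with the effects from every admissible start
# point (requiring at least three observations in each phase).
candidate_starts = range(3, len(scores) - 2)
null_effects = np.array([effect(s) for s in candidate_starts])
p_value = np.mean(null_effects >= observed)

print(f"Observed effect: {observed:.2f}; randomization p-value: {p_value:.2f}")
```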

Barlow, D. H., M. K. Nock, and M. Hersen, eds. 2009. Single case experimental designs: Strategies for studying behavior change. 3d ed. New York: Pearson.

A comprehensive reference about the process of designing and conducting single-case experimental design studies. Chapters are integrative but can stand alone.

Kazdin, A. E. 2011. Single-case research designs: Methods for clinical and applied settings. 2d ed. New York: Oxford Univ. Press.

A complete overview and description of single-case experimental design variants as well as information about data evaluation.

Kratochwill, T. R., and J. R. Levin, eds. 2014. Single-case intervention research: Methodological and statistical advances. New York: Routledge.

The authors describe in depth the methodological and analytic considerations necessary for designing and conducting research that uses a single-case experimental design. In addition, the text includes chapters from leaders in psychology and education who provide critical perspectives about the use of single-case experimental designs.

Ledford, J. R., and D. L. Gast, eds. 2018. Single case research methodology: Applications in special education and behavioral sciences. New York: Routledge.

Covers the research process from writing literature reviews, to designing, conducting, and evaluating single-case experimental design studies.

Riley-Tillman, T. C., and M. K. Burns. 2009. Evaluating education interventions: Single-case design for measuring response to intervention. New York: Guilford Press.

Focuses on accelerating uptake and use of single-case experimental designs in applied settings. This book provides a practical, “nuts and bolts” orientation to conducting single-case experimental design research.



A descriptive analysis of assessment measures on the effectiveness of a comprehensive stuttering intervention approach: A single case study

Affiliation

  • 1 Department of Speech Pathology and Audiology, Faculty of Humanities, University of the Witwatersrand, Johannesburg. [email protected].
  • PMID: 32370524
  • PMCID: PMC7203267
  • DOI: 10.4102/sajcd.v67i1.648

Background: For effective client outcomes, stuttering assessment and intervention approaches need to be aligned. This encompasses using assessment and intervention approaches that address the three multidimensional constructs of stuttering, namely core behaviours, secondary behaviours and negative feelings and attitudes.

Objective: The study aimed to explore whether multiple assessment measures could be used to describe the effectiveness of a comprehensive stuttering intervention approach, undergirded by the International Classification of Functioning, Disability and Health (ICF) framework.

Method: A single-subject case design was employed with one male adult who stutters. Data were collected by administering the Stuttering Severity Instrument-Fourth Edition (SSI-4) and the Overall Assessment of the Speaker's Experience of Stuttering-Adults (OASES-A) at three testing periods (pre-intervention, immediately post-intervention and 7 months post-intervention), and a semi-structured interview schedule immediately post-intervention. Descriptive statistics were used to analyse the SSI-4 and OASES-A, and thematic analysis was conducted to evaluate the participant's interview schedule responses.

Results: The participant's total scores, impact scores and severity ratings of both the SSI-4 and OASES decreased across the three testing periods. The main theme of effectiveness of the comprehensive stuttering intervention to reduce aspects of disability emerged from the participant's responses.

Conclusion: Evaluation of the results from the assessment measures revealed that the comprehensive stuttering intervention approach was effective in reducing the participant's core behaviours, secondary behaviours and negative feelings and attitudes. Assessment and management of fluency disorders should promote a client-specific multidimensional approach that extends beyond the core behaviours and secondary behaviours, by addressing the underlying social and emotional facets of fluency disorders.

Keywords: ICF; OASES; SSI-4; South Africa; case study; comprehensive approach; person who stutters; speech-language pathologist; stuttering intervention.


The power of Para sport: the effect of performance-focused swimming training on motor function in adolescents with cerebral palsy and high support needs (GMFCS IV) – a single-case experimental design with 30-month follow-up

Iain Mayank Dutia 1,2 (http://orcid.org/0000-0002-0368-6410), Mark Connick 1,3, Emma Beckman 1, Leanne Johnston 4, Paula Wilson 1, Angelo Macaro 1, Jennifer O'Sullivan 1, Sean Tweedy 1 (http://orcid.org/0000-0002-2011-3382)

1 The University of Queensland School of Human Movement and Nutrition Sciences, Saint Lucia, Queensland, Australia
2 School of Allied Health, Australian Catholic University - Brisbane Campus, Banyo, Queensland, Australia
3 School of Exercise and Nutrition Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
4 The University of Queensland School of Health and Rehabilitation Sciences, Saint Lucia, Queensland, Australia

Correspondence to Dr Iain Mayank Dutia, The University of Queensland School of Human Movement and Nutrition Sciences, Saint Lucia, QLD 4072, Australia; i.dutia{at}uq.edu.au

Objective This study aims to evaluate the effect of a performance-focused swimming programme on motor function in previously untrained adolescents with cerebral palsy and high support needs (CPHSN) and to determine whether the motor decline typical of adolescents with CPHSN occurred in these swimmers.

Methods A Multiple-Baseline, Single-Case Experimental Design (MB-SCED) study comprising five phases and a 30-month follow-up was conducted. Participants were two males and one female, all aged 15 years, untrained and with CPHSN. The intervention was a 46-month swimming training programme, focused exclusively on improving performance. Outcomes were swim performance (velocity); training load (rating of perceived exertion min/week; swim distance/week) and Gross Motor Function Measure-66-Item Set (GMFM-66). MB-SCED data were analysed using interrupted time-series simulation analysis. Motor function over 46 months was modelled (generalised additive model) using GMFM-66 scores and compared with a model of predicted motor decline.

Results Improvements in GMFM-66 scores in response to training were significant (p<0.001), and two periods of training withdrawal each resulted in significant motor decline (p≤0.001). Participant motor function remained above baseline levels for the study duration, and, importantly, participants did not experience the motor decline typical of other adolescents with CPHSN. Weekly training volumes were also commensurate with WHO recommended physical activity levels.

Conclusions Results suggest that adolescents with CPHSN who meet physical activity guidelines through participation in competitive swimming may prevent motor decline. However, this population is clinically complex, and in order to permit safe, effective participation in competitive sport, priority should be placed on the development of programmes delivered by skilled multiprofessional teams.

Trial registration number ACTRN12616000326493.

  • Para-Athletes
  • Rehabilitation

Data availability statement

Data are available on reasonable request.

https://doi.org/10.1136/bjsports-2023-107689


WHAT IS ALREADY KNOWN ON THIS TOPIC

Compared with ambulant people with cerebral palsy, gross motor function declines in non-ambulant people with cerebral palsy and high support needs (CPHSN). These patients are also less physically active, and it is plausible that relative inactivity contributes to motor decline; however, this premise has not been investigated.

WHAT THIS STUDY ADDS

This study demonstrated that previously inactive adolescents with CPHSN who undertook performance-focused swimming training with multiprofessional guidance over 46 months improved sports performance and maintained gross motor function during a life stage when population-based modelling predicted gross motor decline.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

This study introduces the novel concept of ‘Para Sport as Medicine’ and suggests that performance-focused sports training programmes delivered by multiprofessional teams may be an effective means of preventing motor decline among people with CPHSN, as well as conferring a range of psychosocial and well-being benefits.

Cerebral palsy (CP) is the most common neuromotor disorder affecting children, and non-progression of the underlying neuropathology is a defining feature of CP. 1 In children with CP who are ambulant—Gross Motor Function Classification System (GMFCS) levels I and II—gross motor function improves from birth to approximately 7–9 years of age and then plateaus. However, among children with CP who are non-ambulant, have high support needs and are classified as GMFCS levels IV and V (CPHSN), early developmental gains are generally followed by a decline in motor function throughout adolescence and into early adulthood. 2 The underlying causes of this decline are poorly understood, although it has been suggested that reduced access to neurological care, the development of new neurological conditions 3 or poor management of hypertonia during periods of growth 4 may contribute.

We posit that insufficient habitual physical activity may contribute to motor function decline in adolescents with CPHSN. Specifically, the majority of children and adolescents with CP are insufficiently active for good health 5 and, compared with those who are ambulant, those with CPHSN are more sedentary and less physically active. 6 It is plausible that the relatively greater gross motor decline of adolescents with CPHSN is caused, at least in part, by their relatively low levels of habitual physical activity.

Unfortunately, people with CPHSN are grossly under-represented in exercise training studies. A recent review identified that only 3% of participants were either GMFCS level IV or V. 7 While evidence indicates that physical activity can improve gross motor function in people with CP, the effect has only been demonstrated in children at GMFCS levels I and II, 8 and as they do not experience the motor decline associated with GMFCS IV and V, the generalisability of the finding is not known. Additionally, interventions have been brief (8–12 weeks) with limited follow-up, so the extent to which improvements are maintained is not known. 8

Competitive Para swimming is a form of physical activity open to people with CPHSN and gives them access to competitive swimming, one of Australia's most popular and culturally significant sports, particularly among children and adolescents. 9 Para swimmers devote the majority of their sports participation time to performance-focused training, defined as training that is planned and undertaken for the primary purpose of maximising sports performance. 9 Because of its focus on performance enhancement—primarily maximising swimming velocity 10 —performance-focused swimming training is clearly distinct from conventional aquatic therapies (including hydrotherapy), which explicitly target therapeutic outcomes and which have been shown to be effective. 11 While performance-focused training does not have therapeutic goals, personal testimony from experienced Para swimmers with CP indicates that they attribute large, meaningful improvements in physical function to such training. 9 To date, however, the veracity of this testimony has not been evaluated in swimmers with CP, including swimmers with CPHSN.

Investigating whether performance-focused swimming training prevents motor decline in people with CPHSN presents considerable challenges. Severe functional limitations increase the time cost of participation for people with CPHSN by 8–13 times, 12 significantly increasing research costs. Further, the heterogeneity that characterises CP is greatest in this population, who are often affected by a greater number of comorbidities that are more severe 13 and many of these comorbidities (eg, seizure disorders, eating and drinking difficulties and pain) act as independent prognostic variables in exercise training trials. In group-based research designs such as randomised controlled trials (RCTs), the result of this heterogeneity is predictable, systematic between-participant differences in exercise training responses which act to amplify noise and threaten internal validity.

Single-case experimental research designs (SCEDs), where each participant acts as their own control, account for logistical and research design challenges in people with CPHSN and offer a methodologically robust alternative to RCTs. 14 The SCED generates high-level evidence (equivalent to RCTs 15 ) using small samples, permits tailoring of the intervention to each participant and produces individual outcomes—the SCED is one of the few designs in which it is possible to detect if, when and to what extent each participant responds to the intervention. 16 These design features are particularly advantageous for studies in people with CPHSN: a relatively small, heterogeneous population who require tailored interventions and support which meets their personal needs, and who have been largely excluded from the literature to date. 10

Therefore, this study employed a single-case experimental design to address two primary aims: to evaluate the effect of a performance-focused swimming programme on gross motor functioning in previously untrained, inactive adolescents with CPHSN and to determine whether the motor decline typical of adolescents with CPHSN occurred in swimmers who trained and competed regularly for one Paralympic cycle over a 46-month period.

The ParaSTART (Sports Training And Research Team) programme was established to facilitate the research presented in this manuscript and other projects. ‘Para’ indicates a focus on people who are eligible to compete in Para sport. 17 The programme specialises in physically demanding Para sports training for people with high support needs—those using wheeled mobility and requiring personal assistance for fundamental tasks of daily living. A brief vignette is available at https://youtu.be/HxCRf7hHj7k.

Participants

Young, inactive people with CPHSN were recruited from within a 30 km radius of the University of Queensland, Brisbane, Australia. Key inclusion and exclusion criteria are described fully in the published protocol. 10 Participants were one female and two males with CPHSN, aged 15–16 years on enrolment, classified as GMFCS IV. None were achieving WHO physical activity guidelines, and none had previously participated in performance-focused sports training. Two other people were screened and did not meet the inclusion criteria, and one was screened and excluded because of a contraindication to the intervention (see online supplemental appendix 1). Included participants provided assent, and participants' parents/guardians provided informed consent on enrolment. Table 1 describes the clinical characteristics, sport classes and stroke preference of each participant.

Supplemental material

Table 1 Participant characteristics

Study design

To evaluate the effect of a performance-focused swimming programme on gross motor functioning, a Multiple-Baseline, Single-Case Experimental Design (MB-SCED) was used. It took place over 16 months, between March 2017 and July 2018, and comprised five phases, A1 (baseline)-B1-A2-B2-A3, where ‘A’ phases represent periods of no training or training withdrawal and ‘B’ phases represent training exposures, each of 16 weeks' duration, the standard training block length for competitive swimmers. 10 18 Two features of this design make it particularly strong. First, there are repeated measures throughout all phases—a total of 102 data collection points, exceeding the 75 data points required for this design according to SCED guidelines. 19 Second, the transitions between training and withdrawal phases were temporally staggered, 10 and the 5-phase design presented a total of 12 opportunities to detect an experimental effect: 4 transitions (from an A to B or B to A phase) for each participant. The MB-SCED methods are reported fully in the published protocol. 10 The trial was registered (Australian and New Zealand Clinical Trial Registry number ACTRN12616000326493).
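
To make the five-phase structure concrete, the short Python sketch below (illustrative only; not part of the published protocol) lays out each participant's A1-B1-A2-B2-A3 schedule using the phase lengths reported later in the Methods and confirms the totals quoted above: 102 measurement occasions and 12 phase transitions. The participant labels are placeholders.

```python
# Sketch (Python): the A1-B1-A2-B2-A3 measurement schedule, using the phase
# lengths reported in the Methods (baselines of 5, 8 and 11 data points,
# followed by 8, 5, 8 and 5 points in B1, A2, B2 and A3).

baseline_points = {"participant_1": 5, "participant_2": 8, "participant_3": 11}
post_baseline = [("B1", 8), ("A2", 5), ("B2", 8), ("A3", 5)]

schedules = {p: [("A1", a1)] + post_baseline for p, a1 in baseline_points.items()}

total_points = sum(n for phases in schedules.values() for _, n in phases)
transitions = sum(len(phases) - 1 for phases in schedules.values())

print(total_points)  # 102 measurement occasions across the three participants
print(transitions)   # 12 A-to-B or B-to-A transitions (4 per participant)
```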

Following the MB-SCED, a 30-month follow-up period commenced during which participants continued a schedule of regular training and monitoring. Data from the full 46 months (16-month MB-SCED and 30-month follow-up period) provided a basis for comparing participant motor function with predicted motor decline. 2 During the 30-month follow-up period, the training phases were extended from 16 weeks to longer training blocks aligning with the competitive swimming season, and the withdrawal phases were incorporated between seasons to facilitate recovery. Gross motor function, swimming performance and training load were longitudinally monitored.

Intervention

The intervention comprised performance-focused swimming training over the course of four consecutive competitive swimming seasons (one Paralympic cycle). The term ‘performance-focused’ refers to the fact that the sole aim of all strategies employed was to improve competitive swimming performance over 50 m. A comprehensive description of the training programme is available in the published protocol. 10 Training aimed to achieve three main goals:

  • Improve water safety skills.
  • Minimise hydrodynamic drag forces.
  • Maximise propulsive forces.

Training was delivered by a multiprofessional team comprising qualified physiotherapists, exercise physiologists and swim coaches, supported by a multiprofessional medical team. Training session frequency increased from once per week to five times per week as the training phases progressed. Training session intensity and duration varied but aimed to gradually increase over time. The participants were paired with a typically developing volunteer training buddy who provided training assistance.

In the MB-SCED, repeated measures of swimming performance and gross motor function were conducted throughout five phases: A1 (Baseline)-B1-A2-B2-A3 with staggered exposure/withdrawal sequences. 10 In accordance with SCED guidelines, 19 a minimum of five data points occurred for each participant in each phase. 10 During the baseline phase, participant 1 completed 5 data points, participant 2 completed 8 data points and participant 3 completed 11 data points. All participants then completed: phase B1 (8 data points), phase A2 (5 data points), phase B2 (8 data points) and phase A3 (5 data points). 10

Swimming velocity

A full description of the test protocol, including rationale, is reported in the published protocol. 10 To summarise, each participant completed a maximum-effort swimming trial. The duration of each participant's test was based on the 2017 World Para Swimming Championships 50 m freestyle qualifying time for the participant's class, and they swam their preferred stroke as fast and as far as possible in this allotted time. The distance covered was recorded and average swimming velocity was calculated.
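
As a minimal illustration of that calculation, the hypothetical Python sketch below divides the distance covered by the allotted trial time; the distance, time and function name are invented for the example and are not data from the study.

```python
# Sketch (Python): average swimming velocity from a timed maximal-effort trial.
# The calculation simply divides the distance covered by the allotted time;
# the numbers below are hypothetical, not data from the study.

def average_velocity(distance_m: float, allotted_time_s: float) -> float:
    """Return mean swimming velocity in m/s over the allotted trial time."""
    return distance_m / allotted_time_s

# Example: a hypothetical swimmer covering 32 m in a 55 s trial window
print(round(average_velocity(32.0, 55.0), 3))  # 0.582 m/s
```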

Gross motor function

The Gross Motor Function Measure-66-Item Set (GMFM-66-IS) has excellent levels of overall agreement with the full version of the GMFM-66 when measuring change over time (Intraclass Correlation Coefficient or ICC≥0.9). 20 Given the time-intensive nature of the full test, the short item set was appropriate for use in this study as a repeated measure of gross motor function. Scores for the tasks within the item set were entered into the Gross Motor Ability Estimator programme to obtain the final GMFM-66-item score.

Training load

Training load comprised the frequency (training sessions per week), duration (minutes spent training) and intensity, which in this study was quantified using the session-RPE (rating of perceived exertion) method. 21 Each participant rated each training session intensity on the OMNI RPE scale 22 which ranges from 0 (extremely easy) to 10 (extremely hard), and this rating was multiplied by the session duration to produce a given number of session RPE minutes. Weekly totals for RPE minutes were calculated.
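
The session-RPE arithmetic can be sketched as follows; the session log, week numbers and durations are hypothetical and only illustrate how RPE ratings and durations combine into weekly RPE-minute totals.

```python
# Sketch (Python): session-RPE training load. Each session's OMNI RPE rating
# (0-10) is multiplied by its duration in minutes, and the products are summed
# per week. The session log below is hypothetical and purely illustrative.

from collections import defaultdict

# (week, rpe_rating, duration_min) for a hypothetical fortnight of training
sessions = [
    (1, 4, 45), (1, 5, 60), (1, 3, 30),
    (2, 6, 60), (2, 5, 45), (2, 4, 60), (2, 7, 30),
]

weekly_rpe_minutes = defaultdict(int)
for week, rpe, duration in sessions:
    weekly_rpe_minutes[week] += rpe * duration

print(dict(weekly_rpe_minutes))  # {1: 570, 2: 1035}
```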

Randomisation/blinding

Assessments were conducted by a physiotherapist with expertise in the assessment of gross motor function in people with CP. The assessor was blinded to the intervention and whether each participant was in a period of training or withdrawal at the time of assessment. Participants were randomised to either a 10-week (5 data point), 16-week (8 data point) or 22-week (11 data point) baseline period.

Statistical methods

Interrupted time-series simulation (ITSSIM) analysis 23 was used to calculate a standardised mean difference effect size, d, and an unstandardised mean difference, D, for each participant's transition from A to B or B to A in the outcomes of swimming performance and GMFM-66. Standardised effect sizes were interpreted as follows: small, 0.20–0.49; moderate, 0.50–0.80; and large, greater than 0.80. 24 The 5-phase design comprised a total of 12 transitions—4 transitions (from an A-B or B-A phase) for each of three participants. The criterion for inferring causality was statistically significant effects for at least three transitions. 25
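
For orientation, the sketch below computes a plain between-phase standardized mean difference (d) and unstandardized difference (D) for a single A-to-B transition and applies the interpretation thresholds above. It is not the ITSSIM procedure itself, which additionally simulates interrupted time series to obtain p values, and the phase data are hypothetical.

```python
# Sketch (Python): a plain between-phase standardized mean difference for one
# A-to-B transition. This is NOT the full ITSSIM procedure (which simulates
# interrupted time series to obtain p values); it only illustrates the D and d
# quantities reported, using hypothetical GMFM-66-like scores.

import statistics as st
from math import sqrt

def transition_effect(phase_a, phase_b):
    """Return (D, d): unstandardized and standardized mean differences."""
    D = st.mean(phase_b) - st.mean(phase_a)
    na, nb = len(phase_a), len(phase_b)
    pooled_var = ((na - 1) * st.variance(phase_a) +
                  (nb - 1) * st.variance(phase_b)) / (na + nb - 2)
    return D, D / sqrt(pooled_var)

def interpret(d):
    """Apply the interpretation thresholds quoted above."""
    size = abs(d)
    if size > 0.80:
        return "large"
    if size >= 0.50:
        return "moderate"
    if size >= 0.20:
        return "small"
    return "below the small threshold"

phase_a = [41.0, 42.5, 41.8, 42.1, 41.6]                    # hypothetical baseline scores
phase_b = [43.0, 44.2, 45.1, 45.6, 46.0, 46.3, 46.8, 47.1]  # hypothetical training-phase scores

D, d = transition_effect(phase_a, phase_b)
print(round(D, 2), round(d, 2), interpret(d))
```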

The longitudinal non-linear fluctuations in GMFM, as a function of participant age, were evaluated using a generalised additive model with a penalised cubic regression spline basis function and visualised using the ‘ggplot’ function from the ‘ggplot2’ package (R Studio V.1.3.1056, PBC, Boston, Massachusetts, USA). A Gaussian distribution with an identity link function was used to fit the generalised additive model. Five knots were included in the model, positioned at quartiles of the observed data points. The fitted smoothed coefficients resulting from the analysis were plotted along with the 95% CIs. The GMFM fluctuations that could be expected to occur in people of the same age as those in the current study were plotted according to the original models developed by Hanna et al. 2
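
The authors fitted the GAM in R; as a rough, hypothetical stand-in, the sketch below smooths simulated GMFM-66-like scores against age with a SciPy cubic smoothing spline. It is not a re-implementation of the penalised-spline GAM, only an illustration of smoothing longitudinal motor-function data against age.

```python
# Sketch (Python): smoothing longitudinal GMFM-66-like scores against age.
# The authors fitted a generalised additive model with a penalised cubic
# regression spline in R; this SciPy smoothing spline on simulated data is
# only a rough stand-in, not a re-implementation of their model.

import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)

# Hypothetical repeated measurements between ages 15.5 and 19.5 years:
# an initial rise followed by a plateau, plus measurement noise.
age = np.linspace(15.5, 19.5, 25)
gmfm = 42 + 4 * (1 - np.exp(-(age - 15.5))) + rng.normal(0, 0.4, age.size)

spline = UnivariateSpline(age, gmfm, k=3, s=age.size * 0.2)  # cubic, lightly smoothed

grid = np.linspace(age.min(), age.max(), 100)
fitted = spline(grid)
print(round(float(fitted[0]), 1), round(float(fitted[-1]), 1))  # smoothed start vs end score
```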

Equity, diversity and inclusion and patient involvement statement

Equity and patient voice were fundamental to our justification for this study and at the forefront of the discussion of results, implications for future research and clinical practice. This work is driven by the voices of people with disabilities who have high support needs and who acted as consumer advisors for the ParaSTART programme of research. Our research team comprises both males and females from three countries and includes senior, mid-career and early-career academics.

MB-SCED to evaluate the effects of a performance-focused swimming programme

Figure 1 presents training load, GMFM-66 and swimming performance data for each participant over the five phases of the 16-month SCED study. Training load is presented graphically in the three panels on the left side of figure 1. It shows that, in accordance with a multiple baseline design, the baseline (phase A1) is 11, 17 and 23 weeks for participants 1, 2 and 3, respectively. Training load during baseline and the two withdrawal periods (A2 and A3) was zero. Table 2 presents an overview of the training load completed in each of the two training phases, B1 and B2. Total RPE minutes accrued during B1 were 10 888, 7 572 and 11 028 for participants 1, 2 and 3, respectively. Total RPE minutes accrued during B2 were 14 673, 10 008 and 11 164, respectively.


Figure 1 Training load, swimming performance and GMFM-66 data for each participant throughout the five-phase A1-B1-A2-B2-A3 SCED study. GMFM-66, Gross Motor Function Measure-66-Item Set; SCED, Single-Case Experimental Design.

Table 2 Training load data for each participant presented by training phase

The three middle panels of figure 1 present swimming velocity, with each participant achieving greater swimming velocity in each training phase. Table 3 presents the results of the ITSSIM analysis for swimming velocity. Data are presented for each participant and each phase transition. All participants achieved increases in swimming velocity each time the intervention was introduced (transitions A1-B1 and A2-B2), and effect sizes were moderate-large (0.61–3.75). In one instance—participant 3, transition A2-B2—the increase in swimming velocity was not statistically significant (p=0.11). Responses to withdrawal of the intervention (transitions B1-A2 and B2-A3) were more variable. Swimming performance in participants 2 and 3 decreased in the B1-A2 transition and effect sizes were moderate-large (−0.69 to −1.82); swimming velocity increased in participant 1 in this transition, though the effect size was small (0.39). Swimming velocity in participants 1 and 3 decreased in the B2-A3 transition and effect sizes were large (−1.17 to −1.64); swimming velocity increased in participant 2 in this transition, though the effect size was small (0.29).

Table 3 Swimming performance and GMFM-66-IS ITSSIM results

GMFM-66 scores are presented in the three right panels of figure 1 and the results of the ITSSIM analysis are presented in table 3 . All participants achieved increases in GMFM-66 score each time the intervention was introduced (transitions A1-B1 and A2-B2). Effect sizes were large in transition A1-B1 (1.15–2.26), but small-moderate in transition A2-B2 (0.11–0.74). GMFM-66 score decreased in all participants each time the intervention was withdrawn (transitions B1-A2 and B2-A3). Effect sizes were moderate-large in transition B1-A2 (−0.50 to −2.01) and small-large in transition B2-A3 (−0.47 to −2.28).

Comparison of measured and predicted motor function over 46 months

The raw weekly training load and modelled GMFM-66 data for each participant over 46 months are presented in figure 2 . Training load remains relatively consistent over the entire period, although participant 1 has some large peaks in the third training period (aged 17 years). The red line indicates 750 RPE min/week, the volume of activity recommended for people with disabilities by the WHO. 26

Figure 2 Longitudinal training load and GMFM-66 data. The left panels display training load data for each participant between the ages of 15/16 years and 19/20 years (displayed on the x-axis; note that the baseline period is not temporally represented). The red horizontal line denotes the RPE-minute value commensurate with national physical activity guidelines (750 RPE min/week). The right panels display modelled GMFM data for each participant, with 95% CIs, and the red line denotes the projected trajectory of motor decline, 2 from the median of baseline GMFM-66 scores. GMFM-66, Gross Motor Function Measure-66-Item Set; LCI, lower confidence interval; RPE, rating of perceived exertion; UCI, upper confidence interval.

The three right-hand panels of figure 2 present modelled GMFM-66 data for each participant. Scores increase in the first year of training (age 15–16 years), and then plateau in the subsequent 3 years into late adolescence. The red line in each GMFM panel is the predicted trajectory for GMFM-66 scores. 2 For each participant, the red line originates from the median GMFM-66 score at baseline for each participant. The upward trend of modelled GMFM-66 measures for each participant contrasts with the predicted downward trend in GMFM-66 indicated by the red line.

There were two main findings from this study. First, a performance-focused swimming training programme comprising training volumes commensurate with WHO physical activity recommendations and delivered by a skilled multiprofessional team conferred improved motor function in previously untrained, physically inactive people with CPHSN. The five-phase SCED demonstrated that motor function improved following training phases and declined following withdrawal phases in all participants, thereby indicating the relationship was causal—performance-focused swimming caused gross motor function to improve.

The second main finding was that, over a 46-month period, participant gross motor function initially improved and then plateaued around the new, improved level. These improvements occurred during a life stage when population-based modelling 2 indicates that motor function typically declines. Specifically, the participants were aged 15/16 years at baseline and their GMFM-66 scores improved by between 2 and 7 points from their median baseline score and then plateaued until age 19/20 years. During the same life stage, population-based modelling predicts mean GMFM-66 scores typically fall by 4.2 points for people with GMFCS level IV CP. 2 Thus, the difference between predicted and measured motor function for participants in this project was between 6.2 and 11.2 points on the GMFM-66 scale, a clinically meaningful difference. The plateau in motor function indicated a ceiling effect—participants may have, at least to some extent, maximised their gross motor capacity as measured using the GMFM-66.

Together, this study's two main findings indicate that people with CP at GMFCS level IV who achieve physical activity guidelines during adolescence may not only prevent motor decline but also improve motor function. Conversely, the high prevalence of physical inactivity in this group during adolescence may account for declines which are currently accepted as clinically inevitable. This may have implications for clinical practice—and highlights the importance of including physical activity interventions as part of routine care of adolescents with CP.

Improvement in swimming velocity for all three participants validated our characterisation of the training programme as ‘performance focused’. Results support the veracity of previously reported athlete testimonies which claim that meaningful improvements in physical function are conferred by performance-focused sports training. 9

We suggest three key features of the programme contributed to the results observed:

The competitive sport context: For young people with CPHSN, competitive sport has a number of advantages, and the views of ParaSTART participants have been published. 27 In addition, competitive sport is age appropriate and culturally significant for many young people with CP; is routinely supported by multiprofessional teams; focuses on achievement of excellence, rather than identifying and remediating motor-sensory impairments and fosters personal interaction and teamwork 28 —critical features for youth who often experience social isolation and challenging periods of transition between adolescence and adulthood. 29

Qualified multiprofessional staff: The delivery team included physiotherapists, exercise physiologists and coaches. They were supported by a medical doctor, dietician, occupational therapist, speech pathologist and sport psychologist. Heterogeneous, complex comorbidities and medical events/issues were managed during the programme: the number of comorbidities/medical issues for participants 1, 2 and 3 was N=5, N=8 and N=3, respectively. Table 1 lists the comorbidities and medical issues of each participant. Participants in this study could not be safely and effectively accommodated in a non-specialist, community-based swimming club.

Transport costs supported: Participants were not independent on public transport and required either a taxi or family member to drive them. Associated expenses were met by research funding and community donations.

The importance of this study is amplified because little is known about exercise training responses in people with CP at GMFCS level IV. 7 In the absence of research evidence, some clinicians and researchers have vastly underestimated the physical capabilities of this group. One recent review stated that people at GMFCS IV and V ‘…will struggle performing structured exercise programmes’ and ‘are unable to perform activities greater than 1.0 MET’. 30 Note that 1.0 MET is the energy expended during quiet sitting. 31 Low rates of physical activity participation and gross under-representation in exercise training trials 7 may result from such assertions and are refuted by the results of this study. Future studies should include people with CPHSN.

Methodologically, the SCED used was ideally suited to the study aims. The design generated high-level evidence 14 and conferred a range of advantages 10 including permitting the allocation of time and expertise required to safely supervise participants with severe primary impairments and multiple comorbidities who were at increased risk of serious adverse events (see table 1 ); providing personalised assistance to alleviate the increased time cost associated with training 12 and providing the methodological freedom to individualise training type, duration and intensity without compromising experimental control.

Importantly, the SCED overcame the arguably impossible task of achieving both adequate sample size and satisfactory participant homogeneity in relation to key prognostic variables for a group-level study design. Specifically, we posit that the absence of RCTs investigating responses to sport and exercise training interventions in people with CPHSN may be due, at least in part, to the infeasibility of recruiting a sample that is both large enough to adequately power the trial and also sufficiently homogenous with respect to key prognostic variables (age, sex, neurological subtype, functional effects and comorbidities). Wider use of the SCED may facilitate generation of high-quality Para sport and exercise training evidence in other heterogeneous populations, including people with acquired brain injuries and spinal cord injuries.

Limitations

This study has several limitations. First, the small number of participants and use of the SCED enhanced internal validity in this study but limited external validity. This necessitates cautious interpretation of the generalisability of the results. Second, the age range within the sample was narrow (all aged 15 years on enrolment), and it is possible that children of different ages may respond differently. Further longitudinal studies throughout the known period of decline (from age 7 years to 21 years) are required. Finally, free-living physical activity was not measured during the baseline or withdrawal periods. Although people with CPHSN typically accumulate low volumes of daily activity 6 and no training was conducted during these periods, we did not control for this effect.

This study demonstrated that performance-focused swimming training provided a context for adolescents with CPHSN to accumulate health-enhancing volumes of physical activity, improve their swimming performance and their gross motor function during a life stage when population-based modelling predicts gross motor decline. However, this is a clinically complex population. In order to permit their effective participation in sports, priority should be placed on the development of procedures and programmes that can be delivered by a multiprofessional team. Further research employing SCED methodology is required, with emphasis on replication in this population and in other Para sports.

Ethics statements

Patient consent for publication.

Consent obtained directly from patient(s).

Ethics approval

This study involved human participants and two ethical approvals were provided by the University of Queensland Ethics Committee: UQ approval #2015000831, which applied to the initial 16-month MB-SCED; and UQ approval #2018001472 which applied to the 30-month follow-up study. Participants gave informed consent to participate in the study before taking part.


Supplementary materials

Supplementary data.

This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

  • Data supplement 1

Contributors IMD is the overall content guarantor and contributed to project design, data collection, analysis and manuscript preparation. MC contributed to project design. He conducted the analysis of the MB-SCED and longitudinal dataset and contributed to manuscript preparation. EB was a lead researcher on the project and contributed to project design, protocol development and manuscript preparation. LJ was a lead researcher on the project and contributed to project design, recruitment, data analysis and manuscript preparation. PW contributed to project design, data collection and analysis of the MB-SCED and manuscript preparation. AM contributed to project design, data collection and analysis of the MB-SCED and manuscript preparation. JO’S contributed to project design, data collection and analysis of the longitudinal dataset and manuscript preparation. ST was the lead investigator. He oversaw the ParaSTART programme and contributed to all aspects of the project.

Funding We gratefully acknowledge the funders of the work reported in this manuscript: (1) Queensland Academy of Sport; (2) Paralympics Australia; (3) Swimming Australia; (4) Sporting Hasbeens; (5) Gregory Terrace: St Joseph’s College; (6) Pat Rafter Cherish the Children. We also gratefully acknowledge the following individuals/groups who contributed to and facilitated the work reported in this manuscript: (1) The participants and their families; (2) Dr Gaj Panagoda—medical doctor; (3) Dr Jacki Walker—dietician; (4) Minnie Ma—physiotherapist; (5) Jean-Michel Lavalliere—swimming coach; (6) Nathan Seefeld —sport psychologist; (7) UQ Swim Club—community organisation.

Competing interests None declared.

Patient and public involvement Patients and/or the public were involved in the design, or conduct, or reporting, or dissemination plans of this research. Refer to the Methods section for further details.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.


A systematic review of applied single-case research published between 2016 and 2018: Study designs, randomization, data aspects, and data analysis

  • Published: 26 October 2020
  • Volume 53, pages 1371–1384 (2021)

René Tanious & Patrick Onghena


Single-case experimental designs (SCEDs) have become a popular research methodology in educational science, psychology, and beyond. The growing popularity has been accompanied by the development of specific guidelines for the conduct and analysis of SCEDs. In this paper, we examine recent practices in the conduct and analysis of SCEDs by systematically reviewing applied SCEDs published over a period of three years (2016–2018). Specifically, we were interested in which designs are most frequently used and how common randomization in the study design is, which data aspects applied single-case researchers analyze, and which analytical methods are used. The systematic review of 423 studies suggests that the multiple baseline design continues to be the most widely used design and that the difference in central tendency level is by far most popular in SCED effect evaluation. Visual analysis paired with descriptive statistics is the most frequently used method of data analysis. However, inferential statistical methods and the inclusion of randomization in the study design are not uncommon. We discuss these results in light of the findings of earlier systematic reviews and suggest future directions for the development of SCED methodology.


Introduction

In single-case experimental designs (SCEDs) a single entity (e.g., a classroom) is measured repeatedly over time under different manipulations of at least one independent variable (Barlow et al., 2009 ; Kazdin, 2011 ; Ledford & Gast, 2018 ). Experimental control in SCEDs is demonstrated by observing changes in the dependent variable(s) over time under the different manipulations of the independent variable(s). Over the past few decades, the popularity of SCEDs has risen continuously as reflected in the number of published SCED studies (Shadish & Sullivan, 2011 ; Smith, 2012 ; Tanious et al., 2020 ), the development of domain-specific reporting guidelines (e.g., Tate et al., 2016a , 2016b ; Vohra et al., 2016 ), and guidelines on the quality of conduct and analysis of SCEDs (Horner, et al., 2005 ; Kratochwill et al., 2010 , 2013 ).

The What Works Clearinghouse guidelines

In educational science in particular, the US Department of Education has released a highly influential policy document through its What Works Clearinghouse (WWC) panel (Kratochwill et al., 2010 ) Footnote 1 . The WWC guidelines contain recommendations for the conduct and visual analysis of SCEDs. The panel recommended visually analyzing six data aspects of SCEDs: level, trend, variability, overlap, immediacy of the effect, and consistency of data patterns. However, given the subjective nature of visual analysis (e.g., Harrington, 2013 ; Heyvaert & Onghena, 2014 ; Ottenbacher, 1990 ), Kratochwill and Levin ( 2014 ) later called the formation of a panel for recommendations on the statistical analysis of SCEDs “ the highest imminent priority” (p. 232, emphasis in original) on the agenda of SCED methodologists. Furthermore, Kratochwill and Levin—both members of the original panel—contended that advocating for design-specific randomization schemes in line with the recommendations by Edgington ( 1975 , 1980 ) and Levin ( 1994 ) would constitute an important contribution to the development of updated guidelines.

Developments outside the WWC guidelines

Prior to the publication of updated guidelines, important progress had already been made in the development of SCED-specific statistical analyses and design-specific randomization schemes not summarized in the 2010 version of the WWC guidelines. Specifically, three interrelated areas can be distinguished: effect size calculation, inferential statistics, and randomization procedures. Note that this list includes effect size calculation even though the 2010 WWC guidelines include some recommendations for effect size calculation, but with the reference that further research is “badly needed” (p. 23) to develop novel effect size measures comparable to those used in group studies. In the following paragraphs, we give a brief overview of the developments in each area.

Effect size measures

The effect size measures mentioned in the 2010 version of the WWC guidelines mainly concern the data aspect overlap: percentage of non-overlapping data (Scruggs, Mastropieri, & Casto, 1987), percentage of all non-overlapping data (Parker et al., 2007), and percentage of data points exceeding the median (Ma, 2006). Other overlap-based effect size measures are discussed in Parker et al. (2011). Furthermore, the 2010 guidelines discuss multilevel models, regression models, and a standardized effect size measure proposed by Shadish et al. (2008) for comparing results between participants in SCEDs. In later years, this measure has been further developed for other designs and meta-analyses (Hedges et al., 2012; Hedges et al., 2013; Shadish et al., 2014).

Without mentioning any specific measures, the guidelines further mention effect sizes that compare the different conditions within a single unit and standardize by dividing by the within-phase variance. These effect size measures quantify the data aspect level. Beretvas and Chung (2008) proposed, for example, to subtract the mean of the baseline phase from the mean of the intervention phase, and subsequently divide by the pooled within-case standard deviation. Other proposals for quantifying the data aspect level include the slope and level change procedure, which corrects for baseline trend (Solanas et al., 2010), and the mean baseline reduction, which is calculated by subtracting the mean of treatment observations from the mean of baseline observations and subsequently dividing by the mean of the baseline phase (O’Brien & Repp, 1990).

Efforts have also been made to quantify the other four data aspects. For an overview of the available effect size measures per data aspect, the interested reader is referred to Tanious et al. (2020). Examples of quantifications for the data aspect trend include the split-middle technique (Kazdin, 1982) and ordinary least squares (Kromrey & Foster-Johnson, 1996), but many more proposals exist (see, e.g., Manolov, 2018, for an overview and discussion of different trend techniques). Fewer proposals exist for variability, immediacy, and consistency. The WWC guidelines recommend using the standard deviation for within-phase variability. Another option is the use of stability envelopes as suggested by Lane and Gast (2014). It should be noted, however, that neither of these methods is an effect size measure because they are assessed within a single phase. For the assessment of between-phase variability changes, Kromrey and Foster-Johnson (1996) recommend using variance ratios. More recently, Levin et al. (2020) recommended the median absolute deviation for the assessment of variability changes. The WWC guidelines recommend subtracting the mean of the last three baseline data points from the mean of the first three intervention data points to assess immediacy. Michiels et al. (2017) proposed the immediate treatment effect index, extending this logic to ABA and ABAB designs. For consistency of data patterns, only one measure currently exists, based on the Manhattan distance between data points from experimentally similar phases (Tanious et al., 2019).
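
As a small illustration of two of the overlap-based measures mentioned above, the sketch below computes the percentage of non-overlapping data (PND) and the percentage of data points exceeding the baseline median (PEM) for a hypothetical series in which the intervention is expected to increase the behaviour; the data and function names are invented for the example.

```python
# Sketch (Python): two of the overlap-based effect sizes mentioned above,
# assuming the intervention is expected to INCREASE the target behaviour.
# PND = percentage of intervention points above the highest baseline point;
# PEM = percentage of intervention points above the baseline median.
# The data are hypothetical.

import statistics as st

def pnd(baseline, intervention):
    ceiling = max(baseline)
    return 100 * sum(x > ceiling for x in intervention) / len(intervention)

def pem(baseline, intervention):
    baseline_median = st.median(baseline)
    return 100 * sum(x > baseline_median for x in intervention) / len(intervention)

baseline = [3, 4, 2, 4, 3]
intervention = [5, 4, 6, 7, 6, 8]

print(round(pnd(baseline, intervention), 1))  # 83.3 (5 of 6 points exceed the baseline maximum of 4)
print(round(pem(baseline, intervention), 1))  # 100.0 (all 6 points exceed the baseline median of 3)
```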

Inferential statistics

Inferential statistics are not summarized in the 2010 version of the WWC guidelines. However, inferential statistics do have a long and rich history in debates surrounding the methodology and data analysis of SCEDs. Excellent review articles detailing and explaining the available methods for analyzing data from SCEDs are available in Manolov and Moeyaert ( 2017 ) and Manolov and Solanas ( 2018 ). In situations in which results are compared across participants within or between studies, multilevel models have been proposed. The 2010 guidelines do mention multilevel models, but with the indication that more thorough investigation was needed before their use could be recommended. With few exceptions, such as the pioneering work by Van den Noortgate and Onghena ( 2003 , 2008 ), specific proposals for multilevel analysis of SCEDs had long been lacking. Not surprisingly, the 2010 WWC guidelines gave new impetus for the development of multilevel models for meta-analyzing SCEDs. For example, Moeyaert, Ugille, et al. ( 2014b ) and Moeyaert, Ferron, et al. ( 2014a ) discuss two-level and three-level models for combining results across single cases. Baek et al. ( 2016 ) suggested a visual analytical approach for refining multilevel models for SCEDs. Multilevel models can be used descriptively (i.e., to find an overall treatment effect size), inferentially (i.e., to obtain a p value or confidence interval), or a mix of both.

Randomization

One concept that is closely linked to inferential statistics is randomization. In the context of SCEDs, randomization refers to the random assignment of measurements to treatment levels (Onghena & Edgington, 2005 ). Randomization, when ethically and practically feasible, can reduce the risk of bias in SCEDs and strengthen the internal validity of the study (Tate et al., 2013 ). To incorporate randomization into the design, specific randomization schemes are needed, as previously stated (Kratochwill & Levin, 2014 ). In alternation designs, randomization can be introduced by randomly alternating the sequence of conditions, either unrestricted or restricted (e.g., maximum of two consecutive measurements under the same condition) (Onghena & Edgington, 1994 ). In phase designs (e.g., ABAB), multiple baseline designs, and changing criterion designs, where no rapid alternation of treatments takes place, it is possible to randomize the moment of phase change after a minimum number of measurements has taken place in each phase (Marascuilo & Busk, 1988 ; Onghena, 1992 ). In multiple baseline designs, it is also possible to predetermine different baseline phase lengths for each tier and then randomly allocate participants to different baseline phase lengths (Wampold & Worsham, 1986 ). Randomization tests use the randomization actually present in the design for quantifying the probability of the observed effect occurring by chance. These tests are among the earliest data analysis techniques specifically proposed for SCEDs (Edgington, 1967 , 1975 , 1980 ).
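
The logic of a randomization test for a phase design with a randomly chosen moment of phase change can be sketched as follows. The series, the minimum phase lengths and the mean-difference test statistic are illustrative choices for the sketch, not a reproduction of any specific published procedure.

```python
# Sketch (Python): a randomization test for an AB phase design in which the
# moment of phase change was chosen at random, in the spirit of the
# phase-design randomization described above. The series, the minimum phase
# lengths and the mean-difference statistic are illustrative choices only.

import statistics as st

def ab_randomization_test(scores, actual_start, min_a=3, min_b=3):
    """One-sided p value for an increase in level after the phase change."""
    def mean_diff(start):
        return st.mean(scores[start:]) - st.mean(scores[:start])

    observed = mean_diff(actual_start)
    possible_starts = range(min_a, len(scores) - min_b + 1)  # admissible change points
    as_extreme = sum(mean_diff(s) >= observed for s in possible_starts)
    return observed, as_extreme / len(possible_starts)

scores = [2, 3, 2, 3, 2, 6, 7, 6, 8, 7, 8]   # hypothetical series: A phase, then B phase
observed, p = ab_randomization_test(scores, actual_start=5)
print(round(observed, 2), round(p, 3))  # 4.6 0.167 for this series (6 admissible change points)
```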

The main aim of the present paper is to systematically review the methodological characteristics of recently published SCEDs with an emphasis on the data aspects put forth in the WWC guidelines. Specific research questions are:

  • What is the frequency of the various single-case design options?
  • How common is randomization in the study design?
  • Which data aspects do applied researchers include in their analysis?
  • What is the frequency of visual and statistical data analysis techniques?

For systematic reviews of SCEDs predating the publication of the WWC guidelines, the interested reader is referred to Hammond and Gast ( 2010 ), Shadish and Sullivan ( 2011 ), and Smith ( 2012 ).

Justification for publication period selection

The present systematic review deals with applied SCED studies published in the period from 2016 to 2018. The reasons for the selection of this period are threefold: relevance, sufficiency, and feasibility. In terms of relevance, there is a noticeable lack of recent systematic reviews dealing with the methodological characteristics of SCEDs in spite of important developments in the field. Apart from the previously mentioned reviews predating the publication of the 2010 WWC guidelines, only two reviews can be mentioned that were published after the WWC guidelines. Solomon ( 2014 ) reviewed indicators of violations of normality and independence in school-based SCED studies until 2012. More recently, Woo et al. ( 2016 ) performed a content analysis of SCED studies published in American Counseling Association journals between 2003 and 2014. However, neither of these reviews deals with published SCEDs in relation to specific guidelines such as WWC. In terms of sufficiency, a three-year period can give sufficient insight into recent trends in applied SCEDs. In addition, it seems reasonable to assume a delay between the publication of guidelines such as WWC and their impact in the field. For example, several discussion articles regarding the WWC guidelines were published in 2013. Wolery ( 2013 ) and Maggin et al. ( 2013 ) pointed out perceived weaknesses in the WWC guidelines, which in turn prompted a reply by the original authors (Hitchcock et al., 2014 ). Discussions like these can help increase the exposure of the guidelines among applied researchers. In terms of feasibility, it is important to note that we did not set any specification on the field of study for inclusion. Therefore, the period of publication had to remain feasible and manageable to read and code all included publications across all different study fields (education, healthcare, counseling, etc.).

Data sources

We performed a broad search of the English-language SCED literature using PubMed and Web of Science. The choice of these two search engines was based on Gusenbauer and Haddaway (2019), who assessed the eligibility of 26 search engines for systematic reviews. Gusenbauer and Haddaway came to the conclusion that PubMed and Web of Science could be used as primary search engines in systematic reviews, as they fulfilled all necessary requirements such as functionality of Boolean operators and reproducibility of search results in different locations and at different times. We selected only these two of all eligible search engines to keep the size of the project manageable and to prevent excessive overlap between the results. Table 1 gives an overview of the search terms we used and the number of hits per search query. This list does not exclude duplicates between the search terms or between the two search engines. For all designs containing the term “randomized” (e.g., randomized block design), we used the Boolean operator AND to specify that the search results must also contain either the term “single-case” or “single-subject”. An initial search for randomized designs without this specification yielded well over 1000 results per search query.

Study selection

We specifically searched for studies published between 2016 and 2018. We used the date of first online publication to determine whether an article met this criterion (i.e., articles that were published online during this period, even if not yet published in print). Initially, the abstracts and article information of all search results were scanned for general exclusion criteria. In a first step, all articles that fell outside the date range of interest were excluded, as well as articles for which the full text was not available or only available against payment. We only included articles written in English. In a second step, all duplicate articles were deleted. From the remaining unique search results, all articles that did not use any form of single-case experimentation were excluded. Such studies include for example non-experimental forms of case studies. Lastly, all articles not reporting any primary empirical data were excluded from the final sample. Thus, purely methodological articles were discarded. Methodological articles were defined as articles that were within the realm of SCEDs but did not report any empirical data or reported only secondary empirical data. Generally, these articles propose new methods for analyzing SCEDs or perform simulation studies to test existing methods. Similarly, commentaries, systematic reviews, and meta-analyses were excluded from the final sample, as such articles do not contain primary empirical data. In line with systematic review guidelines (Staples & Niazi, 2007 ), the second author verified the accuracy of the selection process. Ten articles were randomly selected from an initial list of all search results for a joint discussion between the authors, and no disagreements about the selection emerged. Figure 1 presents the study attrition diagram.

Figure 1 Study attrition diagram

Coding criteria

For all studies, the basic design was coded first. For coding the design, we followed the typology presented in Onghena and Edgington ( 2005 ) and Tate et al. ( 2016a ) with four overarching categories: phase designs, alternation designs, multiple baseline designs, and changing criterion designs. For each of these categories, different design options exist. Common variants of phase designs include for example AB and ABAB, but other forms also exist, such as ABC. Within the alternation designs category the main variants are the completely randomized design, the alternating treatments designs, and the randomized block design. Multiple baseline designs can be conducted across participants, behaviors, or settings. They can be either concurrent, meaning that all participants start the study at the same time, or non-concurrent. Changing criterion designs can employ either a single-value criterion or a range-bound criterion. In addition to these four overarching categories, we added a design category called hybrid Footnote 2 . The hybrid category consists of studies using several design strategies combined, for example a multiple baseline study with an integrated alternating treatments design. For articles reporting more than one study, each study was coded separately. For coding the basic design, we followed the authors’ original description of the study.

Randomization was coded as a dichotomous variable, i.e., either present or not present. In order to be coded as present, some form of randomization had to be present in the design itself, as previously defined in the randomization section. Studies with a fixed order of treatments or phase change moments with randomized stimulus presentation, for example, were coded as randomization not present.

Data aspect

A major contribution of the WWC guidelines was the establishment of six data aspects for the analysis of SCEDs: level, trend, variability, overlap, immediacy, and consistency. Following the guidelines, these data aspects can be defined operationally as follows. Level is the mean score within a phase. The straight line best fitting the data within a phase refers to the trend. The standard deviation or range in a phase represents the data aspect variability. The proportion of data points overlapping between adjacent phases is the data aspect overlap. The immediacy of an effect is assessed by a comparison of the last three data points of an intervention with the first three data points of the subsequent intervention. Finally, consistency Footnote 3 is assessed by comparing data patterns from experimentally similar interventions. In multiple baseline designs, consistency can be assessed horizontally (within series) when more than one phase change is present, and vertically (across series) by comparing experimentally similar phases across participants, behaviors, or settings. It was of course possible that studies reported more than one data aspect or none at all. For studies reporting more than one data aspect, each data aspect was coded separately.

Data analysis

The data analysis methods were coded directly from the authors' description in the "data analysis" section. If no such section was present, the data analysis methods were coded from the presentation of the results. Generally, two main forms of data analysis for SCEDs can be distinguished: visual and statistical analysis. In the visual analytical approach, a time series graph of the dependent variable under the different experimental conditions is inspected to determine treatment effectiveness. The statistical analytical approach can be roughly divided into two categories: descriptive and inferential statistics. Descriptive statistics summarize the data without quantifying the uncertainty of the description; examples include means, standard deviations, and effect sizes. Inferential statistics involve an inference from the observed results to unknown parameter values and quantify the uncertainty of that inference, for example by providing p values and confidence intervals.
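The three-way distinction can be illustrated with a short sketch (ours, with invented data): the plot supports visual analysis, the mean difference is a descriptive summary, and a naive permutation p value stands in for an inferential statistic. The permutation deliberately ignores the design's actual randomization scheme and is only meant to show what "quantifying uncertainty" looks like in code; matplotlib is used solely for the plot.

```python
from itertools import combinations
from statistics import mean
import matplotlib.pyplot as plt

baseline = [7, 6, 7, 8, 6]
treatment = [5, 4, 3, 3, 2]

# Visual analysis: inspect the plotted time series across conditions.
plt.plot(range(1, 6), baseline, "o-", label="baseline (A)")
plt.plot(range(6, 11), treatment, "s-", label="intervention (B)")
plt.xlabel("Session"); plt.ylabel("Disruptions"); plt.legend()
# plt.show()

# Descriptive statistics: summarize without quantifying uncertainty.
effect = mean(baseline) - mean(treatment)
print(f"Mean difference (descriptive): {effect:.2f}")

# Inferential statistics: quantify uncertainty, here with a simple permutation
# p value over all reassignments of the ten scores to two phases of five.
scores = baseline + treatment
as_extreme, total = 0, 0
for idx in combinations(range(10), 5):
    a = [scores[i] for i in idx]
    b = [scores[i] for i in range(10) if i not in idx]
    total += 1
    if mean(a) - mean(b) >= effect:
        as_extreme += 1
print(f"Permutation p value (inferential): {as_extreme / total:.3f}")
```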

Number of participants

Finally, for each study we coded the number of participants, counting only participants who appeared in the results section. Participants who dropped out prematurely and whose data were not analyzed were not counted.
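Purely as an illustration of how the coded variables fit together (this is our own sketch, not the authors' codebook; all field names and category labels are assumptions based on the descriptions above), one study's codes could be captured in a small record:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical coding record; labels are our reading of the criteria above.
DESIGN_CATEGORIES = {"phase", "alternation", "multiple_baseline",
                     "changing_criterion", "hybrid"}
DATA_ASPECTS = {"level", "trend", "variability", "overlap",
                "immediacy", "consistency"}
ANALYSES = {"visual", "descriptive", "inferential"}

@dataclass
class StudyCode:
    design: str                # one of DESIGN_CATEGORIES
    randomization: bool        # randomization present in the design itself
    analyses: List[str]        # subset of ANALYSES
    data_aspects: List[str]    # subset of DATA_ASPECTS (may be empty)
    n_participants: int        # participants appearing in the results section

    def __post_init__(self):
        assert self.design in DESIGN_CATEGORIES
        assert set(self.analyses) <= ANALYSES
        assert set(self.data_aspects) <= DATA_ASPECTS

# Example: a randomized multiple baseline study analyzed visually with
# descriptive statistics, reporting three data aspects for four participants.
example = StudyCode(design="multiple_baseline", randomization=True,
                    analyses=["visual", "descriptive"],
                    data_aspects=["level", "trend", "variability"],
                    n_participants=4)
```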

General results

For each coding category, interrater agreement was calculated with the formula \( \frac{\text{number of agreements}}{\text{number of agreements} + \text{number of disagreements}} \) based on ten randomly selected articles. The interrater agreement was as follows: design (90%), analysis (60%), data aspect (80%), randomization (100%), and number of participants (80%). Given the initially moderate agreement for analysis, the two authors discussed the discrepancies and then coded a new sample of ten randomly selected articles. The interrater agreement for analysis then increased to 90%.
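The agreement formula above amounts to a one-line computation; the two coder vectors below are invented for illustration.

```python
def interrater_agreement(coder_1, coder_2):
    """Number of agreements divided by agreements plus disagreements."""
    agreements = sum(a == b for a, b in zip(coder_1, coder_2))
    return agreements / len(coder_1)

# Ten hypothetical design codes from two coders (nine matches -> 90% agreement).
c1 = ["MBD", "phase", "MBD", "alternation", "phase", "hybrid", "MBD", "MBD", "phase", "MBD"]
c2 = ["MBD", "phase", "MBD", "alternation", "phase", "hybrid", "MBD", "phase", "phase", "MBD"]
print(interrater_agreement(c1, c2))  # 0.9
```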

In total, 406 articles were included in the final sample, representing 423 studies. One hundred thirty-eight of the 406 articles (34.00%) were published in 2016, 150 articles (36.95%) in 2017, and 118 articles (29.06%) in 2018. Out of the 423 studies, the most widely used form of SCED was the multiple baseline design, which accounted for 49.65% (N = 210) of the studies included in the final sample. Across all studies and designs, the median number of participants was three (IQR = 3). The most popular data analysis technique across all studies was visual analysis paired with descriptive statistics, which was used in 48.94% (N = 207) of the studies. The average number of data aspects analyzed per study was 2.61 (SD = 1.63). The most popular data aspect across all designs and studies was level (83.45%, N = 353). Overall, 22.46% (N = 95) of the 423 studies included randomization in the design. However, these results vary between the different designs; in the following sections, we therefore present a summary of the results per design. A detailed overview of all results per design can be found in Table 2.

Results per design

Phase designs

Phase designs accounted for 25.53% (N = 108) of the studies included in the systematic review. The median number of participants for phase designs was three (IQR = 4). Visual analysis paired with descriptive statistics was the most popular data analysis method for phase designs (40.74%, N = 44), and the majority of studies analyzed several data aspects (54.62%, N = 59); 20.37% (N = 22) did not report any of the six data aspects. The average number of data aspects analyzed in phase designs was 2.02 (SD = 2.07). Level was the most frequently analyzed data aspect for phase designs (73.15%, N = 79). Randomization was very uncommon in phase designs and was included in only 5.56% (N = 6) of the studies.

Alternation designs

Alternation designs accounted for 14.42% (N = 61) of the studies included in the systematic review. The median number of participants for alternation designs was three (IQR = 1). More than half of the alternation design studies used visual analysis paired with descriptive statistics (57.38%, N = 35). The majority of alternation design studies analyzed several data aspects (75.41%, N = 46), while 11.48% (N = 7) did not report which data aspect was the focus of analysis. The average number of data aspects analyzed in alternation designs was 2.38 (SD = 2.06). The most frequently analyzed data aspect for alternation designs was level (85.25%, N = 52). Randomization was used in the majority of alternation designs (59.02%, N = 36).

Multiple baseline designs

Multiple baseline designs, by a large margin the most prevalent design, accounted for nearly half of all studies (49.65%, N = 210) included in the systematic review. The median number of participants for multiple baseline designs was four (IQR = 4). A total of 49.52% (N = 104) of multiple baseline studies were analyzed using visual analysis paired with descriptive statistics, and the vast majority (80.95%, N = 170) analyzed several data aspects, while only 7.14% (N = 15) did not report any of the six data aspects. The average number of data aspects analyzed in multiple baseline designs was 3.01 (SD = 1.61). The most popular data aspect was level, which was analyzed in 87.62% (N = 184) of all multiple baseline designs. Randomization was not uncommon in multiple baseline designs (20.00%, N = 42).

Changing criterion designs

Changing criterion designs accounted for 1.42% (N = 6) of the studies included in the systematic review. The median number of participants for changing criterion designs was three (IQR = 0); 66.67% (N = 4) of changing criterion designs were analyzed using visual analysis paired with descriptive statistics. Half of the changing criterion designs analyzed several data aspects (N = 3), and one study (16.67%) did not report any data aspect. The average number of data aspects analyzed in changing criterion designs was 1.83 (SD = 1.39). The most popular data aspect was level (83.33%, N = 5). None of the changing criterion design studies included randomization in the design.

Hybrid designs

Hybrid designs accounted for 8.98% (N = 38) of the studies included in the systematic review. The median number of participants for hybrid designs was three (IQR = 2). A total of 52.63% (N = 20) of hybrid designs were analyzed with visual analysis paired with descriptive statistics, and the majority of studies analyzed several data aspects (73.68%, N = 28); 10.53% (N = 4) did not report any of the six data aspects. The average number of data aspects considered for analysis was 2.55 (SD = 2.02). The most popular data aspect was level (86.84%, N = 33). Hybrid designs showed the second highest proportion of studies including randomization in the study design (28.95%, N = 11).

Results per data aspect

Out of the 423 studies included in the systematic review, 72.34% (N = 306) analyzed several data aspects, 16.08% (N = 68) analyzed one data aspect, and 11.58% (N = 49) did not report any of the six data aspects.

Level

Across all designs, level was by far the most frequently analyzed data aspect (83.45%, N = 353). Remarkably, nearly all studies that analyzed more than one data aspect included the data aspect level (96.73%, N = 296). Similarly, for studies analyzing only one data aspect, there was a strong prevalence of level (83.82%, N = 57). For studies that only analyzed level, the most common form of analysis was visual analysis paired with descriptive statistics (54.39%, N = 31).

Trend

Trend was the third most popular data aspect. It was analyzed in 45.39% (N = 192) of all studies included in the systematic review. There were no studies in which trend was the only data aspect analyzed, meaning that trend was always analyzed alongside other data aspects, making it difficult to isolate the analytical methods specifically used to analyze trend.

Variability

The data aspect variability was analyzed in 59.10% (N = 250) of the studies, making it the second most prominent data aspect. A total of 80.72% (N = 247) of all studies analyzing several data aspects included variability. However, variability was very rarely the only data aspect analyzed: only 4.41% (N = 3) of the studies analyzing only one data aspect focused on variability. All three studies that analyzed only variability did so using visual analysis.

Overlap

The data aspect overlap was analyzed in 35.70% (N = 151) of all studies and was thus the fourth most analyzed data aspect. Nearly half of all studies analyzing several data aspects included overlap (47.08%, N = 144). For studies analyzing only one data aspect, overlap was the second most common data aspect after level (10.29%, N = 7). The most common mode of analysis for these studies was descriptive statistics paired with inferential statistics (57.14%, N = 4).

Immediacy

The immediacy of the effect was assessed in 28.61% (N = 121) of the studies, making it the second least analyzed data aspect; 39.22% (N = 120) of the studies analyzing several data aspects included immediacy. Only one study analyzed immediacy as the sole data aspect, and this study used visual analysis.

Consistency

Consistency was analyzed in 9.46% (N = 40) of the studies and was thus by far the least analyzed data aspect. It was analyzed in 13.07% (N = 40) of the studies analyzing several data aspects and was never the focus of analysis for studies analyzing only one data aspect.

Several data aspects

As stated previously, 72.34% (N = 306) of all studies analyzed several data aspects. For these studies, the average number of data aspects analyzed was 3.39 (SD = 1.18). The most popular data analysis technique for several data aspects was visual analysis paired with descriptive statistics (56.54%, N = 173).

Not reported

As mentioned previously, 11.58% (N = 49) of the studies did not report any of the six data aspects. For these studies, the most prominent analytical technique was visual analysis alone (61.22%, N = 30). Of all studies not reporting any of the six data aspects, the highest proportion were phase designs (44.90%, N = 22).

Results per analytical method

Visual analysis

Visual analysis, without the use of any descriptive or inferential statistics, was the analytical method used in 16.78% (N = 71) of all included studies. Of all studies using visual analysis, the largest share were multiple baseline design studies (45.07%, N = 32). The largest proportion of studies using visual analysis did not report any data aspect (42.25%, N = 30), closely followed by those analyzing several data aspects (40.85%, N = 29). Randomization was present in 22.54% (N = 16) of all studies using visual analysis.

Descriptive statistics

Descriptive statistics, without the use of visual analysis, was the analytical method used in 3.78% (N = 16) of all included studies. The most common designs for studies using descriptive statistics were phase designs and multiple baseline designs (both 43.75%, N = 7). Half of the studies using descriptive statistics (50.00%, N = 8) analyzed the data aspect level, and 37.5% (N = 6) analyzed several data aspects. One study (6.25%) using descriptive statistics included randomization.

Inferential statistics

Inferential statistics, without the use of visual analysis, was the analytical method used in 2.84% (N = 12) of all included studies. The majority of studies using inferential statistics were phase designs (58.33%, N = 7) and did not report any of the six data aspects (58.33%, N = 7). Of the remaining studies, three (25.00%) reported several data aspects, and two (16.67%) analyzed the data aspect level. Two studies (16.67%) using inferential statistical analysis included randomization.

Descriptive and inferential statistics

Descriptive statistics combined with inferential statistics, but without the use of visual analysis, accounted for 5.67% (N = 24) of all included studies. The majority of studies using this combination of analytical methods were multiple baseline designs (62.5%, N = 15), followed by phase designs (33.33%, N = 8). There were no alternation or hybrid designs using descriptive and inferential statistics. Most of the studies using descriptive and inferential statistics analyzed several data aspects (41.67%, N = 10), followed by the data aspect level (29.17%, N = 7); 16.67% (N = 4) of the studies using descriptive and inferential statistics included randomization.

Visual and descriptive statistics

As mentioned previously, visual analysis paired with descriptive statistics was the most popular analytical method. This method was used in nearly half (48.94%, N = 207) of all included studies. The majority of these studies were multiple baseline designs (50.24%, N = 104), followed by phase designs (21.25%, N = 44). This method of analysis was prevalent across all designs. Nearly all of the studies using this combination of analytical methods analyzed either several data aspects (83.57%, N = 173) or level only (14.98%, N = 31). Randomization was present in 19.81% (N = 41) of all studies using visual and descriptive analysis.

Visual and inferential statistics

Visual analysis paired with inferential statistics accounted for 2.60% (N = 11) of the included studies. The largest proportion of these studies were phase designs (45.45%, N = 5), followed by multiple baseline designs and hybrid designs (both 27.27%, N = 3). This combination of analytical methods was thus not used in alternation or changing criterion designs. The majority of studies using visual analysis and inferential statistics analyzed several data aspects (72.73%, N = 8), while 18.18% (N = 2) did not report any data aspect. One study (9.09%) included randomization.

Visual, descriptive, and inferential statistics

A combination of visual analysis, descriptive statistics, and inferential statistics was used in 18.44% (N = 78) of all included studies. The majority of the studies using this combination of analytical methods were multiple baseline designs (56.41%, N = 44), followed by phase designs (23.08%, N = 18). This analytical approach was used in all designs except changing criterion designs. Nearly all studies using a combination of these three analytical methods analyzed several data aspects (97.44%, N = 76). These studies also showed the highest proportion of randomization (38.46%, N = 30).

None of the above

A small proportion of studies did not use any of the above analytical methods (0.95%, N = 4). Three of these studies (75%) were phase designs and did not report any data aspect. One study (25%) was a multiple baseline design that analyzed several data aspects. Randomization was not used in any of these studies.

Discussion

To our knowledge, the present article is the first systematic review of SCEDs specifically looking at the frequency of the six data aspects in applied research. The systematic review has shown that level is by a large margin the most widely analyzed data aspect in recently published SCEDs. The second most popular data aspect from the WWC guidelines was variability, which was usually assessed alongside level (e.g., as a combination of mean and standard deviation or range). The fact that these two data aspects are also routinely assessed in group studies may indicate a lack of familiarity with SCED-specific analytical methods among applied researchers, but this remains speculative. Phase designs showed the highest proportion of studies not reporting any of the six data aspects and the second lowest average number of data aspects analyzed, second only to changing criterion designs. This was an unexpected finding, given that the WWC guidelines were developed specifically in the context of (and with examples of) phase designs. The multiple baseline design showed the highest number of data aspects analyzed and, at the same time, the lowest proportion of studies not analyzing any of the six data aspects.

These findings regarding the analysis and reporting of the six data aspects need more contextualization. The selection of data aspects for the analysis depends on the research questions and the expected data pattern. For example, if the aim of the intervention is a gradual change over time, then trend becomes more important. If the aim of the intervention is a change in level, then it is important to also assess trend (to verify that the change in level is not just a continuation of a baseline trend) and variability (to rule out that an apparent change in level is merely an artifact of excessive variability). In addition, assessing consistency can add information on whether the change in level is consistent over several repetitions of the experimental conditions (e.g., in phase designs). Similarly, if an abrupt change in the level of the target behavior is expected after changing experimental conditions, then immediacy becomes a relevant data aspect in addition to trend, variability, and level. The important point here is that the research team often has an idea of the expected data pattern and should choose the data aspects to analyze accordingly. The strong prevalence of level found in the present review could be indicative of a failure to assess other data aspects that may be relevant for demonstrating experimental control of the independent variable over the dependent variable.

In line with the findings of earlier systematic reviews (Hammond & Gast, 2010; Shadish & Sullivan, 2011; Smith, 2012), the multiple baseline design continues to be the most frequently used design, and despite the development of sophisticated statistical methods for the analysis of SCEDs, two thirds of all studies still relied on visual analysis alone or visual analysis paired with descriptive statistics. A comparison with the findings of Shadish and Sullivan further reveals that the number of participants included in SCEDs has remained steady over the past decade at around three to four. The relatively small number of changing criterion designs in the present findings is partly due to the fact that changing criterion designs were often combined with other designs and thus coded in the hybrid category, although we did not formally quantify this. This observation is supported by the results of Shadish and Sullivan, who found that changing criterion designs are more often used as part of hybrid designs than as standalone designs. Hammond and Gast even excluded the changing criterion design from their review due to its low prevalence; they found a total of six changing criterion designs published over a period of 35 years. It should be noted, however, that the low prevalence of changing criterion designs is not indicative of the value of this design.

Regarding randomization, the results cannot be interpreted against earlier benchmarks, as neither Smith, Shadish and Sullivan, nor Hammond and Gast quantified the proportion of randomized SCEDs. Overall, randomization in the study design was not uncommon. However, the proportion of randomized SCEDs differed greatly between designs. The results showed that alternating treatments designs have the highest proportion of studies including randomization. This result was to be expected, given that alternating treatments designs are particularly well suited to incorporating randomization. In fact, when Barlow and Hayes (1979) first introduced the alternating treatments design, they emphasized randomization as an important part of the design: "Among other considerations, each design controls for sequential confounding by randomizing the order of treatment […]" (p. 208). Moreover, alternating treatments designs could build on already existing randomization procedures, such as the randomized block procedure proposed by Edgington (1967). The different design options for alternating treatments designs (e.g., the randomized block design) and the accompanying randomization procedures are discussed in detail in Manolov and Onghena (2018). For multiple baseline designs, a staggered introduction of the intervention is needed, and proposals to randomize the order of introduction of the intervention have been around since the 1980s (Marascuilo & Busk, 1988; Wampold & Worsham, 1986). These randomization procedures have their counterparts in group studies, where participants are randomly assigned to treatments or to different blocks of treatments. Other randomization procedures for multiple baseline designs are discussed in Levin et al. (2018); these include the restricted Marascuilo–Busk procedure proposed by Koehler and Levin and the randomization test procedure proposed by Revusky. For phase designs and changing criterion designs, the incorporation of randomization is less evident. For phase designs, Onghena (1992) proposed a method to randomly determine the moment of phase change between two successive phases. However, this method is rather uncommon and has no counterpart in group studies. Specific randomization schemes for changing criterion designs have only very recently been proposed (Ferron et al., 2019; Manolov et al., 2020; Onghena et al., 2019), and it remains to be seen how common they will become in applied SCEDs.
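As a sketch of the two randomization ideas most relevant here (random treatment orders in an alternating treatments design, and a randomly selected phase-change moment for an AB phase design in the spirit of Onghena, 1992), the following uses only the Python standard library; the session counts and the minimum phase length of three are arbitrary choices of ours, not prescribed values.

```python
import random

rng = random.Random(2018)

# Alternating treatments design: randomize the order in which two treatments
# are delivered across eight sessions, blocked in pairs so each treatment
# occurs once per block (as in a randomized block arrangement).
blocks = [rng.sample(["Treatment 1", "Treatment 2"], k=2) for _ in range(4)]
schedule = [condition for block in blocks for condition in block]
print(schedule)

# AB phase design: randomly determine the moment of phase change, given a
# fixed series length and a minimum number of observations per phase.
n_sessions, min_phase = 12, 3
possible_change_points = list(range(min_phase + 1, n_sessions - min_phase + 2))
change_point = rng.choice(possible_change_points)   # first session of phase B
design = ["A"] * (change_point - 1) + ["B"] * (n_sessions - change_point + 1)
print(change_point, design)
```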

Implications for SCED research

The results of the systematic review have several implications for SCED research methodology and analysis. An important finding of the present study is that the frequency of randomization differs greatly between designs. For example, while phase designs were the second most popular design, randomization was used very infrequently for this design type. Multiple baseline designs, as the most frequently used design, showed a higher percentage of randomized studies, but only every fifth study used randomization. Given that randomization in the study design increases internal validity and statistical-conclusion validity irrespective of the design, it seems paramount to further stress the importance of including randomization beyond alternating treatments designs. Another implication concerns the analysis of specific data aspects. While level was by a large margin the most popular data aspect, it is important to stress that conclusions based on only one data aspect may be misleading. This seems particularly relevant for phase designs, which contained the highest proportion of studies not reporting any of the six data aspects and the lowest proportion of studies analyzing several data aspects (apart from changing criterion designs, which accounted for only a very small proportion of the included studies). A final implication concerns the use of analytical methods, in particular the triangulation of different methods. Half of the included studies used visual analysis paired with descriptive statistics. These methods should of course not be discarded, as they generate important information about the data, but they cannot quantify the uncertainty of a possible intervention effect. Therefore, triangulation of visual analysis, descriptive statistics, and inferential statistics should form an important part of future guidelines on SCED analysis.
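One way to read the triangulation recommendation is that the same series is plotted, summarized, and tested. The sketch below (ours, with invented data) pairs a descriptive quantification of the effect with a randomization test over the admissible phase-change points of an AB design; it assumes the change point was chosen at random among those points, as in the procedure discussed above, and a time-series plot like the one in the earlier sketch would supply the visual component.

```python
from statistics import mean

# Invented AB series of 12 sessions; the intervention actually started at
# session 7, one of the admissible change points (minimum of 3 observations
# per phase, an arbitrary choice of ours).
scores = [8, 7, 8, 9, 7, 8, 4, 3, 4, 2, 3, 2]
actual_cp, n, min_phase = 7, len(scores), 3

def mean_diff(cp):
    """Difference in level between phase A (before cp) and phase B (from cp)."""
    a, b = scores[:cp - 1], scores[cp - 1:]
    return mean(a) - mean(b)

# Descriptive: the observed effect at the actual change point.
observed = mean_diff(actual_cp)
print(f"Observed mean difference: {observed:.2f}")

# Inferential: randomization test over all admissible change points, assuming
# the change point was selected at random among them.
admissible = range(min_phase + 1, n - min_phase + 2)
as_extreme = sum(mean_diff(cp) >= observed for cp in admissible)
print(f"Randomization-test p value: {as_extreme / len(admissible):.3f}")
```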

Reflections on updated WWC guidelines

Updated WWC guidelines were published recently, after the present systematic review had been conducted (What Works Clearinghouse, 2020a, 2020c). Two major changes in the updated guidelines are of direct relevance to the present systematic review: (a) the removal of visual analysis for demonstrating intervention effectiveness, and (b) the recommendation of a design-comparable effect size measure for demonstrating intervention effects (D-CES; Pustejovsky et al., 2014; Shadish et al., 2014). This highlights a clear shift away from visual analysis towards statistical analysis of SCED data, especially compared with the 2010 guidelines. These changes have prompted responses from the public, to which What Works Clearinghouse (2020b) published a statement addressing the concerns. Several concerns relate to the removal of visual analysis. In response to a request that visual analysis be reinstated, the panel clearly states that "visual analysis will not be used to characterize study findings" (p. 3). Another point from the public concerned the analysis of studies for which no effect size can be calculated (e.g., due to unavailability of raw data). Even in these instances, the panel does not recommend visual analysis; rather, "the WWC will extract raw data from those graphs for use in effect size computation" (p. 4). In light of the present findings, these statements are particularly noteworthy. Given that the present review found a strong continued reliance on visual analysis, it remains to be seen if and how the updated WWC guidelines will affect the analyses conducted by applied SCED researchers.

Another relevant update in the recent guidelines concerns the use of design categories. While the 2010 guidelines were demonstrated with the example of a phase design, the updated guidelines include quality rating criteria for each major design option. Given that the present results indicate a very low prevalence of the changing criterion design in applied studies, the inclusion of this design in the updated guidelines may increase its prominence. For changing criterion designs, the updated guidelines recommend that "the reversal or withdrawal (AB) design standards should be applied to changing criterion designs" (What Works Clearinghouse, 2020c, p. 80). With phase designs being the second most popular design choice, this could further facilitate the use of the changing criterion design.

While other guidelines on conduct and analysis (e.g., Tate et al., 2013), as well as members of the 2010 What Works Clearinghouse panel (Kratochwill & Levin, 2014), have clearly highlighted the added value of randomization in the design, the updated guidelines do not include randomization procedures for SCEDs. Regarding changes between experimental conditions, the updated guidelines state that "the independent variable is systematically manipulated, with the researcher determining when and how the independent variable conditions change" (What Works Clearinghouse, 2020c, p. 82). While the frequency of use of randomization differs considerably between different designs, the present review has shown that overall randomization is not uncommon. The inclusion of randomization in the updated guidelines may therefore have offered guidance to applied researchers wishing to incorporate randomization into their SCEDs, and may have further contributed to the popularity of randomization.

Limitations and future research

One limitation of the current study concerns the databases used. SCEDs published in journals that are not indexed in these databases may not have been included in our sample. A similar limitation concerns the search terms used in the systematic search. In this systematic review, we focused on the common names "single-case" and "single-subject." However, as Shadish and Sullivan (2011) note, SCEDs go by many names. They list several less common alternative terms: intrasubject replication design (Gentile et al., 1972), n-of-1 design (Center et al., 1985-86), intrasubject experimental design (White et al., 1989), one-subject experiment (Edgington, 1980), and individual organism research (Michael, 1974). Even though these terms date back to the 1970s and 1980s, a few authors may still use them to describe their SCED studies, and studies using these terms may not have come up during the systematic search. It should furthermore be noted that, to reduce bias, we followed the authors' original descriptions when coding the design and analysis; we therefore made no judgments regarding the correctness or accuracy of the authors' naming of their design and analysis techniques.

The systematic review offers several avenues for future research. A first avenue may be to explore in more depth the reasons for the unequal distribution of data aspects. As the systematic review has shown, level is assessed far more often than the other five data aspects. While level is an important data aspect, failing to assess it alongside other data aspects can lead to erroneous conclusions. Gaining an understanding of the reasons for the prevalence of level, for example through author interviews or questionnaires, may help to improve the quality of data analysis in applied SCEDs.

In a similar vein, a second avenue of future research may explore why randomization is much more prevalent in some designs than in others. Apart from the aforementioned differences in randomization procedures between designs, it may be of interest to gain a better understanding of the reasons applied researchers have for randomizing their SCEDs. As the incorporation of randomization enhances the internal validity of the study design, promoting the inclusion of randomization in designs other than alternation designs will help advance the credibility of SCEDs in the scientific community. Searching the method sections of the articles that used randomization may be a first step towards understanding why applied researchers use randomization; such a text search may reveal how the authors discuss randomization and which reasons they give for randomizing. A related question is how the randomization was actually carried out: for example, was it carried out a priori or in a restricted way that took the evolving data pattern into account? A deeper understanding of the reasons for randomizing and of the mechanisms of randomization may be gained through author interviews or questionnaires.

A third avenue of future research may explore in detail the specific inferential analytical methods used to analyze SCED data. Within the scope of the present review, we only distinguished between visual analysis, descriptive statistics, and inferential statistics. However, deeper insight into the inferential analysis methods and their application to SCED data may help to understand the viewpoint of applied researchers. This could be achieved through a literature review of articles that use inferential analysis. Research questions for such a review may include: Which inferential methods do applied SCED researchers use, and how frequently? Are these methods adapted to SCED methodology? How do applied researchers justify their choice of an inferential method? Similar questions may also be asked about effect size measures understood as descriptive statistics: for example, why do applied researchers choose a particular effect size measure over a competing one, and are these effect size measures adapted to SCED research?

Finally, future research may go into greater detail about the descriptive statistics used in SCEDs. In the present review, we distinguished between two major categories: descriptive and inferential statistics. Effect sizes that were not accompanied by a standard error, confidence limits, or the result of a significance test were coded in the descriptive statistics category. Effect sizes do, however, go beyond merely summarizing the data: they quantify the treatment effect between different experimental conditions, in contrast to within-phase quantifications such as the mean and standard deviation. Therefore, future research may examine in greater detail the use of effect sizes separately from other descriptive statistics such as the mean and standard deviation. Such research could focus in depth on the exact methods used to quantify each data aspect, in the form of either a simple quantification (e.g., mean or range) or an effect size measure (e.g., standardized mean difference or variance ratios).
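To illustrate the distinction drawn here, the sketch below (ours; the data are invented) contrasts within-phase summaries with two between-phase effect sizes: a simple standardized mean difference based on the pooled within-phase standard deviation, and nonoverlap of all pairs. These two measures are named only as common examples, not as the measures used in the reviewed studies.

```python
from itertools import product
from statistics import mean, stdev

baseline = [7, 6, 7, 8, 6]
treatment = [5, 4, 3, 3, 2]

# Within-phase quantifications: summarize each phase on its own.
for name, phase in (("A", baseline), ("B", treatment)):
    print(f"Phase {name}: mean = {mean(phase):.2f}, SD = {stdev(phase):.2f}")

# Effect sizes: quantify the treatment effect between conditions.
# (1) Standardized mean difference with a pooled within-phase SD.
n_a, n_b = len(baseline), len(treatment)
pooled_sd = (((n_a - 1) * stdev(baseline) ** 2 + (n_b - 1) * stdev(treatment) ** 2)
             / (n_a + n_b - 2)) ** 0.5
smd = (mean(baseline) - mean(treatment)) / pooled_sd
print(f"Standardized mean difference: {smd:.2f}")

# (2) Nonoverlap of all pairs: proportion of (A, B) pairs showing improvement
# (here, fewer disruptions in B), with ties counted as half.
pairs = list(product(baseline, treatment))
nap = sum((a > b) + 0.5 * (a == b) for a, b in pairs) / len(pairs)
print(f"Nonoverlap of all pairs: {nap:.2f}")
```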

Footnotes

Footnote 1. The What Works Clearinghouse panel (2020a, 2020c) has recently released an updated version of the guidelines. We will discuss the updated guidelines in light of the present findings in the Discussion section.

Footnote 2. As holds true for most single-case designs, the same design is often described with different terms. For example, Ledford and Gast (2018) call these designs combination designs, and Moeyaert et al. (2020) call them combined designs. Given that this is a purely terminological question, it is hard to argue in favor of one term over the other. We do, however, prefer the term hybrid, given that it emphasizes that neither of the designs remains in its pure form. For example, a multiple baseline design with alternating treatments is not just a combination of a multiple baseline design and an alternating treatments design; it is rather a hybrid of the two. This term is also found in recent literature (e.g., Pustejovsky & Ferron, 2017; Swan et al., 2020).

Footnote 3. For the present systematic review, we strictly followed the data aspects as outlined in the 2010 What Works Clearinghouse guidelines. While the assessment of consistency of effects is an important data aspect, this data aspect is not described in the guidelines. Therefore, we did not code it in the present review.

Baek, E. K., Petit-Bois, M., Van den Noortgate, W., Beretvas, S. N., & Ferron, J. M. (2016). Using visual analysis to evaluate and refine multilevel models of single-case studies. The Journal of Special Education, 50 , 18-26. https://doi.org/10.1177/0022466914565367 .

Barlow, D. H., & Hayes, S. C. (1979). Alternating Treatments Design: One Strategy for Comparing the Effects of Two Treatments in a Single Subject. Journal of Applied Behavior Analysis, 12 , 199-210. https://doi.org/10.1901/jaba.1979.12-199 .

Barlow, D. H., Nock, M. K., & Hersen, M. (2009). Single case experimental designs: Strategies for studying behavior change (3rd ed.). Pearson.

Beretvas, S. N., & Chung, H. (2008). A review of meta-analyses of single-subject experimental designs: Methodological issues and practice. Evidence-Based Communication Assessment and Intervention, 2 , 129-141. https://doi.org/10.1080/17489530802446302 .

Center, B. A., Skiba, R. J., & Casey, A. (1985-86). A Methodology for the Quantitative Synthesis of Intra-Subject Design research. Journal of Special Education, 19 , 387–400. https://doi.org/10.1177/002246698501900404 .

Edgington, E. S. (1967). Statistical inference from N=1 experiments. The Journal of Psychology, 65 , 195-199. https://doi.org/10.1080/00223980.1967.10544864 .

Edgington, E. S. (1975). Randomization tests for one-subject operant experiments. The Journal of Psychology, 90 , 57-68. https://doi.org/10.1080/00223980.1975.9923926 .

Edgington, E. S. (1980). Random assignment and statistical tests for one-subject experiments. Journal of Educational Statistics, 5 , 235-251.

Ferron, J., Rohrer, L. L., & Levin, J. R. (2019). Randomization procedures for changing criterion designs. Behavior Modification. https://doi.org/10.1177/0145445519847627.

Gentile, J. R., Roden, A. H., & Klein, R. D. (1972). An analysis-of-variance model for the intrasubject replication design. Journal of Applied Behavior Analysis, 5 , 193-198. https://doi.org/10.1901/jaba.1972.5-193 .

Gusenbauer, M., & Haddaway, N. R. (2019). Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed and 26 other resources. Research Synthesis Methods. https://doi.org/10.1002/jrsm.1378.

Hammond, D., & Gast, D. L. (2010). Descriptive analysis of single subject research designs: 1983—2007. Education and Training in Autism and Developmental Disabilities, 45 , 187-202.

Harrington, M. A. (2013). Comparing visual and statistical analysis in single-subject studies. Open Access Dissertations , Retrieved from http://digitalcommons.uri.edu/oa_diss .

Hedges, L. V., Pustejovsky, J. E., & Shadish, W. R. (2012). A standardized mean difference effect size for single case designs. Research Synthesis Methods, 3 , 224-239. https://doi.org/10.1002/jrsm.1052 .

Hedges, L. V., Pustejovsky, J. E., & Shadish, W. R. (2013). A standardized mean difference effect size for multiple baseline designs across individuals. Research Synthesis Methods, 4 , 324-341. https://doi.org/10.1002/jrsm.1086 .

Heyvaert, M., & Onghena, P. (2014). Analysis of single-case data: Randomization tests for measures of effect size. Neuropsychological Rehabilitation, 24 , 507-527. https://doi.org/10.1080/09602011.2013.818564 .

Hitchcock, J. H., Horner, R. H., Kratochwill, T. R., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2014). The What Works Clearinghouse single-case design pilot standards: Who will guard the guards? Remedial and Special Education, 35 , 145-152. https://doi.org/10.1177/0741932513518979 .

Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71 , 165-179. https://doi.org/10.1177/001440290507100203 .

Kazdin, A. E. (1982). Single-case research designs: Methods for clinical and applied settings. Oxford University Press.

Kazdin, A. E. (2011). Single-case research designs: Methods for clinical and applied settings (2nd ed.). Oxford University Press.

Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. Retrieved from What Works Clearinghouse: https://files.eric.ed.gov/fulltext/ED510743.pdf

Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2013). Single-case intervention research design standards. Remedial and Special Education, 34 , 26-38. https://doi.org/10.1177/0741932512452794 .

Kratochwill, T. R., & Levin, J. R. (2014). Meta- and statistical analysis of single-case intervention research data: Quantitative gifts and a wish list. Journal of School Psychology, 52 , 231-235. https://doi.org/10.1016/j.jsp.2014.01.003 .

Kromrey, J. D., & Foster-Johnson, L. (1996). Determining the efficacy of intervention: The use of effect sizes for data analysis in single-subject research. The Journal of Experimental Education, 65 , 73-93. https://doi.org/10.1080/00220973.1996.9943464 .

Lane, J. D., & Gast, D. L. (2014). Visual analysis in single case experimental design studies: Brief review and guidelines. Neuropsychological Rehabilitation, 24 , 445-463. https://doi.org/10.1080/09602011.2013.815636 .

Ledford, J. R., & Gast, D. L. (Eds.) (2018). Single case research methodology: Applications in special education and behavioral sciences (3rd ed.). Routledge.

Levin, J. R. (1994). Crafting educational intervention research that's both credible and creditable. Educational Psychology Review, 6 , 231-243. https://doi.org/10.1007/BF02213185 .

Levin, J. R., Ferron, J. M., & Gafurov, B. S. (2018). Comparison of randomization-test procedures for single-case multiple-baseline designs. Developmental Neurorehabilitation, 21 , 290-311. https://doi.org/10.1080/17518423.2016.1197708 .

Levin, J. R., Ferron, J. M., & Gafurov, B. S. (2020). Investigation of single-case multiple-baseline randomization tests of trend and variability. Educational Psychology Review . https://doi.org/10.1007/s10648-020-09549-7 .

Ma, H.-H. (2006). Quantitative synthesis of single-subject researches: Percentage of data points exceeding the median. Behavior Modification, 30 , 598-617. https://doi.org/10.1177/0145445504272974 .

Maggin, D. M., Briesch, A. M., & Chafouleas, S. M. (2013). An application of the What Works Clearinghouse standards for evaluating single-subject research: Synthesis of the self-management literature base. Remedial and Special Education, 34 , 44-58. https://doi.org/10.1177/0741932511435176 .

Manolov, R. (2018). Linear trend in single-case visual and quantitative analyses. Behavior Modification, 42 , 684-706. https://doi.org/10.1177/0145445517726301 .

Manolov, R., & Moeyaert, M. (2017). Recommendations for choosing single-case data analytical techniques. Behavior Therapy, 48 , 97-114. https://doi.org/10.1016/j.beth.2016.04.008 .

Manolov, R., & Onghena, P. (2018). Analyzing data from single-case alternating treatments designs. Psychological Methods, 23 , 480-504. https://doi.org/10.1037/met0000133 .

Manolov, R., & Solanas, A. (2018). Analytical options for single-case experimental designs: Review and application to brain impairment. Brain Impairment, 19 , 18-32. https://doi.org/10.1017/BrImp.2017.17 .

Manolov, R., Solanas, A., & Sierra, V. (2020). Changing Criterion Designs: Integrating Methodological and Data Analysis Recommendations. The Journal of Experimental Education, 88 , 335-350. https://doi.org/10.1080/00220973.2018.1553838 .

Marascuilo, L., & Busk, P. (1988). Combining statistics for multiple-baseline AB and replicated ABAB designs across subjects. Behavioral Assessment, 10 , 1-28.

Michael, J. (1974). Statistical inference for individual organism research: Mixed blessing or curse? Journal of Applied Behavior Analysis, 7 , 647-653. https://doi.org/10.1901/jaba.1974.7-647 .

Michiels, B., Heyvaert, M., Meulders, A., & Onghena, P. (2017). Confidence intervals for single-case effect size measures based on randomization test inversion. Behavior Research Methods, 49 , 363-381. https://doi.org/10.3758/s13428-016-0714-4 .

Moeyaert, M., Akhmedjanova, D., Ferron, J. M., Beretvas, S. N., & Van den Noortgate, W. (2020). Effect size estimation for combined single-case experimental designs. Evidence-Based Communication Assessment and Intervention, 14 , 28-51. https://doi.org/10.1080/17489539.2020.1747146 .

Moeyaert, M., Ferron, J. M., Beretvas, S. N., & Van den Noortgate, W. (2014a). From a single-level analysis to a multilevel analysis of single-case experimental designs. Journal of School Psychology, 52 , 191-211. https://doi.org/10.1016/j.jsp.2013.11.003 .

Moeyaert, M., Ugille, M., Ferron, J. M., Beretvas, S. N., & Van den Noortgate, W. (2014b). Three-level analysis of single-case experimental data: Empirical validation. The Journal of Experimental Education, 82 , 1-21. https://doi.org/10.1080/00220973.2012.745470 .

O’Brien, S., & Repp, A. C. (1990). Reinforcement-based reductive procedures: A review of 20 years of their use with persons with severe or profound retardation. Journal of the Association for Persons with Severe Handicaps, 15 , 148–159. https://doi.org/10.1177/154079699001500307 .

Onghena, P. (1992). Randomization tests for extensions and variations of ABAB single-case experimental designs: A rejoinder. Behavioral Assessment, 14 , 153-172.

Onghena, P., & Edgington, E. S. (1994). Randomization tests for restricted alternating treatment designs. Behaviour Research and Therapy, 32 , 783-786. https://doi.org/10.1016/0005-7967(94)90036-1 .

Onghena, P., & Edgington, E. S. (2005). Customization of pain treatments: Single-case design and analysis. The Clinical Journal of Pain, 21 , 56-68. https://doi.org/10.1097/00002508-200501000-00007 .

Onghena, P., Tanious, R., De, T. K., & Michiels, B. (2019). Randomization tests for changing criterion designs. Behaviour Research and Therapy, 117 , 18-27. https://doi.org/10.1016/j.brat.2019.01.005 .

Ottenbacher, K. J. (1990). When is a picture worth a thousand p values? A comparison of visual and quantitative methods to analyze single subject data. The Journal of Special Education, 23 , 436-449. https://doi.org/10.1177/002246699002300407 .

Parker, R. I., Hagan-Burke, S., & Vannest, K. (2007). Percentage of all non-overlapping data (PAND): An alternative to PND. The Journal of Special Education, 40 , 194-204. https://doi.org/10.1177/00224669070400040101 .

Parker, R. I., Vannest, K. J., & Davis, J. L. (2011). Effect Size in Single-Case Research: A Review of Nine Nonoverlap Techniques. Behavior Modification, 35 , 303-322. https://doi.org/10.1177/0145445511399147 .

Pustejovsky, J. E., & Ferron, J. M. (2017). Research synthesis and meta-analysis of single-case designs. In J. M. Kaufmann, D. P. Hallahan, & P. C. Pullen (Eds.), Handbook of Special Education (pp. 168-185). New York: Routledge.

Pustejovsky, J. E., Hedges, L. V., & Shadish, W. R. (2014). Design-comparable effect sizes in multiple baseline designs: A general modeling framework. Journal of Educational and Behavioral Statistics, 39 , 368-393. https://doi.org/10.3102/1076998614547577 .

Scruggs, T. E., Mastropieri, M. A., & Casto, G. (1987). The quantitative synthesis of single-subject research: Methodology and validation. Remedial and Special Education, 8 , 24-33. https://doi.org/10.1177/074193258700800206 .

Shadish, W. R., Hedges, L. V., & Pustejovsky, J. E. (2014). Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: A primer and applications. Journal of School Psychology, 52 , 123–147. https://doi.org/10.1016/j.jsp.2013.11.005 .

Shadish, W. R., Rindskopf, D. M., & Hedges, L. V. (2008). The state of the science in the meta-analysis of single-case experimental designs. Evidence-Based Communication Assessment and Intervention, 2 , 188-196. https://doi.org/10.1080/17489530802581603 .

Shadish, W. R., & Sullivan, K. J. (2011). Characteristics of single-case designs used to assess intervention effects in 2008. Behavior Research Methods, 43 , 971-980. https://doi.org/10.3758/s13428-011-0111-y .

Smith, J. D. (2012). Single-case experimental designs: A systematic review of published research and current standards. Psychological Methods, 17 , 510-550. https://doi.org/10.1037/a0029312 .

Solanas, A., Manolov, R., & Onghena, P. (2010). Estimating slope and level change in N=1 designs. Behavior Modification, 34 , 195-218. https://doi.org/10.1177/0145445510363306 .

Solomon, B. G. (2014). Violations of school-based single-case data: Implications for the selection and interpretation of effect sizes. Behavior Modification, 38 , 477-496. https://doi.org/10.1177/0145445513510931 .

Staples, M., & Niazi, M. (2007). Experiences using systematic review guidelines. The Journal of Systems and Software, 80 , 1425-1437. https://doi.org/10.1016/j.jss.2006.09.046 .

Swan, D. M., Pustejovsky, J. E., & Beretvas, S. N. (2020). The impact of response-guided designs on count outcomes in single-case experimental design baselines. Evidence-Based Communication Assessment and Intervention, 14 , 82-107. https://doi.org/10.1080/17489539.2020.1739048 .

Tanious, R., De, T. K., Michiels, B., Van den Noortgate, W., & Onghena, P. (2019). Consistency in single-case ABAB phase designs: A systematic review. Behavior Modification. https://doi.org/10.1177/0145445519853793.

Tanious, R., De, T. K., Michiels, B., Van den Noortgate, W., & Onghena, P. (2020). Assessing consistency in single-case A-B-A-B phase designs. Behavior Modification, 44 , 518-551. https://doi.org/10.1177/0145445519837726 .

Tate, R. L., Perdices, M., Rosenkoetter, U., McDonald, S., Togher, L., Shadish, W. R., … Vohra, S. (2016b). The Single-Case Reporting guideline In BEhavioural Interventions (SCRIBE) 2016: Explanation and Elaboration. Archives of Scientific Psychology, 4 , 1-9. https://doi.org/10.1037/arc0000026 .

Tate, R. L., Perdices, M., Rosenkoetter, U., Shadish, W. R., Vohra, S., Barlow, D. H., … Wilson, B. (2016a). The Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016 statement. Aphasiology, 30 , 862-876. https://doi.org/10.1080/02687038.2016.1178022 .

Tate, R. L., Perdices, M., Rosenkoetter, U., Wakim, D., Godbee, K., Togher, L., & McDonald, S. (2013). Revision of a method quality rating scale for single-case experimental designs and n-of-1 trials: The 15-item Risk of Bias in N-of-1 Trials (RoBiNT) Scale. Neuropsychological Rehabilitation, 23 , 619-638. https://doi.org/10.1080/09602011.2013.824383 .

Van den Noortgate, W., & Onghena, P. (2003). Hierarchical linear models for the quantitative integration of effect sizes in single-case research. Behavior Research Methods, Instruments, & Computers, 35 , 1-10. https://doi.org/10.3758/bf03195492 .

Van den Noortgate, W., & Onghena, P. (2008). A multilevel meta-analysis of single-subject experimental design studies. Evidence-Based Communication Assessment and Intervention, 2 , 142-151. https://doi.org/10.1080/17489530802505362 .

Vohra, S., Shamseer, L., Sampson, M., Bukutu, C., Schmid, C. H., Tate, R., … Group, TC (2016). CONSORT extension for reporting N-of-1 trials (CENT) 2015 statement. Journal of Clinical Epidemiology, 76 , 9–17. https://doi.org/10.1016/j.jclinepi.2015.05.004 .

Wampold, B., & Worsham, N. (1986). Randomization tests for multiple-baseline designs. Behavioral Assessment, 8 , 135-143.

What Works Clearinghouse. (2020a). Procedures Handbook (Version 4.1). Retrieved from Institute of Education Sciences: https://ies.ed.gov/ncee/wwc/Docs/referenceresources/WWC-Procedures-Handbook-v4-1-508.pdf

What Works Clearinghouse. (2020b). Responses to comments from the public on updated version 4.1 of the WWC Procedures Handbook and WWC Standards Handbook. Retrieved from Institute of Education Sciences: https://ies.ed.gov/ncee/wwc/Docs/referenceresources/SumResponsePublicComments-v4-1-508.pdf

What Works Clearinghouse. (2020c). Standards Handbook, version 4.1. Retrieved from Institute of Education Sciences: https://ies.ed.gov/ncee/wwc/Docs/referenceresources/WWC-Standards-Handbook-v4-1-508.pdf

White, D. M., Rusch, F. R., Kazdin, A. E., & Hartmann, D. P. (1989). Applications of meta-analysis in individual-subject research. Behavioral Assessment, 11 , 281-296.

Wolery, M. (2013). A commentary: Single-case design technical document of the What Works Clearinghouse. Remedial and Special Education , 39-43. https://doi.org/10.1177/0741932512468038 .

Woo, H., Lu, J., Kuo, P., & Choi, N. (2016). A content analysis of articles focusing on single-case research design: ACA journals between 2003 and 2014. Asia Pacific Journal of Counselling and Psychotherapy, 7 , 118-132. https://doi.org/10.1080/21507686.2016.1199439 .

Author information

Authors and affiliations

Faculty of Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven, Tiensestraat 102, Box 3762, B-3000, Leuven, Belgium

René Tanious & Patrick Onghena

Corresponding author

Correspondence to René Tanious.


About this article

Tanious, R., Onghena, P. A systematic review of applied single-case research published between 2016 and 2018: Study designs, randomization, data aspects, and data analysis. Behav Res 53, 1371–1384 (2021). https://doi.org/10.3758/s13428-020-01502-4

Accepted: 09 October 2020 · Published: 26 October 2020 · Issue date: August 2021

Keywords

  • Single-case experimental designs
  • Visual analysis
  • Statistical analysis
  • Data aspects
  • Systematic review

Climate-Smart Intervention Takes Top 2023 Case Studies in the Environment Prize

Case Studies in the Environment is pleased to announce the winners of the 2023 Case Studies in the Environment Prize Competition.

Eligible submissions are judged on their ability to translate discrete case studies into broad, generalizable findings; on advancing a strong perspective and an engaging narrative; on being accessible to their intended audiences; on addressing topics that are important or notable in their novelty, impact, or urgency; and on contributing to the teaching of environmental concepts to students and/or practitioners.

The winning case study from the 2023 competition, "Building Resilience in Jamaica's Farming Communities: Insights From a Climate-Smart Intervention," from The University of the West Indies' Donovan Campbell and Shaneica Lester, demonstrates that while climate change poses immense threats to the environment and to human livelihoods, adaptation also provides opportunities to strengthen a community.

“This positivity and sense of agency is critical to the success of climate initiatives,” noted CSE Editor-in-Chief Dr. Jennifer Bernstein. “The editorial team felt that the manuscript exemplifies the journal at its best–identifying and evaluating an important environmental question using robust interdisciplinary methods.”

The honorable mention articles from the 2023 competition are "Teaching the Complex Dynamics of Clean Energy Subsidies With the Help of a Model-as-Game," from Rochester Institute of Technology's Eric Hittinger, Qing Miao, and Eric Williams; and "Barriers and Facilitators for Successful Community Forestry: Lessons Learned and Practical Applications From Case Studies in India and Guatemala," from Vishal Jamkar (University of Minnesota), Megan Butler (Macalester College), and Dean Current (University of Minnesota).

“‘Teaching the Complex Dynamics of Clean Energy Subsidies’ recognizes the value of subsidies, while at the same time acknowledging contextual constraints. The game itself allows students to work through subsidy design via a number of cases, and provides high quality material for use immediately in the classroom. This is a wildly useful tool, and exemplifies what we want to see with respect to accessible pedagogy using environmental case studies as a focus.”

“Barriers and Facilitators for Successful Community Forestry” is the author team’s second case study contribution to the journal, extending the well-developed framework of their previous article, “ Understanding Facilitators and Barriers to Success: Framework for Developing Community Forestry Case Studies ” and applying it to two unique locations.

Both the winning case study and honorable mentions have been made freely available to the public at online.ucpress.edu/cse .

The Case Studies in the Environment team extends their gratitude to everyone who submitted articles for the 2023 competition. For previous Case Studies in the Environment Prize Competition winners, please see our prize competition landing page .

Case Studies in the Environment is a journal of peer-reviewed case study articles and case study pedagogy articles. The journal informs faculty, students, researchers, educators, professionals, and policymakers on case studies and best practices in the environmental sciences and studies. online.ucpress.edu/cse



COMMENTS

  1. Single-Case Design, Analysis, and Quality Assessment for Intervention Research

    Single-case studies can provide a viable alternative to large group studies such as randomized clinical trials. Single case studies involve repeated measures, and manipulation of and independent variable. They can be designed to have strong internal validity for assessing causal relationships between interventions and outcomes, and external ...

  2. Single-Case Intervention Research

    A well-written and meaningfully structured compendium that includes the foundational and advanced guidelines for conducting accurate single-case intervention designs. Whether you are an undergraduate or a graduate student, or an applied researcher anywhere along the novice-to-expert column, this book promises to be an invaluable addition to ...

  3. Single-Case Design, Analysis, and Quality Assessment for Intervention

    When rigorously designed, single-case studies can be particularly useful experimental designs in a variety of situations, such as when research resources are limited, studied conditions have low incidences, or when examining effects of novel or expensive interventions. ... Single-Case Design, Analysis, and Quality Assessment for Intervention ...

  4. Single Subject Research

    Multiple-baseline designs do not require the intervention to be withdrawn. Instead, each subject's own data are compared between intervention and nonintervention behaviors, resulting in each subject acting as his or her own control (Kazdin, 1982). An added benefit of this design, and all single-case designs, is the immediacy of the data.

  5. Single-case experimental designs to assess intervention effectiveness

    Single-case experimental designs (SCEDs) are experimental designs aimed at testing the effect of an intervention using a small number of patients (typically one to three), repeated measurements, sequential (± randomized) introduction of an intervention, and method-specific data analysis, including visual analysis and specific statistics. The aim of this paper is to familiarise ... (A minimal sketch of this kind of phase-by-phase visual analysis appears after this list.)

  6. Single-Case Intervention Research: Methodological and ...

    Analyzing Single-Case Designs: d, G, Hierarchical Models, Bayesian Estimators, Generalized Additive Models, and the Hopes and Fears of Researchers About Analyses. The Role of Single-Case Designs in Supporting Rigorous Intervention Development and Evaluation at the Institute of Education Sciences. (A minimal sketch of one simple effect-size metric for single-case data appears after this list.)

  7. Single case studies are a powerful tool for developing, testing and

    The majority of methods in psychology rely on averaging group data to draw conclusions. In this Perspective, Nickels et al. argue that single case methodology is a valuable tool for developing and ...

  8. Single-Case Intervention Research Design Standards

    Enhancing the scientific credibility of single-case intervention research: Randomization to the rescue. Psychological Methods, 15, 122-144. ... Outcomes of a novel single case study incorporating Rapid Syllable Tra...

  9. Single-case intervention research: Methodological and statistical advances

    Single-case intervention research has a rich tradition of providing evidence about the efficacy of interventions applied both to solving a diverse range of human problems and to enriching the knowledge base established in many fields of science (Kratochwill, 1978; Kratochwill & Levin, 1992, 2010). In the social sciences the randomized controlled trial (RCT) experiment has, in recent years ...

  10. Single-case intervention research design standards: Additional proposed

    Single-case intervention research design standards have evolved considerably over the past decade. These standards serve the dual role of assisting in single-case design (SCD) intervention research methodology and as guidelines for literature syntheses within a particular research domain. ... Several examples of SCD intervention studies that ...

  11. The Family of Single-Case Experimental Designs

    Abstract. Single-case experimental designs (SCEDs) represent a family of research designs that use experimental methods to study the effects of treatments on outcomes. The fundamental unit of analysis is the single case—which can be an individual, clinic, or community—ideally with replications of effects within and/or between cases.

  12. PDF Single-Case Design Research Methods

    Studies that use a single-case design (SCD) measure outcomes for cases (such as a child or family) repeatedly during multiple phases of a study to determine the success of an intervention. The number of phases in the study will depend on the research questions, intervention, and outcome(s) of interest (see Types of SCDs on page 4 for examples).

  13. Single-Case Experimental Designs

    Single-case experimental designs are a family of experimental designs that are characterized by researcher manipulation of an independent variable and repeated measurement of a dependent variable before (i.e., baseline) and after (i.e., intervention phase) introducing the independent variable. In single-case experimental designs a case is the ...

  14. Single‐case experimental designs: Characteristics, changes, and

    Tactics of Scientific Research (Sidman, 1960) provides a visionary treatise on single-case designs, their scientific underpinnings, and their critical role in understanding behavior. Since the foundational base was provided, single-case designs have proliferated especially in areas of application where they have been used to evaluate interventions with an extraordinary range of clients ...

  15. Single-case intervention research design standards: Additional proposed

    These standards serve the dual role of assisting in single-case design (SCD) intervention research methodology and as guidelines for literature syntheses within a particular research domain. In a recent article (Kratochwill et al., 2021), we argued for a need to clarify key features of these standards.

  16. Case Study Methodology of Qualitative Research: Key Attributes and

    Within case study research, one may study a single case or multiple cases. Single case studies are the most common form of case study research. Yin (2014, p. 59) says that single cases are 'eminently justifiable' under certain conditions: (a) when the case under study is unique or atypical, and hence its study is revelatory, (b) when the case ...

  17. Single Case Research Design

    Abstract. This chapter addresses the peculiarities, characteristics, and major fallacies of single case research designs. A single case study research design is a collective term for an in-depth analysis of a small non-random sample. The focus of this design is on in-depth analysis.

  18. Meta-analysis of single-case treatment effects on self-injurious

    In examination of the 679 articles, we used the following criteria to select studies or datasets for inclusion: (a) the experimental study used a single-case research design, beginning with a baseline phase that was followed by a treatment phase; (b) the dependent variable was a quantitative measure of SIB (e.g., frequency of head-hitting); (c ...

  19. A descriptive analysis of assessment measures on the ...

    Method: A single-subject case design was employed with one male adult who stutters. Data was collected by administering the Stuttering Severity Instrument-Fourth Edition (SSI-4) and Overall Assessment of the Speaker's Experience of Stuttering-Adults (OASES-A) at three testing periods (pre-intervention, immediately post-intervention and 7 months ...

  20. PDF Design Options for Home Visiting Evaluation SINGLE CASE DESIGN BRIEF

    ... intervention effect within a single study are required to demonstrate sound experimental control. Causality: SCD research allows the researcher to draw a causal argument for the single case or group of cases. This can be distinguished from experimental and control group designs, which make causal arguments only at the group level.

  21. Observational and interventional study design types; an overview

    Case-control studies were traditionally referred to as retrospective studies, ... Pre-post studies may be single arm, one group measured before the intervention and again after the intervention, or multiple arms, where there is a comparison between groups. ... Outcomes measured for pre-post intervention studies may be binary health outcomes ...

  22. Behavioral Sleep Interventions for Children with Rare Genetic

    This study evaluated the overall effectiveness and acceptability of function-based behavioral sleep interventions for children with RGNC. Data was collated from a series of experimental single-case research studies with 26 children (18 months to 19 years of age) with a range of RGNC, who received a behavioral sleep intervention.

  23. Single-Case Design, Analysis, and Quality Assessment for Intervention

    Summary of Key Points: Single-case studies can provide a viable alternative to large group studies such as randomized clinical trials. Single-case studies involve repeated measures and manipulation of an independent variable. They can be designed to have strong internal validity for assessing causal relationships between interventions and outcomes, as well as external validity for ...

  24. Single-Case Designs

    Single-case Experimental Designs in Clinical Settings. W.C. Follette, in International Encyclopedia of the Social & Behavioral Sciences, 2001. Section 2, Characteristics of Single-case Design: Single-case designs study intensively the process of change by taking many measures on the same individual subject over a period of time. The degree of control in single-case design experiments can often lead to ...

  25. The power of Para sport: the effect of performance-focused swimming

    Methods: A Multiple-Baseline, Single-Case Experimental Design (MB-SCED) study comprising five phases and a 30-month follow-up was conducted. Participants were two males and one female, all aged 15 years, untrained and with CPHSN. The intervention was a 46-month swimming training programme, focused exclusively on improving performance.

  26. Addiction Counseling Studies AS

    The Addiction Counseling Studies AS Degree. Prepares individuals for counseling in alcohol, drug, and other addictions. ... Intervention, Treatment, and Recovery ADST C104X Co-occurring Disorders. ADST C105X Counseling Skills in Addiction Treatment ... ADST C108X Case Management of Addiction Counseling ADST C109X Group Treatment. ADST C110X ...

  27. A systematic review of applied single-case research ...

    Single-case experimental designs (SCEDs) have become a popular research methodology in educational science, psychology, and beyond. The growing popularity has been accompanied by the development of specific guidelines for the conduct and analysis of SCEDs. In this paper, we examine recent practices in the conduct and analysis of SCEDs by systematically reviewing applied SCEDs published over a ...

  28. Impact of Multimodal Intervention on Empathy Levels in Medical ...

    Methods: This study utilized a questionnaire-based, pre- and post-test interventional design. Seventy-nine second-year medical students were included after obtaining their informed consent. The students received the intervention through an interactive lecture on communication skills, role-play on selected case studies, and guided reflection.

  29. Climate-Smart Intervention Takes Top 2023 Case Studies in the

    The winning case study from the 2023 competition, "Building Resilience in Jamaica's Farming Communities: Insights From a Climate-Smart Intervention," from The University of the West Indies' Donovan Campbell and Shaneica Lester, demonstrates that while climate change poses immense threats to the environment ...

  30. Significance of Single-Interval Discrete Attributes: Case Study on Two

    Supervised discretisation is widely considered as far more advantageous than unsupervised transformation of attributes, because it helps to preserve the informative content of a variable, which is useful in classification. After discretisation, based on employed criteria, some attributes can be found irrelevant, and all their values can be represented in a discrete domain by a single interval ...
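
To make the kind of phase-by-phase visual analysis described in several of the entries above concrete (see the note in entry 5), here is a minimal Python sketch, using matplotlib, that plots hypothetical baseline (A) and intervention (B) frequency counts as a single line graph with a dashed phase-change line. The data values, labels, and output file name are all invented for illustration; this is one common way such graphs are prepared, not a procedure prescribed by any of the sources listed above.

```python
import matplotlib.pyplot as plt

# Hypothetical daily counts of a disruptive behavior (invented for illustration).
baseline = [7, 6, 7, 8, 7]        # Phase A: no intervention
intervention = [5, 4, 3, 3, 2]    # Phase B: intervention in place

days = list(range(1, len(baseline) + len(intervention) + 1))

fig, ax = plt.subplots(figsize=(7, 4))

# Plot each phase as its own line so the phase change is easy to see.
ax.plot(days[:len(baseline)], baseline, marker="o", color="black", label="A (baseline)")
ax.plot(days[len(baseline):], intervention, marker="o", color="gray", label="B (intervention)")

# Dashed vertical line marks the start of the intervention phase.
ax.axvline(x=len(baseline) + 0.5, linestyle="--", color="black")

ax.set_xlabel("Day")
ax.set_ylabel("Number of disruptions")
ax.set_title("A-B design: frequency of disruptive behavior")
ax.set_ylim(0, max(baseline + intervention) + 1)
ax.legend()

fig.tight_layout()
fig.savefig("ab_design_plot.png", dpi=150)  # or plt.show() in an interactive session
```

The same idea extends to designs with more phases (for example, the multiple-baseline study in entry 25) by adding further segments and phase-change lines.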
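
Several of the entries above point to quantitative effect measures for single-case data (for example, the d and G statistics and the hierarchical and Bayesian models named in entry 6). As a much simpler illustration, referenced from entry 6, the sketch below computes the percentage of non-overlapping data (PND): the percentage of intervention-phase points that fall beyond the most extreme baseline point in the therapeutic direction. The data values and the function name are invented for illustration, and PND is only one of many possible metrics; the papers listed above describe more sophisticated alternatives.

```python
def percentage_of_nonoverlapping_data(baseline, intervention, goal="decrease"):
    """Percentage of intervention-phase points that do not overlap the baseline range.

    For a behavior targeted for decrease, a point is non-overlapping if it falls
    below the lowest baseline value; for an increase goal, above the highest value.
    """
    if goal == "decrease":
        extreme = min(baseline)
        nonoverlapping = [x for x in intervention if x < extreme]
    else:  # goal == "increase"
        extreme = max(baseline)
        nonoverlapping = [x for x in intervention if x > extreme]
    return 100.0 * len(nonoverlapping) / len(intervention)


# Hypothetical data matching the plotting sketch above (invented for illustration).
baseline = [7, 6, 7, 8, 7]
intervention = [5, 4, 3, 3, 2]

pnd = percentage_of_nonoverlapping_data(baseline, intervention, goal="decrease")
print(f"PND = {pnd:.0f}%")  # 100% here: every intervention point is below the lowest baseline value
```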