The impact of response-guided designs on count outcomes in single-case experimental design baselines

Figure: Cumulative percentage of stable baselines at a given number of observations for the independent Poisson case, up to 20 observations.

Abstract

In single-case experimental designs (SCEDs), researchers often decide when to start treatment based on whether the baseline data collected so far appear stable, a practice known as response-guided design. There is evidence that response-guided designs are common, and researchers have described a variety of criteria for assessing stability. Judging stability by many of these criteria could yield baseline data with artificially limited variability, which may have consequences for statistical inference and effect size estimation. However, little research has examined the impact of response-guided design on the resulting data. Drawing on both applied and methodological research, we describe several algorithms that model response-guided design, and we use simulation methods to assess how each affects the pattern of baseline data. The simulations generate baseline data in the form of frequency counts, a common type of outcome in SCEDs. Most of the response-guided algorithms we identified lead to baselines with approximately unbiased mean levels, but nearly all of them lead to underestimation of the baseline variance. We discuss implications for the use of response-guided designs in practice and for the plausibility of specific algorithms as representations of actual research practice.

Publication
Evidence-Based Communication Assessment and Intervention, forthcoming
