New article: Design-comparable effect sizes in multiple baseline designs: A general modeling framework
My article with Larry Hedges and Will Shadish, titled “Design-comparable effect sizes in multiple baseline designs: A general modeling framework,” has been accepted at the Journal of Educational and Behavioral Statistics. The abstract is below. Here’s the article at the journal website. A postprint and supporting materials are available. An R package that implements the proposed methods is available here.
In single-case research, the multiple baseline design is a widely used approach for evaluating the effects of interventions on individuals. Multiple baseline designs involve repeated measurement of outcomes over time and the controlled introduction of a treatment at different times for different individuals. This article outlines a general approach for defining effect sizes in multiple baseline designs that are directly comparable to the standardized mean difference from a between-subjects randomized experiment. The target, design-comparable effect size parameter can be estimated using restricted maximum likelihood together with a small-sample correction analogous to Hedges’ g. The approach is demonstrated using hierarchical linear models that include baseline time trends and treatment-by-time interactions. A simulation compares the performance of the proposed estimator to that of an alternative, and an application illustrates the model-fitting process.
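The small-sample correction mentioned in the abstract is analogous to the one underlying Hedges’ g, which scales an initial standardized mean difference estimate by the factor J(ν) ≈ 1 − 3/(4ν − 1), where ν is the degrees of freedom. As a rough illustration only (the estimator in the article uses a degrees-of-freedom approximation tailored to the hierarchical model, not this simple sketch):

```python
def small_sample_correction(nu):
    """Approximate correction factor J(nu) = 1 - 3 / (4 * nu - 1).

    This is the standard approximation used to define Hedges' g; it
    shrinks the estimate toward zero, with J(nu) -> 1 as nu grows.
    """
    return 1 - 3 / (4 * nu - 1)

def corrected_effect_size(d, nu):
    """Apply the small-sample correction to an initial estimate d."""
    return small_sample_correction(nu) * d

# Hypothetical example: an uncorrected estimate of 0.8 with 18 df
g = corrected_effect_size(0.8, 18)  # about 0.766
```

The values here (d = 0.8, ν = 18) are made up for illustration; in the design-comparable framework, ν would come from the fitted hierarchical model rather than a simple group comparison.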
Related posts:

- Analyzing single-case designs: d, G, hierarchical models, Bayesian estimators, generalized additive models, and the hopes and fears of researchers about analyses
- Operationally comparable effect sizes for meta-analysis of single-case research
- Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: A primer and applications
- A standardized mean difference effect size for multiple baseline designs
- A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistics