Single-case research

Easily simulate thousands of single-case designs

Earlier this month, I taught at the Summer Research Training Institute on Single-Case Intervention Design and Analysis workshop, sponsored by the Institute of Education Sciences’ National Center for Special Education Research. While I was there, I shared a web-app for simulating data from a single-case design. This is a tool that I put together a couple of years ago as part of my ARPobservation R package, but haven’t ever really publicized or done anything formal with.
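
For anyone curious what "simulating thousands of single-case designs" looks like in code, here is a bare-bones base-R sketch. This is not the ARPobservation machinery or the web-app itself, just a made-up illustration of an AB design with Poisson count outcomes and an assumed treatment rate ratio of 0.5:

```r
# Bare-bones sketch (not the ARPobservation API): simulate many AB designs
# with Poisson count outcomes, where treatment cuts the rate in half.
set.seed(20230601)

simulate_AB <- function(n_A = 5, n_B = 10, baseline_rate = 20, rate_ratio = 0.5) {
  data.frame(
    session = 1:(n_A + n_B),
    phase   = rep(c("A", "B"), c(n_A, n_B)),
    outcome = c(rpois(n_A, lambda = baseline_rate),
                rpois(n_B, lambda = baseline_rate * rate_ratio))
  )
}

# Replicate the design a few thousand times
sims <- replicate(5000, simulate_AB(), simplify = FALSE)
head(sims[[1]])
```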

New paper: A gradual effects model for single-case designs

I’m very happy to share a new paper, co-authored with my student Danny Swan, “A gradual effects model for single-case designs,” which is now available online at Multivariate Behavioral Research. You can access the published version at the journal website (click here for free access while supplies last) or the pre-print on PsyArXiv (always free!). Here’s the abstract and the supplementary materials. Danny wrote R functions for fitting the model (available as part of the SingleCaseES package), as well as a slick web interface if you prefer to point and click.

New paper: Using response ratios for meta-analyzing SCDs with behavioral outcomes

I’m pleased to announce that my article “Using response ratios for meta-analyzing SCDs with behavioral outcomes” has been accepted at the Journal of School Psychology. There are several ways to access this work: for the next six weeks or so, the published version of the article will be available at the journal website. The pre-print will always remain available at PsyArXiv. Some supporting materials and replication code are available on the Open Science Framework.

New paper: procedural sensitivities of effect size measures for SCDs

I’m very happy to share that my article “Procedural sensitivities of effect sizes for single-case designs with directly observed behavioral outcome measures” has been accepted at Psychological Methods. There’s no need to delay in reading it, since you can check out the pre-print and supporting materials. Here’s the abstract: A wide variety of effect size indices have been proposed for quantifying the magnitude of treatment effects in single-case designs. Commonly used measures include parametric indices such as the standardized mean difference, as well as non-overlap measures such as the percentage of non-overlapping data, improvement rate difference, and non-overlap of all pairs.
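
To make the non-overlap measures concrete, here is a quick hand computation of two of them (PND and NAP) on made-up data, assuming that higher scores indicate improvement. The paper works from the formal definitions rather than this toy code:

```r
# Hand-rolled versions of two non-overlap measures named in the abstract,
# assuming higher scores indicate improvement. The data are made up.
A <- c(12, 15, 11, 14, 13)   # baseline phase
B <- c(18, 21, 17, 20, 22)   # treatment phase

# Percentage of non-overlapping data: share of treatment observations
# exceeding the highest baseline observation
PND <- 100 * mean(B > max(A))

# Non-overlap of all pairs: proportion of (A, B) pairs where the treatment
# observation is higher, counting ties as half
diffs <- outer(B, A, FUN = "-")
NAP <- mean((diffs > 0) + 0.5 * (diffs == 0))

c(PND = PND, NAP = NAP)
```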

Back from the IES PI meeting

I’m just back from the Institute of Education Sciences’ Principal Investigators conference in Washington, D.C. It was an invigorating trip for me, and not only because of the opportunity to catch up with colleagues and friends from across the country. A running theme across several of the keynote addresses was the importance of increasing the transparency and replicability of education research, and it was exciting to hear about promising reforms underway and to talk about how to change the norms of our discipline(s).

New working paper: Using log response ratios for meta-analyzing SCDs with behavioral outcomes

One of the papers that came out of my dissertation work (Pustejovsky, 2015) introduced an effect size metric called the log response ratio (or LRR) for use in meta-analysis of single-case research—particularly for single-case studies that measure behavioral outcomes through systematic direct observation. The original paper was pretty technical since it focused mostly on a formal measurement model for behavioral observation data. I’ve just completed a tutorial paper that demonstrates how to use the LRR for meta-analyzing single-case studies with behavioral outcomes.
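
For readers who just want the basic idea: for a simple AB comparison, the LRR is the log of the ratio of the phase means. Here is a toy illustration on made-up data; the bias corrections and variance estimators covered in the tutorial are not shown:

```r
# Minimal illustration of the log response ratio for an AB comparison.
# The small-sample corrections developed in the paper are omitted; data are made up.
A <- c(12, 15, 11, 14, 13)   # baseline phase (e.g., responses per session)
B <- c(18, 21, 17, 20, 22)   # treatment phase

LRR <- log(mean(B) / mean(A))
pct_change <- 100 * (exp(LRR) - 1)   # back-transform to percentage change

c(LRR = LRR, pct_change = pct_change)
```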

New tutorial paper on BC-SMD effect sizes

I’m pleased to announce that the Campbell Collaboration has just published a new discussion paper that I wrote with my colleagues Jeff Valentine and Emily Tanner-Smith about between-case standardized mean difference effect sizes for single-case designs. The paper provides a relatively non-technical introduction to BC-SMD effect sizes and a tutorial on how to use the scdhlm web-app for calculating estimates of the BC-SMD for user-provided data. If you have any questions or feedback about the app, please feel free to contact me!
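
If you would rather run the app locally than use the hosted version, I believe recent versions of the scdhlm R package include a launcher function; assuming that is the case, something like this should get you there (otherwise, the hosted web-app described in the paper works the same way):

```r
# Launch the scdhlm web-app locally (assuming the package exports a
# shine_scd() launcher, as I believe recent versions do)
# install.packages("scdhlm")
library(scdhlm)
shine_scd()
```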

Presentation at IES 2016 PI meeting

I am just back from the Institute of Education Sciences 2016 Principal Investigators meeting. Rob Horner had organized a session titled “Single-case methods: Current status and needed directions” as a tribute to our colleague Will Shadish, who passed away this past year. Rob invited me to give some brief remarks about Will as a mentor, and then to present some of my work with Will and Larry Hedges on effect sizes for single-case research.

What is Tau-U?

Parker, Vannest, Davis, and Sauber (2011) proposed the Tau-U index (actually a family of several indices) as an effect size measure for single-case designs. The original paper describes several different indices that involve corrections for trend during the baseline phase, the treatment phase, both phases, or neither phase. Without correcting for trends in either phase, the index is equal to the Mann-Whitney U statistic, calculated by comparing every pair of observations containing one point from each phase, scaled by the total number of such pairs.
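
Here is a literal translation of that description into a few lines of R, on made-up data. The published formulas (and the trend-corrected variants) involve further details, such as the handling of ties, that are not shown here:

```r
# Following the description above: compare every (baseline, treatment) pair of
# observations and scale by the number of pairs. Ties are counted as half.
# The trend-corrected Tau-U variants add further terms not shown here.
A <- c(12, 15, 11, 14, 13)   # baseline phase
B <- c(18, 21, 17, 20, 22)   # treatment phase

diffs <- outer(B, A, FUN = "-")
U <- sum(diffs > 0) + 0.5 * sum(diffs == 0)     # Mann-Whitney U statistic
index <- U / (length(A) * length(B))            # scaled by the number of pairs
index
```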

New working paper: Procedural sensitivities of SCD effect sizes

I’ve just posted a new version of my working paper, Procedural sensitivities of effect sizes for single-case designs with behavioral outcome measures. The abstract is below. This version is a major update of an earlier paper that focused only on the non-overlap measures. The new version also includes analysis of two other effect sizes (the within-case standardized mean difference and the log response ratio) as well as additional results and more succinct summaries of the main findings.
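
As a point of reference, here is a toy computation of the within-case standardized mean difference on made-up data, standardizing by the baseline-phase SD (one common convention; the paper treats the details more carefully):

```r
# Within-case standardized mean difference for an AB comparison,
# standardizing by the baseline SD. Data are made up.
A <- c(12, 15, 11, 14, 13)   # baseline phase
B <- c(18, 21, 17, 20, 22)   # treatment phase

SMD_within <- (mean(B) - mean(A)) / sd(A)
SMD_within
```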