Effect sizes

New paper: A gradual effects model for single-case designs

I’m very happy to share a new paper, co-authored with my student Danny Swan, “A gradual effects model for single-case designs,” which is now available online at Multivariate Behavioral Research. You can access the published version at the journal website (click here for free access while supplies last) or the pre-print on PsyArXiv (always free!). Here’s the abstract and the supplementary materials. Danny wrote R functions for fitting the model (available as part of the SingleCaseES package), as well as a slick web interface if you prefer to point-and-click.

Sampling variance of Pearson r in a two-level design

Consider Pearson’s correlation coefficient, \(r\), calculated from two variables \(X\) and \(Y\) with population correlation \(\rho\). If one calculates \(r\) from a simple random sample of \(N\) observations, then its sampling variance will be approximately \[ \text{Var}(r) \approx \frac{1}{N}\left(1 - \rho^2\right)^2. \] But what if the observations are drawn from a multi-stage sample? If one uses the raw correlation between the observations (ignoring the multi-level structure), then \(r\) will actually be a weighted average of the within-cluster and between-cluster correlations (see Snijders & Bosker, 2012).
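As a quick check on the simple-random-sampling approximation, here is a small Monte Carlo sketch. The values of rho, N, and the number of replications are arbitrary choices for illustration, not anything from the paper:

```python
import numpy as np

# Monte Carlo check of the large-sample approximation
# Var(r) ≈ (1 - rho^2)^2 / N under simple random sampling.
rng = np.random.default_rng(42)
rho, N, reps = 0.5, 200, 20000

# Draw `reps` independent samples of N bivariate-normal observations
cov = np.array([[1.0, rho], [rho, 1.0]])
draws = rng.multivariate_normal([0.0, 0.0], cov, size=(reps, N))
x, y = draws[:, :, 0], draws[:, :, 1]

# Pearson r for each replicate, computed from centered cross-products
xc = x - x.mean(axis=1, keepdims=True)
yc = y - y.mean(axis=1, keepdims=True)
r = (xc * yc).sum(axis=1) / np.sqrt((xc**2).sum(axis=1) * (yc**2).sum(axis=1))

approx = (1 - rho**2) ** 2 / N
print(f"empirical Var(r): {r.var():.5f}, approximation: {approx:.5f}")
```

With samples of size 200, the empirical variance of the simulated correlations lands close to the approximation; the point of the post is that this agreement breaks down once the observations come from a clustered, multi-stage sample.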

New paper: Using response ratios for meta-analyzing SCDs with behavioral outcomes

I’m pleased to announce that my article “Using response ratios for meta-analyzing SCDs with behavioral outcomes” has been accepted at Journal of School Psychology. There are several ways to access this work: for the next six weeks or so, the published version of the article will be available at the journal website. The pre-print will always remain available at PsyArXiv. Some supporting materials and replication code are available on the Open Science Framework.

New paper: procedural sensitivities of effect size measures for SCDs

I’m very happy to share that my article “Procedural sensitivities of effect sizes for single-case designs with directly observed behavioral outcome measures” has been accepted at Psychological Methods. There’s no need to delay in reading it, since you can check out the pre-print and supporting materials. Here’s the abstract: A wide variety of effect size indices have been proposed for quantifying the magnitude of treatment effects in single-case designs. Commonly used measures include parametric indices such as the standardized mean difference, as well as non-overlap measures such as the percentage of non-overlapping data, improvement rate difference, and non-overlap of all pairs.
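To make two of the non-overlap measures named in the abstract concrete, here is a minimal sketch of the percentage of non-overlapping data (PND) and non-overlap of all pairs (NAP) for a single case, assuming higher outcome values represent improvement. The data values are invented for illustration:

```python
def pnd(baseline, treatment):
    """Percentage of non-overlapping data: the share of treatment-phase
    points that exceed the maximum baseline-phase point."""
    best_baseline = max(baseline)
    return sum(t > best_baseline for t in treatment) / len(treatment)

def nap(baseline, treatment):
    """Non-overlap of all pairs: the share of (baseline, treatment) pairs
    in which the treatment observation is higher, counting ties as half."""
    pairs = [(a, b) for a in baseline for b in treatment]
    wins = sum(1.0 if b > a else 0.5 if b == a else 0.0 for a, b in pairs)
    return wins / len(pairs)

# Invented example data for one AB comparison
baseline = [2, 3, 5, 3]
treatment = [4, 6, 7, 5, 6]
print(pnd(baseline, treatment))  # 3 of 5 treatment points exceed the baseline max of 5
print(nap(baseline, treatment))
```

Both indices run from 0 to 1, but as the paper argues, they respond quite differently to procedural features of the study, such as the number of observations in each phase.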

New working paper: Using log response ratios for meta-analyzing SCDs with behavioral outcomes

One of the papers that came out of my dissertation work (Pustejovsky, 2015) introduced an effect size metric called the log response ratio (or LRR) for use in meta-analysis of single-case research—particularly for single-case studies that measure behavioral outcomes through systematic direct observation. The original paper was pretty technical since it focused mostly on a formal measurement model for behavioral observation data. I’ve just completed a tutorial paper that demonstrates how to use the LRR for meta-analyzing single-case studies with behavioral outcomes.
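In its simplest form, the LRR for a single case is just the log of the ratio of phase means. Here is a minimal sketch for an outcome where an increase represents improvement; the sample data are invented:

```python
import math

def log_response_ratio(baseline, treatment):
    """Log response ratio for one case: the natural log of the ratio of
    the treatment-phase mean to the baseline-phase mean (increase = improvement)."""
    mean_a = sum(baseline) / len(baseline)
    mean_b = sum(treatment) / len(treatment)
    return math.log(mean_b / mean_a)

# Invented example: the treatment-phase mean is double the baseline mean
baseline = [5, 7, 6, 6]
treatment = [12, 10, 14, 12]
lrr = log_response_ratio(baseline, treatment)
print(round(lrr, 3))
```

One appeal of this metric is its interpretability: 100 * (exp(LRR) - 1) is the percentage change from baseline to treatment, so an LRR of ln(2) ≈ 0.693 corresponds to a 100% increase.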

New tutorial paper on BC-SMD effect sizes

I’m pleased to announce that the Campbell Collaboration has just published a new discussion paper that I wrote with my colleagues Jeff Valentine and Emily Tanner-Smith about between-case standardized mean difference effect sizes for single-case designs. The paper provides a relatively non-technical introduction to BC-SMD effect sizes and a tutorial on how to use the scdhlm web-app for calculating estimates of the BC-SMD for user-provided data. If you have any questions or feedback about the app, please feel free to contact me!

Presentation at IES 2016 PI meeting

I am just back from the Institute of Education Sciences 2016 Principal Investigators meeting. Rob Horner had organized a session titled “Single-case methods: Current status and needed directions” as a tribute to our colleague Will Shadish, who passed away this past year. Rob invited me to give some brief remarks about Will as a mentor, and then to present some of my work with Will and Larry Hedges on effect sizes for single-case research.

What is Tau-U?

Parker, Vannest, Davis, and Sauber (2011) proposed the Tau-U index—actually several indices, rather—as effect size measures for single-case designs. The original paper describes several different indices that involve corrections for trend during the baseline phase, treatment phase, both phases, or neither phase. Without correcting for trends in either phase, the index is equal to the Mann-Whitney (U) statistic calculated by comparing every pair of observations containing one point from each phase, scaled by the total number of such pairs.
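Following the description above, here is a minimal sketch of the index without trend corrections: the Mann-Whitney U statistic (counting pairs in which the treatment-phase observation exceeds the baseline-phase observation, with ties counted as half), scaled by the total number of pairs. The data values are invented:

```python
def tau_no_trend(baseline, treatment):
    """Mann-Whitney U over all (baseline, treatment) pairs, with ties
    counted as half, scaled by the total number of such pairs."""
    u = sum(1.0 if b > a else 0.5 if b == a else 0.0
            for a in baseline for b in treatment)
    return u / (len(baseline) * len(treatment))

# Invented example data for one AB comparison
baseline = [3, 4, 5]
treatment = [6, 5, 7, 6]
print(tau_no_trend(baseline, treatment))
```

Computed this way, the scaled U is the same quantity as the non-overlap of all pairs; the trend-corrected variants of Tau-U modify the numerator by adding or subtracting Kendall-type counts within phases.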

New working paper: Procedural sensitivities of SCD effect sizes

I’ve just posted a new version of my working paper, Procedural sensitivities of effect sizes for single-case designs with behavioral outcome measures. The abstract is below. This version is a major update of an earlier paper that focused only on the non-overlap measures. The new version also includes analysis of two other effect sizes (the within-case standardized mean difference and the log response ratio) as well as additional results and more succinct summaries of the main findings.

Alternative formulas for the standardized mean difference

The standardized mean difference (SMD) is surely one of the best-known and most widely used effect size metrics in meta-analysis. In generic terms, the SMD parameter is defined as the difference in population means between two groups (often this difference represents the effect of some intervention), scaled by the population standard deviation of the outcome metric. Estimates of the SMD can be obtained from a wide variety of experimental designs, ranging from simple, completely randomized designs, to repeated measures designs, to cluster-randomized trials.
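For the simplest case, a completely randomized two-group design, here is a minimal sketch of the usual estimator (Cohen's d), which scales the mean difference by the pooled sample standard deviation. The two samples are invented for illustration:

```python
import math

def cohens_d(group1, group2):
    """Standardized mean difference for two independent groups,
    scaled by the pooled sample standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    sd_pooled = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

# Invented example data: treatment vs. control
treatment = [10, 12, 11, 13, 14]
control = [8, 9, 10, 9, 9]
d = cohens_d(treatment, control)
print(round(d, 3))
```

For other designs (repeated measures, cluster-randomized trials), the numerator stays the same but the choice of scaling standard deviation differs, which is exactly why alternative formulas for the SMD are needed.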