Posts

Earlier this month, I taught at the Summer Research Training Institute on Single-Case Intervention Design and Analysis workshop, sponsored by the Institute of Education Sciences’ National Center for Special Education Research. While I was there, I shared a web-app for simulating data from a single-case design. This is a tool that I put together a couple of years ago as part of my ARPobservation R package, but haven’t ever really publicized or done anything formal with.

CONTINUE READING

I’m very happy to share a new paper, co-authored with my student Danny Swan, “A gradual effects model for single-case designs,” which is now available online at Multivariate Behavioral Research. You can access the published version at the journal website (click here for free access while supplies last) or the pre-print on PsyArXiv (always free!). Here’s the abstract and the supplementary materials. Danny wrote R functions for fitting the model (available as part of the SingleCaseES package), as well as a slick web interface if you prefer to point-and-click.

CONTINUE READING

Last night I attended a joint meetup between the Austin R User Group and R Ladies Austin, which was great fun. The evening featured several lightning talks on a range of topics, from breaking into data science to network visualization to starting your own blog. I gave a talk about sandwich standard errors and my clubSandwich R package. Here are links to some of the talks:

- Caitlin Hudon: Getting Plugged into Data Science
- Claire McWhite: A quick intro to networks
- Nathaniel Woodward: Blogdown Demo!

CONTINUE READING

Consider Pearson’s correlation coefficient, \(r\), calculated from two variables \(X\) and \(Y\) with population correlation \(\rho\). If one calculates \(r\) from a simple random sample of \(N\) observations, then its sampling variance will be approximately \[ \text{Var}(r) \approx \frac{1}{N}\left(1 - \rho^2\right)^2. \] But what if the observations are drawn from a multi-stage sample? If one uses the raw correlation between the observations (ignoring the multi-level structure), then \(r\) will actually be a weighted average of within-cluster and between-cluster correlations (see Snijders & Bosker, 2012).
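As a quick sanity check on that approximation, here is a small simulation sketch of my own (the values of \(\rho\) and \(N\) are arbitrary examples, and the observations are drawn as a simple random sample from a bivariate normal):

```r
# Compare the approximate variance of r to a simulated sampling distribution
# (rho and N are arbitrary example values; simple random sampling throughout).
rho <- 0.5
N <- 100
approx_var <- (1 - rho^2)^2 / N

set.seed(1234)
r_sims <- replicate(10000, {
  X <- rnorm(N)
  Y <- rho * X + sqrt(1 - rho^2) * rnorm(N)  # bivariate normal with cor = rho
  cor(X, Y)
})

c(approximation = approx_var, simulation = var(r_sims))
```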

CONTINUE READING

The delta method is surely one of the most useful techniques in classical statistical theory. It’s perhaps a bit odd to put it this way, but I would say that the delta method is something like the precursor to the bootstrap, in terms of its utility and broad range of applications—both are “first-line” tools for solving statistical problems. There are many good references on the delta method, ranging from the Wikipedia page to a short introduction in The American Statistician (Oehlert, 1992).
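For readers who would like a concrete reminder of the basic recipe before clicking through, here is a toy, hand-rolled example (my own, not taken from any of those references): the delta-method standard error of \(\log(\hat{p})\) for a sample proportion, using \(\text{Var}\left(g(\hat\theta)\right) \approx \left[g'(\theta)\right]^2 \text{Var}(\hat\theta)\).

```r
# Toy delta-method calculation: approximate SE of log(p_hat) for a binomial
# proportion (n and x are arbitrary example values).
n <- 200
x <- 140
p_hat <- x / n
var_p <- p_hat * (1 - p_hat) / n   # Var(p_hat) for a binomial proportion
se_log_p <- sqrt(var_p) / p_hat    # g(p) = log(p), so g'(p) = 1 / p
se_log_p
```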

CONTINUE READING

I’m pleased to announce that my article “Using response ratios for meta-analyzing SCDs with behavioral outcomes” has been accepted at Journal of School Psychology. There’s a multitude of ways that you can access this work:

- For the next 6 weeks or so, the published version of the article will be available at the journal website.
- The pre-print will always remain available at PsyArXiv.
- Some supporting materials and replication code are available on the Open Science Framework.

CONTINUE READING

I’m very happy to share that my article “Procedural sensitivities of effect sizes for single-case designs with directly observed behavioral outcome measures” has been accepted at Psychological Methods. There’s no need to delay in reading it, since you can check out the pre-print and supporting materials. Here’s the abstract:

A wide variety of effect size indices have been proposed for quantifying the magnitude of treatment effects in single-case designs. Commonly used measures include parametric indices such as the standardized mean difference, as well as non-overlap measures such as the percentage of non-overlapping data, improvement rate difference, and non-overlap of all pairs.
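To make one of those non-overlap measures concrete, here is a minimal from-scratch sketch of non-overlap of all pairs (NAP), computed on made-up data and assuming that higher scores represent improvement. This just illustrates the definition; it is not the implementation used in the paper.

```r
# Non-overlap of all pairs (NAP): the proportion of all (baseline, treatment)
# pairs in which the treatment observation exceeds the baseline observation,
# counting ties as 1/2. Data below are hypothetical; higher = improvement.
A <- c(20, 22, 19, 24, 21)        # hypothetical baseline-phase observations
B <- c(26, 25, 28, 27, 30, 29)    # hypothetical treatment-phase observations

comparisons <- outer(B, A, ">") + 0.5 * outer(B, A, "==")
NAP <- mean(comparisons)
NAP
```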

CONTINUE READING

I’m just back from the Institute of Education Sciences’ Principal Investigators conference in Washington D.C. It was an invigorating trip for me, and not only because of the opportunity to catch up with colleagues and friends from across the country. A running theme across several of the keynote addresses was the importance of increasing the transparency and replicability of education research, and it was exciting to hear about promising reforms underway and to talk about how to change the norms of our discipline(s).

CONTINUE READING

I just covered instrumental variables in my course on causal inference, and so I have two-stage least squares (2SLS) estimation on the brain. In this post I’ll share something I realized in the course of prepping for class: that standard errors from 2SLS estimation are equivalent to delta method standard errors based on the Wald IV estimator. (I’m no econometrician, so this had never occurred to me before. Perhaps it will be interesting to other non-econometrician readers.)
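To fix ideas, here is a rough numerical sketch of my own (not the derivation from the post): simulated data with a single binary instrument, a hand-rolled delta-method standard error for the Wald ratio, and a 2SLS fit from AER::ivreg for comparison. With homoskedastic errors, the two standard errors should agree closely in large samples.

```r
# Simulated illustration: delta-method SE of the Wald IV ratio vs. the 2SLS SE.
library(AER)  # provides ivreg()

set.seed(2018)
n <- 10000
Z <- rbinom(n, 1, 0.5)           # binary instrument
U <- rnorm(n)                    # unobserved confounder
X <- 0.5 * Z + U + rnorm(n)      # endogenous regressor
Y <- 0.8 * X + U + rnorm(n)      # outcome; true effect = 0.8

# Wald IV estimator: reduced-form difference over first-stage difference
d_Y <- mean(Y[Z == 1]) - mean(Y[Z == 0])
d_X <- mean(X[Z == 1]) - mean(X[Z == 0])
wald <- d_Y / d_X

# Delta-method variance of the ratio d_Y / d_X
n1 <- sum(Z == 1); n0 <- sum(Z == 0)
v_Y  <- var(Y[Z == 1]) / n1 + var(Y[Z == 0]) / n0
v_X  <- var(X[Z == 1]) / n1 + var(X[Z == 0]) / n0
c_YX <- cov(Y[Z == 1], X[Z == 1]) / n1 + cov(Y[Z == 0], X[Z == 0]) / n0
se_wald <- sqrt(v_Y / d_X^2 - 2 * d_Y * c_YX / d_X^3 + d_Y^2 * v_X / d_X^4)

# 2SLS estimate and conventional SE for comparison
iv_fit <- ivreg(Y ~ X | Z)
se_tsls <- coef(summary(iv_fit))["X", "Std. Error"]
c(wald = wald, se_wald = se_wald, tsls = coef(iv_fit)[["X"]], se_tsls = se_tsls)
```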

CONTINUE READING

A colleague recently asked me about how to apply cluster-robust hypothesis tests and confidence intervals, as calculated with the clubSandwich package, when dealing with multiply imputed datasets. Standard methods (i.e., Rubin’s rules) for pooling estimates from multiply imputed datasets are developed under the assumption that the full-data estimates are approximately normally distributed. However, this might not be reasonable when working with test statistics based on cluster-robust variance estimators, which can be imprecise when the number of clusters is small or the design matrix of predictors is unbalanced in certain ways.
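To fix ideas, here is a sketch of what the basic Rubin’s-rules pooling looks like with cluster-robust variance estimates from clubSandwich. Everything in it is a placeholder: `imp_list` is a hypothetical list of completed (imputed) datasets, and the model formula and clustering variable are made up. As noted above, this naive pooling may not be trustworthy when the number of clusters is small.

```r
# Rubin's-rules pooling with cluster-robust (CR2) variances from clubSandwich.
# `imp_list` is a hypothetical list of m completed data frames with outcome y,
# treatment trt, covariate x, and clustering variable school (all placeholders).
library(clubSandwich)

m <- length(imp_list)
fits <- lapply(imp_list, function(dat) lm(y ~ trt + x, data = dat))

coefs <- sapply(fits, coef)     # p x m matrix of coefficient estimates
vcovs <- mapply(function(fit, dat) vcovCR(fit, cluster = dat$school, type = "CR2"),
                fits, imp_list, SIMPLIFY = FALSE)

beta_pool <- rowMeans(coefs)    # pooled point estimates
W <- Reduce(`+`, vcovs) / m     # average within-imputation variance
B <- cov(t(coefs))              # between-imputation variance
V_pool <- W + (1 + 1 / m) * B   # Rubin's rules total variance
se_pool <- sqrt(diag(V_pool))
```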

CONTINUE READING