The multivariate delta method

The delta method is surely one of the most useful techniques in classical statistical theory. It’s perhaps a bit odd to put it this way, but I would say that the delta method is something like the precursor to the bootstrap, in terms of its utility and broad range of applications—both are “first-line” tools for solving statistical problems.
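For readers who want the statement itself, here is the multivariate delta method in standard notation (a generic refresher, not notation taken from the post): if an estimator is asymptotically normal, then any smooth transformation of it is too, with variance governed by the gradient.

```latex
\sqrt{n}\,\bigl(\hat\theta - \theta\bigr) \;\xrightarrow{d}\; N(0, \Sigma)
\quad\Longrightarrow\quad
\sqrt{n}\,\bigl(g(\hat\theta) - g(\theta)\bigr) \;\xrightarrow{d}\; N\!\bigl(0,\; \nabla g(\theta)^\top \Sigma\, \nabla g(\theta)\bigr),
```

for any function $g$ that is continuously differentiable at $\theta$ with non-zero gradient.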

New paper: Using response ratios for meta-analyzing SCDs with behavioral outcomes

I’m pleased to announce that my article “Using response ratios for meta-analyzing SCDs with behavioral outcomes” has been accepted at Journal of School Psychology. There are a multitude of ways that you can access this work:


New paper: procedural sensitivities of effect size measures for SCDs

I’m very happy to share that my article “Procedural sensitivities of effect sizes for single-case designs with directly observed behavioral outcome measures” has been accepted at Psychological Methods. There’s no need to delay in reading it, since you can check out the pre-print and supporting materials.

Back from the IES PI meeting

I’m just back from the Institute of Education Sciences’ Principal Investigators conference in Washington, D.C. It was an invigorating trip for me, and not only because of the opportunity to catch up with colleagues and friends from across the country.

2SLS standard errors and the delta-method

I just covered instrumental variables in my course on causal inference, and so I have two-stage least squares (2SLS) estimation on the brain. In this post I’ll share something I realized in the course of prepping for class: that standard errors from 2SLS estimation are equivalent to delta method standard errors based on the Wald IV estimator.
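The equivalence is easy to check numerically. Here is a toy simulation in numpy (the data-generating process and variable names are mine, not from the post), comparing the delta-method standard error of the Wald ratio, computed under homoskedasticity, against the conventional standard error from doing 2SLS by hand:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
z = rng.binomial(1, 0.5, n).astype(float)      # binary instrument
u = rng.normal(size=n)                          # structural error, correlated with x
x = 0.7 * z + 0.5 * u + rng.normal(size=n)      # endogenous regressor
y = 2.0 * x + u                                 # outcome; true slope is 2

zc, xc, yc = z - z.mean(), x - x.mean(), y - y.mean()

# Wald IV estimator: ratio of instrument-outcome to instrument-regressor covariances
beta_iv = (zc @ yc) / (zc @ xc)

# Homoskedastic residual variance, based on structural residuals at the IV slope
resid = yc - beta_iv * xc
sigma2 = (resid @ resid) / (n - 2)

# Delta-method SE of the Wald ratio: Var(beta) ~ Var(z'u) / (z'x)^2 = sigma2 * z'z / (z'x)^2
se_delta = np.sqrt(sigma2 * (zc @ zc)) / abs(zc @ xc)

# 2SLS by hand: first stage projects x on z; second stage regresses y on the fitted values
xhat = ((zc @ xc) / (zc @ zc)) * zc             # centered first-stage fitted values
beta_2sls = (xhat @ yc) / (xhat @ xhat)
se_2sls = np.sqrt(sigma2 / (xhat @ xhat))

print(np.isclose(beta_iv, beta_2sls), np.isclose(se_delta, se_2sls))  # True True
```

The match is exact, not just approximate: with one instrument, the centered fitted values satisfy x̂'x̂ = (z'x)²/(z'z), so the two variance formulas are algebraically identical.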

Pooling clubSandwich results across multiple imputations

A colleague recently asked me about how to apply cluster-robust hypothesis tests and confidence intervals, as calculated with the clubSandwich package, when dealing with multiply imputed datasets. Standard methods (i.
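For the pooling step itself, the standard combination rule is Rubin's rules: average the point estimates across imputations, and combine within- and between-imputation variance. A minimal sketch (the function name `pool_rubin` is mine, and this covers only the variance combination, not the degrees-of-freedom adjustment):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool point estimates and squared SEs from m imputed datasets via Rubin's rules."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    qbar = estimates.mean()                 # pooled point estimate
    within = variances.mean()               # within-imputation variance
    between = estimates.var(ddof=1)         # between-imputation variance
    total = within + (1 + 1 / m) * between  # total variance of the pooled estimate
    return qbar, total

est, var = pool_rubin([1.1, 0.9, 1.0], [0.04, 0.05, 0.06])
```

Here `est` is 1.0 and `var` combines the average sampling variance (0.05) with the inflation for between-imputation variability.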

Imputing covariance matrices for meta-analysis of correlated effects

In many systematic reviews, it is common for eligible studies to contribute effect size estimates from not just one but multiple relevant outcome measures, all based on a common sample of participants.
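A common workaround when the within-study correlations are unreported is to impute the covariance matrix by assuming a single correlation r among all effect sizes from the same sample (the approach implemented, for one study's worth of estimates, by functions like clubSandwich's covariance-imputation helper; the numpy sketch and the name `impute_vcov` below are mine):

```python
import numpy as np

def impute_vcov(v, r):
    """Build an imputed covariance matrix for correlated effect size estimates.

    v : vector of known sampling variances for one study's estimates
    r : assumed correlation between every pair of estimates
    """
    v = np.asarray(v, dtype=float)
    sd = np.sqrt(v)
    V = r * np.outer(sd, sd)   # off-diagonal entries: r * sd_i * sd_j
    np.fill_diagonal(V, v)     # diagonal entries: the known sampling variances
    return V

V = impute_vcov([0.04, 0.09], r=0.7)
```

For two estimates with variances 0.04 and 0.09 and r = 0.7, the imputed covariance is 0.7 × 0.2 × 0.3 = 0.042.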

The siren song of significance

How is statistical analysis like the Odyssey? Here’s an analogy that I used in my research methods course last semester to explain the purpose of study pre-registration. If you’ve ever read the Odyssey, you’ll recall the story of the Sirens, the enchanting lady-monsters whose singing lures to certain death any sailor who hears them.

You wanna PEESE of d's?

Publication bias—or more generally, outcome reporting bias or dissemination bias—is recognized as a critical threat to the validity of findings from research syntheses. In the areas with which I am most familiar (education and psychology), it has become more or less a requirement for research synthesis projects to conduct analyses to detect the presence of systematic outcome reporting biases.
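The PEESE adjustment referenced in the title is, mechanically, a weighted least squares regression of the effect size estimates on their sampling variances, with inverse-variance weights; the fitted intercept serves as the bias-adjusted estimate of the mean effect. A simulated sketch in numpy (the data-generating process is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 40
se = rng.uniform(0.05, 0.4, k)               # standard errors of k studies
d = 0.3 + 0.5 * se**2 + rng.normal(0.0, se)  # effects with a small-study relationship

# PEESE: regress effect on sampling variance (se^2), weighting by precision (1/se^2)
X = np.column_stack([np.ones(k), se**2])
w = 1.0 / se**2
beta = np.linalg.solve((X.T * w) @ X, (X.T * w) @ d)
intercept = beta[0]   # bias-adjusted estimate of the mean effect
```

In the full PET-PEESE procedure, this variance-based regression is used conditionally, after a first-stage test based on regressing on the standard error itself.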

New working paper: Using log response ratios for meta-analyzing SCDs with behavioral outcomes

One of the papers that came out of my dissertation work (Pustejovsky, 2015) introduced an effect size metric called the log response ratio (or LRR) for use in meta-analysis of single-case research—particularly for single-case studies that measure behavioral outcomes through systematic direct observation.
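In its simplest form, the LRR is the log of the ratio of phase means, and its sampling variance comes from exactly the delta method discussed above. The sketch below shows that basic version only; it omits the small-sample bias corrections developed in the paper, and the function name `lrr` is mine:

```python
import numpy as np

def lrr(m_t, s_t, n_t, m_b, s_b, n_b):
    """Basic log response ratio comparing treatment- and baseline-phase means.

    m, s, n: sample mean, SD, and number of observations in each phase.
    Returns the estimate and a delta-method approximation to its variance.
    """
    est = np.log(m_t / m_b)
    v = s_t**2 / (n_t * m_t**2) + s_b**2 / (n_b * m_b**2)
    return est, v

est, v = lrr(m_t=4.0, s_t=2.0, n_t=10, m_b=8.0, s_b=3.0, n_b=12)
```

A treatment-phase mean of 4 against a baseline mean of 8 gives an estimate of ln(0.5) ≈ −0.69, i.e., the behavior occurred at about half its baseline rate.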