James E. Pustejovsky

I am a statistician and associate professor in the School of Education at the University of Wisconsin-Madison, where I teach in the Educational Psychology Department and the graduate program in Quantitative Methods. My research involves developing statistical methods for problems in education, psychology, and other areas of social science research, with a focus on methods related to research synthesis and meta-analysis.

Interests
  • Meta-analysis
  • Causal inference
  • Robust statistical methods
  • Education statistics
  • Single case experimental designs

Education
  • PhD in Statistics, 2013

    Northwestern University

  • BA in Economics, 2003

    Boston College

Recent Posts

Cohen's $d_z$ makes me dizzy when considering measurement error

Meta-analyses in education, psychology, and related fields rely heavily on Cohen’s \(d\), or the standardized mean difference effect size, for quantitatively describing the magnitude and direction of intervention effects.

Corrigendum to Pustejovsky and Tipton (2018), redux

UPDATE, March 8, 2023: The correction to our paper has now been published in the Journal of Business and Economic Statistics. It is available at https://doi.

Corrigendum to Pustejovsky and Tipton (2018)

In my 2018 paper with Beth Tipton, published in the Journal of Business and Economic Statistics, we considered how to do cluster-robust variance estimation in fixed effects models estimated by weighted (or unweighted) least squares.

Variance component estimates in meta-analysis with mis-specified sampling correlation

In a recent paper with Beth Tipton, we proposed new working models for meta-analyses involving dependent effect sizes. The central idea of our approach is to use a working model that captures the main features of the effect size data, such as by allowing for both between- and within-study heterogeneity in the true effect sizes (rather than only between-study heterogeneity).

Implications of mean-variance relationships for standardized mean differences

I spend more time than I probably should discussing meta-analysis problems on the R-SIG-meta-analysis listserv. The questions that folks pose there are often quite interesting—especially when they’re motivated by issues that they’re wrestling with while trying to complete meta-analysis projects in their diverse fields.

Working Papers

Recent Publications

Comparison of competing approaches to analyzing cross-classified data: Random effects models, ordinary least squares, or fixed effects with cluster robust standard errors

Cross-classified random effects modeling (CCREM) is a common approach for analyzing cross-classified data in education. However, when the focus of a study is on the regression coefficients at level …

Between-case standardized mean differences: Flexible methods for single-case designs

Single-case designs (SCDs) are a class of research methods for evaluating the effects of academic and behavioral interventions in educational and clinical settings. Although visual analysis is …

Power approximations for overall average effects in meta-analysis of dependent effect sizes

Meta-analytic models for dependent effect sizes have grown increasingly sophisticated over the last few decades, which has created challenges for a priori power calculations. We introduce power …

Investigating narrative performance in children with developmental language disorder: A systematic review and meta-analysis

Purpose: Speech-language pathologists (SLPs) typically examine narrative performance when completing a comprehensive language assessment. However, there is significant variability in the methodologies …

Multi-level meta-analysis of single-case experimental designs using robust variance estimation

Single-case experimental designs (SCEDs) are used to study the effects of interventions on the behavior of individual cases, by making comparisons between repeated measurements of an outcome under …

Recent Presentations

A matter of emphasis: Comparison of working models for meta-analysis of dependent effect sizes

The state of single case synthesis: Premises, tools, and possibilities

Selective reporting in meta-analysis of dependent effect size estimates

Publication bias and other forms of selective outcome reporting are important threats to the validity of findings from research syntheses—even undermining their special status for informing evidence-based practice and policy guidance.

Easy, cluster-robust standard errors with the clubSandwich package

Cluster-robust variance estimation methods (also known as sandwich estimators, linearization estimators, or simply “clustered” standard errors) are a standard inferential tool in many …
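A minimal sketch of the kind of workflow the clubSandwich package supports (the dataset, model, and clustering variable below are illustrative choices, not taken from the presentation):

```r
# Minimal sketch: small-sample-corrected cluster-robust standard errors
# with clubSandwich. The model and clustering variable are illustrative.
library(clubSandwich)

# Fit an ordinary least squares regression on a built-in dataset
mod <- lm(mpg ~ wt + hp, data = mtcars)

# Test the coefficients using the CR2 cluster-robust variance
# estimator, clustering observations by cylinder count
res <- coef_test(mod, vcov = "CR2", cluster = mtcars$cyl)
res
```

The `"CR2"` option applies the bias-reduced linearization correction, which improves inference when the number of clusters is small.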

Four things every quantitative social scientist should know about meta-analysis

Meta-analysis is a set of statistical tools for synthesizing results across multiple sources of evidence. Meta-analyses of intervention research are often taken as a gold standard for informing …


Software
Power for Meta-Analysis of Dependent Effects


Information Matrices for ‘lmeStruct’ and ‘glsStruct’ Objects


Helper package to assist in running simulation studies


Simulate systematic direct observation data


Cluster-robust variance estimation


Between-case SMD for single-case designs


Single-case design effect size calculator


Cluster-wild bootstrap for meta-regression


Current Advisees


Young Ri Lee

Graduate student


Man Chen

Graduate student


Paulina Grekov

Graduate student



Megha Joshi

Quantitative Researcher


Christopher Runyon

Measurement Scientist



Whatev's Donkey

Lab mascot