James E. Pustejovsky

I am a statistician and associate professor in the School of Education at the University of Wisconsin-Madison, where I teach in the Educational Psychology Department and the graduate program in Quantitative Methods. My research involves developing statistical methods for problems in education, psychology, and other areas of social science research, with a focus on methods related to research synthesis and meta-analysis.

Interests
  • Meta-analysis
  • Causal inference
  • Robust statistical methods
  • Education statistics
  • Single case experimental designs

Education
  • PhD in Statistics, 2013

    Northwestern University

  • BA in Economics, 2003

    Boston College

Recent Posts

Finding the distribution of significant effect sizes

In basic meta-analysis, where each study contributes just a single effect size estimate, there has been a lot of work devoted to developing models for selective reporting. Most of these models formulate the selection process as a function of the statistical significance of the effect size estimate; some also allow for the possibility that the precision of the study’s effect influences the probability of selection …
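As a hedged illustration (not taken from the post itself), a simple step-function selection model of the kind described weights each estimate's contribution to the likelihood by a function of its one-sided p-value $p_i$:

```latex
w(p_i) =
\begin{cases}
1 & \text{if } p_i < .025 \\
\lambda & \text{if } p_i \geq .025
\end{cases}
```

Here $\lambda$ represents the relative probability that a non-significant effect size estimate is reported, compared to a significant one; $\lambda < 1$ corresponds to selective reporting that favors significant findings.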

The Woodbury identity

As in many parts of life, statistics is full of little bits of knowledge that are useful if you happen to know them, but which hardly anybody ever bothers to mention.
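For reference, a standard statement of the identity the post discusses (the excerpt does not reproduce it): for conformable matrices $A$, $U$, $C$, and $V$ with the relevant inverses existing,

```latex
\left(A + U C V\right)^{-1} = A^{-1} - A^{-1} U \left(C^{-1} + V A^{-1} U\right)^{-1} V A^{-1}
```

The identity is handy in statistics when $A$ is large but cheap to invert (e.g., diagonal) and $U C V$ is low-rank, as with the covariance matrices that arise in mixed-effects models.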

An ANCOVA puzzler

Doing effect size calculations for meta-analysis is a good way to lose your faith in humanity—or at least your faith in researchers’ abilities to do anything like sensible statistical inference.

From Longhorn to Badger

It’s taken me a while to finally get around to updating my website with some personal news. I’ve moved from UT Austin to the UW Madison School of Education, where I am now an associate professor in the Educational Psychology Department’s Quantitative Methods program.

What do meta-analysts mean by 'multivariate' meta-analysis?

If you’ve ever had class with me or attended one of my presentations, you’ve probably heard me grouse about how statisticians are mostly awful about naming things. A lot of the terminology in our field is pretty bad and inelegant.

Working papers

Recent Publications

Meta-analysis with robust variance estimation: Expanding the range of working models

In prevention science and related fields, large meta-analyses are common, and these analyses often involve dependent effect size estimates. Robust variance estimation (RVE) methods provide a way to …

A systematic review and meta-analysis of effects of psychosocial interventions on spiritual well-being in adults with cancer

Objective Spiritual well-being (SpWb) is an important dimension of health-related quality of life for many cancer patients. Accordingly, an increasing number of psychosocial intervention studies have …

Systematic review and meta-analysis of stay-play-talk interventions for improving social behaviors of young children

Stay-play-talk (SPT) is a peer-mediated intervention that involves training peer implementers to stay in proximity to, play with, and talk to a focal child who has disabilities or lower social …

Evaluating meta-analytic methods to detect selective reporting in the presence of dependent effect sizes

Meta-analysis is a set of statistical tools used to synthesize results from multiple studies evaluating a common research question. Two methodological challenges when conducting meta-analysis include …

The impact of response-guided designs on count outcomes in single-case experimental design baselines

In single-case experimental design (SCED) research, researchers often choose when to start treatment based on whether the baseline data collected so far are stable, using what is called a …

Recent Presentations

Synthesis of dependent effect sizes: Versatile models through metafor and clubSandwich

Across scientific fields, large meta-analyses often involve dependent effect size estimates. Robust variance estimation (RVE) methods provide a way to include all dependent effect sizes in a single …

A generalized excess significance test for selective outcome reporting with dependent effect sizes

Log response ratio effect sizes: Rationale and methods for single case designs with behavioral outcomes

Evaluating meta-analytic methods to detect outcome reporting bias in the presence of dependent effect sizes

An examination of measurement procedures and baseline behavioral outcomes in single-case research


Software
Information Matrices for ‘lmeStruct’ and ‘glsStruct’ Objects


Helper package to assist in running simulation studies


Simulate systematic direct observation data


Cluster-robust variance estimation


Between-case SMD for single-case designs


Single-case design effect size calculator


Current Advisees


Man Chen

Graduate student


Megha Joshi

Doctoral candidate


Young Ri Lee

Graduate student



Christopher Runyon

Measurement Scientist



Whatev's Donkey

Lab mascot