James E. Pustejovsky

I am a statistician and associate professor in the School of Education at the University of Wisconsin-Madison, where I teach in the Educational Psychology Department and the graduate program in Quantitative Methods. My research involves developing statistical methods for problems in education, psychology, and other areas of social science research, with a focus on methods related to research synthesis and meta-analysis.

Interests

  • Meta-analysis
  • Causal inference
  • Robust statistical methods
  • Education statistics
  • Single case experimental designs

Education

  • PhD in Statistics, 2013

    Northwestern University

  • BA in Economics, 2003

    Boston College

Recent Posts

Cluster-Bootstrapping a meta-analytic selection model

In this post, we will sketch out what we think is a promising and pragmatic method for examining selective reporting while also accounting for effect size dependency. The method is to use a cluster-level bootstrap, which involves re-sampling clusters of observations to approximate the sampling distribution of an estimator. To illustrate this technique, we will demonstrate how to bootstrap a Vevea-Hedges selection model.
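
As a rough illustration of the re-sampling step, here is a minimal sketch in base R (not the code from the post; the function, its arguments, and the data layout are hypothetical). It assumes dat is a data frame with one row per effect size estimate and a column identifying the study (cluster) each row belongs to, and that estimator is a function that refits the selection model to a data set and returns the estimate of interest.

    # Re-sample whole clusters (studies) with replacement and re-estimate
    cluster_boot <- function(dat, cluster_var, estimator, reps = 1999) {
      ids <- unique(dat[[cluster_var]])
      replicate(reps, {
        sampled <- sample(ids, size = length(ids), replace = TRUE)
        # stack the rows belonging to each sampled cluster
        boot_dat <- do.call(rbind, lapply(sampled, function(id) {
          dat[dat[[cluster_var]] == id, , drop = FALSE]
        }))
        estimator(boot_dat)
      })
    }

    # Example use (assuming fit_selection() fits a Vevea-Hedges-type selection
    # model and returns the overall average effect size estimate):
    # boot_ests <- cluster_boot(dat, "study", fit_selection)
    # quantile(boot_ests, c(.025, .975))   # percentile bootstrap interval

The key feature is that entire studies, rather than individual effect size estimates, are re-sampled, so the dependence among effect sizes from the same study is preserved in every bootstrap replicate.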

Cohen's $d_z$ makes me dizzy when considering measurement error

Meta-analyses in education, psychology, and related fields rely heavily on Cohen’s $d$, or the standardized mean difference effect size, for quantitatively describing the magnitude and direction of intervention effects. In these fields, Cohen’s $d$ is so pervasive that its use is nearly automatic, and analysts rarely question its utility or consider alternatives (response ratios, anyone? POMP?). Despite this state of affairs, working with Cohen’s $d$ is theoretically challenging because the standardized mean difference metric does not have a singular definition. Rather, its definition depends on the choice of the standardizing variance used in the denominator.
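
To make the point concrete, here is a generic form of the effect size (an illustrative expression, not a formula quoted from the post):

\[
d = \frac{\bar{y}_T - \bar{y}_C}{S},
\]

where $\bar{y}_T$ and $\bar{y}_C$ are the treatment and comparison group means and $S$ is a standardizing standard deviation. Different choices of $S$ (the pooled within-group SD, the comparison group SD, the SD of change scores in a within-subjects design, and so on) define different effect size parameters, which is exactly why the metric has no singular definition.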

Corrigendum to Pustejovsky and Tipton (2018), redux

In my 2018 paper with Beth Tipton, published in the Journal of Business and Economic Statistics, we considered how to do cluster-robust variance estimation in fixed effects models estimated by weighted (or unweighted) least squares. We were recently alerted that Theorem 2 in the paper is incorrect as stated: the conditions in the original version of the theorem are too general. A more limited version of the theorem does hold, but only for models estimated by ordinary (unweighted) least squares, under a working model that assumes independent, homoskedastic errors. In this post, I’ll give the revised theorem, following the notation and setup of the previous post (so better read that first, or what follows won’t make much sense!).

Corrigendum to Pustejovsky and Tipton (2018)

In my 2018 paper with Beth Tipton, published in the Journal of Business and Economic Statistics, we considered how to do cluster-robust variance estimation in fixed effects models estimated by weighted (or unweighted) least squares. A careful reader recently alerted us to a problem with Theorem 2 in the paper, which concerns a computational shortcut for a certain cluster-robust variance estimator in models with cluster-specific fixed effects. The theorem is incorrect as stated, and we are currently working on issuing a correction for the published version of the paper. In the interim, this post details the problem with Theorem 2. I’ll first review the CR2 variance estimator, then describe the assertion of the theorem, and then provide a numerical counter-example demonstrating that the assertion does not hold.
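
For readers who want to experiment, the CR2 estimator discussed here is implemented in the clubSandwich R package. Below is a minimal, hypothetical sketch of computing it for a model with cluster-specific fixed effects estimated by ordinary least squares (the data frame dat and its variables are placeholders, and this is not the counter-example from the post):

    # dat is assumed to have an outcome y, a predictor x, and a cluster identifier
    library(clubSandwich)

    fit <- lm(y ~ x + factor(cluster), data = dat)              # cluster-specific fixed effects
    V_CR2 <- vcovCR(fit, cluster = dat$cluster, type = "CR2")   # CR2 cluster-robust variance matrix
    coef_test(fit, vcov = V_CR2)                                # small-sample t-tests based on V_CR2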

Variance component estimates in meta-analysis with mis-specified sampling correlation

In a recent paper with Beth Tipton, we proposed new working models for meta-analyses involving dependent effect sizes. The central idea of our approach is to use a working model that captures the main features of the effect size data, such as by allowing for both between- and within-study heterogeneity in the true effect sizes (rather than only between-study heterogeneity).
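
One way to write down such a working model (a sketch in my own notation, based on the description above rather than quoted from the paper) is, for the $i$-th effect size estimate in study $j$,

\[
T_{ij} = \mu + \eta_j + \nu_{ij} + e_{ij}, \qquad
\text{Var}(\eta_j) = \tau^2, \qquad \text{Var}(\nu_{ij}) = \omega^2,
\]

where $\eta_j$ captures between-study heterogeneity, $\nu_{ij}$ captures within-study heterogeneity, and the sampling errors $e_{ij}$ from the same study are treated as correlated, say $\text{Cov}(e_{hj}, e_{ij}) = \rho \sqrt{V_{hj} V_{ij}}$ for an assumed sampling correlation $\rho$. The question the post takes up is what happens to the variance component estimates when that assumed correlation is mis-specified.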

Working papers

Recent Publications

Comparison of competing approaches to analyzing cross-classified data: Random effects models, ordinary least squares, or fixed effects with cluster robust standard errors

Cross-classified random effects modeling (CCREM) is a common approach for analyzing cross-classified data in education. However, when the focus of a study is on the regression coefficients at level …

Between-case standardized mean differences: Flexible methods for single-case designs

Single-case designs (SCDs) are a class of research methods for evaluating the effects of academic and behavioral interventions in educational and clinical settings. Although visual analysis is …

Power approximations for overall average effects in meta-analysis of dependent effect sizes

Meta-analytic models for dependent effect sizes have grown increasingly sophisticated over the last few decades, which has created challenges for a priori power calculations. We introduce power …

Investigating narrative performance in children with developmental language disorder: A systematic review and meta-analysis

Purpose: Speech-language pathologists (SLPs) typically examine narrative performance when completing a comprehensive language assessment. However, there is significant variability in the methodologies …

Recent Presentations

Determining the Timing of Phase Changes: Some Statistical Perspective

Calculating Effect Sizes for Single-Case Research: An Introduction to the SingleCaseES and scdhlm Web Applications and R Packages

This workshop will provide an introduction to effect size calculations for single-case research designs, focused on two interactive web applications (or “apps”) and accompanying R packages. I will …

Effect size measures for single-case research: Conceptual, practical, and statistical considerations

Quantitative analysis of single-case research and n-of-1 experiments often focuses on calculation of effect size measures, or numerical indices describing the direction and strength of an …

Empirical benchmarks for between-case standardized mean differences from single-case multiple baseline designs examining academic interventions.

The between-case standardized mean difference (BC-SMD) is an effect size measure for single-case designs (SCDs) that puts findings on the same scale as standardized mean differences for …

Discussion of ‘Moving from What Works to What Replicates: Promoting the Systematic Replication of Results.’

Software

POMADE

Power for Meta-Analysis of Dependent Effects

lmeInfo

Information Matrices for ‘lmeStruct’ and ‘glsStruct’ Objects

simhelpers

Helper functions for running simulation studies

ARPobservation

Simulate systematic direct observation data

clubSandwich

Cluster-robust variance estimation

scdhlm

Between-case SMD for single-case designs

SingleCaseES

Single-case design effect size calculator

wildmeta

Cluster-wild bootstrap for meta-regression

Students

Current Advisees

Man Chen

Graduate student

Young Ri Lee

Graduate student

Paulina Grekov

Graduate student

Alumni

Megha Joshi

Quantitative Researcher

Christopher Runyon

Measurement Scientist

Mascot

Whatev's Donkey

Lab mascot

Contact