James E. Pustejovsky

I am a statistician and associate professor in the School of Education at the University of Wisconsin-Madison, where I teach in the Educational Psychology Department and the graduate program in Quantitative Methods. My research involves developing statistical methods for problems in education, psychology, and other areas of social science research, with a focus on methods related to research synthesis and meta-analysis.

Interests

  • Meta-analysis
  • Causal inference
  • Robust statistical methods
  • Education statistics
  • Single case experimental designs

Education

  • PhD in Statistics, 2013

    Northwestern University

  • BA in Economics, 2003

    Boston College

Recent Posts

Implementing Consul's generalized Poisson distribution in Stan

For a project I am working on, we are using Stan to fit generalized random effects location-scale models to a bunch of count data.
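
For readers who want to poke at the distribution outside of Stan, here is a minimal R sketch of Consul's generalized Poisson probability mass function in the usual rate/dispersion parameterization (the function name `dgenpois` and the argument names are illustrative choices, not something from the post):

```r
# Log pmf of Consul's generalized Poisson distribution:
#   Pr(Y = y) = theta * (theta + lambda * y)^(y - 1) * exp(-theta - lambda * y) / y!
# for theta > 0 and 0 <= lambda < 1, with mean theta / (1 - lambda)
# and variance theta / (1 - lambda)^3.
dgenpois <- function(y, theta, lambda, log = FALSE) {
  logf <- log(theta) + (y - 1) * log(theta + lambda * y) -
    (theta + lambda * y) - lgamma(y + 1)
  if (log) logf else exp(logf)
}

# Sanity check: the pmf should sum to (approximately) one
sum(dgenpois(0:200, theta = 4, lambda = 0.3))
```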

Implementing Efron's double Poisson distribution in Stan

For a project I am working on, we are using Stan to fit generalized random effects location-scale models to a bunch of count data.
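
Along the same lines, a hedged R sketch of Efron's (1986) double Poisson density, normalizing numerically over a finite support rather than using Efron's approximation to the normalizing constant (again, the function and argument names are illustrative):

```r
# Unnormalized density of Efron's double Poisson distribution,
# with mean roughly mu and variance roughly mu / theta.
ddoublepois_unnorm <- function(y, mu, theta) {
  y_log_y    <- ifelse(y == 0, 0, y * log(y))
  y_log_mu_y <- ifelse(y == 0, 0, y * log(mu / y))
  exp(0.5 * log(theta) - theta * mu - y + y_log_y - lgamma(y + 1) +
        theta * (y + y_log_mu_y))
}

# Normalize numerically over a wide support, then check the moments
y <- 0:300
f <- ddoublepois_unnorm(y, mu = 6, theta = 0.5)
f <- f / sum(f)
c(mean = sum(y * f), var = sum(y^2 * f) - sum(y * f)^2)  # near 6 and 12
```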

Cluster-Bootstrapping a meta-analytic selection model

In this post, we will sketch out what we think is a promising and pragmatic method for examining selective reporting while also accounting for effect size dependency. The method is to use a cluster-level bootstrap, which involves re-sampling clusters of observations to approximate the sampling distribution of an estimator. To illustrate this technique, we will demonstrate how to bootstrap a Vevea-Hedges selection model.
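
To make the idea concrete, here is a bare-bones sketch of a study-level bootstrap built around metafor's selmodel() (the post develops this more carefully; the data frame dat with columns yi, vi, and study is an assumption for illustration):

```r
library(metafor)

# Cluster (study-level) bootstrap of a Vevea-Hedges step-function
# selection model. Returns bootstrap replicates of the adjusted mean effect.
boot_selmodel <- function(dat, R = 399, steps = 0.025) {
  studies <- unique(dat$study)
  replicate(R, {
    # Re-sample whole studies with replacement
    ids <- sample(studies, size = length(studies), replace = TRUE)
    boot_dat <- do.call(rbind, lapply(seq_along(ids), function(i) {
      d <- dat[dat$study == ids[i], , drop = FALSE]
      d$study <- i  # re-label so repeated draws count as distinct clusters
      d
    }))
    fit <- try(
      selmodel(rma(yi, vi, data = boot_dat), type = "stepfun", steps = steps),
      silent = TRUE
    )
    if (inherits(fit, "try-error")) NA_real_ else as.numeric(fit$beta)
  })
}

# Percentile confidence interval for the selection-adjusted mean:
# quantile(boot_selmodel(dat), c(.025, .975), na.rm = TRUE)
```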

Cohen's $d_z$ makes me dizzy when considering measurement error

Meta-analyses in education, psychology, and related fields rely heavily on Cohen’s $d$, or the standardized mean difference effect size, for quantitatively describing the magnitude and direction of intervention effects. In these fields, Cohen’s $d$ is so pervasive that its use is nearly automatic, and analysts rarely question its utility or consider alternatives (response ratios, anyone? POMP?). Despite this state of affairs, working with Cohen’s $d$ is theoretically challenging because the standardized mean difference metric does not have a singular definition. Rather, its definition depends on the choice of the standardizing variance used in the denominator.
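
A toy example with simulated pre/post scores shows how much the choice of standardizing SD can matter:

```r
# Hypothetical paired (pre/post) data, just to illustrate the denominators
set.seed(20)
pre  <- rnorm(40, mean = 50, sd = 10)
post <- pre + rnorm(40, mean = 5, sd = 6)

d_z   <- mean(post - pre) / sd(post - pre)  # standardize by the SD of change scores
d_raw <- mean(post - pre) / sd(pre)         # standardize by the raw (pretest) SD
c(d_z = d_z, d_raw = d_raw)
# d_z grows as the pre-post correlation grows, because sd(post - pre)
# shrinks, even though the mean change is exactly the same.
```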

Corrigendum to Pustejovsky and Tipton (2018), redux

In my 2018 paper with Beth Tipton, published in the Journal of Business and Economic Statistics, we considered how to do cluster-robust variance estimation in fixed effects models estimated by weighted (or unweighted) least squares. We were recently alerted that Theorem 2 in the paper is incorrect as stated. It turns out that the conditions in the original version of the theorem are too general. A more limited version of the theorem does hold, but only for models estimated using ordinary (unweighted) least squares, under a working model that assumes independent, homoskedastic errors. In this post, I’ll give the revised theorem, following the notation and setup of the previous post (so better read that first, or what follows won’t make much sense!).
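
For context, the setting in question looks like the following in practice, using the CR2 estimator from the clubSandwich package with an OLS fixed effects regression. This is only an illustration of the estimator, not a statement of the revised theorem; the data frame dat and the variables y, x, and id are hypothetical.

```r
library(clubSandwich)

# OLS regression with cluster fixed effects, clustering on the same id
fit <- lm(y ~ x + factor(id), data = dat)

# CR2 cluster-robust standard errors with Satterthwaite degrees of freedom
coef_test(fit, vcov = "CR2", cluster = dat$id, test = "Satterthwaite")
```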

Working papers

Conducting power analysis for meta-analysis of dependent effect sizes: Common guidelines and an introduction to the POMADE R package

Sample size and statistical power are important factors to consider when planning a research synthesis. Power analysis methods have been developed for fixed effect or random effects models, but until …
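
As a point of reference for what such a power analysis computes, here is a rough sketch of the classic approximation for a two-sided test of the average effect in a random effects meta-analysis (in the spirit of Hedges and Pigott, 2001). POMADE implements considerably more refined methods that account for effect size dependency; the function below and its inputs are illustrative assumptions only.

```r
# Approximate power for a two-sided z-test of the average effect size delta,
# given a typical sampling variance v_bar, between-study variance tau_sq,
# and J independent studies.
power_avg_effect <- function(delta, v_bar, tau_sq, J, alpha = 0.05) {
  se <- sqrt((v_bar + tau_sq) / J)
  z_crit <- qnorm(1 - alpha / 2)
  pnorm(delta / se - z_crit) + pnorm(-delta / se - z_crit)
}

power_avg_effect(delta = 0.2, v_bar = 0.04, tau_sq = 0.05, J = 40)
```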

Recent Publications

Equivalences between ad hoc strategies and meta-analytic models for dependent effect sizes

Meta-analyses of educational research findings frequently involve statistically dependent effect size estimates. Meta-analysts have often addressed dependence issues using ad hoc approaches that …

The efficacy of combining cognitive training and non-invasive brain stimulation: A transdiagnostic systematic review and meta-analysis

Over the past decade, an increasing number of studies investigated the innovative approach of supplementing cognitive training (CT) with non-invasive brain stimulation (NIBS) to increase the effects …

Comparison of competing approaches to analyzing cross-classified data: Random effects models, ordinary least squares, or fixed effects with cluster robust standard errors

Cross-classified random effects modeling (CCREM) is a common approach for analyzing cross-classified data in education. However, when the focus of a study is on the regression coefficients at level …

Recent Presentations

Discussion of “Stabilizing measures to reconcile accuracy and equity in performance measurement”

Equity-related moderator analysis in syntheses of dependent effect sizes: Conceptual and statistical considerations

Background/Context: In meta-analyses examining educational interventions, researchers seek to understand the distribution of intervention impacts, in order to draw generalizations about what works, for whom, and under what conditions. One common way to examine equity implications in such reviews is through moderator analysis, which involves modeling how intervention effect sizes vary depending on the characteristics of primary study participants.
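
A minimal sketch of such a moderator analysis with dependent effect sizes, using a correlated-and-hierarchical working model with cluster-robust tests (the data frame dat with columns yi, vi, study, and esid, and the moderator pct_fem, are hypothetical):

```r
library(metafor)
library(clubSandwich)

# Impute a working covariance matrix for effect sizes nested in studies,
# assuming a common within-study correlation of r = 0.6
V <- impute_covariance_matrix(dat$vi, cluster = dat$study, r = 0.6)

# Correlated-and-hierarchical effects model with a participant-level moderator
fit <- rma.mv(yi, V, mods = ~ pct_fem,
              random = ~ 1 | study / esid, data = dat)

# Cluster-robust (CR2) test of the moderator, clustering at the study level
coef_test(fit, vcov = "CR2", cluster = dat$study)
```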

Determining the Timing of Phase Changes: Some Statistical Perspective

Calculating Effect Sizes for Single-Case Research: An Introduction to the SingleCaseES and scdhlm Web Applications and R Packages

This workshop will provide an introduction to effect size calculations for single-case research designs, focused on two interactive web applications (or “apps”) and accompanying R packages. I will …
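
For a flavor of the R packages, here is a tiny SingleCaseES example with made-up observations from one baseline (A) and one treatment (B) phase:

```r
library(SingleCaseES)

A <- c(20, 20, 26, 25, 22, 23)      # baseline phase observations
B <- c(28, 25, 24, 27, 30, 30, 29)  # treatment phase observations

NAP(A_data = A, B_data = B)   # non-overlap of all pairs
LRRi(A_data = A, B_data = B)  # log response ratio (increasing outcome)
```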

Effect size measures for single-case research: Conceptual, practical, and statistical considerations

Quantitative analysis of single-case research and n-of-1 experiments often focuses on calculation of effect size measures, or numerical indices describing the direction and strength of an …

Software

POMADE

Power for Meta-Analysis of Dependent Effects

lmeInfo

Information Matrices for ‘lmeStruct’ and ‘glsStruct’ Objects

simhelpers

Helper package to assist in running simulation studies

ARPobservation

Simulate systematic direct observation data

clubSandwich

Cluster-robust variance estimation

scdhlm

Between-case SMD for single-case designs

SingleCaseES

Single-case design effect size calculator

wildmeta

Cluster-wild bootstrap for meta-regression

Students

Current Advisees

Man Chen

Graduate student

Paulina Grekov

Graduate student

Alumni

Megha Joshi

Quantitative Researcher

Young Ri Lee

Postdoctoral Scholar

Daniel M. Swan

Research Associate

Christopher Runyon

Measurement Scientist

Gleb Furman

Senior Quantitative Research Scientist

Mascot

Whatev's Donkey

Lab mascot

Contact