# James E. Pustejovsky

I am a statistician and associate professor in the School of Education at the University of Wisconsin-Madison, where I teach in the Educational Psychology Department and the graduate program in Quantitative Methods. My research involves developing statistical methods for problems in education, psychology, and other areas of social science research, with a focus on methods related to research synthesis and meta-analysis.

### Interests

• Meta-analysis
• Causal inference
• Robust statistical methods
• Education statistics
• Single case experimental designs

### Education

• PhD in Statistics, 2013, Northwestern University
• BA in Economics, 2003, Boston College

# Recent Posts

### Implementing Efron's double Poisson distribution in Stan

For a project I am working on, we are using Stan to fit generalized random effects location-scale models to a bunch of count data.
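Outside of Stan, the distribution itself is easy to sketch: Efron's double Poisson has a tractable unnormalized density with a dispersion parameter $\theta$ (values below 1 give overdispersion), which can be normalized by brute-force summation. The function names and the truncation point below are my own illustrative choices, not from the post:

```python
import math

def double_poisson_logpmf(y, mu, theta):
    """Unnormalized log-density of Efron's (1986) double Poisson.

    mu is the approximate mean, theta the dispersion
    (theta < 1 implies overdispersion; theta = 1 recovers Poisson(mu)).
    """
    if y == 0:
        # the y*log(y) and log(y) terms vanish in the limit y -> 0
        return 0.5 * math.log(theta) - theta * mu
    return (
        0.5 * math.log(theta)
        - theta * mu
        - y + y * math.log(y) - math.lgamma(y + 1)
        + theta * y * (1 + math.log(mu) - math.log(y))
    )

def double_poisson_pmf(y, mu, theta, y_max=1000):
    """Normalize by summing the unnormalized mass over 0..y_max."""
    log_terms = [double_poisson_logpmf(k, mu, theta) for k in range(y_max + 1)]
    m = max(log_terms)  # subtract the max for numerical stability
    total = sum(math.exp(t - m) for t in log_terms)
    return math.exp(double_poisson_logpmf(y, mu, theta) - m) / total
```

A quick sanity check on the parameterization: at `theta = 1` the log-density reduces algebraically to the ordinary Poisson log-pmf, and for `theta < 1` the numerically computed variance exceeds the mean.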

### Cluster-Bootstrapping a meta-analytic selection model

In this post, we will sketch out what we think is a promising and pragmatic method for examining selective reporting while also accounting for effect size dependency. The method is to use a cluster-level bootstrap, which involves re-sampling clusters of observations to approximate the sampling distribution of an estimator. To illustrate this technique, we will demonstrate how to bootstrap a Vevea-Hedges selection model.
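The cluster-level resampling step described above can be sketched generically: draw clusters with replacement, pool their observations, and re-compute the estimator on each resample. The sketch below bootstraps a simple mean rather than a Vevea-Hedges selection model; the function name and toy data are hypothetical.

```python
import random
import statistics

def cluster_bootstrap_se(data, estimator, n_boot=999, seed=1):
    """Approximate the SE of `estimator` by resampling whole clusters.

    `data` maps cluster id -> list of observations. Resampling clusters
    (rather than individual observations) preserves the within-cluster
    dependence structure in each bootstrap replicate.
    """
    rng = random.Random(seed)
    clusters = list(data.values())
    boot_stats = []
    for _ in range(n_boot):
        sample = [rng.choice(clusters) for _ in clusters]
        obs = [y for cl in sample for y in cl]
        boot_stats.append(estimator(obs))
    return statistics.stdev(boot_stats)

# Usage with hypothetical effect size estimates grouped by study:
data = {1: [0.2, 0.3], 2: [0.5], 3: [0.1, 0.4, 0.6]}
se = cluster_bootstrap_se(data, statistics.mean)
```

Swapping `statistics.mean` for a function that re-fits a selection model to the resampled clusters gives the method described in the post.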

### Cohen's $d_z$ makes me dizzy when considering measurement error

Meta-analyses in education, psychology, and related fields rely heavily on Cohen’s $d$, or the standardized mean difference effect size, for quantitatively describing the magnitude and direction of intervention effects. In these fields, Cohen’s $d$ is so pervasive that its use is nearly automatic, and analysts rarely question its utility or consider alternatives (response ratios, anyone? POMP?). Despite this state of affairs, working with Cohen’s $d$ is theoretically challenging because the standardized mean difference metric does not have a singular definition. Rather, its definition depends on the choice of the standardizing variance used in the denominator.

### Corrigendum to Pustejovsky and Tipton (2018), redux

In my 2018 paper with Beth Tipton, published in the Journal of Business and Economic Statistics, we considered how to do cluster-robust variance estimation in fixed effects models estimated by weighted (or unweighted) least squares. We were recently alerted that Theorem 2 in the paper is incorrect as stated. It turns out that the conditions in the original version of the theorem are too general. A more limited version of the theorem does hold, but only for models estimated using ordinary (unweighted) least squares, under a working model that assumes independent, homoskedastic errors. In this post, I’ll give the revised theorem, following the notation and setup of the previous post (so better read that first, or what follows won’t make much sense!).

### Corrigendum to Pustejovsky and Tipton (2018)

In my 2018 paper with Beth Tipton, published in the Journal of Business and Economic Statistics, we considered how to do cluster-robust variance estimation in fixed effects models estimated by weighted (or unweighted) least squares. A careful reader recently alerted us to a problem with Theorem 2 in the paper, which concerns a computational short cut for a certain cluster-robust variance estimator in models with cluster-specific fixed effects. The theorem is incorrect as stated, and we are currently working on issuing a correction for the published version of the paper. In the interim, this post details the problem with Theorem 2. I’ll first review the CR2 variance estimator, then describe the assertion of the theorem, and then provide a numerical counter-example demonstrating that the assertion is not correct as stated.
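As background for readers unfamiliar with the setup: the CR2 estimator discussed in the post is a small-sample refinement of the basic cluster-robust "sandwich" form, which itself takes only a few lines. The sketch below is the uncorrected CR0 estimator, not the CR2 adjustment from the paper; the function and simulated data are illustrative.

```python
import numpy as np

def cr0_vcov(X, resid, cluster):
    """Basic (CR0) cluster-robust sandwich variance estimator for OLS.

    X: (n, p) design matrix; resid: (n,) OLS residuals; cluster: (n,) labels.
    Computes (X'X)^{-1} [ sum_j X_j' e_j e_j' X_j ] (X'X)^{-1}.
    """
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(cluster):
        idx = cluster == g
        score = X[idx].T @ resid[idx]   # p-vector score for cluster g
        meat += np.outer(score, score)
    return bread @ meat @ bread
```

The CR2 estimator replaces each cluster's residuals with an adjusted version chosen so the estimator is exactly unbiased under a working model; the computational shortcut at issue in Theorem 2 concerns how that adjustment interacts with cluster-specific fixed effects.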

# Working papers

### Equivalences between ad hoc strategies and meta-analytic models for dependent effect sizes

Meta-analyses of educational research findings frequently involve statistically dependent effect size estimates. Meta-analysts have often addressed dependence issues using ad hoc approaches that …

### High replicability of newly-discovered social-behavioral findings is achievable

Failures to replicate evidence of new discoveries have forced scientists to ask whether this unreliability is due to suboptimal implementation of optimal methods or whether presumptively optimal …

# Recent Publications

### The efficacy of combining cognitive training and non-invasive brain stimulation: A transdiagnostic systematic review and meta-analysis

Over the past decade, an increasing number of studies investigated the innovative approach of supplementing cognitive training (CT) with non-invasive brain stimulation (NIBS) to increase the effects …

### Systematic review of variables related to instruction in augmentative and alternative communication implementation: Group and single-case design

Purpose: This article provides a systematic review and analysis of group and single-case studies addressing augmentative and alternative communication (AAC) intervention with school-aged persons …

### Comparison of competing approaches to analyzing cross-classified data: Random effects models, ordinary least squares, or fixed effects with cluster robust standard errors

Cross-classified random effects modeling (CCREM) is a common approach for analyzing cross-classified data in education. However, when the focus of a study is on the regression coefficients at level …

### Between-case standardized mean differences: Flexible methods for single-case designs

Single-case designs (SCDs) are a class of research methods for evaluating the effects of academic and behavioral interventions in educational and clinical settings. Although visual analysis is …

### Power approximations for overall average effects in meta-analysis of dependent effect sizes

Meta-analytic models for dependent effect sizes have grown increasingly sophisticated over the last few decades, which has created challenges for a priori power calculations. We introduce power …

# Recent Presentations

### Equity-related moderator analysis in syntheses of dependent effect sizes: Conceptual and statistical considerations

Background/Context: In meta-analyses examining educational interventions, researchers seek to understand the distribution of intervention impacts, in order to draw generalizations about what works, for whom, and under what conditions. One common way to examine equity implications in such reviews is through moderator analysis, which involves modeling how intervention effect sizes vary depending on the characteristics of primary study participants.

### Calculating Effect Sizes for Single-Case Research: An Introduction to the SingleCaseES and scdhlm Web Applications and R Packages

This workshop will provide an introduction to effect size calculations for single-case research designs, focused on two interactive web applications (or “apps”) and accompanying R packages. I will …

### Effect size measures for single-case research: Conceptual, practical, and statistical considerations

Quantitative analysis of single-case research and n-of-1 experiments often focuses on calculation of effect size measures, or numerical indices describing the direction and strength of an …

# Software

#### POMADE

Power for Meta-Analysis of Dependent Effects

#### lmeInfo

Information Matrices for ‘lmeStruct’ and ‘glsStruct’ Objects

#### simhelpers

Helper package to assist in running simulation studies

#### ARPobservation

Simulate systematic direct observation data

#### clubSandwich

Cluster-robust variance estimation

#### scdhlm

Between-case SMD for single-case designs

#### SingleCaseES

Single-case design effect size calculator

#### wildmeta

Cluster-wild bootstrap for meta-regression