James E. Pustejovsky

I am a statistician and assistant professor in the College of Education at the University of Texas at Austin, where I teach in the Educational Psychology Department and the graduate program in Quantitative Methods. My research involves developing statistical methods for problems in education, psychology, and other areas of social science research, with a focus on methods related to research synthesis and meta-analysis.

Interests

  • Meta-analysis
  • Causal inference
  • Robust statistical methods
  • Education statistics
  • Single case experimental designs

Education

  • PhD in Statistics, 2013

    Northwestern University

  • BA in Economics, 2003

    Boston College

Recent Posts

An update on code folding with blogdown + Academic theme

About a year ago I added a code-folding feature to my site, following an approach developed by Sébastien Rochette. I recently updated my site to work with the latest version of the Academic theme for Hugo, and it turns out that this broke my code-folding implementation.

Simulating correlated standardized mean differences for meta-analysis

As I’ve discussed in previous posts, meta-analyses in psychology, education, and other areas often include studies that contribute multiple, statistically dependent effect size estimates. I’m interested in methods for meta-analyzing and meta-regressing effect sizes from data structures like this, and studying this sort of thing often entails conducting Monte Carlo simulations.
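As a rough illustration of the kind of simulation described above, here is a minimal sketch (not the post's actual code) of generating one study that contributes several correlated standardized mean differences. All names and parameter choices here are illustrative assumptions: J outcomes with an equicorrelation structure, a common true effect delta, and equal group sizes.

```python
# Hypothetical sketch: simulate one study contributing J correlated
# standardized mean differences (SMDs). Assumptions: outcomes are
# multivariate normal with equicorrelation rho, every outcome has true
# SMD delta, and both groups have n participants.
import numpy as np

def simulate_smds(n=50, J=3, delta=0.4, rho=0.6, rng=None):
    rng = np.random.default_rng(rng)
    # Equicorrelation matrix among the J outcomes
    Sigma = rho * np.ones((J, J)) + (1 - rho) * np.eye(J)
    # Control outcomes ~ N(0, Sigma); treatment outcomes shifted by delta
    y_c = rng.multivariate_normal(np.zeros(J), Sigma, size=n)
    y_t = rng.multivariate_normal(np.full(J, delta), Sigma, size=n)
    # Pooled standard deviation for each outcome
    sd_pool = np.sqrt((y_t.var(axis=0, ddof=1) + y_c.var(axis=0, ddof=1)) / 2)
    # One SMD estimate per outcome; the J estimates are statistically
    # dependent because the outcomes are correlated within participants
    return (y_t.mean(axis=0) - y_c.mean(axis=0)) / sd_pool

d = simulate_smds(rng=1)  # array of 3 dependent SMD estimates
```

Repeating this across many simulated studies (with varying n, J, and rho) yields the kind of dependent-effect-size data structure that the meta-analytic methods discussed in the post are designed to handle.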

Sometimes, aggregating effect sizes is fine

In meta-analyses of psychology, education, and other social science research, it is very common that some of the included studies report more than one relevant effect size. For example, in a meta-analysis of intervention effects on reading outcomes, some studies may have used multiple measures of reading outcomes (each of which meets inclusion criteria), or may have measured outcomes at multiple follow-up times; some studies might have also investigated more than one version of an intervention, and it might be of interest to include effect sizes comparing each version to the no-intervention control condition; and it’s even possible that some studies may have all of these features, potentially contributing lots of effect size estimates.

Code folding with blogdown + Academic theme

2020-05-03. This post describes an implementation of code folding for an older version of the Academic theme. It does not work with Academic 4.+. See my updated instructions to get it working with newer versions of Academic.

CRAN downloads of my packages

At AERA this past weekend, one of the recurring themes was how software availability (and its usability and default features) influences how people conduct meta-analyses. That got me thinking about the R packages that I’ve developed, how to understand the extent to which people are using them, how they’re being used, and so on.

Working papers

Evaluating meta-analytic methods to detect selective reporting in the presence of dependent effect sizes

Meta-analysis is a set of statistical tools used to synthesize results from multiple studies evaluating a common research question. Two methodological challenges when conducting meta-analysis include …

Systematic review and meta-analysis of stay-play-talk interventions for improving social behaviors of young children

Stay-play-talk (SPT) is a peer-mediated intervention that involves training peer implementers to stay in proximity to, play with, and talk to a focal child who has disabilities or lower social …

Recent Publications

The impact of response-guided designs on count outcomes in single-case experimental design baselines

In single-case experimental design (SCED) research, researchers often choose when to start treatment based on whether the baseline data collected so far are stable, using what is called a …

Psychosocial interventions for cancer survivors: A meta-analysis of effects on positive affect

Purpose: Positive affect has demonstrated unique benefits in the context of health-related stress and is emerging as an important target for psychosocial interventions. The primary objective of this …

An examination of measurement procedures and characteristics of baseline outcome data in single-case research

There has been growing interest in using statistical methods to analyze data and estimate effect size indices from studies that use single-case designs (SCDs), as a complement to traditional visual …

Interventions to enhance self-efficacy in cancer patients and survivors: A meta-analysis of randomized controlled trials

Objective: Self-efficacy expectations are associated with improvements in problematic outcomes widely considered clinically significant (i.e., emotional distress, fatigue, pain), related to positive …

Examining the effects of social stories on challenging behavior and prosocial skills in young children: A systematic review and meta-analysis

Social stories are a commonly used intervention practice in early childhood special education. Recent systematic reviews have documented the evidence-base for social stories, but findings are mixed. …

Recent Presentations

A generalized excess significance test for selective outcome reporting with dependent effect sizes

Log response ratio effect sizes: Rationale and methods for single case designs with behavioral outcomes

Evaluating meta-analytic methods to detect outcome reporting bias in the presence of dependent effect sizes

An examination of measurement procedures and baseline behavioral outcomes in single-case research

The impact of response-guided designs on count outcomes in single-case design baselines

Software

lmeInfo

Information Matrices for ‘lmeStruct’ and ‘glsStruct’ Objects

simhelpers

Helper functions for running simulation studies

ARPobservation

Simulate systematic direct observation data

clubSandwich

Cluster-robust variance estimation

scdhlm

Between-case SMD for single-case designs

SingleCaseES

Single-case design effect size calculator

Contact