Recent Posts


As I’ve discussed in previous posts, meta-analyses in psychology, education, and other areas often include studies that contribute multiple, statistically dependent effect size estimates. I’m interested in methods for meta-analyzing and meta-regressing effect sizes from data structures like this, and studying this sort of thing often entails conducting Monte Carlo simulations. Monte Carlo simulations involve generating artificial data—in this case, a set of studies, each of which has one or more dependent effect size estimates—that follows a certain distributional model, applying different analytic methods to the artificial data, and then repeating the process a bunch of times.
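The simulation workflow described above can be sketched in a few lines of R. This is a minimal illustration under assumed parameter values, not the data-generating model from any of my papers: each study gets a true average effect, contributes one or more effect size estimates with compound-symmetric correlated sampling errors, and a (stand-in) estimator is applied across many replications.

```r
# Illustrative sketch of the Monte Carlo loop: generate studies with multiple
# dependent effect sizes, apply an analytic method, repeat many times.
# All parameter values and function names here are hypothetical.

simulate_meta <- function(n_studies = 20, max_ES = 3, rho = 0.6, tau_sq = 0.05) {
  do.call(rbind, lapply(seq_len(n_studies), function(i) {
    k <- sample(max_ES, 1)                           # effect sizes in study i
    delta <- rnorm(1, mean = 0.2, sd = sqrt(tau_sq)) # study-level true effect
    V <- 0.04 * (rho + diag(1 - rho, k))             # compound-symmetric sampling covariance
    est <- delta + as.vector(rnorm(k) %*% chol(V))   # correlated sampling errors
    data.frame(study = i, est = est, var = diag(V))
  }))
}

one_rep <- function() {
  dat <- simulate_meta()
  mean(dat$est)  # stand-in for a real meta-analytic estimator
}

results <- replicate(500, one_rep())  # repeat the process a bunch of times
summary(results)
```

In a real study, `one_rep()` would return estimates from several competing methods so their bias, variance, and coverage can be compared on identical artificial datasets.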


In meta-analyses of psychology, education, and other social science research, it is very common that some of the included studies report more than one relevant effect size. For example, in a meta-analysis of intervention effects on reading outcomes, some studies may have used multiple measures of reading outcomes (each of which meets inclusion criteria), or may have measured outcomes at multiple follow-up times; some studies might have also investigated more than one version of an intervention, and it might be of interest to include effect sizes comparing each version to the no-intervention control condition; and it’s even possible that some studies may have all of these features, potentially contributing lots of effect size estimates.


R Markdown documents now have a very nifty code folding option, which allows the reader of a compiled html document to toggle whether to view or hide code chunks. However, the feature is not supported in blogdown, the popular R Markdown-based website/blog creation package. I recently ran across an implementation of code folding for blogdown, developed by Sébastien Rochette. I have been putzing around, trying to get it to work with my blog, which uses the Hugo Academic theme—alas, to no avail.


At AERA this past weekend, one of the recurring themes was how software availability (and its usability and default features) influences how people conduct meta-analyses. That got me thinking about the R packages that I’ve developed, how to understand the extent to which people are using them, how they’re being used, and so on. I’ve had download-count badges on my GitHub repos for a while now, for clubSandwich, ARPobservation, scdhlm, and SingleCaseES. These statistics come from the METACRAN site, which makes available data on daily downloads of all packages on CRAN (one of the main repositories for sharing R packages).
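Daily download counts like the ones behind those badges can be pulled directly in R. A minimal sketch, assuming the cranlogs package (which queries the same CRAN mirror download logs that METACRAN draws on); the date range is arbitrary:

```r
# Retrieve daily download counts for the packages mentioned above,
# then total them by package. Requires the cranlogs package and a
# network connection; the date range here is just an example.
library(cranlogs)

downloads <- cran_downloads(
  packages = c("clubSandwich", "ARPobservation", "scdhlm", "SingleCaseES"),
  from = "2018-01-01",
  to   = "2018-12-31"
)

aggregate(count ~ package, data = downloads, FUN = sum)
```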


This year, Dr. Laura Dunne and I are serving as program co-chairs for the AERA special interest group on Systematic Reviews and Meta-Analysis, which is a great group of scholars interested in the methodology and application of research synthesis to questions in education and the broader social sciences. We had a strong batch of submissions to the SIG and (since we’re new and still a fairly small group) only a few sessions to fill with them.


Recent Publications


(2019). Interventions to enhance self-efficacy in cancer patients and survivors: A meta-analysis of randomized controlled trials. Psycho-Oncology, forthcoming.


(2019). Examining the effects of social stories on challenging behavior and prosocial skills in young children: A systematic review and meta-analysis. Topics in Early Childhood Special Education, forthcoming.


(2019). Effects of psychosocial interventions on meaning and purpose in adults with cancer: A systematic review and meta-analysis. Cancer, forthcoming.


(2019). An examination of measurement procedures and characteristics of baseline outcome data in single-case research. Behavior Modification, forthcoming.


Recent Presentations


A generalized excess significance test for selective outcome reporting with dependent effect sizes
Log response ratio effect sizes: Rationale and methods for single case designs with behavioral outcomes
Evaluating meta-analytic methods to detect outcome reporting bias in the presence of dependent effect sizes
The impact of response-guided designs on count outcomes in single-case design baselines
An examination of measurement procedures and baseline behavioral outcomes in single-case research



ARPobservation: Simulate systematic direct observation data.


SingleCaseES: Single-case design effect size calculator.


clubSandwich: Cluster-robust variance estimation.


scdhlm: Between-case SMD for single-case designs.