simulation

Comparison of competing approaches to analyzing cross-classified data: Random effects models, ordinary least squares, or fixed effects with cluster-robust standard errors

Cross-classified random effects modeling (CCREM) is a common approach for analyzing cross-classified data in education. However, when the focus of a study is on the regression coefficients at level one rather than on the random effects, ordinary …

simhelpers

Helper package for running simulation studies

Simulating correlated standardized mean differences for meta-analysis

As I’ve discussed in previous posts, meta-analyses in psychology, education, and other areas often include studies that contribute multiple, statistically dependent effect size estimates. I’m interested in methods for meta-analyzing and meta-regressing effect sizes from data structures like this, and studying this sort of thing often entails conducting Monte Carlo simulations.
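As a rough illustration of what those simulations involve, here is a minimal sketch in R (not the exact generating model from any particular paper of mine) that draws correlated standardized mean difference estimates for a set of studies, assuming a common within-study correlation; the names k, J, delta, rho, and n are purely illustrative.

```r
# Minimal sketch: correlated SMD estimates for k studies with J outcomes each.
# All parameter names and values here are illustrative assumptions.
library(MASS)  # for mvrnorm()

simulate_smds <- function(k = 20, J = 3, delta = 0.3, rho = 0.6, n = 40) {
  # Sampling variance of a standardized mean difference with n per group
  V_d <- 2 / n + delta^2 / (4 * n)
  # Compound-symmetric covariance among the J correlated estimates in a study
  Sigma <- V_d * (rho + diag(1 - rho, J))
  # Draw J correlated estimates per study, centered on the true effect
  d_mat <- mvrnorm(k, mu = rep(delta, J), Sigma = Sigma)
  data.frame(
    study = rep(seq_len(k), each = J),
    es    = rep(seq_len(J), times = k),
    d     = as.vector(t(d_mat)),
    V     = V_d
  )
}

head(simulate_smds())
```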

Easily simulate thousands of single-case designs

Earlier this month, I taught at the Summer Research Training Institute on Single-Case Intervention Design and Analysis workshop, sponsored by the Institute of Education Sciences’ National Center for Special Education Research.

New paper: procedural sensitivities of effect size measures for SCDs

I’m very happy to share that my article “Procedural sensitivities of effect sizes for single-case designs with directly observed behavioral outcome measures” has been accepted at Psychological Methods. There’s no need to delay in reading it, since you can check out the pre-print and supporting materials.

You wanna PEESE of d's?

Publication bias—or more generally, outcome reporting bias or dissemination bias—is recognized as a critical threat to the validity of findings from research syntheses. In the areas with which I am most familiar (education and psychology), it has become more or less a requirement for research synthesis projects to conduct analyses to detect the presence of systematic outcome reporting biases.
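For readers unfamiliar with the method alluded to in the title, a PEESE-type adjustment amounts to a meta-regression of effect size estimates on their sampling variances, with the intercept interpreted as a bias-adjusted average effect. Here is a minimal sketch using the metafor package, with made-up toy data; this is one common way to fit such a model, not necessarily the exact specification discussed in the post.

```r
# Minimal PEESE-type sketch with metafor (illustrative toy data only).
library(metafor)

dat <- data.frame(
  yi = c(0.42, 0.31, 0.55, 0.18, 0.27, 0.09),  # toy effect size estimates
  vi = c(0.08, 0.05, 0.10, 0.02, 0.03, 0.01)   # toy sampling variances
)

# PEESE: regress estimates on their sampling variances; the intercept is
# taken as the bias-adjusted average effect size.
peese_fit <- rma(yi = yi, vi = vi, mods = ~ vi, data = dat, method = "FE")
summary(peese_fit)
```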

New working paper: Procedural sensitivities of SCD effect sizes

I’ve just posted a new version of my working paper, Procedural sensitivities of effect sizes for single-case designs with behavioral outcome measures. The abstract is below. This version is a major update of an earlier paper that focused only on the non-overlap measures.

Simulation studies in R (Fall, 2016 version)

In today’s Quant Methods colloquium, I gave an introduction to the logic and purposes of Monte Carlo simulation studies, with examples written in R. Here are the slides from my presentation.
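For anyone who missed the talk, the basic logic boils down to a few lines of R (a minimal sketch, not taken from the slides themselves): generate data from a known model, apply an estimator, repeat many times, and summarize the estimator's performance against the truth.

```r
# Minimal sketch of a Monte Carlo simulation study:
# evaluate the bias and RMSE of the sample mean under a known data model.
set.seed(20161028)  # arbitrary seed, for reproducibility

simulate_once <- function(n = 25, mu = 0.5, sigma = 1) {
  x <- rnorm(n, mean = mu, sd = sigma)  # generate data from a known model
  mean(x)                               # apply the estimator of interest
}

reps <- 5000
estimates <- replicate(reps, simulate_once())

# Summarize performance relative to the known true parameter value
c(bias = mean(estimates) - 0.5,
  RMSE = sqrt(mean((estimates - 0.5)^2)))
```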

ARPobservation

Simulate systematic direct observation data

Update: parallel R on the TACC

I have learned from Mr. Yaakoub El Khamra that he and the good folks at TACC have made some modifications to TACC’s custom MPI implementation and R build in order to correct bugs in Rmpi and snow that were causing crashes.