Meta-analysis

Simulating correlated standardized mean differences for meta-analysis

As I’ve discussed in previous posts, meta-analyses in psychology, education, and other areas often include studies that contribute multiple, statistically dependent effect size estimates. I’m interested in methods for meta-analyzing and meta-regressing effect sizes from data structures like this, and studying this sort of thing often entails conducting Monte Carlo simulations. Monte Carlo simulations involve generating artificial data—in this case, a set of studies, each of which has one or more dependent effect size estimates—that follows a certain distributional model, applying different analytic methods to the artificial data, and then repeating the process a bunch of times.
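
To make that concrete, here is a bare-bones sketch of generating a single artificial dataset of this kind: studies with correlated SMD estimates nested within them. All the parameter values, the equicorrelation structure, and the `simulate_smds` name are assumptions for illustration, not anything canonical.

```r
# A minimal sketch of one simulation replicate: K studies, each contributing
# J correlated standardized mean difference estimates.
# (mu, tau, rho, and n are arbitrary choices for illustration.)
simulate_smds <- function(K = 20, J = 3, mu = 0.3, tau = 0.2, rho = 0.6, n = 50) {
  do.call(rbind, lapply(seq_len(K), function(k) {
    delta_k <- rnorm(1, mean = mu, sd = tau)     # study-level true effect
    Vd <- 2 / n + delta_k^2 / (4 * n)            # approximate sampling variance of d
    Sigma <- Vd * (rho + diag(1 - rho, J))       # equicorrelated sampling errors
    d_kj <- MASS::mvrnorm(1, mu = rep(delta_k, J), Sigma = Sigma)
    data.frame(study = k, es = seq_len(J), d = d_kj, Vd = Vd)
  }))
}

dat <- simulate_smds()
head(dat)
```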

Sometimes, aggregating effect sizes is fine

In meta-analyses of psychology, education, and other social science research, it is very common that some of the included studies report more than one relevant effect size. For example, in a meta-analysis of intervention effects on reading outcomes, some studies may have used multiple measures of reading outcomes (each of which meets inclusion criteria), or may have measured outcomes at multiple follow-up times; some studies might have also investigated more than one version of an intervention, and it might be of interest to include effect sizes comparing each version to the no-intervention control condition; and it’s even possible that some studies may have all of these features, potentially contributing lots of effect size estimates.
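
As a rough illustration of what simple aggregation looks like in this situation, here is a small R helper that averages a study's effect size estimates and computes the variance of that average under an assumed common correlation among them; the value r = 0.6 and the numbers in the example call are made up.

```r
# A sketch of simple within-study aggregation: average the study's effect size
# estimates and compute the variance of the average, assuming a common
# correlation r among the estimates (r = 0.6 is an assumed value).
aggregate_es <- function(d, Vd, r = 0.6) {
  J <- length(d)
  d_bar <- mean(d)
  # Variance of a mean of correlated estimates:
  # (sum of variances + r * sum of cross-products of SEs) / J^2
  V_bar <- (sum(Vd) + r * (sum(sqrt(Vd))^2 - sum(Vd))) / J^2
  c(d_bar = d_bar, V_bar = V_bar)
}

aggregate_es(d = c(0.25, 0.40, 0.32), Vd = c(0.040, 0.050, 0.045))
```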

CRAN downloads of my packages

At AERA this past weekend, one of the recurring themes was how software availability (and its usability and default features) influences how people conduct meta-analyses. That got me thinking about the R packages that I’ve developed, how to understand the extent to which people are using them, how they’re being used, and so on. I’ve had download-count badges on my GitHub repos for a while now, for clubSandwich, ARPobservation, scdhlm, and SingleCaseES. These statistics come from the METACRAN site, which makes available data on daily downloads of all packages on CRAN (one of the main repositories for sharing R packages).
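
For anyone who would rather pull the same numbers from R than from the site, the cranlogs package provides an interface to the METACRAN download logs. A quick sketch, with an arbitrary date range:

```r
# Daily download counts for my packages, queried from the METACRAN download
# database via cranlogs (the date range is arbitrary, just for illustration).
library(cranlogs)

downloads <- cran_downloads(
  packages = c("clubSandwich", "ARPobservation", "scdhlm", "SingleCaseES"),
  from = "2018-01-01", to = "2019-04-01"
)

# total downloads per package over the window
aggregate(count ~ package, data = downloads, FUN = sum)
```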

Systematic Reviews and Meta-analysis SIG at AERA 2019

This year, Dr. Laura Dunne and I are serving as program co-chairs for the AERA special interest group on Systematic Reviews and Meta-Analysis, which is a great group of scholars interested in the methodology and application of research synthesis to questions in education and the broader social sciences. We had a strong batch of submissions to the SIG and (since we’re new and still a fairly small group) only a few sessions to fill with them.

Sampling variance of Pearson r in a two-level design

Consider Pearson’s correlation coefficient, $r$, calculated from two variables $X$ and $Y$ with population correlation $\rho$. If one calculates $r$ from a simple random sample of $N$ observations, then its sampling variance will be approximately
$$
\text{Var}(r) \approx \frac{1}{N}\left(1 - \rho^2\right)^2.
$$
But what if the observations are drawn from a multi-stage sample? If one uses the raw correlation between the observations (ignoring the multi-level structure), then $r$ will actually be a weighted average of within-cluster and between-cluster correlations (see Snijders & Bosker, 2012).
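
A quick simulation check of that approximation, with arbitrary choices of $\rho$ and $N$:

```r
# Compare the empirical sampling variance of r from a simple random sample
# to the approximation (1 - rho^2)^2 / N (rho and N are arbitrary choices).
set.seed(20190409)
rho <- 0.5
N <- 100

r_reps <- replicate(10000, {
  X <- rnorm(N)
  Y <- rho * X + sqrt(1 - rho^2) * rnorm(N)
  cor(X, Y)
})

var(r_reps)            # empirical sampling variance
(1 - rho^2)^2 / N      # approximation
```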

Imputing covariance matrices for meta-analysis of correlated effects

In many systematic reviews, it is common for eligible studies to contribute effect size estimates from not just one, but multiple relevant outcome measures, for a common sample of participants. If those outcomes are correlated, then so too will be the effect size estimates. To estimate the degree of correlation, you would need the sample correlation among the outcomes—information that is woefully uncommon for primary studies to report (and best of luck to you if you try to follow up with author queries).
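
The usual workaround is to assume a plausible common correlation and impute the covariance structure from it. Here is a rough sketch of that workflow using clubSandwich and metafor, applied to a hypothetical data frame `dat` with columns `d`, `Vd`, and `study`; the value r = 0.7 is an assumption for illustration, not a recommendation.

```r
# Impute a block-diagonal covariance matrix under an assumed correlation r,
# then fit a multivariate working model (dat is a hypothetical data frame
# with effect sizes d, sampling variances Vd, and a study identifier).
library(clubSandwich)
library(metafor)

V_list <- impute_covariance_matrix(vi = dat$Vd, cluster = dat$study, r = 0.7)

res <- rma.mv(yi = d, V = V_list, random = ~ 1 | study, data = dat)
res
```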

You wanna PEESE of d's?

Publication bias—or more generally, outcome reporting bias or dissemination bias—is recognized as a critical threat to the validity of findings from research syntheses. In the areas with which I am most familiar (education and psychology), it has become more or less a requirement for research synthesis projects to conduct analyses to detect the presence of systematic outcome reporting biases. Some analyses go further by trying to correct for the distorting effects of such biases on average effect size estimates.
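
For readers who have not seen it, here is a hedged sketch of what a PET/PEESE-style adjustment looks like in metafor, applied to a hypothetical data frame `dat` with effect sizes `d` and sampling variances `Vd`; the intercept from each meta-regression serves as the bias-adjusted average effect.

```r
# PET: meta-regress effect sizes on their standard errors.
# PEESE: meta-regress effect sizes on their sampling variances.
# (dat is a hypothetical data frame with columns d and Vd.)
library(metafor)

PET   <- rma(yi = d, vi = Vd, mods = ~ sqrt(Vd), data = dat)
PEESE <- rma(yi = d, vi = Vd, mods = ~ Vd,       data = dat)

coef(PET)["intrcpt"]     # PET-adjusted average effect
coef(PEESE)["intrcpt"]   # PEESE-adjusted average effect
```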

Alternative formulas for the standardized mean difference

The standardized mean difference (SMD) is surely one of the best known and most widely used effect size metrics in meta-analysis. In generic terms, the SMD parameter is defined as the difference in population means between two groups (often this difference represents the effect of some intervention), scaled by the population standard deviation of the outcome metric. Estimates of the SMD can be obtained from a wide variety of experimental designs, ranging from simple, completely randomized designs, to repeated measures designs, to cluster-randomized trials.
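
As a point of reference for the simplest case (a two-group design, standardizing by the pooled standard deviation), here is a small R helper with the usual large-sample variance approximation; the summary statistics in the example call are made up.

```r
# Standardized mean difference from two-group summary statistics, with the
# standard large-sample approximation to its sampling variance.
smd <- function(m_T, m_C, sd_T, sd_C, n_T, n_C) {
  sd_pool <- sqrt(((n_T - 1) * sd_T^2 + (n_C - 1) * sd_C^2) / (n_T + n_C - 2))
  d <- (m_T - m_C) / sd_pool
  V_d <- 1 / n_T + 1 / n_C + d^2 / (2 * (n_T + n_C))
  c(d = d, V_d = V_d)
}

smd(m_T = 105, m_C = 100, sd_T = 15, sd_C = 14, n_T = 40, n_C = 42)
```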

Special Education Pro-Sem

Correlations between standardized mean differences

Several students and colleagues have asked me recently about an issue that comes up in multivariate meta-analysis when some of the studies include multiple treatment groups and multiple outcome measures. In this situation, one might want to include effect size estimates for each treatment group and each outcome measure. In order to do so in a fully multivariate meta-analysis, estimates of the covariances among all of these effect sizes are needed. The covariance among effect sizes arises for several reasons: