Meta-analysis

Sampling variance of Pearson r in a two-level design

Consider Pearson’s correlation coefficient, \(r\), calculated from two variables \(X\) and \(Y\) with population correlation \(\rho\). If one calculates \(r\) from a simple random sample of \(N\) observations, then its sampling variance will be approximately

\[ \text{Var}(r) \approx \frac{1}{N}\left(1 - \rho^2\right)^2. \]

But what if the observations are drawn from a multi-stage sample? If one uses the raw correlation between the observations (ignoring the multilevel structure), then \(r\) will actually be a weighted average of the within-cluster and between-cluster correlations (see Snijders & Bosker, 2012).
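To make the consequence concrete, here is a minimal simulation sketch (mine, not code from the post): it generates two-level data with the same correlation \(\rho\) at both levels and compares the empirical sampling variance of the raw \(r\) against the simple-random-sample approximation. All parameter values and names are illustrative assumptions.

```r
library(MASS)  # for mvrnorm()

set.seed(20)
m   <- 30    # number of clusters
n   <- 10    # observations per cluster
icc <- 0.3   # intra-class correlation of X and of Y
rho <- 0.5   # correlation between X and Y (at both levels)

Sigma <- matrix(c(1, rho, rho, 1), 2, 2)

r_sim <- replicate(2000, {
  u <- mvrnorm(m, mu = c(0, 0), Sigma = icc * Sigma)           # cluster-level parts
  e <- mvrnorm(m * n, mu = c(0, 0), Sigma = (1 - icc) * Sigma) # observation-level parts
  X <- u[rep(1:m, each = n), 1] + e[, 1]
  Y <- u[rep(1:m, each = n), 2] + e[, 2]
  cor(X, Y)
})

var(r_sim)               # empirical sampling variance of r
(1 - rho^2)^2 / (m * n)  # simple-random-sample approximation
```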

Imputing covariance matrices for meta-analysis of correlated effects

In many systematic reviews, it is common for eligible studies to contribute effect size estimates from not just one but several relevant outcome measures, all based on a common sample of participants. If those outcomes are correlated, then so too will be the effect size estimates. To estimate the degree of correlation, you would need the sample correlations among the outcomes—information that is woefully uncommon for primary studies to report (and best of luck to you if you try to follow up with author queries).
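A common workaround is to impute the missing covariance structure from an assumed correlation. Here is a minimal sketch using clubSandwich’s impute_covariance_matrix() (the data frame dat and its column names are illustrative assumptions):

```r
library(clubSandwich)

# dat: one row per effect size, with sampling variances vi and study ids
# (illustrative names). Assume a common correlation of r = 0.7 among
# effect sizes from the same study.
V_list <- impute_covariance_matrix(vi = dat$vi, cluster = dat$study, r = 0.7)

# V_list is a list of covariance matrices, one block per study, suitable
# for passing as the V argument of metafor::rma.mv().
```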

You wanna PEESE of d's?

Publication bias—or more generally, outcome reporting bias or dissemination bias—is recognized as a critical threat to the validity of findings from research syntheses. In the areas with which I am most familiar (education and psychology), it has become more or less a requirement for research synthesis projects to conduct analyses to detect the presence of systematic outcome reporting biases. Some analyses go further by trying to correct for its distorting effects on average effect size estimates.
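One such correction, alluded to in the title, is PET-PEESE. Here is a minimal sketch of the usual implementation via weighted meta-regression in metafor (the data frame dat with columns yi and vi is an illustrative assumption):

```r
library(metafor)

# PET: regress effect sizes on their standard errors;
# PEESE: regress effect sizes on their sampling variances.
# `dat` has yi (effect size estimates) and vi (sampling variances).
pet   <- rma(yi, vi, mods = ~ sqrt(vi), data = dat, method = "FE")
peese <- rma(yi, vi, mods = ~ vi,       data = dat, method = "FE")

# The intercepts estimate the average effect extrapolated to a
# hypothetical study with zero sampling error.
coef(pet)["intrcpt"]
coef(peese)["intrcpt"]
```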

Alternative formulas for the standardized mean difference

The standardized mean difference (SMD) is surely one of the best known and most widely used effect size metrics in meta-analysis. In generic terms, the SMD parameter is defined as the difference in population means between two groups (often this difference represents the effect of some intervention), scaled by the population standard deviation of the outcome metric. Estimates of the SMD can be obtained from a wide variety of experimental designs, ranging from simple, completely randomized designs, to repeated measures designs, to cluster-randomized trials.
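In symbols (generic notation, not necessarily the post’s): for population means \(\mu_T\) and \(\mu_C\) and a scaling standard deviation \(\sigma\), the parameter and its estimate are

\[ \delta = \frac{\mu_T - \mu_C}{\sigma}, \qquad d = \frac{\bar{y}_T - \bar{y}_C}{s}, \]

where \(\bar{y}_T\) and \(\bar{y}_C\) are sample means and \(s\) is some sample estimate of \(\sigma\). The alternative formulas arise largely because different designs make different estimates of \(\sigma\) available to play the role of \(s\).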

Special Education Pro-Sem

Correlations between standardized mean differences

Several students and colleagues have asked me recently about an issue that comes up in multivariate meta-analysis when some of the studies include multiple treatment groups and multiple outcome measures. In this situation, one might want to include effect size estimates for each treatment group and each outcome measure. In order to do so in a fully multivariate meta-analysis, estimates of the covariances among all of these effect sizes are needed. The covariance among effect sizes arises for several reasons:
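To give a flavor of what such covariances look like (a sketch based on a standard large-sample approximation, not a formula quoted from the post): if \(d_1\) and \(d_2\) are SMDs for two outcomes correlated at \(\rho\), both measured on the same \(n_T\) treatment and \(n_C\) control participants, then approximately

\[ \text{Cov}(d_1, d_2) \approx \rho \left( \frac{1}{n_T} + \frac{1}{n_C} \right) + \frac{\rho^2 \, d_1 d_2}{2\left(n_T + n_C\right)}. \]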

The clubSandwich package for meta-analysis with RVE

I’ve recently been working on small-sample correction methods for hypothesis tests in linear regression models with cluster-robust variance estimation. My colleague (and grad-schoolmate) Beth Tipton has developed small-sample adjustments for t-tests (of single regression coefficients) in the context of meta-regression models with robust variance estimation, and together we have developed methods for multiple-contrast hypothesis tests. We have an R package (called clubSandwich) that implements all this stuff, not only for meta-regression models but also for other models and contexts where cluster-robust variance estimation is often used.
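Here is a minimal sketch of how the package is typically used alongside a metafor working model (the data frame dat and its columns are illustrative assumptions, not from the post):

```r
library(metafor)
library(clubSandwich)

# Illustrative data frame `dat`: effect sizes yi, variances vi, a moderator X,
# and identifiers study (cluster) and es_id (effect size within study).
fit <- rma.mv(yi, vi, mods = ~ X, random = ~ 1 | study / es_id, data = dat)

# Robust t-tests with the bias-reduced (CR2) adjustment and Satterthwaite df
coef_test(fit, vcov = "CR2", cluster = dat$study)

# Small-sample F-test (HTZ) of the moderator coefficient
Wald_test(fit, constraints = constrain_zero(2), vcov = "CR2", cluster = dat$study)
```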

Meta-sandwich with extra mustard

In an earlier post about sandwich standard errors for multivariate meta-analysis, I mentioned that Beth Tipton has recently proposed small-sample corrections for the covariance estimators and t-tests, based on the bias-reduced linearization approach of McCaffrey, Bell, and Botts (2001). You can find her forthcoming paper on the adjustments here. My understanding is that these small-sample corrections are important because the uncorrected sandwich estimators can lead to understatement of uncertainty and inflated Type I error rates when a given meta-regression coefficient is estimated from only a small or moderately sized sample of independent studies (or clusters of studies).

Another meta-sandwich

In a previous post, I provided some code to do robust variance estimation with metafor and sandwich. Here’s another example, replicating some more of the calculations from Tanner-Smith & Tipton (2013). (See here for the complete code.) As a starting point, here are the results produced by the robumeta package:

```r
library(grid)
library(robumeta)

data(corrdat)
rho <- 0.8
HTJ <- robu(effectsize ~ males + college + binge,
            data = corrdat, modelweights = "CORR", rho = rho,
            studynum = studyid, var.eff.size = var)
```
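For comparison, here is a sketch of one way to fit a similar working model in metafor and get cluster-robust tests from clubSandwich (this pairing is my illustration; the post itself builds the sandwich estimator by hand):

```r
library(robumeta)     # for the corrdat example data
library(metafor)
library(clubSandwich)

data(corrdat)

# Working covariance matrices under an assumed within-study correlation of 0.8
V <- impute_covariance_matrix(vi = corrdat$var, cluster = corrdat$studyid, r = 0.8)

meta_fit <- rma.mv(yi = effectsize, V = V,
                   mods = ~ males + college + binge,
                   random = ~ 1 | studyid, data = corrdat)

# CR2-adjusted robust t-tests, clustering by study
coef_test(meta_fit, vcov = "CR2", cluster = corrdat$studyid)
```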

A meta-sandwich

A common problem arising in many areas of meta-analysis is how to synthesize a set of effect sizes when the set includes multiple effect size estimates from the same study. It’s often not possible to obtain all of the information you’d need in order to estimate the sampling covariances between those effect sizes, yet without that information, established approaches to modeling dependent effect sizes become inaccurate. Hedges, Tipton, & Johnson (2010, HTJ hereafter) proposed the use of cluster-robust standard errors for multivariate meta-analysis.
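The estimator has the familiar sandwich form (written here in generic notation as a sketch, not quoted from the paper): with \(X_j\), \(W_j\), and \(e_j\) denoting the design matrix, weight matrix, and residuals for study \(j\),

\[ V^{R} = \left( \sum_j X_j' W_j X_j \right)^{-1} \left( \sum_j X_j' W_j e_j e_j' W_j X_j \right) \left( \sum_j X_j' W_j X_j \right)^{-1}, \]

which remains approximately valid in large samples of studies even when the working model for the covariances among the effect sizes is wrong.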