Selective reporting in meta-analysis of dependent effect size estimates


Date: 2022-02-08, 6:00 PM
Event: Stanford Quantitative Sciences Unit Research Methods Seminar
Location: Online

Publication bias and other forms of selective outcome reporting are important threats to the validity of findings from research syntheses, even undermining their special status for informing evidence-based practice and policy guidance. An array of methods has been proposed for detecting selective outcome reporting, but nearly all of the available statistical tests are premised on the assumption that each study contributes a single effect size that is statistically independent of the other effect sizes in the analysis. In practice, however, it is very common for meta-analyses to include studies that contribute multiple, statistically dependent effect sizes (e.g., effect sizes for multiple, related outcome measures, effect sizes at different follow-up times, or effect sizes from multiple replications based on a common protocol). In this talk, I will review these issues and describe the range of methods that synthesists currently use to examine selective reporting under effect size dependence. I will then describe a new test for diagnosing selective reporting by comparing the observed number of statistically significant effect sizes to the number expected based on the power of the included studies to detect the estimated average effect. This test generalizes the Test of Excess Significance (TES; Ioannidis & Trikalinos, 2007) and is closely related to the score test under a simple version of the Vevea and Hedges (1995) selection model. It uses cluster-robust sandwich estimation methods to handle dependence of effect sizes nested within studies. I will report some simulation evidence on the power of this new test relative to existing alternatives and discuss further directions for investigating selective reporting in meta-analysis.
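
To make the logic of such a test concrete, here is a minimal sketch in Python, not the implementation discussed in the talk: it estimates the average effect with precision weights, computes each effect's power to reach two-sided significance at that average, and compares the observed count of significant effects to the expected count, aggregating observed-minus-expected differences within studies so the variance term is robust to within-study dependence. The function name, the common-effect estimate of the average, and the particular cluster-level variance formula are illustrative assumptions rather than the method described in the abstract.

```python
import numpy as np
from scipy.stats import norm

def excess_significance_test(yi, sei, study, alpha=0.05):
    """Hypothetical sketch of an excess-significance comparison with a
    cluster-robust variance term.

    yi    : effect size estimates
    sei   : their standard errors
    study : study identifiers (clusters of dependent effects)
    """
    yi = np.asarray(yi, dtype=float)
    sei = np.asarray(sei, dtype=float)
    study = np.asarray(study)
    crit = norm.ppf(1 - alpha / 2)

    # Precision-weighted (common-effect) estimate of the average effect.
    wi = 1 / sei**2
    mu_hat = np.sum(wi * yi) / np.sum(wi)

    # Power of each effect to reach two-sided significance at level alpha,
    # assuming yi ~ N(mu_hat, sei^2).
    power = norm.sf(crit - mu_hat / sei) + norm.cdf(-crit - mu_hat / sei)

    # Observed significance indicators.
    sig = (np.abs(yi / sei) > crit).astype(float)

    # Sum observed-minus-expected indicators within studies, then use the
    # cluster totals to form a sandwich-type variance for the score.
    resid = sig - power
    d = np.array([resid[study == c].sum() for c in np.unique(study)])
    z = d.sum() / np.sqrt(np.sum(d**2))
    p_value = 2 * norm.sf(abs(z))

    return {"observed": sig.sum(), "expected": power.sum(),
            "z": z, "p": p_value}

# Toy example: three studies contributing multiple effect sizes each.
yi = [0.45, 0.52, 0.10, 0.60, 0.55, 0.05, 0.40]
sei = [0.20, 0.20, 0.25, 0.22, 0.22, 0.30, 0.21]
study = [1, 1, 2, 2, 2, 3, 3]
print(excess_significance_test(yi, sei, study))
```

In this sketch, clustering the observed-minus-expected residuals by study is what distinguishes the test from a naive version that treats every effect size as independent; with few clusters, the normal approximation used for the statistic would of course be rough.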