meta-analysis

Meta-analysis with robust variance estimation: Expanding the range of working models

In prevention science and related fields, large meta-analyses are common, and these analyses often involve dependent effect size estimates. Robust variance estimation (RVE) methods provide a way to include all dependent effect sizes in a single …
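
As a rough sketch of what an RVE analysis looks like in practice (not the analysis from the paper itself), here is how an intercept-only RVE meta-regression might be fit in R with the robumeta package. The data frame `dat` and the variables `yi`, `vi`, and `study` are hypothetical placeholders.

```r
# Hypothetical sketch: an intercept-only RVE meta-regression with
# correlated-effects weights. `dat`, `yi`, `vi`, and `study` are
# placeholder names, not taken from the paper.
library(robumeta)

res <- robu(
  formula = yi ~ 1,        # overall average effect size
  data = dat,
  studynum = study,        # clustering variable: effects nested in studies
  var.eff.size = vi,       # sampling variance of each effect size estimate
  modelweights = "CORR",   # correlated-effects working model
  rho = 0.8                # assumed within-study correlation
)
print(res)
```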

Cluster wild bootstrapping to handle dependent effect sizes in meta-analysis with a small number of studies

The most common and well-known meta-regression models work under the assumption that there is only one effect size estimate per study and that the estimates are independent. However, meta-analytic reviews of social science research often include …

Examining the effects of social stories on challenging behavior and prosocial skills in young children: A systematic review and meta-analysis

Social stories are a commonly used intervention practice in early childhood special education. Recent systematic reviews have documented the evidence base for social stories, but findings are mixed. We examined the efficacy of social stories for …

Variance component estimates in meta-analysis with mis-specified sampling correlation

In a recent paper with Beth Tipton, we proposed new working models for meta-analyses involving dependent effect sizes. The central idea of our approach is to use a working model that captures the main features of the effect size data, such as allowing for both between- and within-study heterogeneity in the true effect sizes (rather than only between-study heterogeneity).
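
For concreteness, here is a minimal sketch of what such a working model might look like in R, using metafor to allow for both between- and within-study heterogeneity and clubSandwich for robust standard errors. The names `dat`, `yi`, `vi`, `study`, and `esid`, along with the assumed correlation of 0.8, are illustrative assumptions, not values from the paper.

```r
# Hypothetical sketch of a working model with both between-study and
# within-study heterogeneity, paired with robust (CR2) standard errors.
library(metafor)
library(clubSandwich)

# Impute a working covariance matrix for effect sizes within the same
# study, assuming a common sampling correlation of 0.8.
V <- impute_covariance_matrix(dat$vi, cluster = dat$study, r = 0.8)

# Random effects at the study level and the effect-size-within-study
# level capture between- and within-study heterogeneity, respectively.
fit <- rma.mv(yi, V, random = ~ 1 | study / esid, data = dat)

# Robust variance estimation guards against mis-specifying V.
coef_test(fit, vcov = "CR2")
```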

Implications of mean-variance relationships for standardized mean differences

A question came up on the R-SIG-meta-analysis listserv about whether it was reasonable to use the standardized mean difference metric for synthesizing studies where the outcomes are measured as proportions. I think this is an interesting question because, while the SMD could work perfectly fine as an effect size metric for proportions, there are alternatives that could be considered, such as odds ratios, response ratios, or raw differences in proportions. Further, there are some situations where the SMD has disadvantages for synthesizing contrasts between proportions. Thus, it's a situation where one has to make a choice about the effect size metric, and where the most common metric (the SMD) might not be the right answer. In this post, I want to provide a bit more detail about why I think mean-variance relationships in raw data can signal that the standardized mean difference might be less useful as an effect size metric compared to the alternatives.
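
To make the mean-variance point concrete, here is a small toy illustration of my own (not from the post): for a binary outcome, the standard deviation is determined by the proportion itself, so a treatment effect that is constant on the odds-ratio scale translates into different SMDs depending on the control-group proportion.

```r
# Toy illustration: a constant odds ratio of 2, evaluated at different
# control-group proportions. Because Var(Y) = p * (1 - p) for a binary
# outcome, the implied SMD shifts with the baseline proportion.
p0 <- c(0.1, 0.3, 0.5)          # control-group proportions
OR <- 2
odds1 <- OR * p0 / (1 - p0)     # treatment-group odds
p1 <- odds1 / (1 + odds1)       # treatment-group proportions

# Pooled SD follows from the mean-variance relationship of a
# Bernoulli outcome.
sd_pooled <- sqrt((p0 * (1 - p0) + p1 * (1 - p1)) / 2)
smd <- (p1 - p0) / sd_pooled
round(data.frame(p0, p1, smd), 3)
# The odds ratio is fixed at 2, yet the SMD varies across rows.
```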

A systematic review and meta-analysis of effects of psychosocial interventions on spiritual well-being in adults with cancer

__Objective__ Spiritual well-being (SpWb) is an important dimension of health-related quality of life for many cancer patients. Accordingly, an increasing number of psychosocial intervention studies have included SpWb as a study endpoint, and may …

Systematic review and meta-analysis of stay-play-talk interventions for improving social behaviors of young children

Stay-play-talk (SPT) is a peer-mediated intervention which involves training peer implementers to stay in proximity to, play with, and talk to a focal child who has disabilities or lower social competence. This systematic review and meta-analysis …

An ANCOVA puzzler

Doing effect size calculations for meta-analysis is a good way to lose your faith in humanity—or at least your faith in researchers’ abilities to do anything like sensible statistical inference.

Evaluating meta-analytic methods to detect selective reporting in the presence of dependent effect sizes

Meta-analysis is a set of statistical tools used to synthesize results from multiple studies evaluating a common research question. Two methodological challenges when conducting meta-analysis include selective reporting and correlated dependent …

What do meta-analysts mean by 'multivariate' meta-analysis?

If you’ve ever had class with me or attended one of my presentations, you’ve probably heard me grouse about how statisticians are mostly awful about naming things. A lot of the terminology in our field is pretty bad and ineloquent.