effect size

Distribution of the number of significant effect sizes

A while back, I posted the outline of a problem about the number of significant effect size estimates in a study that reports multiple outcomes. This problem interests me because it connects to the issue of selective reporting of study results, which creates problems for meta-analysis.
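To make the problem concrete, here is a minimal simulation sketch (the setup and all parameter values are my own toy assumptions, not the original problem statement): with $m$ correlated outcomes per study, the distribution of the number of statistically significant estimates can be approximated by Monte Carlo.

```r
# Toy sketch: distribution of the number of significant results among
# m correlated z-statistics (all parameter values are illustrative).
library(MASS)  # for mvrnorm()

m <- 5          # outcomes per study
rho <- 0.5      # correlation among the test statistics
n_sims <- 10000

Sigma <- rho + diag(1 - rho, m)  # compound-symmetric correlation matrix
z <- MASS::mvrnorm(n_sims, mu = rep(1, m), Sigma = Sigma)

n_sig <- rowSums(abs(z) > qnorm(0.975))  # count significant at alpha = .05
table(n_sig) / n_sims                    # approximate distribution of the count
```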

Between-case standardized mean differences: Flexible methods for single-case designs

Single-case designs (SCDs) are a class of research methods for evaluating the effects of academic and behavioral interventions in educational and clinical settings. Although visual analysis is typically the first and main method for primary analysis …

Cohen's $d_z$ makes me dizzy when considering measurement error

Meta-analyses in education, psychology, and related fields rely heavily on Cohen's $d$, or the standardized mean difference effect size, for quantitatively describing the magnitude and direction of intervention effects. In these fields, Cohen's $d$ is so pervasive that its use is nearly automatic, and analysts rarely question its utility or consider alternatives (response ratios, anyone? POMP?). Despite this state of affairs, working with Cohen's $d$ is theoretically challenging because the standardized mean difference metric does not have a single definition. Rather, its definition depends on the choice of the standardizing variance used in the denominator.
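To illustrate the point in generic notation (mine, not necessarily the post's): the usual between-groups $d$ standardizes a difference in means by the pooled standard deviation, whereas $d_z$ standardizes a paired difference by the standard deviation of the difference scores:

$$
d = \frac{\bar{y}_1 - \bar{y}_2}{s_p}, \qquad d_z = \frac{\bar{D}}{s_D}, \qquad s_D^2 = s_1^2 + s_2^2 - 2 r s_1 s_2,
$$

so that, under equal variances, $d_z = d / \sqrt{2(1 - r)}$. The two metrics coincide only when the pairwise correlation $r$ equals $1/2$ and can differ substantially otherwise.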

Single case design research in Special Education: Next generation standards and considerations

Single case design has a long history of use for assessing intervention effectiveness for children with disabilities. Although these designs have been widely employed for more than 50 years, recent years have been especially dynamic in terms of …

Standardized mean differences in single-group, repeated measures designs

I received a question from a colleague about computing variances and covariances for standardized mean difference effect sizes from a design involving a single group, measured repeatedly over time.
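As a sketch of the sort of answer involved (using my own notation and a large-sample delta-method approximation in the spirit of Becker, 1988, not necessarily the exact setup from the question): suppose a single group of $n$ participants is measured at baseline and at follow-up times $s$ and $t$, with effect sizes standardized by the baseline standard deviation $s_0$. Then

$$
d_t = \frac{\bar{y}_t - \bar{y}_0}{s_0}, \qquad \operatorname{Var}(d_t) \approx \frac{2(1 - \rho_{0t})}{n} + \frac{d_t^2}{2n},
$$

$$
\operatorname{Cov}(d_s, d_t) \approx \frac{1 - \rho_{0s} - \rho_{0t} + \rho_{st}}{n} + \frac{d_s d_t}{2n},
$$

where $\rho_{st}$ denotes the correlation between measurements at times $s$ and $t$.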

Finding the distribution of significant effect sizes

In basic meta-analysis, where each study contributes just a single effect size estimate, there has been a lot of work devoted to developing models for selective reporting. Most of these models formulate the selection process as a function of the statistical significance of the effect size estimate; some also allow for the possibility that the precision of the study’s effect influences the probability of selection …
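One common formulation, in the style of Vevea and Hedges (1995), uses a step function in the $p$-value: an estimate significant at level $\alpha$ is always reported, while a non-significant estimate is reported with some probability $\lambda \leq 1$. The observed effect sizes then follow the weighted density

$$
w(p) = \begin{cases} 1, & p < \alpha \\ \lambda, & p \geq \alpha, \end{cases}
\qquad
f^{*}(d) = \frac{w(p(d)) \, f(d)}{\int w(p(x)) \, f(x) \, dx},
$$

where $f$ is the density of the effect size estimate absent any selection.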

An ANCOVA puzzler

Doing effect size calculations for meta-analysis is a good way to lose your faith in humanity—or at least your faith in researchers’ abilities to do anything like sensible statistical inference.

lmeInfo

Information Matrices for 'lmeStruct' and 'glsStruct' Objects
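A hedged usage sketch (I am going from memory on the function names `extract_varcomp()` and `Fisher_info()`, so treat the exact interface as an assumption and check the package documentation):

```r
# Sketch of using lmeInfo with a fitted nlme::lme model.
# NOTE: lmeInfo function names below are assumed from memory; verify
# against the package documentation.
library(nlme)
library(lmeInfo)

data(Orthodont)  # example data shipped with nlme
fit <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)

extract_varcomp(fit)  # estimated variance components
Fisher_info(fit)      # information matrix for the variance component estimates
```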

Simulating correlated standardized mean differences for meta-analysis

As I’ve discussed in previous posts, meta-analyses in psychology, education, and other areas often include studies that contribute multiple, statistically dependent effect size estimates. I’m interested in methods for meta-analyzing and meta-regressing effect sizes from data structures like this, and studying this sort of thing often entails conducting Monte Carlo simulations.
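Here is a minimal sketch of the kind of data-generating process involved (the function name and all parameter values are mine, for illustration): simulate a two-group study with $m$ equicorrelated outcomes, then compute a standardized mean difference for each outcome.

```r
# Minimal sketch: one simulated study yields m correlated d estimates.
library(MASS)  # for mvrnorm()

simulate_study_d <- function(n_per_group = 25, m = 3, delta = 0.4, rho = 0.6) {
  Sigma <- rho + diag(1 - rho, m)  # compound-symmetric covariance matrix
  y_trt <- MASS::mvrnorm(n_per_group, mu = rep(delta, m), Sigma = Sigma)
  y_ctl <- MASS::mvrnorm(n_per_group, mu = rep(0, m), Sigma = Sigma)
  s_pool <- sqrt((apply(y_trt, 2, var) + apply(y_ctl, 2, var)) / 2)
  (colMeans(y_trt) - colMeans(y_ctl)) / s_pool  # vector of m d estimates
}

set.seed(20240321)
d_reps <- t(replicate(2000, simulate_study_d()))
round(cor(d_reps), 2)  # the d's inherit correlation from the outcomes
```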

Sometimes, aggregating effect sizes is fine

In meta-analyses of psychology, education, and other social science research, it is very common for some of the included studies to report more than one relevant effect size. For example, in a meta-analysis of intervention effects on reading outcomes, some studies may have used multiple measures of reading outcomes (each of which meets inclusion criteria) or may have measured outcomes at multiple follow-up times. Some studies might also have investigated more than one version of an intervention, in which case it might be of interest to include effect sizes comparing each version to the no-intervention control condition. It is even possible that some studies have all of these features, potentially contributing lots of effect size estimates.
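When aggregation is appropriate, the arithmetic is simple. A standard identity for a composite of equicorrelated estimates: if a study contributes $m$ effect size estimates, each with sampling variance $v$ and pairwise sampling correlation $r$, then the simple average $\bar{d} = \frac{1}{m}\sum_{j=1}^{m} d_j$ has sampling variance

$$
\operatorname{Var}(\bar{d}) = \frac{v \left[ 1 + (m - 1) r \right]}{m},
$$

which reduces to $v/m$ when the estimates are independent and to $v$ when they are perfectly correlated.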