Presentations & Posters


Model-Building Considerations in Meta-Analysis of Dependent Effect Sizes

In fields ranging from Education to Economics to Ecology, meta-analysts often encounter complicated data structures, in which some or all primary studies include multiple effect size estimates. These estimates may be correlated because they are based on data from a common sample or a partially overlapping sample, or may be statistically dependent due to the use of common study operations.


Discussion of ‘Stabilizing measures to reconcile accuracy and equity in performance measurement’

Equity-related moderator analysis in syntheses of dependent effect sizes: Conceptual and statistical considerations

Background/Context: In meta-analyses examining educational interventions, researchers seek to understand the distribution of intervention impacts, in order to draw generalizations about what works, for whom, and under what conditions. One common way to examine equity implications in such reviews is through moderator analysis, which involves modeling how intervention effect sizes vary depending on the characteristics of primary study participants.

Determining the Timing of Phase Changes: Some Statistical Perspective

Calculating Effect Sizes for Single-Case Research: An Introduction to the SingleCaseES and scdhlm Web Applications and R Packages

This workshop will provide an introduction to effect size calculations for single-case research designs, focused on two interactive web applications (or “apps”) and accompanying R packages. I will …

Effect size measures for single-case research: Conceptual, practical, and statistical considerations

Quantitative analysis of single-case research and n-of-1 experiments often focuses on calculation of effect size measures, or numerical indices describing the direction and strength of an …

Empirical benchmarks for between-case standardized mean differences from single-case multiple baseline designs examining academic interventions

The between-case standardized mean difference (BC-SMD) is an effect size measure for single-case designs (SCDs) that puts findings on the same scale as standardized mean differences for …

Discussion of ‘Moving from What Works to What Replicates: Promoting the Systematic Replication of Results.’

Clustered bootstrapping for selective reporting models in meta-analysis with dependent effects

In many fields, quantitative meta-analyses involve dependent effect sizes, which occur when primary studies included in a synthesis contain more than one relevant estimate of the relation between …


A matter of emphasis: Comparison of working models for meta-analysis of dependent effect sizes

The state of single case synthesis: Premises, tools, and possibilities

Selective reporting in meta-analysis of dependent effect size estimates

Publication bias and other forms of selective outcome reporting are important threats to the validity of findings from research syntheses—even undermining their special status for informing evidence-based practice and policy guidance.

Easy, cluster-robust standard errors with the clubSandwich package

Cluster-robust variance estimation methods (also known as sandwich estimators, linearization estimators, or simply “clustered” standard errors) are a standard inferential tool in many …


Four things every quantitative social scientist should know about meta-analysis

Meta-analysis is a set of statistical tools for synthesizing results across multiple sources of evidence. Meta-analyses of intervention research are often taken as a gold standard for informing …

Synthesis of dependent effect sizes: Robust variance estimation with clubSandwich

Large meta-analyses often involve dependent effect sizes, where the exact form of the dependence is unknown. Meta-analysis with robust variance estimation handles this problem through …

Statistical frontiers for selective reporting and publication bias

This workshop will cover methods to investigate selective reporting in meta-analysis of statistically dependent effect sizes, which are a common feature of systematic reviews in psychology. The workshop is organized into two sections.

Synthesis of dependent effect sizes: Versatile models through metafor and clubSandwich

Across scientific fields, large meta-analyses often involve dependent effect size estimates. Robust variance estimation (RVE) methods provide a way to include all dependent effect sizes in a single …


A generalized excess significance test for selective outcome reporting with dependent effect sizes

Log response ratio effect sizes: Rationale and methods for single case designs with behavioral outcomes

Evaluating meta-analytic methods to detect outcome reporting bias in the presence of dependent effect sizes

The impact of response-guided designs on count outcomes in single-case design baselines

An examination of measurement procedures and baseline behavioral outcomes in single-case research

Small-sample cluster-robust variance estimators for two-stage least squares models


Combining robust variance estimation with models for dependent effect sizes


Meta-analysis of single-case research: A brief and breezy tour

Meta-analysis of dependent effects: A review and consolidation of methods

Randomization inference for single-case experimental designs

A gradual effects model for single case designs


Heteroskedasticity-robust tests in linear regression: A review and evaluation of small-sample corrections

Using response ratios for meta-analyzing single-case designs with behavioral outcomes

A nonlinear intervention analysis model for treatment reversal single-case designs

Small sample corrections for use of cluster-robust standard errors in the analysis of school-based experiments


Effect sizes for single-case research

When large samples act small: The importance of small-sample adjustments for cluster-robust inference in impact evaluations


Small-sample adjustments for multiple-contrast hypothesis tests of meta-regressions using robust variance estimation

Operational sensitivities of non-overlap effect sizes for single-case experimental designs

Small-sample adjustments for F-tests using robust variance estimation in meta-regression

Observation procedures and Markov chain models for estimating the prevalence and incidence of a state behavior

Small-sample adjustments for tests of moderators and model fit using robust variance estimation in meta-regression


Four methods of analyzing partial interval recording data, with application to single-case research

Addressing construct invalidity in partial interval recording data

On internal validity in multiple baseline designs


Some Markov models for direct observation of behavior

Effect sizes and measurement comparability for meta-analysis of single-case research

Observation procedures and Markov chain models for estimating the prevalence and incidence of a behavior

Operationally comparable effect sizes for meta-analysis of single-case research


Some implications of behavioral observation procedures for meta-analysis of single-case research


Question-order effects in social network name generators