Posts

Inverting partitioned matrices

There’s lots of linear algebra out there that’s quite useful for statistics, but that I never learned in school or never had cause to study in depth. In the same spirit as my previous post on the Woodbury identity, I thought I would share my notes on another helpful bit of math about matrices. At some point in high school or college, you might have learned how to invert a small matrix by hand. It turns out that there’s a straightforward generalization of this formula to matrices of arbitrary size, so long as they are partitioned into four pieces.
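
For a preview of the result (stated here in its standard textbook form, not necessarily the exact notation of the post), suppose a square matrix is partitioned into four blocks, with the upper-left block $A$ and its Schur complement $S$ both invertible. Then

$$
\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}
= \begin{bmatrix}
A^{-1} + A^{-1} B S^{-1} C A^{-1} & - A^{-1} B S^{-1} \\
- S^{-1} C A^{-1} & S^{-1}
\end{bmatrix},
\qquad S = D - C A^{-1} B.
$$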

Standardized mean differences in single-group, repeated measures designs

I received a question from a colleague about computing variances and covariances for standardized mean difference effect sizes from a design involving a single group, measured repeatedly over time.

Finding the distribution of significant effect sizes

In basic meta-analysis, where each study contributes just a single effect size estimate, there has been a lot of work devoted to developing models for selective reporting. Most of these models formulate the selection process as a function of the statistical significance of the effect size estimate; some also allow for the possibility that the precision of the study’s effect influences the probability of selection.
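
To make the idea concrete, here is a minimal sketch of a one-step selection model in the spirit of Hedges and Vevea (my own illustration, not necessarily one of the specific models discussed in the post): an estimate $y_i$ with one-sided p-value $p_i$ is reported with relative probability

$$
w(p_i) = \begin{cases} 1 & \text{if } p_i < .025 \\ \lambda & \text{if } p_i \ge .025, \end{cases}
$$

so that the density of an observed estimate is proportional to $w(p_i)$ times the usual random-effects normal density for $y_i$, with $\lambda \le 1$ capturing the degree to which non-significant results are suppressed.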

The Woodbury identity

As in many parts of life, statistics is full of little bits of knowledge that are useful if you happen to know them, but which hardly anybody ever bothers to mention.
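
The identity itself, for the record: for conformable matrices $A$, $U$, $C$, and $V$ with the necessary inverses existing,

$$
\left(A + U C V\right)^{-1} = A^{-1} - A^{-1} U \left(C^{-1} + V A^{-1} U\right)^{-1} V A^{-1}.
$$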

An ANCOVA puzzler

Doing effect size calculations for meta-analysis is a good way to lose your faith in humanity—or at least your faith in researchers’ abilities to do anything like sensible statistical inference.

From Longhorn to Badger

It’s taken me a while to finally get around to updating my website with some personal news. I’ve moved from UT Austin to the UW Madison School of Education, where I am now an associate professor in the Educational Psychology Department’s Quantitative Methods program.

What do meta-analysts mean by 'multivariate' meta-analysis?

If you’ve ever had class with me or attended one of my presentations, you’ve probably heard me grouse about how statisticians are mostly awful about naming things. A lot of the terminology in our field is pretty bad and ineloquent.

Weighting in multivariate meta-analysis

One common question about multivariate/multi-level meta-analysis is how such models assign weight to individual effect size estimates. When a version of the question came up recently on the R-sig-meta-analysis listserv, Dr. Wolfgang Viechtbauer offered a whole blog post in reply, demonstrating how weights work in simpler fixed effect and random effects meta-analysis and then how things get more complicated in multivariate models. In this post, I’ll try to add some further intuition on how weights work in certain multivariate meta-analysis models. Most of the discussion will apply to models that include multiple levels of random effects, but no predictors. I’ll also comment briefly on meta-regression models with only study-level predictor variables, and finally give some pointers to work on more complicated models.
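
For readers who want to poke at this directly, here is a small sketch using metafor (my own example, not code from the post; it assumes your installed version of metafor provides a weights() method for rma.mv fits, as recent versions do):

```r
# Sketch: inspect the weights implied by a multi-level meta-analysis model.
# Uses a dataset bundled with metafor; assumes weights() works on rma.mv fits.
library(metafor)

dat <- dat.konstantopoulos2011

# Effect sizes nested within schools, nested within districts
res <- rma.mv(yi, vi, random = ~ 1 | district/school, data = dat)

# Per-estimate ("diagonal") weights, expressed as percentages
round(weights(res), 2)

# The full weight matrix shows how estimates also contribute through
# off-diagonal entries when combined into the overall average
W <- weights(res, type = "matrix")
W[1:4, 1:4]
```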

An update on code folding with blogdown + Academic theme

UPDATED November 21, 2020. Thanks to Allen O’Brien for pointing out a bug in the codefolding code, which led to the last code chunk defaulting to hidden rather than open.

Simulating correlated standardized mean differences for meta-analysis

As I’ve discussed in previous posts, meta-analyses in psychology, education, and other areas often include studies that contribute multiple, statistically dependent effect size estimates. I’m interested in methods for meta-analyzing and meta-regressing effect sizes from data structures like this, and studying this sort of thing often entails conducting Monte Carlo simulations.
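
As a flavor of what such a simulation can involve (an illustrative sketch of my own, not the code from the post), one can draw correlated outcomes for a treatment and a control group and compute a standardized mean difference for each outcome:

```r
# Illustrative sketch: simulate J correlated standardized mean differences
# from a single study with n participants per group and correlation rho
# among the outcomes. All names here are my own choices.
library(mvtnorm)

simulate_smds <- function(delta, J = 3, n = 25, rho = 0.6) {
  Sigma <- rho + diag(1 - rho, J)                          # equicorrelation matrix
  Y_t <- rmvnorm(n, mean = rep(delta, J), sigma = Sigma)   # treatment group
  Y_c <- rmvnorm(n, mean = rep(0, J), sigma = Sigma)       # control group

  # Cohen's d for each outcome, using the pooled standard deviation
  sd_pool <- sqrt((apply(Y_t, 2, var) + apply(Y_c, 2, var)) / 2)
  d <- (colMeans(Y_t) - colMeans(Y_c)) / sd_pool

  # Common large-sample approximation to the sampling variance of each d
  v <- 2 / n + d^2 / (4 * n)

  data.frame(outcome = seq_len(J), d = d, v = v)
}

set.seed(20201102)
simulate_smds(delta = 0.4)
```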