Effect sizes in hierarchical models


#1

Not a Stan-specific Q, but I’m curious if there are any standard practices for computing posterior distributions for effect sizes of any sort for a hierarchical model with “within-subjects” effects. The approach I’m using now is to compute two kinds of effect size:

  1. A “between-Ss” effect size where, in every sample from the posterior, I divide the value of the parameter representing the effect by the value of the parameter representing the SD of deviations in how this effect manifests across participants.

  2. A “within-Ss” effect size where, for each subject, I take the variance of the posterior for that subject’s effect, average these variances across subjects, and use the square root of that average as the denominator when computing an effect size from each posterior sample of the coefficient.

I’m pretty confident method #1 is appropriate (though feel free to correct me), but #2 feels a bit ad hoc and has the likely problematic behaviour of shrinking the effect-size estimates when there is less data and therefore more widely dispersed posteriors for each subject’s effect. Any thoughts?
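For concreteness, here is a sketch of the two computations on posterior draws, using synthetic arrays in place of real MCMC output (all names and shapes here are assumptions, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for posterior draws: S draws of the population-level effect,
# of the SD of the effect across subjects, and of each of J subjects'
# individual effects. In practice these would come from a fitted model.
S, J = 4000, 30
effect = rng.normal(0.5, 0.1, size=S)            # population effect, per draw
subj_sd = np.abs(rng.normal(0.4, 0.05, size=S))  # across-subject SD, per draw
subj_effect = rng.normal(0.5, 0.4, size=(S, J))  # per-subject effects

# Method 1 ("between-Ss"): divide draw by draw, giving a full posterior
# sample for the effect size.
es_between = effect / subj_sd                    # shape (S,)

# Method 2 ("within-Ss"): posterior variance of each subject's effect,
# averaged over subjects; the square root is a single scalar denominator
# applied to every draw of the effect.
per_subject_var = subj_effect.var(axis=0)        # shape (J,)
denom = np.sqrt(per_subject_var.mean())          # scalar
es_within = effect / denom                       # shape (S,)
```

Note that `denom` in method 2 mixes genuine across-subject variation with posterior uncertainty in each subject’s effect, which is exactly the behaviour the question flags: with less data, `denom` grows and the effect size shrinks.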


#2

We’re usually not trying to calculate these kinds of things, since the number of standard deviations away from zero isn’t particularly meaningful. Maybe you could evaluate Pr[theta > 0], but we usually don’t do that either, as we’re not trying to make binary choices about effects existing or not.
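If you did want Pr[theta > 0], it falls straight out of the draws: it is just the fraction of posterior draws above zero. A minimal sketch with synthetic draws standing in for MCMC output:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.normal(0.3, 0.2, size=4000)  # stand-in posterior draws of theta

# Monte Carlo estimate of Pr[theta > 0]: the fraction of draws above zero.
p_positive = (theta > 0).mean()
```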

I’m not actually familiar with the classical definitions, but if you have a vector beta of random effects with a distribution

beta ~ normal(mu, sigma);

then I don’t understand what beta[n] / sigma is supposed to represent, because sigma measures the variation among the beta[i], not the posterior uncertainty in a single beta[n], which can be very different. I could see looking at something like the posterior mean of beta[n] divided by its posterior standard deviation as a proxy for Pr[beta[n] > 0], but I’d just look at the Bayesian quantity of interest directly.
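Both the proxy and the direct quantity are one-liners on a draws matrix. A sketch, assuming `beta` is an S-by-N array of posterior draws for the N random effects (synthetic here):

```python
import numpy as np

rng = np.random.default_rng(2)
S, N = 4000, 20
beta = rng.normal(0.2, 0.3, size=(S, N))  # stand-in draws of beta[1..N]

# Proxy: posterior mean over posterior SD, one value per random effect.
z = beta.mean(axis=0) / beta.std(axis=0)

# The direct Bayesian quantity: Pr[beta[n] > 0] for each n, estimated as
# the fraction of draws above zero in each column.
p_pos = (beta > 0).mean(axis=0)
```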