Rstanarm: extracting variance components

  • Operating System: Windows 7
  • rstanarm Version: 2.17.4

I’m an ecologist and often conduct repeated-measures experiments on animals to look at individual differences in behaviour and metabolism. I’m keen to start using rstanarm to compute repeatability estimates (intraclass correlation coefficients) and slope-intercept correlations for the traits I study. However, I am having a hard time figuring out how to extract variance components from rstanarm objects. I’ve pasted examples below of how I do this with MCMCglmm (full script and data here: https://bit.ly/2LTCGBi) and would greatly appreciate advice on how to do the same with rstanarm. I know that print() returns the random-effects slope-intercept correlation coefficient, but there are no SD estimates or 95% credible intervals associated with it.

Thanks!

  1. Calculating repeatability (ICC)

#calculate agreement repeatability from the posterior distributions [R = Vind / (Vind + Ve)],
#where Vind = inter-individual variance and Ve = residual variance
#$VCV holds the model's (co)variance components for the random effects
R.MAS <- MAS$VCV[, "ID"] / (MAS$VCV[, "ID"] + MAS$VCV[, "units"])
#get the marginal parameter modes (i.e. the repeatability estimate) and
#Highest Posterior Density (HPD) intervals (i.e. 95% credible intervals)
posterior.mode(R.MAS)
HPDinterval(R.MAS)

  2. Slope-intercept correlation

#we can calculate the correlation between the random intercepts and slopes by
#dividing the covariance by the square root of the product of the variances
R.int.slp <- RS.MAS$VCV[, "TRIALz:(Intercept).ID"] /
             sqrt(RS.MAS$VCV[, "(Intercept):(Intercept).ID"] * RS.MAS$VCV[, "TRIALz:TRIALz.ID"])
posterior.mode(R.int.slp)
HPDinterval(R.int.slp)

For something like this, you want to call as.matrix() or as.data.frame() to get the raw draws, like:

library(rstanarm)
post <- stan_lmer(Reaction ~ Days + (Days | Subject), data = lme4::sleepstudy)
df <- as.data.frame(post)  # one row per posterior draw
# between-subject (intercept) variance
Vind <- df$`Sigma[Subject:(Intercept),(Intercept)]`
# repeatability: between-subject variance over total variance
R.MAS <- Vind / (Vind + df$sigma^2)
# slope-intercept correlation: covariance over sqrt of the product of variances
R.int.slp <- df$`Sigma[Subject:Days,(Intercept)]` /
             sqrt(Vind * df$`Sigma[Subject:Days,Days]`)
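To get point estimates and 95% credible intervals for those derived quantities, base R summaries of the draws are enough. A minimal sketch, continuing from the block above (quantile() gives central credible intervals rather than HPD intervals):

# posterior medians and 95% central credible intervals for the
# repeatability and the slope-intercept correlation
median(R.MAS)
quantile(R.MAS, probs = c(0.025, 0.975))
median(R.int.slp)
quantile(R.int.slp, probs = c(0.025, 0.975))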

But don’t waste your time trying to calculate the posterior mode of anything.

Thanks very much for the speedy answer, Ben! The example code works like a charm.

When running a generalized model (e.g. stan_glmer.nb), what column in ‘df’ corresponds to the residual variance (i.e. ‘sigma’)? Is it ‘reciprocal_dispersion’?

Also, why do you advise against calculating the mode and CIs? How would you report this estimate instead?

Outside of a Gaussian, linear, no-link-function model, the idea of a decomposition of variance does not make sense. The general, correct, and Bayesian way to do things is:

# predictive draws conditional on the estimated group-specific terms
PPD <- posterior_predict(post)
vars <- apply(PPD, MARGIN = 1, FUN = var)

# predictive draws with the group-specific terms set to zero
PPD_0 <- posterior_predict(post, re.form = ~ 0)
vars_0 <- apply(PPD_0, MARGIN = 1, FUN = var)

# posterior distribution of the additional variance due to the grouping
summary(vars - vars_0)

In general, it is better to just show a plot of the entire marginal posterior distribution of the quantity of interest, but if you are going to focus on a single number, it should usually be the mean or median. The posterior mode is not useful.
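For example, a minimal sketch of reporting it that way, using the vars and vars_0 vectors from the block above:

# posterior median and 95% central credible interval for the extra
# variance attributable to the group-specific terms
delta <- vars - vars_0
median(delta)
quantile(delta, probs = c(0.025, 0.975))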

I guess my confusion comes from the fact that several authors propose methods for variance decomposition from frequentist GLMMs. For example, see:

Nakagawa et al. (2017) http://rsif.royalsocietypublishing.org/content/14/134/20170213
Johnson (2014) https://besjournals.onlinelibrary.wiley.com/doi/abs/10.1111/2041-210X.12225
Nakagawa and Schielzeth (2013) https://besjournals.onlinelibrary.wiley.com/doi/abs/10.1111/j.2041-210x.2012.00261.x

I understand that extracting variance components from GLMMs is not straightforward and that there is no consensus on how to do this. Is it your opinion that even though it can be done, it should not be? I’d be grateful if you could expand on why it ‘makes no sense’. Many thanks!

If it is not a linear model, then there is no clean decomposition. Frequentists have to do stuff like use the delta method, but if you use Stan, then you can just generate posterior predictions from the model and see how the variance across the data changes when you include or nullify a group-specific term.

Would you say that the ratio between vars_0 and vars is conceptually comparable to an ICC (Vind / (Vind + df$sigma^2))?

Also, I have read that the ICC is not meaningful for random-slope-intercept models, because the ICC would then vary with the values of the random slope (Goldstein H, Browne W, Rasbash J. 2010. Partitioning Variation in Multilevel Models. Understanding Statistics, 1:4, 223-231, doi: 10.1207/S15328031US0104_02), so there is no “unique” ICC. Does this also apply to Bayesian models, and to the posterior-predictive approach to “decomposing” the variance?

Yes to both

May I ask a follow-up question: How would you deal with, e.g., nested models? Like:

stan_lmer(Reaction ~ Days + (1 | group/subgroup), sleepstudy)

Would you still use PPD <- posterior_predict(post) for the posterior predictions?

And would it be possible, probably using the re.form argument, to extract the variances at the different levels, like:

PPD1 <- posterior_predict(post, re.form =  ~ group)
PPD2 <- posterior_predict(post, re.form =  ~ group:subgroup)

I can’t remember if that is the exact syntax, but something like that.
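If re.form follows the lme4 convention, I suppose it would want full bar terms rather than bare factor names. An untested sketch of what I mean:

# guess: condition on the group-level terms only, then on
# the group and group:subgroup terms together
PPD1 <- posterior_predict(post, re.form = ~ (1 | group))
PPD2 <- posterior_predict(post, re.form = ~ (1 | group) + (1 | group:subgroup))
vars1 <- apply(PPD1, MARGIN = 1, FUN = var)
vars2 <- apply(PPD2, MARGIN = 1, FUN = var)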

Hi Ben,

I know this thread is a bit old, but it has been absolutely invaluable for a project I’m working on that looks at fish group behavior.

I’ve run into the problem that people in our field are absolutely wedded to Nakagawa’s ICC, which is inherently limited to a 0-1 scale. However, when the conditional and unconditional draws have similar variances and you show the entire distribution of variance differences (vars - vars_0), the median will be close to 0 (but positive), yet many draws will fall below 0. This can lead to 95% intervals that overlap 0 and absolutely freak out animal behavior researchers, even though there is really nothing wrong with the method.

I was wondering if you (or maybe @strengejacke, whose work here and with the performance package has also been super valuable) had any references I could point to for this method, to hopefully assuage colleagues and reviewers? I can usually get people to believe me with a conversation and looking at some plots, but it would be nice to have a reference to quickly point to, if possible.

Again, I can’t express how incredibly useful this single thread has been!

I suppose in the Gaussian, linear, no link, no extrapolation to new groups case, you could do

# predictive draws with the group-specific terms nullified
PPD_0 <- posterior_predict(post, re.form = ~ 0)
# pull the group-specific coefficients (the "b[...]" columns) from the draws
b <- as.matrix(post)
b <- b[ , startsWith(colnames(b), "b[")]
# add Zb onto the same noise draws, so each row of PPD differs from the
# corresponding row of PPD_0 only by the group-specific terms
PPD <- PPD_0 + b %*% Matrix::t(get_z(post))

vars <- apply(PPD, MARGIN = 1, FUN = var)
vars_0 <- apply(PPD_0, MARGIN = 1, FUN = var)
summary(vars - vars_0)

to keep the differences positive.

Unfortunately, I’m dealing with a negative binomial model in this case, so it sounds like this wouldn’t quite work for me.

I think I feel comfortable explaining why the values go below 0, but do you know of any references I could point to for doing the basic method of taking posterior draws with/without group-level effects and calculating the variances? Thanks!

It is basically the same idea as in
http://www.stat.columbia.edu/~gelman/research/published/bayes_R2_v3.pdf
but Laplace probably said it somewhere first. It is just math.
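For the Gaussian case, that paper's per-draw R^2 can be computed by hand, and rstanarm also has a bayes_R2() method that returns essentially the same quantity. A sketch, assuming the Gaussian sleepstudy model post from earlier and posterior_epred() (posterior_linpred(transform = TRUE) in older rstanarm versions):

mu <- posterior_epred(post)          # draws x N matrix of conditional means
sigma <- as.data.frame(post)$sigma   # residual SD draws (Gaussian models only)
var_fit <- apply(mu, MARGIN = 1, FUN = var)
R2 <- var_fit / (var_fit + sigma^2)  # per-draw proportion, bounded in [0, 1)
summary(R2)
summary(bayes_R2(post))              # rstanarm's built-in version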

I have a follow-up question on the definition of the ICC as the ratio vars_0 / vars. Is there any reference for this definition? I have tried it and it gives me ICC values that range far beyond 1. (I am using cumulative (logit) regression models, fitted in brms.)
