Elpd_loo for second level of a multilevel model

Dear all,

If I have a two-level hierarchical model of n subjects with a random intercept for g groups, is it possible to calculate the log density (lpdf) for the second level? I know that the syntax for both levels would look like this:

log_lik1[n] = multi_normal_lpdf(x[n,] | mu[n,], L_Sigma);
log_lik2[g] = multi_normal_lpdf(alpha[g,] | alpha_mu[g,], L_Sigma2);

The difference is that x is my observed data, whereas alpha is just a random intercept (not observable). Is this approach still valid, and can I use log_lik2 for the loo estimation? I want to compare two models, one without a group-level predictor and one with a group-level predictor.
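
A minimal sketch of how these lines could sit in a generated quantities block (N and G are placeholders for the number of subjects and groups; the other names are from the snippet above):

generated quantities {
  vector[N] log_lik1;   // level 1: observation-level log densities
  vector[G] log_lik2;   // level 2: group-level log densities
  for (n in 1:N)
    log_lik1[n] = multi_normal_lpdf(x[n, ] | mu[n, ], L_Sigma);
  for (g in 1:G)
    log_lik2[g] = multi_normal_lpdf(alpha[g, ] | alpha_mu[g, ], L_Sigma2);
  // If L_Sigma / L_Sigma2 are Cholesky factors rather than covariance
  // matrices, multi_normal_cholesky_lpdf would be the matching call.
}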

Thank you very much,
Best
Oliver

Short answer: it’s not the same as LOO, but it can be something useful. I need more time for the long answer, but I hope to have time to write it later this week.

Aki

Longer answer.

Using log_lik1, we can use PSIS-LOO to approximate the predictive density of x[n,] given all other observations and the priors.
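
Written out, the quantity this approximates for each observation is roughly

$p(x_n \mid x_{-n}) = \int p(x_n \mid \theta)\, p(\theta \mid x_{-n})\, d\theta$,

where $x_n$ stands for x[n,], $x_{-n}$ for all the other rows, and $\theta$ is shorthand for all model parameters (mu, alpha, and the covariance parameters).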

Using log_lik2, we can use PSIS-LOO to approximate the predictive density of alpha[g,] given all observations x and all priors except the prior for alpha[g,]. So it’s kind of leave-prior-for-one-alpha-out. As alpha[g,] is not a fixed observation the way x is, it would also integrate over the distribution of alpha[g,], and thus it would be an expectation of the density (i.e. a normalization term; see the sketch below). This would measure how much information the data provide about alpha[g,] when the prior is ignored.

This could be compared to the corresponding expected posterior density to measure prior-data conflict (using O’Hagan’s terminology from the 2001 paper “HSSS model criticism”). Instead of “conflict” we could talk about how influential the prior is. It’s possible that this is not the best criterion for prior-data conflict or prior influence, but it is easy to compute with PSIS-LOO. It would be interesting to have a case study on this. I don’t know how you would use it for model comparison.
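
To spell out the “expectation of the density” point, the quantity PSIS-LOO targets for group g can be written roughly as (this is only a sketch of the notation)

$\widehat{\mathrm{elpd}}_g \approx \log \mathrm{E}_{q_{-g}}\!\left[\, p(\alpha_g \mid \alpha_{\mu,g}, \Sigma_2) \,\right]$,

where $q_{-g}$ is the posterior with the prior factor for alpha[g,] removed (that is, conditioning on all of x and all other priors), and the expectation is over $\alpha_g$, $\alpha_{\mu,g}$, and $\Sigma_2$ under $q_{-g}$.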

You can use LOO for that model comparison, too. The benefit of LOO is that it doesn’t matter what the form of your model is; only the accuracy of the predictions matters.
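
Concretely, one way to do the comparison with the observation-level log_lik1 from both models is to look at the sum of pointwise elpd differences and its standard error (roughly what loo_compare in the loo R package reports):

$\widehat{\mathrm{elpd}}_{\mathrm{diff}} = \sum_{n=1}^{N} \big( \widehat{\mathrm{elpd}}_{\mathrm{loo},n}^{B} - \widehat{\mathrm{elpd}}_{\mathrm{loo},n}^{A} \big), \qquad \mathrm{se} = \sqrt{N \, \mathrm{Var}_n \big( \widehat{\mathrm{elpd}}_{\mathrm{loo},n}^{B} - \widehat{\mathrm{elpd}}_{\mathrm{loo},n}^{A} \big)}$,

where A is the model without the group-level predictor and B the model with it.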

Aki