I also had trouble understanding 1.13, even though I coded the model and put it into an R package. Here are some things I did to understand the model:
- I recoded the model using longer names rather than single letters. This helped me avoid confusion around variables such as `N` and `K`.
- I simulated fake data, but did not trust my simulation process, so I used known data: specifically, the `radon` data from Gelman and Hill, which appears in their chapter 13. The data is also in Datasets for rstanarm examples — rstanarm-datasets • rstanarm (mc-stan.org).
- I ended up working through the model line by line with `lme4::lmer()` and posted the solution in my answer to my own question: Understanding output from 1.13 Multivariate priors for hierarchical models example - Modeling - The Stan Forums (mc-stan.org).
- Realize that this is a regression on regression coefficients. This fact took me weeks to understand and wrap my head around the first time.
- Think about how matrices such as `gamma` and `beta` map from data to parameters, or from parameters to other parameters. I have had to think about linear algebra a lot to figure this out.
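To make the "regression on regression coefficients" point concrete, here is a minimal NumPy sketch of the mapping described above. The names follow the notation of the 1.13 example (`u`, `gamma`, `beta`), but every number here is made up for illustration:

```python
import numpy as np

# "Regression on regression coefficients": the group-level predictors u and
# the matrix gamma determine the *expected* per-group coefficients beta.
# All values are made up; only the shapes and the matrix product matter.
J, L, K = 2, 2, 3  # groups, group-level predictors, individual-level predictors

u = np.array([[1.0, 0.5],     # row j = group j's predictors (J x L)
              [1.0, -1.0]])
gamma = np.array([[0.2, 1.0, -0.3],   # (L x K): maps group predictors
                  [0.4, 0.0,  0.6]])  # to coefficient means

beta_mean = u @ gamma  # (J x K): expected coefficients, one row per group
print(beta_mean)
```

In the Stan model, each group's actual coefficient vector is then drawn around its row of `beta_mean` with the covariance built from `tau` and `Omega`; the point of the sketch is just that `gamma` maps parameters to other parameters, not data to outcomes.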
My sticking point was understanding the Cholesky factor and how to back-transform it, hence my question.
@Bob_Carpenter and the Stan Core team: Would the Stan team please consider showing how to get `Omega` back in the 1.13 tutorial? You include this for the Gaussian Process tutorial, but I found it only through a Google search of the manual.
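For anyone hitting the same wall: the back-transform itself is just `Omega = L_Omega * L_Omega'` (which is what Stan's `multiply_lower_tri_self_transpose(L_Omega)` computes in a `generated quantities` block). A NumPy sketch with a made-up 2x2 Cholesky factor:

```python
import numpy as np

# A made-up Cholesky factor of a 2x2 correlation matrix:
# lower triangular, with unit-length rows.
L_Omega = np.array([[1.0, 0.0],
                    [0.6, 0.8]])

# Back-transform: the correlation matrix is Omega = L_Omega @ L_Omega'.
Omega = L_Omega @ L_Omega.T
print(Omega)
# The result is symmetric with ones on the diagonal, as a correlation
# matrix must be; the off-diagonal entry is the implied correlation.
```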
@andrewgelman Please consider using a side-by-side comparison between `lme4::lmer()` and the 1.16 example for Applied Regression and Multilevel Models. See my linked side-by-side example here: Understanding output from 1.13 Multivariate priors for hierarchical models example - Modeling - The Stan Forums (mc-stan.org)