I came across a few publications on Bayesian analysis of linear mixed models (LMMs) and got confused about the formulas.
From my previous understanding, an LMM is written Y = X\beta + Zu + e, with two variance matrices, Var(u) = G and Var(e) = R.
The expectation and variance of Y are then E[Y] = X\beta and Var[Y] = ZGZ^\top + R. I believe this is the version that applies when \hat{\beta} and \hat{u} (the BLUE and BLUP) are estimated by REML. In the Bayesian setting, I found a paper here and another paper that use the same expression.
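Writing out where I think this first expression comes from (assuming E[u] = E[e] = 0 and u independent of e):

$$
E[Y] = X\beta + Z\,E[u] + E[e] = X\beta, \qquad
Var[Y] = Z\,Var(u)\,Z^\top + Var(e) = ZGZ^\top + R.
$$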
However, @paul.buerkner pointed out in his brms paper that E[Y] = X\beta + Zu, because:
> we want to make explicit that u is a model parameter in the same manner as \beta so that uncertainty in its estimates can be naturally evaluated. In fact, this is an important advantage of Bayesian MCMC methods as compared to maximum likelihood approaches, which do not treat u as a parameter, but assume that it is part of the error term instead.
Under that formulation, Var[Y] = R. The same expressions appear in this paper, which implements the LMM in Stan.
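Trying to reconcile the two, it looks to me like the second formulation conditions on u,

$$
Y \mid u \sim \mathcal{N}(X\beta + Zu,\ R), \qquad u \sim \mathcal{N}(0,\ G),
$$

and integrating u out seems to give back the first (marginal) formulation, Y \sim \mathcal{N}(X\beta,\ ZGZ^\top + R).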
This is where I am confused:
- If the first (marginal) expression is correct, the quadratic form in the exponent of the Gaussian density (equivalently, in the log-likelihood) is (Y - X\beta)^\top (ZGZ^\top + R)^{-1} (Y - X\beta);
- If the second (conditional) expression is correct, it becomes (Y - X\beta - Zu)^\top R^{-1} (Y - X\beta - Zu).
Which one is correct, or are both correct in Bayesian analysis? To make the question concrete, I wrote a small numerical check (see the sketch below).
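This is my own sketch, not from any of the papers above; it assumes u and e are independent and that G and R are known. It draws u from its prior and averages the conditional density, which should approximate the marginal density if my reading is right:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

# dimensions: n observations, p fixed effects, q random effects
n, p, q = 6, 2, 3
X = rng.normal(size=(n, p))
Z = rng.normal(size=(n, q))
beta = np.array([1.0, -0.5])
G = 0.8 * np.eye(q)   # Var(u)
R = 0.3 * np.eye(n)   # Var(e)

# simulate one dataset from Y = X beta + Z u + e
u_true = rng.multivariate_normal(np.zeros(q), G)
e = rng.multivariate_normal(np.zeros(n), R)
y = X @ beta + Z @ u_true + e

# first expression: marginal density, Y ~ N(X beta, Z G Z' + R)
marginal = multivariate_normal(X @ beta, Z @ G @ Z.T + R).pdf(y)

# second expression: conditional density Y | u ~ N(X beta + Z u, R),
# averaged over draws of u from its prior N(0, G)
u_draws = rng.multivariate_normal(np.zeros(q), G, size=200_000)
resid = y - (X @ beta + u_draws @ Z.T)            # shape (draws, n)
cond_mean = multivariate_normal(np.zeros(n), R).pdf(resid).mean()

# the two values should agree closely, up to Monte Carlo error
print(marginal, cond_mean)
```

When I run this, the Monte Carlo average of the conditional density matches the marginal density, which is what makes me suspect the two expressions describe the same model from different angles.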
Thank you for reading this post.