Sampling over an uncertain variance-covariance matrix

With this thread being brought up again here, I've been rereading this answer, and I have a quick question that might make me look like an idiot. But in the spirit of it being better to look like an idiot temporarily and learn something than to stay quiet and remain confused, I'll ask it anyway. It's about this line:

lp[k] = multi_normal_lpdf(y | mu, tau * invA[,,k])

Here, as I understand it, this is just the sum of the log likelihoods of each datapoint under the given multivariate normal distribution, with means mu and scaled phylogenetic covariance matrix tau * invA[,,k].
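
Writing out how I'm reading that line (my notation, writing invA_k for invA[,,k]):

$$
\texttt{lp[k]} \;=\; \log p\big(y \mid \mu,\; \tau\,\mathrm{invA}_k\big) \;=\; \sum_{n=1}^{N} \log \mathcal{N}\big(y_n \mid \mu,\; \tau\,\mathrm{invA}_k\big),
$$

where the second equality is what I'm assuming the vectorised lpdf does when y is an array of datapoints.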

So in very simple terms, is that just sum(log(likelihood))? If so, what I'm not 100% clear on, and wanted to check, is why, in the marginalisation step (the log_sum_exp(lp) line), each element of lp, which as I understand it is a sum(log(likelihood)) term under the k-th covariance matrix, is being exponentiated, summed, and then logged again.
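
Spelling that step out, my understanding is that it computes

$$
\texttt{log\_sum\_exp(lp)} \;=\; \log \sum_{k=1}^{K} \exp\big(\texttt{lp[k]}\big) \;=\; \log \sum_{k=1}^{K} p\big(y \mid \mu,\; \tau\,\mathrm{invA}_k\big),
$$

i.e. each lp[k] is turned back into a likelihood before the sum over k and the final log.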

In section 15.2 of the Stan manual there's an example which, as far as I can tell, is exactly analogous, but there you have an additional -log(K) term (actually T in the example, but to keep notation straight I'll stick with K) coming from the uniform prior across each value of the discrete parameter. I don't fully understand where that's gone in this case, as there should still be an implicit uniform prior across each covariance matrix…
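
The way I read the manual's derivation, marginalising k out with a uniform prior p(k) = 1/K would give

$$
\log p\big(y \mid \mu, \tau\big) \;=\; \log \sum_{k=1}^{K} \frac{1}{K}\, p\big(y \mid \mu,\; \tau\,\mathrm{invA}_k\big) \;=\; -\log K \;+\; \texttt{log\_sum\_exp(lp)},
$$

which is where I'd expect the -log(K) term to appear.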

Why is lp[k] = multi_normal_lpdf(y | mu, tau * invA[,,k]) rather than lp[k] = -log(K) + multi_normal_lpdf(y | mu, tau * invA[,,k])?
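
In case it helps to see how I'm picturing the whole thing, here's a minimal self-contained version of the pattern as I understand it (the data layout and variable declarations are my guesses, not the original code), with the line I'm asking about marked:

```stan
// Minimal sketch, not the original model: marginalise the likelihood over
// K candidate scaled phylogenetic covariance matrices, with what I take to
// be an implicit uniform prior on which matrix is the right one.
data {
  int<lower=1> N;                  // number of datapoints
  int<lower=1> K;                  // number of candidate covariance matrices
  array[K] matrix[N, N] invA;      // candidate inverse phylogenetic matrices
  vector[N] y;                     // observed data
}
parameters {
  real mu;
  real<lower=0> tau;
}
model {
  vector[K] lp;
  for (k in 1:K) {
    // the line in question: should there be a "-log(K) +" in front of this?
    lp[k] = multi_normal_lpdf(y | rep_vector(mu, N), tau * invA[k]);
  }
  target += log_sum_exp(lp);       // marginalise over the discrete choice of k
}
```

(I've written the indexing as invA[k] for an array of matrices, which is what I'm assuming invA[,,k] corresponds to.)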
