I am trying to follow the example posted in this blog post. It appears to fit a multivariate Gaussian mixture model using an LKJ prior, as shown here:
data {
  int D;            // number of dimensions
  int K;            // number of gaussians
  int N;            // number of data points
  vector[D] y[N];   // data
}
parameters {
  simplex[K] theta;               // mixing proportions
  ordered[D] mu[K];               // mixture component means
  cholesky_factor_corr[D] L[K];   // cholesky factor of the correlation matrix
}
model {
  real ps[K];
  for (k in 1:K) {
    mu[k] ~ normal(0, 3);
    L[k] ~ lkj_corr_cholesky(4);
  }
  for (n in 1:N) {
    for (k in 1:K) {
      // log of component k's weighted density for observation n
      ps[k] = log(theta[k]) + multi_normal_cholesky_lpdf(y[n] | mu[k], L[k]);
    }
    target += log_sum_exp(ps);   // marginalize over mixture components
  }
}
What confuses me is that she samples

L[k] ~ lkj_corr_cholesky(4);

and then passes it directly into

multi_normal_cholesky_lpdf

I would have thought you also need a prior on the scale (variance) part of the covariance matrix. Is that not correct?
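For comparison, this is roughly what I would have expected instead (just a sketch of my understanding, not from the blog; the `sigma` parameter and its half-normal prior are my own guesses):

```stan
parameters {
  simplex[K] theta;
  ordered[D] mu[K];
  cholesky_factor_corr[D] L[K];
  vector<lower=0>[D] sigma[K];   // per-dimension scales (my addition)
}
model {
  real ps[K];
  for (k in 1:K) {
    mu[k] ~ normal(0, 3);
    sigma[k] ~ normal(0, 2);     // half-normal prior on the scales (my guess)
    L[k] ~ lkj_corr_cholesky(4);
  }
  for (n in 1:N) {
    for (k in 1:K) {
      // scale the correlation cholesky factor into a covariance cholesky factor
      ps[k] = log(theta[k])
              + multi_normal_cholesky_lpdf(y[n] | mu[k],
                                           diag_pre_multiply(sigma[k], L[k]));
    }
    target += log_sum_exp(ps);
  }
}
```

That is, I expected `diag_pre_multiply(sigma[k], L[k])` to turn the correlation factor into a covariance factor before it goes into `multi_normal_cholesky_lpdf`. Is the blog's version implicitly fixing all variances to 1?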