The `Lcorr ~ lkj_corr_cholesky(1)` prior is just a uniform distribution over (Cholesky factors of) correlation matrices, so it's not doing anything.
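If you do want some regularization toward less correlation a priori, a shape value greater than 1 does that; here's a sketch of the alternative (the value 2 is just an illustration, not a recommendation for your model):

```stan
// eta = 1 is uniform over correlation matrices; eta > 1 concentrates
// mass near the identity, shrinking correlations toward zero a priori
Lcorr ~ lkj_corr_cholesky(2);
```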
We don't generally recommend the super-wide priors you're providing unless you expect mu and sigma to be on the scales defined. We strongly recommend against hard interval constraints, like the one you put on zeta: they can have a hugely biasing effect on the posterior that most of our users don't expect.
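As an illustration of the usual advice (with a hypothetical parameter `tau`, not your actual declaration): drop the interval and encode the scale information in a soft, proper prior instead.

```stan
// Instead of a hard interval constraint like
//   real<lower=-5, upper=5> tau;
// declare the parameter unconstrained
real tau;

// and put the scale information in a proper prior, which discourages
// large values without piling up posterior mass at a hard boundary
tau ~ normal(0, 2.5);
```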
It'd help to indent this:
```stan
for ( i in 1:I ) { for ( k in 1:K ) { zeta[k,i] ~ normal(0,1); }
beta[1:K,i] = mu + diag_pre_multiply(sigma, Lcorr) * zeta[1:K,i] ;
}
```
to the more scannable
```stan
for (i in 1:I) {
  for (k in 1:K)
    zeta[k, i] ~ normal(0, 1);
  beta[1:K, i] = mu + diag_pre_multiply(sigma, Lcorr) * zeta[1:K, i];
}
```
then it's clearer you can replace it with this:
```stan
to_vector(zeta) ~ normal(0, 1);
{
  matrix[K, K] Lcov = diag_pre_multiply(sigma, Lcorr);
  for (i in 1:I)
    beta[1:K, i] = mu + Lcov * zeta[1:K, i];
}
```
This would be more efficient still if you didn't need to pull zeta apart column by column. You could avoid that by representing zeta as a transposed array of vectors, so the product in the last line becomes `Lcov * zeta_t[i]`. That would defeat the simple vectorization of zeta's prior, but it's probably worth it. Either way, it's a minor effect compared to computing Lcov only once, which should be a big win.
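For concreteness, here's a sketch of that transposed representation (the name `zeta_t` and the surrounding declarations are assumptions about the rest of your program; beta's declaration is unchanged):

```stan
parameters {
  // one K-vector per item, instead of matrix[K, I] zeta
  array[I] vector[K] zeta_t;
  // ... mu, sigma, Lcorr declared as before ...
}
model {
  // per-item prior; replaces to_vector(zeta) ~ normal(0, 1)
  for (i in 1:I)
    zeta_t[i] ~ normal(0, 1);
  {
    matrix[K, K] Lcov = diag_pre_multiply(sigma, Lcorr);  // computed once
    for (i in 1:I)
      beta[1:K, i] = mu + Lcov * zeta_t[i];  // no column extraction needed
  }
}
```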
I didn't understand the scaling by 10 in the generated quantities definition of beta. You might as well make beta a transformed parameter, since you're defining it on every leapfrog step anyway.
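A sketch of that move, assuming beta is `matrix[K, I]` (the local block keeps Lcov out of the saved output):

```stan
transformed parameters {
  matrix[K, I] beta;
  {
    // variables declared inside braces are local and not saved
    matrix[K, K] Lcov = diag_pre_multiply(sigma, Lcorr);
    for (i in 1:I)
      beta[1:K, i] = mu + Lcov * zeta[1:K, i];
  }
}
```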