Sorry for warming up something that has been discussed before, but I'm a bit dense when it comes to things like truncation and possible Jacobian adjustments.
I am aware of this post and this one, and they're really helpful. Now, the following is a valid Stan program:
data {
  int<lower=1> D;
  vector[D] ytrunc;   // upper truncation point for each dimension
  cov_matrix[D] S;
}
transformed data {
  vector[D] mu = rep_vector(0, D);
  cholesky_factor_cov[D] L = cholesky_decompose(S);
}
parameters {
  // the upper bound makes Stan sample on the unconstrained scale and
  // transform back, so Y[d] <= ytrunc[d] holds for every draw
  vector<upper=ytrunc>[D] Y;
}
model {
  Y ~ multi_normal_cholesky(mu, L);
}
The model compiles and runs, and the sampled Y values are never larger than ytrunc. I wonder whether this model is a correct implementation of a truncated multivariate normal.
I ran the model above as well as a model using make_stuff as in the linked posts, and there seem to be subtle differences between the results; I would like to understand what the underlying difference is.
Sorry if this is totally clear and answered somewhere in the documentation.
For your model, the density underlying multi_normal_cholesky() is still the usual, untruncated MVN density. It is not normalized to account for the truncation, which could cause inference problems if this were embedded in a larger model: if mu or S were parameters rather than data, the normalizing constant of the truncated density would depend on them and could no longer be dropped.
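As a minimal univariate sketch of where that normalization matters (the names N, ub, y here are illustrative, not from your post): with mu and sigma as parameters, the T[, ub] truncation adds the normal CDF term that the multivariate model above omits.

data {
  int<lower=1> N;
  real ub;
  vector<upper=ub>[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  // T[, ub] divides each density by Phi((ub - mu) / sigma); because
  // that term depends on the parameters mu and sigma, it cannot be
  // dropped without biasing the inference.
  for (n in 1:N)
    y[n] ~ normal(mu, sigma) T[, ub];
}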
Looking at the tMVN document from your first link, I think the Stan model you specified is similar to the rejection sampling approach mentioned on p. 5. If you only need to draw samples of Y without evaluating the underlying density, the document shows that rejection sampling is less efficient than the more complicated approaches.
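For comparison, plain rejection sampling for this problem could be sketched in a generated quantities block (a hypothetical illustration, not code from the linked document): keep redrawing from the untruncated MVN until the draw falls below ytrunc componentwise.

data {
  int<lower=1> D;
  vector[D] ytrunc;
  cov_matrix[D] S;
}
transformed data {
  vector[D] mu = rep_vector(0, D);
  cholesky_factor_cov[D] L = cholesky_decompose(S);
}
generated quantities {
  vector[D] Y;
  {
    int accept = 0;
    while (accept == 0) {
      // draw from the untruncated MVN and accept only if every
      // component respects its upper truncation point
      Y = multi_normal_cholesky_rng(mu, L);
      accept = 1;
      for (d in 1:D)
        if (Y[d] > ytrunc[d]) accept = 0;
    }
  }
}

Since this program has no parameters block, it would be run with algorithm=fixed_param; the while loop also makes clear why rejection sampling degrades when the truncation region has low probability under the untruncated MVN.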