So everything looks great up to this point. However, what if we want to constrain the correlation between the variables B and C to be 0? I am trying to work out how to build a matrix like that, but I don’t understand enough about how correlation matrices work under the hood in Stan. Any help would be appreciated!

Mike, I’m not sure that this solution will always work. For example, I think the LKJ prior could give you a positive definite matrix with correlations of .9 everywhere. If you then change one of those correlations to zero, the matrix is no longer positive definite.
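To make that concrete, here is a quick numpy check (not from the thread, just an illustration): a 3x3 matrix with all off-diagonal correlations at 0.9 is positive definite, but overwriting a single correlation with zero breaks positive definiteness.

```python
import numpy as np

# All off-diagonal correlations equal to 0.9: the eigenvalues are
# 1 + 2*0.9 = 2.8 and 1 - 0.9 = 0.1 (twice), so the matrix is PD.
R = np.array([[1.0, 0.9, 0.9],
              [0.9, 1.0, 0.9],
              [0.9, 0.9, 1.0]])
print(np.linalg.eigvalsh(R).min())   # smallest eigenvalue > 0

# Overwrite one correlation with zero; the determinant becomes negative,
# so the result is no longer a valid correlation matrix.
R0 = R.copy()
R0[0, 1] = R0[1, 0] = 0.0
print(np.linalg.eigvalsh(R0).min())  # smallest eigenvalue < 0
```

This is why you cannot in general just edit individual entries of a sampled correlation matrix; the zero has to be built into the transform itself.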

This probably does not matter for some applications, but it might matter if you are trying to define a prior distribution.

Good point. It really highlights the restricted utility of the multivariate normal as a structure for doing inference on relationships. I’ve started doing more SEM stuff lately, which gives much better control over things.

In the transformed parameters block you’d create a new vector, y_raw_new, placing the zeroes where you want them (keeping track of which vector index corresponds to which entry of the correlation matrix). Then pass that to matrix[K, K] y = cholesky_corr_constrain_lp(y_raw_new, K);
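The index bookkeeping that step describes can be sketched in Python (names here are hypothetical; in the actual Stan model y_raw would be declared in the parameters block and the scatter would live in transformed parameters):

```python
import numpy as np

def scatter_free(y_raw, free_idx, total_len):
    """Place the free (estimated) raw values at their positions in the
    full off-diagonal vector, leaving zeros at the constrained positions."""
    y_raw_new = np.zeros(total_len)
    y_raw_new[free_idx] = y_raw
    return y_raw_new

# K = 3 variables -> 3 off-diagonal elements, in order (2,1), (3,1), (3,2).
# Suppose the (3,1) correlation is constrained to zero: only the first and
# third positions are free, so y_raw has length 2.
y_raw_new = scatter_free(np.array([0.4, -0.2]), [0, 2], 3)
print(y_raw_new)  # [ 0.4  0.  -0.2]
```

The point is that only as many raw parameters are declared as there are unconstrained off-diagonals; the scattered vector y_raw_new is what gets handed to the constraining function.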

Hello, thank you very much for your helpful piece of code and I apologize for coming back to this only after so much time. I would just like to ask a few questions to better understand the code.

When you say to only declare as many non-zero off-diagonal elements as I need, which part of the code does that refer to? Should I modify the indices of the for loops in the functions block so that the elements which should be zero are skipped, or does this refer to some other part of the code?

Next question, if I am creating the y_raw_new in the transformed parameters block, should I remove y_raw from the parameters block? What purpose does y_raw serve at that point?

Furthermore, I’m not fully sure I understand the whole idea behind this approach. I have been trying out some code that I am unsure of, and I have successfully constrained some elements of the Cholesky factor to be 0. However, whether that constraint also carries over to the correlation matrix depends on the other elements of the Cholesky factor, due to the rules of matrix algebra.

Could you please confirm whether I have understood something wrong, and that the method you describe can also be used to constrain correlations in the correlation matrix to zero, not just elements of the Cholesky factor?
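The concern above is easy to verify numerically (my own illustration, not code from the thread): since R = L Lᵀ, each correlation R[i,j] is the dot product of rows i and j of L, so zeroing a single Cholesky entry does not in general zero the corresponding correlation. The first column is the exception, because R[i,0] = L[i,0].

```python
import numpy as np

# Lower-triangular Cholesky factor of a 3x3 correlation matrix with
# L[2,1] = 0. Each row has unit norm, so diag(L @ L.T) is all ones.
L = np.array([[1.0, 0.0, 0.0],
              [0.6, 0.8, 0.0],
              [0.5, 0.0, np.sqrt(1 - 0.25)]])
R = L @ L.T

# The (3,2) correlation is L[2,0]*L[1,0] + L[2,1]*L[1,1] = 0.5*0.6 + 0 = 0.3:
# zeroing the Cholesky entry did NOT zero the correlation.
print(R[2, 1])  # 0.3
```

So a zero must be placed in the raw vector feeding the constraining transform (which accounts for the dot-product structure), rather than in the Cholesky factor directly.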

I have posted some code / ideas that will let you do this in a few different threads, the best overview probably starts here: Partial-pooling of correlation (or covariance) matrices? - #4 by BenH
To fix the correlation to zero you just need to ensure the relevant value of the vector you pass in to the constraincorsqrt function is zero.

Thank you very much, this looks really helpful! One question: when it comes to ensuring that the relevant elements of the vector passed to the function are zero, would it suffice to put e.g. rawcor[1] = 0; and rawcor[3] = 0; in the transformed parameters block?