# Constraints on LKJ prior

Hi, is it possible to put constraints on LKJ prior for the correlation matrix?

I have a model with multiple varying intercepts and slopes, and I want to assign them a multivariate prior. However, I would like to force some elements of the correlation matrix to be positive, negative, or zero. Is there a way to do this while still using the LKJ prior for the correlation matrix and following the cholesky_factor_corr trick? Thank you.

Not really. Or more specifically, you could do something like that, but then it wouldn't be an LKJ distribution. See

Ben, thanks for this info! I'm hoping you could expand on what you wrote in the linked gist. For example, you write, "All this is easier if you can reorder the variables so that the fixed correlations are toward the left and the top of the correlation matrix." Would you be able to put together a full working example using the 4 x 4 covariance matrix in the gist (and possibly a counter-example showing what the difficulty is when you don't reorder)?

Can you elaborate on the case where, after fixing values, no value of \Omega in [-1, 1] will satisfy the equation? Do you mean the equation that you wrote in the gist?

I've encountered wanting to do something like this, and it seems others have as well. I could see this being a case study or going in the manual.

Wow. Thanks. The math there is a bit much for me to understand. But is it fair to say that it is better not to use the LKJ prior and the cholesky_factor_corr trick in this situation? I think I can construct the correlation matrix by block or by element and then sample it that way.

The first column (or row, since it is symmetric) of a correlation matrix is unrestricted under the LKJ transformation (as distinct from the LKJ probability distribution). Thus, so is its Cholesky factor. Here is an example with the Cholesky factor of a 3x3 correlation matrix:
L = \begin{bmatrix} 1 & 0 & 0 \\ a & \sqrt{1 - a^2} & 0\\ b & c & \sqrt{1 - b^2 - c^2} \end{bmatrix}

So, if \boldsymbol{\Sigma} = \mathbf{L}\mathbf{L}^\top, then \Sigma_{ij} is the dot product of the i-th and j-th rows of \mathbf{L}. If you want to restrict either \Sigma_{21} or \Sigma_{31} to be zero, that is easy: just impose a = 0 or b = 0, respectively. However, technically that restriction means it is not an LKJ prior any more, so you should not do L ~ lkj_corr_cholesky(eta);. If you wanted to impose the restriction that \Sigma_{32} = ba + c\sqrt{1 - a^2} + 0\sqrt{1 - b^2 - c^2} = 0, that is a bit more complicated. If you already have a and b, then c = -\frac{ba}{\sqrt{1 - a^2}}, but then L_{33} = \sqrt{1 - b^2 - \frac{b^2a^2}{1 - a^2}} might not be real, depending on what b and a are.
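For anyone who wants to check the algebra numerically, here is a small sketch in Python (the names a, b, c match the 3x3 Cholesky factor above; the particular values are arbitrary and just for illustration):

```python
import math

# Free parameters of the Cholesky factor of a 3x3 correlation matrix.
a, b = 0.5, 0.3

# Solve for c so that Sigma_32 = b*a + c*sqrt(1 - a^2) = 0.
c = -(b * a) / math.sqrt(1 - a**2)

# L_33^2 must be positive for a valid (real) Cholesky factor.
L33_sq = 1 - b**2 - c**2
assert L33_sq > 0

L = [
    [1.0, 0.0, 0.0],
    [a, math.sqrt(1 - a**2), 0.0],
    [b, c, math.sqrt(L33_sq)],
]

# Sigma = L L^T: each entry is the dot product of two rows of L.
Sigma = [[sum(L[i][k] * L[j][k] for k in range(3)) for j in range(3)]
         for i in range(3)]

print(Sigma[2][1])            # ~0 by construction
print(Sigma[1][0], Sigma[2][0])  # equal a and b (first column is unrestricted)
print(Sigma[0][0], Sigma[1][1], Sigma[2][2])  # unit diagonal

# A choice of (a, b) where the construction fails: L_33^2 goes negative,
# so no real Cholesky factor with Sigma_32 = 0 exists.
a2, b2 = 0.9, 0.9
c2 = -(b2 * a2) / math.sqrt(1 - a2**2)
print(1 - b2**2 - c2**2)      # negative here
```

This also makes the failure mode concrete: for large enough a and b, the implied c is bigger than 1 in magnitude, and 1 - b^2 - c^2 turns negative.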


Do L_{22} and L_{33} have to be the positive square roots? Or can they be the negative ones? I think this might make a difference.

All of the diagonal elements are defined to be non-negative.

Ah yes. Of course they have to be positive because it is a Cholesky decomposition. Thanks.