Difference between lkj_corr_cholesky_lpdf and lkj_corr_lpdf outputs for corresponding inputs

Hello,

I am starting to get into multivariate modelling and was playing with the functions mentioned in the title by exposing them in RStan. I found consistent differences between the lpdfs given the "same" input (L_Omega versus the corresponding Omega). Is this difference significant?

// Stan wrappers to expose the built-in LKJ functions to R
functions {
  real lkj_cor_lpdf(matrix y, real eta) {
    return lkj_corr_lpdf(y | eta);
  }
  matrix lkj_cor_rng(int K, real eta) {
    return lkj_corr_rng(K, eta);
  }
  real lkj_cor_cholesky_lpdf(matrix L, real eta) {
    return lkj_corr_cholesky_lpdf(L | eta);
  }
  matrix lkj_cor_cholesky_rng(int K, real eta) {
    return lkj_corr_cholesky_rng(K, eta);
  }
  real st_cauchy_rng(real mu, real sigma) {
    return cauchy_rng(mu, sigma);
  }
  // reconstruct the correlation matrix Omega = L_Omega * L_Omega'
  matrix corr_matrix_cholesky(matrix L_Omega) {
    return multiply_lower_tri_self_transpose(L_Omega);
  }
}
# R session; lkj points at the Stan model/file containing the functions above
expose_stan_functions(lkj)
eta <- 0.7
L_Omega <- lkj_cor_cholesky_rng(5, eta)
L_Sigma <- replicate(5, abs(st_cauchy_rng(0, 0.5)))  # scales; not used below
Omega <- corr_matrix_cholesky(L_Omega)

all.equal(Omega, L_Omega %*% t(L_Omega))
[1] TRUE
lkj_cor_cholesky_lpdf(L_Omega, eta)
[1] -4.491901
lkj_cor_lpdf(Omega, eta)
[1] -2.382085

Also, when using eta = 1, the output of lkj_corr_lpdf never changes, no matter which correlation matrix I pass in?
RStan 2.19.2

You should input Omega into lkj_corr_lpdf and input L into lkj_corr_cholesky_lpdf. In other words, lkj_corr_lpdf is the log density of a correlation matrix under the LKJ distribution, and lkj_corr_cholesky_lpdf is the log density of the Cholesky factor of a correlation matrix that has an LKJ distribution.

That is because the density of an LKJ distribution is constant over correlation matrices when \eta = 1, although that is not so in the case of lkj_corr_cholesky_lpdf(L | eta) due to the Jacobian term.
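A quick check in R, assuming the exposed functions from the first post are still in scope (a sketch, not part of the original session):

# With eta = 1 the LKJ density over correlation matrices is flat, so
# lkj_cor_lpdf returns the same value for every draw; the Cholesky
# version still varies because of its Jacobian term.
for (i in 1:3) {
  L <- lkj_cor_cholesky_rng(5, 1)
  cat(lkj_cor_lpdf(corr_matrix_cholesky(L), 1),
      lkj_cor_cholesky_lpdf(L, 1), "\n")
}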

Thanks for the \eta=1 explanation.

I thought that was what I was doing here?

lkj_cor_cholesky_lpdf(L_Omega, eta) // stan::lkj_corr_cholesky_lpdf
lkj_cor_lpdf(Omega, eta)            // stan::lkj_corr_lpdf

Yes, that is right.

There’s a Jacobian in play here for the transform. You get different answers for the same reason that

lognormal(a | mu, sigma) != normal(log(a) | mu, sigma)
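
The same gap can be reproduced with base R densities (a sketch, with arbitrary values for a, mu, and sigma):

a <- 2.3; mu <- 0; sigma <- 1
dlnorm(a, mu, sigma, log = TRUE)        # lognormal(a | mu, sigma)
dnorm(log(a), mu, sigma, log = TRUE)    # normal(log(a) | mu, sigma)
# the gap is exactly the log Jacobian of the log transform, -log(a):
dnorm(log(a), mu, sigma, log = TRUE) - log(a)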

The Stan reference manual chapter on constraint transforms walks through this Jacobian, because it's what we use for encoding covariance matrices.
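
And back in the exposed-function session, you can check the LKJ case numerically. If I have the transform's Jacobian right, the gap between the two lpdfs should be sum_{k=2}^{K} (K - k) * log(L[k,k]) (a sketch, assuming L_Omega, Omega, and eta from the first post):

K <- 5
lkj_cor_cholesky_lpdf(L_Omega, eta) - lkj_cor_lpdf(Omega, eta)
# log Jacobian of the map Omega = L %*% t(L), from the diagonal of L:
sum((K - 2:K) * log(diag(L_Omega)[2:K]))
# both should print -2.109816 for the draw above (-4.491901 - (-2.382085))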


Thanks!

I am now down a rabbit hole of understanding Jacobian transforms/adjustments. I learned about Jacobians in school but never stopped to think they would matter in applied statistics.