# Wrong random numbers generated with lkj_corr_rng with eta = 1.0

Hi, I am trying to understand the LKJ distribution and noticed that lkj_corr_rng with eta = 1.0 did not generate a flat prior. Based on the documentation, 24.1 LKJ Correlation Distribution | Stan Functions Reference (mc-stan.org), it should be flat.

Here is my code to reproduce the issue. My pystan version is 2.19.1.1.

import pystan
import numpy as np
import matplotlib.pyplot as plt

print(pystan.__version__)

test_code = """
data {
}
transformed data {
  int<lower=0> N;
  real<lower=0> eta;

  N = 4;
  eta = 1.0;
}
parameters {
}
model {
}
generated quantities {
  corr_matrix[N] Omega;

  Omega = lkj_corr_rng(N, eta);
}
"""
test_eta = pystan.StanModel(model_code=test_code)
fit = test_eta.sampling(iter=1000000, warmup=0, chains=1, seed=1, algorithm="Fixed_param")
fit_df = fit.to_dataframe()

dim = 4  # matches N in the Stan program
fig = plt.figure(figsize=(5 * (dim - 0.5), 5 * (dim - 0.5)))
ax = fig.subplots(dim, dim)

bins = np.arange(-1, 1 + 2 / 50, 2 / 50)

for i in range(dim):
    for j in range(dim):
        key = "Omega[" + str(i + 1) + "," + str(j + 1) + "]"
        ax[i, j].hist(fit_df[key], bins=bins, density=True)
        ax[i, j].set_xlabel("Correlation coefficient")
        ax[i, j].set_ylabel("Probability")
        ax[i, j].set_title("rho[" + str(i + 1) + "," + str(j + 1) + "]")
        ax[i, j].set_xlim([-1, 1])

fig.tight_layout()
fig.show()
fig.savefig("eta_test.png")


Here is the plot.

Note that when N = 2, it is a flat prior.

Is this a bug or did I make a mistake somewhere?

I experimented a little more with the eta value. For N = 4, eta = 0.01 seems to give a flat marginal.
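For reference (this formula comes from the LKJ literature, not from the thread): under LKJ(eta) in dimension d, each off-diagonal correlation has marginal density proportional to (1 - r^2)^(eta - 1 + (d - 2)/2), i.e. a Beta(alpha, alpha) distribution stretched to (-1, 1) with alpha = eta + (d - 2)/2. The marginal is flat exactly when that exponent is zero, which happens at eta = 1 only for d = 2; for d = 4 the exponent approaches zero only as eta -> 0, which is why eta = 0.01 looks nearly flat:

```python
def marginal_exponent(eta, d):
    """Exponent of (1 - r**2) in the marginal density of one
    correlation under LKJ(eta) in dimension d."""
    return eta - 1 + (d - 2) / 2

print(marginal_exponent(1.0, 2))   # 0.0  -> flat marginal, matches N = 2
print(marginal_exponent(1.0, 4))   # 1.0  -> density ~ (1 - r**2), peaked at 0
print(marginal_exponent(0.01, 4))  # 0.01 -> nearly flat, matches the experiment
```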

LKJ(1) is not supposed to yield uniform marginals for the correlations. It is “flat” in the sense of placing equal prior density over every possible correlation matrix, but the space of allowable correlation matrices contains more volume where correlations are low than where correlations are high.
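One way to see this directly (a sketch, not code from the thread) is to rejection-sample the flat distribution: draw the off-diagonal entries uniformly on (-1, 1), keep only draws that form a valid (positive-definite) correlation matrix, and look at the marginal of a single entry. The spread of that marginal shrinks as the dimension grows:

```python
import numpy as np

def sample_uniform_corr(d, n_samples, rng):
    """Rejection-sample the uniform distribution over d x d correlation
    matrices: propose off-diagonals uniform on (-1, 1), accept only
    positive-definite results."""
    out = []
    k = d * (d - 1) // 2
    while len(out) < n_samples:
        r = rng.uniform(-1, 1, size=k)
        C = np.eye(d)
        C[np.triu_indices(d, 1)] = r
        C += np.triu(C, 1).T
        if np.all(np.linalg.eigvalsh(C) > 0):  # valid correlation matrix?
            out.append(C)
    return np.array(out)

rng = np.random.default_rng(1)
for d in (2, 3, 4):
    C = sample_uniform_corr(d, 1000, rng)
    # the marginal of C[0, 1] concentrates toward zero as d grows
    print(d, np.std(C[:, 0, 1]))
```

For d = 2 every proposal is accepted and the marginal is exactly uniform; for d = 4 most proposals are rejected, and the rejected ones are disproportionately those with large correlations, so the surviving marginals pile up around zero.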

See the Stan User’s Guide section on the LKJ prior.


@jsocolar:

> Despite being the identity over correlation matrices, the marginal distribution over the entries in that matrix (i.e., the correlations) is not uniform between -1 and 1. Rather, it concentrates around zero as the dimensionality increases due to the complex constraints.

I missed this statement. So, if I understand correctly, the positive-semidefiniteness requirement on a correlation matrix prevents the marginals from being uniform.


I think the first picture in this blog post helps in understanding why a flat distribution in high dimensions does not have flat marginals.


Exactly. It’s also possible to gain a little bit of intuition about this by thinking about the meaning of correlations. If variates \theta_1 and \theta_2 are very strongly correlated, then if \theta_3 is strongly correlated with one it must be strongly correlated with both. So there are a bunch of disallowed matrices whenever any correlations are high. That’s another way (beyond the relatively abstract concept of positive semidefiniteness) to understand why @spinkney’s figure that @nhuurre linked above pinches down to tiny points in the corners. Once two correlations are 1 (meaning \theta_1 \propto \theta_2 and \theta_1 \propto \theta_3), the third correlation is required to be 1.
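To put numbers on that pinching (a quick sketch, not code from the thread): for a 3 x 3 correlation matrix, requiring the determinant to be non-negative gives a closed-form interval for the admissible rho23 once rho12 and rho13 are fixed, namely rho12*rho13 ± sqrt((1 - rho12^2)(1 - rho13^2)):

```python
import numpy as np

def rho23_bounds(rho12, rho13):
    """Admissible range of rho23 for a valid 3x3 correlation matrix,
    from det(C) >= 0."""
    half_width = np.sqrt((1 - rho12**2) * (1 - rho13**2))
    return rho12 * rho13 - half_width, rho12 * rho13 + half_width

print(rho23_bounds(0.0, 0.0))  # (-1.0, 1.0): no restriction at all
print(rho23_bounds(0.9, 0.9))  # roughly (0.62, 1.0): rho23 forced to be large
print(rho23_bounds(1.0, 1.0))  # (1.0, 1.0): rho23 pinned to exactly 1
```

The interval has full width 2 when rho12 = rho13 = 0 and collapses to the single point {1} when rho12 = rho13 = 1, which is exactly the corner-pinching in the figure.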
