Orthogonal Correlation Matrix(?) for Identifiability

Hi all,

Question about parameterizing a correlation/covariance matrix.

As Rick Farouni discusses here: Bayesian Factor Analysis, one way to make latent multivariate normals identifiable is to fix the latent covariance matrix so that the factors are standard normal, i.e. z \sim N(0, I). However, this isn’t always the most interpretable prior to use.
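To make the constraint concrete, here is a minimal numpy sketch (not code from the linked case study; the variable names `z`, `L`, `x` and all dimensions are made up for illustration) of generating from a factor model where the latent covariance is fixed to the identity, so all the correlation structure is pushed into the loadings:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, d = 500, 2, 5  # observations, latent factors, observed dimensions

# The identifiability device: latent factors are i.i.d. standard normal,
# i.e. their covariance is fixed to I_k rather than being a free parameter.
z = rng.standard_normal((n, k))

# Loadings (hypothetical values) and observed data with small noise.
L = rng.standard_normal((d, k))
x = z @ L.T + 0.1 * rng.standard_normal((n, d))
```

With the latent covariance pinned to I, any rotation/scale freedom has to be handled in the loading matrix instead, which is where the interpretability concern comes from.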

Is there some way to create the equivalent of an orthogonal LKJ prior? My thought on the “dumb” way to do this is a separate \text{Uniform}(-1, 1) prior for each element of the lower triangle of the correlation matrix, combined with an ordered prior on the variances. If anyone has tried something like this before, or has thoughts, I’d be curious to hear them.
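For what it’s worth, here is a numpy sketch of that element-wise construction (my own illustration, not a working prior): draw each strictly-lower-triangular entry uniformly on (-1, 1) and mirror it. One known caveat, which is presumably why this is the “dumb” version: nothing in this construction guarantees the result is positive semi-definite, so it is not automatically a valid correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4  # dimension (hypothetical)

# Start from the identity and fill the strictly-lower triangle with
# independent Uniform(-1, 1) draws.
R = np.eye(d)
low = np.tril_indices(d, k=-1)
R[low] = rng.uniform(-1, 1, size=len(low[0]))

# Mirror to make the matrix symmetric with unit diagonal.
R = R + R.T - np.eye(d)

# Element-wise uniforms do NOT ensure positive definiteness;
# a sampler would have to reject or otherwise constrain these draws.
is_pd = bool(np.all(np.linalg.eigvalsh(R) > 0))
```

A real prior built this way would need some mechanism (rejection, a transform, or a constrained parameterization) to keep the draws inside the set of valid correlation matrices, which the LKJ construction handles for free.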



Forgive me - but I don’t see where it says to use a covariance matrix of standard normals. Am I missing it somewhere?

The EFA approach Rick takes there seems to rely on 1) a diagonal latent covariance matrix and 2) a particular loading structure (lower triangular, tall, with a positive diagonal) that identifies the sign and prevents latent column label switching [well, in theory it does; in practice I find it doesn’t do so well enough].
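For readers who haven’t seen that constraint spelled out, here is a small numpy sketch (my own illustration, with made-up dimensions and values) of building a “tall” loading matrix whose top square block is lower triangular with a strictly positive diagonal:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 5, 2  # observed dims, latent factors; "tall" means d > k

L = np.zeros((d, k))

# Strictly-lower entries of the top k x k block are unconstrained.
low = np.tril_indices(k, k=-1)
L[low] = rng.standard_normal(len(low[0]))

# Positive diagonal (e.g. via exp of an unconstrained draw) pins down
# the sign of each latent column.
L[np.arange(k), np.arange(k)] = np.exp(rng.standard_normal(k))

# Rows below the top square block are fully unconstrained.
L[k:, :] = rng.standard_normal((d - k, k))
```

The zero upper triangle removes rotational freedom and the positive diagonal removes sign flips; as noted above, though, these constraints only rule out exact symmetries, and in practice the posterior can still mix across near-symmetric modes.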