Yes, `multi_normal_lpdf` requires an inversion (technically, we don't invert; we use matrix division for more stability). If there are multiple `y` (e.g., an array of vectors), the solve only happens once.
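A rough sketch of why one solve suffices for many observations (this is NumPy/SciPy, not Stan's internals): factor the covariance once, then reuse the factorization for each `y`.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
d, n = 3, 5
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)       # positive-definite covariance
mu = np.zeros(d)
ys = rng.normal(size=(n, d))          # n observation vectors

c, low = cho_factor(Sigma)            # O(d^3) factorization, done once
log_det = 2.0 * np.sum(np.log(np.diag(c)))

def lpdf(y):
    r = y - mu
    quad = r @ cho_solve((c, low), r)  # O(d^2) per observation
    return -0.5 * (d * np.log(2 * np.pi) + log_det + quad)

total = sum(lpdf(y) for y in ys)
```

The per-observation work after the one-time factorization is only a pair of triangular solves.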

`multi_normal_precision_lpdf` does not require an inversion if you can work with precision matrices instead of covariance matrices. Also, `multi_normal_cholesky_lpdf` works with a Cholesky factor rather than the full matrix, which makes inversion easy. And it's a much more stable parameterization if the covariance matrix is a parameter. But obviously it won't work in something like a GP, where we first build the covariance matrix, then factor it.
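To illustrate why the Cholesky parameterization makes inversion easy (a NumPy sketch, not Stan's implementation): given `L` with `Sigma = L L'`, the log density needs only one triangular solve and the diagonal of `L`.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)
L = np.linalg.cholesky(Sigma)         # what multi_normal_cholesky takes as input
mu = np.zeros(d)
y = rng.normal(size=d)

# log|Sigma| = 2 * sum(log(diag(L))); the quadratic form comes from
# one forward substitution, L z = y - mu, so no full inverse is formed.
z = np.linalg.solve(L, y - mu)
lpdf = -0.5 * (d * np.log(2 * np.pi) + z @ z) - np.log(np.diag(L)).sum()
```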

That shouldn't be too challenging. It only requires working out derivatives of `inverse(A + U * C * V)` to be efficient. As with other additions to Stan, it's a matter of motivating someone to do it.
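For reference, the closed form behind that low-rank update is the Woodbury identity, which is what makes efficient evaluation (and derivatives) plausible when `C` is small. A quick numerical check, with illustrative names:

```python
import numpy as np

# Woodbury: (A + U C V)^-1 = A^-1 - A^-1 U (C^-1 + V A^-1 U)^-1 V A^-1
rng = np.random.default_rng(2)
n, k = 6, 2                           # k << n is the regime where this pays off
A = np.diag(rng.uniform(1.0, 2.0, n)) # easy-to-invert base matrix
U = rng.normal(size=(n, k))
C = np.diag(rng.uniform(1.0, 2.0, k))
V = rng.normal(size=(k, n))

Ainv = np.linalg.inv(A)
woodbury = Ainv - Ainv @ U @ np.linalg.inv(
    np.linalg.inv(C) + V @ Ainv @ U) @ V @ Ainv
direct = np.linalg.inv(A + U @ C @ V)
```

The identity reduces the work from inverting an n-by-n matrix to inverting a k-by-k one, given a cheap inverse of `A`.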

When does this structure of covariance matrix come up? Will that structure give us something full rank? As is, we require positive-definite covariance and can't deal with only positive semi-definite covariance matrices.