I’m using Stan to build a general type of model that I’d like to fit very frequently on hundreds of similar-but-not-identical datasets.

I’m interested in selecting hyperparameter values for out-of-sample prediction performance, effectively more for regularization than for encoding known prior information. Ideally, I’d like those hyperparameters to be on some intuitive scale like 0 to 1 (e.g. flat prior to zero variance).

The usual solution would be to rescale all data before model fitting, so that constant scale values on priors make sense across datasets. However, my model involves a multivariate normal that provides a prior on other latent state variables, so the ‘data’ it acts on are themselves parameters internal to the Stan model and can’t be rescaled up front.

One solution would be to fit the model twice:

- Estimate the multivariate normal with weakly informative priors
- Take the estimates of the mvnorm’s scale parameters, discount them by some amount, and feed them in as known data for the scale parameters of the mvnorm in a second model fit

However, this is wasteful and janky.
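For concreteness, the second-stage fit would look something like the sketch below (all identifiers here are placeholders I made up, not part of an actual model):

```stan
// Hypothetical second-stage model: the mvnorm's scale and
// correlation estimates from fit 1 come in as fixed data,
// already discounted outside of Stan.
data {
  int<lower=1> K;
  vector<lower=0>[K] tau_fixed;  // discounted scale estimates from fit 1
  corr_matrix[K] R_fixed;        // correlation estimate from fit 1
}
parameters {
  vector[K] latent;              // the internal 'data' the mvnorm acts on
}
model {
  // quad_form_diag(R, tau) = diag(tau) * R * diag(tau)
  latent ~ multi_normal(rep_vector(0, K),
                        quad_form_diag(R_fixed, tau_fixed));
}
```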

Is it possible to use transformations within the model to supply a multivariate normal with scale-invariant variance parameters?
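What I have in mind is something like the following (again, a rough sketch with made-up names): a non-centered parameterization where the raw latent state is on a fixed unit scale regardless of the data’s scale, and a single `shrink` hyperparameter in [0, 1] controls the regularization:

```stan
// Hypothetical single-fit model: latent states are non-centered,
// so the raw parameters z live on a unit scale, and one
// intuitive 0-1 hyperparameter shrinks the in-model scales.
data {
  int<lower=1> K;
  real<lower=0, upper=1> shrink;  // 0 = no shrinkage, 1 = zero variance
}
parameters {
  vector[K] z;                    // unit-scale raw latent state
  cholesky_factor_corr[K] L_corr;
  vector<lower=0>[K] tau;         // scales, estimated within the model
}
transformed parameters {
  // Implies latent ~ multi_normal(0, Sigma), with Sigma built
  // from (1 - shrink) * tau.
  vector[K] latent
      = diag_pre_multiply((1 - shrink) * tau, L_corr) * z;
}
model {
  z ~ std_normal();
  L_corr ~ lkj_corr_cholesky(2);
  tau ~ student_t(3, 0, 1);
}
```

Is something along these lines a sensible way to get the scale invariance, or is there a standard trick for this?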