I understand my error now. In my head, a model weight of .60 to .70 felt like much stronger support for a model than an elpd_loo difference of about 1 SE; I now see that this conceptualization is incorrect. This is super helpful.
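For anyone who finds this thread later, here is a minimal sketch of how the two quantities sit side by side in the loo package (fit4 and fit5 are placeholder names for two fitted models with pointwise log-likelihoods, e.g. brms fits):

```r
library(loo)

# Placeholder fits: any two models with stored pointwise log-likelihoods
# (e.g., brms or rstanarm fits), called fit4 and fit5 here for illustration.
loo4 <- loo(fit4)
loo5 <- loo(fit5)

# The elpd_loo difference and its standard error (the "1 SE" comparison)
loo_compare(loo4, loo5)

# Model weights (the ".60 to .70" quantity), via stacking or pseudo-BMA
loo_model_weights(list(loo4, loo5), method = "stacking")
loo_model_weights(list(loo4, loo5), method = "pseudobma")
```

loo_compare() reports the elpd difference and its SE, while loo_model_weights() gives the weights I was misreading as stronger evidence.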
I only set priors on the regression intercepts and coefficients (both N(0, 5)). I scaled my regression inputs by 2 SDs to facilitate comparisons between the coefficients, since x1 is dichotomous and x2 is continuous. My 5 outcome measures are logit-transformed probabilities (multivariate beta regression isn't possible, or I would have done it) that haven't otherwise been transformed. My sample size is 4,524.
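In case it's useful, here is a rough sketch of the kind of setup I'm describing, assuming brms; the data frame dat and the variable names (y1–y5, x1, x2) are placeholders, not my actual names:

```r
library(brms)

# Rescale inputs by 2 SDs so the coefficients for the dichotomous x1 and
# the continuous x2 are on comparable scales (variable names are placeholders)
dat$x1_s <- (dat$x1 - mean(dat$x1)) / (2 * sd(dat$x1))
dat$x2_s <- (dat$x2 - mean(dat$x2)) / (2 * sd(dat$x2))

# N(0, 5) priors on the intercept and regression coefficients of each outcome
priors <- c(
  set_prior("normal(0, 5)", class = "Intercept", resp = "y1"),
  set_prior("normal(0, 5)", class = "b",         resp = "y1"),
  set_prior("normal(0, 5)", class = "Intercept", resp = "y2"),
  set_prior("normal(0, 5)", class = "b",         resp = "y2"),
  set_prior("normal(0, 5)", class = "Intercept", resp = "y3"),
  set_prior("normal(0, 5)", class = "b",         resp = "y3"),
  set_prior("normal(0, 5)", class = "Intercept", resp = "y4"),
  set_prior("normal(0, 5)", class = "b",         resp = "y4"),
  set_prior("normal(0, 5)", class = "Intercept", resp = "y5"),
  set_prior("normal(0, 5)", class = "b",         resp = "y5")
)

# Multivariate (Gaussian) regression on the logit-transformed outcomes,
# estimating residual correlations across the 5 outcomes
fit5 <- brm(
  bf(mvbind(y1, y2, y3, y4, y5) ~ x1_s + x2_s) + set_rescor(TRUE),
  data = dat,
  prior = priors
)
```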
Here’s the loo output for model 5:
```
Computed from 4000 by 4524 log-likelihood matrix

         Estimate    SE
elpd_loo -13563.7 187.1
p_loo        45.2   2.0
looic     27127.4 374.3
------
Monte Carlo SE of elpd_loo is 0.1.

All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.
```
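(For completeness, the Pareto k diagnostics behind that last line can be tabulated directly; fit5 here is again just a placeholder name for the model-5 fit object.)

```r
library(loo)

# fit5 is a placeholder for the fitted model-5 object (e.g., a brms fit)
loo5 <- loo(fit5)

# Tabulate the Pareto k diagnostics summarized by the "k < 0.5" message
pareto_k_table(loo5)

# Plot the pointwise k values for a visual check
plot(loo5)
```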