Do you happen to have any documentation explaining the loo_R2() function? I am trying to compare it to the bayes_R2() function based on Gelman et al. (2017, 2018). When I compare two different regression models, the ordering of preference between them (i.e. which is 'better') changes depending on which of the two statistics I compute (bayes_R2() or loo_R2()). I also note that loo_R2() does not produce any Bayesian uncertainty interval estimates for me, and complains that the "Pareto k diagnostic values are too high."

LOO-R^2 is described in the online appendix of Gelman, Goodrich, Gabry, and Vehtari (2018), "R-squared for Bayesian regression models," The American Statistician.

That is possible, because Bayesian R^2 is over-optimistic: it uses the same data both to compute the posterior and to evaluate R^2. LOO-R^2 uses leave-one-out cross-validation (LOO-CV) to estimate what R^2 would be for new independent data coming from the same data-generating process.
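As a rough illustration of the per-draw Bayesian R^2 from the paper (not the rstanarm implementation, which extracts draws from a fitted model), here is a Python sketch. The data, the "posterior draws" of the coefficients, and the sigma draws below are all simulated stand-ins, and the residual-variance term uses the model sigma draws, one of the variants discussed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): n observations, S posterior draws.
# In practice the draws come from a fitted Bayesian regression model.
n, S = 100, 4000
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=n)

# Fake "posterior draws" of intercept, slope, and residual sd,
# standing in for draws from a fitted model.
alpha = rng.normal(2.0, 0.1, size=S)
beta = rng.normal(1.5, 0.1, size=S)
sigma = rng.normal(1.0, 0.05, size=S)

# Linear predictor for each draw: S x n matrix.
y_pred = alpha[:, None] + beta[:, None] * x[None, :]

# Bayesian R^2, one value per posterior draw:
#   R^2_s = Var_n(y_pred_s) / (Var_n(y_pred_s) + sigma_s^2)
var_fit = y_pred.var(axis=1)
var_res = sigma ** 2
r2_draws = var_fit / (var_fit + var_res)

print(r2_draws.mean(), np.percentile(r2_draws, [2.5, 97.5]))
```

Because R^2 is computed for every posterior draw, you get a full distribution, which is why bayes_R2() can report an uncertainty interval directly.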

Computing uncertainty intervals for LOO-R^2 is more difficult, because there are N different leave-one-out posteriors. The online appendix therefore uses the Bayesian bootstrap to obtain alternative non-parametric uncertainty estimates.
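The Bayesian bootstrap idea can be sketched as follows, again with simulated stand-ins: given observed y and leave-one-out point predictions (which in practice would come from PSIS-LOO or exact LOO refits), draw Dirichlet(1, ..., 1) weights over the observations and recompute a weighted LOO-R^2 for each weight vector. The specific LOO-R^2 form used here (one minus the ratio of the LOO residual variance to the data variance) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins (illustrative): observed y and leave-one-out point
# predictions y_loo. In practice y_loo comes from LOO-CV, not simulation.
n = 100
y = rng.normal(size=n)
y_loo = y + rng.normal(scale=0.5, size=n)  # pretend LOO predictions
e_loo = y - y_loo                          # LOO residuals

def weighted_var(z, w):
    """Variance of z under normalized weights w."""
    m = np.sum(w * z)
    return np.sum(w * (z - m) ** 2)

# Bayesian bootstrap: Dirichlet(1,...,1) weights over observations,
# one draw of LOO-R^2 per weight vector.
B = 4000
w = rng.dirichlet(np.ones(n), size=B)
r2_loo = np.array([1.0 - weighted_var(e_loo, wi) / weighted_var(y, wi)
                   for wi in w])

print(np.median(r2_loo), np.percentile(r2_loo, [2.5, 97.5]))
```

The spread of r2_loo across the Dirichlet weight draws gives a non-parametric uncertainty estimate without needing the N leave-one-out posteriors themselves. Note that if the Pareto k diagnostics are too high, the PSIS-LOO predictions feeding into this are themselves unreliable, so the interval should not be trusted either.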