Inflating posterior uncertainty based on LOO?

I am fitting a linear regression model to 10 data points and then making posterior predictions at thousands of points. I am worried about understating the uncertainty of the predictions. One reason for this worry is the discrepancy between the in-sample estimated residual standard deviation sigma and the LOO residual standard deviation sigma_loo: the latter is about 25% larger than the former and falls outside its posterior 95% interval.
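For concreteness, the gap I am describing can be reproduced for an ordinary least-squares fit, where the exact LOO residuals follow from the hat-matrix identity e_loo_i = e_i / (1 - h_ii). A minimal NumPy sketch with simulated data (all names and the simulation setup are illustrative, not my actual data):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=1.0, size=n)

# OLS fit and ordinary residuals
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Diagonal of the hat matrix H = X (X'X)^{-1} X'
h = np.diag(X @ np.linalg.solve(X.T @ X, X.T))

# Exact LOO residuals for OLS: divide each residual by (1 - leverage)
loo_resid = resid / (1 - h)

# In-sample residual sd (df-adjusted) vs LOO residual sd
sigma_hat = np.sqrt(resid @ resid / (n - p))
sigma_loo = np.sqrt(loo_resid @ loo_resid / n)
print(sigma_hat, sigma_loo)
```

Since 0 < 1 - h_ii <= 1, every LOO residual is at least as large in magnitude as the corresponding in-sample residual, so sigma_loo typically exceeds sigma_hat, especially with only 10 points.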

I could replace (or rescale) sigma with sigma_loo, but this seems crude and somewhat incoherent from a Bayesian perspective. Is there a standard way to incorporate the LOO information about uncertainty into the posterior predictive? Or an argument for why the original sigma is adequate?

Are you looking for something like E_loo (Compute weighted expectations) in the loo package? The same functionality is also available in brms via loo_predict.brmsfit and in rstanarm via loo_predict.stanreg.
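For readers unfamiliar with these functions: the underlying idea is to reweight posterior draws by importance ratios proportional to 1 / p(y_i | theta_s), so that weighted expectations approximate leave-one-out predictions (E_loo additionally applies Pareto smoothing to the ratios, which this sketch omits). A minimal NumPy illustration with a normal model of known unit variance and a flat prior (all names and the setup are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, S = 10, 4000
y = rng.normal(loc=2.0, scale=1.0, size=n)

# Posterior for mu under y_j ~ N(mu, 1) with a flat prior: N(ybar, 1/n)
mu_draws = rng.normal(loc=y.mean(), scale=1 / np.sqrt(n), size=S)

i = 0  # point to leave out
# Raw importance ratios 1 / p(y_i | mu_s): up-weight draws that fit y_i poorly
log_lik_i = -0.5 * (y[i] - mu_draws) ** 2 - 0.5 * np.log(2 * np.pi)
w = np.exp(-log_lik_i)
w /= w.sum()

post_mean = mu_draws.mean()       # full-posterior expectation of mu
loo_mean = np.sum(w * mu_draws)   # LOO-weighted expectation
print(post_mean, loo_mean)
```

Here loo_mean approximates the posterior mean of mu after dropping y_i, i.e. it is pulled toward the mean of the remaining nine observations.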
@avehtari would know the answer to your questions directly.

Can you provide more information on how you are doing the model fitting and the predictions?

You don’t need this in the usual posterior predictive inference.

Thanks @jd_c and @avehtari.

Aki this is all I needed to hear:

You don’t need this in the usual posterior predictive inference.

I got lost in frequentist thinking with loo. Maybe I can approach the problem (if there is one) more directly by expanding the model with a fat-tailed error distribution, or by stacking alternative specifications.
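On the fat-tailed option: switching from normal to Student-t errors at the same scale widens the predictive intervals directly. A quick numerical illustration (nu = 4 is an arbitrary choice for the sketch; Monte Carlo quantiles rather than exact ones):

```python
import numpy as np

rng = np.random.default_rng(2)
S = 200_000
sigma = 1.0  # same scale parameter for both error distributions

normal_draws = rng.normal(scale=sigma, size=S)
t_draws = sigma * rng.standard_t(df=4, size=S)  # heavier tails at equal scale

# Half-width of the central 95% interval (by symmetry, the 0.95 quantile of |x|)
q_norm = np.quantile(np.abs(normal_draws), 0.95)
q_t = np.quantile(np.abs(t_draws), 0.95)
print(q_norm, q_t)  # roughly 1.96 vs roughly 2.78
```

So a t(4) error model stretches the central 95% predictive interval by roughly 40% relative to the normal at the same scale, which is in the ballpark of the 25% gap between sigma and sigma_loo described above.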
