How to calculate the 95% confidence bounds of the predictive result


A general question: if I want the 95% confidence bounds of my model's predictions, how should I compute them?
My thought was to plug the MAP parameter estimates into my conditional model, then draw a wider outer band around the main curve to represent the model uncertainty. Using MCMC in Stan, we can get each parameter's posterior distribution, so I could plug the 95% extreme parameter values into my model to get the outer bounds. Does that sound reasonable, or is there a standard way to calculate the uncertainty?


Take a look at this vignette on posterior predictive checks.

The idea is to feed all of your MCMC parameter draws through your model to produce a distribution over the model's outputs. Each MCMC iteration has its own set of parameter values, which can be plugged into the model to produce one predicted value; the collection of those values is the posterior predictive distribution, and you take quantiles of it directly. In general you can't take a quantile or expectation of the posterior on the parameter values and expect it to map to the corresponding quantile or expectation of the predictive distribution.
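A minimal sketch of that idea, using a hypothetical simple linear model and simulated posterior draws standing in for what you would extract from a fitted Stan object:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior: 4000 MCMC draws of (alpha, beta, sigma) for a
# model y ~ Normal(alpha + beta * x, sigma). In practice these would come
# from your fitted Stan model, one row per MCMC iteration.
n_draws = 4000
alpha = rng.normal(1.0, 0.1, n_draws)
beta = rng.normal(2.0, 0.05, n_draws)
sigma = np.abs(rng.normal(0.5, 0.05, n_draws))

x_new = 1.5  # point at which we want a 95% predictive interval

# Push EVERY draw through the model (including observation noise),
# rather than plugging one "95% extreme" parameter value into the model.
y_rep = rng.normal(alpha + beta * x_new, sigma)

# The 95% interval is taken from the predictive draws themselves.
lower, upper = np.quantile(y_rep, [0.025, 0.975])
print(lower, upper)
```

Note that the interval comes from quantiles of `y_rep`, not from quantiles of `alpha`, `beta`, or `sigma` individually; that is the step the plug-in approach gets wrong.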


I use the HDInterval R package and feed it the posterior samples of one of my parameters; it outputs the 95% highest-density interval (HDI). HDIs are generally preferable to equal-tailed 95% credible intervals (which I think is what you are asking about) because they behave sensibly for asymmetrical distributions. Another option, arguably better than HDIs, is lowest posterior loss (LPL) intervals; the benefit of LPL over HDI is that they are invariant under reparameterization, and there is an R package that can compute those as well. If you really want an equal-tailed 95% credible interval, this post I found on Google might help:
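To make the HDI-vs-equal-tailed distinction concrete, here is a sketch of the same "shortest interval containing 95% of the samples" idea that HDInterval implements for unimodal posteriors, written in Python with simulated skewed samples standing in for real posterior draws:

```python
import numpy as np

def hdi(samples, prob=0.95):
    """Shortest interval containing `prob` of the samples
    (the same idea HDInterval uses for unimodal posteriors)."""
    s = np.sort(np.asarray(samples))
    n = len(s)
    k = int(np.ceil(prob * n))           # points each candidate interval holds
    widths = s[k - 1:] - s[: n - k + 1]  # width of every candidate interval
    i = np.argmin(widths)                # index of the shortest one
    return s[i], s[i + k - 1]

# On a skewed (lognormal) posterior, the HDI is shorter than the
# equal-tailed interval and better reflects where the mass sits.
rng = np.random.default_rng(1)
samples = rng.lognormal(0.0, 0.5, 10_000)
lo, hi = hdi(samples)
et_lo, et_hi = np.quantile(samples, [0.025, 0.975])
print("HDI:", lo, hi)
print("equal-tailed:", et_lo, et_hi)
```

For a symmetric posterior the two intervals coincide; the difference only matters for skewed or otherwise asymmetric distributions, which is exactly the case the answer above raises.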