I just wrote a case study that isn’t particularly Stan-related, but uses Stan.

- Bob Carpenter. 2019. DRAFT: For Probabilistic Prediction, Full Bayes is Better than Point Estimators.
- bayes-versus.pdf (364.3 KB)
- Source code [GitHub]

Comments most welcome (especially if you know how to fix knitr’s table rendering in pdf).

Here’s the abstract:

A probabilistic prediction takes the form of a distribution over possible outcomes. With proper scoring rules such as log loss or squared error, it is possible to evaluate such a probabilistic prediction against a true outcome. This short note provides a simulation-based evaluation of full Bayesian inference, where we average over our estimation uncertainty, and two forms of point estimation, one that uses the posterior mode (maximum a posteriori) and one that uses the posterior mean (as is typical with variational inference). The example we consider is a simple Bayesian logistic regression with potentially correlated predictors and weakly informative priors. To make a long story short, full Bayes has lower expected log loss and squared error than either of the point estimators.
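To give a flavor of the comparison, here's a minimal sketch in NumPy (not the case study's actual Stan code) of scoring full Bayes against posterior-mean and MAP point estimates by log loss. It uses a single-predictor logistic regression with a normal(0, 1) prior and a grid approximation to the posterior; the names, grid resolution, and sample sizes are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def inv_logit(u):
    return 1.0 / (1.0 + np.exp(-u))

def log_loss(y, p):
    # Negative average log predictive probability of the true outcomes.
    return -np.mean(y * np.log(p) + (1 - y) * np.log1p(-p))

# Hypothetical single-predictor setup: beta ~ normal(0, 1), small training
# set, larger test set (the case study uses Stan and correlated predictors).
beta_true = rng.normal(0.0, 1.0)
x_train = rng.normal(size=50)
y_train = rng.binomial(1, inv_logit(beta_true * x_train))
x_test = rng.normal(size=5_000)
y_test = rng.binomial(1, inv_logit(beta_true * x_test))

# Grid approximation to the posterior p(beta | y) under the normal(0, 1) prior.
grid = np.linspace(-4.0, 4.0, 801)
log_prior = -0.5 * grid**2
p_grid = inv_logit(np.outer(grid, x_train))  # grid points x observations
log_lik = (y_train * np.log(p_grid)
           + (1 - y_train) * np.log1p(-p_grid)).sum(axis=1)
log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Full Bayes: average the predictive probability over the posterior.
p_bayes = post @ inv_logit(np.outer(grid, x_test))

# Point estimates: posterior mean and posterior mode (MAP), plugged in.
beta_mean = post @ grid
beta_map = grid[np.argmax(log_post)]
p_mean = inv_logit(beta_mean * x_test)
p_map = inv_logit(beta_map * x_test)

print("log loss, full Bayes:      ", log_loss(y_test, p_bayes))
print("log loss, posterior mean:  ", log_loss(y_test, p_mean))
print("log loss, posterior mode:  ", log_loss(y_test, p_map))
```

On any single simulated dataset the ordering can go either way; the paper's claim is about the expectation over many replications, so a faithful reproduction would wrap this in an outer loop over simulated datasets and average the scores.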

There’s also a bit on evaluating proper scoring rules.

I should’ve done this ages ago. I’ve done things like this in my repeated binary trial case study, but that was in the context of binomials and it was buried among a lot of other material. I committed the pdf and html to the repo, so if you want the html, it’s there.