For irrelevant reasons, I am comparing probability-scale predictions on the training data from glm() and stan_glm() logistic regression models. To obtain probability-scale predictions from a stan_glm() model, I thought I needed to transform from the log-odds scale to the probability scale via the inverse logit.

However, it seems that no transformation is needed: posterior_predict() with no transform function appears to generate the correct probabilities directly.

I assume I’m missing something obvious, but I would like to know why the transformation is unnecessary. Or, if that conclusion is wrong, I would like to know what is actually going on.
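For reference, the transformation I had in mind is the inverse logit. In base R, plogis() (the standard logistic CDF) computes exactly this, so no hand-rolled function is needed; a minimal illustration:

```r
# The inverse logit maps log-odds to probabilities.
# plogis() is the standard logistic CDF, i.e. the same function:
inv_logit <- function(x) 1 / (1 + exp(-x))

lp <- c(-2, 0, 1.5)   # example values on the log-odds scale
inv_logit(lp)
plogis(lp)            # identical result
```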

The family argument in stan_glm() defaults to gaussian, so you are fitting a linear “probability” “model”; if you then apply the standard logistic CDF to its predictions, it sort of yields something close to the right answer, but it is not a logistic regression.
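For what it’s worth, stats::glm() has the same default, which is easy to check on simulated data (variable names below are made up for illustration):

```r
set.seed(1)
n <- 200
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-0.5 + 1.2 * x))  # simulated binary outcome
d <- data.frame(x, y)

fit_default <- glm(y ~ x, data = d)                     # gaussian by default
fit_logit   <- glm(y ~ x, data = d, family = binomial)  # actual logistic fit

family(fit_default)$family  # "gaussian"
family(fit_logit)$family    # "binomial"
```

The same applies to stan_glm(): unless you pass family = binomial(link = "logit") explicitly, you get a linear model even for a 0/1 outcome.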

Use posterior_linpred(fit, transform = TRUE), which generates the posterior distribution of the conditional mean in a logit model. Note that this is not the posterior predictive distribution of the (future) outcomes, which is what posterior_predict() generates; that yields a matrix of 0s and 1s that are predictions of the observable outcomes.
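The distinction can be sketched without fitting anything in Stan: given posterior draws of the coefficients, the conditional mean is plogis() of the linear predictor (the kind of object posterior_linpred(..., transform = TRUE) returns), while the predictive distribution draws Bernoulli outcomes from those means (the kind of object posterior_predict() returns). All numbers below are made up for illustration:

```r
set.seed(2)
S <- 1000                            # number of posterior draws
beta <- cbind(rnorm(S, -0.5, 0.1),   # fake posterior draws of the intercept
              rnorm(S,  1.2, 0.1))   # and the slope (illustration only)
X <- cbind(1, c(-1, 0, 1))           # design matrix for 3 new observations

eta  <- beta %*% t(X)                # S x 3 draws of the linear predictor
mu   <- plogis(eta)                  # conditional-mean draws, strictly in (0, 1)
yrep <- matrix(rbinom(length(mu), 1, mu), nrow = S)  # predictive draws: 0s and 1s

range(mu)                  # inside (0, 1)
unique(as.vector(yrep))    # only 0 and 1
```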

Ah, sorry, I should have looked more closely at my example. I am indeed using family = binomial. And, thank you for clarifying the difference between posterior_linpred and posterior_predict.

I didn’t realize that posterior_predict() generates predictions on the scale of the response, so I was unknowingly applying the inverse logit to 1s and 0s.
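In hindsight the mistake is easy to spot: the inverse logit of a 0/1 posterior predictive matrix can only take two values, plogis(0) = 0.5 and plogis(1) ≈ 0.731, whereas averaging the 0/1 draws per observation is what recovers probabilities from posterior_predict() output. A quick check with made-up draws:

```r
set.seed(3)
p_true <- c(0.2, 0.5, 0.8)  # made-up true probabilities for 3 observations
yrep <- sapply(p_true, function(p) rbinom(1000, 1, p))  # fake 0/1 predictive draws

unique(as.vector(plogis(yrep)))  # only plogis(0) = 0.5 and plogis(1) = 0.731...
colMeans(yrep)                   # roughly recovers 0.2, 0.5, 0.8
```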