What we now recommend is to think about priors on the scale of the quantity you're actually predicting.
Have a look at: https://www.youtube.com/watch?v=ZRpo41l02KQ&t=2694
I think this advice is specifically motivated by the problem of setting priors in a Bernoulli regression.
We can work a quick example for a model that looks like:
$$
\begin{aligned}
\alpha &\sim \text{normal}(0, 1) \\
p &= \text{logit}^{-1}(\alpha) \\
y &\sim \text{bernoulli}(p)
\end{aligned}
$$
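To make the model concrete, here's a minimal prior-predictive simulation in R (the seed and the 100 observations are just illustrative choices, not part of the model above):

```r
# Prior predictive simulation: draw alpha from its prior,
# transform it to a probability, then simulate Bernoulli data.
inverse_logit <- function(x) {
  1 / (1 + exp(-x))
}

set.seed(1)
alpha <- rnorm(1, 0, 1)        # alpha ~ normal(0, 1)
p <- inverse_logit(alpha)      # p = logit^-1(alpha)
y <- rbinom(100, 1, p)         # 100 Bernoulli(p) draws
mean(y)                        # sample proportion, close to p
```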
So this implies a prior on p (the transformed parameter) that we can visualize by simulating from the prior:
inverse_logit <- function(x) {
  1 / (1 + exp(-x))
}

# Draw alpha from its prior and push the draws through the inverse logit
hist(inverse_logit(rnorm(100, 0, 1)))
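A quick numerical check of the same thing (the 2.5%/50%/97.5% quantile probabilities are my choice, and I use more draws than the histogram to reduce noise):

```r
inverse_logit <- function(x) 1 / (1 + exp(-x))

set.seed(1)
p <- inverse_logit(rnorm(1e5, 0, 1))
# With a normal(0, 1) prior on alpha, the implied prior on p
# keeps most of its mass away from the extremes:
quantile(p, c(0.025, 0.5, 0.975))
# roughly 0.12, 0.50, 0.88
```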
And if we instead set the prior standard deviation to 10, the implied prior on p piles up near 0 and 1:
hist(inverse_logit(rnorm(100, 0, 10)))