The Prior Choice Recommendations wiki came up in another thread, and it reminded me of something I've wanted to discuss here for a while. I often see folks using a peaked-at-zero prior (e.g. `normal(0,1)`) for variability parameters (standard deviations, variances). This can be useful for imposing shrinkage in hierarchical models, but I've even seen peaked-at-zero priors on things like measurement error, where zero is surely a rather incredible value. For example, here's a trivial model:
```stan
data{
  int N ;
  vector[N] Y ; // model assumes data has been scaled to mean=0, sd=1
}
parameters{
  real mu ;
  real<lower=0> sigma ;
}
model{
  mu ~ normal(0,1) ;
  sigma ~ normal(0,1) ; // arguably unreasonable peaked-at-zero prior!
  Y ~ normal(mu,sigma) ;
}
```
where `sigma` is given a peaked-at-zero prior, implying that one thinks it most likely that their measurement was achieved with perfect accuracy. Instead, I've been recommending folks use something like:
```stan
sigma ~ weibull(2,1) ; // zero-as-incredible prior
```
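To make the intuition concrete: the `weibull(2,1)` density is f(sigma) = 2·sigma·exp(−sigma²), which is exactly zero at sigma = 0 and peaks around sigma ≈ 0.71, so "perfect measurement" is treated as incredible while most of the prior mass still sits below ~1.5, which seems reasonable for data scaled to sd = 1.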
Possibly a prior based on `gamma()` would also work; I am just more familiar with the `weibull()` distribution.
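For anyone who does prefer the gamma route, a drop-in version with the same zero-density-at-zero behaviour might look like the line below (the shape/rate values here are just an illustrative guess on my part, not a calibrated recommendation):

```stan
sigma ~ gamma(2, 2) ; // shape=2, rate=2: density is zero at sigma=0, mode at (2-1)/2 = 0.5
```

Any shape parameter greater than 1 gives a gamma density that vanishes at zero; a shape at or below 1 puts you back in peaked-at-zero (or even unbounded-at-zero) territory.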
I don’t think I see any content on the Prior Choice Recommendations page related to this topic (though maybe the last bullet from this section counts?), so what does everyone think of the idea of adding an explicit mini-section on this topic?