What if posteriors are not sensitive to priors in Bayesian inference?

Welcome to the Stan Discourse @nguyenthiphuong,

I took the liberty of editing that out, but as @fweber144 mentioned, make sure you remove the template so readers will not be distracted and will get to your point quickly (for me that’s one of the most important things when trying to reply: when I don’t have a lot of time, I can’t spend it deciphering posts that are overly long or cryptic).

As @fweber144 also pointed out, that is the short (and correct) answer, but it’s part of a much longer discussion, much of which is subjective opinion about priors being subjective. Another short but important point: there is really no such thing as an uninformative prior (the term is often used as a stand-in for flat/uniform priors); for a given model, the same prior can be uninformative in one case and informative in another (e.g. two labs, one of which has measured some quantity previously and one that never has). More importantly, there’s no such thing as a default choice of prior: doing MLE carries the implicit assumption of flat priors, and that is still a choice. So there’s some confusion in the “Frequentist-Bayesian Stat Wars” that is not backed by actual probability theory (though that’s also my personal opinion, and I’m sure we could have a long discussion with diverging points of view…).
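
Here’s a minimal sketch of that first point (nothing Stan-specific, just an illustration with a conjugate Beta-Binomial model in Python): the same prior shifts the posterior a lot when data are scarce and barely at all when data are plentiful, so the “(in)sensitivity” is a property of the prior *and* the data/model together.

```python
# Illustration only: conjugate Beta-Binomial, so the posterior is available
# in closed form as Beta(a + successes, b + trials - successes).
from scipy import stats

def posterior(a, b, successes, trials):
    """Beta posterior for a Binomial likelihood under a Beta(a, b) prior."""
    return stats.beta(a + successes, b + trials - successes)

priors = {"flat Beta(1, 1)": (1, 1), "skeptical Beta(2, 20)": (2, 20)}
datasets = {"small data (7/10)": (7, 10), "large data (700/1000)": (700, 1000)}

for data_label, (k, n) in datasets.items():
    for prior_label, (a, b) in priors.items():
        post = posterior(a, b, k, n)
        print(f"{data_label:24s} {prior_label:24s} posterior mean = {post.mean():.3f}")

# With 10 trials the two priors give very different posterior means (~0.67 vs ~0.28);
# with 1000 trials they nearly agree (~0.70 vs ~0.69).
```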
