Measuring sensitivity to priors

I’m running a Bayesian linear regression with non-negativity constraints on some parameters. I can see when a constraint is having a big effect, because the posterior bunches up against 0.

Can I do a similar exercise for assessing how the choice of prior mean and standard deviation is affecting the posterior?

Sure – there is nothing preventing you from making prior adjustments, refitting your model, and seeing how the posterior inferences change. The degree to which your adjustments change your results will suggest how sensitive the posterior is to the prior.
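As a concrete sketch of that refit-and-compare loop, here is one way it might look in R, assuming the model is fit with brms (the formula, data, and prior scales below are placeholders, not your actual model):

```r
library(brms)

# Fit the same model under two prior settings (placeholder formula and priors)
fit_base <- brm(y ~ x1 + x2, data = dat,
                prior = set_prior("normal(0, 1)", class = "b"))
fit_wide <- brm(y ~ x1 + x2, data = dat,
                prior = set_prior("normal(0, 10)", class = "b"))

# Compare posterior summaries of the coefficients across the two fits
fixef(fit_base)
fixef(fit_wide)
```

If the coefficient summaries barely move when the prior scale changes by an order of magnitude, the data are dominating; large shifts indicate prior sensitivity.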

Is there a more systematic method than trial and error?

It really depends on what you mean by “systematic” and what the goal of your sensitivity analysis is.

When constructing a prior model, you should be principled about what goes into it. Ideally, you would inform your priors with subject-matter expertise or logical constraints, restricting parameters to a plausible region. Strong priors sometimes raise debate about whether they are valid; a sensitivity analysis can show whether the major inferences from your model persist or break down when the priors are weakened.

There’s not really a “right” answer here; it’s about investigating and understanding your model and ensuring that the inferences are consistent with both your expertise and what the data tell you.

There’s a pretty good R package for doing this with Stan models: priorsense (https://github.com/n-kall/priorsense), for prior diagnostics and sensitivity analysis. It uses power-scaling with importance sampling to avoid computationally expensive refits.
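A minimal priorsense sketch, assuming `fit` is an existing fitted Stan model object (e.g. from brms or CmdStanR):

```r
library(priorsense)

# Power-scale the prior and likelihood and report sensitivity diagnostics
# for each parameter
powerscale_sensitivity(fit)

# Visualize how the posterior densities shift under power-scaling
powerscale_plot_dens(fit)
```

The diagnostic table flags parameters whose posteriors are sensitive to scaling the prior, the likelihood, or both (the latter suggesting prior–data conflict).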


In addition to priorsense, the adjustr package (Stan Model Adjustments and Sensitivity Analyses using Importance Sampling) can also be useful. priorsense uses a generic alpha-scaling to assess prior and likelihood sensitivity, while adjustr allows explicit definition of alternative priors.
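A sketch of adjustr's explicit-alternative-priors workflow, based on its documentation (the parameter name `beta` and the alternative prior scales are placeholders; `fit` is an existing Stan fit):

```r
library(adjustr)

# Specify alternative priors for a sampling statement in the Stan model,
# here varying the scale of a normal prior on beta
spec <- make_spec(beta ~ normal(0, scale), scale = c(0.5, 1, 2))

# Reweight the existing posterior draws via importance sampling,
# without refitting the model
adjusted <- adjust_weights(spec, fit)

# Summarize a quantity of interest under each alternative prior
summarize(adjusted, mean(beta))
```

Because it reweights existing draws, adjustr is only reliable when the alternative priors are not too far from the original (check the reported Pareto-k diagnostics).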
