If you’re like me, you know that checking the sensitivity of your inferences to priors or other likelihood details is important, especially in complex models. But re-compiling and re-fitting a Stan program for every combination of such adjustments is time-consuming. Doing the importance sampling manually is possible, but not particularly user-friendly, and to my knowledge there aren’t any ready-made alternatives.
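For context, "manual importance sampling" here means reweighting existing posterior draws by the ratio of the new prior density to the old one, then stabilizing the weights, e.g. with Pareto smoothing via the `loo` package. A minimal sketch of that manual process (the draws and prior choices below are made up purely for illustration):

```r
library(loo)

# Illustrative stand-in for posterior draws of a scale parameter tau
# from an already-fitted model that used a Normal(0, 5) prior.
set.seed(1)
tau <- abs(rnorm(4000, 0, 3))

# Log importance ratios: log density under the alternative prior
# (here Normal(0, 1)) minus log density under the original prior,
# evaluated at each posterior draw.
log_ratios <- dnorm(tau, 0, 1, log = TRUE) - dnorm(tau, 0, 5, log = TRUE)

# Pareto-smooth the weights to stabilize the estimates.
psis_fit <- psis(log_ratios, r_eff = NA)
w <- weights(psis_fit, normalize = TRUE, log = FALSE)

# Posterior mean of tau under the alternative (tighter) prior.
sum(w * tau)
```

Even this simple case requires writing out the log prior densities by hand for every adjustment, which is exactly the bookkeeping the package is meant to automate.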
To help with this, I’ve created an R package that uses Pareto-smoothed importance sampling to show how model inferences change under alternative specifications. It’s available here and is perhaps best illustrated by the vignette, which walks through a sensitivity analysis of the hierarchical 8-schools model.
Basically, the package provides functions to (1) declare the alternative specifications you’d like to explore, (2) perform the importance sampling, and (3) examine posterior quantities of interest under each specification.
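In rough pseudocode, the three steps might chain together like this (the function names below are placeholders I’ve made up to convey the shape of the workflow, not the package’s actual API; `mu` and `tau` are the usual 8-schools parameters):

```r
# Hypothetical sketch of the three-step workflow; function names
# are placeholders, not the package's real exported functions.

# (1) Declare the alternative specification(s) to explore,
#     e.g. a tighter prior on the hierarchical scale tau.
spec <- make_specification(tau ~ normal(0, 1))

# (2) Reweight the existing posterior draws via Pareto-smoothed
#     importance sampling, without re-compiling or re-fitting.
adjusted <- reweight_draws(fit, spec)

# (3) Examine posterior quantities under each specification.
summarize_draws(adjusted, mean(mu), sd(tau))
```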
The workflow/API is still experimental, so I’m open to any suggestions in that direction, or any other comments!