New package for sensitivity analysis using importance sampling

If you’re like me, you know that checking the sensitivity of your inferences to priors or other likelihood details is important, especially in complex models. But re-compiling and re-fitting a Stan program for every combination of such adjustments is time-consuming. Manual importance sampling is possible but not particularly user-friendly, and to my knowledge there are no other alternatives.

To help with this, I’ve created an R package that uses Pareto-smoothed importance sampling to understand how model inferences change under alternative specifications. It’s available here, and it’s perhaps best illustrated by the vignette, which walks through a sensitivity analysis of the hierarchical 8-schools model.

Basically, the package provides functions to (1) define the alternative specifications you’d like to explore, (2) perform the importance sampling, and (3) examine posterior quantities of interest under each specification.
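Concretely, a sensitivity analysis of the 8-schools model might look something like this (a minimal sketch; since the API is experimental, treat the exact names and arguments as illustrative, and `eightschools_fit` is a placeholder for your own fitted stanfit object):

```r
library(adjustr)

# (1) Define the alternatives to explore: here, swapping the normal prior
#     on eta for Student-t priors with varying degrees of freedom
spec <- make_spec(eta ~ student_t(df, 0, 1), df = 1:10)

# (2) Compute Pareto-smoothed importance weights for each alternative,
#     reusing the draws from the already-fitted model
#     (eightschools_fit is a placeholder name)
adjusted <- adjust_weights(spec, eightschools_fit)

# (3) Examine posterior quantities of interest under each specification
summarize(adjusted, mean(mu), sd(mu))
```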

The workflow/API is still experimental, so I’m open to any suggestions in that direction, or any other comments!


This looks great. I’ve always wondered whether one could use PSIS for sensitivity analysis of Bayes factors computed via bridge sampling. Do you think that would be possible?

Very nice package, and I like the systematic way you test the effect of different priors. It would definitely be nice to have this work together with brms… ;)

Thank you! The package as-is will work with brms, provided you pass brms_obj$fit instead of just brms_obj. You can use extract_samp_stmts(brms_obj$fit) to get the names of the Stan parameters and their sampling statements; these generally don’t match up with the names in the brms output.
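For example (a sketch; the model formula and data names here are just placeholders):

```r
# A hypothetical brms model
fit <- brms::brm(y ~ x + (1 | group), data = dat)

# List the Stan-level parameter names and their sampling statements;
# note the underlying stanfit is passed, not the brmsfit itself
extract_samp_stmts(fit$fit)
```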

I’ll work on implementing direct handling of brms objects so that they can be passed directly.

Edit: I’d forgotten I’d already implemented this for the main adjust_weights function; you can pass a brms fit object directly to it. I’ll add the same functionality to the other functions soon.
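So, with a specification spec built as in the earlier sketch, something like this should work:

```r
# adjust_weights() handles the brms object directly; no $fit needed here
adjusted <- adjust_weights(spec, fit)
```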


I’ll definitely try this out for the next analysis!

This looks very useful, @cmccartan.

I have so far only read the vignette, so I hope this question is not ignorant:

I wonder whether one important condition for getting useful weights is that the sampler successfully explored the posterior of the baseline model (no divergences, no problems with BFMI, …). What do you think?

The validity of the original samples is conditional on successful exploration, and since the importance sampling just re-weights the original samples, the validity of the alternative fits is also conditional on there being no problems with the original sampling.

However, even with a well-specified and well-explored original model, if the alternative models are too different, the distribution of the importance weights can still be problematic and lead to unreliable and highly variable estimates (which can be diagnosed in part with the Pareto k statistic).

So successful exploration is necessary but not sufficient.
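To illustrate the diagnostic itself, here’s a self-contained sketch using loo::psis() with simulated placeholder values (in a real analysis the log ratios come from evaluating the alternative and baseline log densities at each posterior draw):

```r
library(loo)

set.seed(1)
# Placeholder log importance ratios for 4000 posterior draws; in practice,
# log p_alt(theta_s | y) - log p_base(theta_s | y) for each draw s
log_ratios <- rnorm(4000)

psis_fit <- psis(log_ratios, r_eff = NA)
psis_fit$diagnostics$pareto_k  # values above ~0.7 flag unreliable weights
```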
