I’ve been running a Stan program for a rather odd model: a hierarchical model of experiments that can individually pass, returning a strongly peaked likelihood function, or fail, returning a flat likelihood function. For each experiment, a failure probability is precomputed (treated as data), and the posterior ends up being a mixture of the posteriors generated by the peaked (pass) likelihood and the flat (fail) likelihood.
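To make the mixture structure concrete, here is a minimal sketch of the per-experiment log-likelihood in Python. All names are hypothetical (the actual model is in Stan); I'm assuming the "pass" component is a normal peaked at the observed value and the "fail" component is a constant log density over the parameter range, with `p_fail` precomputed and treated as data:

```python
import math

def log_mix_lik(theta, y, sigma, p_fail, log_flat):
    """Log-likelihood for one experiment: a mixture of a peaked 'pass'
    component (normal around y with scale sigma, as an illustration)
    and a flat 'fail' component with constant log density log_flat.
    p_fail is the precomputed failure probability (data)."""
    # log of (1 - p_fail) * Normal(theta | y, sigma)
    log_pass = math.log1p(-p_fail) + (
        -0.5 * ((theta - y) / sigma) ** 2
        - math.log(sigma * math.sqrt(2.0 * math.pi))
    )
    # log of p_fail * flat density
    log_fail = math.log(p_fail) + log_flat
    # log-sum-exp of the two components, computed stably
    m = max(log_pass, log_fail)
    return m + math.log(math.exp(log_pass - m) + math.exp(log_fail - m))
```

When `p_fail` is large, the flat component dominates far from the peak, which is what produces the plateaus in the posterior tails described below.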

Because of this odd model setup, if most of the data have a high failure probability, the posterior distribution contains plateaus in its tails, which I think cause divergences (and, in extreme cases, low E-BFMI) in the Stan fit. However, the results this model produces with the data I have intuitively seem to be “correct” and have low

If I use data which produce divergences, the overall shape and median of the posterior are generally unaffected, but the 95% HPD interval may be significantly affected by the uneven distribution of samples in these tails. Often this isn’t an issue (if the HPD interval is very wide, the inference is simply that more data need to be collected, so its endpoints don’t need to be particularly accurate), but it would be nice to quantify the uncertainty in the 95% HPD interval itself, as there are some intermediate cases. Is there something I could do in the generated quantities block to calculate percentiles? Or would the most naive approach just be to calculate the difference in percentiles between chains?
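The naive between-chain comparison can be sketched as follows. This is a hypothetical post-processing snippet, not part of the Stan model; it uses equal-tailed 95% intervals rather than HPD intervals for simplicity, but the same per-chain comparison would apply to HPD endpoints:

```python
def quantile(draws, q):
    """Empirical quantile by sorting (nearest-rank; a rough estimator)."""
    s = sorted(draws)
    idx = min(int(q * len(s)), len(s) - 1)
    return s[idx]

def interval_spread_across_chains(chains, lo=0.025, hi=0.975):
    """Compute per-chain interval endpoints and their between-chain spread.

    `chains` is a list of lists of posterior draws, one list per chain
    (a hypothetical layout). Returns the per-chain (lower, upper) endpoint
    pairs and the (range of lower endpoints, range of upper endpoints),
    which gives a crude measure of how uncertain the interval itself is.
    """
    ends = [(quantile(c, lo), quantile(c, hi)) for c in chains]
    lo_vals = [e[0] for e in ends]
    hi_vals = [e[1] for e in ends]
    return ends, (max(lo_vals) - min(lo_vals), max(hi_vals) - min(hi_vals))
```

If the plateaus are sampled unevenly, the upper-endpoint spread across chains should be noticeably larger than for a well-behaved parameter, which would flag the intermediate cases.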