SBC uses and guarantees

Hi!

I was wondering what SBC buys me, since the paper keeps saying that it is an approach to validate a computational procedure. Does that mean that whenever I see no divergences, all is fine? I don’t think so.

So SBC checks that all is fine when I sample from the prior, generate data, fit the posterior, and average the posterior over those prior draws, which should give me back the prior. All nice - but what do I learn from that, assuming it works?
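To check my understanding of the procedure, here is a minimal self-contained sketch using a toy conjugate normal model (my own stand-in example, not from the SBC paper), where the exact posterior plays the role that MCMC would play in practice:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_obs, n_draws = 2000, 5, 100
sigma = 1.0  # known observation sd; toy model: theta ~ N(0, 1), y ~ N(theta, sigma)

ranks = np.empty(n_sims, dtype=int)
for s in range(n_sims):
    theta_true = rng.normal(0.0, 1.0)              # 1. draw parameter from the prior
    y = rng.normal(theta_true, sigma, size=n_obs)  # 2. generate data from the likelihood
    # 3. "fit" the model: exact conjugate posterior N(post_mean, post_sd)
    post_prec = 1.0 + n_obs / sigma**2
    post_mean = (y.sum() / sigma**2) / post_prec
    post_sd = post_prec**-0.5
    draws = rng.normal(post_mean, post_sd, size=n_draws)
    # 4. rank of the true parameter among the posterior draws
    ranks[s] = int((draws < theta_true).sum())

# if the inference is correct, ranks are uniform on {0, ..., n_draws}
print(ranks.mean())  # should be close to n_draws / 2
```

With a real Stan model, step 3 would be an MCMC fit, and a non-uniform rank histogram flags a computational problem.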

Obviously this approach does not fly for improper priors. I am not a fan of improper priors, but is this a limitation of the method?

Or does SBC end up telling me that priors need to be chosen sensibly and more on the informative side to get things going?

And then - what does SBC buy me? Since it seems to imply that we can even integrate over the prior, does that mean I can safely calculate Bayes factors, for example? Is the SBC test seen as a required property of any model? Do rstanarm models have this property? Is this very strict?

This is just a brain dump in no particular order and comments from the SBC folks would be much appreciated.

(and what would be wrong with a KS test for uniformity as a GOF omnibus test? I know plots are better, but I am sometimes lazy or want to automate)
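Something like this is what I have in mind (a sketch assuming scipy, with perfectly uniform fake ranks standing in for real SBC output):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_draws = 100
# stand-in SBC ranks in {0, ..., n_draws}; real ones would come from the fits
ranks = rng.integers(0, n_draws + 1, size=1000)
# the ranks are discrete, so jitter them onto (0, 1) before the continuous KS test
u = (ranks + rng.uniform(size=ranks.shape)) / (n_draws + 1)
result = stats.kstest(u, "uniform")
print(result.statistic, result.pvalue)
```

A small p-value would then flag a non-uniform rank distribution automatically.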

Thanks.

Best,
Sebastian

No divergences just means that you didn’t hit any geometric problems. It doesn’t mean, for example, that you’ve explored the whole posterior. SBC checks that.

You learn that the code works as advertised for the functional you’ve computed.

It is a restriction of the method. But if you’re using improper priors, there’s more that can go wrong than your computational method not working.

No.

The problem with Bayes factors isn’t computing them, it’s using them well. SBC tells you nothing about this.

“Required” is a weird word here. It’s a test that can tell you if your computational inference scheme actually computes the model you’ve written down. It seems useful. No idea about rstanarm models. In general, you need to check each time. But I’d be surprised if they didn’t - they’re the models that Stan is built to solve.

We couldn’t come up with a test that had enough sensitivity to make this work.

1 Like

No, it’s a limitation of improper priors that they are not proper. If you currently use improper priors, how do you check that the posterior is proper? There are some simple models for which it is possible to derive the posterior analytically and give conditions on the data for the posterior to be proper, but this is not a trivial task for general models.
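One textbook toy case (an illustration, not from this thread): a binomial likelihood with the improper Haldane prior,

$$p(\theta) \propto \theta^{-1}(1-\theta)^{-1}, \qquad y \mid \theta \sim \mathrm{Binomial}(n, \theta),$$

gives $p(\theta \mid y) \propto \theta^{y-1}(1-\theta)^{n-y-1}$, a Beta$(y,\, n-y)$ kernel that is proper only when $0 < y < n$. So whether the posterior even exists depends on the data you happen to observe, and SBC has no prior to draw from in the first place.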

It’s not just SBC, but SBC is more useful for you if you choose your model and prior (which are not separate things) sensibly.

Dan responded well to the other points.

I don’t, but I wanted to understand the limitations of SBC.

rstanarm uses empirical priors by default (autoscale). I would be very suspicious of these. Once autoscale is off, all should be good as long as the priors are chosen sanely for a given problem.

In brief, to put this into my own words: SBC checks a consistency property which any Bayesian analysis must have. In addition to checking for geometric problems (divergences), SBC ensures that, in our finite world with finite MCMC samples, the integration performed through sampling has the properties we are looking for and works fine. Right?

Thanks.

(sounds like I need to code up a new batchtools template for my Stan models to torture our cluster with it)

1 Like

SBC is complementary to diagnostics like divergences, see https://betanalpha.github.io/assets/case_studies/principled_bayesian_workflow.html for an example of how they are used together.

SBC verifies that your computational tools are accurate within the scope of your modeling assumptions. If your model isn’t rich enough to capture the true data generating process then the SBC guarantees won’t really mean anything when fitting your model on real data, which is why divergences and the like are still so critical. That said, it is a really powerful necessary condition that can isolate all kinds of pathologies in a given analysis (especially when you check diagnostics like divergences for each of the fits in the ensemble – see the aforementioned case study for an example of identifying a subtle non-identifiability).

Finally, I mildly disagree with Dan about the problems with Bayes factors not being computational. Yes, even if you can compute them accurately there are serious problems, but in practice we don’t have the tools to compute them accurately in the first place. In any case, you can use SBC to calibrate Bayes factors as well! Just sample from the prior over models, sample model configurations from the prior within that model, then data from the likelihood within that model. Compute Bayes factors for each of the models for each of those observations and then draw multinomial samples from the Bayes factors. The sampled model should be uniformly distributed across the posterior model draws just like in SBC for the parameters.
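A minimal sketch of that recipe, using a toy model pair with closed-form marginal likelihoods so the Bayes factor is exact (my illustration; real models would need estimated marginal likelihoods):

```python
import numpy as np

rng = np.random.default_rng(3)
n_sims, n_obs = 4000, 10

# toy model pair with analytic marginal likelihoods:
#   M0: y_i ~ N(0, 1)
#   M1: theta ~ N(0, 1), y_i ~ N(theta, 1)
def log_ml(y, model):
    n, ss, s = len(y), np.sum(y**2), np.sum(y)
    base = -0.5 * n * np.log(2 * np.pi)
    if model == 0:
        return base - 0.5 * ss
    # theta integrated out analytically: y ~ N(0, I + 11')
    return base - 0.5 * (ss - s**2 / (1 + n)) - 0.5 * np.log(1 + n)

m_true = np.empty(n_sims, dtype=int)
m_post = np.empty(n_sims, dtype=int)
for i in range(n_sims):
    m = rng.integers(0, 2)                    # sample from the prior over models
    theta = rng.normal() if m == 1 else 0.0   # sample configuration within the model
    y = rng.normal(theta, 1.0, size=n_obs)    # sample data from the likelihood
    log_bf10 = log_ml(y, 1) - log_ml(y, 0)
    p1 = 1.0 / (1.0 + np.exp(-log_bf10))      # posterior P(M1 | y) with even prior odds
    m_true[i] = m
    m_post[i] = rng.random() < p1             # one draw from the model posterior

# calibration check: posterior model draws marginally reproduce the prior over models
print(m_post.mean())  # should be close to 0.5
```

Here the posterior over two models is Bernoulli, so the multinomial draw reduces to a coin flip with the posterior model probability; with more models you would draw from the full multinomial over the normalized marginal likelihoods.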

3 Likes