Feedback to calibrate SBC session plan

Hi all!
Could I please ask for some feedback on the SBC StanConnect session plan? Building on the idea that SBC itself could be calibrated through a simulation-based calibration process, I have drafted three separate Discourse posts, summarized below. I hope to use the well-established Stan community as a prior to generate a reliable posterior on SBC, which we can fit together during the session.

  1. prior: priordb to sync prior knowledge on SBC

  2. likelihood: how the audience wishes to use SBC

  3. algorithm: how the audience could implement SBC
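To make the prior/likelihood/algorithm framing concrete, here is a minimal sketch of the core SBC loop on a hypothetical conjugate normal model (the model, names, and parameters are my own illustration, not part of the session plan): draw a parameter from the prior, simulate data, "fit" the model, and record the rank of the prior draw among the posterior draws. If the fitting procedure is calibrated, the ranks are uniform.

```python
# Minimal SBC sketch on a conjugate normal-normal model (illustration only;
# the exact conjugate posterior stands in for an MCMC fit so the example
# is self-contained and fast).
import numpy as np

rng = np.random.default_rng(1)

def sbc_ranks(n_sims=1000, n_obs=10, n_draws=99, mu0=0.0, tau=1.0, sigma=1.0):
    ranks = np.empty(n_sims, dtype=int)
    for i in range(n_sims):
        theta = rng.normal(mu0, tau)                # 1. draw from the prior
        y = rng.normal(theta, sigma, size=n_obs)    # 2. simulate data
        # 3. "fit": exact conjugate posterior draws replace an MCMC run
        prec = 1.0 / tau**2 + n_obs / sigma**2
        post_mean = (mu0 / tau**2 + y.sum() / sigma**2) / prec
        draws = rng.normal(post_mean, np.sqrt(1.0 / prec), size=n_draws)
        ranks[i] = np.sum(draws < theta)            # 4. rank statistic
    return ranks

ranks = sbc_ranks()
# Under correct calibration the rank histogram should be roughly flat.
print(np.bincount(ranks // 10, minlength=10))
```

A miscalibrated fit (e.g. an overconfident posterior) would show up as a U-shaped or peaked rank histogram instead of a flat one.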

Tagging those I have discussed SBC issues with and hope to get feedback from: @Dashadower @martinmodrak @avehtari @bbbales2 @betanalpha @charlesm93 @paul.buerkner @mans_magnusson @mike-lawrence @TeemuSo @PhilClemson @bnicenboim

Thanks in advance!


Nice Bayesian framework!

I don’t think I can help much with the prior, but I have two use cases for SBC.

2. likelihood: algorithm validation (SMC-Stan)
3. algorithm: SBC + posteriordb

2. likelihood: model validation (Stan model + MATLAB code for missing Gaussian data)
3. algorithm: SBC + cross-validation

(In the latter case I started with cross-validation before moving on to SBC.)

Going to second @PhilClemson in giving kudos on the idea.

We want to use SBC to validate phylogenetic samplers, which are quite complicated and niche; the chains take a long time to run.

We have started with a Java implementation of the heavy-lifting bits. I wonder if we should just perfect that and leave the plotting and analysis to some R implementation.