Best practices for Simulation-Based Calibration with hierarchical models

Thanks for this note. Some questions on this figure! @betanalpha

  1. Could you please give some examples of each of the three: theoretical validation, empirical validation, and empirical extension of theoretical validation? For theoretical validation, you mentioned a simple target distribution with a provable error bound. The only example I could think of was the normal distribution paired with the Laplace approximation, as a distribution-algorithm pair that might lead to zero error (see the first sketch after this list). Is this a valid example? I also want to make sure that viewing the Laplace approximation (and other approximation schemes, including VI) as an algorithm is okay.

  2. Could @avehtari’s comment on the posteriordb project be extended into an attempt to widen the territory of empirical validation, in the sense of crowdsourcing the fit between posteriors and algorithms?

  3. Does SBC fall under the category of empirical extension of theoretical validation? For example, starting from the theoretical validation provided by the uniformity proof in the SBC paper, which holds under strict conditions (e.g. conditional independence of the posterior and prior samples given the data), could we explore the space by perturbing those conditions one by one? (See the second sketch after this list for the baseline uniformity check.) I hope there is some measurable, continuous axis along which the space can be explored, both in terms of the target distribution and the algorithm. The combination of a prior’s parameters with its shape was quite chaotic, for one; have there been any attempts to order sets of priors? If a distribution family exists that can represent any CDF, it might be reasonable to fix that family and perturb only its parameters.
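
On question 1, here is a minimal sketch of why that pair might have zero error: for a Gaussian target, the Laplace approximation (mode plus inverse Hessian of the negative log density at the mode) recovers the target exactly, up to numerical error. The specific target N(1.5, 0.7²) below is purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy target: a 1-D Gaussian "posterior" N(mu, sigma^2).
mu, sigma = 1.5, 0.7

def neg_log_density(theta):
    # Unnormalized negative log density of N(mu, sigma^2).
    return 0.5 * ((theta - mu) / sigma) ** 2

# Laplace approximation: locate the mode, then use the inverse
# Hessian of the negative log density there as the variance.
mode = minimize_scalar(neg_log_density).x

# Numerical second derivative (central difference) at the mode.
eps = 1e-4
hess = (neg_log_density(mode + eps) - 2 * neg_log_density(mode)
        + neg_log_density(mode - eps)) / eps ** 2
laplace_sd = 1.0 / np.sqrt(hess)

# Because the target is Gaussian, the approximation is exact:
print(mode, laplace_sd)  # ~ (1.5, 0.7), i.e. zero approximation error
```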
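On question 3, a toy baseline for the strict-conditions case, as an assumption-laden illustration rather than anything from the original note: a conjugate normal-normal model where exact, independent posterior draws are available, so the SBC rank statistics should be uniform. One could then perturb a condition at a time (e.g. swap the exact draws for an approximate or autocorrelated sampler) and watch how the rank histogram deviates. The model, draw counts, and bin counts here are all arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Conjugate model: theta ~ N(0, 1), y | theta ~ N(theta, 1).
# The exact posterior given y is N(y / 2, 1 / 2), so with exact
# independent draws the SBC ranks must be uniform.
N_sims, N_draws = 1000, 99

ranks = np.empty(N_sims, dtype=int)
for s in range(N_sims):
    theta_true = rng.normal(0.0, 1.0)                   # prior draw
    y = rng.normal(theta_true, 1.0)                     # simulated data
    post = rng.normal(y / 2.0, np.sqrt(0.5), N_draws)   # exact posterior draws
    ranks[s] = np.sum(post < theta_true)                # rank in 0..N_draws

# Under a correct sampler the ranks are uniform on {0, ..., N_draws};
# equal bin counts (up to noise) are the visual SBC check.
counts, _ = np.histogram(ranks, bins=10, range=(0, N_draws + 1))
print(counts)  # roughly 100 per bin if calibration holds
```

Perturbing the line that generates `post` (for instance, using a slightly wrong posterior scale) gives one concrete, continuous axis along which the deviation from uniformity can be measured.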