I tend to define simulation-based calibration (SBC) broadly: any check that is based on the self-consistency property introduced in Sec. 1.1 of this SBC paper. A diagram summarizing SBC's role in Bayesian computation can be found on pp. 17-23 of the slides I used for an SBC talk.
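For reference, the self-consistency property usually underlying SBC (notation assumed here, following the standard setup) says that averaging the posterior over data simulated from the prior predictive recovers the prior:

```latex
\pi(\theta) \;=\; \int \int p(\theta \mid y)\, p(y \mid \tilde{\theta})\, \pi(\tilde{\theta}) \, d\tilde{\theta} \, dy
```

Any violation of this identity, detected through rank statistics or other summaries, signals a miscalibrated posterior approximation.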
A number of precision hyperparameters affect SBC results under the surface, such as S (number of prior draws), M (number of posterior draws per prior draw), N (number of data points, mostly time-series length), … See Figure 1 of this Bayesian taxonomy paper (the algorithm's "settings").
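To make the roles of S, M, and N concrete, here is a minimal SBC rank-statistic sketch for a toy conjugate normal model (the model, function name, and default values are my own illustration, not from the paper):

```python
import numpy as np

def sbc_ranks(S=200, M=100, N=50, seed=0):
    """Toy SBC loop: prior theta ~ N(0, 1), data y_i ~ N(theta, 1).
    S = number of prior draws, M = posterior draws per prior draw,
    N = number of data points per simulated dataset."""
    rng = np.random.default_rng(seed)
    ranks = np.empty(S, dtype=int)
    for s in range(S):
        theta = rng.normal(0.0, 1.0)             # "known parameter": a prior draw
        y = rng.normal(theta, 1.0, size=N)       # simulate data given theta
        # exact conjugate posterior for this model: N(post_mean, post_var)
        post_var = 1.0 / (1.0 + N)
        post_mean = post_var * y.sum()
        post_draws = rng.normal(post_mean, np.sqrt(post_var), size=M)
        # rank of the prior draw among the posterior draws
        ranks[s] = int((post_draws < theta).sum())
    # if the sampler is calibrated, ranks are ~ uniform on {0, ..., M}
    return ranks
```

Shrinking S makes the uniformity check noisier, shrinking M coarsens the rank grid, and N controls how concentrated each posterior is, which is why all three act as precision knobs.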
Should the “known parameter” be a draw from the prior distribution or a fixed scalar value? A fixed scalar value makes bias and MSE calculation easy, but is less robust than drawing from a known prior distribution. Prior draws bring two problems: first, the precision issue described above (how many prior draws are enough?); second, there is no single fixed distance metric for measuring model performance. Performance can differ depending on the chosen metric; the Wasserstein distance is recommended in this post.
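As one concrete way to apply the recommended metric (assuming SciPy is available; the synthetic draws below are illustrative, not from any real fit), the 1-D Wasserstein distance can compare posterior draws against a reference distribution:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, size=5000)    # e.g. draws from the known prior
good_fit = rng.normal(0.0, 1.0, size=5000)     # hypothetical well-calibrated posterior draws
biased_fit = rng.normal(0.5, 1.0, size=5000)   # hypothetical posterior shifted by 0.5

d_good = wasserstein_distance(reference, good_fit)
d_bad = wasserstein_distance(reference, biased_fit)
# the biased fit ends up measurably farther from the reference
```

Unlike a scalar bias computed against a fixed true value, this distance compares whole distributions, which is what makes it a natural fit when the known parameter is itself a draw from a distribution.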