I have just implemented LOO-PIT checks in ArviZ, and I have some questions about the algorithm whose answers I have not been able to find in any of the references. Mainly, I think I have understood the concept behind the algorithm, and I have the feeling it could be used with any Bayesian test quantity, but I am not sure about that.
I am uploading two pages with my attempt at working this out on my own, where I try to explain my question in more detail (most of the content is actually equations): LOO_PIT_test_function.pdf (123.5 KB)
I would be really grateful if you could tell me whether or not I am on the right track and why, or if you could point me to some literature I may have missed.
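For concreteness, this is a rough sketch of how I understand the LOO-PIT computation for continuous data (this is not the actual ArviZ code; the array names and shapes are just for illustration):

```python
import numpy as np

def loo_pit(y, y_rep, log_weights):
    """Sketch of LOO-PIT for continuous data.

    y           : (n_obs,) observed values
    y_rep       : (n_draws, n_obs) posterior predictive draws
    log_weights : (n_draws, n_obs) PSIS-LOO log importance weights
    """
    # Normalize the importance weights per observation
    # (subtract the max first for numerical stability).
    w = np.exp(log_weights - log_weights.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    # Weighted empirical CDF of the LOO predictive evaluated at y_i:
    # p_i = sum_s w_{i,s} * 1(y_rep_{i,s} <= y_i)
    return (w * (y_rep <= y)).sum(axis=0)
```

My question is essentially whether replacing `y` and `y_rep` here with a general test quantity T(y, theta) and T(y_rep, theta) is still justified.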
I assume you found the vignette and its links to the papers:
Vehtari, A., Gelman, A., and Gabry, J. (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 27(5), 1413–1432. doi:10.1007/s11222-016-9696-4 (preprint arXiv:1507.04544).
I’m looking for an explanation (online) of why the distribution of LOO-PIT should be uniform if the model is calibrated. Gelman et al. (BDA), p. 153, say “For continuous data, cross-validation predictive p-values have uniform distribution if the model is calibrated” but don’t seem to explain it. I can’t find an explanation in the vignettes or in Gabry et al. (2019). The original Gelfand (1992) is in a book, and I was hoping for something online and not too difficult. Apologies if I’ve just missed it.
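To be clear, the plain probability integral transform part I can verify: if y ~ F with F continuous, then F(y) is Uniform(0, 1), since P(F(y) ≤ u) = P(y ≤ F⁻¹(u)) = u. A quick toy simulation of that (nothing LOO-specific, just the basic PIT under a correctly specified predictive distribution):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# "Data" drawn from the true model, then transformed by the correct
# predictive CDF. A calibrated predictive distribution plays the role of F.
y = rng.normal(loc=2.0, scale=3.0, size=10_000)
u = stats.norm.cdf(y, loc=2.0, scale=3.0)

# The PIT values should look uniform on [0, 1].
print(stats.kstest(u, "uniform"))
```

What I’m missing is the argument for why the same holds when F is the cross-validation predictive distribution, which is itself estimated from the other observations.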