Quantitative Posterior Predictive Check Data

Hi all, I’m wondering whether there are goodness-of-fit measures implemented in brms that might complement the output of the pp_check graphic. Perhaps a single number could be computed, e.g. the mean distance between the observed-data curve and the curve for each draw from the posterior predictive distribution (PPD), and this could then be tabulated? If this is not implemented in brms in some fashion, does anybody know of an appropriate package that deals with this?
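To make the "mean distance" idea concrete: here is a minimal sketch (in Python/numpy, since brms itself is R) of one way such a statistic could be defined. It is an assumption on my part that a Kolmogorov–Smirnov-style ECDF distance is what you'd want; the function name `mean_ppd_distance` and the array shapes are hypothetical, with `y_rep` standing in for the draws `posterior_predict()` would return.

```python
import numpy as np

def mean_ppd_distance(y_obs, y_rep):
    """Mean KS-style distance between the observed data and each
    posterior predictive draw.

    y_obs: (n,) observed outcomes
    y_rep: (S, n) array, one posterior predictive draw per row
    Returns the average, over draws, of the maximum discrepancy
    between the observed ECDF and each draw's ECDF.
    """
    y_obs = np.sort(np.asarray(y_obs, dtype=float))
    n = y_obs.size
    ecdf_obs = np.arange(1, n + 1) / n
    dists = []
    for draw in np.asarray(y_rep, dtype=float):
        # ECDF of the replicated draw, evaluated at the observed points
        ecdf_rep = np.searchsorted(np.sort(draw), y_obs, side="right") / draw.size
        dists.append(np.max(np.abs(ecdf_rep - ecdf_obs)))
    return float(np.mean(dists))
```

A smaller value means the replicated datasets sit closer to the observed data, so a well-specified model should score lower than a misspecified one.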

I understand that the qualitative visual tool for describing how well the model fits the original data is extremely useful, but I am working in an academic field that has not yet embraced the Bayesian way. I’m worried that subjectivity could undermine a claim like “this ZOIB model objectively fits the data better than a Gaussian model” if the reader’s reaction is “meh, the Gaussian model seems fine to me!”

Any help or guidance you could provide would be much appreciated. Thanks in advance!

I’m confused as to how reducing a Bayesian analysis to a single number gets past the second concern. Are you going to try to use the distribution over that single statistic to compute p-values or confidence intervals?

Posterior predictive checks are in sample and very Bayesian. If you want to measure predictive performance, I’d suggest cross-validation, which is both out of sample and a frequentist measure of performance. Of course, you’d use posterior predictive inference to feed into cross-validation, but the evaluation of the method isn’t Bayesian.
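In brms you would typically do this with `loo()` or `kfold()` on the fitted model and compare fits with `loo_compare()`. As a language-agnostic sketch of the underlying idea, here is K-fold cross-validation of the log predictive density for a simple Gaussian model; note this stand-in uses plug-in maximum-likelihood estimates per fold rather than a full posterior predictive density, and the function name `kfold_log_score` is my own invention.

```python
import numpy as np

def kfold_log_score(y, k=5, seed=0):
    """K-fold cross-validated log predictive density for a Gaussian
    model. Each fold is held out, the model is fit on the rest, and
    the held-out points are scored under the fitted density.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    idx = rng.permutation(y.size)
    folds = np.array_split(idx, k)
    total = 0.0
    for test in folds:
        train = np.setdiff1d(idx, test)
        mu, sigma = y[train].mean(), y[train].std(ddof=1)
        # sum of log N(y_test | mu, sigma^2) over the held-out points
        total += np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                        - (y[test] - mu) ** 2 / (2 * sigma**2))
    return float(total)
```

A higher (less negative) score means better out-of-sample predictive performance, so two candidate models can be ranked on the same data by comparing their scores.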