Creating yrep for the ppc_stat function from bayesplot

Hi everyone,

I was recently asked to report a statistic alongside the posterior predictive distributions that provides some support (either for or against) for my model fitting the data. I found the ppc_stat function in bayesplot and decided to give it a whirl. For creating this plot and estimating the T statistic, this vignette provides a pretty clear explanation of what the input should look like: y is the observed outcome variable (length 434) and yrep is the matrix of posterior predictive draws (500 rows by 434 columns).

y <- example_y_data()
yrep <- example_yrep_draws()
ppc_stat(y, yrep)
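
For reference, that layout can be checked directly; the numbers below are what the vignette data produce:

length(y)   # 434: one entry per observation
dim(yrep)   # 500 434: one row per posterior draw, one column per observation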

My question is whether I am creating the yrep variable correctly. To create yrep as it is intended in the bayesplot documentation, would I use the posterior_predict function from brms? That is, would yrep be something like this?

model <- brm(formula = Successes | trials(Trials) ~ x + y,
             data = data,
             family = binomial)

# Plot ppc_stat: compare the observed response to posterior predictive draws
y <- data$Successes                              # observed response (what the model predicts)
yrep <- posterior_predict(model, nsamples = 500) # 500 draws x nrow(data) matrix
ppc_stat(y, yrep)

Ultimately, the plot looks like this, which seems like a reasonable posterior predictive distribution, but I am not actually sure what to glean from it as far as model fit goes.

Any input would be appreciated. Thanks!


Yes, this is roughly what brms does internally when you call pp_check(model, type = "stat").
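
For example, a call along these lines (stat = "mean" just mirrors the ppc_stat default; newer brms versions use ndraws instead of nsamples) should produce essentially the same plot as building yrep by hand:

# Built-in shortcut: brms generates yrep internally and passes it to bayesplot
pp_check(model, type = "stat", stat = "mean", nsamples = 500)

# Manual equivalent with posterior_predict(), as in your code above
yrep <- posterior_predict(model, nsamples = 500)
ppc_stat(data$Successes, yrep, stat = "mean")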

Posterior predictive checks are considered failed if the observed data (y) fall at the extremes of, or completely outside, the range implied by the posterior predictive distribution (yrep).
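
A minimal way to see this numerically, using the mean as the test statistic (purely as an illustration; any statistic works the same way):

stat_y    <- mean(y)               # realized test statistic T(y)
stat_yrep <- apply(yrep, 1, mean)  # T(yrep), one value per posterior draw
range(stat_yrep)                   # worrying if stat_y sits at the edge of, or outside, this range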

In posterior predictive checking one usually takes quite a Popperian approach: passing a check can never prove your model is good, but failing a check can tell you that your model is a bad fit (there is then the harder question of whether the bad fit revealed by that particular check is relevant to your scientific problem). As you try more and more checks, tailor them to be likely to expose issues relevant to your scientific questions, and they all pass, you can gain some confidence in your model, but only to the degree that the checks were a “severe test”.

Since you explicitly model the overall mean, something really bad would need to happen for your model not to pass the check you’ve shown. Here are some ideas for stronger checks (a short code sketch follows the list):

  • Use ppc_stat with stat = "sd". Binomial models frequently underestimate the underlying variability.
  • Use ppc_bars to see the counts in each category.
  • Use ppc_stat_grouped and ppc_bars_grouped to see the fit in subgroups. Of particular interest are groupings by variables that were NOT covariates in your model.
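
Concretely, these could look something like the sketch below (the grouping column, here called group, is hypothetical and stands in for any variable not used as a covariate):

# Spread of the data: binomial models often understate the variability
ppc_stat(y, yrep, stat = "sd")

# Counts of each observed value
ppc_bars(y, yrep)

# The same ideas within subgroups, split by a variable that was NOT a covariate
ppc_stat_grouped(y, yrep, group = data$group, stat = "sd")
ppc_bars_grouped(y, yrep, group = data$group)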

But you can be much more creative and build custom checks for aspects of the data that would be considered interesting, e.g. look at differences between successive rows if there is some ordering in the data (a sketch of such a check follows).
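
As one illustration, ppc_stat accepts an arbitrary function as the test statistic, so a check on successive differences could be sketched like this (only meaningful if the row order carries information):

# Custom test statistic: average absolute change between successive observations
succ_diff <- function(x) mean(abs(diff(x)))
ppc_stat(y, yrep, stat = succ_diff)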

A bit more background can be found at [2011.01808] Bayesian Workflow and [1709.01449] Visualization in Bayesian workflow

Best of luck with your model!


@martinmodrak thanks so much for your reply! I did some digging in the interim and also found that the mean is not a very strong statistic to base a posterior predictive check on. I found the Gabry et al. paper a nice read.

One follow-up question: beyond presenting such checks graphically, do you have any feelings about coupling a posterior predictive check such as the one presented here (but with a more discriminating test statistic) with a posterior predictive p-value as presented by Gelman et al. (2013)?

Also, as a side note, I have read a couple of your responses on the forum and just want to say you present many thoughtful and well-articulated comments. Kudos!


I am actually not familiar with the posterior predictive p-value, so I can’t comment much, beyond noting that Andrew didn’t include any mention of posterior predictive p-values in the workflow preprint :-)

Thanks, glad to have helped.