I ran a linear mixed model with brms and checked the plot from pp_check(). I'm not very familiar with Bayesian analysis, so this might be a dumb question.
What should I do if there seems to be a big discrepancy between y and y_rep? What does this discrepancy mean? Can I go with this model? If not, how should I resolve the discrepancy?
Here’s my model and the plot from pp_check(), in case these matter:
model <- brm(
  rating ~ a * b * c + (1 | subjectID),
  family = gaussian(), data = data, save_pars = save_pars(all = TRUE),
  cores = 2, chains = 2, iter = 10000, warmup = 5000,
  prior = set_prior('student_t(3, 0, 2.5)', class = 'b')
)
Here your data are bimodal and your modeled response is not. This suggests one of two things: either the random component of the response is itself bimodal, which can be achieved in
brms with a custom family that is a mixture of two distributions (rather than the single Gaussian in your model), or your model is missing a key covariate that distinguishes which mode a point is likely to fall into. Faced with this PPC, I would first examine the set of points whose response is greater than about 3.5 and the set whose response is less than about 3, to see whether there is anything obvious that differentiates those two sets. If not, I would consider modeling the response as a mixture of two Gaussians.
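For the mixture route, here is a rough sketch of what that could look like in brms, reusing the variable names from the original model (rating, a, b, c, subjectID, data); the specific priors on the component intercepts are illustrative assumptions, not recommendations:

```r
library(brms)

# Hypothetical sketch: model rating as a mixture of two Gaussians.
mix <- mixture(gaussian, gaussian)

# Mixture components are only weakly identified, so it usually helps to
# give the two component intercepts priors centered on the two apparent
# modes (here ~3 and ~4 are guesses from the PPC plot).
mix_prior <- c(
  prior(normal(3, 1), Intercept, dpar = mu1),
  prior(normal(4, 1), Intercept, dpar = mu2)
)

model_mix <- brm(
  rating ~ a * b * c + (1 | subjectID),
  family = mix,
  prior = mix_prior,
  data = data,
  cores = 2, chains = 2, iter = 10000, warmup = 5000
)

pp_check(model_mix)  # re-check whether y_rep now captures the bimodality
```

If the mixture captures the two modes, the PPC density overlay should show y_rep tracking both bumps rather than averaging them into one.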
@jsocolar's answer is good. I would also be curious what your response variable
rating is. Is it a continuous, unbounded variable? Based on the density plot from the posterior predictive check, I would hazard a guess that it is bounded at 0 on the lower end and maybe 5 on the upper end. Is it continuous or discrete? What does
pp_check(model, type = "hist") look like?
Building off of @jd_c's answer, your
rating variable looks like it might be ordinal. If so, you might consider ditching the conventional Gaussian likelihood for an ordinal model (see here).
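In case it helps, a minimal sketch of an ordinal refit in brms, assuming rating takes ordered integer values (e.g. 1 to 5); the probit link and the sampler settings are carried over from the original model as assumptions:

```r
library(brms)

# Hypothetical sketch: cumulative (ordinal) likelihood instead of Gaussian.
# cumulative("probit") could also be cumulative("logit"), the default link.
model_ord <- brm(
  rating ~ a * b * c + (1 | subjectID),
  family = cumulative("probit"),
  data = data,
  cores = 2, chains = 2, iter = 10000, warmup = 5000,
  prior = set_prior('student_t(3, 0, 2.5)', class = 'b')
)

# For a discrete outcome, a bar-plot PPC is easier to read than a density.
pp_check(model_ord, type = "bars")
```

An ordinal likelihood would also explain the apparent bounds at 0 and 5 that @jd_c noticed, without needing a mixture.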