Hello everyone,
I am struggling with a posterior predictive check in brms that looks weird (no matter what I try). But first, about my data: I conducted an experiment investigating the jumping-to-conclusions bias, i.e. gathering little information before deciding. In each trial individuals could draw between 1 and 10 pieces of information. After each draw they could either request another draw (more information) or decide for option A or B. Fewer draws before deciding means a stronger jumping-to-conclusions bias…
Every individual did 24 trials in different scenarios. Furthermore, there were 3 groups.
The distribution of draws to decision (the dependent variable) is bimodal, with one peak in the middle and a lot of 10s at the right end.
First I tried a gaussian family. Classic approach. That yielded the following posterior predictive check:
formula:
brm_DTD <- brms::brm(
  formula = DTD ~ 1 + group*scen + (1 | chiffre),
  family = gaussian(link = "identity"),
  data = data_long[!is.na(data_long$DTD), ],
  control = list(adapt_delta = 0.80, max_treedepth = 10),
  warmup = 2000, iter = 10000, chains = 4,
  prior = prior, stanvars = stanvars, sample_prior = TRUE,
  seed = 123, cores = parallel::detectCores())
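For completeness, this is roughly how I produce the checks (a sketch; `ndraws = 100` is just an arbitrary choice):

```r
library(brms)

# Overlay densities of the observed DTD values and posterior predictive draws
pp_check(brm_DTD, type = "dens_overlay", ndraws = 100)

# Joint check of two summary statistics (mean and SD) per predictive draw
pp_check(brm_DTD, type = "stat_2d", stat = c("mean", "sd"))
```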
Then I tried a lot of things:

The student and skew_normal families looked pretty much the same.

I know I have count data, so I tried a poisson family, but it predicted tons of values above 10 (even values of 15), so the gaussian fit was better.
Well, my count data are bounded at 10, so I tried a poisson with truncation between 1 and 10 (and between 0 and 9 after subtracting 1 from my dependent variable). That looked pretty good in the dens_overlay predictive check, but the stat_2d check was not that good. As I am interested in the means, I don't feel safe interpreting these results (or what do you think?). Furthermore, there were some divergent transitions and I received the warning message: "11% of all predicted values were invalid. Increasing argument 'ntrys' may help."
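In case it helps, the truncated model looked roughly like this (a sketch with priors omitted; `ntrys` is the argument from the warning and is passed through to `posterior_predict`):

```r
# Shift the outcome to 0-9 so a count family starting at 0 applies
data_long$DTD0 <- data_long$DTD - 1

brm_trunc <- brms::brm(
  formula = DTD0 | trunc(lb = 0, ub = 9) ~ 1 + group*scen + (1 | chiffre),
  family = poisson(),
  data = data_long[!is.na(data_long$DTD0), ],
  warmup = 2000, iter = 10000, chains = 4,
  seed = 123, cores = parallel::detectCores())

# More attempts per draw to obtain values inside the truncation bounds
pp_check(brm_trunc, type = "dens_overlay", ndraws = 100, ntrys = 10)
```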

I tried some other models, like a binomial family and some hurdle models (after transforming my data with abs(dependent_variable - 10) so that I had many 0s instead of 10s). The dens_overlay check was between horrible and okay, but the stat_2d check was never promising…
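Sketches of these two attempts (simplified, same predictors as above; priors omitted):

```r
# Binomial: DTD - 1 "extra draws" out of the 9 possible after the first one
data_long$DTD0 <- data_long$DTD - 1
brm_binom <- brms::brm(
  DTD0 | trials(9) ~ 1 + group*scen + (1 | chiffre),
  family = binomial(),
  data = data_long[!is.na(data_long$DTD0), ])

# Hurdle: reflect the scale so the pile of 10s becomes a pile of 0s
data_long$DTD_rev <- abs(data_long$DTD - 10)
brm_hurdle <- brms::brm(
  DTD_rev ~ 1 + group*scen + (1 | chiffre),
  family = hurdle_poisson(),
  data = data_long[!is.na(data_long$DTD_rev), ])
```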

I tried a gaussian mixture, but there were thousands of divergent transitions and the model took several hours, so that wasn't very satisfying either…
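The mixture attempt looked roughly like this (a sketch; the intercept priors are just my guesses to separate the two components, which helps identifiability):

```r
mix <- brms::mixture(gaussian, gaussian)

# Weakly separate the components: one near the middle peak,
# one near the upper bound at 10
prior_mix <- c(
  brms::prior(normal(5, 2), class = Intercept, dpar = mu1),
  brms::prior(normal(10, 1), class = Intercept, dpar = mu2))

brm_mix <- brms::brm(
  DTD ~ 1 + group*scen + (1 | chiffre),
  family = mix, prior = prior_mix,
  data = data_long[!is.na(data_long$DTD), ],
  control = list(adapt_delta = 0.99),
  chains = 4, cores = parallel::detectCores())
```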
What else can I try? I am grateful for any suggestion! Thanks a lot… :)
Another question would be: if I cannot find a way to improve the posterior predictive check, can I still interpret the model? At least it is not worse than classic frequentist testing with lme4, right?
If you need more information, please tell me…
Yours,
Simon