Hi all,
I am trying to fit a zero-one-inflated beta model to proportion (0–1) data. The code I ran is below:
brm(bf(Bias | weights(Weight) ~ 0 + Intercept + attract_AOI1_cen * attract_AOI2_cen * Gender + (1 | Subject),
       phi ~ attract_AOI1_cen * attract_AOI2_cen * Gender + (1 | Subject),
       zoi ~ attract_AOI1_cen * attract_AOI2_cen * Gender + (1 | Subject),
       coi ~ attract_AOI1_cen * attract_AOI2_cen * Gender + (1 | Subject)),
    data = et_attr,
    family = zero_one_inflated_beta(),
    prior = prior(normal(0, 0.25), class = "b"),
    chains = 4, iter = 5000, warmup = 1000, cores = 2,
    save_pars = save_pars(all = TRUE),
    sample_prior = TRUE)
The results of the model make a lot of sense given our expectations and the existing literature, and the model runs smoothly, without divergences or other sampling issues. However, the posterior predictive check (via pp_check) shows a large discrepancy between y and y_rep: the y_rep draws look much "smoother" than the observed data (see below).
My main question is: how serious is this misfit? I am specifically interested in the predictors currently in the model. I assume the discrepancy could indicate that I am missing a relevant predictor, but I have no idea which predictor(s) that would be. To what extent does this misfit threaten inferences about the effects in the current model?
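For completeness, this is roughly how the check was produced, plus two variants I am considering to narrow down where the misfit lies (a sketch; `fit` is a placeholder for the object returned by the brm() call above, and the type names come from bayesplot):

```r
library(brms)

# fit <- brm(...)  # the model from the call above

# Default check: density overlay of y vs. 100 posterior predictive draws
pp_check(fit, ndraws = 100)

# Joint check of two summary statistics (mean and sd) of y vs. y_rep
pp_check(fit, type = "stat_2d", stat = c("mean", "sd"))

# Density overlays split by a grouping variable, to see if the
# misfit is concentrated in a particular subgroup
pp_check(fit, type = "dens_overlay_grouped", group = "Gender")
```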
Thank you in advance!
R version 4.1.3 (2022-03-10)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19044)

other attached packages:
[1] brms_2.16.3