Posterior predictive check looks weird - what can I do?

Hello everyone,

I am struggling with a posterior predictive check in brms that looks weird, no matter what I try. But first, about my data: I conducted an experiment investigating the jumping-to-conclusions bias, i.e., collecting little information before deciding. In each trial, individuals could draw between 1 and 10 pieces of information. After each draw they could decide whether they wanted another draw (more information) or to choose option A or B. Fewer draws before deciding means a stronger jumping-to-conclusions bias…

Every individual did 24 trials in different scenarios. Furthermore, there were 3 groups.

The distribution of draws to decision (the dependent variable) is bimodal, with one peak in the middle and a lot of 10s at the right end.
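For illustration, this is roughly how I look at the raw distribution (a sketch; data_long and DTD are the same objects as in the model code below):

# Sketch: raw distribution of draws to decision (DTD), by group.
library(ggplot2)
ggplot(data_long[!is.na(data_long$DTD), ], aes(x = DTD)) +
  geom_bar() +
  facet_wrap(~ group)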

First I tried a Gaussian family. Classic approach. That yielded the following posterior predictive check:

formula:

brm_DTD <- brms::brm(
  formula = DTD ~ 1 + group * scen + (1 | chiffre),
  family = gaussian(link = "identity"),
  data = data_long[!is.na(data_long$DTD), ],
  control = list(adapt_delta = 0.80, max_treedepth = 10),
  warmup = 2000, iter = 10000, chains = 4,
  prior = prior, stanvars = stanvars, sample_prior = TRUE,
  seed = 123, cores = parallel::detectCores()
)

[posterior predictive check plots for the Gaussian model]

Then I tried a lot of things:

  • The student and skew_normal families looked pretty much the same.

  • I know I have count data, so I tried the poisson family. It predicted tons of values above 10 (even values of 15), so the Gaussian fit was better.

[pp_check plots for the poisson model]

  • Well, my count data are bounded at 10, so I tried a poisson with truncation between 1 and 10 (or between 0 and 9 after subtracting 1 from my dependent variable; see the truncation sketch after this list). That looked pretty good in the dens_overlay predictive check, but the stat_2d check was not that good. As I am interested in the means, I don't feel safe interpreting these results (or what do you think?). Furthermore, there were some divergent iterations and I received a warning message: "11% of all predicted values were invalid. Increasing argument 'ntrys' may help."

[pp_check plots for the truncated poisson model]

  • I tried some other models like the binomial family and some hurdle models (after transforming my data with abs(dependent_variable - 10) so that I had many 0s instead of 10s). The dens_overlay check was between horrible and okay, but the stat_2d check was never promising…

  • I tried a Gaussian mixture, but there were thousands of divergent transitions and the model took some hours, so that wasn't very satisfying either…
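For reference, the truncation syntax I mean looks roughly like this (a sketch; DTD_m1 is the dependent variable minus 1):

# Sketch: poisson truncated to 0..9 on the shifted outcome DTD - 1.
data_long$DTD_m1 <- data_long$DTD - 1
brm_trunc <- brms::brm(
  formula = DTD_m1 | trunc(lb = 0, ub = 9) ~ 1 + group * scen + (1 | chiffre),
  family = poisson(),
  data = data_long[!is.na(data_long$DTD_m1), ],
  warmup = 2000, iter = 10000, chains = 4,
  seed = 123, cores = parallel::detectCores()
)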

What else can I try? I am grateful for every suggestion! Thanks a lot… :-)

Another question: if I do not find a way to improve my posterior predictive check, can I still interpret the model? At least it is not worse than classic frequentist testing with lme4, right?

If you need more information, please tell me…

Yours,

Simon


How do things look with a negative binomial distribution or a zero-inflated negative binomial (if you have some zeros)?

Any chance there's an additional predictor?
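Something like this is what I have in mind (a sketch reusing the formula from your Gaussian model; the shifted outcome DTD_m1 = DTD - 1 provides the zeros):

# Sketch: negative binomial on the shifted outcome.
brm_nb <- brms::brm(
  DTD_m1 ~ 1 + group * scen + (1 | chiffre),
  family = negbinomial(),
  data = data_long, chains = 4, cores = 4
)

# Sketch: zero-inflated variant, same formula.
brm_zinb <- brms::brm(
  DTD_m1 ~ 1 + group * scen + (1 | chiffre),
  family = zero_inflated_negbinomial(),
  data = data_long, chains = 4, cores = 4
)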


Hi JLC,

thank you for the suggestion. I don't think there is another predictor accounting for the bimodality.

I have tried a negbinomial (on the dependent variable minus 1, so I have some zeros; naturally, the raw variable goes from 1 to 10…):

[pp_check plots for the negbinomial model]

Same with zero_inflated negbinomial:

[pp_check plots for the zero_inflated_negbinomial model]

I repeated this with abs(dependent_variable - 10), so 1 becomes 9 and 10 becomes 0. I hoped this would improve results, as I have far more 10s than 1s. Well, not bad, but not much better than the Gaussian and much harder to interpret!

[pp_check plots for the negbinomial model on the reversed outcome]

I also tried a zero-inflated poisson on abs(dependent_variable - 10)… same thing: not satisfying and very hard to interpret…

[pp_check plots for the zero-inflated poisson model on the reversed outcome]

What do you think? Is there any other distribution that makes sense? And if in doubt, can I interpret my Gaussian model?


Great that you are posting those detailed plots. If your data are integers and bounded on both sides, then I think ordinal models would be a good choice. If I understand your experiment correctly, the continuation-ratio (cratio in brms) family might be particularly suitable. Also check out the ordinal model tutorial by Paul Bürkner for more details (I'm currently on my phone, so I don't have the link ready, sorry).
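In brms this would look roughly like the sketch below (illustrative names; ordinal families expect an ordered factor or integer response):

# Sketch: continuation-ratio ("sequential") ordinal model.
data_long$DTD_ord <- factor(data_long$DTD, levels = 1:10, ordered = TRUE)
brm_cratio <- brms::brm(
  DTD_ord ~ 1 + group * scen + (1 | chiffre),
  family = brms::cratio(),
  data = data_long, chains = 4, cores = 4
)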

Best of luck with the model!


Could you post some pp_checks with type = "bars" or type = "bars_grouped"?

Bars can be helpful for investigating discrete outcomes.
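Something along these lines (a sketch; ndraws is the argument name in recent brms versions, older versions use nsamples):

# Sketch: bar-style posterior predictive checks for a discrete outcome.
pp_check(brm_DTD, type = "bars", ndraws = 100)
pp_check(brm_DTD, type = "bars_grouped", group = "group", ndraws = 100)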


Of course,

here is my grouped-bars pp_check for the truncated poisson model (which is so far the best alternative to the Gaussian, but still unsatisfying and harder to interpret):

[grouped-bars pp_check plots for the truncated poisson model]

It's not that bad. The shape of the distribution is captured by the truncated poisson quite well, but the mean and standard deviation are better reproduced by the Gaussian model (which fails with respect to the dens_overlay check…). I am not 100% happy with either so far…

With regard to the faceting, I sent you a PM. And thank you @martinmodrak, I will try the cratio as soon as I can!

OK, I just naively changed the brmsfamily argument to "cratio". The posterior predictive checks really look too good to be true!

[pp_check plots for the cratio model]

I have to admit that I have no clue how to interpret the results. I have to read the vignette you mentioned and figure out whether I can use the cratio model to answer my hypotheses as well as with the Gaussian model, but if I can, it is very promising!

Apart from that: given these pp_checks, do you think my standard Gaussian model is a valid choice at all? On the one hand, the dens_overlay is really not fine. On the other hand, it's basically not worse than the typical lme4 frequentist model that 99% of scientists would choose. I am new to Bayesian statistics and have no idea how serious this is!


@martinmodrak, do you mean this vignette?

https://journals.sagepub.com/doi/10.1177/2515245918823199

I can't find a description of the cratio family there. Generally, I cannot find information about how the cratio family works and how to interpret the results. Maybe you have a source or a link for me? Thank you very much for your support!

Simon

Yes, that's the one. It seems it is called "sequential models" there (I forgot about the naming conflict).

Tough question. I'll go with Box: "Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad." In other words, it depends on your problem and the inferences you need to make. Since it appears the Gaussian matches your mean and standard deviation OK, then if it also does this well in the subgroups you care about and you only make inferences about the mean, it might not really matter that much.

With that said, I personally think the cratio is overall a safer choice; IMHO it very closely matches your experiment and is thus highly interpretable (but make sure it matches, I might have misunderstood things).

That's because the ordinal families are very flexible and generally have N - 1 parameters for N outcomes, so it is not surprising that they fit the overall distribution well. The fits in subgroups are of more interest.


Would loo_compare provide useful comparative diagnostics between the models in this case?


Good question! First, there is a technical problem for this specific case: loo can't directly compare fits with continuous families to fits with discrete families (see the relevant entry in Aki's cross-validation FAQ).

Second, there is the general consideration that loo, while useful, should (in my personal opinion) be of second-order interest; the most important input should be the question being asked. loo tries to estimate which model is more likely to be better at predicting out-of-sample data. For many questions, we may not want to optimize predictive capacity, but rather understanding, interpretability, or "correctness" in some sense. For example, in causal inference, a model that does not adjust for a confounder or that conditions on a collider (or both) might easily be preferred by loo despite not providing a good answer to our question. Similarly, when choosing a response distribution, it might sometimes be sensible to choose, say, a more interpretable option over the one favored by loo.
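For completeness, between two discrete-family fits the mechanics are simple (a sketch with assumed fit names):

# Sketch: PSIS-LOO comparison of two discrete-family fits.
loo_trunc  <- loo(brm_trunc)
loo_cratio <- loo(brm_cratio)
loo_compare(loo_trunc, loo_cratio)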


Thank you very much for the great explanation!

Thank you, I like your point of view! Tomorrow I will read the vignette you recommended… I am excited to see if it solves my problem!

One question that comes to my mind: if the cratio model simply has more parameters, and as a result my pp_check always looks fantastic, am I not overfitting? However, maybe this will become clearer after reading the text tomorrow… I will come back here after that!

For today, big thanks for your support @JLC and @martinmodrak. Your answers helped me a lot!


In case it's at all helpful, here is how I have munged some PPCs for discrete outcomes with more than one grouping factor:

library(dplyr)
library(ggplot2)
library(tidybayes)

# Get posterior predictions
# (model, data, Cov1, Cov2, and Y are placeholder names)
preds <- data %>%
  add_predicted_draws(model,
                      re_formula = NULL,
                      prediction = "Pred",
                      n = 100)

# Calculate median predicted counts per outcome value and CIs
# (the prediction column is named "Pred" above, so group by that)
pred_counts <- preds %>%
  ungroup() %>%
  select(-.row, -.chain, -.iteration) %>%
  group_by(Cov1,
           Cov2,
           Pred,
           .draw) %>%
  summarize(Count = n()) %>%
  ungroup() %>%
  group_by(Cov1,
           Cov2,
           Pred) %>%
  point_interval(
    Count,
    .width = 0.89,
    .point = median,
    .interval = hdci,
    .simple_names = TRUE,
    na.rm = TRUE
  )

# Plot observed counts as bars, predicted medians and intervals on top
ggplot() +
  geom_bar(data = data,
           aes(x = Y, fill = "Observed"), colour = "black") +
  facet_grid(Cov1 ~ Cov2) +
  geom_point(data = pred_counts,
             aes(x = Pred, y = Count, shape = "Predicted"),
             size = 2) +
  geom_errorbar(data = pred_counts,
                aes(x = Pred,
                    ymin = .lower,
                    ymax = .upper),
                width = 0.2)


To avoid overfitting, do cross-validation. You can use the kfold function. But depending on the size of your data, you may want to generate the folds yourself (e.g., 10% from each subject in each fold).
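A sketch of what hand-made folds could look like, using loo's stratified-split helper so every subject contributes to each fold (fit and column names assumed):

# Sketch: 10-fold CV with fold membership stratified by subject (chiffre).
# The x vector must align with the rows used to fit the model.
library(loo)
folds <- kfold_split_stratified(K = 10, x = data_long$chiffre)
kf <- kfold(brm_cratio, folds = folds)
kf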


Hi Simon @Simon_Wilms, can you share your code for making the pp_check() graph with stat = "mean" that shows values on the y-axis? I used the default function and it does not show y-axis values and scales! Thanks
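A sketch of one way this can work: pp_check() returns a ggplot object, and bayesplot hides the y-axis text by default for these stat plots; yaxis_text(on = TRUE) switches it back on.

# Sketch: show y-axis values on a stat = "mean" pp_check.
library(bayesplot)
pp_check(brm_DTD, type = "stat", stat = "mean") +
  yaxis_text(on = TRUE)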