Different result for loo_pit_overlay from brms pp_check vs loo + bayesplot?

Hi,

I seem to be getting different results from pp_check(x, "loo_pit_overlay") vs. bayesplot::ppc_loo_pit_overlay() for a multivariate model I’ve been working on using brms.

The model is conceptually straightforward, I think. There are 10 observed responses, each with the same fixed effect and two varying effects, such as:

bf_1 <- bf(Std ~ tune + (1 | matrix) + (1 | day_expt),
             family = gaussian())

bf_2 <- bf(Alt ~ tune + (1 | matrix) + (1 | day_expt),
             family = gaussian())
...

bf_10 <- bf(Ho1 ~ tune + (1 | matrix) + (1 | day_expt),
             family = gaussian())

The brm statement is then:

mod1 <- brm(bf_1 + bf_2 + ... + bf_10 + set_rescor(TRUE),
  data = df_mv_as, ...)
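
As an aside, since all ten formulas share the same right-hand side, they could also be built programmatically rather than typing out bf_1 through bf_10. A sketch (only Std, Alt, and Ho1 appear above; the remaining response names are placeholders to fill in):

# Build the ten univariate formulas from a vector of response names
# (add the other seven responses to this vector).
resp_names <- c("Std", "Alt", "Ho1")
bfs <- lapply(resp_names, function(r) {
  bf(as.formula(paste(r, "~ tune + (1 | matrix) + (1 | day_expt)")),
     family = gaussian())
})
mv_form <- Reduce(`+`, bfs) + set_rescor(TRUE)
mod1 <- brm(mv_form, data = df_mv_as)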

The model seems to converge fine. Where I’m getting confused, however, is that when I do pp_check(mod1, "loo_pit_overlay", resp = "Std"), for example, I get a plot that looks like this:

However, if I do:

l1 <- loo(mod1, save_psis = TRUE, cores = 1)
yrep_1 <- posterior_predict(mod1, newdata = df_mv_as, cores = 1)
ppc_loo_pit_overlay(y = df_mv_as$Std,
  yrep = yrep_1[, , 1],
  lw = weights(l1$psis_object),
  samples = 100)

I get a plot that looks like:


Are these two plots telling me the same thing and I’m just not seeing it? To me, the first one looks reasonable, but the second one suggests potential issues. Also, the two PIT lines just look like very different things.
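
One thing I did double-check is that [, , 1] in the call above really corresponds to Std. A sketch, assuming the array returned by posterior_predict() carries the response names as dimnames:

# Confirm the ordering of the response dimension; if dimnames are
# present, subsetting by name is safer than by position.
dim(yrep_1)            # ndraws x nobs x nresponses
dimnames(yrep_1)[[3]]  # response names, if present
yrep_std <- yrep_1[, , "Std"]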

Likewise, if I do pp_check(mod1, type = "dens_overlay", resp = "Std", nsamples = 20)
I get something like:

which makes it clearer that something isn’t quite right, so I’m more inclined to believe the second LOO-PIT plot.


Finally, there were 8 observations (of 352) with Pareto k > 0.7. I’m not sure how this could make a difference for this comparison, but I thought it could be important to point out. In fact, the reason I happened upon this potential issue is that I needed to reloo, so I had to do the ppc_loo_pit_overlay by hand via bayesplot rather than just calling pp_check(mod1, "loo_pit_overlay") on the brms model object.
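
For completeness, the reloo step I ran was along these lines (a sketch; brms::reloo() with its default k_threshold):

# Refit the model once per problematic observation and replace the
# corresponding PSIS results with exact LOO results.
l1_reloo <- reloo(mod1, loo = l1)
print(l1_reloo)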

I’m thinking I’m either (1) doing the loo + bayesplot check/code wrong; or (2) I’m just misunderstanding the plots, but it’d be great if someone could offer any suggestions.

Thanks,

Roy


Hi,
not really an expert on this, but since nobody else answered, I’ll give it a try. I looked at how brms computes the PSIS object, and it seems you should call add_criterion(mod1, criterion = "psis") and then access the result via mod1$criteria$psis to get what brms computes. It does some things when preparing the parameters that I don’t fully understand, but I would first check whether you get the same results as brms this way…
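
In code, that would be something like this (just a sketch; I’m not certain criterion = "psis" is accepted in every brms version, and computing "loo" with save_psis = TRUE also stores a PSIS object):

# Store what brms itself computes, then inspect it
# (whether criterion = "psis" is accepted may depend on the brms version).
mod1 <- add_criterion(mod1, criterion = "psis")
str(mod1$criteria$psis)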

If we can pin down the problem a bit more tightly, we could get Paul involved to help us resolve this completely.


Thanks for the suggestion and feedback, Martin. Indeed, when I did:

mod_addC <- add_criterion(mod1, criterion = "loo")

I got:

which looks much more consistent with what I got (in the post above) using:

ppc_loo_pit_overlay(y = df_mv_as$Std,
  yrep = yrep_1[, , 1],
  lw = weights(l1$psis_object),
  samples = 100)
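
To double-check, the PSIS weights brms stored can be compared directly against the ones computed by hand (a sketch; this assumes the loo criterion was added with save_psis = TRUE, otherwise psis_object is NULL):

# Compare the stored weights with the manually computed ones
# (requires add_criterion(..., criterion = "loo", save_psis = TRUE)).
loo_brms <- mod_addC$criteria$loo
all.equal(weights(loo_brms$psis_object), weights(l1$psis_object))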

However, after reloading all of the same objects back into a new workspace to revisit this issue, it looks like

pp_check(mod1, "loo_pit_overlay", resp = "Std", nsamples = 100)

now also gives me a similarly reasonable representation.

So, I think I must have somehow gotten my workspace mixed up (or hit some other quirk) and that this “problem” was maybe much ado about nothing! Thank you again for the suggestion.
