Error using loo for the custom_family beta-binomial

Hi,
I’ve built some models with the brms package, using custom_family() for a beta-binomial as shown in this example: add 'beta_binomial2' family · paul-buerkner/custom-brms-families@32c9f2b · GitHub
However, when I try to do post-processing (loo) I get an error for one of my models.

The code I run is as follows:

library(RcmdrMisc)
library(brms)

# log likelihood of one observation, used by loo() and waic()
log_lik_beta_binomial2 <- function(i, draws) {
  mu <- draws$dpars$mu[, i]
  phi <- draws$dpars$phi
  N <- draws$data$trials[i]
  y <- draws$data$Y[i]
  beta_binomial2_lpmf(y, mu, phi, N)
}

# posterior predictive draws for one observation, used by predict() and pp_check()
predict_beta_binomial2 <- function(i, draws, ...) {
  mu <- draws$dpars$mu[, i]
  phi <- draws$dpars$phi
  N <- draws$data$trials[i]
  beta_binomial2_rng(mu, phi, N)
}

# expected values of the posterior predictive distribution, used by fitted()
fitted_beta_binomial2 <- function(draws) {
  mu <- draws$dpars$mu
  trials <- draws$data$trials
  trials <- matrix(trials, nrow = nrow(mu), ncol = ncol(mu), byrow = TRUE)
  mu * trials
}

# definition of the custom family
beta_binomial2 <- custom_family(
  "beta_binomial2", 
  dpars = c("mu", "phi"),
  links = c("logit", "log"),
  lb = c(0, 0),
  ub = c(1, NA),
  type = "int", vars = "trials[n]",
  log_lik = log_lik_beta_binomial2,
  predict = predict_beta_binomial2,
  fitted = fitted_beta_binomial2
)

# additionally required Stan code
stan_beta_binomial2 <- "
  real beta_binomial2_lpmf(int y, real mu, real phi, int T) {
    return beta_binomial_lpmf(y | T, mu * phi, (1 - mu) * phi);
  }
  real beta_binomial2_rng(real mu, real phi, int T) {
    return beta_binomial_rng(T, mu * phi, (1 - mu) * phi);
  }
"

added_fun <- stanvar(scode = stan_beta_binomial2, block = "functions")

fit1g <- brm(
  Pest | trials(Trials) ~ (Ants * Treatment * Plant_species * Insects * Ants_Tree * Ants_Feed * Grass * Day_number) + (1 | Block/Tree_ID),
  data = mydata, family = beta_binomial2, stanvars = added_fun,
  prior = set_prior("normal(0,10000)", class = "b"),
  chains = 4, iter = 20000
)

This code works just fine. However, when I then try to calculate the loo with the following code:

expose_functions(fit1g, vectorize = TRUE)

log_lik_beta_binomial2 <- function(i, prep) {
  mu <- prep$dpars$mu[, i]
  phi <- prep$dpars$phi
  trials <- prep$data$vint1[i]
  y <- prep$data$Y[i]
  beta_binomial2_lpmf(y, mu, phi, trials)
}

fit1g_loo <- loo(fit1g, reloo = TRUE)

I get this error:

Error in (function (y, mu, phi, T, pstream__ = <pointer: 0x00000000019ce540>)  : 
  Exception: beta_binomial_lpmf: First prior sample size parameter is 0, but must be > 0!  (in 'unknown file name' at line 5)
Called from: (function (y, mu, phi, T, pstream__ = <pointer: 0x00000000019ce540>) 
.Call(<pointer: 0x0000000071282960>, y, mu, phi, T, pstream__))(y = dots[[1L]][[1L]], 
    mu = dots[[2L]][[6743L]], phi = dots[[3L]][[6743L]], T = dots[[4L]][[1L]])

I’ve searched the forum for similar errors, and in threads where this comes up for brm itself (not loo), the usual suggestion is to change the init argument to something narrower than (-2, 2). However, my brm call samples without this error, so I’m not sure whether that applies here.
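For reference, this is roughly what that suggestion would look like (just a sketch: the argument names assume brms 2.12, where brm()/update() take inits and pass init_r on to rstan; I haven’t tried it, since my brm call itself runs fine):

# refit with random initial values drawn from (-0.5, 0.5) instead of the default (-2, 2)
fit1g_narrow <- update(fit1g, inits = "random", init_r = 0.5)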

Any suggestions for solving the error will be greatly appreciated! :-)

  • Operating System: Windows 10
  • brms Version: 2.12.0

Best regards,

2 Likes

Hi,
[Disclaimer: I am pretty new to brms so there might be something obvious going on that I am missing.]
I also use the beta_binomial2 family quite a bit and I did not come across a similar error with my models.
Do other methods, like plot(fit1g), summary(fit1g) and pp_check(fit1g), suggest a sensible fit for the model? Your model looks quite complicated.
I cannot fit your model because I don’t have your data. Can you provide a minimal reproducible example (preferably with a simpler model) where the error occurs?
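By diagnostics I mean calls along these lines (a sketch: pp_check() needs the custom family’s predict method, so expose_functions() must have been run, and nsamples is the argument name in brms 2.12):

expose_functions(fit1g, vectorize = TRUE)
plot(fit1g)                       # trace and density plots
summary(fit1g)                    # Rhat and effective sample sizes
pp_check(fit1g, nsamples = 100)   # posterior predictive check against the data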

2 Likes

Hi,
The error only occurs for my most complicated model, which includes ALL interactions; this is of course very unlikely to be the best model. The only reason I want to calculate the loo is to “prove” that it is indeed NOT the best model for my data. Much simpler models containing only two-way interactions give no problems with loo. So maybe that’s the problem? Too many interactions?
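To be concrete, the comparison I have in mind is something like this (fit_simple is just a placeholder for one of my simpler two-way-interaction models):

loo_simple <- loo(fit_simple)
fit1g_loo  <- loo(fit1g, reloo = TRUE)   # this is the call that errors
loo_compare(loo_simple, fit1g_loo)       # compare expected predictive performance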
Summary and plot are no problem; of course they show a really poor fit.
Unfortunately I cannot provide a simpler model that reproduces the error, since the simpler models work fine; it is only the loo of this particular model that I’m missing. I’ve attached an example of my data.

Example_data1.csv (11.0 KB)

Best regards,

  1. Just using your data and the model code you posted, I get the following error:
    Error: The following variables are missing in ‘data’:
    ‘Ants’, ‘Grass’
    I assume in the data file you posted here, these are called ‘Ants_or_control’ and ‘Grass_at_beginning’?

  2. I could not fit the model, because all the initial values were rejected. How did you get around that?

  3. Your prior seems huge. If I am not mistaken, the priors on the coefficients are specified on the logit scale, where -5 already means ‘almost never’ and 5 ‘almost always’ once you apply the inverse logit; see the sketch after this list. (No idea if this is related to your problem.)

  4. Since loo worked for your other models, my guess is that there is something wrong with the model. Can you post the summary and the plot here?
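To illustrate point 3: with normal(0, 10000) on the logit scale, practically all of the prior mass ends up at probabilities of exactly 0 or 1 once you push it through the inverse logit. A quick check in base R:

# prior draws on the logit scale, mapped to the probability scale
round(plogis(rnorm(10, mean = 0, sd = 10000)), 3)
# almost surely returns only 0s and 1s, i.e. the prior says each effect makes
# the outcome either 'never' or 'always', which is probably not what was intended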

  1. Yes, sorry I didn’t include the code where I define the variables with simpler names.
  2. I did not have that problem, so I’m not sure what I did differently. I used the exact code posted here.
  3. Could be the problem; however, I was told by my supervisor to use this prior distribution to make sure it was wide enough to cover the actual distribution of the data. I hope that makes sense.
  4. The summary is 882 rows, so I won’t post all of it, but here is some of it:

summary(fit1g)
Family: beta_binomial2
Links: mu = logit; phi = identity
Formula: Pest | trials(Trials) ~ (Ants * Treatment * Plant_Species * Insects * Ants_Tree * Ants_Feed * Grass * Day_number) + (1 | Block/Tree_ID)
Data: mydata (Number of observations: 335)
Samples: 4 chains, each with iter = 20000; warmup = 10000; thin = 1;
total post-warmup samples = 40000

Group-Level Effects:
~Block (Number of levels: 6)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept) 0.32 0.16 0.10 0.66 2.18 5 14
~Block:Tree_ID (Number of levels: 72)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept) 0.54 0.09 0.43 0.73 1.91 6 23

Family Specific Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
phi 22.05 2.99 17.23 28.25 1.58 7 17

Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
Warning messages:
1: Parts of the model have not converged (some Rhats are > 1.05). Be careful when analysing the results! We recommend running more iterations and/or setting stronger priors.
2: There were 14209 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See Runtime warnings and convergence problems

When doing plot(fit1g), this is the first plot:

Thanks for posting the summary and the plot. It is now pretty clear to me that the sampling did not work properly: your chains have not converged, and your effective sample size is terrible, especially for so many iterations. The warnings tell you the same thing!
I still don’t know exactly why this leads to the error with loo, but this is not a sensible model. You could try fitting it again with MUCH narrower priors and maybe an increased adapt_delta, but even then I don’t know whether it would work, because the structure is so complicated. You mention you have other models as well; I would advise you to inspect them carefully for similar warnings, and if they have any you might need to refit them too. I know this is probably not what you wanted to hear, so I am sorry.
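A sketch of what such a refit could look like (the prior width is only an example; on the logit scale normal(0, 1) already allows fairly large effects, and update() will recompile the model when the prior changes):

fit1g_refit <- update(
  fit1g,
  prior = set_prior("normal(0, 1)", class = "b"),
  control = list(adapt_delta = 0.95)
)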

1 Like

Thank you, and no worries: I know this model is terrible, and the plot shows that the chains have clearly not converged; that is exactly what I want to show with this model. It just bugs me that I can’t get the loo, since I want to use it as a sort of model selection tool.
I don’t get any warnings for my other models, so they are just fine (and so are the loo-values). However, getting the loo for this model would be a strong argument in the model selection/reduction process.
If I can’t find/fix the problem, I’ll of course just use the outputs from the summary and plots.

Well, the sampling did not work as intended, so I don’t think you can make the argument with loo that you want to make.
If I understand you correctly, you want to show that including all the interactions does not describe the data well. But all you can say is that with all the interactions included, the sampling did not work. The parameter estimates you get from such bad sampling are almost meaningless, so it is to be expected that the model makes bad predictions with them. All you can really say is: look, this model was too complicated for sampling.
My wild guess is that the problems with loo stem from the fact that your parameter estimates are far outside the reasonable range, so that the probability of the data evaluates to 0.
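If you want to check that guess, you could evaluate the pointwise log likelihood directly; log_lik() uses the log_lik_beta_binomial2 function defined earlier, so either it throws the same error (which already confirms the likelihood cannot be evaluated for some draws) or you can count the non-finite entries:

ll <- log_lik(fit1g)                         # draws x observations matrix
sum(!is.finite(ll))                          # number of -Inf / NaN entries
head(which(!is.finite(ll), arr.ind = TRUE))  # first few offending draw/observation pairs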

3 Likes

Ok, that makes sense. Thank you very much!

2 Likes

+1 to what @BrittaL said: all of our tools like loo provide results conditional on sampling having been successful. If this is not the case, all bets are off.

If you are willing to invest some time in it, it should be possible to learn exactly why sampling didn’t work for the model. My best bet is that you are trying to fit an underdetermined regression: if at least one of the factors in Ants * Treatment * Plant_Species * Insects * Ants_Tree * Ants_Feed * Grass * Day_number has more than two levels, you have more fixed effects than rows in the dataset (and that’s ignoring the varying intercepts for Block/Tree_ID). A post with more technical discussion on what that looks like and why it breaks Stan: Underdetermined Linear Regression . So even if sampling worked, you would be unable to learn anything about the individual parameters, as they would be completely unconstrained by the data: in such underdetermined settings only combinations of parameters are constrained by the data, not each parameter individually.
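A quick way to check this, assuming the predictor columns are in mydata and the factors are coded as such (the formula is just copied from your model):

X <- model.matrix(
  ~ Ants * Treatment * Plant_Species * Insects * Ants_Tree * Ants_Feed * Grass * Day_number,
  data = mydata
)
ncol(X)        # number of fixed-effect columns the model tries to estimate
nrow(mydata)   # 335 observations in this case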

Hope that clarifies more than confuses.

1 Like

Thank you very much! That really clarifies it, and makes so much sense! I have a factor with more than two levels, so that’s probably the problem.
I’ll mark your comment as solution, so both of you get credit :)