Zero-inflated Poisson with brms

Hi,

I am trying to see how the number of items found on the beach at 14 different sites (within 3 different states) across the Australian coast is affected by different factors (e.g., winds, currents, population). The offset is the sampling effort (volunteers, hours, length of the beach, and the interval between sampling). All the predictors I am including in the model are already equally scaled.

I am a noob with Bayesian and Stan/brms models, but my previous models were not working for me: the Poisson was overdispersed. Therefore, I have tried poisson, negbinomial, and zero_inflated_poisson with brms.

I am getting a great number of warnings when running a brms model (whatever the family is, although zero-inflated looks to have the best results). See the warning messages below. I have tried to increase the iterations and chains, but the errors persist. I attached the script and a subset of the data. Briefly, the errors occur when I introduce the random effects; without them, the model runs really well.

Since I am a noob, I am not too sure how to choose the priors and whether this would help. I tried the defaults and a few other combinations after using get_prior(), but the warnings persisted.

Let me know if more information is needed to understand the warning messages.

Any help will be much appreciated.

bm1 <- brm(TotalNumberItems ~ scaled_SOI_Index + scaled_Nearest_port_km + LandUse +
             WeatherRegions + scaled_PC1winds + scaled_PC1currents +
             (1 | State) + (1 | Site) + offset(log(EffortDayC)),
           data = dat1,
           family = zero_inflated_poisson("log"))
pp_check(bm1)

Warning messages:
1: There were 248 divergent transitions after warmup.
2: There were 3751 transitions after warmup that exceeded the maximum treedepth. Increase max_treedepth above 10.
3: There were 1 chains where the estimated Bayesian Fraction of Missing Information was low.
4: Examine the pairs() plot to diagnose sampling problems
5: The largest R-hat is 3.87, indicating chains have not mixed.
Running the chains for more iterations may help.
6: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
Running the chains for more iterations may help.
7: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
Running the chains for more iterations may help.

You say that there are "14 different sites (within 3 different states)", so it sounds like sites are nested within states, in which case you would write (1|State/Site). Since there are only 3 states, you likely need to set a narrower prior on the sd of the group-level effects for State. The divergent transitions might be coming from the group-level effects. You might try a half-normal prior instead of the default student_t. Since it is on the log scale, try something like normal(0, 2.5) and then maybe even normal(0, 1), but you need to see how those priors affect the estimated sd. It's only a suggestion; I don't know what your data are like. Try increasing adapt_delta and the max treedepth like this: control = list(adapt_delta = 0.99, max_treedepth = 12).
Also, I believe that you can write TotalNumberItems|rate(log(EffortDayC)) ~ and remove offset(log(EffortDayC)) from the right-hand side of the formula.
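
Putting those pieces together, a minimal sketch might look like this (the prior scales are illustrative starting points only; I keep the offset() form here, with the rate() shorthand covered further down the thread):

# nested grouping, half-normal prior on the group-level sds, tighter sampler settings
priors <- prior(normal(0, 2.5), class = "sd")  # sd priors are bounded at 0 in brms, so this acts as a half-normal

bm1b <- brm(TotalNumberItems ~ scaled_SOI_Index + scaled_Nearest_port_km + LandUse +
              WeatherRegions + scaled_PC1winds + scaled_PC1currents +
              (1 | State/Site) + offset(log(EffortDayC)),
            data = dat1,
            family = zero_inflated_poisson("log"),
            prior = priors,
            control = list(adapt_delta = 0.99, max_treedepth = 12))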

Thank you for your reply. I tried all the suggestions you gave me, but the warnings persisted.

I tried 3 different priors.

The first prior I tried is below.

prior1 <- c(prior(normal(0, 1), class = "b"),         # normal prior on regression coefficients (mean 0, sd 1)
            prior(normal(0, 1), class = "Intercept"), # normal prior on the intercept (mean 0, sd 1)
            prior(normal(0, 1), class = "sd"))        # normal (half-normal) prior on the group-level sds

with the following model:

bm1 <- brm(TotalNumberItems | rate(log(EffortDayC)) ~ scaled_SOI_Index + scaled_Nearest_port_km +
             LandUse + WeatherRegions + scaled_PC1winds + scaled_PC1currents +
             (1 | State/Site),
           prior = prior1,
           data = dat1,
           family = poisson("log"),
           control = list(adapt_delta = 0.99, max_treedepth = 12))

The warnings persisted. When I changed the prior to normal(0, 0.3), the problem continued; only with normal(0, 0.1) did the warnings improve significantly, but this one persists:

Warning messages:
1: There were 3941 transitions after warmup that exceeded the maximum treedepth. Increase max_treedepth above 12.
2: Examine the pairs() plot to diagnose sampling problems

Can I trust these results? Is the prior OK, or too restrictive?

You can't trust the results. You should have 0 divergences, Rhat < 1.01, and no warnings about low ESS or low BFMI. Stan has lots of diagnostics and that's a good thing :) Start simple and build up step by step.

For example, a first step would be to check whether the mean and the variance of your outcome (TotalNumberItems) differ. If they differ too much, then a negative binomial might be better. (If you have many zeros in the data set, then modeling those separately might be a good idea.)
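A quick way to check (a minimal sketch, assuming dat1 as in the post):

# overdispersion and zero-inflation checks on the raw outcome
mean(dat1$TotalNumberItems)        # Poisson assumes the variance is roughly equal to the mean
var(dat1$TotalNumberItems)         # variance much larger than the mean suggests overdispersion
mean(dat1$TotalNumberItems == 0)   # proportion of zeros

Then create a simple null model first,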

m0 <- brm(TotalNumberItems ~ 1, data = dat1, family = [poisson or negbinomial or ...], sample_prior = "only")

Note that we only sample from the default priors above. You can now use pp_check() as always and check what the priors imply for the outcome. After this, add your own priors, check that they don't somehow limit what you would expect as an outcome, and then sample with the empirical data. Setting priors on the log() scale is not intuitive, so an iterative approach where you check what the priors mean for the outcome is, to me, the only sane approach.
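
That iterative loop might look like this (a sketch; negbinomial and the prior scale are placeholders, not recommendations):

m0 <- brm(TotalNumberItems ~ 1, data = dat1, family = negbinomial(),
          sample_prior = "only")
pp_check(m0)    # do draws from the prior cover plausible outcome values?

# tighten (or loosen) the prior and repeat until the implied outcomes look sensible
m0b <- brm(TotalNumberItems ~ 1, data = dat1, family = negbinomial(),
           prior = prior(normal(0, 1), class = "Intercept"),
           sample_prior = "only")
pp_check(m0b)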

In the end, you'd like the model's predictions and the empirical data to have a decent fit. Once you have a decent fit, start adding some population-level (main, fixed) effects, e.g., scaled_SOI_Index, and repeat the above steps. At one of the steps you will probably notice some problems with the model not converging; you then need to figure out how to solve that problem (reparameterization, setting better priors, etc.).
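
For example, a first step up could be (again a sketch, with negbinomial as a stand-in family):

m1 <- brm(TotalNumberItems ~ scaled_SOI_Index,
          data = dat1, family = negbinomial())
pp_check(m1)
summary(m1)    # check Rhat and ESS before adding the next predictor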

I would also recommend that you download and install cmdstanr and then use install_cmdstan() to install the latest version of CmdStan. In brm(), you can then just add backend = "cmdstanr" to use CmdStan.
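
Something like this (the GitHub route is one way to install cmdstanr; see its documentation for alternatives):

# install.packages("remotes")
remotes::install_github("stan-dev/cmdstanr")
library(cmdstanr)
install_cmdstan()    # downloads and builds the latest CmdStan
# then: brm(..., backend = "cmdstanr")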

I would echo @torkar's comments about starting simple and building up to more complex. I would remove the group-level effects and see if the model will run.
Like @torkar mentions, it is hard to get a feel for priors on the log scale, and without seeing the data it is hard to know how narrow a certain prior is. Doing prior predictive checks is a good idea. https://arxiv.org/pdf/1709.01449.pdf

Also, you should write TotalNumberItems|rate(EffortDayC) ~ ; you do not need the log(EffortDayC) in there. Sorry I did not catch that sooner! The brms manual states, "Using rate(denom) is equivalent to adding offset(log(denom)) to the linear predictor of the main parameter but the former is arguably more convenient and explicit."
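
In other words (a sketch with a single predictor for brevity; the two formulas below are equivalent):

library(brms)
f1 <- bf(TotalNumberItems | rate(EffortDayC) ~ scaled_SOI_Index)
f2 <- bf(TotalNumberItems ~ scaled_SOI_Index + offset(log(EffortDayC)))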

Andrew Gelman also recently wrote this, which might be helpful: "When MCMC fails: The advice we're giving is wrong. Here's what you should be doing instead. (Hint: it's all about the folk theorem.)" on the Statistical Modeling, Causal Inference, and Social Science blog.

If it were me, I would try the model with default priors and without any group-level effects, just to see what would happen, and go from there (I know, I know, not the advice I just linked to above, but I have often found that the problems are frequently in the group-level effects). If you still have problems even without the group-level effects, then you will need to start very simple (intercept only) and work your way up and try to figure it out, and really think hard about good priors using prior predictive checks.
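
That fixed-effects-only check might look like this (a sketch, reusing the variables from the original post):

# default priors, no group-level terms, effort handled via rate()
bm_fixed <- brm(TotalNumberItems | rate(EffortDayC) ~ scaled_SOI_Index +
                  scaled_Nearest_port_km + LandUse + WeatherRegions +
                  scaled_PC1winds + scaled_PC1currents,
                data = dat1, family = zero_inflated_poisson("log"))
summary(bm_fixed)   # if this samples cleanly, add (1 | State/Site) back and compare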
