Intriguing fitting issue

  • Operating System: Linux (Debian testing)
  • brms Version: brms_2.9.3 (from Github)

I have trouble fitting a (relatively simple) model using brms.

foo.R (5.3 KB)

The data are in the dataframe foo in the uploaded file (the data not being mine, I have obfuscated the variable names and factor levels…).

I want to fit the dependent variable Dep with a linear model depending on

  • the factors F2, F3 and F4 (the latter being a random effect);
  • the boolean variables B1 and B2;
  • the numeric variables N2, N3, N4 and N5.

With the following troubling results:

  • “Small” models containing F2, N2, B1, F3, B2 and N1 fit without difficulty with default brms parameters, either as a gaussian model or as a Poisson model (which would be reasonable, Dep being indeed a count); in the latter case, forcing adapt_delta=0.95 (IIRC) avoids divergent transitions.

  • Introducing N4 requires raising adapt_delta to 0.99 (resp. 0.999) to avoid divergent transitions.

  • I managed to fit the “full” gaussian model with the ridiculous and questionable code below (which takes ages to finish):

system.time(bar <- brm(Dep ~ F2 + N2 + B1 + F3 + B2 + N3 + N4 + N5 + (1 | F4),
                       family = gaussian, data = foo,
                       prior = prior(student_t(3, 0, 10), class = "b"),
                       sample_prior = "yes",
                       save_all_pars = TRUE, seed = 1723,
                       control = list(adapt_delta = 1 - 1e-8)))

I haven’t been able to fit the “full” Poisson model: with the seed=1723 value, I get one chain sampling in about 10 seconds, another one in about 1 minute, the third one needing about 5 minutes, and the last stuck in the “sampling” state at about 1200 iterations for more than 10 minutes.

I suspect that the problem is with my data: I may be hitting a collinearity, but I have been unable to detect it.
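
A minimal sketch, assuming the column names of foo listed above, of one way to probe the fixed-effects design matrix for exact or near collinearity:

## Sketch only: inspect the fixed-effects design matrix of foo for redundancy.
X <- model.matrix(~ F2 + N2 + B1 + F3 + B2 + N3 + N4 + N5, data = foo)

qr(X)$rank == ncol(X)    # FALSE would indicate exact collinearity
kappa(X, exact = TRUE)   # a very large condition number hints at near-collinearity

## Pairwise correlations among the numeric predictors
round(cor(foo[, c("N2", "N3", "N4", "N5")]), 2)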

Notes:

  • I do not expect to see any credible interval not straddling 0 (except for the intercept, of course…). The factor of interest is F3, and establishing that its two contrasts are centered around 0 with a small range would be of interest (see the sketch after these notes).
  • I have only 4 levels for F4, because sampling from it is “expensive”, but it is fundamentally a random effect, and my conclusions should revolve around its variance.
  • Neither lmer nor glmer reports problems about the corresponding frequentist models.
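
Since sample_prior="yes" was used above, one way to quantify “centered around 0” is brms::hypothesis(), which computes Savage–Dickey evidence ratios for point hypotheses. A hedged sketch, where "F3b" is only a placeholder for whatever the obfuscated contrast is actually named:

## Sketch only: "F3b" stands in for the real name of one of the F3 contrasts,
## as reported by fixef(bar).
hypothesis(bar, "F3b = 0")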

Ideas?

I assume it is indeed a collinearity issue. If the predictors are perfectly collinear, lmer and glmer (to my knowledge) automatically drop redundant predictors. Perhaps this is why you don’t see any sampling problems there.
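
A minimal sketch, assuming the same formula as above, of how one could check whether lmer silently dropped anything:

library(lme4)

fit_lmer <- lmer(Dep ~ F2 + N2 + B1 + F3 + B2 + N3 + N4 + N5 + (1 | F4),
                 data = foo)

## Any design-matrix column missing from the fitted coefficients would have
## been dropped by lmer as redundant (it also emits a rank-deficiency message).
setdiff(colnames(model.matrix(~ F2 + N2 + B1 + F3 + B2 + N3 + N4 + N5,
                              data = foo)),
        names(fixef(fit_lmer)))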

To get the model to converge despite the collinearity you may try out the horseshoe prior on the regression coefficients or use the QR decomposition via argument decomp in brmsformula.
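
For concreteness, a minimal sketch of both suggestions applied to the model above; the horseshoe degrees of freedom are illustrative only:

## Option 1: horseshoe prior on the population-level coefficients.
fit_hs <- brm(Dep ~ F2 + N2 + B1 + F3 + B2 + N3 + N4 + N5 + (1 | F4),
              family = gaussian, data = foo,
              prior = set_prior(horseshoe(df = 1), class = "b"))

## Option 2: QR decomposition of the design matrix, requested in brmsformula()/bf().
fit_qr <- brm(bf(Dep ~ F2 + N2 + B1 + F3 + B2 + N3 + N4 + N5 + (1 | F4),
                 decomp = "QR"),
              family = gaussian, data = foo)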

Dear Paul,

Thank you for your answer.

No such luck… but I’m not sure about the parameterization of the horseshoe prior. I’ll try to explore it before reporting further problems.

Please note that, in a related model, the decomp parameter was correctly interpreted only when used in an explicit call to brmsformula; using it as a parameter to brm led to curious misinterpretations… Do you care for a formal issue on brms’s GitHub site?

Decomp is not intended to be passed to brm anyway.