Simple Categorical Model with group-level predictors won't converge

Hey guys,

I’m trying to build an MRP model to predict German federal state election results on Mac OS using brms 2.7. The major complication compared to the standard MRP use cases is that, unlike the US, there are not just two but 5-7 relevant parties that have to be modeled. Therefore, instead of logistic regression, I decided to use brms with family = categorical(link = "logit"), so the basic model looks something like this:

    vote ~ (1 | religion) + (1 | gender) + (1 | county/zip)
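
In brms the full call looks roughly like this (the data frame name d and the sampler settings are just placeholders for what I actually use):

    library(brms)

    fit_basic <- brm(
      vote ~ (1 | religion) + (1 | gender) + (1 | county/zip),
      data   = d,                           # individual-level survey data
      family = categorical(link = "logit"),
      chains = 3, iter = 2000, cores = 3
    )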

The model runs okay-ish, taking roughly 20 hours for 2000*3 iterations with 20,000 observations, 8 categories, ~60 counties, and ~700 zips, with reasonable diagnostics. However, I also want to use group-level predictors, for example GDP per capita at the county level, so we have something like:

    vote ~ (1 | religion) + (1 | gender) + (1 | county/zip) + gdp_capita

and here the model breaks down. While sampling is much faster now, the model does not converge at all: the effective sample size is extremely low, Rhat > 2, and even 5000*3 iterations with adapt_delta = 0.99 didn’t change that. Different priors on the beta coefficients (like normal(0, 5), normal(0, 1), or cauchy(0, 1)) didn’t help either.
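
For reference, this is roughly what I tried (a sketch; data frame d as above, prior values as mentioned):

    fit_gdp <- brm(
      vote ~ (1 | religion) + (1 | gender) + (1 | county/zip) + gdp_capita,
      data    = d,
      family  = categorical(link = "logit"),
      prior   = prior(normal(0, 5), class = "b"),  # also tried normal(0, 1) and cauchy(0, 1)
      control = list(adapt_delta = 0.99),
      chains  = 3, iter = 5000, cores = 3
    )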

While debugging I tried to fit the simplest possible model:

    vote ~ gdp_capita

and even this one does not converge. The intercept explodes, but removing it still leads to implausible values, too high Rhat, and similar issues. Fitting it with non-Bayesian alternatives (e.g. multinom in nnet) is fast and does not show any problems or suspicious irregularities.
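
For comparison, the frequentist fit that works without issues (again assuming the data frame is called d):

    library(nnet)

    # multinomial logit on the same data; converges quickly and gives sensible coefficients
    fit_ml <- multinom(vote ~ gdp_capita, data = d)
    summary(fit_ml)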

Does anyone have ideas what the cause of this problem might be? Thanks!

I would suggest standardizing gdp_capita to bring it onto a more reasonable scale. You may also set a weakly informative prior on its effect via, say, prior = prior(normal(0, 5)).
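
For example, something along these lines (assuming your data frame is called d):

    d$gdp_capita_z <- as.numeric(scale(d$gdp_capita))  # center and scale the predictor

    fit <- brm(
      vote ~ (1 | religion) + (1 | gender) + (1 | county/zip) + gdp_capita_z,
      data   = d,
      family = categorical(link = "logit"),
      prior  = prior(normal(0, 5), class = "b")
    )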

Indeed, that did the trick. Thanks so much, Paul!