Hi, I’m trying to model an ordinal response variable with several predictors, some of which are ordinal as well. Ideally, I would like to fit the model with random effects, as the data is micro-longitudinal: the response is a psychological scale (a 1–7 Likert scale) measured across different items and participants over time. I would also like to impose shrinkage on the predictor slopes. So, to my understanding, my end goal is a relatively complex model, and I am aware that it may take quite a while to fit.

I first tried fitting the model with the default gaussian() family, and I’m able to get the model to converge in very reasonable time (1–2 hrs or so, with the default number of chains and iterations). However, when I try to fit even a very basic model with only a few fixed effects and no random effects using the cumulative() family, the sampling time goes through the roof (i.e. the cumulative model with fixed effects only takes much longer than the gaussian model with random effects).

My question is: is this cost of using the cumulative() family unavoidable, or am I parametrizing the model wrong? I’ve tried imposing relatively informative priors, e.g. setting the prior on the intercepts to be centered on the data median with a smaller SD; however, this did not help much.
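For concreteness, the informative-prior attempt looked roughly like this (a sketch: the centering value is my data’s median, and the exact SD varied across attempts):

```r
# Hypothetical sketch of the priors I tried for the cumulative model:
# shrinkage on the slopes, and thresholds centered on the observed
# response median with a tighter SD than the brms default
informed_priors <- c(
  prior(normal(0, 0.5), class = 'b'),        # shrinkage on predictor slopes
  prior(normal(5, 1),   class = 'Intercept') # centered on data median, smaller SD
)
```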

Here’s how I would like to fit the “idealized” model, i.e. the one including the random effects. If you want to imagine what I actually tried to fit, drop the random effects and substitute one or two predictor variables for the “.”:

```r
mod <- brm(
  dfs_value ~ . - ID - dfs_item + (. - ID - dfs_item | ID) + (1 | dfs_item),
  data   = data_daily_proc,
  family = cumulative(threshold = 'flexible'),
  prior  = c(prior(normal(0, 0.5), class = 'b'),
             prior(normal(5, 2.5), class = 'Intercept')),
  cores  = 4
)
```
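And here is a sketch of the basic model that is already slow for me (`pred1` and `pred2` are placeholders for one or two of my actual predictor columns):

```r
# Stripped-down version actually fit: fixed effects only, no random effects;
# 'pred1' and 'pred2' stand in for one or two real predictor variables
mod_basic <- brm(
  dfs_value ~ pred1 + pred2,
  data   = data_daily_proc,
  family = cumulative(threshold = 'flexible'),
  prior  = c(prior(normal(0, 0.5), class = 'b'),
             prior(normal(5, 2.5), class = 'Intercept')),
  cores  = 4
)
```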

Thank you very much in advance, and let me know if I should supply some more detail.