I am using stan_glmer (from the rstanarm package) on a dataset of 35,000 people. The model has a number of fixed and random effects, with a mixture of continuous variables, factors, and dummy variables. For example:

```
mod <- stan_glmer(
  outcome ~ var1 + var2 + var3 + var4 + var5 + var6 + var7 + var8 +
    var9 + var10 + var11 + var12 + var13 + var19 +
    (1 | var20) + (1 | var21) + (1 | var22) + (1 | var23) + (1 | var24) +
    (1 | var25) + (1 | var26) + (1 | var27) + (1 | var28),
  data = dat, family = binomial(link = "logit"),
  prior_intercept = normal(0, 1), prior = normal(0, 1),
  cores = 5
)
```

I am running this on a fairly powerful server, but it takes so long that I had to cancel the process after four hours.

Aside from increasing the number of cores, are there any ways to make this run faster? The dataset is longitudinal and will grow even larger by November.
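For context, these are the kinds of shortcuts I have seen mentioned and am wondering whether they would be defensible here (a sketch only; the formula is abbreviated for illustration, and I have not checked how close the approximations come to full sampling):

```r
library(rstanarm)

# Option 1: variational approximation instead of full NUTS sampling.
# Much faster, but only approximates the posterior.
mod_vb <- stan_glmer(
  outcome ~ var1 + var2 + (1 | var20) + (1 | var21),  # abbreviated formula
  data = dat, family = binomial(link = "logit"),
  prior_intercept = normal(0, 1), prior = normal(0, 1),
  algorithm = "meanfield"
)

# Option 2: fewer, shorter chains for a rough first look
# before committing to a long run.
mod_quick <- stan_glmer(
  outcome ~ var1 + var2 + (1 | var20) + (1 | var21),  # abbreviated formula
  data = dat, family = binomial(link = "logit"),
  prior_intercept = normal(0, 1), prior = normal(0, 1),
  chains = 2, iter = 1000, cores = 2
)
```

Would either of these be a reasonable way to iterate on the model before doing a final full-sampling run?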

Thanks