Dear all,

I am very new to the brms package and to Bayesian statistics. I am experiencing some problems when running my models, and I was wondering whether some of you could give me advice on the most efficient way to run them.

My dataset contains more than 50,000 observations. In my models, the outcome variable and the predictors are ordered categorical variables, each measured on a Likert scale from 1 to 4. Consequently, I am using ordinal logistic regression models with monotonic effects. I have two nested random effects (YEAR nested within COUNTRY_NAME).

This is the last model I ran:

```
prior1 <- prior(normal(0, 1), class = "b") +
  prior(dirichlet(1, 1, 1), class = "simo", coef = "moECOUNJOBREV1") +
  prior(dirichlet(1, 1, 1), class = "simo", coef = "moECOUNEDUCATIONREV1") +
  prior(dirichlet(1, 1, 1), class = "simo", coef = "moECOUNFOODREV1") +
  prior(dirichlet(1, 1, 1), class = "simo", coef = "moECOUNMEDICINEREV1") +
  prior(dirichlet(1, 1, 1), class = "simo", coef = "moECOUNCASHREV1")

ecouncermodel <- brm(
  formula = DOMINANCEREV ~ 1 + mo(ECOUNJOBREV) + mo(ECOUNEDUCATIONREV) +
    mo(ECOUNFOODREV) + mo(ECOUNMEDICINEREV) + mo(ECOUNCASHREV) +
    (1 | COUNTRY_NAME/YEAR),
  family = cumulative("logit"), data = dat, prior = prior1, iter = 3000,
  control = list(max_treedepth = 12, adapt_delta = 0.99)
)
```
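In case it is relevant: I did not set `chains` or `cores` explicitly, so I used the brms defaults. A sketch of the same call with the four chains run in parallel (assuming four physical cores are available; `chains` and `cores` are standard `brm()` arguments):

```r
# Same model as above, but with the chains parallelised across cores
# (assumption: at least 4 cores are free on the machine)
ecouncermodel <- brm(
  formula = DOMINANCEREV ~ 1 + mo(ECOUNJOBREV) + mo(ECOUNEDUCATIONREV) +
    mo(ECOUNFOODREV) + mo(ECOUNMEDICINEREV) + mo(ECOUNCASHREV) +
    (1 | COUNTRY_NAME/YEAR),
  family = cumulative("logit"), data = dat, prior = prior1,
  iter = 3000, chains = 4, cores = 4,
  control = list(max_treedepth = 12, adapt_delta = 0.99)
)
```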

I had problems before with max_treedepth and adapt_delta, and brms suggested increasing the number of iterations or using stronger priors. Consequently, I increased max_treedepth, adapt_delta, and the number of iterations. However, these changes made the model very slow to run: the computer ran it for 9 days. Moreover, it still returned warnings:

```
Warning messages:
1: In UseMethod("depth") :
  no applicable method for 'depth' applied to an object of class "NULL"
2: In UseMethod("depth") :
  no applicable method for 'depth' applied to an object of class "NULL"
3: In UseMethod("depth") :
  no applicable method for 'depth' applied to an object of class "NULL"
4: In UseMethod("depth") :
  no applicable method for 'depth' applied to an object of class "NULL"
5: In UseMethod("depth") :
  no applicable method for 'depth' applied to an object of class "NULL"
```

Looking at the output (Rhat = 1.02) and the plots, the problem seems to be the random effects:

```
 Family: cumulative
  Links: mu = logit; disc = identity
Formula: DOMINANCEREV ~ 1 + mo(ECOUNJOBREV) + mo(ECOUNEDUCATIONREV) + mo(ECOUNFOODREV) + mo(ECOUNMEDICINEREV) + mo(ECOUNCASHREV) + (1 | COUNTRY_NAME/YEAR)
   Data: dat (Number of observations: 52325)
Samples: 4 chains, each with iter = 3000; warmup = 1500; thin = 1;
         total post-warmup samples = 6000

Group-Level Effects:
~COUNTRY_NAME (Number of levels: 54)
              Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept)     0.54      0.29     0.03     0.97        190 1.02

~COUNTRY_NAME:YEAR (Number of levels: 54)
              Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept)     0.58      0.28     0.03     0.99        200 1.02

Population-Level Effects:
                    Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
Intercept[1]           -0.82      0.13    -1.07    -0.57       4488 1.00
Intercept[2]            0.53      0.12     0.28     0.78       4477 1.00
Intercept[3]            2.17      0.13     1.91     2.41       4551 1.00
moECOUNJOBREV           0.15      0.02     0.11     0.20       9438 1.00
moECOUNEDUCATIONREV     0.12      0.03     0.06     0.18       8967 1.00
moECOUNFOODREV          0.42      0.05     0.33     0.51       7001 1.00
moECOUNMEDICINEREV      0.17      0.03     0.12     0.23       7047 1.00
moECOUNCASHREV         -0.16      0.03    -0.23    -0.09       8287 1.00

Simplex Parameters:
                        Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
moECOUNJOBREV1[1]           0.12      0.09     0.01     0.34       8821 1.00
moECOUNJOBREV1[2]           0.80      0.11     0.56     0.97       9882 1.00
moECOUNJOBREV1[3]           0.08      0.06     0.00     0.24      10538 1.00
moECOUNEDUCATIONREV1[1]     0.82      0.12     0.53     0.98       7870 1.00
moECOUNEDUCATIONREV1[2]     0.14      0.11     0.01     0.42       7633 1.00
moECOUNEDUCATIONREV1[3]     0.05      0.05     0.00     0.17       7336 1.00
moECOUNFOODREV1[1]          0.37      0.06     0.26     0.50       8236 1.00
moECOUNFOODREV1[2]          0.24      0.07     0.10     0.39       9735 1.00
moECOUNFOODREV1[3]          0.39      0.08     0.23     0.53       7895 1.00
moECOUNMEDICINEREV1[1]      0.72      0.12     0.48     0.93       6983 1.00
moECOUNMEDICINEREV1[2]      0.15      0.10     0.01     0.38       9299 1.00
moECOUNMEDICINEREV1[3]      0.13      0.09     0.01     0.34       7176 1.00
moECOUNCASHREV1[1]          0.06      0.05     0.00     0.19      10660 1.00
moECOUNCASHREV1[2]          0.29      0.13     0.05     0.58       7704 1.00
moECOUNCASHREV1[3]          0.65      0.13     0.36     0.89       7704 1.00

Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample
is a crude measure of effective sample size, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
```
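For reference, the "graphs" I mentioned came from checks along these lines (a sketch; `summary()`, `plot()`, and `rhat()` are standard brms functions for a fitted model):

```r
# Diagnostics on the fitted model object:
summary(ecouncermodel)  # produces the output shown above
plot(ecouncermodel)     # trace and density plots; the group-level sd parameters mix poorly
rhat(ecouncermodel)     # per-parameter Rhat values
```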

I was wondering if you could give me advice on how to:

- Run my models more efficiently (i.e. reduce the time they take to run).
- Solve the problem with the random effects.
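In case it helps to know what I have already considered: one option might be the stronger priors that brms suggested, applied to the group-level standard deviations, since those are the parameters with low Eff.Sample (a sketch; I am not sure this prior is appropriate for my data):

```r
# prior1 as defined above, plus a tighter prior on the group-level sds
# (assumption: a half-normal(0, 1) scale is reasonable for these sd parameters)
prior2 <- prior1 +
  prior(normal(0, 1), class = "sd")
```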

Thank you very much.

Best wishes,

Ángel V. Jiménez