Summary:

I’m trying to use an ordinal regression approach with multiple Likert items to evaluate which model has the best predictors of the likelihood to retaliate against jaguars for livestock losses (Likert response variable, 1–7). I have five competing hypotheses (personal norms, attitudes, etc.), each with 2–4 Likert predictors. I want to use model selection to find the top model and then report the parameter estimates for each of its Likert items, i.e., to see the effect of personal norms + attitudes on retaliation.

If I understand correctly from the Bürkner &amp; Vuorre (2019) paper on ordinal regression, multiple Likert items should be converted to long format and then modeled as response ~ 1 + (1 | likert_item).
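For context, here is a minimal sketch of the wide-to-long reshape I mean, using `tidyr::pivot_longer()`. The column names (`likert1`–`likert3`, `Name`, `reserve`, `jag_pop`) are just placeholders matching my models below, not my real data:

```r
library(tidyr)

# Toy wide data: one row per respondent, three hypothetical Likert items
wide <- data.frame(
  Name    = c("A", "B"),
  reserve = c("north", "south"),
  jag_pop = c(3, 5),
  likert1 = c(2, 6),
  likert2 = c(4, 7),
  likert3 = c(1, 3)
)

# Stack the item columns into item/response pairs:
# one row per respondent x item combination
long <- pivot_longer(
  wide,
  cols      = starts_with("likert"),
  names_to  = "item",
  values_to = "response"
)
# 'long' is now ready for a formula like:
# response ~ 1 + (1 | Name) + (1 | item)
```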

However, the five models have different numbers of predictors, and therefore different long-format datasets, so I can’t compare them through model selection with LOOIC or k-fold cross-validation. I also ran the models with each Likert item as a separate fixed-effect predictor, which does give parameter estimates, but I’m afraid that is not a valid approach.

I was wondering if anyone can confirm that multiple Likert items have to be modeled as in example #1 below, and whether there is any alternative for model selection if that’s the case. Any help would be much appreciated.

```r
# Example 1 -- how I understand multiple Likert items should be modeled
# (long format, with a varying intercept per item):
bay.jag1.1 <- brm(
  formula = jag_pop ~ 1 + (1 | Name) + reserve + (1 | likert_variable),
  data = bayes_data,
  family = cumulative("probit"),
  chains = 4,
  iter = 5000,
  prior = prior(normal(0, 5), class = Intercept),
  init = "0",
  save_pars = save_pars(all = TRUE)
)

# Example 2 -- each Likert item as a separate fixed-effect predictor;
# gives estimates, but maybe not valid?
bay.jag1.2 <- brm(
  formula = jag_pop ~ 1 + (1 | Name) + reserve + likert1 + likert2 + likert3,
  data = bayes_data,
  family = cumulative("probit"),
  chains = 4,
  iter = 5000,
  prior = prior(normal(0, 5), class = Intercept),
  init = "0",
  save_pars = save_pars(all = TRUE)
)
```
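For reference, this is the comparison I would like to run if the candidate models were comparable, i.e., fit to the same data and the same response. `fit1` and `fit2` are placeholder names for two such `brmsfit` objects, not my actual fits:

```r
# Sketch only: assumes fit1 and fit2 are brmsfit objects estimated
# on identical observations of the same response variable.
fit1 <- add_criterion(fit1, "loo")  # computes and stores PSIS-LOO
fit2 <- add_criterion(fit2, "loo")

# Rank the models by expected log predictive density (ELPD);
# the top row is the preferred model.
loo_compare(fit1, fit2, criterion = "loo")
```

My understanding is that this breaks down exactly when the long-format datasets differ between models, which is the core of my question.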

- Operating System: Windows 10
- R version 4.1.3
- brms Version: 2.19.0