Inconsistencies brms vs ordinal w/same link?

Hi all,
I am (happily) running an ordinal regression model with brms, but I noticed some inconsistencies when comparing the results with the same model fitted with the package "ordinal". Specifically, the results of the brms model fitted with link = "probit" appear broadly in line with the results of the "ordinal" model, but only if the latter is fitted with link = "logit". In short, there does not seem to be a 1:1 correspondence between the results of the models fitted with the same link function.
What am I missing?
Best,
Luca

frequentist logit-link ordinal model

library(ordinal)  # provides clmm() and clm()

clmm.logit <- clmm(contrast ~ (scale(horn) + scale(mass))*Sex + scale(age) + scale(julian.date) + (1|area),
                   link = "logit",
                   threshold = "equidistant",
                   data = data.f)

frequentist probit-link ordinal model

clmm.probit <- clmm(contrast ~ (scale(horn) + scale(mass))*Sex + scale(age) + scale(julian.date) + (1|area),
                    link = "probit",
                    threshold = "equidistant",
                    data = data.f)

Bayesian logit-link ordinal model

library(brms)  # provides brm() and cumulative()

brm.logit <- brm(contrast ~ (scale(horn) + scale(mass))*Sex + scale(age) + scale(julian.date) + (1|area),
                 family = cumulative(link = "logit", threshold = "equidistant"),
                 warmup = 1000, iter = 5000, chains = 4,
                 data = data.f)

Bayesian probit-link ordinal model

brm.probit <- brm(contrast ~ (scale(horn) + scale(mass))*Sex + scale(age) + scale(julian.date) + (1|area),
                  family = cumulative(link = "probit", threshold = "equidistant"),
                  warmup = 1000, iter = 5000, chains = 4,
                  data = data.f)

brm.logit
Population-Level Effects:
                     Estimate Est.Error l-95% CI u-95% CI
Intercept[1]           -11.33      2.99   -18.24    -6.64
Intercept[2]            -8.00      2.22   -13.16    -4.52
Intercept[3]            -4.67      1.51    -8.16    -2.32
Intercept[4]            -1.34      0.97    -3.54     0.29
scalehorn                0.08      0.64    -1.15     1.36
scalemass                2.72      1.21     0.69     5.45
SexMales                -2.87      1.35    -5.96    -0.67
scaleage                -1.44      0.55    -2.67    -0.52
scalejulian.date         0.77      0.41     0.08     1.69
scalehorn:SexMales       1.23      1.05    -0.64     3.51
scalemass:SexMales      -1.77      1.26    -4.51     0.46

brm.probit

Population-Level Effects:
                     Estimate Est.Error l-95% CI u-95% CI
Intercept[1]            -6.58      1.57   -10.00    -3.90
Intercept[2]            -4.64      1.17    -7.18    -2.66
Intercept[3]            -2.70      0.80    -4.46    -1.34
Intercept[4]            -0.77      0.53    -1.91     0.17
scalehorn                0.05      0.37    -0.66     0.79
scalemass                1.58      0.65     0.42     2.98
SexMales                -1.67      0.74    -3.26    -0.38
scaleage                -0.85      0.30    -1.49    -0.31
scalejulian.date         0.46      0.23     0.05     0.97
scalehorn:SexMales       0.73      0.61    -0.39     2.02
scalemass:SexMales      -1.02      0.68    -2.45     0.24

car::Confint(clmm.logit)

                      Estimate   2.5 %  97.5 %
threshold.1              -6.87  -9.343  -4.403
spacing                   2.10   1.409   2.792
scale(horn)              -0.07  -0.915   0.774
scale(mass)               1.51   0.263   2.764
SexMales                 -1.44  -2.852  -0.029
scale(age)               -0.75  -1.342  -0.158
scale(julian.date)        0.41  -0.066   0.878
scale(horn):SexMales      0.62  -0.639   1.886
scale(mass):SexMales     -0.86  -2.327   0.608

car::Confint(clmm.probit)

                      Estimate   2.5 %  97.5 %
threshold.1             -4.235  -6.191  -2.278
spacing                  1.285   0.731   1.838
scale(horn)             -0.008  -0.532   0.516
scale(mass)              0.946   0.108   1.784
SexMales                -0.892  -1.875   0.091
scale(age)              -0.494  -0.910  -0.078
scale(julian.date)       0.266  -0.045   0.577
scale(horn):SexMales     0.383  -0.438   1.203
scale(mass):SexMales    -0.551  -1.499   0.398
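
For anyone lining these up: a quick sketch (assuming the four fitted objects above) to put the point estimates side by side. Note that brms labels the thresholds Intercept[k], while ordinal with threshold = "equidistant" reports threshold.1 plus spacing.

# Sketch: compare point estimates across the four fits. fixef() is the
# brms extractor; coef() on a clmm returns thresholds plus fixed
# effects in one named vector.
round(cbind(brms_logit  = fixef(brm.logit)[, "Estimate"],
            brms_probit = fixef(brm.probit)[, "Estimate"]), 2)
round(cbind(clmm_logit  = coef(clmm.logit),
            clmm_probit = coef(clmm.probit)), 2)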


The model is quite complex, which may explain some of the differences. I recommend starting with a simpler model and first checking whether the results are similar there. Of course, there could be a problem in one of the two packages (or in both), but I would need a simpler example to tell.
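
For instance, something like this (just a sketch; the reduced predictor set is arbitrary and only meant for the comparison):

# Sketch of a stripped-down comparison: same link, no interactions,
# no random effect. clm() is the fixed-effects counterpart of clmm().
simple.clm <- clm(contrast ~ scale(mass) + scale(age),
                  link = "probit", threshold = "equidistant",
                  data = data.f)
simple.brm <- brm(contrast ~ scale(mass) + scale(age),
                  family = cumulative(link = "probit", threshold = "equidistant"),
                  warmup = 1000, iter = 5000, chains = 4,
                  data = data.f)
summary(simple.clm)
summary(simple.brm)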


Hi Paul, brilliant, thanks :-)
After some inspection, I figured out that the grouping factor (1|area) was what was messing things up… When I remove it, the model results are consistent between "brms" and "ordinal" under the same link function (see the sketch below).
Now the question is to what extent the model with the grouping factor can be trusted (the brms estimates look fine to me)…
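
For completeness, this is roughly what I did to drop the grouping factor (a sketch; brm.probit.noRE and clm.probit are illustrative names, and update() recompiles the brms model):

# Refit the probit models without (1|area): update() with formula.
# drops the term from the existing brms fit; on the frequentist side,
# clm() replaces clmm() once no random term remains.
brm.probit.noRE <- update(brm.probit, formula. = ~ . - (1 | area))
clm.probit <- clm(contrast ~ (scale(horn) + scale(mass))*Sex + scale(age) + scale(julian.date),
                  link = "probit", threshold = "equidistant",
                  data = data.f)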
Cheers,
L.


I would generally tend to trust the Bayesian MCMC results more in multilevel models unless there is some indication of non-convergence.
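
For example (a sketch; recent brms versions print these diagnostics in the summary):

# Rhat near 1 and large Bulk_ESS / Tail_ESS indicate good mixing;
# divergent-transition warnings would have appeared during sampling.
summary(brm.probit)   # per-parameter Rhat and effective sample sizes
plot(brm.probit)      # trace and density plots for visual inspection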


Agreed! Thanks!
