Violation of proportional odds assumption

Hi everyone,

I am not a statistician and am not sure how to proceed with my analysis. I wanted to run an ordinal model in brms (cumulative(probit)) on data from a 7-point Likert scale. I have two factors (subject shift and group) and two random terms. My main focus is to see whether one of the factors (subject shift) has a negative effect or not (if it has a positive effect, it does not matter). However, the proportional odds assumption is violated. To test for this violation with a different method, I ran an adjacent-category model using acat(probit). Here, I find that my factor of interest behaves differently for one threshold.
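For reference, the cumulative model I first ran looked roughly like this (a sketch; priors and control arguments omitted, and as far as I know brms does not allow cs() terms with the cumulative family):

library(brms)
library(dplyr)

# Roughly the cumulative(probit) model described above,
# with both factors and both random intercepts
fit_cum <- brm(
  answer ~ subject_shift + diagnosis + (1 | subj_uid) + (1 | item),
  data   = df_all_subjects %>% filter(type %in% c("c", "d")),
  family = cumulative("probit")
)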
This is the output of the adjacent-category (acat) model:

 Family: acat 
  Links: mu = probit; disc = identity 
Formula: answer ~ cs(subject_shift) + cs(diagnosis) + (1 | subj_uid) + (1 | item) 
   Data: df_all_subjects %>% filter(type %in% c("c", "d")) (Number of observations: 1066) 
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup samples = 4000

Group-Level Effects: 
~item (Number of levels: 24) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.10      0.02     0.07     0.15 1.00     1452     2139

~subj_uid (Number of levels: 90) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.14      0.02     0.10     0.18 1.00     1426     2376

Population-Level Effects: 
                  Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept[1]         -0.21      0.10    -0.40    -0.02 1.00     3536     2811
Intercept[2]          0.09      0.10    -0.11     0.29 1.00     2886     3034
Intercept[3]         -0.13      0.11    -0.35     0.07 1.00     2426     2972
Intercept[4]         -0.33      0.09    -0.51    -0.15 1.00     2554     2594
Intercept[5]         -0.29      0.07    -0.43    -0.15 1.00     2771     2703
Intercept[6]         -0.27      0.06    -0.39    -0.15 1.00     2742     3024
subject_shift1[1]    -0.36      0.18    -0.72     0.00 1.00     3392     2892
subject_shift1[2]     0.49      0.19     0.11     0.88 1.00     2876     2951
subject_shift1[3]    -0.01      0.21    -0.41     0.40 1.00     2741     2768
subject_shift1[4]    -0.08      0.18    -0.42     0.27 1.00     2529     3220
subject_shift1[5]    -0.24      0.14    -0.51     0.02 1.00     2692     2797
subject_shift1[6]    -0.10      0.11    -0.31     0.11 1.00     3328     2767
diagnosis1[1]        -0.58      0.19    -0.95    -0.21 1.00     3724     2875
diagnosis1[2]         0.15      0.19    -0.22     0.53 1.00     3060     3040
diagnosis1[3]        -0.17      0.20    -0.56     0.21 1.00     2775     3013
diagnosis1[4]         0.00      0.17    -0.33     0.34 1.00     2759     2817
diagnosis1[5]         0.03      0.14    -0.24     0.30 1.00     3272     3145
diagnosis1[6]        -0.01      0.11    -0.23     0.20 1.00     4171     3363

Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 
is a crude measure of effective sample size, and Rhat is the potential 
scale reduction factor on split chains (at convergence, Rhat = 1).

If I read the output correctly, subject shift has a statistically robust positive effect on category 3 as opposed to category 2. I am not sure how to proceed from here. Does it make sense to report this effect and run the ordinal model nevertheless, because the positive effect is not interesting to me? It might still influence the model's output. Or is there a way to run a partial proportional odds model in brms?

All the best and thanks for your help,
Juliane


First, sorry we took so long to get to your question, despite it being relevant.

I usually find it more interesting to check whether or not the model has problems modelling some aspects of the data. This can be done with posterior predictive checks (which brms has good support for; see Posterior Predictive Checks for brmsfit Objects — pp_check.brmsfit • brms). The bars_grouped and stat_grouped checks (with median, max, or min as the statistic) might let you explore how well certain subgroups of your data are modelled.
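For example (just a sketch - here fit stands for your fitted brmsfit object, and the grouping variable can be any factor in your data):

# Observed vs. predicted category counts, split by a grouping factor
pp_check(fit, type = "bars_grouped", group = "subject_shift")

# Distribution of a summary statistic (here the median) per subgroup
pp_check(fit, type = "stat_grouped", group = "diagnosis", stat = "median")

If the observed counts or statistics for some subgroup fall outside the predictive intervals, that points you to where the model fails.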

I think this is a bit misleading - while you can say that there is good evidence for a positive effect at the second threshold (subject_shift1[2]), provided your model is a valid approximation to the true data-generating process, the posterior intervals for all the categories are quite wide. For example, given that the 95% CI for subject_shift1[3] is [-0.41, 0.40], you cannot rule out quite a big effect in either direction.

I don’t think I understand your concern here - do you plan other uses of the model than inferring the effects of subject_shift?

There are a lot of details that are hard to discuss without understanding what the data actually represent and what your scientific (as opposed to statistical) question is.

Also, one thing I worry about: while you include cs(subject_shift), shouldn’t you also include subject_shift by itself? I.e., if subject_shift induced an overall trend across all categories, your model wouldn’t capture it, right? Similarly for diagnosis. I am not sure about this - I’ve never run models with category-specific predictors and I don’t claim to understand them well.

There is also a dedicated Bayesian proportional-odds model implementation in the rmsb package; see http://hbiostat.org/R/rms/blrm.html#proportional-odds-and-binary-logistic-regression (I’ve never used it myself, but you may find it useful).
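An untested sketch of how I understand the interface from that page (the ppo formula marks the predictors that are allowed to have non-proportional effects):

library(rmsb)

# Untested sketch: partial proportional odds, letting subject_shift
# depart from the proportional-odds assumption
fit_ppo <- blrm(answer ~ subject_shift + diagnosis,
                ppo = ~ subject_shift,
                data = df_all_subjects)

Note that I haven’t checked how (or whether) blrm handles crossed random effects like your subj_uid and item terms.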

Is it possible to run a partial proportional odds model in brms?

I have not been involved with this analysis for some time and can unfortunately not answer your question.
