Interpreting beta estimates from ordinal brms

Dear community,

I am completely new to Bayesian stats and have read through several tutorials and a few posts here on the forum, but I cannot find a conclusive answer on how to get standardized effect sizes from an ordinal brms regression. I followed the tutorial by Bürkner & Vuorre, but it does not describe how to obtain a standardized effect size either…

model parameters:
Rating = ordered 7-point Likert scale
condition = visual or imagery
stimulus_type = art or face
a random effect for each participant = ID
a random effect for each stimulus = image_id

(I arrived at this model by fitting several candidate models beforehand and comparing them with loo; this model was the winner. A rough sketch of that comparison is included after the model call below.)

bay_moving_cond_stim_id_id <- brm(
  Rating ~ Condition + stimulus_type + (1 | ID) + (1 | image_id),
  data = moving_long,
  family = cumulative("probit", threshold = "flexible"),
  chains = 5, iter = 3000, warmup = 1000, cores = 10
)
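For reference, the loo comparison mentioned above looked roughly like this (the second model name is just a placeholder for one of the simpler candidate models I fitted):

# add_criterion() computes and stores the PSIS-LOO criterion; loo_compare() ranks the models
bay_moving_cond_stim_id_id <- add_criterion(bay_moving_cond_stim_id_id, "loo")
bay_moving_cond_only       <- add_criterion(bay_moving_cond_only, "loo")
loo_compare(bay_moving_cond_stim_id_id, bay_moving_cond_only)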

The result:

 Family: cumulative 
  Links: mu = probit; disc = identity 
Formula: Rating ~ Condition + stimulus_type + (1 | ID) + (1 | image_id) 
   Data: moving_long (Number of observations: 2553) 
  Draws: 5 chains, each with iter = 3000; warmup = 1000; thin = 1;
         total post-warmup draws = 10000

Group-Level Effects: 
~ID (Number of levels: 34) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.60      0.08     0.47     0.79 1.00     1937     3233

~image_id (Number of levels: 40) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.31      0.04     0.23     0.40 1.00     3297     4063

Population-Level Effects: 
                               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept[1]                      -2.26      0.14    -2.53    -1.99 1.00     1598     3043
Intercept[2]                      -1.33      0.13    -1.59    -1.06 1.00     1504     2881
Intercept[3]                      -0.60      0.13    -0.85    -0.34 1.00     1509     2642
Intercept[4]                      -0.14      0.13    -0.39     0.12 1.00     1500     2833
Intercept[5]                       0.75      0.13     0.49     1.01 1.00     1510     2677
Intercept[6]                       1.79      0.14     1.52     2.07 1.00     1640     3412
Conditionimagery_moving_rating    -0.29      0.04    -0.37    -0.20 1.00    15886     6802
stimulus_typeface                 -0.56      0.11    -0.77    -0.35 1.00     2844     4346

So I understand that a change from the visual to the imagery condition results in a decrease in the ratings. I also understand that, since the CI of the beta does not cross zero, ‘it should be a real effect’, right?
I’d say that, intuitively, from a frequentist perspective (if the coefficient were standardized) this would be a small effect… Is the same true in the Bayesian setting?
Also, do I need to compute a Bayes factor for this model (or something similar) to ‘convince’ a reviewer that the model shows there is an effect in the data (compared to H0)?

I really appreciate any help!

Thanks for reading this far and have a great day!

Cheers,
Max

The CI not crossing zero certainly indicates strong evidence that the effect is negative. You can estimate the exact posterior probability that the effect is negative using brms’s hypothesis() function:

hypothesis(bay_moving_cond_stim_id_id, "Conditionimagery_moving_rating < 0")

Looking at the CI, the posterior probability that the effect is < 0 will be very high for Conditionimagery_moving_rating, and that should be sufficiently convincing to a reasonable reviewer. In my opinion, Bayes factors are problematic for many reasons: they are hard to interpret and are heavily dependent on the priors used.
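If you prefer to compute that probability directly from the posterior draws rather than via hypothesis(), a minimal sketch (this assumes a reasonably recent brms that re-exports as_draws_df() from the posterior package; the parameter name is taken from your summary output):

draws <- as_draws_df(bay_moving_cond_stim_id_id)
# proportion of posterior draws in which the condition effect is below zero
mean(draws$b_Conditionimagery_moving_rating < 0)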

You can get an intuitive idea of the effect size by comparing it with the Intercept values. The intercepts are cutpoints on the underlying (latent) continuous standard-normal scale. Intercept[1] is the threshold between ratings of 1 and 2; Intercept[2] is the threshold between ratings 2 and 3; etc. So the width of rating 4 is -0.14 - (-0.60) = 0.46; the width of rating 5 is 0.75 - (-0.14) = 0.89. This implies that Conditionimagery_moving_rating will move about two thirds (≈ 0.29/0.46) of ratings of 4 down to 3, and about one third (≈ 0.29/0.89) of ratings of 5 down to 4. Whether or not that is a “small” effect depends on one’s perspective.
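To make that arithmetic concrete, here is a rough sketch that pulls the posterior means of the cutpoints and the condition effect with fixef() and reproduces the ratios above (the exact numbers will depend on your draws):

fe <- fixef(bay_moving_cond_stim_id_id)[, "Estimate"]

# widths of rating categories 4 and 5 on the latent scale
width_4 <- fe["Intercept[4]"] - fe["Intercept[3]"]  # about 0.46
width_5 <- fe["Intercept[5]"] - fe["Intercept[4]"]  # about 0.89

beta_cond <- fe["Conditionimagery_moving_rating"]   # about -0.29

abs(beta_cond) / width_4  # roughly 2/3 of the width of rating 4
abs(beta_cond) / width_5  # roughly 1/3 of the width of rating 5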

In addition to Andy’s responses, bear in mind that you have used the probit link. As a consequence, the \beta coefficients in the model are on the latent standardized normal scale, just like they would be in a frequentist model. Thus in my discipline (clinical psychology), your coefficient for Conditionimagery_moving_rating would be considered small, and your coefficient for stimulus_typeface would be medium. YMMV
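If it helps to see what such a latent-scale shift means on the observed rating scale, something like this should work (with categorical = TRUE, brms plots the predicted probability of each rating category by condition):

# predicted probability of each of the 7 rating categories, split by Condition
conditional_effects(bay_moving_cond_stim_id_id, effects = "Condition",
                    categorical = TRUE)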

Thank you so much for your quick responses!

@andymilne thank you for explaining in a bit more detail how the intercepts interact with the beta coefficients!

@Solomon I am also in the field of psychology, so I share your interpretation, thanks!

My problem was that I did not understand that the underlying latent scale is standardized, which is what makes the beta coefficients standardized (if I understood both of you correctly).

Yes, with a probit link, the scale of the predicted values is standard normal, by definition. So, for predictors that are dummy-coded factors or continuous variables that are standardized, the effect sizes are standardized.

Andy makes a good clarification. The probit link puts the DV on the latent z-score scale, but you still have to account for the metrics of the predictor variables. For example, if you have a single standardized predictor, its \beta coefficient will be in a correlation metric with the DV on the latent z-score scale. But if the predictor is not standardized, it’s just a \beta coefficient on some other, possibly unhelpful, metric.
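For example, if you had a continuous predictor (say a hypothetical age column in moving_long), you could standardize it before fitting so that its coefficient comes out on that correlation-like metric:

# scale() centres the predictor and divides by its SD; as.numeric() drops the matrix class
moving_long$age_z <- as.numeric(scale(moving_long$age))

fit_z <- brm(
  Rating ~ Condition + stimulus_type + age_z + (1 | ID) + (1 | image_id),
  data = moving_long,
  family = cumulative("probit", threshold = "flexible")
)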