Question about family = cumulative in brms

Dear all,

I am trying to understand what priors to build into a model that looks like this:

model <- brm(Response ~ Group * Condition + (1 + Condition | Participant) + (1 | Item),
             data = data,
             family = cumulative(link = "probit", threshold = "flexible"))

The response is an ordered categorical variable (a Likert scale), and the design is 2 x 2, with Condition as a within-participants factor.

The model should follow cumulative threshold regression (see attached paper with data), where a similar model, but with just one factor and no random slopes, is used (p. 847). The article also maintains that the estimates yielded (see table below) are in SD units and can be interpreted as standardized effect sizes, similar to Cohen's d. What units are these estimates expressed in? Are they probits in brms?

My real question, which is mainly due to my lack of experience with ordinal regression and Bayesian methods, is how you would recommend setting the means and SDs of the priors for the various coefficients in my model. Should I transform backwards from probits and select a Gaussian distribution?

I don't quite know what the equation would look like either, but I am assuming there is no residual error, just coefficients for the intercept, the two factors, the interaction, and the two random effects.
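For what it's worth, here is a sketch of what I think the equation might be, under the standard cumulative probit parameterization (where the latent residual SD is fixed at 1, so there is no separate residual term to estimate):

$$P(Y_{ij} \le k) = \Phi(\tau_k - \eta_{ij}), \qquad \eta_{ij} = \beta_1\,\mathrm{Group}_i + \beta_2\,\mathrm{Condition}_{ij} + \beta_3\,(\mathrm{Group}\times\mathrm{Condition})_{ij} + u_{0i} + u_{1i}\,\mathrm{Condition}_{ij} + w_j$$

where $\tau_1 < \dots < \tau_6$ are the thresholds (the "Intercepts" in brms), $u_{0i}$ and $u_{1i}$ are the participant-level intercept and Condition slope, and $w_j$ is the item-level intercept.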

Thank you ever so much in advance if you have time to respond. Any help is greatly appreciated.

acceptability-ratings.csv (108.3 KB)
analysis-of-rating-scales-a-pervasive-problem-in-bilingualism-research-and-a-solution-with-bayesian-ordinal-models.pdf (477.5 KB)

Hey,

I'm no expert at brms and Stan, but I've been using ordinal models for a couple of months on a project. I've just been using normal(0, 1) priors for class b and the Intercept, and that has been working fine for me. I also added a prior for the SD of my group-level (random) effect, but I think it's quite strong (possible values are 0-2), so you might want to ignore that one.

And yes, I treat the estimates and standard errors as standardized effect sizes.

# Weakly informative priors, plus a prior on the group-level (random-effect) SD
normal_priors <- c(prior(normal(0, 1), class = "Intercept"),
                   prior(normal(0, 1), class = "b"),
                   prior(gamma(2, 1), class = "sd"))
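If it helps, here is a minimal sketch of how priors like these could be passed to the model in your original post (assuming your data frame is called data):

# Hypothetical fit: the formula from the original post plus the priors above
fit <- brm(Response ~ Group * Condition + (1 + Condition | Participant) + (1 | Item),
           data = data,
           family = cumulative(link = "probit", threshold = "flexible"),
           prior = normal_priors)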

Thanks, Syrph.

Could you type out your brm call and maybe describe your data a bit more, please? I need to understand your data structure a bit better: if I don't know what units the estimates are on, I can't set priors unless they're non-informative.

Francesco

You might find Solomon Kurz’s detailed blog post on the cumulative probit model helpful (and it’s all done with brms). He starts with a thresholds-only model and works up to various multilevel models. He includes discussion of how to set priors for the intercept thresholds and the various other parameters. Every example includes code to fit and evaluate the model.
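One trick from that post, if I recall it correctly: for a cumulative probit model, you can center the threshold priors on the standard-normal quantiles that split the latent scale into equal-probability categories. A sketch for a 7-point scale (the means come from qnorm((1:6)/7); the SD of 1 is just a weakly informative choice):

# Threshold ("Intercept") priors implying roughly equal prior
# probability for each of the 7 response categories
threshold_priors <- c(prior(normal(-1.07, 1), class = "Intercept", coef = "1"),
                      prior(normal(-0.57, 1), class = "Intercept", coef = "2"),
                      prior(normal(-0.18, 1), class = "Intercept", coef = "3"),
                      prior(normal( 0.18, 1), class = "Intercept", coef = "4"),
                      prior(normal( 0.57, 1), class = "Intercept", coef = "5"),
                      prior(normal( 1.07, 1), class = "Intercept", coef = "6"))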

Dear Joel,

This was very helpful indeed. I now realise the coefficients are expressed as probits, i.e. in SD units of the latent variable. My understanding of the priors is much better, and I should be able to adapt the 6-level scale discussed by Kurz to my 7-level scale. My only reservation now is that although his brm calls include family = cumulative with a probit link, which is consistent with the approach in Verissimo, he doesn't specify threshold = "flexible", which ensures that the thresholds are not assumed to be equidistant, in contrast to threshold = "equidistant", which restricts the distance between consecutive thresholds to the same value.

Francesco

Glad that was helpful. threshold = "flexible" is the default for the cumulative family, so Kurz's models are using flexible thresholds. You can see each family's defaults by running ?brmsfamily in the console.
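To make that concrete, the first two calls below should be equivalent, while the third imposes the equidistant restriction:

# "flexible" is the default threshold type, so these are equivalent
cumulative(link = "probit")
cumulative(link = "probit", threshold = "flexible")
# forces equal spacing between consecutive thresholds
cumulative(link = "probit", threshold = "equidistant")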