Thanks to a recent tutorial from @paul.buerkner and Matti Vuorre, I’m realizing that it should be possible to fit ordinal models in brms for an analysis I’m assisting with, which involves multiple questions per informant (similar to the discussion on pages 19–20 of the tutorial).
In essence, participants are asked to respond to a series of questions about funding priorities for 10 budget items. So for question q, person i provides a response on a 5-point Likert scale. We hypothesize that these responses vary as a function of an i-level attribute denoted x.
Furthermore, the funding priorities can be broken down into two categories: military and educational. So the dataset looks roughly like this (showing responses for 2 of the n informants).
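To make the layout concrete, here is a hypothetical sketch of the long-format data (the column names `i`, `q`, `type`, `x`, and `y`, and all values, are illustrative placeholders, not the actual data):

```r
# Hypothetical long-format layout: one row per informant-question pair.
d <- data.frame(
  i    = rep(1:2, each = 4),                     # informant ID
  q    = rep(c("q1", "q2", "q6", "q7"), 2),      # budget item
  type = rep(c("military", "military",
               "educational", "educational"), 2),# question category
  x    = rep(c(0.4, -1.1), each = 4),            # i-level attribute (constant within i)
  y    = c(2, 3, 5, 4, 4, 5, 1, 2)               # 1-5 Likert response
)
```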
Now imagine that the researcher has two hypotheses:
- The attribute, x, predicts the emphasis that the individual gives to the different types of funding.
- Relatedly, within individuals, people who rate educational items more highly will rate military items lower even after accounting for variation explained by x.
The first question seems like it can be profitably investigated with a brm model:
brm(y ~ x * type + (1 | i) + (1 | q), data = d, family = cumulative("probit"))
To examine the second hypothesis, my intuition is to consider a random-coefficients model:
brm(y ~ x * type + (1 + type | i) + (1 | q), data = d, family = cumulative("probit"))
And then look at the correlation between the i-level intercepts and type slopes for insight into how, within individuals, ratings of the two question types covary.
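As a sketch of how that correlation could be inspected (assuming the fitted object is called `fit` and `type` is a two-level factor, so `(1 + type | i)` yields an informant-level intercept and one type slope; the exact parameter name depends on the factor coding, e.g. `typemilitary`):

```r
# Group-level SDs and the intercept-slope correlation appear in the
# "Group-Level Effects" block of the model summary.
summary(fit)

# Covariance/correlation matrices for the i and q grouping factors.
VarCorr(fit)

# Full posterior draws of the correlation parameter itself
# (name assumed; check variables(fit) for the actual label).
as_draws_df(fit, variable = "cor_i__Intercept__typemilitary")
```

A negative posterior for that correlation would be consistent with the second hypothesis: informants whose educational items sit higher on the latent scale tend to place military items lower, after accounting for x.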
Are there other modeling strategies that I should be considering, though?