Centering ordinal monotonic predictors in cumulative ordinal regression using "brms" R package

I would like to use the brms R package to do a cumulative ordinal regression with an ordinal DV (see Bürkner & Vuorre, 2019) and two ordinal predictors (IV and moderator) and their interaction. Per Bürkner and Charpentier (2020), I plan to use the following code to treat the ordinal predictors as monotonic.

brm(dv ~ mo(iv)*mo(mod), data = dat, family = cumulative())

All three variables are single Likert items. If I were treating these variables as continuous in linear regression, I’d center the predictors at their means so the lower-order terms for the IV and moderator could be interpreted as main effects (e.g., effect of IV on DV at mean level of moderator) rather than conditional effects (e.g., effect of IV on DV at moderator value of 0).
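To make the continuous case concrete, here is a minimal sketch with simulated data (variable names and coefficients are illustrative, not from any real dataset):

```r
# Hedged sketch of the continuous-predictor centering described above.
# Data and coefficients are simulated purely for illustration.
set.seed(42)
dat <- data.frame(iv = rnorm(200, mean = 5), mod = rnorm(200, mean = 3))
dat$dv <- 1 + 0.5 * dat$iv + 0.3 * dat$mod + 0.2 * dat$iv * dat$mod + rnorm(200)

# Center the predictors so lower-order terms are effects at the mean of
# the other predictor rather than at 0
dat$iv_c  <- dat$iv  - mean(dat$iv)
dat$mod_c <- dat$mod - mean(dat$mod)
fit_c <- lm(dv ~ iv_c * mod_c, data = dat)
coef(fit_c)["iv_c"]  # effect of iv at the mean level of mod
```

With the true data-generating values above, the `iv_c` coefficient should land near 0.5 + 0.2 × mean(mod).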

When treating the predictors as ordinal and monotonic, do I similarly need to center the predictors to interpret their lower-order terms as main effects? If so, given that the predictors are ordinal, would I center them at their medians, or do something else? Does the centering approach depend on whether I’m doing an ordinal logistic regression (as specified above) or ordinal probit regression? Thanks!


I recommend against using centering for that purpose. Who's to say whether you should center on the mean vs. the mode or median? I prefer to use whatever coding I want and to get specific contrasts of interest after model fitting. For "main effects" you can construct, for example, either a contrast at a specific level of an interacting factor, or a contrast that averages over the levels of an interacting factor.
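For instance, such an averaged contrast is just a linear combination computed per posterior draw. A sketch with simulated draws (the coefficient names and values here are hypothetical, not brms's actual parameter names; in practice the draws would come from the fitted model):

```r
# Hedged sketch: a "main effect" contrast for x built by averaging the
# conditional effect of x over the levels of an interacting moderator z.
# Posterior draws are simulated for illustration only.
set.seed(1)
draws <- data.frame(
  b_x  = rnorm(4000, 0.5, 0.1),  # hypothetical slope of x
  b_xz = rnorm(4000, 0.2, 0.1)   # hypothetical interaction coefficient
)
z_levels <- 0:4                  # moderator coded 0..D as with mo()
# conditional effect of x at each level of z, then average over levels
cond_eff <- sapply(z_levels, function(z) draws$b_x + draws$b_xz * z)
avg_eff  <- rowMeans(cond_eff)
quantile(avg_eff, c(0.025, 0.5, 0.975))  # posterior summary of the contrast
```

The same per-draw arithmetic gives a contrast at a single fixed level of z by dropping the averaging step.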


Thanks! Looking back at the Monotonic Effects section of Bürkner and Charpentier (2020), it seems that centering prior to model fitting makes no difference when both predictors are treated as monotonic, because the monotonic transform recodes each predictor’s values onto a 0-to-D scale, where D is the total number of predictor categories minus 1 (p. 424).
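If I understand the paper correctly, the transform is mo(x) = D × Σ_{i≤x} ζ_i, where ζ is a simplex of length D, so any constant shift of the raw codes is absorbed. A small sketch of that transform (simplex values are made up for illustration):

```r
# Hedged illustration of the monotonic transform mo(x) = D * cumsum(zeta)[x],
# with zeta a simplex of length D (cf. Bürkner & Charpentier, 2020, p. 424)
mo <- function(x, zeta) {
  D <- length(zeta)
  ifelse(x == 0, 0, D * cumsum(zeta)[pmax(x, 1)])
}
zeta <- c(0.5, 0.3, 0.2)  # example simplex, D = 3
mo(0:3, zeta)             # 0.0 1.5 2.4 3.0
```

Note that with an equal-weight simplex (ζ_i = 1/D for all i), mo(x) reduces to x itself, i.e., the linear coding.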

To test a contrast that averages over levels of an interacting predictor (say, at the median of moderator z), would I use hypothesis() to test the main effect of IV x at the moderator’s median, where the median is expressed on the transformed scale ranging from 0 to D? (For example, if the median is the third predictor category, then z_median below would be 2.)

# Fit ordinal logistic regression model with ordinal, monotonic predictors
fit <- brm(y ~ mo(x)*mo(z), data = dat, family = cumulative())

# In general, where "z_median" is the moderator's median on the 0-to-D scale
hypothesis(fit, "bsp_mox + bsp_mox:moz * z_median = 0", class = NULL)

# For example, when "z_median" is computed to be 2 on the 0-to-D scale
hypothesis(fit, "bsp_mox + bsp_mox:moz * 2 = 0", class = NULL)

I don’t know the answer to that, but I usually prefer approaches that let the user specify values on the original scale. Example: for a “good, better, best” predictor with a median or mode of “better”, I’d want to specify “better”.


Thanks again! My thought to use hypothesis() in this way to test the main effect of IV x in the presence of its interaction with moderator z is based on the idea that, in linear regression with two interacting, continuous predictors, the marginal effect of x is the partial derivative with respect to x (see Section 2.1 here), which we can then evaluate at the median of moderator z.

y = b_0 + b_1 x + b_2 z + b_3 xz
\frac{\partial y}{\partial x} = b_1 + b_3 z

When evaluating the partial derivative at moderator z’s median (continuing my example, let’s say the median is z’s third predictor category, which is coded 2 on the 0-to-D scale), we would plug in 2 for z.

\frac{\partial y}{\partial x} \Big\vert_{z=2} = b_1 + 2 b_3

@paul.buerkner @Emmanuel_Charpentier @matti, I’d love any input you might have on this in the context of two interacting ordinal, monotonic predictors in an ordinal logistic regression (where DV y is also ordinal), which combines elements of Bürkner and Vuorre (2019) and Bürkner and Charpentier (2020). Thanks!

The effects used by most practitioners are more like finite differences than derivatives. Finite differences are simple contrasts such as differences or double differences.
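A simple difference and a double difference in this setting might look like the following (coefficients and the linear-predictor function are hypothetical, for illustration only):

```r
# Hedged sketch: finite differences as simple contrasts of a linear predictor.
# eta(x, z) is a made-up linear predictor with made-up coefficients.
eta <- function(x, z, b = c(x = 0.5, z = 0.3, xz = 0.2)) {
  unname(b["x"] * x + b["z"] * z + b["xz"] * x * z)
}
# simple difference: effect of moving x from 1 to 2, holding z = 2
d1 <- eta(2, 2) - eta(1, 2)                              # b_x + b_xz * 2
# double difference: how that x contrast changes when z moves from 2 to 3
dd <- (eta(2, 3) - eta(1, 3)) - (eta(2, 2) - eta(1, 2))  # b_xz
```

For the monotonic case, the same logic applies with eta evaluated at the chosen predictor categories, which sidesteps derivatives entirely.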
