Categorical interactions - post-hoc tests

As my name suggests, I am new to the brms package.

My model looks like this:
y ~ A * B + (1|subject)

The A variable has levels a1 and a2 and the B variable has levels b1, b2, b3, and b4.

If I were to use the lme4 package, I would fit the model with lmer() and then run anova(model) to see that the interaction A:B is significant. Then I would use the emmeans() function to explore this relationship further, which would show me that, for example, the contrast a1:b4 > a2:b4 is significant.
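For concreteness, the lme4 workflow I have in mind looks roughly like this (with `dat` standing in for my data frame):

```r
library(lme4)
library(lmerTest)  # so anova() on the lmer fit reports p-values
library(emmeans)

fit <- lmer(y ~ A * B + (1 | subject), data = dat)
anova(fit)  # omnibus tests for A, B, and the A:B interaction

emm <- emmeans(fit, ~ A * B)  # estimated marginal means for each cell
pairs(emm)                    # all pairwise contrasts, e.g. a1 b4 - a2 b4
```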

As far as I know, if I fit my model with brms, I won’t be able to use anova() to get the significance of the overall interaction A:B. Of course, I can use emmeans directly and test the contrast a1:b4 > a2:b4, but I feel this decision should be motivated by first seeing a significant interaction, which I can’t get here. And since a1 and b1 are the baseline levels, the summary() function only provides comparisons against a1:b1…

What would you do in this situation?

Hey @NewToBrms, welcome to the forums! I’m not a big emmeans user, but I believe it is available for at least some brm() models. As an alternative, I do know the marginaleffects package also supports a variety of brm() models.
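As a rough sketch (assuming your data live in a data frame called `dat`), the emmeans calls should look much like they do with lmer():

```r
library(brms)
library(emmeans)

fit <- brm(y ~ A * B + (1 | subject), data = dat)

# emmeans can work directly on many brmsfit objects;
# contrasts are then summarized from the posterior draws
emm <- emmeans(fit, ~ A * B)
pairs(emm)  # posterior point estimates and credible intervals per contrast

# the marginaleffects alternative, e.g. the effect of A within each level of B:
# marginaleffects::avg_comparisons(fit, variables = "A", by = "B")
```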

You are right, the anova() approach won’t work with brm() models. If it were me, I would just skip that step and go straight from fitting the model to looking at the contrasts of interest. In my field (clinical psychology) we might call this an effect-size approach, as opposed to a null-hypothesis-testing approach.


I agree with @Solomon and would recommend moving directly to the comparisons of interest. In traditional null hypothesis significance testing (NHST), the omnibus test is supposed to help control the type I error rate, usually in combination with some p-value correction.
In a Bayesian analysis, I recommend not focusing on the idea of a null hypothesis, but instead describing the posteriors for the parameters/comparisons of interest. This is closer to the idea of planned contrasts in a traditional ANOVA.
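For example, with brms’s default dummy coding, the a1:b4 vs. a2:b4 comparison can be described directly from the posterior draws. The coefficient names below assume a1 and b1 are the reference levels; check the actual names in your fit with `variables(fit)`:

```r
library(brms)
library(posterior)

draws <- as_draws_df(fit)

# a2:b4 minus a1:b4 on the linear-predictor scale, per posterior draw
diff <- draws$b_Aa2 + draws$`b_Aa2:Bb4`

quantile(diff, c(0.025, 0.5, 0.975))  # posterior median and 95% interval
mean(diff > 0)                        # posterior probability the contrast is positive
```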

If you still want something like an omnibus test, you could fit two models, with and without the interaction, and do a model comparison with LOO:

y ~ A + B + (1|subject)
y ~ A * B + (1|subject)
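Concretely, that comparison would look something like this (again assuming your data frame is called `dat`):

```r
library(brms)

fit_add <- brm(y ~ A + B + (1 | subject), data = dat)
fit_int <- brm(y ~ A * B + (1 | subject), data = dat)

# compute and store PSIS-LOO for each model
fit_add <- add_criterion(fit_add, "loo")
fit_int <- add_criterion(fit_int, "loo")

loo_compare(fit_add, fit_int)  # higher elpd = better expected predictive fit
```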