Approach to using bayes_factor in brms

Hello. I’m new to Bayesian analysis and brms and trying hard to wrap my head around this.
Using brms, I’d like to confirm the null effects I found with lme4.

I would like to get Bayes factors for all possible main effects and interaction effects from the example model below (I have simplified the variable names and code for readability).

brm(dv ~ A * B * C + (1 | subject), family = bernoulli())

At first, I thought I could simply compare a model containing the predictor I’m interested in with another model that omits it, as below:

full <- brm(dv ~ A * B * C + (1 | subject), family = bernoulli(),
            save_pars = save_pars(all = TRUE)) # save_pars is needed for bayes_factor()
m1 <- brm(dv ~ A + B + C + A:B + A:C + B:C + (1 | subject),
          family = bernoulli(), save_pars = save_pars(all = TRUE)) # A:B:C removed from full
bf_ABC <- bayes_factor(full, m1) # BF for A:B:C

m2 <- brm(dv ~ A + B + C + A:C + B:C + (1 | subject),
          family = bernoulli(), save_pars = save_pars(all = TRUE)) # A:B removed from m1
bf_AB <- bayes_factor(m1, m2) # BF for A:B

m3 <- brm(dv ~ A + B + C + B:C + (1 | subject),
          family = bernoulli(), save_pars = save_pars(all = TRUE)) # A:C removed from m2
bf_AC <- bayes_factor(m2, m3) # BF for A:C

Then I would repeat similar comparisons to get BFs for each main effect.
This procedure made sense to me at first, but then I came across this discussion here.

My question is whether the procedure illustrated above for getting BFs makes sense.
If not, I’d appreciate any suggestions or advice on how to compute a BF for each main effect and interaction.

This vignette about Bayes Factors from the bayestestR package contains some useful information (in particular, the Comparing Models using Bayes Factors subsection).
Whether a Bayes factor testing approach makes sense really depends on the conventions in your area of research, but there are definitely alternatives. See, for example, this paper by John Kruschke: Rejecting or accepting parameter values in Bayesian estimation.
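bayestestR also provides inclusion Bayes factors (bayesfactor_inclusion()), which would directly give you one BF per main effect and interaction. A minimal sketch, assuming the models from your post were fitted with save_pars = save_pars(all = TRUE) (needed for the underlying bridge sampling):

library(bayestestR)

# Compare the candidate models against the full model
comparison <- bayesfactor_models(m1, m2, m3, denominator = full)

# One inclusion BF per term (main effect or interaction), averaged over
# the compared models that do and do not contain that term
bayesfactor_inclusion(comparison, match_models = TRUE)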


Hi @zhffk, at the risk of being the annoying forum person who answers a different question from the one you asked, I can make a suggestion that might achieve something similar. Bayes factors can be obtained in brms, although I am not very well versed in them, and I believe that to use them properly you need to consider your priors for the model components very carefully rather than relying on the defaults (ideally this would be done anyway, but it seems even more important for getting reliable BFs than for getting merely decent parameter estimates).
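For example, a minimal sketch (the normal(0, 1) prior and the mydata object are placeholders, not recommendations):

library(brms)

# Proper, reasonably informative priors on the population-level effects;
# bridge-sampling Bayes factors are sensitive to this choice
priors <- prior(normal(0, 1), class = "b")

full <- brm(
  dv ~ A * B * C + (1 | subject),
  data = mydata,                     # placeholder for your data
  family = bernoulli(),
  prior = priors,
  save_pars = save_pars(all = TRUE)  # required by bayes_factor()
)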

Something that might work very similarly to your approach of comparing a model with vs. without a specific component would be leave-one-out cross-validation (LOO-CV) or an information criterion such as WAIC. These are easy to compute and compare for a brms model. They estimate the out-of-sample predictive power of a model while penalising excessive complexity. So you can, for example, get LOO for one model, then LOO for the other, and compare them; this tells you whether whatever you added to the model appears to improve its predictive performance.

The loo package implements these criteria, and they are also available directly in brms. Several people on this forum contribute to it.
First you would call add_criterion() on your model fit objects, choosing a criterion such as "loo" or "waic". Then you can compare the models.
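For example, a rough sketch reusing the full and m1 fits from the question:

# Compute and store the LOO criterion for each fitted model
full <- add_criterion(full, criterion = "loo")
m1 <- add_criterion(m1, criterion = "loo")

# Models are listed from best to worst expected predictive performance;
# an elpd_diff much larger in magnitude than its se_diff suggests a real difference
loo_compare(full, m1)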

This is not the same as a BF giving you evidence for or against including a term, but it would tell you, for example, whether adding the predictor does or does not reliably improve the estimated predictive power of the model, which gets at a very similar question.
