Inference of the results for Linear Mixed Models computed with brms

Hi,

I have a question about interpreting the results of a multi-membership LMM fitted with the brm() function. One of my predictors has several levels, but for some of these levels the 95% credible interval crosses zero, while for the others it doesn't. What does that mean for the overall effect of this predictor?

To give an example: I have dyadic data. The response is a similarity measure between two individuals, and my predictor is rank. I want to see whether two dominant individuals are more similar to each other than a dominant and a subordinate individual, or than two subordinate individuals.

Here is the model call we used:

mRank.brm <- brm(
  Gunif ~ DomDyad + z.Years + SeasonSame +
    (1 + SeasonSame.dry17 + SeasonSame.wet17 + SeasonSame.wet16 || GroupDyad) +
    (1 + SeasonSame.dry17 + SeasonSame.wet17 + SeasonSame.wet16 || mm(ID1, ID2)),
  data = test.data, backend = "cmdstanr",
  control = list(adapt_delta = 0.999, max_treedepth = 20)
)

Here is the output:

For the predictor DomDyad, only the level OthOth seems to have an effect. How should I describe this result?

I hope someone can help me out.
Thanks!

Try searching Google for the terms "linear model categorical variable contrasts".


Thanks for your reply! However, that doesn't help me much. Are you suggesting I do post-hoc contrasts?

You seem to be coming from a background where you’re used to doing an ANOVA to query the overall influence of a variable, possibly followed by post-hoc contrasts to investigate the precise nature of that influence.

While there are ways to use Bayes to ask an ANOVA-like question (either literally computing an F-ratio of some sort in each sample of the posterior of a single model, or computing a Bayes Factor for two models, one with and one without the predictor of interest), the Stan community tends to hew to a more “parameter estimation” philosophy of inference (as opposed to hypothesis-testing/model-comparison).
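If you do want the ANOVA-like answer, the model-comparison route might look like the sketch below. It assumes the fitted object mRank.brm from your call above; the refit and comparison calls are illustrative, not part of your original workflow.

```r
library(brms)

# (1) Refit without the predictor of interest and compare the two models
#     via approximate leave-one-out cross-validation.
mRank.null <- update(mRank.brm, formula. = ~ . - DomDyad)
loo_compare(loo(mRank.brm), loo(mRank.null))

# (2) A Bayes factor for the same comparison requires both models to be
#     fit with proper priors and save_pars = save_pars(all = TRUE); then:
# bayes_factor(mRank.brm, mRank.null)
```

Note that loo_compare reports differences in expected predictive accuracy, not a significance test, which fits the estimation-oriented philosophy described above.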

So when you specify a model in brms using formula syntax, it is behind the scenes constructing a set of contrasts for each term in your specification, for use in a multiple-regression-style model. The precise nature of the contrasts depends on whether you've set a custom contrasts attribute on the variable in R; if not, a default contrast coding scheme is used (I think brms uses R's defaults: treatment coding for unordered factors and polynomial coding for ordered factors).
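You can see the default treatment coding in base R directly; the level names below are hypothetical stand-ins for your DomDyad levels:

```r
# Treatment coding for an unordered 3-level factor: the first level
# (alphabetically, here "DomDom") is the reference, and each column is
# an indicator for one of the other levels. Each regression coefficient
# is then that level's difference from the reference level.
dom <- factor(c("DomDom", "DomOth", "OthOth"))
contrasts(dom)
#        DomOth OthOth
# DomDom      0      0
# DomOth      1      0
# OthOth      0      1
```

This is why the summary table shows one row per non-reference level rather than a single row for the predictor as a whole.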

After sampling you get a posterior in which you can summarize the marginal posterior for each contrast. See the brms::conditional_effects() function for visualizing what such posteriors imply about the condition means of the variable associated with the contrasts.
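Concretely, with your fitted object this could look like the following sketch (the coefficient names in the hypothesis() call are illustrative; check the actual names with summary() or variables()):

```r
library(brms)

summary(mRank.brm)  # marginal posterior summary for each contrast

# Visualize the implied condition means for the DomDyad levels:
conditional_effects(mRank.brm, effects = "DomDyad")

# Directed comparisons between specific levels can be phrased with
# hypothesis(), e.g. is OthOth larger than DomOth:
# hypothesis(mRank.brm, "DomDyadOthOth - DomDyadDomOth > 0")
```

That last call gives you the post-hoc-contrast-style answer directly from the posterior, without a separate testing step.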