I have a question about interpreting the results of a multi-membership LMM fitted with the brm function. One of my predictors has several levels, and for some of these levels the 95% credible interval crosses zero while for the others it doesn't. What does that mean for the overall effect of this predictor?
To give an example: I have dyadic data. The response is a similarity measure between two individuals, and my predictor is rank. I want to see whether two dominant individuals are more similar to each other than a dominant and a subordinate individual, or than two subordinate individuals.
You seem to be coming from a background where you’re used to doing an ANOVA to query the overall influence of a variable, possibly followed by post-hoc contrasts to investigate the precise nature of that influence.
There are ways to use Bayes to ask an ANOVA-like question: either literally computing an F-ratio of some sort in each draw from the posterior of a single model, or computing a Bayes factor for two models, one with and one without the predictor of interest. That said, the Stan community tends to hew to a "parameter estimation" philosophy of inference rather than one of hypothesis testing or model comparison.
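If you did want the model-comparison route, a sketch might look like the following. Note the data frame `d` and the variable names `sim`, `rank`, `id1`, and `id2` are placeholders standing in for your actual data, and multi-membership model details (priors, family, etc.) are omitted:

```r
library(brms)

# Fit the model with and without the predictor of interest.
# save_pars(all = TRUE) is needed later for bridge-sampling Bayes factors.
m_full <- brm(sim ~ rank + (1 | mm(id1, id2)), data = d,
              save_pars = save_pars(all = TRUE))
m_null <- brm(sim ~ 1 + (1 | mm(id1, id2)), data = d,
              save_pars = save_pars(all = TRUE))

# Bayes factor for the full vs. null model via bridge sampling:
bayes_factor(m_full, m_null)

# Or compare out-of-sample predictive performance instead:
loo(m_full, m_null)
```

Whether a Bayes factor or LOO comparison is appropriate depends on your priors and goals; the parameter-estimation approach described below sidesteps that choice.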
So when you specify a model in brms using formula syntax, it constructs behind the scenes a set of contrasts for each term in your specification, for use in a multiple-regression-style model. The precise nature of the contrasts depends on whether you've set a custom contrasts attribute on the variable in R; if not, a default contrast coding scheme is used (brms follows R's defaults: treatment coding for unordered factors and polynomial coding for ordered factors).
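You can inspect or change the contrast coding before fitting. A small sketch with made-up dyad-rank levels (not your actual data):

```r
# A three-level factor for the dyad's rank composition (hypothetical levels):
rank <- factor(c("dom-dom", "dom-sub", "sub-sub"))

# R's default for unordered factors is treatment (dummy) coding,
# so each coefficient is a difference from the reference level:
contrasts(rank)

# To use sum-to-zero coding instead, so coefficients are deviations
# from the grand mean, set the contrasts attribute before fitting:
contrasts(rank) <- contr.sum(3)
```

Under treatment coding, "one contrast crossing zero and another not" just means one level's difference from the reference level is credibly nonzero while another's isn't; it is not a statement about the predictor as a whole.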
After sampling you then get a posterior in which you can summarize the marginal distribution for each contrast. See the brms::conditional_effects() function for visualizing what such posteriors imply for the condition means of the variable associated with the contrasts.
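Concretely, with a fitted model (here called `m_full`; the name and the coefficient labels are placeholders that depend on your factor levels):

```r
# Marginal posterior summaries for each contrast:
summary(m_full)

# Posterior-implied condition means for each level of rank:
conditional_effects(m_full, effects = "rank")

# A specific comparison between two non-reference levels can be
# queried directly; coefficient names follow brms's labeling of
# your factor levels (check them with variables(m_full)):
hypothesis(m_full, "rankdomMsub = ranksubMsub")
```

This way you answer the "which dyad types differ" question directly from the posterior, without needing an omnibus ANOVA-style test first.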