- Operating System: Windows 10
- brms Version: 2.8.0
I do not understand how to interpret random slopes in the output of brms, despite reading the informative vignettes and the two following papers:
brms: An R Package for Bayesian Multilevel Models
Advanced Bayesian Multilevel Modeling with the R Package brms
Among others, I read this post on the output from lmer, and I understood something about random-slope interpretation.
However, instead of Variance, in brms I have Estimate, just as for the fixed effects!
This page also helped me a bit.
In the following example, the variable “experience.Mom” has a positive estimate at the group level (random slope), which, if I understand correctly, is always the case (why?), and a negative effect at the population level (the intervals cross 0, so there is no clear effect anyhow).
What does my random slope Estimate mean? Why don’t I have a value for each level of my grouping factor? How does it influence my outcome variable (Y)?
I hope I explained myself; if anything I ask is unclear, please tell me what to improve.
#Group level (random)
Estimate Est.Error l-95% CI u-95% CI Eff.Sample
sd(Intercept) 0.08 0.06 0.00 0.23 4323
sd(experience_Mom.z) 0.09 0.06 0.00 0.24 4831
cor(Intercept,experience_Mom.z) 0.00 0.57 -0.94 0.95 7528
# Population level (fixed)
Estimate Est.Error l-95% CI
experience_Mom.z -0.02 0.09 -0.20
Thank you for any explanation.
Note that by default, brms does not report estimates of the actual random effects, but only their standard deviation (the hyperparameter). To see the estimates of the actual effects, use ranef().
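As a minimal sketch (the formula, data frame d, and grouping variable mom_id are placeholders, not from the original model), this is where the two kinds of output come from:

```r
library(brms)

# Hypothetical model: random intercept and slope of experience_Mom.z
# varying by mother ID (names are assumptions for illustration).
fit <- brm(Y ~ experience_Mom.z + (1 + experience_Mom.z | mom_id), data = d)

summary(fit)  # group-level hyperparameters: sd(...) and cor(...)
ranef(fit)    # estimated deviation from the population effect, per level of mom_id
coef(fit)     # population-level effect plus the group-level deviation, per level
```

So the sd(experience_Mom.z) row in summary() is the estimated spread of the per-mother slopes, while ranef() gives you one slope deviation per mother.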
Also I generally find it useful to interpret models not from their coefficients but using posterior predictions.
Does that answer your question?
Yes, I think the name “Estimate” in the summary threw me off a little. The standard deviation is useful for understanding explained variance and repeatability, and with ranef() I have the estimate for each level.
Can you point me to an example interpreting a mixed model including random slopes using posterior predictors? (Sorry for the very general request; I will look it up myself too, but maybe you have a nice example on the top of your mind.)
Thank you very much for the clarification.
Sorry, not really. (This is probably a typo, but note that I am speaking about “predictions”, not “predictors”.) IMHO it tends to depend heavily on the actual scientific question you are asking. But the general approach is to use posterior_linpred for some inputs of interest and interpret the results. The plus side is that this automatically incorporates any correlations in the posterior.
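As a sketch of that pattern (the newdata grid and the mom_id grouping variable are made-up illustrations, assuming a fitted brms model fit):

```r
library(brms)

# Hypothetical grid of predictor values to condition on, including a
# new (unobserved) level of the grouping factor.
newd <- data.frame(experience_Mom.z = seq(-2, 2, length.out = 50),
                   mom_id = "new_mom")

# Draws from the posterior of the linear predictor: one row per posterior
# sample, one column per row of newd. allow_new_levels lets us predict
# for a mother not present in the original data.
draws <- posterior_linpred(fit, newdata = newd, allow_new_levels = TRUE)

# Summarise, e.g. posterior mean curve with 95% intervals, then plot.
mean_curve <- colMeans(draws)
ci <- apply(draws, 2, quantile, probs = c(0.025, 0.975))
```

Plotting a handful of individual rows of draws as separate lines gives exactly the kind of spaghetti plot described below: the spread of the lines shows your uncertainty directly.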
For example, you may want to plot how the outcome changes as you change a continuous variable and a discrete variable at the same time as in this plot:
The color indicates two groups that are allowed to have different slopes; each line is a sample from the posterior distribution (the response is non-linear here, hence the curved lines); and the thick blue stair-like line is what would be expected if there were perfect agreement between doctors and certain guidelines.
We can interpret this as: a) there is disagreement between doctors and guidelines; b) we are less certain about the blue group (because there is less data for it); and c) the groups overlap heavily, so we cannot demonstrate a difference, but we also cannot show that they are similar, since some of the blue curves are very different from the red curves (which have more data and are better constrained).
A follow-up question about the #Group level (random) output achieved by a brm() model: such a high value indicates that the model is over-fitting when it is fitted with the lme4 package. Should I pay attention to this problem in a brm() model and adjust the random effects?
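If the concern is a group-level correlation estimated near ±1 (the usual symptom of a singular, over-fitted fit in lme4), one common adjustment, sketched here with placeholder variable names, is to drop the correlation with the || syntax, which brms shares with lme4:

```r
library(brms)

# Correlated random intercept and slope (the default with |):
fit_cor   <- brm(Y ~ experience_Mom.z + (1 + experience_Mom.z | mom_id),  data = d)

# Uncorrelated version: || still estimates both sd's but fixes cor = 0.
fit_uncor <- brm(Y ~ experience_Mom.z + (1 + experience_Mom.z || mom_id), data = d)

# Compare out-of-sample fit with approximate leave-one-out cross-validation.
loo(fit_cor, fit_uncor)
```

Note that in brms the default LKJ prior on the correlation already regularizes it away from exactly ±1, so hard singular fits are less common than in lme4, but comparing the two models is still a reasonable check.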