I’m fitting a mixed effects logistic regression with an interaction term:
my_model <- brm(correct ~ confidence*condition + (confidence*condition|subject),
data = my.data.frame,
family = 'bernoulli')
and I would like to see how the random slopes for the interaction term “confidence*condition” correlate across the levels of “condition”. The “condition” factor has three levels, but in the model output I can only get:
cor(confidence:condition1,confidence:condition2)
Is there a way to get it for all three correlations?
I was also trying to extract individual random slope estimates (and then run the correlation analysis myself), using
ranef(my_model)
… but this also only yields estimates for “confidence:condition1” and “confidence:condition2”.
As I understand it, that is because these estimates are relative to the reference level, but here I am interested in the absolute slopes for each level of “condition”.
I hope my question makes sense and I’m not asking something super trivial. Apologies if I’m missing something obvious (I’m quite new to the whole mixed-effects modeling world).
- Operating System: MacOS
- brms Version: 2.13.0
Hi,
sorry for taking so long to respond. Unfortunately, you have not provided many details about your data - especially which values confidence can take. The full model output would also have been useful, so I’ll be operating on a bit of guesswork (don’t worry, I don’t think it is likely to matter much). My best guess is that confidence is a numerical variable and that condition is a factor with levels 0, 1, 2.
The important thing to notice is that brms (like basically any other linear regression package) uses “dummy coding” for factors/booleans. To make things less confusing, I will use condition to mean the value of the condition variable in the data and b_condition for the coefficient brms uses. So for confidence * condition, your data gets recoded for the model as follows (using cond for condition and cnf for confidence to keep the table small):
Orig. data | Data for model
==================================================================
cnf cond | Intercept cnf cond1 cond2 cnf:cond1 cnf:cond2
0.3 "0" | 1 0.3 0 0 0 0
0.6 "0" | 1 0.6 0 0 0 0
0.21 "1" | 1 0.21 1 0 0.21 0
0.4 "1" | 1 0.4 1 0 0.4 0
0.74 "2" | 1 0.74 0 1 0 0.74
0.66 "2" | 1 0.66 0 1 0 0.66
The model then has one coefficient for each column in the “Data for model” part (and then one coefficient per subject for the varying part).
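You can see this recoding directly with base R’s model.matrix (a quick sketch using the made-up rows from the table above; brms builds its design matrix in the same dummy-coded way):

```r
# Hypothetical data mirroring the table above
d <- data.frame(
  confidence = c(0.3, 0.6, 0.21, 0.4, 0.74, 0.66),
  condition  = factor(c("0", "0", "1", "1", "2", "2"))
)

# Dummy-coded design matrix for confidence * condition
X <- model.matrix(~ confidence * condition, data = d)
colnames(X)
# "(Intercept)" "confidence" "condition1" "condition2"
# "confidence:condition1" "confidence:condition2"

# Interaction columns are confidence times the condition indicator,
# e.g. row 3 (cnf = 0.21, cond = "1"):
X[3, "confidence:condition1"]  # 0.21
```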
This is why you see only confidence:condition1 and confidence:condition2 in your summary - this is how the model sees the data, and this is why only those terms appear in the correlations your model estimates.
So while the confidence coefficient can easily be interpreted in this model as the “effect” of confidence for condition 0, confidence:condition1 is not the effect of confidence for condition 1 - to get that effect, you need to add confidence + confidence:condition1.
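As a concrete sketch of that summing (with made-up numbers, not values from your model): if you take one subject’s random-slope estimates from ranef(), the subject-level confidence slope in each condition is:

```r
# Hypothetical random-slope estimates for one subject (made-up numbers)
r <- c("confidence"            =  0.10,
       "confidence:condition1" = -0.25,
       "confidence:condition2" =  0.05)

# Per-condition confidence slopes for this subject:
slope_cond0 <- r[["confidence"]]                                # 0.10
slope_cond1 <- r[["confidence"]] + r[["confidence:condition1"]] # -0.15
slope_cond2 <- r[["confidence"]] + r[["confidence:condition2"]] # 0.15
```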
So if you really need the correlation across all three conditions (which is not a natural quantity in the model), you can either use some linear algebra to account for the need to sum the coefficients, or - possibly more easily - interpret the model via its predictions. I answered a similar question at Multivariate model interpreting residual correlation and group correlation, but feel free to ask for clarifications here if anything is unclear.
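A sketch of the linear-algebra route: if Sigma is the random-effects covariance matrix for (confidence, confidence:condition1, confidence:condition2) - the numbers below are made up; in practice you would do this per posterior draw, e.g. via VarCorr(my_model, summary = FALSE) - then the per-condition slopes are a linear map A of the dummy-coded slopes, so their covariance is A Sigma A':

```r
# Made-up covariance of
# (confidence, confidence:condition1, confidence:condition2)
Sigma <- matrix(c(0.40, 0.10, 0.05,
                  0.10, 0.30, 0.02,
                  0.05, 0.02, 0.35), nrow = 3)

# Rows map dummy-coded slopes to per-condition slopes:
# cond0 = cnf, cond1 = cnf + cnf:cond1, cond2 = cnf + cnf:cond2
A <- rbind(c(1, 0, 0),
           c(1, 1, 0),
           c(1, 0, 1))

Sigma_cond <- A %*% Sigma %*% t(A)
cov2cor(Sigma_cond)  # correlations between the per-condition slopes
```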
Best of luck with your model!
Dear Martin, thank you very much for your response and apologies it took me so long to reply back.
Your assumptions were right, and your response makes a lot of sense. I now realize that at the time of asking the question, the missing piece of the puzzle was wrapping my head around “dummy coding”.
Extra thanks for pointing me to the similar case as well - it was indeed conceptually similar, in terms of the correlations needed.