Phase is a categorical variable (factor) with three levels (0, 1, 2).
With the default contrast coding, I get estimates for the following fixed effects:
Intercept (estimate at Phase = 0)
Slope 1 (difference between Phase 0 and Phase 1)
Slope 2 (difference between Phase 0 and Phase 2)
I also get estimates for the following random effects:
Correlation between random intercepts and random slopes 1
Correlation between random intercepts and random slopes 2
Correlation between random slopes 1 and random slopes 2
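For concreteness, a minimal sketch of the kind of model I mean (the outcome y, the factor phase, the grouping factor subject, and the data frame d are placeholder names):

```r
library(brms)

# Placeholder names: outcome y, three-level factor phase, grouping factor subject.
fit_full <- brm(
  y ~ phase + (phase | subject),     # random intercepts and slopes, all correlations estimated
  data = d,
  save_pars = save_pars(all = TRUE)  # keep all draws; needed later for bridge sampling
)
```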
My question:
I would like to test the hypothesis that the correlation between random slopes 1 and random slopes 2 is different from 0. I’d like to run a model comparison (e.g., via bayes_factor()), comparing a model with all three correlations to a model in which the third correlation is fixed to 0.
Is that possible?
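The closest comparison I currently know how to write down uses brms’ || syntax, which drops all group-level correlations at once rather than only the third one. With the fit_full sketch from above, that would look like:

```r
# Reduced model: the || syntax removes *all* group-level correlations,
# which is coarser than fixing only the slope1-slope2 correlation to zero.
fit_nocor <- brm(
  y ~ phase + (phase || subject),
  data = d,
  save_pars = save_pars(all = TRUE)
)

# Bridge-sampling Bayes factor, full vs. no-correlation model:
bayes_factor(fit_full, fit_nocor)
```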
From the brms documentation I learned that it is possible to set parameters to constants in the prior specification. Is that also possible for (parts of the) variance-covariance matrix?
@paul.buerkner might chime in to correct me, but I suspect this level of granularity is going to be difficult to achieve. However…
If you’re willing to adopt an estimation framework (rather than the “evidence” framework embodied by BFs), you get this comparison for free from the single full model: compute the difference between the two correlations in each sample from the posterior, yielding a posterior for the difference that you can visualize and summarize with an eye to how much mass is near zero.
Could you elaborate on the inference you are seeking to achieve with this comparison? If it is merely whether the third correlation has much posterior credibility away from zero, that again comes for free from a single model in the estimation framework.
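For instance, something like this (the grouping factor “subject” and the parameter name below are guesses; check parnames() on your fit):

```r
library(brms)

draws <- as_draws_df(fit)  # fit = your full brmsfit
cor12 <- draws$cor_subject__phase1__phase2  # guessed parameter name

quantile(cor12, probs = c(.025, .5, .975))  # credible interval for the correlation
mean(abs(cor12) < 0.1)  # posterior mass in a small region around zero
```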
@mike-lawrence Thank you for your suggestions! I actually do want to quantify the evidence in favor of the correlation in terms of a BF (although, of course, looking at the posterior distributions is already extremely informative!). Also, I am not sure if I missed something here, but I don’t think the posterior of the difference correlation1 − correlation2 would tell me anything about correlation2 itself? These correlations are conceptually very different.
Ah, sorry, my error. I was misreading, and no difference is required to evaluate whether zero remains a credible value for the correlation between slope 1 and slope 2.
Bah, another error, this time in how I expressed it, so here’s a more explicit version:
If you are interested in whether zero remains a credible value for the correlation between the intercept and slope1, just look at the posterior.
If you are interested in whether zero remains a credible value for the correlation between the intercept and slope2, just look at the posterior.
If you are interested in whether zero remains a credible value for the difference between [correlation between the intercept and slope1] and [correlation between the intercept and slope2], compute the difference in each sample of the posterior and look at the difference posterior.
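A sketch of that last case, reusing the guessed parameter names from above (assuming brms is loaded and fit is your full model):

```r
draws <- as_draws_df(fit)  # fit = your full brms model
# Draw-by-draw difference between the two intercept-slope correlations:
diff_cor <- draws$cor_subject__Intercept__phase1 -
  draws$cor_subject__Intercept__phase2
quantile(diff_cor, probs = c(.025, .5, .975))  # summary of the difference posterior
hist(diff_cor)  # visualize how much mass is near zero
```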
Ok, thanks for clarifying this! I am interested in a different question, namely testing the presence of the correlation between slope1 and slope2 by comparing a model with that correlation to a model without it (i.e., with the correlation set to a constant of 0). The two other correlations, intercept - slope1 and intercept - slope2, should be part of both models.
You cannot set parts of the correlation matrix to zero currently.
However, if you want to run a Bayes factor analysis, you can also use the hypothesis() function (see the Evid.Ratio column for the BF), which applies the Savage-Dickey density ratio method rather than the bridge sampling used by bayes_factor().
So, first check the name of the correlation parameter of interest via parnames(model). Then call hypothesis(model, "<cor> = 0", class = ""), where <cor> is the name of the relevant correlation parameter.
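A sketch with made-up names (note that hypothesis() needs prior samples to compute the Savage-Dickey ratio for a point hypothesis, so the model has to be fitted with sample_prior = TRUE):

```r
# Fit (or refit) with prior samples, required for the Savage-Dickey
# density ratio of a point hypothesis:
fit <- brm(y ~ phase + (phase | subject), data = d, sample_prior = TRUE)

parnames(fit)  # look up the exact correlation parameter name

# "cor_subject__phase1__phase2" is a guessed name; substitute yours:
hypothesis(fit, "cor_subject__phase1__phase2 = 0", class = "")
# The Evid.Ratio column is the BF in favor of the point hypothesis (cor = 0).
```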
Thanks! The Savage-Dickey method is indeed an alternative. I had hoped it would be possible to use bridge sampling (because I decided to use bridge sampling for all my other hypothesis tests, and I think it’s good to be consistent in how I test hypotheses). @paul.buerkner Are there any plans to work on this possibility? Again, thanks a lot!