Hypothesis for main and interaction effects

Hello,

I’ve fit a model as y ~ x1 * x2 * x3, where all the predictors are categorical with 2 levels each. I have a few questions:

  1. Is it possible to change the contrast coding of the predictors from the default treatment coding to sum coding without having to run the model again?

  2. I’m looking for a relatively easy way to use the hypothesis function to get Bayes factors for main effects (of x1, x2, and x3) as well as interaction effects (x1:x2, x2:x3, x1:x3, and x1:x2:x3). I would appreciate any advice or resources in this regard.
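For reference, here is roughly my setup (a minimal sketch; `df` and the variable names are placeholders):

```r
library(brms)

# y is continuous; x1, x2, and x3 are two-level factors
# (default treatment coding)
fit <- brm(y ~ x1 * x2 * x3, data = df)
```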

Thank you for reading :)


bump

  1. You can compute any desired contrasts yourself, for example, using the hypothesis function.

  2. With 2 levels each, you have just one coefficient per main effect and one per interaction, so if you have specified proper priors and set sample_prior = TRUE, you can use the hypothesis function for BFs as described in the docs (see the sketch below).
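A minimal sketch (assuming a data frame `df`; the prior and the coefficient name are illustrative):

```r
library(brms)

# proper priors on the coefficients are required for Savage-Dickey BFs
fit <- brm(
  y ~ x1 * x2 * x3,
  data = df,
  prior = prior(normal(0, 1), class = "b"),
  sample_prior = TRUE
)

# evidence ratio for the point hypothesis that the x1 coefficient is 0;
# the coefficient is named after the non-reference level, e.g. "x1b".
# note: with the default treatment coding, lower-order terms are simple
# effects; use sum/deviation coding if you want main effects
hypothesis(fit, "x1b = 0")
```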

Thanks @paul.buerkner. The intercept parameter is not the overall grand mean but rather the cell mean of the reference level, right? If I set my categorical predictors to effects coding (rather than the default dummy coding), would the parameters make more intuitive sense and hence make using the hypothesis function easier?

Also, does ‘0 + Intercept’ matter for 2-level categorical predictors, or is that only relevant for continuous predictors?

I am not sure I understand. Can you clarify, please?

Hi @paul.buerkner,

I’ve figured out my problem with the parameters - it was a lack of understanding on my part (deviation contrasts with 0.5 and -0.5 give me parameters that make sense, which I can pass to the hypothesis function as required).
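In case it helps anyone later, here is roughly what I did (a sketch; it assumes x1, x2, and x3 are already factors in `df`):

```r
library(brms)

# deviation coding with -0.5 / 0.5: each coefficient is then the
# difference between the two levels, and (in a balanced design)
# the intercept is the grand mean of the cell means
contrasts(df$x1) <- c(-0.5, 0.5)
contrasts(df$x2) <- c(-0.5, 0.5)
contrasts(df$x3) <- c(-0.5, 0.5)

# the model must be refit for the new coding to take effect
fit_dev <- brm(
  y ~ x1 * x2 * x3,
  data = df,
  prior = prior(normal(0, 1), class = "b"),
  sample_prior = TRUE
)
```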

My question regarding ‘0 + Intercept’ is about how the Intercept is parameterized. I have read the relevant documentation but am not sure I completely understand it, so I was wondering whether y ~ 0 + Intercept + x1*x2*x3 would be any different from y ~ x1*x2*x3 when x1, x2, and x3 are each two-level factors. I ran a couple of tests and the parameters in summary didn’t seem to change.

Also, on another topic (based on the thread here), I noticed slight variability when using the bayes_factor function for model comparison (even with 40,000 post-warmup posterior samples). I thought of doing what the OP suggested (i.e., running the function 100 times), but if I interpreted Henrik correctly, he said that because of how bridge sampling works, that would not be the way to go and that one would need at least 2 independent sets of posterior samples. Do I go about getting another independent set of posterior samples simply by running the model again via brm?

Thank you for all your help!

The difference between 1 and 0 + Intercept is just in how priors on the intercept are specified, which is probably not a concern of yours in this case.
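To illustrate (a sketch; the priors are placeholders):

```r
library(brms)

# default parameterization: the intercept has its own prior class
fit1 <- brm(
  y ~ x1 * x2 * x3, data = df,
  prior = c(
    prior(normal(0, 5), class = "Intercept"),
    prior(normal(0, 1), class = "b")
  )
)

# with 0 + Intercept, the intercept is an ordinary coefficient,
# so a prior of class "b" covers it as well
fit2 <- brm(
  y ~ 0 + Intercept + x1 * x2 * x3, data = df,
  prior = prior(normal(0, 1), class = "b")
)
```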

No second set of posterior samples required. The variability is across different runs of bayes_factor for the same set of posterior samples.
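A quick way to gauge that variability (a sketch; it assumes fit1 and fit2 were fit with save_pars = save_pars(all = TRUE), which bridge sampling requires):

```r
library(brms)

# rerun the bridge sampler several times on the same posterior draws
bfs <- replicate(10, bayes_factor(fit1, fit2)$bf)
summary(bfs)
```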

Ah, I see! So in the ‘0 + Intercept’ case I can just set priors on b and that would cover the intercept as well, rather than having to specify it separately, correct?

Indeed.

I see there are ‘iterations’ shown when I run the bayes_factor function. What do those relate to? Also, should I run this function a few times to gauge the stability of the BF? Would you suggest an approach like the one the OP mentioned in the link in my previous post (i.e., run the function 100 times to get a central tendency and a variability measure)?