[Edit: my first proposed approach in this message was incorrect, because I misunderstood what the sigma term in the model output means. A more suitable approach is (hopefully) arrived at in the later comments from ReneTwo and myself.]
I am looking to check the validity of an approach I have taken to get a posterior distribution of effect sizes in a linear model.
I have fitted the following regression model, suppressing the intercept so that the model estimates a mean for each group. Group and Prox are categorical variables with two levels each:
model <- brm(outcomeVar ~ groupindex - 1 + proxindex + proxindex:groupindex + (1 | subject))
To approximate a ‘main effect’ of group, I:
- drew samples from the posterior (with posterior_samples())
- calculated an overall score for each group, irrespective of proximity, by summing the relevant parameter estimates. E.g., for the group 1 overall score I calculated (group 1 mean + (group 1 mean + effect of proximity)) / 2, i.e. the average of group 1 over both proximity conditions
- calculated the difference between the groups by subtracting one group’s overall score from the other’s, in every posterior sample
- from these I get a posterior distribution of the difference. So this is just a simple group-level contrast that averages over one of the variables, and I think it is correct
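In code, the contrast described above looks something like this (a sketch only; the column names b_groupindex1, b_groupindex2, b_proxindex2, and the interaction term are assumptions and will depend on your factor coding — check names(post) against your own fit):

```r
# draw posterior samples from the fitted model
post <- posterior_samples(model)

# overall score per group, averaged over the two proximity conditions
# (parameter names are assumed; verify with names(post))
group1_overall <- (post$b_groupindex1 +
                   (post$b_groupindex1 + post$b_proxindex2)) / 2
group2_overall <- (post$b_groupindex2 +
                   (post$b_groupindex2 + post$b_proxindex2 +
                    post$`b_groupindex2:proxindex2`)) / 2

# posterior distribution of the group difference
diff_groups <- group1_overall - group2_overall
quantile(diff_groups, c(0.025, 0.5, 0.975))
```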
Is it valid to then generate a posterior distribution of possible effect sizes (Cohen’s d: the difference between groups divided by the standard deviation) by simply dividing the group difference in each posterior sample by that sample’s estimated sigma? This makes sense to me, and does produce sensible results, but I wanted to confirm that it is an acceptable way to use the posterior. I have attached a plot I made of one such effect size posterior distribution (the very large size of the effect is not a mistake — it is for a manipulation check where there is almost no overlap between the values from each group).
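As a self-contained sketch, the per-draw computation would be as below (column names such as b_groupindex1 and sd_subject__Intercept are assumptions about the parameter naming in your fit):

```r
post <- posterior_samples(model)

# posterior group difference, averaged over proximity (names assumed)
diff_groups <- ((post$b_groupindex1 + (post$b_groupindex1 + post$b_proxindex2)) -
                (post$b_groupindex2 + (post$b_groupindex2 + post$b_proxindex2 +
                 post$`b_groupindex2:proxindex2`))) / 2

# as originally proposed: divide each posterior difference by that draw's sigma
d_sigma <- diff_groups / post$sigma

# per the edit at the top of the post, sigma is only the residual SD in this
# multilevel model; a total SD that also includes the subject-intercept SD
# may be a more appropriate standardiser (sd_subject__Intercept is an
# assumed column name)
d_total <- diff_groups / sqrt(post$sigma^2 + post$sd_subject__Intercept^2)
```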
What I did next was to compare two effect sizes by subtracting one from the other in each posterior sample. This gives a posterior distribution of the difference between effect sizes, showing whether one is likely larger than the other. Is that also a viable approach?
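Given two such vectors of per-draw effect sizes (the names d_effect1 and d_effect2 are hypothetical, each computed as above for its own contrast), the comparison is just:

```r
# posterior distribution of the difference in effect sizes
d_diff <- d_effect1 - d_effect2

# posterior probability that effect 1 is larger than effect 2
mean(d_diff > 0)

# and a credible interval for the difference
quantile(d_diff, c(0.025, 0.975))
```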
If you are able to confirm whether this approach to computing effect sizes is valid, that would be awesome!
Please also provide the following information in addition to your question:
- Operating System: macOS Sierra 10.12.6
- brms Version: 2.8.0