I’m currently learning how to design a study within a Bayesian framework and decided to use brms, since its syntax is very similar to lme4. In the frequentist approach I usually fit a full model and a null model and then run a likelihood ratio test to assess significance, because the p-value reported for the full model alone might not be accurate. I then repeat this simulation x times to estimate the power.
full model: metrics ~ treatment + (1 | subjectId)
null model: metrics ~ 1 + (1 | subjectId)
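For reference, a minimal sketch of that frequentist workflow with lme4, using simulated data (the effect size, sample sizes, and variance settings below are illustrative assumptions, not values from my actual study):

```r
library(lme4)

# Simulate one dataset (all settings are illustrative assumptions)
set.seed(1)
n_subj <- 30; n_obs <- 10
dat <- data.frame(
  subjectId = factor(rep(seq_len(n_subj), each = n_obs)),
  treatment = rep(rbinom(n_subj, 1, 0.5), each = n_obs)
)
dat$metrics <- 1 + 0.5 * dat$treatment +
  rnorm(n_subj, sd = 0.5)[as.integer(dat$subjectId)] +  # subject-level intercepts
  rnorm(nrow(dat))                                       # residual noise

# Fit full and null models with ML (REML = FALSE) so the LRT is valid
full_model <- lmer(metrics ~ treatment + (1 | subjectId), data = dat, REML = FALSE)
null_model <- lmer(metrics ~ 1 + (1 | subjectId), data = dat, REML = FALSE)
anova(null_model, full_model)  # likelihood ratio test for the treatment effect
```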
How would you do this from a Bayesian perspective?
This is something I’ve done, but I’m still not sure whether it’s correct:
Using brms, here is my understanding: suppose the model is the full model above. What I do is run hypothesis(full_model, "treatment > 0"), check whether the posterior probability exceeds some threshold, and then repeat this x times to get the “power” (see the sketch below).
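Here is a rough sketch of what I mean, assuming a simple data-generating function and a 0.95 posterior-probability criterion (both are arbitrary choices for illustration):

```r
library(brms)

# Illustrative settings (all assumptions, not taken from a real study)
n_sims    <- 100   # number of simulated datasets
n_subj    <- 30    # subjects
n_obs     <- 10    # observations per subject
b_treat   <- 0.5   # assumed true treatment effect
threshold <- 0.95  # posterior-probability criterion

simulate_data <- function() {
  subj      <- rep(seq_len(n_subj), each = n_obs)
  treatment <- rep(rbinom(n_subj, 1, 0.5), each = n_obs)
  u         <- rnorm(n_subj, sd = 0.5)  # subject-level intercepts
  data.frame(
    metrics   = 1 + b_treat * treatment + u[subj] + rnorm(n_subj * n_obs),
    treatment = treatment,
    subjectId = factor(subj)
  )
}

# Compile the model once, then refit on new data with update() to avoid recompiling
fit0 <- brm(metrics ~ treatment + (1 | subjectId), data = simulate_data(),
            chains = 2, refresh = 0)

hits <- replicate(n_sims, {
  fit <- update(fit0, newdata = simulate_data(), recompile = FALSE, refresh = 0)
  h   <- hypothesis(fit, "treatment > 0")
  h$hypothesis$Post.Prob > threshold
})
mean(hits)  # proportion of simulations where Pr(treatment > 0) exceeds the threshold
```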
There are many ways to see whether treatment has an “effect”. The most straightforward solution is the one you already found via hypothesis(). As a Bayesian, though, I would object to comparing anything to an “alpha” and then calling it “significant”. Instead, you can simply report the posterior probability (Post.Prob) that the treatment effect is greater than zero, for example as in the sketch below.
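Assuming your fitted full model is stored in an object called fit (the name is just for illustration), something along these lines:

```r
library(brms)

# Assuming: fit <- brm(metrics ~ treatment + (1 | subjectId), data = dat)
h <- hypothesis(fit, "treatment > 0")
h$hypothesis$Post.Prob  # Pr(treatment effect > 0 | data) as reported by hypothesis()

# The same probability computed directly from the posterior draws
draws <- as_draws_df(fit)
mean(draws$b_treatment > 0)
```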
Thanks for your response. Right, I should care more about the posterior itself rather than just rejecting or not rejecting.
And after scrolling through the forum I’ve found a neat way to get E[Y|treatment] by using emmeans + tidybayes (sketched below); this will really help if I use more covariates or a more complex model.
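Something along the lines of this sketch (again assuming the full brms model is stored in fit and treatment is coded 0/1; the specific calls are just one way to do it):

```r
library(emmeans)
library(tidybayes)

# Posterior draws of E[Y | treatment] at each treatment level
emmeans(fit, ~ treatment, at = list(treatment = c(0, 1))) |>
  gather_emmeans_draws() |>
  median_qi(.value)  # posterior median and 95% interval per treatment level
```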