I can’t really share much information with you, for compliance reasons. But I’ve been fitting classical LME models, partly because of colleagues’ suggestions (my understanding of how priors influence inference is still shaky), and partly because it’s just faster (I’ve used the predict function countless times). Only last week did I discover the huge infrastructure around rstanarm, etc. Prior to this I was trying to implement samplers myself, and then to write Stan code myself.
Anyway - I had been running both Bayesian and frequentist models in parallel: classical LME models on longitudinal clinical data where we’re estimating a treatment effect. Both give similar parameter estimates (the MLE and the posterior mode), which is good. But I’m stuck within this binary hypothesis-testing framework, and my confidence intervals for regression coefficients aren’t giving me much information. For example, if I plot a classical simple linear regression’s confidence interval as if it were a predictive distribution, it in no way describes the variability in the data; the confidence band for the mean is much narrower than a prediction interval for new observations. I just verified this in R.
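To make the last point concrete, here is a minimal sketch with simulated data (all variable names are made up) showing that the confidence band for the fitted mean covers far less of the data than the prediction band does:

```r
# Simulated data: confidence interval (uncertainty in the mean) vs.
# prediction interval (variability of new observations).
set.seed(1)
x <- runif(100, 0, 10)
y <- 2 + 0.5 * x + rnorm(100, sd = 2)
fit <- lm(y ~ x)

ci <- predict(fit, interval = "confidence")  # band for the regression line
pi <- predict(fit, interval = "prediction")  # band for new data points

# Fraction of observed y values falling inside each band:
mean(y >= ci[, "lwr"] & y <= ci[, "upr"])  # well below 0.95
mean(y >= pi[, "lwr"] & y <= pi[, "upr"])  # roughly 0.95
```

The confidence band only quantifies uncertainty about the mean function, so it is not supposed to describe the spread of individual observations; that is what `interval = "prediction"` is for.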
Whereas with a Bayesian approach and principled priors, the model identified a positive effect of a drug (the HPD interval was still close to zero, but did not overlap it), and a positive effect on the response is good in this treatment. I brought this up to my PI, who said it was extremely interesting: they had prescribed that drug on a hunch that it would help, expected it to have a mostly positive effect (especially relative to the other drugs used in treatment), and mentioned this could warrant a clinical trial. The effects of drugs are not always all-or-nothing, so it’s important to have accurate uncertainty estimates. An upside of the Bayesian approach, for me, is accurate uncertainty estimates, whether for regression coefficients or for the posterior predictive, and I find the direct probability interpretation extremely helpful.
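As a hypothetical sketch of the workflow (variable, drug, and data-frame names are invented; this is not the actual analysis), rstanarm lets you summarize the drug effect as a posterior probability rather than a binary significance call:

```r
library(rstanarm)

# Assumed longitudinal structure: repeated measures nested within patients.
fit <- stan_glmer(
  response ~ time + drug + (1 | patient),
  data   = clinical_df,          # placeholder data frame
  prior  = normal(0, 1),         # weakly informative prior on coefficients
  chains = 4, iter = 2000
)

# 95% central posterior interval for the drug coefficient:
posterior_interval(fit, prob = 0.95, pars = "drug")

# Posterior probability that the drug effect is positive:
draws <- as.matrix(fit, pars = "drug")
mean(draws > 0)
```

That last line is the kind of statement I find practitioners respond to: "the probability the effect is positive is X", instead of "we failed to reject the null".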
I am reporting the classical results to the practitioners, to be conservative (and to save time, because I’m not used to all of the tools yet). But this result, although not statistically significant in the classical model, was completely in line with the practitioners’ thinking and with why they prescribed the drug in the first place (i.e., it is clinically significant), and the data support that.
This is leading me to engage more seriously with priors and how they influence the model, and with making sure I’m accurately representing my data. I’m finding Bayesian models more informative in this sense: not every effect is huge, especially with many different groups, time-varying covariates, and different machinery, but the marginal effects are interesting to consider too. It could be important, for example, that a drug has a higher posterior probability of, say, killing someone, and that would be reason to discontinue it from the study. The same goes for effects that are mostly negative, or highly dispersed, things like that.
But yeah - the reason for the title: I’ve heard pharmacologists are interested in Bayesian methods, and I’m wondering if it’s for reasons similar to these.
I’d like to hear other examples. I’m genuinely interested in comparing the two methodologies, with no bias toward either.