Could anyone recommend a book or resource on the design and analysis of experiments using Bayesian methodology?
I’m a big fan of ‘Statistical Rethinking’ by Richard McElreath. But the author is upfront that he’s an anthropologist; ancient civilizations simply aren’t producing any more clay cups, so the idea of a randomized controlled trial is irrelevant to him.
But in my line of work, we absolutely can randomize units and control treatment exposure. From my research, the Frequentist gospel on this topic (for applied researchers) is ‘Design and Analysis of Experiments with R’.
I’m looking for the Bayesian analog to this book or the resources that come closest.
How would a Bayesian choose appropriate priors and likelihoods for block designs, pairwise assignments, panel data, etc.?
Practitioners primarily use Bayesian stats on observational data, so the idea of experimentation is a bit alien.
Experimentation with Bayesian stats is the wild west, without broad consensus about how it should be done, so nobody, even with experience, wants to risk communicating their ad hoc approach as “the way”.
I was going to reply to the original post last year but never got around to it, because the answer is complicated, though in a sense it isn’t.
Long story short: it is trivial to do anything you currently do with a “frequentist” approach using Bayesian statistics, but once you are using the Bayesian approach, merely replicating the frequentist one is a waste of its potential.
Therefore, a Bayesian Guide to Experimental Science would probably be more a book on the philosophy of statistics and experimental design than a cookbook, and practitioners tend to think they don’t have the time for philosophy of science.
Any frequentist method can be rewritten as an inference problem; there isn’t really a choice of likelihood, only the task of describing it correctly. Priors can be a longer discussion, but if there is a strong reason to think you shouldn’t impose probabilities this way (even if the reasons are bad ones: your boss doesn’t believe in Bayes, the editors always reject papers with Bayesian methods, Reviewer #2 is a pain in the ass), you can always just use flat priors. Otherwise, priors are chosen to reflect the prior knowledge you have about the system, plain and simple (I always give the example of a model where human lifespan was a parameter: it’s safe to say the probability of it being over 100 years is low, and zero for anything over 120).
I don’t think the first claim, that Bayesian stats are mainly for observational data, is true at all; even if there is a correlation between Bayesian statistics and observational data, I don’t think there’s a causal link there.
I disagree with the second claim as well, unless by “wild west” you mean it’s harder to do without appropriate knowledge of statistics. It goes back to my first point: you can always do the same thing, just Bayesian. The problem is that the “consensus” in experimental science is often “whatever most people have always used and editors don’t complain about”, even when it’s wrong. There is really nothing more ad hoc in Bayesian statistics than in frequentist statistics.
Part of the “problem” with any resource on Bayesian statistics is that it most likely requires actually learning statistics and understanding concepts in probability and inference that experimenters are often not equipped for, and don’t have the time, the will, or the patience to acquire. Instead, it’s more “useful” to follow a cookbook that tells you which tests to apply and how to interpret the outcome. Unfortunately, doing that takes away everything that is actually useful and interesting about statistics in general.
There are some sections on using Bayesian models for spatial sampling design in ‘Spatial Sampling with R’. I like the book, and perhaps those sections are useful/generalisable to your setting.