Sample size planning for experiments

Hey,

I often have to do power calculations and sample size estimation for biological experiments.
Usually the biologist is interested in how many mice they need in their experimental setup to detect some effect size with some power. They may or may not have prior data from which I can infer the expected measurement error or effect sizes.
Recently I have shifted more and more to Bayesian models to analyse their data (usually with brms), but I now struggle to answer the sample size question in the Bayesian framework.

When doing this in the frequentist framework, I usually simulate data under different experimental setups (different effect sizes, different numbers of animals per experimental unit), using the prior information to parameterise the simulations. On each simulated dataset I run a frequentist test (e.g. testing contrasts using lm or lmer) and then calculate the proportion of simulations that pass the test (e.g. significant at the 5% level). This is the power. If I can tell the biologist that they need e.g. 4 animals per group to have 80% power to detect a certain effect size of interest, they are happy.
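
To make that concrete, here is a minimal sketch of the simulation loop for a simple two-group comparison; the effect size, SD, and candidate group sizes are made-up placeholders:

```r
# Minimal frequentist power simulation for a two-group design.
# effect, sd, and the group sizes below are placeholder values.
set.seed(1)

power_sim <- function(n_per_group, effect = 1.5, sd = 2, n_sims = 1000) {
  p_values <- replicate(n_sims, {
    d <- data.frame(
      group = rep(c("control", "treatment"), each = n_per_group),
      y = c(rnorm(n_per_group, 0, sd),
            rnorm(n_per_group, effect, sd))
    )
    # p value of the treatment contrast
    summary(lm(y ~ group, data = d))$coefficients["grouptreatment", "Pr(>|t|)"]
  })
  mean(p_values < 0.05)  # proportion of significant simulations = power
}

# power at a few candidate group sizes
sapply(c(4, 6, 8, 10), power_sim)
```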

Now I want to achieve something similar in the Bayesian framework. My first thought was to again simulate data many times, run a Bayesian analysis on each dataset, and define some success criterion, e.g. that 80% of the posterior distribution of the parameter of interest lies above some effect size threshold. And then again calculate the proportion of “passed” tests.
I have a feeling that this is not the optimal way to go about it. For one, running e.g. 1000 Bayesian analyses with MCMC on the different simulated datasets will take ages.
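
For reference, this is roughly the loop I have in mind (just a sketch: the effect size, decision threshold, and number of simulations are placeholders; fitting the model once and refitting via update() should at least avoid recompiling the Stan code in every iteration):

```r
library(brms)

set.seed(1)
make_data <- function(n_per_group, effect = 1.5, sd = 2) {
  data.frame(
    group = rep(c("control", "treatment"), each = n_per_group),
    y = c(rnorm(n_per_group, 0, sd),
          rnorm(n_per_group, effect, sd))
  )
}

# compile the Stan model once on a template dataset
fit0 <- brm(y ~ group, data = make_data(4), chains = 4, cores = 4)

n_sims <- 100     # far fewer than 1000; MCMC is the bottleneck
threshold <- 0.5  # placeholder effect size threshold
hits <- replicate(n_sims, {
  fit <- update(fit0, newdata = make_data(4), recompile = FALSE, refresh = 0)
  draws <- as_draws_df(fit)
  # "success": at least 80% of the posterior mass exceeds the threshold
  mean(draws$b_grouptreatment > threshold) >= 0.80
})
mean(hits)  # proportion of "passed" analyses
```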

So if I want to know how many animals I need to detect a certain effect size with a certain confidence, how would I do this in the Bayesian framework?

I have seen this previous post:

I also read the material referenced in the last answer, but it does not really click with me.

It would be very helpful if somebody knew of an example sample size analysis with e.g. brms on some real (or synthetic) dataset that I could tease apart. I have had no luck finding any.

Thanks for any pointers you can give! :)


Depending on all kinds of details, Bayesian power simulations might not take as long as you fear (but they could). I’ve explored the topic a bit, and you can find a couple of blog posts here and here.


Hey,
Thanks for your writeups. They are really useful! I’ll also dig around a bit in your other blog posts :)
So I guess there is no real way around doing many data simulations and analysis steps for sample size planning in the Bayesian framework either. :)

Somehow I thought that if I could set up a Bayesian model of the intended experiment which also encapsulates any available prior data, it would be possible to use this model directly for sample size planning. Something like the prior predictive sketch below is what I had in mind.
I’m not sure if someone is researching this, or if there are obvious reasons why it could never work?
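
For example (just a sketch; the priors are placeholders standing in for whatever the prior data suggest), brms can sample from the prior predictive distribution of a planned design, and those simulated datasets could then feed the analysis loop above:

```r
library(brms)

# planned design: 4 animals per group (placeholder)
design <- data.frame(group = rep(c("control", "treatment"), each = 4))

# sample_prior = "only" ignores the response and draws from the priors
prior_fit <- brm(
  y ~ group,
  data = cbind(design, y = 0),  # dummy response, not used for fitting
  prior = c(
    prior(normal(0, 2), class = "Intercept"),
    prior(normal(1.5, 0.5), class = "b"),  # prior belief about the effect
    prior(exponential(1), class = "sigma")
  ),
  sample_prior = "only"
)

# each row is one simulated dataset consistent with the design and priors
y_sim <- posterior_predict(prior_fit, ndraws = 100)
```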


I’m glad the posts were useful. To your comment: yeah, unless you have some very impressive calculus chops, this is the only way forward of which I’m aware.