Bayesian power analysis for sample size planning

I’m just going to echo @Ara_Winter here: a power analysis is a bit awkward from a Bayesian perspective, since the traditional reason to run one is to show a reasonable ability to correctly reject the null hypothesis. That said, Kruschke has a paper on what power analysis looks like in Bayesian statistics. It may be worth looking over his characterization of Bayesian power analysis to see whether his approach fits what you’ve proposed in your RR.
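For context, the Kruschke-style notion of power is simulation-based: repeatedly generate datasets from an assumed effect, fit the model, and record how often the posterior supports the conclusion you care about (e.g., a 95% credible interval excluding zero). Here is a minimal sketch of that loop in Python, using a conjugate normal model with known sigma instead of brms so it runs instantly; the function name and defaults are mine and purely illustrative:

```python
import random
import statistics
from math import sqrt

def bayesian_power(true_effect=0.5, sigma=1.0, n=50,
                   prior_mean=0.0, prior_sd=1.0,
                   n_sims=500, seed=1):
    """Fraction of simulated studies whose 95% posterior credible
    interval excludes zero (a Kruschke-style 'power' criterion).

    Uses a conjugate normal-normal model with known sigma so the
    posterior is available in closed form; with brms you would
    refit the model to each simulated dataset instead.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        data = [rng.gauss(true_effect, sigma) for _ in range(n)]
        xbar = statistics.fmean(data)
        # Precision-weighted conjugate update of the prior with the data
        prior_prec = 1.0 / prior_sd ** 2
        data_prec = n / sigma ** 2
        post_var = 1.0 / (prior_prec + data_prec)
        post_mean = post_var * (prior_prec * prior_mean + data_prec * xbar)
        lo = post_mean - 1.96 * sqrt(post_var)
        hi = post_mean + 1.96 * sqrt(post_var)
        if lo > 0 or hi < 0:
            hits += 1
    return hits / n_sims
```

Sweeping `n` over a grid gives a power curve for the planned design; the same loop works with brms-fitted models, just far more slowly.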

Also, to echo the point about simulation being a good place to start: I wanted to pass along a resource that was recommended to me on this forum for simulating datasets.

One thing you may also consider is sensitivity checks on the priors. Power analyses are commonly done to determine a target sample size for detecting effects of various sizes, so editors and reviewers may treat a power analysis as a proxy for sample size adequacy. There are several papers on Bayesian methods for small samples in which perfectly good results are obtained with reasonably informative priors, and I think that’s where “sample size adequacy” comes in with Bayesian methods. Essentially, you might want to show one of two things: (a) that your sample size is large enough that your results are largely insensitive to your priors, or (b) that your priors are informative enough to arrive at “correct” inferences given your sample size. There was a recent post on this forum on prior sensitivity checks. All of this should be doable with simulated data.
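To make the sensitivity check concrete, one simple recipe is to refit the same data under a menu of priors and look at how far apart the posterior means land. A minimal sketch, again using a conjugate normal model in Python rather than brms, with an arbitrary prior menu of my own choosing:

```python
import random
import statistics
from math import sqrt

def posterior(data, sigma, prior_mean, prior_sd):
    """Closed-form (mean, sd) of the posterior for a normal mean, known sigma."""
    n = len(data)
    prior_prec = 1.0 / prior_sd ** 2
    data_prec = n / sigma ** 2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean
                            + data_prec * statistics.fmean(data))
    return post_mean, sqrt(post_var)

# Arbitrary menu of priors: sceptical, weakly informative, diffuse
PRIORS = [(0.0, 0.1), (0.0, 1.0), (1.0, 10.0)]

def prior_spread(n, true_effect=0.5, sigma=1.0, seed=42):
    """Range of posterior means across the prior menu for one simulated dataset."""
    rng = random.Random(seed)
    data = [rng.gauss(true_effect, sigma) for _ in range(n)]
    means = [posterior(data, sigma, m, s)[0] for m, s in PRIORS]
    return max(means) - min(means)
```

If the spread is negligible at your planned sample size, that is case (a) above; if the priors still pull the posterior around, you are in case (b) and need to defend the informative prior on substantive grounds.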

Though you plan to use brms, which is really good at generating efficient Stan models, you might still consider sharing your planned brms call so people can spot any speed-up tricks. There have been a few posts on the forum in the last couple of months on using sufficient statistics to speed up model fitting. Similarly, brms’s support for the cmdstanr backend means that multithreading or a variational Bayes estimator could potentially speed up these preliminary fits. Recommendations like these from the community may help keep the wait times for your simulation fits reasonable.
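On the sufficient-statistics trick: for a Bernoulli outcome the likelihood depends on the data only through the number of successes and trials, so thousands of 0/1 rows can be collapsed to one binomial count per covariate pattern before fitting (in brms this corresponds, roughly, to modeling the aggregated counts with a `| trials()` term). A toy demonstration with a conjugate Beta prior, showing the two routes land on an identical posterior; the group labels and probabilities are made up:

```python
import random

def posterior_rowwise(ys, a=1.0, b=1.0):
    """Update a Beta(a, b) prior one Bernoulli observation at a time."""
    for y in ys:
        if y:
            a += 1.0
        else:
            b += 1.0
    return a, b

def posterior_aggregated(successes, trials, a=1.0, b=1.0):
    """Same posterior computed from the sufficient statistics alone."""
    return a + successes, b + trials - successes

rng = random.Random(0)
# Made-up row-level data: 1000 Bernoulli draws per hypothetical group
groups = {"a": 0.3, "b": 0.6}
rows = {g: [rng.random() < p for _ in range(1000)] for g, p in groups.items()}

for g, ys in rows.items():
    s, n = sum(ys), len(ys)
    # Row-by-row and aggregated updates give the same Beta posterior
    assert posterior_rowwise(ys) == posterior_aggregated(s, n)
```

With the likelihood evaluated once per aggregated row instead of once per observation, each fit inside a simulation loop gets correspondingly cheaper.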
