How can I simulate a larger dataset (brms/Bayesian analysis)?

I am a complete beginner so please excuse my very basic questions:
For a registered report, I aim to do a “Bayesian power analysis” for my planned GLMMs (Gamma distribution, log-link; conducted in R with brms) to justify my maximum sample size in a sequential Bayesian design. I am unsure how to do this, as I have only found introductions for simpler analyses like t-tests. Is it possible to use the effects found in a pilot study (N = 30) to simulate a larger sample (N = 100) with the same fixed effects and refit the brms models repeatedly? And if so, how can I do that? Does anyone have a hint for how I can simulate data based on the output of a brms model?

Howdy!

First, it is actually not a good idea to base your power analysis on the results of a pilot study. Andrew Gelman has written a lot about this, so you might search around for some of it. The purpose of a pilot study is to learn about the practical difficulties involved in your study and its implementation. The noisy parameter estimates from a small-sample pilot study should not be taken as evidence for what they might be in a larger study. For example, suppose you find a large effect (i.e. some large-magnitude parameter estimate) of interest and base your power analysis on this estimate. It is more than likely that this large effect is simply noise from the small sample size. If you program this effect into your power analysis, then the sample size you find necessary to estimate this inflated effect will be too small to detect the more modest effect that you would actually see in your larger study. The parameter values in your sample size analysis should be based on previous research, the literature, etc., and even then it is quite possible to overestimate the magnitude of effect sizes (and underestimate your sample size) due to publication bias toward positive effects!

This leads to the second point: power analysis is actually not a good idea anyway. In fact, “Bayesian power analysis” seems to me to be a bit of a technical impossibility, as Bayesian models estimate parameters but do not perform hypothesis tests in the Frequentist sense. Of course, it is possible to do a bizarre blend (see the R code below), where one simulates data, fits a model repeatedly, and finds the proportion of times the 95% CI excludes zero. This is unfortunately little different from the Frequentist null hypothesis significance testing folly.

In any case, there are very good reasons to do sample size analysis - the real benefit is in tweaking the parameters, sample size, and experimental design in your simulation over and over again, plotting the effect of each ‘tweak’, and thinking hard about how different assumptions in your experiment affect the results. This can be extremely eye-opening if you consider all of the assumptions you are making in your design and what happens to the experiment if any of them are violated. Eventually, by iterating between simulation and modeling, you can find a sample size that achieves adequate precision for the parameter estimate of interest, conditional on all of those assumptions. It’s really about exploring assumptions, learning, and coming up with the best design before you spend all the time and money.
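Just to illustrate the kind of thing I mean, here is a minimal sketch of a precision-focused exploration. All of the parameter values and candidate sample sizes are made up for illustration, and it is slow because it fits many models; reduce n_sims for a quick look.

library(rstanarm)
options(mc.cores = parallel::detectCores())

sample_sizes <- c(30, 60, 100, 150)   # candidate sample sizes to compare
beta_treat <- log(0.75)               # hypothetical 25% reduction (illustrative value)
n_sims <- 50                          # simulations per sample size; increase for smoother estimates

ci_width <- matrix(NA, nrow = n_sims, ncol = length(sample_sizes))
colnames(ci_width) <- sample_sizes

for (j in seq_along(sample_sizes)) {
  N <- sample_sizes[j]
  for (i in 1:n_sims) {
    treatment <- rbinom(N, 1, 0.5)                     # simple two-group design
    outcome <- rpois(N, exp(log(10) + beta_treat * treatment))  # baseline rate of 10 (illustrative)
    fit <- stan_glm(outcome ~ treatment, family = poisson,
                    data = data.frame(outcome, treatment), refresh = 0)
    ci <- posterior_interval(fit, prob = 0.95, pars = "treatment")
    ci_width[i, j] <- ci[, 2] - ci[, 1]                # width of the 95% interval
  }
}

colMeans(ci_width)   # average interval width per sample size: how much precision does each N buy?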

With that long preamble over, there is unfortunately a time when you are coerced into doing a power analysis, and that is when you must satisfy the almighty grant lords who require one. In that case, you can do something like you suggest: simulate data in R using a specified model and parameter values, fit a model, obtain the 95% CI, repeat, and then look at the proportion of times the CI excluded zero. The example R code below does that using rstanarm, which is faster here than brms because it doesn’t recompile C++ code on every run. It simulates a simple four-site interrupted time series design; the model is a Poisson GLM with a varying intercept for site. I reckon it takes a really long time to run with 500 iterations, but I just pulled it from some old files, so I haven’t tried it in a while.

library(rstanarm)
options(mc.cores = parallel::detectCores())

ul <- rep(0,500)
for(i in 1:500){

  J <- 4                    # number of sites in the study
  N_per_site <- 24          # number of months per site
  site_id <- rep(1:J, rep(N_per_site, J))   # site id for each observation
  index <- rep(1:N_per_site, J)             # month id within each site
  N <- length(index)
  time <- index             # time (month) index for each observation
  cont1 <- rep(0, 6)        # control months, first period
  cont2 <- rep(0, 6)        # control months, second period
  treat1 <- rep(1, 6)       # treatment months, first period
  treat2 <- rep(1, 6)       # treatment months, second period
  treatment <- c(cont1, treat1, cont2, treat2)   # 24-month on/off pattern; recycled across the 4 sites below
  admissions <- rpois(N, 1200)    # admissions per month (fixed mean, drawn from a Poisson dist); put your admissions per month here
  alpha <- log(0.004823928)       # log of the cases per admission per month in control; put your cases/(admission/month) here
  z <- rnorm(J, 0, 0.11)          # between-site variability (SD chosen from another simulation)
  beta1 <- 0                      # slope over time for control
  beta2 <- log(0.75)              # 25% reduction in cases per month for treatment
  lambda <- exp((alpha + z[site_id]) + log(admissions) + beta1 * time + beta2 * treatment)
  outcome <- rpois(N, lambda)

  data <- data.frame(time, treatment, outcome, admissions, site_id)
  data$treatment <- factor(data$treatment)
  data$site_id <- factor(data$site_id)

  model.stanarm <- stan_glmer(outcome ~ offset(log(admissions)) + time + treatment + (1 | site_id),
                              family = poisson, data = data)

  ul[i] <- model.stanarm$stan_summary[3, 10]   # 97.5% limit of the interval for the treatment coefficient

}

data.power <- data.frame(ul)   # the "ul" vector from the sim is the upper limit of the 95% uncertainty interval
head(data.power)

data.power$indicator <- as.numeric(data.power$ul < 0)   # indicator = 1 if the 95% upper limit is below zero, else 0
head(data.power)
table(data.power$indicator)
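The estimated "power" is then just the proportion of simulations in which the upper limit fell below zero:

mean(data.power$ul < 0)   # proportion of simulations where the 95% interval excludes zero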

Again, I would stress that this is not the best thing to do in practice, but it is sometimes necessary for checking off a box on a grant application.


Hello @jd_c, I am beyond grateful for your very helpful and informative answer; it has helped me better understand the difficulties of a power analysis in the context of Bayesian analyses. I also found the code you provided extremely helpful. However, I had some trouble applying it to a Gamma distribution, as I have never modeled any data before and could not find a tutorial for this kind of data. I therefore tried the approach suggested by Glen_b in this Stack Exchange question: r - How can I generate data for a GLM that explains my outcome well? - Cross Validated
and modeled each RT data point like this:

for (i in 1:nrow(simdata)) {
  eta <- beta0 +                                  # intercept
    beta1 * simdata$X1[i] +                       # main effect of binary variable X1 (0 or 1)
    beta2 * simdata$X2[i] +                       # main effect of binary variable X2
    beta3 * simdata$X1[i] * simdata$X2[i]         # interaction effect of X1 and X2

  mu <- exp(eta)                                  # log link: mean of the Gamma distribution
  scale_param <- mu / shape_param                 # Gamma mean = shape * scale, so scale = mu / shape
  simdata$RT[i] <- rgamma(1, shape = shape_param, scale = scale_param)
}
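If I understand correctly, rgamma() is vectorized, so the same simulation could also be written without the loop (assuming X1 and X2 are columns of simdata):

eta <- beta0 + beta1 * simdata$X1 + beta2 * simdata$X2 + beta3 * simdata$X1 * simdata$X2
mu <- exp(eta)                                                 # log link
simdata$RT <- rgamma(nrow(simdata), shape = shape_param, scale = mu / shape_param)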

I took the coefficients from the GLMM fitted to the ‘original’ pilot data as the betas for the respective effects and set the shape parameter to the value from the model output.
The resulting data seem plausible, and the results of the GLMM fitted to them are very similar to those for the “original” data, but if you see any flaws in my approach (e.g., how I modeled the interaction, or that I took the coefficients from the initial model results), I would be very grateful if you could point them out. Otherwise, I would now plan to apply a sequential Bayes factor design and justify the maximum feasible N by simulating, e.g., 1000 data sets and reporting the false negative rate for the effect of interest. I have a very basic question: did I get it right that these are Monte Carlo simulations?
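For those simulations, I have sketched roughly the following loop with brms. This is only a draft: simulate_gamma_data() is a placeholder for the data generation code above, X1 is assumed to be the effect of interest, and for now I use the interval-exclusion criterion from your example rather than the Bayes factor stopping rule.

library(brms)

n_sims <- 1000
excludes_zero <- logical(n_sims)

# compile the model once on an initial simulated data set
simdata <- simulate_gamma_data(n = 100)       # placeholder for the data generation above
fit0 <- brm(RT ~ X1 * X2, data = simdata,
            family = Gamma(link = "log"), refresh = 0)

for (i in 1:n_sims) {
  simdata <- simulate_gamma_data(n = 100)
  # refit with new data without recompiling the C++ code
  fit <- update(fit0, newdata = simdata, recompile = FALSE, refresh = 0)
  ci <- fixef(fit)["X1", c("Q2.5", "Q97.5")]  # 95% interval for the X1 coefficient
  excludes_zero[i] <- ci["Q2.5"] > 0 | ci["Q97.5"] < 0
}

mean(!excludes_zero)   # estimated false negative rate, given the assumed true effect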
Again, I thank you for your kind help and for taking the time to give such a helpful answer.