Parallelizing brms across both chains and fits

Hi Martin, I ran into a similar issue recently and resolved it by setting cores = 1. However, I wonder whether it's possible to parallelize across both chains and models at the same time?

I have 16 models I wish to run. I can either 1) run each model in parallel, in which case each model's chains are sampled on the same core, or 2) run the chains in parallel, in which case each model waits in a queue.

I am running on a shared server with many cores. Can I run each chain on a separate core for each model, thereby using 64 cores at once?


Hi,
I moved this to a new topic as it is IMHO really a new question.

Yes, you can run many models in parallel, but you have to manage the parallelization yourself via packages like future or parallel. Just wrapping the calls to individual fits in futures should be a quick way to get started, as in the sketch below.
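For example, a minimal sketch of the future approach (untested here; my_data and the formulas are placeholders, and the worker counts assume your server really has 64 cores free):

library(future)
library(brms)

# One background R session per model; each brm() call then samples its
# 4 chains on 4 cores, so up to 16 x 4 = 64 cores are in use at once.
plan(multisession, workers = 16)

formulas <- list(
  y ~ x,
  y ~ x + z
  # ... one entry per model, 16 in total
)

fits <- lapply(formulas, function(f) {
  future(
    brm(formula = f, data = my_data, chains = 4, cores = 4),
    seed = TRUE  # parallel-safe RNG inside each future
  )
})

# Blocks until all fits have finished and collects the brmsfit objects
fits <- lapply(fits, value)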

I also have some not-particularly-well-documented code to do that for rstan/cmdstanr models here: ALSFRS_models/sampling_parallel.R at master · jpkrooney/ALSFRS_models · GitHub, and a wrapper to do that for brms models at ALSFRS_models/brm_parallel.R at master · jpkrooney/ALSFRS_models · GitHub (other files in the repository might be dependencies).

In both cases the idea is that you provide a list of arguments shared across all fits and a list of lists of arguments unique to each fit. E.g., to run a set of different brms models on the same data you would run:

res <- brm_parallel(
  args_shared = list(data = my_data),
  args_per_fit = list(
    list(formula = y ~ x),
    list(formula = y ~ x + z),
    list(formula = y ~ x, family = "gamma")
  )
)

The arguments total_cores and cores_per_fit then control the parallelization.
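For the 16-models / 4-chains case from your question, the call might then look roughly like this (an untested sketch; my_model_args stands for a list of 16 per-fit argument lists, built like the example above):

res <- brm_parallel(
  args_shared = list(data = my_data, chains = 4),
  args_per_fit = my_model_args,  # list of 16 lists, one per model
  total_cores = 64,              # overall core budget
  cores_per_fit = 4              # one core per chain within each fit
)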


This is really useful, thanks! I read the future documentation and it looks like it has everything I need. Have you considered introducing such functionality directly into brms? Maybe a direct wrapper around future for those like me who are not familiar with parallel architectures and would appreciate a quick fix.

Scratch that, I see that brm() has a future argument. Thanks again!
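For anyone else reading: the relevant bit seems to be setting a future plan and then passing future = TRUE to brm(), e.g. (a rough sketch, not yet run on the server):

library(future)
library(brms)

plan(multisession, workers = 4)  # one worker per chain

# With future = TRUE, brms runs each chain as a separate future,
# so the plan above controls the parallelization and cores is ignored.
fit <- brm(y ~ x, data = my_data, chains = 4, future = TRUE)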