I used multiple imputation and then ran the model on a dichotomous DV (0/1).
I read that I need to include `chains = `, but how can I tell how many chains I need?
imp <- mice(data, m = 5, print = FALSE)
Mod0 <- brm_multiple(Mrcg ~ wfpre * Group + (1 | Subject) + (1 + Group | Word),
                     data = imp,
                     family = bernoulli(),
                     prior = prior(normal(0, 5), class = "b"),
                     control = list(max_treedepth = 15))
- Operating System: Windows 10
- brms Version: 2.7.0
How many cores does your processor have?
If you have four physical cores, you probably have 8 logical processors, so you can run up to 8 chains with `cores = 8` without having more than one chain per logical processor (though it's probably a good idea to leave a core or two free for the rest of the system).
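For reference, you can check the logical-processor count from within R using the base `parallel` package (the chain count below is just an illustrative choice, not a recommendation):

```r
library(parallel)

# Number of logical processors visible to R
n_cores <- detectCores(logical = TRUE)
print(n_cores)

# Illustrative: leave a couple of cores free, one chain per remaining core
n_chains <- max(1, n_cores - 2)
# Mod0 <- brm_multiple(..., chains = n_chains, cores = n_chains)
```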
I think you also need to consider how many iterations to run, which should in part be determined by the complexity of your model. Note that in brms the arguments are `iter` (total iterations per chain, including warmup) and `warmup`. If I wanted ~3000 post-warmup draws in total with `chains = 7` and `cores = 7`, I'd set `iter = 900` and `warmup = 500`, which gives 400 post-warmup draws per chain (2800 overall) with adequate warmup.
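To sanity-check that arithmetic (since `iter` counts warmup plus sampling per chain, the post-warmup draws per chain are `iter - warmup`):

```r
iter   <- 900   # total iterations per chain (warmup + sampling)
warmup <- 500   # warmup iterations per chain (discarded)
chains <- 7

per_chain <- iter - warmup       # post-warmup draws per chain: 400
total     <- per_chain * chains  # post-warmup draws overall: 2800
cat("per chain:", per_chain, " total:", total, "\n")
```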
That said, I’m sure somebody else can provide a more general strategy for this determination, and it’s likely they already have if you search around a bit.
If you are just starting out, you don't really need to modify the MCMC settings; the defaults are often fine, if not always the most efficient. The goal is to get enough draws for reliable inference on the parameters you care about, so you would typically run a model, check the effective sample size and other diagnostics, and, if those indicate that you need more draws, increase the number of iterations and/or chains. The trade-off between chains and iterations is that chains run most efficiently at one chain per core, while iterations within a chain always run serially. Running multiple chains from different starting values also gives you a chance to evaluate how reliably the sampler initializes and converges to the same answer, which is why people tend to use something like 4 chains as a default.
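As a sketch of that check-then-adjust workflow (assuming a fitted brms object like `Mod0` from above; `summary()`, `rhat()`, and `neff_ratio()` are the standard brms accessors):

```r
# After fitting, inspect diagnostics before deciding to add chains/iterations:
summary(Mod0)        # per-parameter estimates with Rhat and Bulk/Tail ESS
rhat(Mod0)           # Rhat values; values close to 1.00 suggest convergence
neff_ratio(Mod0)     # effective sample size relative to total draws;
                     # low ratios suggest you need more iterations
```

One caveat specific to `brm_multiple()`: because chains from different imputed datasets are combined, Rhat can look inflated even when each sub-model converged, so check the diagnostics with that in mind.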