I’m looking to compare `brms` and `lme4` in a Monte Carlo simulation with ~400 different combinations of conditions and 10,000 replications each, so I’m wondering what the best way to do this would be, or where to find examples. I don’t know if it’s just me, but I’m not having much success searching for ‘brms Monte Carlo Simulation’ on Google Scholar. I’m also only familiar with Bayesian statistics from afar, not in practice.

My first question: the default `brms` settings use 4 chains of 2,000 iterations each and discard the first half of each chain as warm-up, which gives 4,000 posterior samples, if I understand it correctly. If I get any convergence warnings, do you think I should set control statements to re-run the model from scratch with an increased number of iterations? (Is there a way to continue progress instead?) Should I set a limit on the maximum number of iterations?

- I got the tip that convergence failure is itself a result, although that still leaves the question of what the default number of iterations should be; otherwise there’d be no difference between 2 and 2,000 iterations.
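In case it helps frame the question, here is the kind of retry loop I have in mind, as a sketch only. The helper name and the R-hat cutoff of 1.01 are my own assumptions, not anything from `brms` itself; as far as I know, `update()` refits from scratch rather than resuming a finished chain:

```r
library(brms)

# Hypothetical helper: refit with doubled iterations until all R-hat
# values fall below a cutoff or an iteration cap is reached.
# NOTE: update() re-runs sampling from scratch; Stan chains cannot be
# resumed after they finish, so "continuing progress" is not possible.
fit_until_converged <- function(formula, data, max_iter = 16000, ...) {
  iter <- 2000
  fit <- brm(formula, data = data, chains = 4, iter = iter, ...)
  while (any(rhat(fit) > 1.01, na.rm = TRUE) && iter < max_iter) {
    iter <- iter * 2
    fit <- update(fit, iter = iter, warmup = iter / 2)
  }
  fit
}
```

Whether the iteration cap and the doubling schedule are sensible for 400 × 10,000 fits is exactly what I’m unsure about.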

My second question: I’m fairly sure that an informative prior should make all `brms` estimates superior to `lme4`’s, but, if I understand correctly, the default flat prior shouldn’t show much or any difference between `lme4` and `brms`. I’ve also heard that a default flat prior is highly undesirable. Should I use a weakly informative prior then? An uninformative shrinkage prior? There just seem to be a lot of choices, and given the scale of the simulation, I don’t know if it’s feasible to try all of them.
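For concreteness, this is how I understand a weakly informative prior would be specified in `brms`; the particular distributions and scales below are illustrative guesses on my part (assuming roughly standardized predictors), not a recommendation I’m confident in:

```r
library(brms)

# Illustrative weakly-informative priors: normal(0, 1) on regression
# coefficients and a half-Student-t on group-level standard deviations.
# The specific scales are assumptions for the sake of example.
wi_priors <- c(
  prior(normal(0, 1), class = "b"),
  prior(student_t(3, 0, 2.5), class = "sd")
)

# fit <- brm(y ~ x + (1 | group), data = dat, prior = wi_priors)
```

If something like this is reasonable, my question is really whether one such prior family is enough for the simulation, or whether I’d need to cross it with the 400 conditions as well.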

Thanks,

Michael