Multiple chains versus single chain, after model converged

#1

I ran the model I proposed with rstan and got a converged solution after fitting 4 chains using all cores on my laptop. Since one condition takes a long time to finish and I have many conditions to run, I am considering running fewer chains. I understand the effective sample size will change, because it was the combined effective sample size from all the chains. But will the mean and se_mean change? Those are my focus of interest. Is it legitimate to run only 1 chain after having obtained a converged solution?

To be clear, this is more a concern about CPU and memory usage. My thinking was that running only one chain will use fewer computing resources on my PC, whereas running 4 chains definitely uses more than 1 chain does. I may be wrong. :-)

Thank you very much.


#2

The issue is that by the time you have a solution from multiple chains that passes the diagnostics, why run one more chain? You already have a solution.

Maybe just use one core? It’ll take much longer, but then your computer won’t get as stressed.


#3

This. You can still run 4 chains on a single core. If you're limited by, say, a 4-core PC, then another option might be 2 cores, which should still let you do other things. I try to always keep at least one core free for web browsing, email, etc.

In your R console, simply set:

options(mc.cores = 1)
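Equivalently, you can set the number of cores at the call site while keeping all 4 chains for diagnostics. A minimal sketch, assuming `sm` is your compiled model object and `dat` your data list (both hypothetical names):

```r
library(rstan)

# Keep 4 chains (for convergence diagnostics) but run them
# on only 2 cores; chains queue up on the available cores.
fit <- sampling(sm, data = dat,
                chains = 4,
                cores  = 2)
```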

#4

Apart from running multiple chains, what are the better alternatives for determining whether an appropriate number of iterations has been used?

Also, is there anything relevant regarding blocking or swapping techniques when using NUTS with multiple chains?
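For context on the first question, the usual iteration checks in rstan are split R-hat and effective sample size from the fit summary, plus the built-in HMC diagnostics. A minimal sketch, assuming `fit` is a stanfit object:

```r
library(rstan)

s <- summary(fit)$summary        # one row per parameter
s[, c("n_eff", "Rhat")]          # effective sample size and split R-hat

# Rule of thumb: Rhat close to 1 and n_eff large enough for the
# precision you need in mean and se_mean.
check_hmc_diagnostics(fit)       # divergences, treedepth, E-BFMI
```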


#5

I am sorry, I should have given the context of my question. I am running a simulation study, so I need to run many replications, say at least 50. For now I am testing each condition to make sure I get a converged solution. After that, I am wondering whether I could run 1 chain for as many replications as I want. Thank you.


#6

I guess it depends. Is anything changing in the replications? What makes a replication a replication in this case?

As a compromise, maybe just run 2 chains :D? I think we run multiple chains because it's a really powerful, easy diagnostic that comes at the cost of just a bit more compute.


#7

The only thing I noticed is that a couple of replications did not converge, so I had to discard those results. Replication is very important in my field because we are trying to make inferences about the population based on the samples (replications) we work with. It does take more computing resources. lol


#8

Ah, yeah, if any of your diagnostics are failing, I wouldn't drop the multi-chain setup. It's just too effective, and you'll always be second-guessing yourself if you don't.

Are these simulation studies you’re doing? Or just running the inferences on different subsets of the same data?


#9

Yes, it is a simulation study based on realistic parameter estimates (treated as "truth"). I am trying to see how well I can recover those "true" parameters by fitting a couple of models of interest. In terms of non-convergence: among the 50 replications I ran, I found that 2 or 3 (at worst maybe 5 or 6) replications got 1 divergence each. I am simulating new data for each replication. But before running all the replications, I run multiple chains and find a converged solution. Then I use those settings, such as the target acceptance rate and step size, for all 50 replications.
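That workflow can be sketched as below. This is only an illustration: the `adapt_delta` value, the `simulate_data` generator, and the model object `sm` are placeholders, not settings from this thread.

```r
library(rstan)

# Control settings found during the pilot multi-chain run
# (placeholder value for illustration)
ctrl <- list(adapt_delta = 0.95)

fits <- lapply(seq_len(50), function(r) {
  dat <- simulate_data(seed = r)   # hypothetical data-generating function
  sampling(sm, data = dat, chains = 4, cores = 1,
           control = ctrl, refresh = 0)
})

# Flag replications with divergences before summarizing recovery
n_div <- sapply(fits, get_num_divergent)
keep  <- fits[n_div == 0]
```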