This is probably a frequently repeated question from newcomers like myself, and it may even be a bit annoying. Still, here I go.
In papers using Bayesian inference, I have seen anywhere from a few thousand iterations for warm-up and sampling up to very large numbers (>100000).
In this post from the old Google group, Bob suggested checking n_eff to decide whether it is necessary to increase the number of iterations (e.g., by doubling it).
Using shinystan, I have noticed that one can set the warning threshold for n_eff/N (ranging from 0 to 100%, with a default of 10% if I am not mistaken).
So, my questions are:
- Is there any guidance on how to choose this threshold?
- In the post mentioned above, someone asked whether it is possible to restart sampling from the output of a previous run; at the time the answer was that it wasn't possible. Is it possible now?
- I have some models where the Monte Carlo SE / posterior SD and the Rhat diagnostics show no warnings at all, while the n_eff/N warning appears for some estimates. Is this a sign that I should increase the number of iterations even though the chains seem to have converged?
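To make the third question concrete, here is a minimal sketch of the check I have in mind, written in plain Python: given per-parameter n_eff values (e.g., read off a fit summary) and the total number of post-warmup draws, flag every parameter whose n_eff/N ratio falls below a threshold. The function name, the example parameter names, and the n_eff values are all hypothetical; the 0.1 default just mirrors shinystan's 10% threshold as I understand it.

```python
# Hypothetical helper: flag parameters whose n_eff / N ratio falls below
# a warning threshold (0.1 mirrors shinystan's default of 10%).
def low_neff_params(neff_by_param, n_draws, threshold=0.1):
    """Return {parameter: n_eff/N} for parameters below the threshold."""
    return {
        name: neff / n_draws
        for name, neff in neff_by_param.items()
        if neff / n_draws < threshold
    }

# Made-up n_eff values, as might appear in a fit summary with 4000 draws.
neff = {"alpha": 3800, "beta": 250, "sigma": 1200}
print(low_neff_params(neff, n_draws=4000))  # only "beta" is below 10%
```

In this toy case only `beta` would trigger the warning (250/4000 = 6.25%), even if its Rhat and MCSE looked fine, which is exactly the situation I am asking about.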
Thanks in advance for your opinions and answers!