Hi All,
Given the nature and quality of the discussions here, I was initially hesitant to post my questions, as I am a newbie to Stan. I would like to thank the Stan community for the quick responses to the questions posted here; they are of immense help.
I am currently working through the case studies in the Bayesian Cognitive Modeling book, specifically the memory retention chapter, and am trying to rework the https://github.com/stan-dev/example-models/blob/master/Bayesian_Cognitive_Modeling/CaseStudies/MemoryRetention/Retention_2_Stan.R example. The first Stan model, in which all participants share the same information decay rate and baseline rate, runs without any issue and I am able to draw inferences from it. However, when I introduce person-level parameters, the model does not converge at all.
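For reference, here is a simplified sketch of the person-level model I am trying, written in the style of the repo's Retention_2_Stan.R with the Stan program kept as a string. I am assuming the standard exponential-decay retention model from the chapter, so my actual code may differ in details such as how missing observations are handled:

```r
library(rstan)

# Person-level retention model: each subject i gets an individual decay
# rate alpha[i] and baseline beta[i]; theta[i, j] is the probability of
# recalling an item at time lag t[j].
model <- "
data {
  int<lower=1> ns;               // number of subjects
  int<lower=1> nt;               // number of time lags
  int<lower=0> n;                // number of items per trial
  int<lower=0> t[nt];            // time lags
  int<lower=0> k[ns, nt];        // items recalled
}
parameters {
  vector<lower=0, upper=1>[ns] alpha;   // decay rate, one per subject
  vector<lower=0, upper=1>[ns] beta;    // baseline, one per subject
}
transformed parameters {
  matrix<lower=0, upper=1>[ns, nt] theta;
  for (i in 1:ns)
    for (j in 1:nt)
      theta[i, j] = fmin(1.0, exp(-alpha[i] * t[j]) + beta[i]);
}
model {
  // alpha and beta get implicit uniform(0, 1) priors from their bounds
  for (i in 1:ns)
    for (j in 1:nt)
      k[i, j] ~ binomial(n, theta[i, j]);
}
"
```

Running this model ends with the following warnings: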
There were 9721 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
Examine the pairs() plot to diagnose sampling problems
The largest R-hat is 1.07, indicating chains have not mixed.
Running the chains for more iterations may help. See
http://mc-stan.org/misc/warnings.html#r-hat
Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
Running the chains for more iterations may help. See
http://mc-stan.org/misc/warnings.html#bulk-ess
Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
Running the chains for more iterations may help. See
http://mc-stan.org/misc/warnings.html#tail-ess
I see a huge number of divergent transitions. If the number had been much smaller, I would have tried to play around with control parameters like max_treedepth or adapt_delta. However, with divergences this serious, could there be some fundamental issue with the code itself? Setting a higher adapt_delta (see the call below) does not reduce the number of divergent transitions either. Please advise.
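For completeness, this is roughly how I have been raising adapt_delta; the data object name is just a placeholder for the data list built as in the repo script:

```r
fit <- stan(
  model_code = model,            # the person-level model sketched above
  data = retention_data,         # list(ns = ..., nt = ..., n = ..., t = ..., k = ...)
  chains = 4, iter = 2000,
  seed = 123,
  control = list(adapt_delta = 0.99, max_treedepth = 15)
)
```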