Low ESS and High RHat for Random Intercept & Slope Simulation (rstan and rstanarm)

Yes. However, I just ran three new models with increasing numbers of post-warmup iterations. I’ve included the diagnostics below:

  1. Iter = 2000, warmup = 1000
Diagnostics:
                                      mcse Rhat n_eff
(Intercept)                           0.00 1.00  180 
x                                     0.00 1.02  191 
sigma                                 0.00 1.00 3264 
Sigma[region:(Intercept),(Intercept)] 0.00 1.01  381 
Sigma[region:x,(Intercept)]           0.00 1.02  270 
Sigma[region:x,x]                     0.00 1.03  340 
  2. Iter = 4000, warmup = 1000
Diagnostics:
                                      mcse Rhat n_eff
(Intercept)                           0.00 1.01   397
x                                     0.00 1.01   513
sigma                                 0.00 1.00 10084
Sigma[region:(Intercept),(Intercept)] 0.00 1.00   901
Sigma[region:x,(Intercept)]           0.00 1.00   706
Sigma[region:x,x]                     0.00 1.00  1183
  3. Iter = 10,000, warmup = 1000
Diagnostics:
                                      mcse Rhat n_eff
(Intercept)                           0.00 1.01  1122
x                                     0.00 1.00  1524
sigma                                 0.00 1.00 23557
Sigma[region:(Intercept),(Intercept)] 0.00 1.00  2638
Sigma[region:x,(Intercept)]           0.00 1.00  2020
Sigma[region:x,x]                     0.00 1.00  3324

So, yes, increasing the number of post-warmup iterations definitely increases n_eff and seems to decrease Rhat.
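For reference, here is roughly how these models were fit. This is a minimal sketch: I'm assuming `stan_lmer()` from rstanarm and a simulated data frame called `dat` with columns `y`, `x`, and `region` (both the data name and the exact formula are my reconstruction from the `Sigma[region:...]` parameter names above). Note that `iter` in rstanarm counts total iterations, so the 2000-post-warmup run corresponds to `iter = 3000` with `warmup = 1000`:

```r
library(rstanarm)

# Sketch only: `dat` and the formula are reconstructed, not the original
# simulation code. Random intercept and slope for x by region.
fit <- stan_lmer(
  y ~ x + (x | region),
  data   = dat,
  chains = 4,
  warmup = 1000,
  iter   = 3000,   # 2000 post-warmup draws per chain; larger for the other runs
  seed   = 123
)

# The mcse / Rhat / n_eff tables above are the "MCMC diagnostics" block of:
summary(fit)
```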

No, I haven’t gotten any divergences. I also tried adapt_delta = 0.999 with 2000 post-warmup iterations per chain and 1000 warmup iterations, but I didn’t notice an improvement (iter = 2000, warmup = 1000).

Diagnostics:
                                      mcse Rhat n_eff
(Intercept)                           0.00 1.04  157 
x                                     0.00 1.02  277 
sigma                                 0.00 1.00 3368 
Sigma[region:(Intercept),(Intercept)] 0.00 1.02  263 
Sigma[region:x,(Intercept)]           0.00 1.00  320 
Sigma[region:x,x]                     0.00 1.01  428 
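For completeness, adapt_delta was passed directly to the fitting call, along these lines (same assumed model and data as the sketch above):

```r
# Higher target acceptance rate; otherwise the same assumed setup.
fit_ad <- stan_lmer(
  y ~ x + (x | region),
  data        = dat,
  chains      = 4,
  warmup      = 1000,
  iter        = 3000,
  adapt_delta = 0.999,
  seed        = 123
)
summary(fit_ad)
```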

There does appear to be some funneling in the pairs plot (iter = 2000, warmup = 1000):

However, there are no divergences.
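In case it's useful, the pairs plot was produced with something along these lines (a sketch using bayesplot; the particular parameter subset shown is my choice):

```r
library(bayesplot)

# Pairs plot of the fixed effects against the group-level (co)variance terms;
# funneling shows up as a narrowing of the cloud at small variance values.
mcmc_pairs(
  as.array(fit),
  pars = c("(Intercept)", "x",
           "Sigma[region:(Intercept),(Intercept)]",
           "Sigma[region:x,x]")
)
```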

Overall, it looks like I can crank up the post-warmup iterations to achieve acceptable n_eff and Rhat. That said, the autocorrelation plots show substantial autocorrelation for all parameters:
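The plots themselves come from something along these lines (a sketch, with an arbitrarily chosen subset of parameters):

```r
library(bayesplot)

# Autocorrelation by chain and lag for a few of the parameters listed above.
mcmc_acf(
  as.array(fit),
  pars = c("(Intercept)", "x", "Sigma[region:x,x]"),
  lags = 20
)
```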

Again, this autocorrelation disappears when $\sigma_e$ is more than twice as large as the SD of the random intercept and slope. It also disappears when I introduce an unmodeled variable $z_{it}$ with a coefficient of 1 into the simulation.
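To make that concrete, here is a rough sketch of the kind of data-generating process I mean; all specific values (number of regions, fixed effects, random-effect SD) are illustrative rather than my exact setup:

```r
set.seed(1)

n_region <- 30                      # illustrative, not my exact setup
n_per    <- 20
tau      <- 1                       # SD of the random intercept and slope
sigma_e  <- 0.5                     # residual SD; issues appear when this is
                                    # small relative to tau

region <- rep(seq_len(n_region), each = n_per)
x      <- rnorm(n_region * n_per)
z      <- rnorm(n_region * n_per)   # the unmodeled variable z_it

a <- rnorm(n_region, 0, tau)[region]   # random intercepts
b <- rnorm(n_region, 0, tau)[region]   # random slopes

# y includes z with a coefficient of 1, but z is *not* in the fitted model
# (y ~ x + (x | region)); set the z term to 0 to drop it again.
y <- a + (1 + b) * x + 1 * z + rnorm(n_region * n_per, 0, sigma_e)

dat <- data.frame(y = y, x = x, region = factor(region))
```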

So, is it fair to assume that the sampling issues I’m observing are caused by insufficient variation in the sample? If that’s the case, I’m more than happy to just increase $\sigma_e$. However, if the problem lies elsewhere and inducing more residual variation is just an accidental fix, I’d very much like to know that!