Insufficient convergence in mixture Generalized Partial Credit Model

Thanks a lot for your replies!

That is an interesting read, but do you think the solution applies in my case? It seems to be a method for drawing inferences from chains that behave well individually but explore different modes of the posterior. In my case, however, the chains all sample from roughly the same region of the posterior, just too diffusely. For example, the per-chain split Rhats for the \lambda_1 parameter in model 7 are:

1.05 1.00 1.03 1.00 1.01 1.01 1.03 1.04

This is something I should explore again. Interestingly, when I started out, I restricted the \theta parameters to be equal across groups (i.e. I dropped the c index). When I loosened that restriction, however, the fit improved vastly; that is how I got Rhats even close to 1.
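In Stan terms, the relaxation amounts to something like the fragment below; the names theta, C, and J are placeholders, not necessarily those used in the actual model:

```stan
parameters {
  // before: one ability vector shared by all latent classes
  // vector[J] theta;

  // after: class-specific abilities, indexed by class c
  matrix[C, J] theta;
}
```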

Would you mind expanding on that a little? I suppose you mean reparameterizing such that no model parameters appear in the priors? That would necessarily mean splitting \alpha back out of alpha_times_... so that we have

target += normal_lpdf(beta_raw[c, k] | 0, 1);
target += normal_lpdf(mu_beta[c, k] | 0, 1);

and then use alpha * (mu_beta + beta_raw) in the model block. But combining the parameters was done to improve the fit in the first place, as I asked about in this thread (Convergence depending on parameterization of discrimination parameter in the GPCM). Is there a way to both non-center and avoid the multiplicative identifiability issue?
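For concreteness, one possible compromise I could imagine (very much a sketch, not a claim that it resolves the trade-off): keep \alpha separate but constrain it to be positive with a fairly informative prior, which removes the sign and soaks up some of the scale indeterminacy of the product, while non-centering only the \beta part. All names here (C, K, alpha, mu_beta, sigma_beta, beta_raw) and the prior scales are illustrative assumptions, not the model from the thread:

```stan
parameters {
  vector<lower=0>[K] alpha;        // positivity removes the sign flip in alpha * beta
  matrix[C, K] mu_beta;
  real<lower=0> sigma_beta;
  matrix[C, K] beta_raw;           // standard-normal innovations
}
transformed parameters {
  // non-centered: beta is a deterministic transform of hyperparameters + raw draws
  matrix[C, K] beta = mu_beta + sigma_beta * beta_raw;
}
model {
  alpha ~ lognormal(0, 0.5);       // tight-ish prior does the identifying work
  to_vector(mu_beta) ~ normal(0, 1);
  sigma_beta ~ normal(0, 1);
  to_vector(beta_raw) ~ std_normal();
  // ... likelihood then uses alpha[k] * beta[c, k] ...
}
```

Whether this samples better than the combined alpha_times_beta parameterization would be an empirical question; since the lognormal prior on alpha is what pins down the scale, its width presumably matters a lot.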

Thanks again for your help!