Rhat greater than 1 for some parameters

Hi everyone,

I have been running the following code with the following initial values. I achieve convergence for all quantities except the log posterior, AB0, and omega[2]. Their Rhat values don't go much beyond 1.5, but I can see in the traceplots that they do not converge. Any ideas on how to fix this?

```r
initebl = function(){
  nsubjects = length(unique(newebl$SUBJID))
  pop_var = c(0.5, 0.5)
  mua_pop = rnorm(1, 0, pop_var[1])
  AB0_pop = exp(rnorm(1, log(37), pop_var[2]))
  omega = abs(rnorm(2, 0, pop_var))
  theta_pop = c(mua_pop, AB0_pop)
  theta = matrix(NA, nsubjects, length(theta_pop))
  for (i in 1:nsubjects){
    theta[i, ] = rlnorm(length(theta_pop), log(theta_pop), omega)
  }
  list(mua_pop = mua_pop, AB0_pop = AB0_pop, realsigmaA = abs(rnorm(1, 0, 1)),
       sigma = abs(rnorm(1, 0, 1)), omega = omega, theta = theta)
}
```

```stan
functions{
  real[] numerical_int(real t0, real ts, real[] theta){
    real dt = ts - t0;
    real y[1];
    y[1] = theta[2]*exp(-theta[1]*dt);
    return y;
  }
}
data{
  int <lower=1> N;
  int <lower=1> nsubjects;
  int <lower=1> subj_ids[N];
  real <lower=0> y[N];
  real ts[N];
  real t0;
}
transformed data{
  int nTheta = 2;
  int niv = 2;
  //prior sd
  real prior_sd[niv] = {0.5, 0.5};
}
parameters{
  //population parameters
  real <lower=0> mua_pop;
  real <lower=0> AB0_pop;
  //residual error
  real <lower=0> realsigmaA;
  real <lower=0> sigma;
  //inter-individual variability
  vector <lower=0>[niv] omega;
  real <lower=0> theta[nsubjects, nTheta];
}
transformed parameters{
  vector <lower=0>[nTheta] theta_pop = to_vector({mua_pop, AB0_pop});
  real mu[N, 1];
  //vector [new_mu];
  //individual parameters
  vector <lower=0> [nsubjects] mua = to_vector(theta[, 1]);
  vector <lower=0> [nsubjects] AB0 = to_vector(theta[, 2]);
  for (j in 1:N){
    mu[j, ] = numerical_int(t0, ts[j], {mua[subj_ids[j]], AB0[subj_ids[j]]});
  }
}
model{
  mua_pop ~ normal(0, prior_sd[1]);
  AB0_pop ~ lognormal(log(37), prior_sd[2]);
  realsigmaA ~ normal(0, 1);
  sigma ~ normal(0, realsigmaA);
  omega ~ normal(0, 1);
  // interindividual variability
  for (j in 1:nsubjects){
    theta[j, ] ~ lognormal(log(theta_pop), omega);
  }
  // likelihood
  y ~ lognormal(log(mu[N, 1]), sigma);
}
generated quantities {
  real pred_antibody[N];
  for (i in 1:N) {
    pred_antibody[i] = exp(normal_rng(log(y[i]), sigma));
  }
}
```

mua_pop and omega[1] also have suspiciously small n_eff values.

As sigma is a scalar, there is no information to learn anything useful about realsigmaA, and it would be better to define a fixed prior directly on sigma.
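A minimal sketch of what that could look like in the model block (the half-normal scale of 1 here is just a placeholder to adjust):

```stan
// instead of the two-level prior
//   realsigmaA ~ normal(0, 1);
//   sigma ~ normal(0, realsigmaA);
// put a fixed prior directly on sigma
// (effectively half-normal, since sigma is declared <lower=0>)
sigma ~ normal(0, 1);
```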

I couldn’t see anything obvious in the code, but the code is a bit complicated to parse. Can you explain a bit more about your model?


The model is an exponential decay. Instead of defining the ODE in the functions section, I have defined its solution directly. I have longitudinal data consisting of 700 participants and 3 follow-up data points per individual.
Regarding your first comment, how do you fix the number of n_eff that MCMC uses?
Regarding your second comment on realsigmaA, I need to estimate this parameter to compare it with other software, as a way to define the constant error model: http://sia.webpopix.org/residualError.html

Oops, I just noticed I had forgotten to follow this thread.

n_eff is an estimate of how efficiently MCMC is mixing. In your case it is clear that MCMC is not mixing well, and that shows up in n_eff too. You can run more iterations, which will increase n_eff (provided the variance of the quantity is finite and there are no bigger convergence problems). See Rank-Normalization, Folding, and Localization: An Improved Rˆ for Assessing Convergence of MCMC (with Discussion) for more information.

sigma is already the parameter of the constant error model. Adding realsigmaA doesn't change anything other than the prior on the magnitude of that error.

It might then be the usual problem with hierarchical models, or an ODE issue.

Thank you so much for your advice.
I have run it with 3000 iterations, and although convergence seems to have improved, R sometimes crashes if I increase the number of iterations too much.
I still see Rhat values higher than 1. Are there other possible reasons why this is not converging well? I am also attaching the warning messages that I get after the chains finish, in case I am missing some key message there.

I’ll just say one thing. When the warning message says, “Increase max_treedepth,” don’t do it! That’s an out-of-date error message that should be fixed. The good news is that if you follow the link to the warnings page (Runtime warnings and convergence problems), you’ll see better advice.

So my starting point is to go to this page (Runtime warnings and convergence problems) and follow the advice there.


I just noticed that this is wrong. You have defined mu as an N x 1 array, but here you are using only its last element, mu[N, 1]. This is definitely causing problems.
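If the intent is to give each observation its own mean, one possible fix (a sketch, untested) is to vectorize the likelihood over all N elements:

```stan
// likelihood over all observations, not just the last one
y ~ lognormal(log(to_vector(mu[, 1])), sigma);
```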

(following up on a call @irenegfsr and I had)

Seeing that this is a hierarchical model and there are divergent transitions, you could try using a non-centered parameterization. Currently

\theta_{jk} \sim \text{logNormal}(\log(\theta_{\text{pop},k}), \omega_k).

The non-centered equivalent would be

\eta_{jk} \sim \text{normal}(0, 1); \ \ \theta_{jk} = \theta_{\text{pop},k} e^{\omega_k \eta_{jk}},

where I’ve used the fact that \log \theta_{jk} \sim \text{normal}(\log(\theta_{\text{pop},k}), \omega_k).
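With the names from the model above, a non-centered version could look roughly like this (a sketch, not tested against your data):

```stan
parameters{
  // ... other parameters as before, but theta moves out of here
  matrix[nsubjects, nTheta] eta;  // standardized individual effects
}
transformed parameters{
  real <lower=0> theta[nsubjects, nTheta];
  for (j in 1:nsubjects)
    for (k in 1:nTheta)
      theta[j, k] = theta_pop[k] * exp(omega[k] * eta[j, k]);
}
model{
  to_vector(eta) ~ normal(0, 1);
  // and drop the theta[j, ] ~ lognormal(...) statement
}
```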

A more minor note: it might help to run more than 2 chains (say 4 or 8, depending on how many cores your machine has), in order to stabilize the estimates of \widehat R.