Warnings for a model accounting for measurement error in x

Hi all,

I fitted a model that accounts for measurement error in x, treating the measurement error SD on x as known and passing it in as data. The data generation process:

# Simulate the true data
set.seed(1)  # for reproducibility
alpha <- -10
beta <- 2
nobs <- 40
x <- runif(nobs, 6, 8)

# Observed data with measurement/observation errors
sigmax <- sqrt(log(0.5^2 + 1))  # known measurement error SD on x
xobs <- x + rnorm(nobs, 0, sigmax)
sigmay <- sqrt(log(0.5^2 + 1))  # observation error SD on y
y <- alpha + beta * x + rnorm(nobs, 0, sigmay)
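
For reference, the data list passed to Stan (calling it standata here) just needs to match the names in the data block below:

# Data list; names must match the Stan data block
standata <- list(nobs = nobs, xobs = xobs, y = y, sigmax = sigmax)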

The model is:

data{
  int nobs;
  real xobs[nobs];
  real y[nobs];
  real sigmax;
}

parameters {
  real<upper=0> alpha;
  real<lower=0> beta;
  real mu_x;
  real<lower=0> sigma_x;
  real<lower=0> sigmay;
  real x[nobs];
}

model {
  // priors
  alpha ~ normal(0, 10);
  beta ~ normal(0, 10);
  mu_x ~ normal(0, 10);
  sigma_x ~ normal(0, 10);
  sigmay ~ normal(0, 10);
 
  // model structure  
  for (i in 1:nobs){
    x[i] ~ normal(mu_x, sigma_x);
    xobs[i] ~ normal(x[i], sigmax);
    y[i] ~ normal(alpha + beta*x[i], sigmay);
  }
}

Why do I keep getting warnings like “There were xx divergent transitions after warmup”?


For more on divergences, see resources including Runtime warnings and convergence problems, Identity Crisis, and Taming Divergences in Stan Models. If the number of divergences is not too large, it might be fruitful to increase Stan’s adapt_delta parameter to a value as high as 0.95 or 0.99; exactly how to do so depends on what interface you are using.
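
For example, if you are using rstan, adapt_delta is set through the control argument (the file name here is just a placeholder):

library(rstan)

# Raising adapt_delta above its default of 0.8 forces smaller step sizes,
# which reduces divergences at the cost of slower sampling
fit <- stan(file = "measurement_error.stan", data = standata,
            control = list(adapt_delta = 0.99))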

One thing that I notice is that in your simulation x arises from a uniform distribution, but in your model x arises from a normal distribution. Heuristically, the normal prior on x in a measurement error model seems to perform OK even when the true generating distribution isn't normal, but it's possible that this mismatch is partly or wholly responsible for the difficult geometry that is giving you fitting problems.

Thank you. I changed the normal priors on sigma_x and sigmay to inv_gamma(1, 1), and this removed the divergent transitions after warmup. However, I don't understand why.
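
For reference, the only change was to these two prior statements in the model block:

  // priors on the standard deviations, changed from half-normal
  sigma_x ~ inv_gamma(1, 1);
  sigmay ~ inv_gamma(1, 1);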

This is strongly suggestive of a funnel geometry. When you trim off the possibility of really small standard deviations via the prior (unlike the half-normal, the inverse gamma sees small values as very unlikely), you trim off the neck of the funnel where curvature is highest and exploration is difficult.
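
If you'd rather keep the half-normal priors, another standard fix for a funnel is to non-center the latent x so that its scale is decoupled from sigma_x during sampling. A minimal sketch of the changed blocks (x_raw is a new standardized parameter; the data block and everything else stay as in your model):

parameters {
  real<upper=0> alpha;
  real<lower=0> beta;
  real mu_x;
  real<lower=0> sigma_x;
  real<lower=0> sigmay;
  real x_raw[nobs];  // standardized latent x
}

transformed parameters {
  real x[nobs];
  for (i in 1:nobs)
    x[i] = mu_x + sigma_x * x_raw[i];  // implies x[i] ~ normal(mu_x, sigma_x)
}

model {
  // same priors on alpha, beta, mu_x, sigma_x, sigmay as before, plus:
  x_raw ~ normal(0, 1);

  for (i in 1:nobs) {
    xobs[i] ~ normal(x[i], sigmax);
    y[i] ~ normal(alpha + beta*x[i], sigmay);
  }
}

Because x_raw is a priori standard normal regardless of sigma_x, the posterior no longer has the funnel-shaped dependence between the latent values and their scale.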