Measurement error model does not converge

Hi,

Recently I fitted the following measurement error model to simulated data. The idea is to build a regression model in which one of the predictors is measured with error: we only observe the error-contaminated version of that predictor, along with an observed uncertainty scale associated with each measurement.

``````
data {
  int<lower=0> N;
  vector[N] y;  // regression outcome
  vector[N] w;  // observed variable with measurement error
  vector[N] s;  // observed uncertainty scale for the measurement error
  vector[N] z;  // other covariate
}
parameters {
  real alpha;
  real beta_x;
  real beta_z;
  real<lower=0> sigma_y;
  vector<lower=0>[N] x;
  real b0;
  real b1;
  vector<lower=0>[N] sigma;
  vector<lower=0>[N] sigma_s;
}
model {
  w ~ normal(b0 + b1 * x, sigma);                        // measurement error model for the mean
  s ~ normal(sigma, sigma_s);                            // measurement error model for the uncertainty
  y ~ normal(alpha + beta_x * x + beta_z * z, sigma_y);  // regression model
}
``````

The R code to generate the simulated data is as follows:

``````
N <- 10
x <- rexp(N, rate = 0.1)
sigma <- c(0.2, 0.2, 0.3, 0.3, 0.4, 0.4, 0.5, 0.5, 0.6, 0.6)
sigma_s <- c(0.02, 0.02, 0.03, 0.03, 0.04, 0.04, 0.05, 0.05, 0.06, 0.06)
w <- numeric(N)
s <- numeric(N)
for (i in 1:N) {
  w[i] <- rnorm(1, mean = 0.2 + x[i], sd = sigma[i])
  s[i] <- rnorm(1, mean = sigma[i], sd = sigma_s[i])
}
z <- runif(N, min = 8, max = 16)
y <- 1 + 2 * x + 0.5 * z + rnorm(n = N, mean = 0, sd = 1)
``````

However, this model does not converge, and I'd really appreciate any help. :)

It looks like you do not have any priors in your model. Measurement error models can lead to difficult posteriors (unless you have a lot of replicates). I would try setting at least weakly informative priors on all the model parameters and seeing if the problem persists.
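For example, something along these lines — the prior scales here are just placeholder assumptions you should adapt to your problem:

``````
model {
  // weakly informative priors; scales are placeholders to adapt
  alpha ~ normal(0, 5);
  beta_x ~ normal(0, 5);
  beta_z ~ normal(0, 5);
  b0 ~ normal(0, 5);
  b1 ~ normal(0, 5);
  sigma_y ~ normal(0, 5);      // half-normal, given the <lower=0> constraint
  sigma ~ normal(0, 1);        // half-normal
  sigma_s ~ normal(0, 1);      // half-normal
  x ~ exponential(0.1);        // matches how x was simulated

  w ~ normal(b0 + b1 * x, sigma);
  s ~ normal(sigma, sigma_s);
  y ~ normal(alpha + beta_x * x + beta_z * z, sigma_y);
}
``````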

I think I can see where you are heading. My advice would be to first make sure you can run a simpler model, for example one with constant sigmas.
Additionally, there are a few missing pieces, as @js592 mentioned.
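Concretely, a first simplification would be to collapse the per-observation scales into single shared parameters (a sketch against your model; the other blocks stay as before):

``````
parameters {
  // ...other parameters as in your model...
  real<lower=0> sigma;    // one shared scale instead of vector<lower=0>[N]
  real<lower=0> sigma_s;  // likewise
}
model {
  w ~ normal(b0 + b1 * x, sigma);
  s ~ normal(sigma, sigma_s);
  y ~ normal(alpha + beta_x * x + beta_z * z, sigma_y);
}
``````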

I ran your model, and without priors it is really hard to dig into.

Let’s simplify things for a moment by assuming that `sigma` is observed with certainty and is data. Then the model becomes

``````
model {
  w ~ normal(b0 + b1 * x, sigma);                        // measurement error model for the mean
  y ~ normal(alpha + beta_x * x + beta_z * z, sigma_y);  // regression model
}
``````

Notice what happens if we divide `x` by two, while multiplying both `b1` and `beta_x` by two. Nothing changes at all. The parameters `b1`, `beta_x`, and `x` are not identifiable in this model. You could identify the model by adding a sufficiently strong prior, but I don’t think this would yield the inference that you intend.
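To see this concretely, here is a toy check in R (the numbers are arbitrary): rescaling `x` by `1/c` while rescaling `b1` and `beta_x` by `c` leaves both linear predictors, and hence the likelihood, unchanged.

``````
x <- 3; b1 <- 2; beta_x <- 4; c <- 2
b1 * x == (c * b1) * (x / c)          # TRUE: mean of w unchanged
beta_x * x == (c * beta_x) * (x / c)  # TRUE: mean of y unchanged
``````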

In a garden-variety measurement error model, we would write something like

``````
w ~ normal(x, sigma);
y ~ normal(alpha + beta_x * x + ..., sigma_y);
``````

where `sigma` contains the known standard deviation scales for the Gaussian measurement noise in `w`. I’m uncertain which elaborations on this structure you view as important, and which ones you are happy to discard.
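Putting that together, a minimal version of the model — treating your observed `s` as the known noise scale, and with weakly informative priors whose scales are just placeholder assumptions — might look like:

``````
data {
  int<lower=0> N;
  vector[N] y;
  vector[N] w;           // noisy measurement of x
  vector<lower=0>[N] s;  // treated here as the known noise scale
  vector[N] z;
}
parameters {
  real alpha;
  real beta_x;
  real beta_z;
  real<lower=0> sigma_y;
  vector<lower=0>[N] x;  // latent true covariate
}
model {
  // weakly informative priors; scales are placeholders
  alpha ~ normal(0, 5);
  beta_x ~ normal(0, 5);
  beta_z ~ normal(0, 5);
  sigma_y ~ normal(0, 5);
  x ~ exponential(0.1);  // matches how x was simulated

  w ~ normal(x, s);
  y ~ normal(alpha + beta_x * x + beta_z * z, sigma_y);
}
``````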