How can I add two normally distributed measurement errors?

I am a beginner with rstan, and I want to add two normally distributed measurement errors that have different weights.
Following the user guide, I wrote the code as follows:

data {
  int N; //the number of observations
  int J; //the number of groups
  int K; //number of columns in the model matrix
  int id[N]; //vector of group indices
  matrix[N,K] X; //the model matrix
  vector[N] y; //the observed response variable
  real<lower=0> mtau1; // SD of the first measurement-error component
  real<lower=0> mtau2; // SD of the second measurement-error component
}
parameters {
  real yt[N]; //the true response variable
  vector[K] gamma; //population-level regression coefficients
  vector<lower=0>[K] tau; //the standard deviation of the regression coefficients
  //implementing Matt's trick
  vector[K] beta_raw[J];
  real<lower=0> sigma; //standard deviation of the individual observations
}
transformed parameters {
  vector[K] beta[J]; //matrix of group-level regression coefficients
  //computing the group-level coefficient, based on non-centered parametrization based on section 22.6 STAN (v2.12) user's guide
  for(j in 1:J){
    beta[j] = gamma + tau .* beta_raw[j];
  }
}
model {
for (n in 1:N){
   y[n]~0.9724*normal(yt,mtau1)+0.0276*normal(yt,mtau2);
}
    
  vector[N] mu; //linear predictor
  //priors
  gamma ~ normal(0,5); //weakly informative priors on the regression coefficients
  tau ~ cauchy(0,2.5); //weakly informative priors, see section 6.9 in STAN user guide
  sigma ~ gamma(2,0.1); //weakly informative priors, see section 6.9 in STAN user guide
  
  for(j in 1:J){
   beta_raw[j] ~ normal(0,1); //fill the matrix of group-level regression coefficients 
  }
  for(n in 1:N){
    mu[n] = X[n] * beta[id[n]]; //compute the linear predictor using relevant group-level regression coefficients 
  }
  //likelihood
  y ~ normal(mu,sigma);
}
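As an aside, the non-centered ("Matt's trick") step in `transformed parameters` can be checked numerically. Here is a hedged plain-Python sketch (not Stan, illustrative values only) showing that `gamma + tau .* beta_raw` with `beta_raw ~ normal(0,1)` yields draws distributed as N(gamma, tau):

```python
import random
import statistics

random.seed(0)
gamma, tau = 2.0, 0.5  # illustrative population-level mean and scale

# Non-centered parameterization: sample a standard normal, then shift and scale.
beta = [gamma + tau * random.gauss(0.0, 1.0) for _ in range(100_000)]

# The implied marginal distribution of beta is N(gamma, tau),
# but the sampler only ever sees the well-behaved N(0, 1) variable.
mean_beta = statistics.mean(beta)
sd_beta = statistics.stdev(beta)
```

The sampler explores the standard-normal `beta_raw` geometry, which avoids the funnel-shaped posterior that the centered `beta ~ normal(gamma, tau)` form can produce.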


But this step is wrong:
```stan
for (n in 1:N){
   y[n]~0.9724*normal(yt,mtau1)+0.0276*normal(yt,mtau2);
}
```
I adapted this from someone else's JAGS code:

## someone else's code in JAGS

y[n] ~ dnormmix(mu[1:2,n], mtau[1:2,n], f)
mu[1,n] <- yt[n]
mu[2,n] <- yt[n]
mtau[1,n] <- mtau1[n]
mtau[2,n] <- mtau2[n]
f[1] <- 0.9724
f[2] <- 0.0276

So how can I change my code to add these two error terms? Please help.

I'm not sure, but the JAGS code looks like you need a mixture model. Mixtures are a bit awkward to write in Stan because you need to use target +=. You can do it like this:

model {
  for (n in 1:N){
    target += log_mix(0.9724,
                  normal_lpdf(y[n]| yt, mtau1),
                  normal_lpdf(y[n]| yt, mtau2));
  }
}

There’s no sigma in the JAGS code, so I don’t understand what’s going on with that, but maybe the last line should be

  //likelihood
  yt ~ normal(mu,sigma);

(that is, yt instead of y)


Thanks a lot for your reply, it helped me a lot. I have tried it, but I still have a question.

Do I need to add a "0.0276" after the "0.9724"?

Here is what the function documentation says about log_mix()

real log_mix(real theta, real lp1, real lp2)
Return the log mixture of the log densities lp1 and lp2 with mixing proportion theta,
defined by
log_mix(θ, λ_1, λ_2) = log(θ exp(λ_1) + (1 − θ) exp(λ_2))

It seems the log_mix() function calculates the (1 − θ) on its own.
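To see this concretely, here is a hedged plain-Python sketch of the arithmetic (`log_mix` and `normal_lpdf` below are hand-rolled stand-ins for the Stan functions, not the real implementations): only θ = 0.9724 is passed, and the 0.0276 weight appears implicitly as (1 − θ).

```python
import math

def normal_lpdf(y, mu, sigma):
    # log density of Normal(mu, sigma) evaluated at y
    return -0.5 * ((y - mu) / sigma) ** 2 - math.log(sigma) - 0.5 * math.log(2 * math.pi)

def log_mix(theta, lp1, lp2):
    # log(theta * exp(lp1) + (1 - theta) * exp(lp2)), computed stably
    # via the usual log-sum-exp shift by the larger term
    a = math.log(theta) + lp1
    b = math.log1p(-theta) + lp2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

# Only theta = 0.9724 is supplied; (1 - theta) = 0.0276 is computed inside.
lp = log_mix(0.9724, normal_lpdf(1.2, 1.0, 0.1), normal_lpdf(1.2, 1.0, 0.5))
```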

Thanks a lot, I have changed my code, but it did not work. I tried several times, and it keeps stopping at the point shown in the picture.

Something I missed above is that yt is a vector, so it must be indexed too:

target += log_mix(0.9724, normal_lpdf(y[n]| yt[n], mtau1), normal_lpdf(y[n]| yt[n], mtau2));

Actually, I'm sorry, I forgot that mtau1 and mtau2 are also vectors, so they need indexing as well. I changed the code as follows:

     for (n in 1:N){
        target += log_mix(0.9724,
                      normal_lpdf(y[n]| yt[n], mtau1[n]),
                      normal_lpdf(y[n]| yt[n], mtau2[n]));
      }

Now it runs, but slowly. Thanks a lot for your help.

I'm sorry to disturb you again, but I found that my measurement-error model needs different priors.
I have been struggling with it, but without success.

/*A simple example of a hierarchical model*/
data {
  int N; //the number of observations
  int J; //the number of groups
  int K; //number of columns in the model matrix
  int id[N]; //vector of group indices
  matrix[N,K] X; //the model matrix
  vector[N] y; //the observed growth
  real<lower=0> mtau1[N]; // SD of the first measurement-error component
  real<lower=0> mtau2[N]; // SD of the second measurement-error component
}

parameters {
  real yt[N];//true growth
  real yl[N];//log_true growth
  real yp[N];// predict.log.growth
  vector[K] gamma; //population-level regression coefficients
  vector<lower=0>[K] tau; //the standard deviation of the regression coefficients
  real<lower=0> ptau[N];
  //implementing Matt's trick
  vector[K] beta_raw[J];
  real<lower=0> sigma; //standard deviation of the individual observations
}

transformed parameters {
  vector[K] beta[J]; //matrix of group-level regression coefficients
  //computing the group-level coefficient, based on non-centered parametrization based on section 22.6 STAN (v2.12) user's guide
  for(j in 1:J){
    beta[j] = gamma + tau .* beta_raw[j];
  }
}


model {
   vector[N] mu; //linear predictor
   yp ~ normal(yl, ptau); // prior
   yl = log(yt); // this assignment does not compile: yl is a parameter
 for (n in 1:N){
    target += log_mix(0.9724,
                  normal_lpdf(y[n]| yt[n], mtau1[n]),
                  normal_lpdf(y[n]| yt[n], mtau2[n]));	   
  }
  
  
  //priors
  gamma ~ normal(0,5); //weakly informative priors on the regression coefficients
  tau ~ cauchy(0,2.5); //weakly informative priors, see section 6.9 in STAN user guide
  sigma ~ gamma(2,0.1); //weakly informative priors, see section 6.9 in STAN user guide
  ptau~ cauchy(0,2.5);//weakly informative priors, see section 6.9 in STAN user guide
  
  for(j in 1:J){
   beta_raw[j] ~ normal(0,1); //fill the matrix of group-level regression coefficients 
  }
  
  for(n in 1:N){
    mu[n] = X[n] * beta[id[n]]; //compute the linear predictor using relevant group-level regression coefficients 
  }
  
  //likelihood
  yp ~ normal(mu,sigma);
}
In JAGS, it looks like this:

yl[n] ~ dnorm(yp[n], ptau)
yt[n] <- exp(yl[n])

Because the observed growth y has some non-positive values (<= 0), I tried to use
yl = log(yt)
but it does not work. I hope to get your reply, many thanks.

You can make yt a transformed parameter.

parameters {
  // no yt[N] here...
  real yl[N];//log_true growth
  real yp[N];// predict.log.growth
  ...
}
transformed parameters {
  real yt[N] = exp(yl);//true growth
  vector[K] beta[J]; //matrix of group-level regression coefficients
  ...
}
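The reason this handles the non-positive observations: the model lives on the log scale, so the true growth yt = exp(yl) is always positive, while the observed y can still be <= 0 because the normal measurement error is added on the original scale. A hedged plain-Python sketch of that idea (illustrative values only, not your actual data or priors):

```python
import math
import random

random.seed(1)

# yl is modeled on the log scale; yt = exp(yl) is the true (positive) growth.
yl = [random.gauss(0.0, 1.0) for _ in range(1000)]
yt = [math.exp(v) for v in yl]

# Observed growth = true growth + normal measurement error on the original
# scale, so y can dip to or below zero even though every yt is > 0.
y = [t + random.gauss(0.0, 0.5) for t in yt]
```

This is why the change of variables goes in `transformed parameters` rather than trying to assign `yl = log(yt)` inside the model block.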

Thanks a lot for your reply.