I am a beginner with rstan, and I want to add normally distributed measurement error drawn from two components with different weights. Following the user's guide, I wrote the code below:

```stan
data {
  int N; //the number of observations
  int J; //the number of groups
  int K; //number of columns in the model matrix
  int id[N]; //vector of group indices
  matrix[N,K] X; //the model matrix
  vector[N] y; //the observed response variable
  real<lower=0> mtau1;
  real<lower=0> mtau2;
}
parameters {
  real yt[N]; //the true response variable
  vector[K] gamma; //population-level regression coefficients
  vector[K] tau; //the standard deviation of the regression coefficients
  //implementing Matt's trick
  vector[K] beta_raw[J];
  real sigma; //standard deviation of the individual observations
}
transformed parameters {
  vector[K] beta[J]; //matrix of group-level regression coefficients
  //group-level coefficients via the non-centered parametrization, section 22.6 of the Stan (v2.12) user's guide
  for(j in 1:J){
    beta[j] = gamma + tau .* beta_raw[j];
  }
}
model {
  vector[N] mu; //linear predictor (declarations must come before statements)
  for (n in 1:N){
    y[n] ~ 0.9724*normal(yt,mtau1)+0.0276*normal(yt,mtau2); //this is not valid Stan
  }
  //priors
  gamma ~ normal(0,5); //weakly informative priors on the regression coefficients
  tau ~ cauchy(0,2.5); //weakly informative priors, see section 6.9 in the Stan user's guide
  sigma ~ gamma(2,0.1); //weakly informative priors, see section 6.9 in the Stan user's guide
  for(j in 1:J){
    beta_raw[j] ~ normal(0,1); //fill the matrix of group-level regression coefficients
  }
  for(n in 1:N){
    mu[n] = X[n] * beta[id[n]]; //linear predictor from the relevant group-level coefficients
  }
  //likelihood
  y ~ normal(mu,sigma);
}
```
But this step is wrong:
```stan
for (n in 1:N){
  y[n] ~ 0.9724*normal(yt,mtau1)+0.0276*normal(yt,mtau2);
}
```

I adapted this step from someone else's JAGS code:
##others code in JAGS

I'm not sure, but from the JAGS code it looks like you need a mixture model. Mixtures are a bit awkward to write in Stan because you need to use `target +=`. You can do it like this:

```stan
model {
  for (n in 1:N){
    target += log_mix(0.9724,
                      normal_lpdf(y[n] | yt[n], mtau1),
                      normal_lpdf(y[n] | yt[n], mtau2));
  }
}
```
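As a sanity check of what this likelihood assumes, here is a small Python simulation (illustrative values only, not from the post) of the contaminated-normal measurement process: each observation's error comes from the narrow component with probability 0.9724 and from the wide component otherwise.

```python
import numpy as np

rng = np.random.default_rng(42)

yt = 1.0                 # true value (illustrative)
mtau1, mtau2 = 0.5, 2.0  # component sds (illustrative)
theta = 0.9724           # weight of the first component

n = 100_000
# latent component indicator: True = narrow component, False = wide component
z = rng.random(n) < theta
sd = np.where(z, mtau1, mtau2)
y = rng.normal(loc=yt, scale=sd)

# both components share the mean yt, so the mixture variance is just
# theta * mtau1^2 + (1 - theta) * mtau2^2
expected_var = theta * mtau1**2 + (1 - theta) * mtau2**2
```

Even with a 2.76% contamination rate, the wide component contributes a sizeable share of the total variance here, which is why modeling it as a mixture rather than a single normal matters.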

There's no sigma in the JAGS code, so I don't understand what's going on with that, but maybe the last line needs to change accordingly.

Here is what the function documentation says about log_mix():

```stan
real log_mix(real theta, real lp1, real lp2)
```

Return the log mixture of the log densities lp1 and lp2 with mixing proportion theta, defined by

log_mix(θ, λ_1, λ_2) = log(θ exp(λ_1) + (1 − θ) exp(λ_2))

It seems the log_mix() function applies the (1 − θ) weight on its own.
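A quick numerical check of that definition (Python, with arbitrary made-up log densities), showing that you pass only θ and the complementary (1 − θ) weight is applied for you:

```python
import numpy as np

def log_mix(theta, lp1, lp2):
    # log(theta * exp(lp1) + (1 - theta) * exp(lp2)),
    # computed stably via logaddexp, as in Stan's log_mix()
    return np.logaddexp(np.log(theta) + lp1, np.log(1 - theta) + lp2)

theta = 0.9724
lp1, lp2 = -0.4, -2.1  # two arbitrary log densities

stable = log_mix(theta, lp1, lp2)
# the same quantity computed naively on the density scale
naive = np.log(theta * np.exp(lp1) + (1 - theta) * np.exp(lp2))
```

The logaddexp form matters in practice: with very negative log densities, exponentiating first underflows to zero while the stable version does not.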

Sorry to bother you again, but I've found that my measurement error should have different priors.
I have been struggling with it and haven't succeeded.

```stan
/*A simple example of a hierarchical model*/
data {
  int N; //the number of observations
  int J; //the number of groups
  int K; //number of columns in the model matrix
  int id[N]; //vector of group indices
  matrix[N,K] X; //the model matrix
  vector[N] y; //the observed growth
  real<lower=0> mtau1[N];
  real<lower=0> mtau2[N];
}
parameters {
  real yt[N]; //true growth
  real yl[N]; //log true growth
  real yp[N]; //predicted log growth
  vector[K] gamma; //population-level regression coefficients
  vector[K] tau; //the standard deviation of the regression coefficients
  real<lower=0> ptau[N];
  //implementing Matt's trick
  vector[K] beta_raw[J];
  real sigma; //standard deviation of the individual observations
}
transformed parameters {
  vector[K] beta[J]; //matrix of group-level regression coefficients
  //group-level coefficients via the non-centered parametrization, section 22.6 of the Stan (v2.12) user's guide
  for(j in 1:J){
    beta[j] = gamma + tau .* beta_raw[j];
  }
}
model {
  vector[N] mu; //linear predictor
  yp ~ normal(yl,ptau); //prior
  yl = log(yt); //this line does not work
  for (n in 1:N){
    target += log_mix(0.9724,
                      normal_lpdf(y[n] | yt[n], mtau1[n]),
                      normal_lpdf(y[n] | yt[n], mtau2[n]));
  }
  //priors
  gamma ~ normal(0,5); //weakly informative priors on the regression coefficients
  tau ~ cauchy(0,2.5); //weakly informative priors, see section 6.9 in the Stan user's guide
  sigma ~ gamma(2,0.1); //weakly informative priors, see section 6.9 in the Stan user's guide
  ptau ~ cauchy(0,2.5); //weakly informative priors, see section 6.9 in the Stan user's guide
  for(j in 1:J){
    beta_raw[j] ~ normal(0,1); //fill the matrix of group-level regression coefficients
  }
  for(n in 1:N){
    mu[n] = X[n] * beta[id[n]]; //linear predictor from the relevant group-level coefficients
  }
  //likelihood
  yp ~ normal(mu,sigma);
}
```

In the JAGS code, it looks like this:

```
yl[n] ~ dnorm(yp[n], ptau)
yt[n] <- exp(yl[n])
```
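For what it's worth, here is how those two JAGS lines compose, sketched in Python with made-up values: the normal draw happens on the log scale, and exponentiating maps it back, so yt comes out strictly positive even when the observed y can be negative (negative y values are absorbed by the measurement-error likelihood, not by the log).

```python
import numpy as np

rng = np.random.default_rng(0)

yp = 0.2    # predicted log growth (illustrative value)
ptau = 0.5  # sd on the log scale (illustrative value)

yl = rng.normal(loc=yp, scale=ptau, size=10_000)  # yl[n] ~ dnorm(yp[n], ptau)
yt = np.exp(yl)                                   # yt[n] <- exp(yl[n])
```

Note the direction: JAGS defines yt from yl (yt = exp(yl)), whereas the reverse, yl = log(yt), fails as soon as yt is non-positive.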

Because the observed growth y has some negative values (<= 0), I tried to use yl = log(yt) instead, but it does not work. I do hope to get your reply, many thanks.