Dear All,

I am a relatively new Stan user struggling with the unstable behavior of a model that runs without problems in JAGS. I am wondering whether the performance/behavior is linked to the way I wrote the Stan code. I would appreciate your help with this problem.

The model contains regularly spaced missing observations, and both the JAGS and Stan versions account for that.

The variance grows between observations and collapses again each time the next observation occurs.

**JAGS model**:

```
Model = "model{
  for(i in 2:N0){
    # dnorm is parameterized by precision TAU[i], not SD
    FACTOR1[i] ~ dnorm(FACTOR1[i-1], TAU[i])
  }
}"
```

(In all plots, the gray lines represent the 80% confidence interval.)

**Stan Model:**

```
data {
  int<lower=1> N0;
  vector[N0] Factor1_0;              // series with placeholders at missing points
  int FCT1_NA_COUNT;
  int FCT1_NA_INDEX[FCT1_NA_COUNT];  // positions of the missing observations
  vector[N0] FACTOR1_SD;
}
parameters {
  vector<lower=0.3, upper=1.7>[FCT1_NA_COUNT] FCT1_MISSING;
}
transformed parameters {
  vector[N0] FACTORS = Factor1_0;
  FACTORS[FCT1_NA_INDEX] = FCT1_MISSING;  // fill in the missing values
}
model {
  for (i in 2:N0) {
    // FACTORS[i] ~ normal(FACTORS[i-1], 0.04);        // Version 1: fixed SD
    FACTORS[i] ~ normal(FACTORS[i-1], FACTOR1_SD[i]);  // Version 2: varying SD
  }
}
```
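In case it is relevant, the loop in the model block can also be written as a single vectorized sampling statement (a sketch of Version 2 only; the behavior should be identical, assuming a Stan version that supports range indexing on vectors):

```
model {
  // each element of FACTORS[2:N0] gets a normal increment from its predecessor
  FACTORS[2:N0] ~ normal(FACTORS[1:(N0 - 1)], FACTOR1_SD[2:N0]);
}
```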

Version 1 of the Stan code is relatively stable and fast and yields the expected results (though not the results I am actually trying to model). Version 2, with the increasing standard deviations (all below 0.04), does not converge, and the simulations do not produce the expected behavior (I include some plots below to show that).

**Stan Version 1:**

**Stan Version 2:**

Thank you very much in advance for all your help.

Regards

Peter
