I am trying to fit a stochastic volatility model of the form

v[t] = omega + phi*v[t-1] + e[t], |phi| < 1

e[t] = sigma[t]*z[t], z[t]~student_t(nu, 0, 1)
sigma[t]^2 = eta + alpha* e[t-1]^2
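To make the recursion concrete, here is a minimal simulation sketch of this process in Python (parameter values are made up for illustration, and the positivity constraint on v in the data block below is not enforced):

```python
import numpy as np

rng = np.random.default_rng(1)
omega, phi, eta, alpha, nu = 0.1, 0.9, 0.05, 0.2, 5.0  # assumed values
T = 500

v = np.empty(T)
e_prev = 0.0
v_prev = omega / (1.0 - phi)  # start at the unconditional AR(1) mean
for t in range(T):
    sigma_t = np.sqrt(eta + alpha * e_prev**2)  # sigma[t]^2 = eta + alpha*e[t-1]^2
    e_t = sigma_t * rng.standard_t(nu)          # e[t] = sigma[t]*z[t]
    v[t] = omega + phi * v_prev + e_t           # v[t] = omega + phi*v[t-1] + e[t]
    v_prev, e_prev = v[t], e_t
```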

The Stan code is the following:

```
data {
  int<lower=100> T;
  real<lower=0> v[T];
  real<lower=0> v0;  // v[0]
  real<lower=0> v_1; // v[-1]
}
parameters {
  real<lower=0> omega;
  real<lower=0, upper=1> phi;
  real<lower=0> eta;
  real<lower=0, upper=1> alpha;
  real<lower=2, upper=10> nu;
}
transformed parameters {
  real e[T];
  real<lower=0> sigma[T];
  real e0 = v0 - omega - phi*v_1;
  e[1] = v[1] - omega - phi*v0;
  for (t in 2:T) {
    e[t] = v[t] - omega - phi*v[t-1];
  }
  sigma[1] = sqrt(eta + alpha*pow(e0,2));
  for (t in 2:T) {
    sigma[t] = sqrt(eta + alpha*pow(e[t-1],2));
  }
}
model {
  e ~ student_t(nu, 0, sigma);
}
```

This code runs quite fast. However, if I change the model block to

```
model {
  for (t in 2:T) {
    v[t] ~ student_t(nu, omega + phi*v[t-1], sigma);
  }
}
```

which I think is equivalent to the original, then it runs much slower. More importantly, the posterior means of the parameters come out different by a non-negligible amount. This difference is reproducible across simulations and is very stable. Can anyone help explain why the posterior means of the parameters differ?
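To check my equivalence reasoning outside Stan, here is a quick Python sketch (made-up parameter values, and assuming `sigma[t]` rather than the whole array is the intended scale at time t): since `e[t] = v[t] - (omega + phi*v[t-1])` is a pure location shift, the two sampling statements should assign the same log density term by term.

```python
import numpy as np
from math import lgamma, log, pi

def t_logpdf(x, df, loc, scale):
    # log density of a location-scale Student-t, written out by hand
    z = (x - loc) / scale
    return (lgamma((df + 1) / 2) - lgamma(df / 2)
            - 0.5 * log(df * pi) - np.log(scale)
            - (df + 1) / 2 * np.log1p(z * z / df))

rng = np.random.default_rng(0)
omega, phi, eta, alpha, nu = 0.1, 0.9, 0.05, 0.2, 5.0  # assumed values

v = rng.gamma(2.0, 1.0, size=50)          # stand-in positive series
e = v[1:] - omega - phi * v[:-1]          # residuals for t = 2..T
sigma = np.sqrt(eta + alpha * e[:-1]**2)  # scale for t = 3..T

lp_e = t_logpdf(e[1:], nu, 0.0, sigma)                    # e[t] ~ student_t(nu, 0, sigma[t])
lp_v = t_logpdf(v[2:], nu, omega + phi * v[1:-1], sigma)  # v[t] ~ student_t(nu, mu[t], sigma[t])
print(np.allclose(lp_e, lp_v))  # True: a pure location shift
```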