State-space model (vectorized sampling vs for loop with recursion?)

Hi, I am just wondering which method is better in the Stan language.
Let's say

x_{t+1} \sim \text{N}(\beta x_t, \sigma_x), \quad t = 1, \dots, T
y_t \sim \text{N}(\gamma x_t, \sigma_y)

I am wondering whether it is better (in terms of speed or sampling efficiency) to build the whole vector of means and sample from it at once, or to sample inside a for loop using the recursion.

Vectorized sampling

transformed parameters {
    vector[T] x_mean;
    vector[T] y_mean;
    x_mean[1] = x_init;
    for (t in 1:(T - 1)) {
        x_mean[t + 1] = beta * x[t];  // mean of the next state
        y_mean[t] = gamma * x[t];     // mean of the observation
    }
    y_mean[T] = gamma * x[T];
}

model{
    x~normal(x_mean,sigma_x);
    y~normal(y_mean,sigma_y);
} 

For loop recursion


model{
    x[1] ~ normal(x_init, sigma_x);
    y[1] ~ normal(gamma * x[1], sigma_y);
    for (t in 2:T) {
    // or I can define it with target += normal_lpdf(...)
        x[t] ~ normal(beta * x[t - 1], sigma_x);  // state transition
        y[t] ~ normal(gamma * x[t], sigma_y);     // observation
    }
}

The former. It is not really sampling in either case, but a distribution statement that accumulates the log density and its derivatives over conditionally independent observations in a single vectorized call is faster than a loop of scalar statements.
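For example, a minimal sketch of a fully vectorized version, assuming x and y are declared as vector[T] (with x_init, beta, gamma, sigma_x, sigma_y declared as usual), using head/tail slicing so neither a transformed parameters block nor a loop is needed:

model{
    // initial state, then the whole transition x[2:T] | x[1:(T-1)] in one statement
    x[1] ~ normal(x_init, sigma_x);
    tail(x, T - 1) ~ normal(beta * head(x, T - 1), sigma_x);
    // vectorized observation equation
    y ~ normal(gamma * x, sigma_y);
}

Each of these statements adds the summed log density (and its gradient) in one call rather than T separate increments.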

Thanks a lot. I actually posted by accident.

If I am only interested in the parameter space rather than the state space (x_mean and x), is it possible not to monitor all of the transformed parameters? (Or is there a better approach, such as particle MCMC?)

Say I am only interested in sigma_x, sigma_y, beta, and gamma.

Particle MCMC is not better. Marginalizing out the state space analytically (when possible) is expected to be better. Failing that, you have to include the states in the posterior distribution.
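For this linear-Gaussian model, the analytic marginalization is the Kalman filter, which Stan provides through the gaussian_dlm_obs distribution. A rough sketch, assuming y is supplied as a 1 x T matrix and that the initial-state prior (m0, C0) is something you choose; note this parameterization takes variances/covariance matrices rather than standard deviations:

model{
    matrix[1, 1] F = [[gamma]];                // observation coefficient
    matrix[1, 1] G = [[beta]];                 // state transition coefficient
    matrix[1, 1] V = [[square(sigma_y)]];      // observation variance
    matrix[1, 1] W = [[square(sigma_x)]];      // state innovation variance
    vector[1] m0 = [x_init]';                  // assumed prior mean of the initial state
    matrix[1, 1] C0 = [[square(sigma_x)]];     // assumed prior variance of the initial state
    y ~ gaussian_dlm_obs(F, G, V, W, m0, C0);  // states marginalized out analytically
}

With the states integrated out, only beta, gamma, sigma_x, and sigma_y are left in the posterior, which also removes the question of monitoring x.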

In RStan, there is a pars argument where you can specify quantities to include (or to exclude, if the include argument is FALSE), but that just means those draws are not returned to R. They are still parameters whose joint distribution with the other parameters, conditional on the data, is being sought.
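As a sketch of a related option on the Stan side: anything that is only needed to build the likelihood, such as x_mean, can be computed as a local variable in the model block instead of a transformed parameter, in which case it is never stored or returned at all; the states x themselves are still parameters and are still saved:

model{
    vector[T] x_mean;  // local variable: recomputed as needed, never saved
    x_mean[1] = x_init;
    for (t in 1:(T - 1))
        x_mean[t + 1] = beta * x[t];
    x ~ normal(x_mean, sigma_x);
    y ~ normal(gamma * x, sigma_y);
}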