Hi, I am wondering which method is better in the Stan language.
let’s say

x_{t+1} \sim \text{N}(\beta x_{t}, \sigma_x) for t = 1, \dots, T-1, and y_{t} \sim \text{N}(\gamma x_{t}, \sigma_y) for t = 1, \dots, T.

I am wondering whether it is better (in terms of speed or sampling efficiency) to write the whole state space as vectorized sampling statements, or to sample with a for-loop recursion, like this:

model {
  x[1] ~ normal(x_init, sigma_x);
  y[1] ~ normal(gamma * x[1], sigma_y);
  for (t in 2:T) {
    // equivalently, target += normal_lpdf(...)
    x[t] ~ normal(beta * x[t - 1], sigma_x);
    y[t] ~ normal(gamma * x[t], sigma_y);
  }
}

The former. It is not really sampling; any distribution statement that accumulates the log density and its derivatives over conditionally independent observations in one vectorized call is faster.
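For this model the loop can be replaced with vectorized statements. A minimal sketch, assuming x and y are declared as vector[T] (the declarations and priors on beta, gamma, sigma_x, sigma_y are left out):

```stan
model {
  x[1] ~ normal(x_init, sigma_x);
  // all T - 1 state transitions in one vectorized statement
  x[2:T] ~ normal(beta * x[1:(T - 1)], sigma_x);
  // all T observations in one vectorized statement
  y ~ normal(gamma * x, sigma_y);
}
```

The vectorized statements compute the same log density as the loop, but share work across elements when accumulating the gradient.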

If I am interested only in the parameter space rather than the state space (x_mean and x), is it possible not to monitor all the transformed parameters? (Or is there a better approach, like particle MCMC?)

Say I am interested only in x and y and beta and gamma.

Particle MCMC is not better. Marginalizing out the state space analytically (when possible, as in this linear-Gaussian model, where the Kalman filter gives the marginal likelihood) is expected to be better. Failing that, you have to include the states in the posterior distribution.
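For this linear-Gaussian case, Stan's built-in gaussian_dlm_obs distribution does the Kalman-filter marginalization. A sketch under some assumptions: y_mat is the data restated as a 1 x T matrix, sigma_x0 is an assumed prior standard deviation for the initial state, and V, W take variances (so the sigmas are squared):

```stan
model {
  // 1-D state, 1-D observation dynamic linear model:
  //   y_t ~ N(F' theta_t, V),  theta_t ~ N(G theta_{t-1}, W)
  matrix[1, 1] F = [[gamma]];
  matrix[1, 1] G = [[beta]];
  matrix[1, 1] V = [[square(sigma_y)]];  // observation variance
  matrix[1, 1] W = [[square(sigma_x)]];  // state-evolution variance
  y_mat ~ gaussian_dlm_obs(F, G, V, W, [x_init]', [[square(sigma_x0)]]);
}
```

With the states integrated out, the posterior only involves beta, gamma, and the scales, which is exactly the reduced parameter space asked about.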

In RStan, there is a pars argument where you can specify quantities to include (or exclude, if the include argument is FALSE), but that only means those draws are not returned to R. They are still parameters whose joint distribution with the other parameters, conditional on the data, is being sampled.