Translate normal evolution into transformed parameters

I have a complex Stan model from someone else, and he seems to have translated sampling statements into transformed parameters equations.

So instead of

a[t] ~ normal(a[t-1] + x[t], sig);

he has

a[t] = a[t-1] + sig * innov + x[t];

in the transformed parameters block.

Is this usually done for performance purposes?

Where can I read more about this?

It’s a change of parameterization, usually called the non-centered parameterization, that can help sampling performance. It implies the same prior on a[t], but it gives the sampler a different posterior geometry to work with.
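Written out, the equivalence is just the scaling property of the normal distribution (using $\eta_t$ for innov and $\sigma$ for sig, relabeling the names in your snippet):

$$
a_t = a_{t-1} + x_t + \sigma \eta_t, \qquad \eta_t \sim \operatorname{normal}(0, 1)
\quad\Longrightarrow\quad
a_t \mid a_{t-1} \sim \operatorname{normal}(a_{t-1} + x_t,\ \sigma).
$$

The difference is which quantities the sampler treats as unknowns: a[t] directly (centered), or the standardized innovations (non-centered), with a[t] then reconstructed deterministically in transformed parameters.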

The specific answer to your question is in the Bayesian structural time series modeling thread (that whole thread might be valuable for comparison). The reparameterization chapter of the Stan User’s Guide also covers this pattern.

I’m also assuming that innov is declared as a parameter and that there’s an innov ~ normal(0, 1); statement in the model block.
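For concreteness, here is a minimal sketch of how that non-centered random-walk evolution is often written in full. The data names, the observation model, and the priors on a0, sig, and sigma_obs are assumptions for illustration, not your colleague’s actual model; only the a[t] update line mirrors the snippet you posted (with innov indexed by t).

```
data {
  int<lower=1> T;
  vector[T] x;              // known inputs, as in your a[t] update
  vector[T] y;              // hypothetical observations
}
parameters {
  real a0;                  // initial level (assumed)
  real<lower=0> sig;        // innovation scale
  real<lower=0> sigma_obs;  // observation noise scale (assumed)
  vector[T] innov;          // standardized innovations
}
transformed parameters {
  vector[T] a;
  a[1] = a0 + sig * innov[1] + x[1];
  for (t in 2:T)
    a[t] = a[t-1] + sig * innov[t] + x[t];
}
model {
  innov ~ normal(0, 1);      // implies a[t] ~ normal(a[t-1] + x[t], sig)
  a0 ~ normal(0, 1);         // assumed prior
  sig ~ normal(0, 1);        // assumed prior (half-normal via the lower bound)
  sigma_obs ~ normal(0, 1);  // assumed prior (half-normal via the lower bound)
  y ~ normal(a, sigma_obs);  // assumed likelihood
}
```

The centered equivalent would declare vector[T] a in the parameters block and put a[t] ~ normal(a[t-1] + x[t], sig); in the model block. Both define the same model, but the non-centered form often samples better when sig is only weakly informed by the data.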