I’ve been looking at some of the examples in the manual and on the forums for time-series models and was wondering what the preferred way to parameterize these models is.
It seems that in some cases people use:
f ~ multi_normal(0, K(x | alpha, rho))
and others use:
f[1] ~ normal(0, alpha);
for (i in 2:length(f))
  f[i] ~ normal(f[i-1], c(x[i] - x[i-1], alpha, rho));
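To make the comparison concrete, here is a small numerical sketch (mine, not from any Stan example) assuming the Matérn 1/2 kernel K[i,j] = alpha^2 * exp(-|x[i]-x[j]|/rho). One wrinkle: for this kernel the exact conditional mean shrinks the previous value by exp(-d/rho), and with the matching conditional sd the Markov construction reproduces the joint covariance exactly:

```python
import numpy as np

# Assumed kernel: Matern 1/2 (Ornstein-Uhlenbeck),
#   K[i,j] = alpha^2 * exp(-|x[i] - x[j]| / rho).
# The exact conditional spec is then
#   f[1] ~ normal(0, alpha)
#   f[i] ~ normal(phi_i * f[i-1], alpha * sqrt(1 - phi_i^2)),
# with phi_i = exp(-(x[i] - x[i-1]) / rho).
# Writing f = M @ z for iid standard normal z makes the check deterministic.

alpha, rho = 1.3, 0.7
x = np.array([0.0, 0.4, 1.1, 1.5, 2.8])  # irregular spacing on purpose
n = len(x)

# Joint (multi_normal) covariance
K = alpha**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / rho)

# Conditional (Markov) construction as a lower-triangular map f = M @ z
M = np.zeros((n, n))
M[0, 0] = alpha
for i in range(1, n):
    phi = np.exp(-(x[i] - x[i - 1]) / rho)
    M[i, :] = phi * M[i - 1, :]            # carry forward the AR part
    M[i, i] = alpha * np.sqrt(1 - phi**2)  # fresh innovation scale

# The conditional spec implies covariance M @ M.T, which matches the kernel
print(np.allclose(M @ M.T, K))  # True
```

So for this particular kernel the two specifications really are the same joint distribution; for non-Markov kernels (e.g. squared exponential) the simple one-step conditional would only be an approximation.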
It seems that these would work out to be the same model, but I was wondering if there is a difference in computational efficiency in Stan. Also, for some covariance kernels, e.g. Matérn 1/2, the precision matrix has a sparse (tridiagonal) representation, so one could use multi_normal_prec.
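The sparsity claim is easy to check numerically. A sketch (my own setup, not from the post) showing that the Matérn 1/2 kernel, being Markov in one dimension, has an exactly tridiagonal precision matrix — which is the structure multi_normal_prec could exploit:

```python
import numpy as np

# Assumed kernel: Matern 1/2 / OU, K[i,j] = alpha^2 * exp(-|x[i]-x[j]| / rho).
# Its Markov property implies the precision matrix Q = K^{-1} is tridiagonal.

alpha, rho = 0.9, 1.2
x = np.array([0.0, 0.3, 1.0, 1.6, 2.2, 3.5])
K = alpha**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / rho)

Q = np.linalg.inv(K)

# Subtract the tridiagonal band; what remains should be numerically zero
off_band = Q.copy()
for k in (-1, 0, 1):
    off_band -= np.diag(np.diag(Q, k), k)
print(np.allclose(off_band, 0.0, atol=1e-10))  # True
```

This is what makes the conditional/Markov formulation attractive: the dense multi_normal solve is O(n^3) per evaluation, while the tridiagonal structure gives O(n).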
Additionally, using the non-centered parameterization you could rewrite the second version in terms of an auxiliary variable f' with f' ~ normal(0, 1). This seems very similar (equivalent?) to multiplying a standard normal vector by the Cholesky factor of the covariance matrix.
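For reference, the Cholesky version of that equivalence can be sketched like this (my own toy example, again assuming a Matérn 1/2 kernel): if f = L z with z standard normal and L L^T = K, then Cov(f) = K, so the sampler only ever sees the unit-scale z:

```python
import numpy as np

# Assumed kernel for illustration: Matern 1/2 on a regular grid.
alpha, rho = 1.0, 0.8
x = np.linspace(0.0, 3.0, 8)
K = alpha**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / rho)

# Non-centered map: f = L @ z with z ~ normal(0, 1) elementwise.
# Cov(f) = L @ Cov(z) @ L.T = L @ L.T, which is K by construction.
L = np.linalg.cholesky(K)
print(np.allclose(L @ L.T, K))  # True

# The usual Stan pattern for this is:
#   parameters { vector[N] z; }
#   transformed parameters { vector[N] f = cholesky_decompose(K) * z; }
#   model { z ~ std_normal(); }
```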
I don’t have good intuition about which would be best in terms of keeping parameters on unit scales, vectorization, and the depth of the autodiff graph. I was going to start exploring these options for a model I am working on, but wanted to ask here first whether people have experience with or recommendations between:
Multinormal vs. Conditional Specification
Centered vs. Non-Centered Parameterization