Is it possible to use the technique below, but for autoregressive priors on parameters (as opposed to data)? Maybe not because they wouldn’t be initialized…?
What is a more efficient way to code the snippet below, which gives this warning?

```
Info: left-hand side variable (name=gamma) occurs on right-hand side of assignment, causing inefficient deep copy to avoid aliasing.
```

```stan
for (i in 1:I)
  for (t in 2:T)
    gamma[i, t] = gamma[i, t - 1] + tau * gamma_raw[i, t];  // random walk over t for each series i
```
If you have temporal data, it would be better to build `y_lag` (or whatever the lagged variable is) in the `transformed data` block rather than slicing every leapfrog step. For parameters, that warning is a false positive.
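A minimal sketch of that pattern, assuming an AR(1)-style likelihood on an observed vector `y` (the model, variable names, and priors here are illustrative, not from the original code):

```stan
data {
  int<lower=2> T;
  vector[T] y;
}
transformed data {
  // built once at startup, instead of slicing y on every leapfrog step
  vector[T - 1] y_lag = y[1:(T - 1)];
  vector[T - 1] y_cur = y[2:T];
}
parameters {
  real alpha;
  real<lower=-1, upper=1> rho;
  real<lower=0> sigma;
}
model {
  alpha ~ normal(0, 1);  // illustrative priors
  sigma ~ normal(0, 1);
  y_cur ~ normal(alpha + rho * y_lag, sigma);
}
```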
For parameters, are there ways to improve on our code snippet?
good to know! thanks again!
Does it help to use `cumulative_sum`? E.g., as proposed by …
Aware! The spline code also adds the intercept twice: once in the call to `bs` and again via `a0`. `B` has full rank (checked with `rankMatrix`).
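For reference, a minimal sketch of a spline regression where the intercept enters only once, assuming the basis `B` is built in R with `bs(..., intercept = FALSE)` so that `a0` is the sole intercept (`a0` and `B` are from the post above; everything else is illustrative):

```stan
data {
  int<lower=1> N;     // observations
  int<lower=1> K;     // basis columns
  matrix[N, K] B;     // spline basis, built without an intercept column
  vector[N] y;
}
parameters {
  real a0;            // the single intercept
  vector[K] a;        // spline coefficients
  real<lower=0> sigma;
}
model {
  a0 ~ normal(0, 5);  // illustrative priors
  a ~ normal(0, 1);
  sigma ~ normal(0, 1);
  y ~ normal(a0 + B * a, sigma);
}
```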
To model the random walk, just do:

```stan
gamma_raw ~ normal(0, 1);
gamma = cumulative_sum(gamma_raw) * tau;
```
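In context, those two lines live in different blocks; a minimal sketch, assuming `gamma` is a vector of length `T` (the prior on `tau` is illustrative, not from the post):

```stan
data {
  int<lower=1> T;
}
parameters {
  vector[T] gamma_raw;  // standardized innovations
  real<lower=0> tau;    // innovation scale
}
transformed parameters {
  // non-centered random walk: gamma[t] = tau * (gamma_raw[1] + ... + gamma_raw[t])
  vector[T] gamma = cumulative_sum(gamma_raw) * tau;
}
model {
  gamma_raw ~ normal(0, 1);  // implies gamma[t] - gamma[t - 1] ~ normal(0, tau)
  tau ~ normal(0, 1);        // illustrative prior
}
```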
If you want smoothing, I’d go instead with the mgcv package and specify `bs = "ts"` or `bs = "ad"`.
If I’m not mistaken, the variance of gamma goes to infinity as t grows. Does an…
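For reference, with the non-centered parameterization above, $\gamma_t$ is $\tau$ times a sum of $t$ independent standard normals, so

$$\operatorname{Var}(\gamma_t) = \operatorname{Var}\!\Big(\tau \sum_{s=1}^{t} \gamma^{\text{raw}}_s\Big) = t\,\tau^2,$$

which is finite for any fixed $t$ but grows without bound as $t \to \infty$, as expected for a random walk.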
For a random walk, that might be ok but I doubt the difference in speed will be noticeable.