I’m fitting a simple linear model to time series data with brm() from the brms package, and I’d like to include temporal autocorrelation, which is clearly present in the residuals of a model without it (as judged by pacf()). I’m examining the links between a predictor and the response 1-3 time steps (weeks) ahead, to see how useful the predictor might be for forecasting the response. For the 1-step-ahead model, an ar(1) structure works fine:
fit1 <- brm(y_t_plus1 ~ x + ar(p = 1), data = dm1)
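For reference, the residual check I mentioned looks roughly like this (a sketch; fit0 is simply the same model without the ar() term):

```r
# Fit the model without any autocorrelation term, then inspect the
# partial autocorrelation of the residuals. residuals() on a brmsfit
# returns a summary matrix, so I take the "Estimate" column.
fit0 <- brm(y_t_plus1 ~ x, data = dm1)
pacf(residuals(fit0)[, "Estimate"])  # in my data: a clear spike at lag 1
```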
For this 1-week-ahead model the autocorrelation term is quite strong (ar = 0.91), whereas the predictor x offers little explanatory value, suggesting that the best prediction we can make for y(t+1) is something similar to the current value y(t), since the two are highly (auto)correlated. But for 2 or 3 weeks ahead, I’d like to use a model that only makes use of information that would be available at the time of the prediction. For a prediction 3 weeks ahead, an ar(p = 1) model would make use of data 2 weeks into the future. If I use a model of the form:
fit3 <- brm(y_t_plus3 ~ x + ar(p = 3), data = dm1)
this fits a model with ar(3), ar(2) and ar(1) terms, but the values of y_t_plus1 and y_t_plus2 wouldn’t be known at the time of prediction, since they lie in the future. I’d therefore like to fit a model that has only the ar(3) term, without the ar(1) and ar(2) terms. I could of course fit a model with a fixed effect for the lagged response:
fit_fixed <- lm(y_t_plus3 ~ x + y_t_plus_0, data = dm1)
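In case the setup matters: the shifted response columns are constructed something like this (a sketch; it assumes the raw weekly response sits in a column y, and the column names are just my convention):

```r
library(dplyr)

# Build the shifted response columns used in the models above.
# dplyr::lead() shifts the series so that each row holds a future value:
# y_t_plus3 in row t is the response observed 3 weeks after row t.
dm1 <- dm1 %>%
  mutate(
    y_t_plus_0 = y,           # current value, known at prediction time
    y_t_plus1  = lead(y, 1),  # 1 week ahead
    y_t_plus3  = lead(y, 3)   # 3 weeks ahead (the response in fit3)
  )
```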
But fitting the lag as a fixed effect is obviously different from modelling the autocorrelation as part of the variance-covariance matrix. I was hoping there might be something in the brm() syntax for dropping individual ar terms, akin to removing the intercept in a simple linear model, e.g. lm(y ~ -1 + x).
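To spell out the error structure I’m after (with z_t white noise and phi_k the ar coefficients):

ar(p = 3) as fitted:  e_t = phi_1*e_(t-1) + phi_2*e_(t-2) + phi_3*e_(t-3) + z_t
what I want:          e_t = phi_3*e_(t-3) + z_t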
I read through the brms manual and searched for examples that might cover this, but no luck so far.