# Linear Regression with ARMA errors

I have implemented a linear regression model where the error terms follow an ARMA(p, q) process.

But when fitting the model I see divergent transitions after warmup, and the AR and MA parameters are very different from the coefficients estimated with forecast::Arima in R, while the regression coefficients \beta_0 and \beta_1 seem reasonable.

So I am wondering: is there an error in the implementation below?

I am new to Stan, so hope someone can help.

This is the model formula:
y_n = \beta_0 + \beta_1 * x_n + \eta_n
\eta_n = \phi_1 * \eta_{n-1} + ... + \phi_{p} * \eta_{n-p} + \theta_1 * \epsilon_{n-1} + ... + \theta_q * \epsilon_{n-q} + \epsilon_n

This is the Stan implementation I came up with:

```stan
data {
  int<lower=0> P;
  int<lower=0> Q;
  int<lower=0> N;
  vector[N] x;
  vector[N] y;
}

parameters {
  real beta_0;
  real beta;
  real<lower=-1, upper=1> phi[P];
  real<lower=-1, upper=1> theta[Q];
  real<lower=0> sigma;
}

transformed parameters {
  vector[N] eta;
  vector[N] mu;
  vector[N] epsilon;

  for (n in 1:N) {
    // Compute mean mu as linear regression of explanatory variables
    mu[n] = beta_0 + beta * x[n];

    // Compute ARMA error terms eta
    eta[n] = y[n] - mu[n];
    epsilon[n] = eta[n];

    // AR part
    for (p in 1:P) {
      if (n > p) {
        real ar_p = phi[p] * eta[n - p];
        mu[n] = mu[n] + ar_p;
        epsilon[n] = epsilon[n] - ar_p;
      }
    }

    // MA part
    for (q in 1:Q) {
      if (n > q) {
        real ma_q = theta[q] * epsilon[n - q];
        mu[n] = mu[n] + ma_q;
        epsilon[n] = epsilon[n] - ma_q;
      }
    }
  }
}

model {
  y[(max(P, Q) + 1):N] ~ normal(mu[(max(P, Q) + 1):N], sigma);
}
```


Hello,

The code seems reasonable.

Some priors will help performance a lot. Be careful with the priors you choose for phi and theta: you constrain every parameter to [-1, 1], which more or less guarantees stationarity, but a normal prior combined with those hard constraints will throw a lot of warnings.

Try checking your model's performance on simulated data. If it gives reasonable estimates (it probably will), then don't be alarmed if the estimates on real data differ from a frequentist approach.
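For example, data from the regression-with-ARMA(1,1)-errors model above could be generated like this. This is a minimal Python sketch; all parameter values are illustrative, not taken from the thread:

```python
import numpy as np

# Simulate y_n = beta_0 + beta_1 * x_n + eta_n, where eta follows an
# ARMA(1,1) process. Parameter values below are purely illustrative.
rng = np.random.default_rng(42)
N = 500
beta_0, beta_1 = 2.0, 1.5
phi, theta, sigma = 0.5, 0.3, 1.0

x = rng.normal(size=N)
epsilon = rng.normal(scale=sigma, size=N)   # white-noise innovations
eta = np.zeros(N)
eta[0] = epsilon[0]
for n in range(1, N):
    eta[n] = phi * eta[n - 1] + theta * epsilon[n - 1] + epsilon[n]
y = beta_0 + beta_1 * x + eta

# Quick sanity check: an OLS slope should land close to beta_1.
print(np.polyfit(x, y, 1)[0])
```

Feeding `x` and `y` into the Stan model should recover parameters close to the values used in the simulation; if it does, the implementation is likely fine.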

Hope it works :)


Thank you for your feedback :)

Yes, it works for simulated data, so I think the implementation is OK. I observed that it does not work well on real data when I have few data points and the priors on the beta coefficients are very tight, e.g. normal(-0.01, 3.05e-06), while the same model runs fine on the same data with a normal(-0.01, 0.1) prior. I was expecting it to be more robust to the prior choice. And when it diverges, I cannot generate useful forecasts because the coefficients are far off.

I wasn’t sure whether the constraints on theta and phi are needed. I put them in because this is advised in the time-series section of the Stan user's guide.

Ok,

1. Those recommendations assume your data is stationary. Have you checked that?

The theory says that an AR(1) process is stationary if phi is in [-1, 1]; for an AR(p), checking that every phi is in [-1, 1] is not sufficient, but it helps.
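More precisely, an AR(p) process is stationary when all roots of the characteristic polynomial 1 - phi_1*z - ... - phi_p*z^p lie outside the unit circle. A quick Python sketch of that check (the coefficient values are illustrative):

```python
import numpy as np

def is_stationary(phi):
    """Check stationarity of an AR(p) process with coefficients phi.

    Stationary iff all roots of 1 - phi_1*z - ... - phi_p*z^p
    lie strictly outside the unit circle.
    """
    # np.roots wants coefficients ordered from highest degree to lowest.
    poly = np.concatenate(([1.0], -np.asarray(phi, dtype=float)))[::-1]
    roots = np.roots(poly)
    return bool(np.all(np.abs(roots) > 1.0))

# AR(2) with phi = (0.5, 0.3): every coefficient is inside [-1, 1]
# and the process is stationary.
print(is_stationary([0.5, 0.3]))   # True
# But phi = (0.6, 0.5) also has every coefficient inside [-1, 1],
# yet phi_1 + phi_2 >= 1, so the process is NOT stationary.
print(is_stationary([0.6, 0.5]))   # False
```

This illustrates why the per-coefficient [-1, 1] constraint alone does not guarantee stationarity for p > 1.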

2. To keep the parameters inside [-1, 1], a prior like N(0, 0.01) is too strong (too concentrated); try N(0, 0.5), which I think I use as a default.

3. ARMA processes are really poor models for forecasting; you can use state-space models for a better forecast, or use prophet for an automatic one.

4. If you have some time, check out my package varstan; it can be installed from GitHub:
https://github.com/asael697/varstan

The package works similarly to the forecast package. I even added an auto.sarima() function that fits the "best" ARIMA model to your data, and you can do dynamic regression as well ;)

Hopefully I can pick up work on my package again to add some of forecast's other functionalities and put it on CRAN. I hope this works for now; for anything else, contact me, I will be glad to help. :)

1. Ah, I missed that these restrictions only make sense when the time series has been differenced beforehand. That might explain why it does not work well in some situations.

2. I did not set any priors for the ARMA coefficients, only for the regression coefficients \beta

3. ARMA models are state-space models as far as I know, and I think it depends on the predictive power of the regressors. The ARMA term just makes sure that the autocorrelation left in the residuals of a classical linear model is used.

4. Your package looks like really good work! I will definitely try it out in the future. Is it possible to use auto.sarima() without seasonality, like forecast::auto.arima? We currently handle seasonality with Fourier terms in the xreg matrix rather than with a seasonal ARMA.

> I did not set any priors for the ARMA coefficients, only for the regression coefficients β

If you put some priors on them it will work a bit better, or at least Stan will complain less (it complains too much XD).

> ARMA models are State-space models as far as I know and I think it depends on the predictive power of the regressors. The ARMA term just makes sure that autocorrelation left in the residuals of a classical linear model is used.

Well, yes, SSMs are a really general class of models, but in this particular case (a plain ARMA) the predictive power is really bad.

> Your package looks like really good work! Will definitely try this out in the future. Is it possible to use auto.sarima() without seasonality like forecast::auto.arima? As we are currently handling seasonality with fourier terms in the xreg matrix and not with a seasonal arma.

No, sadly I haven't added an option to ignore the seasonal component. But I copied Rob's Fourier function into varstan, so it's available when you load varstan, and you can pass fourier() results in the reg argument so auto.sarima() will look for the best ARIMA model with the regression residuals. (Don't tell Rob XD.)
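For reference, the Fourier-term regressors discussed above can also be built by hand. A minimal Python sketch, where the period and the number of harmonics K are illustrative choices:

```python
import numpy as np

def fourier_terms(t, period, K):
    """Build K sine/cosine pairs for seasonal regression,
    analogous in spirit to forecast::fourier in R."""
    t = np.asarray(t, dtype=float)
    cols = []
    for k in range(1, K + 1):
        cols.append(np.sin(2 * np.pi * k * t / period))
        cols.append(np.cos(2 * np.pi * k * t / period))
    return np.column_stack(cols)

# Weekly seasonality for daily data, 3 harmonics -> 6 regressor columns.
X = fourier_terms(np.arange(28), period=7, K=3)
print(X.shape)  # (28, 6)
```

These columns would then go into the regression matrix (the xreg / reg argument) alongside any other covariates.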
