Vectorize AR(K)

Hello! Is there a way to vectorize the code below, or otherwise make it more efficient, for an autoregressive model AR(K)? We are trying to avoid the for loops.

model {
  for (n in (K+1):N) {
    real mu = alpha;
    for (k in 1:K)
      mu += beta[k] * y[n-k];
    y[n] ~ normal(mu, sigma);
  }
}

From the Stan docs: 2.1 Autoregressive models | Stan User’s Guide

Would you have any suggestions? Thank you.

Have you seen the code generated by brms?

Thank you for the suggestion. OK, we are looking into it (we are using Python).

Hi pookie, did you ever sort this out? It would be very nice to have as an example in the docs!

I think one approach would be to broadcast the y vector into an (N-K) x K matrix, where each row contains the previous K lags of the data; then the mu vector is a simple matrix-vector product.
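Not Stan, but here is a quick sketch of that idea in NumPy (names here are illustrative, not part of any Stan program), with column k holding lag k+1 (0-based) so that the coefficients line up with beta[k] * y[n-k] in the looped model above:

```python
import numpy as np

def lag_matrix(y, K):
    """Row i holds [y[i+K-1], ..., y[i]], i.e. lags 1..K of the target y[i+K]."""
    N = len(y)
    return np.stack([y[i:i + K][::-1] for i in range(N - K)])

y = np.arange(6, dtype=float)   # toy series 0, 1, 2, 3, 4, 5
Y = lag_matrix(y, K=2)          # shape (4, 2): rows [1,0], [2,1], [3,2], [4,3]
alpha = 0.5
beta = np.array([0.1, 0.2])     # beta[0] -> lag 1, beta[1] -> lag 2
mu = alpha + Y @ beta           # mu for y[2]..y[5], no inner loop
```

The double loop over n and k collapses into the single product `Y @ beta`.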

So it needs one function that takes the number of observations and the lag order, and returns the indices comprising each row:

  array[] int ark_inds(int N, int K) {
    // Row n holds the indices of the K lags of the target y[n + K],
    // ordered so that column k corresponds to lag k (matching
    // beta[k] * y[n - k] in the looped model)
    array[(N-K), K] int inds_tmp;
    for (n in 1:(N-K)) {
      for (k in 1:K) {
        inds_tmp[n, k] = n + K - k;
      }
    }

    // Flatten the nested array (row-major) into a single index array
    return to_array_1d(inds_tmp);
  }

Then the y vector can be broadcast using the to_matrix function with row-major ordering:

    array[(N-K) * K] int inds = ark_inds(N, K);
    matrix[(N-K), K] y_matrix = to_matrix(y[inds], (N-K), K, 0);

This can all be done in the transformed data block, so it runs only once and the computational overhead is negligible.

Then the likelihood can be constructed using normal_id_glm:

y[(K+1):N] ~ normal_id_glm(y_matrix, alpha, beta, sigma);
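For anyone unfamiliar with it: normal_id_glm(X, alpha, beta, sigma) is Stan's fused, more efficient form of normal(alpha + X * beta, sigma). A small NumPy check (illustrative numbers, not from the thread) that the "GLM" form and the per-observation loop give the same log density:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))          # stands in for the lag matrix
y = rng.normal(size=5)
alpha, sigma = 0.5, 1.5
beta = np.array([0.1, -0.2, 0.3])

def normal_lpdf(y, mu, sigma):
    # Log density of independent normals, summed over observations
    return np.sum(-0.5 * np.log(2 * np.pi) - np.log(sigma)
                  - 0.5 * ((y - mu) / sigma) ** 2)

# GLM form: one matrix-vector product for the whole mean vector
ll_glm = normal_lpdf(y, alpha + X @ beta, sigma)

# Looped form: one observation at a time
ll_loop = sum(normal_lpdf(y[i], alpha + X[i] @ beta, sigma) for i in range(5))
```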

How does that sound?


Actually, all that indexing and broadcasting is completely unnecessary; it could just be a loop. So the vectorised Stan program would be:

data {
  int<lower=1> N;
  int<lower=1, upper=N-1> K;
  vector[N] y;
}
transformed data {
  matrix[(N-K), K] y_matrix;

  // Column k holds lag k of the target y[n + K], so beta[k]
  // multiplies y[(n+K) - k], exactly as in the looped model
  for (n in 1:(N-K)) {
    for (k in 1:K) {
      y_matrix[n, k] = y[n + K - k];
    }
  }
}
parameters {
  real alpha;
  vector[K] beta;
  real<lower=0> sigma;
}

model {
  y[(K+1):N] ~ normal_id_glm(y_matrix, alpha, beta, sigma);
}
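Since the thread is working in Python, here is a NumPy sanity check (illustrative values) that the lag-matrix construction reproduces the mu from the original double loop at the top of the thread:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 20, 3
y = rng.normal(size=N)
alpha = 0.2
beta = rng.normal(size=K)

# mu from the original looped model (0-based): for each target y[n] with
# n = K..N-1, mu[n-K] = alpha + sum_k beta[k] * y[n-1-k], so beta[0]
# multiplies lag 1.
mu_loop = np.array([alpha + sum(beta[k] * y[n - 1 - k] for k in range(K))
                    for n in range(K, N)])

# Lag matrix: row i holds lags 1..K of y[i+K], so mu is one product.
Y = np.stack([y[i:i + K][::-1] for i in range(N - K)])
mu_vec = alpha + Y @ beta

print(np.allclose(mu_loop, mu_vec))   # → True
```

The same check, translated into Stan's generated quantities block, is a handy way to confirm a refactored model before trusting its speedup.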
