Fitting trend/bias in an AR(1) timeseries

I have been using RStan to estimate the mean reversion and bias/trend of multiple time series. First I determined whether each time series was mean-reverting or a random walk. Then I fit a model to estimate the bias of the random walks. Since random walks are non-stationary, I estimated a model of the change in Y instead (which I believe makes Y an I(1) process?):

y_t - y_{t-1} = \delta + \varepsilon_t

where \varepsilon_t is iid normally distributed.
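
Iterating this recursion gives the level form, which is why the drift \delta shows up as a linear trend in the level:

y_t = y_0 + \delta t + \sum_{i=1}^{t} \varepsilon_i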

model {
  for (i in 1:N) { // for each observation
    deltaY[i] ~ normal(delta[TOPIC[i]], sigma[TOPIC[i]]);
  }
  // Priors:
  delta_bar ~ normal(a,b);
  delta_raw ~ normal(0,1);
  sigma_delta ~ normal(0,0.3);
  
  sigma ~ normal(0,0.3);
  
  // Hyper-priors
  a ~ normal(0,0.3);
  b ~ normal(0,0.5);
}

And this has converged beautifully; the autocorrelation is low, etc.
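
The surrounding data, parameters and transformed parameters blocks for this first model are essentially the same as in the second program below, just with the first differences deltaY in place of Y and no timestep variable. Roughly:

data {
  int K; // number of topics.
  int N; // number of observations.
  real deltaY[N]; // first differences Y_t - Y_{t-1}.
  int TOPIC[N]; // index of the identity of topic for each observation.
}

parameters {
  real delta_raw[K];
  real delta_bar;
  real<lower=0> sigma_delta;
  real a;
  real<lower=0> b;

  real<lower=0> sigma[K];
}

transformed parameters {
  real delta[K];

  for (j in 1:K) {
    delta[j] = delta_bar + delta_raw[j] * sigma_delta; // per-topic drift, non-centered
  }
}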

I have tried to fit the bias of the mean-reverting time series to no avail - it just doesn’t converge.

This model is: Y_t = \delta t + \varepsilon_t

where \varepsilon_t is iid normally distributed.

The code for the Stan file is below:

data{
  int K; // number of topics found to be random walks.
  int N; // number of observations.
  real Y[N]; // observations - Y_t - Y_{t-1}
  int TOPIC[N]; // index of the identity of topic for each observation.
  real timestep[N]; // index of the timestep (1:23 for each topic) for each observation.
}

parameters {
  real delta_raw[K];
  real delta_bar;
  real<lower=0> sigma_delta;
  real a;
  real<lower=0> b;

  real<lower=0> sigma[K];
}

transformed parameters {
  real delta[K];
  
  for (j in 1:K) {
    delta[j] = delta_bar + delta_raw[j] * sigma_delta;
  }
}

model {
  for (i in 1:N) {
    Y[i] ~ normal(delta[TOPIC[i]] * timestep[i], sigma[TOPIC[i]]);
  }
  // Priors:
  delta_bar ~ normal(a,b);
  delta_raw ~ normal(0,1);
  sigma_delta ~ normal(0,0.4);
  
  sigma ~ normal(0,0.4);
  
  // Hyper-priors
  a ~ normal(0,0.4);
  b ~ normal(0,0.4);
}

What should I be doing differently? The observations are on the order of 1e-04. Are my priors too broad/narrow?

All help is appreciated, thank you.

My question is: why do you call the second model mean-reverting? The thing I’ve seen labelled mean-reverting is one of these doohickies: https://en.m.wikipedia.org/wiki/Ornstein–Uhlenbeck_process

And that has an ODE-looking component that does the mean reverting. I don’t see how \Delta Y_t = \delta t + \varepsilon_t mean reverts.
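
For concreteness, the discrete-time version of that is an AR(1) pulled back toward a long-run mean, y_t = \mu + \phi (y_{t-1} - \mu) + \varepsilon_t with |\phi| < 1. A minimal Stan sketch of what I mean (the names here are just illustrative, nothing from your model):

data {
  int<lower=2> N;
  vector[N] y;
}

parameters {
  real mu;                      // long-run level the series reverts to
  real<lower=-1, upper=1> phi;  // AR(1) / mean-reversion coefficient
  real<lower=0> sigma;
}

model {
  for (t in 2:N) {
    y[t] ~ normal(mu + phi * (y[t-1] - mu), sigma); // pulled back toward mu
  }
  mu ~ normal(0, 1);
  phi ~ normal(0, 0.5);
  sigma ~ normal(0, 1);
}

The \delta t term, by contrast, is a deterministic trend; nothing pulls the series back toward a mean.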

@lorcan, you might be interested in a model that I’ve been working on – it’s certainly not a final product yet. It’s a Stan version of bsts’s semilocal linear trend model. I have some code on GitHub that might help get you started: roualdes/semilocal-linear-trend (semilocal linear trend model in Stan).
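
Roughly, the model is a local level plus a local slope, where the slope is itself AR(1) around a long-run slope. A simplified, centered sketch of that structure (a paraphrase of the model, not the code from the repo, which may be parameterized differently):

data {
  int<lower=2> N;
  vector[N] y;
}

parameters {
  vector[N] mu;                 // local level
  vector[N] slope;              // local slope
  real D;                       // long-run slope the local slope reverts to
  real<lower=-1, upper=1> phi;  // AR(1) coefficient for the slope
  real<lower=0> sigma_obs;
  real<lower=0> sigma_level;
  real<lower=0> sigma_slope;
}

model {
  // initial states (illustrative, diffuse)
  mu[1] ~ normal(y[1], 1);
  slope[1] ~ normal(0, 1);

  // state equations
  for (t in 2:N) {
    mu[t] ~ normal(mu[t-1] + slope[t-1], sigma_level);
    slope[t] ~ normal(D + phi * (slope[t-1] - D), sigma_slope);
  }

  // observation equation
  y ~ normal(mu, sigma_obs);

  // priors (illustrative)
  D ~ normal(0, 1);
  sigma_obs ~ normal(0, 1);
  sigma_level ~ normal(0, 1);
  sigma_slope ~ normal(0, 1);
}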

If you end up fitting the Stan version, let me know what you think. Thanks.

Also, here’s a Discourse post about my efforts thus far:
