Out-of-sample Forecast for a stochastic volatility model

Hi,
I am currently trying to write a simple stochastic volatility model, which can make one or two steps ahead forecast. It was really easy for a GARCH(1,1) model, but I find it difficult for a SV model. Does anyone have an idea how to implement a forecast in the generated quantities block for the following model:

data {
    int<lower=0> T;
    vector[T] y;
}
parameters {
    real mu;
    real<lower=-1, upper=1> phi; 
    real<lower=0> nu;
    real<lower=0> sigma;
    vector[T] h_std;
}
transformed parameters {
    vector[T] h;
    
    h = h_std * sigma;
    h[1] = h[1] / sqrt(1 - phi * phi);
    h = h + mu;
    
    for(t in 2:T){
        h[t] = h[t] + phi * (h[t-1] - mu);
    }
}
model {
    // Priors
    phi ~ uniform(-1, 1);
    sigma ~ gamma(1, 2);
    mu ~ normal(0, 5);
    nu ~ cauchy(0,5);
    h_std ~ normal(0, 1);


    y ~ student_t(nu, 0, exp(h / 2));
}
generated quantities {
    vector[T] sigmaT; 
    sigmaT = exp(h/2); 
}

Getting out-of-sample predictions from SV models is a bit (but only a bit) more complex than for GARCH models, because the next day’s volatility isn’t fixed conditional on the past data and the parameters.
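To see that contrast concretely: a GARCH(1,1) one-step variance forecast is a deterministic function of the last return and variance, while the SV one-step log-variance is itself random and has to be simulated. A minimal Python sketch with made-up parameter values (all numbers here are purely illustrative, not fitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values, purely for illustration.
omega, alpha, beta = 0.1, 0.05, 0.9   # GARCH(1,1)
mu, phi, sigma = -1.0, 0.95, 0.2      # SV: AR(1) on log-variance h

y_T, sigma2_T = 0.3, 0.8   # last observed return and GARCH variance
h_T = -0.9                 # last latent log-variance in the SV model

# GARCH(1,1): next-step variance is a fixed number given the past.
sigma2_next = omega + alpha * y_T**2 + beta * sigma2_T

# SV: next-step log-variance is a distribution; we must simulate a draw.
h_next = mu + phi * (h_T - mu) + rng.normal(0, sigma)
sv_sigma_next = np.exp(h_next / 2)
```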

The easiest way to make predictions is to “run the likelihood forward” as an RNG: e.g., in your generated quantities block include something like

real<lower=0> sigma_T_plus_one; 

{
  real h_T_plus_one = mu + phi * (h[T] - mu) + normal_rng(0, sigma);
  sigma_T_plus_one = exp(h_T_plus_one / 2); 
}

(I’m using local variables here because I’m assuming you only want things on the sigma scale, not the h or h_std scales.)
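The same forward simulation can also be done after the fact in whatever language you post-process the fit in: take each posterior draw of mu, phi, sigma, and h[T], and push it one step through the AR(1) with an RNG draw. A Python sketch, using random stand-ins for the posterior draws (in practice you would extract these arrays from the fitted Stan object):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for posterior draws; in practice extract these from the fit
# (one entry per posterior draw). The distributions below are arbitrary.
n_draws = 4000
mu = rng.normal(-1.0, 0.1, n_draws)
phi = rng.uniform(0.8, 0.99, n_draws)
sigma = rng.gamma(2.0, 0.1, n_draws)
h_last = rng.normal(-1.0, 0.3, n_draws)   # h[T] from each draw

# One forward RNG step per posterior draw:
# h_{T+1} | draw ~ normal(mu + phi * (h_T - mu), sigma)
h_next = mu + phi * (h_last - mu) + rng.normal(0, sigma)
sigma_next = np.exp(h_next / 2)   # posterior predictive draws of sigma_{T+1}
```

Summaries of `sigma_next` (mean, quantiles) then give the one-step-ahead volatility forecast and its uncertainty.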

That said, this slightly underestimates the volatility of the return (technically, the volatility of the volatility) because it only uses one RNG draw per posterior draw. In practice, if you have enough Stan draws to explore the posterior well, this won’t really matter: if you get a good n_eff from Stan, you’ll almost certainly have enough RNG draws as well. If you want to be safe, I’d usually take several draws of sigma_T_plus_one in each generated quantities call:

vector[N_forecasts] sigma_T_plus_one; 

{
  vector[N_forecasts] h_T_plus_one; 
  for(n in 1:N_forecasts){
    h_T_plus_one[n] = mu + phi * (h[T] - mu) + normal_rng(0, sigma);
  }
  sigma_T_plus_one = exp(h_T_plus_one / 2); 
}

This is pretty cheap computationally (the generated quantities block doesn’t need autodiff), but it might chew up memory. If you know what you want to do with the forecast, it may be easier to return only a summary rather than the whole set of draws.
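For example, you can collapse the forecast draws to a few quantiles, either inside generated quantities or after extraction. A post-hoc Python sketch with simulated stand-in draws (the draws-by-forecasts matrix here is fabricated just to show the reduction):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for sigma_T_plus_one draws: n_stan_draws x N_forecasts.
# A real workflow would extract this matrix from the fit.
draws = np.exp(rng.normal(-0.5, 0.3, size=(4000, 20)) / 2)

# Keep a compact summary instead of all 80,000 numbers.
flat = draws.ravel()
summary = {
    "mean": flat.mean(),
    "q05": np.quantile(flat, 0.05),
    "q95": np.quantile(flat, 0.95),
}
```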

The same principle works for multi-step forecasts, e.g.:


vector[T_forecast] sigma_forecast; 

{
  real h_prev = h[T]; 
  real h_next; 
  for(t in 1:T_forecast){
    h_next = mu + phi * (h_prev - mu) + normal_rng(0, sigma); 
    sigma_forecast[t] = exp(h_next / 2); 
    h_prev = h_next; 
  }
}

(Note the local variables need names other than h, since h is already declared in transformed parameters.)

(You can simplify this analytically because a sum of normals is still normal, but I’m writing it out explicitly here so that you can extend it to more general innovation distributions.)
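Concretely, for the Gaussian AR(1) the k-step-ahead conditional distribution collapses to a single normal, h_{T+k} | h_T ~ normal(mu + phi^k (h_T - mu), sigma^2 (1 - phi^(2k)) / (1 - phi^2)). A quick Monte Carlo check of that identity in Python, with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, phi, sigma, h_T, k = -1.0, 0.9, 0.3, -0.5, 5

# Iterate the AR(1) recursion k steps for many replicates.
n = 200_000
h = np.full(n, h_T)
for _ in range(k):
    h = mu + phi * (h - mu) + rng.normal(0, sigma, n)

# Closed form: a sum of scaled normals is normal.
mean_k = mu + phi**k * (h_T - mu)
var_k = sigma**2 * (1 - phi**(2 * k)) / (1 - phi**2)

print(h.mean(), mean_k)  # should agree up to Monte Carlo error
print(h.var(), var_k)
```

So when the innovations are normal, you could draw h_{T+k} directly from this distribution instead of iterating; the explicit loop is what generalizes to non-normal innovations.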
