Forecasting in brms

I would like to do simple forecasts of over 1000 time series at a time. I saw on GitHub that BSTS models are not part of brms, which is fine; I never seemed to get great forecasts out of them anyway. I also saw this:

splines and Gaussian processes, which do a fairly good job in replacing (basic) BSTS terms but with far fewer convergence issues and the immediate ability to do forecasting if desired.

I have never tried forecasting with splines or Gaussian processes. Does an example of doing this in brms exist in the wild?

Not sure how splines or GPs perform for forecasting compared to explicit time-series models, to be honest (read: maybe I should change the warning message). At least getting forecasting done with them is straightforward. Here is an example:

library(brms)
library(ggplot2)

df <- data.frame(y = rnorm(100), time = 1:100)
fit <- brm(y ~ s(time), df)

# Predict over the observed range plus 10 future time points
newdf <- data.frame(time = 1:110)
newdf <- cbind(newdf, fitted(fit, newdata = newdf))
# Make the interval column names syntactically valid; older brms labels
# them "2.5%ile"/"97.5%ile" (hence X2.5.ile below), recent versions
# return Q2.5/Q97.5 instead
names(newdf) <- make.names(names(newdf))
print(newdf)

ggplot(newdf, aes(time, Estimate)) +
  geom_smooth(aes(ymin = X2.5.ile, ymax = X97.5.ile),
              stat = "identity")
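The example above plots the fitted mean and its credible interval. For genuine forecast intervals you may instead want draws from the posterior predictive distribution, which the same newdata mechanism supports via predict(). A minimal sketch continuing the example, assuming a recent brms version whose output columns are named Estimate, Q2.5, and Q97.5:

```r
# Posterior predictive forecasts for the unseen time points only.
# Assumes 'fit' from the example above and current brms column names
# (Estimate, Q2.5, Q97.5 rather than the older "2.5%ile" style).
library(brms)
library(ggplot2)

future <- data.frame(time = 101:110)
pp <- predict(fit, newdata = future)  # posterior predictive draws, summarised
future <- cbind(future, as.data.frame(pp))

ggplot(future, aes(time, Estimate)) +
  geom_ribbon(aes(ymin = Q2.5, ymax = Q97.5), alpha = 0.3) +
  geom_line()
```

Note that the intervals from predict() include the residual noise, so they will be wider than the fitted() intervals shown above.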

A very recent publication, Statistical and Machine Learning forecasting methods: Concerns and ways forward, actually studies this empirically, and among the models considered are GPs.


Great paper. I have played around with forecasts using boosting and forecast::nnetar (examples here), with worse performance metrics than ETS- and ARIMA-based models. I thought I was the only one seeing this and that maybe it was just my data, so it is nice to have a recent paper providing more evidence beyond my anecdotal experience.

Have you also tried the forecastHybrid package to combine auto.arima(), ets(), thetam(), nnetar(), stlm(), and tbats()? It often improves the forecast quite a bit compared to the base forecasts.

If you have a hierarchical structure in your forecasts (such as product hierarchies or geographical hierarchies), I can highly recommend hts.
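A small sketch of the hts workflow, assuming the hts package and simulated bottom-level series (one per column), grouped into a two-level hierarchy:

```r
# Hierarchical forecasting with hts: define the hierarchy, then produce
# reconciled forecasts for all levels at once.
library(hts)

bts <- ts(matrix(rnorm(100 * 4, mean = 10), ncol = 4), frequency = 12)
# nodes: the top node splits into 2 groups, each containing 2 bottom series
h <- hts(bts, nodes = list(2, c(2, 2)))
fc <- forecast(h, h = 12, method = "comb", fmethod = "ets")
plot(fc)
```

method = "comb" reconciles the base forecasts so that the levels add up; "bu" (bottom-up) and top-down variants are also available.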

Thanks. I have seen and tried hts, and I was surprised by the poor results and long computation times. This could have been due to the structure I put the series in. I know it is often recommended to group stores geographically, but there could be other, potentially better groupings that I have not looked at yet, such as population size or surrounding demographics.

Though I have combined many forecasts, I have never used forecastHybrid. Combining forecasts has been one of the more successful approaches. Again, this is just with the data I have seen, so if you are reading this for your own application, test it on your own data.

Some of the other more successful approaches have been ets(), stlm(), and auto.arima(), or a combination of the above. This does vary by frequency: I have found, for example, that one method works well for quarterly data, another for monthly, and another for weekly.

If anyone is interested in digging into forecasting, I highly recommend checking out Forecasting: Principles and Practice. The second edition just came out in April 2018.

And we should not forget to mention the Stan-based prophet :)
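For completeness, a quick prophet sketch (assuming the prophet R package; it expects a data frame with columns ds for dates and y for values, and the simulated weekly pattern below is just illustrative):

```r
# Fit prophet on a daily series and forecast 30 days ahead.
library(prophet)

history <- data.frame(
  ds = seq(as.Date("2015-01-01"), by = "day", length.out = 365),
  y  = sin(2 * pi * (1:365) / 7) + rnorm(365, sd = 0.1)  # weekly cycle + noise
)
m <- prophet(history)
future <- make_future_dataframe(m, periods = 30)  # extend 30 days past history
fc <- predict(m, future)                          # yhat, yhat_lower, yhat_upper
plot(m, fc)
```

Under the hood prophet fits its additive model with Stan, which is why it fits this thread.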


It also depends on the reconciliation method used: the fairly recent MinT method is, in my experience, very fast (it boils down to standard linear algebra operations, on top of the cost of producing the base forecasts). It is also covered in a new chapter in fpp2.
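MinT reconciliation is exposed through the hts package's forecast method; a sketch, assuming a recent hts version where weights = "mint" with a shrinkage covariance estimator is supported:

```r
# MinT (minimum trace) reconciliation via hts.
library(hts)

bts <- ts(matrix(rnorm(120 * 4, mean = 10), ncol = 4), frequency = 12)
h <- hts(bts, nodes = list(2, c(2, 2)))
fc <- forecast(h, h = 12, method = "comb",
               weights = "mint",     # minimum-trace reconciliation weights
               covariance = "shr",   # shrinkage estimator of the error covariance
               fmethod = "arima")
plot(fc)
```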

I'll leave it at that, since this is no longer Stan related on my side.