Leave-future-out cross-validation for time-series models

The link you provided to the Stan website has the title "Leave-one-out cross-validation for non-factorizable models". Did you mean to link to "Approximate leave-future-out cross-validation for Bayesian time series models • loo" instead?

If we have a model p(y_i|f_i,\phi) and a joint time series prior on (f_1,...,f_T), then the y_i can be considered independent given f_i and \phi, and the likelihood is factorizable. It is often true that the past values are informative about future values, but conditionally on knowing f_i, the past values provide no additional information. This should not be confused with the fact that when we don't know f_i and integrate over the posterior of (f_1,...,f_T), the y_i are no longer independent. They are also no longer exchangeable, as the time ordering carries additional information. In cross-validation, M-step-ahead prediction is more about the usual interest in predicting the future and evaluating the time series model for (f_1,...,f_T), whereas leave-one-out cross-validation is valid for assessing the conditional part p(y_i|f_i).
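In symbols, the distinction is the following (same notation as above): the likelihood factorizes conditionally on the latent values, but the marginal likelihood, after integrating over the joint time series prior, does not factorize over i.

```latex
% conditional likelihood: factorizable
p(y_1,\dots,y_T \mid f_1,\dots,f_T, \phi) = \prod_{i=1}^{T} p(y_i \mid f_i, \phi)

% marginal likelihood: not factorizable over i
p(y_1,\dots,y_T \mid \phi)
  = \int \prod_{i=1}^{T} p(y_i \mid f_i, \phi)\,
    p(f_1,\dots,f_T \mid \phi)\, \mathrm{d}f_1 \cdots \mathrm{d}f_T
```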

Thanks, we'll fix this so it doesn't say that.

Thanks, we need to fix the vignette. We first implemented leave-future-out by fitting the model with all the data and then removing observations one by one, but after an anonymous hint we realized that it is better to first fit the model with the minimal amount of data and then add observations one by one, as then the proposal distribution tends to have thicker tails than the target.
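To illustrate why the forward direction works, here is a minimal sketch (my own toy construction, not the loo/brms implementation) of forward leave-future-out with importance weighting. It uses a conjugate normal-normal model with a single mean parameter mu, so the exact posteriors and predictive densities are available to compare against; all names and settings are assumptions for illustration only.

```python
# Toy sketch of forward leave-future-out with importance weighting.
# Conjugate normal-normal model (known sigma), so exact answers exist.
import numpy as np

rng = np.random.default_rng(0)
T, sigma, tau0 = 40, 1.0, 5.0            # data size, obs sd, prior sd of mu
y = rng.normal(1.5, sigma, size=T)

def norm_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def exact_posterior(y_seen):
    """Exact posterior of mu given observed data (normal-normal conjugacy)."""
    prec = 1.0 / tau0**2 + len(y_seen) / sigma**2
    return np.sum(y_seen) / sigma**2 / prec, np.sqrt(1.0 / prec)

L, S = 10, 50_000                        # minimal data size, number of draws
# Fit once with the minimal data: draws from p(mu | y_1..L) are the proposal.
# Because it conditions on less data, it has thicker tails than any later
# target p(mu | y_1..i), i > L -- the good direction for importance sampling.
m, s = exact_posterior(y[:L])
mu = rng.normal(m, s, size=S)
logw = np.zeros(S)                       # log importance weights

elpd_is, elpd_exact = 0.0, 0.0
for i in range(L, T):
    # Importance-sampling estimate of the 1-step-ahead predictive density
    # for the next observation y[i], given the first i observations.
    w = np.exp(logw - logw.max())
    w /= w.sum()
    elpd_is += np.log(np.sum(w * norm_pdf(y[i], mu, sigma)))
    # Exact predictive density for comparison.
    mi, si = exact_posterior(y[:i])
    elpd_exact += np.log(norm_pdf(y[i], mi, np.sqrt(si**2 + sigma**2)))
    # Absorb y[i] into the weights: the target moves one observation forward.
    # (The real method monitors the Pareto k diagnostic and refits the model
    # when the importance weights degrade too much.)
    logw += np.log(norm_pdf(y[i], mu, sigma))

print(round(elpd_is, 2), round(elpd_exact, 2))
```

Going in the opposite direction (fitting with all the data and removing observations) would make the proposal narrower than the target, which is exactly when importance sampling breaks down.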

Tagging @paul.buerkner so that he also sees what we need to fix.

I hope this helps
