I’m having a hard time making sense of the write-ups on LFO cross-validation found on the Stan website (http://mc-stan.org/loo/articles/loo2-non-factorizable.html) and on arXiv (https://arxiv.org/pdf/1902.06281.pdf), both entitled “Approximate leave-future-out cross-validation for Bayesian time series models.” The Stan-website write-up (in the section “M-step-ahead predictions”) assumes a factorizable model, i.e., that the values y[t], 1 <= t <= T, are mutually independent conditional on the model parameters; but that is never true for anything but trivial time-series models! The whole point of a time-series model is that past values are informative about future values.
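To spell out the distinction I have in mind (my notation, not the papers’): a factorizable likelihood in the conditional-independence sense would mean

$$ p(y_{1:T} \mid \theta) \;=\; \prod_{t=1}^{T} p(y_t \mid \theta), $$

whereas a genuine time-series model only factorizes via the chain rule,

$$ p(y_{1:T} \mid \theta) \;=\; \prod_{t=1}^{T} p(y_t \mid y_{1:t-1}, \theta). $$

Even something as simple as an AR(1), where p(y_t | y_{1:t-1}, theta) = Normal(y_t | alpha + beta * y_{t-1}, sigma), violates the first form, because the conditional distribution of y_t depends on y_{t-1} and not just on the parameters.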

The arXiv write-up correctly notes that “Most time series models have a non-factorizable likelihood”, but its equation (10) then treats the likelihood as factorizable anyway. Furthermore, the formula for the raw importance ratios appears to be inverted: compare equation (10) in the arXiv write-up with the second equation in the section “Approximate M-SAP Using Importance Sampling” in the Stan-website write-up.
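For reference, my understanding of plain importance sampling (again, my notation): to approximate expectations under a target posterior p(theta | y_{1:t}) using draws theta^(s) from a proposal posterior p(theta | y_{1:t*}) with t* < t, the raw ratios should be target over proposal,

$$ r^{(s)} \;=\; \frac{p(\theta^{(s)} \mid y_{1:t})}{p(\theta^{(s)} \mid y_{1:t^*})} \;\propto\; p(y_{(t^*+1):t} \mid y_{1:t^*}, \theta^{(s)}). $$

One of the two equations I cited looks like the reciprocal of this, unless I’m misreading which distribution plays the role of the proposal in each write-up.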

Can the authors, or anyone else who understands the paper, help me out here?