Continuing the discussion from Leave-future-out cross-validation for time-series models:
I have some behavioural data where, for N trials, a subject chooses at time t between two options given some stimulus x_t. The subject's decision function seems to evolve over time, which I model with an AR(1) process on the subject's parameters \theta. Furthermore, there seems to be a tendency for the subject to repeat choices, i.e. the choice y_t depends directly on y_{<t}.
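To make the setup concrete, here is a minimal simulation of the kind of data-generating process I have in mind. All parameter names and values (`rho`, `sigma`, `beta_stick`, the logistic link) are hypothetical illustrations, not my actual model:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200                      # number of trials
rho, sigma = 0.95, 0.1       # AR(1) persistence and innovation sd (made-up values)
beta_stick = 0.8             # tendency to repeat the previous choice (made up)

x = rng.normal(size=T)       # stimuli

# time-varying decision weight theta_t follows an AR(1) process
theta = np.empty(T)
theta[0] = rng.normal()
for t in range(1, T):
    theta[t] = rho * theta[t - 1] + sigma * rng.normal()

# binary choices: depend on the stimulus via theta_t and on the previous choice
y = np.empty(T, dtype=int)
prev = 0
for t in range(T):
    logit = theta[t] * x[t] + beta_stick * (2 * prev - 1)
    p = 1.0 / (1.0 + np.exp(-logit))
    y[t] = int(rng.random() < p)
    prev = y[t]
```

The point is just that both sources of temporal dependence are present: the parameters drift (AR(1) on theta) and the likelihood of y_t conditions on y_{t-1}.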
After reading the linked thread and this blogpost, I find myself confused about the appropriateness of PSIS-LOO for my model class and would be grateful if someone could clarify this for me.
---
My model violates the exchangeability assumption required by LOO/WAIC, which makes me think it is inappropriate. On the other hand, I don't actually care about predicting the future, only about the model's ability to detect meaningful structure, which speaks in favour of LOO.
---
Understanding that PSIS-LOO is just an approximation to LOO-CV, I wonder whether the following line of thinking is correct: "Because I consider LOO-CV an appropriate measure of model goodness (as I don't care about predicting the future), I should also consider PSIS-LOO to be appropriate."
Or does the lack of exchangeability in my model imply that the approximation will be bad?

---
A practical matter: I use arviz's `loo` function to compute PSIS-LOO. Even if using PSIS-LOO is in principle appropriate, does the implementation in arviz compute it correctly for a time series where the order matters? Or does it silently fail?
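For reference, my understanding is that PSIS-LOO is built on importance-sampling LOO, whose weights for leaving out observation i are proportional to 1/p(y_i | \theta^s). Below is a numpy sketch of that raw (un-smoothed) computation on a made-up S x N pointwise log-likelihood matrix; PSIS additionally stabilises the weights with Pareto smoothing, which I omit here. The sketch illustrates that the computation consumes only the pointwise log-likelihood matrix and has no notion of temporal order:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical pointwise log-likelihood draws: S posterior draws x N observations
S, N = 2000, 50
log_lik = rng.normal(-1.0, 0.3, size=(S, N))

# raw importance-sampling LOO (what PSIS stabilises with Pareto smoothing):
# leaving out observation i uses weights proportional to 1 / p(y_i | theta^s)
log_w = -log_lik
log_w -= log_w.max(axis=0)        # subtract per-column max for numerical stability
w = np.exp(log_w)
w /= w.sum(axis=0)                # self-normalise the weights per observation

# elpd_loo contribution of observation i: log of the weighted likelihood average
elpd_i = np.log((w * np.exp(log_lik)).sum(axis=0))
elpd_loo = elpd_i.sum()
```

If this is right, then whatever order the observations come in, the result estimates leave-one-out predictive density conditional on all *other* data, including the future, which is exactly what makes me unsure it is the right target for a time series.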
Thanks a lot!