Continuing the discussion from Leave-future-out cross-validation for time-series models:
I have behavioural data where, for N trials, a subject chooses at time t between two options given some stimulus x_t. The subject's decision function seems to evolve over time, which I model with an AR(1) process on the subject's parameters \theta. Furthermore, there seems to be a tendency for the subject to repeat choices, i.e. the choice y_t depends directly on y_{<t}.
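For concreteness, here is a minimal sketch of the kind of generative model I have in mind (the logistic link, the parameter values, and the perseveration term `beta_rep` are illustrative assumptions; my actual model differs in detail):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200           # number of trials
rho = 0.95        # AR(1) autocorrelation (assumed)
sigma = 0.1       # AR(1) innovation scale (assumed)
beta_rep = 0.5    # choice-repetition (perseveration) weight (assumed)

x = rng.normal(size=N)        # stimuli
theta = np.zeros(N)           # time-varying decision parameter
y = np.zeros(N, dtype=int)    # binary choices

for t in range(N):
    if t > 0:
        # AR(1) evolution of the decision parameter
        theta[t] = rho * theta[t - 1] + sigma * rng.normal()
    # the previous choice enters the decision function directly
    prev = 2.0 * y[t - 1] - 1.0 if t > 0 else 0.0
    p = 1.0 / (1.0 + np.exp(-(theta[t] * x[t] + beta_rep * prev)))
    y[t] = rng.binomial(1, p)
```

So both sources of temporal dependence are present: the parameters drift via the AR(1) process, and each choice feeds into the next one's predictor.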
After reading the linked thread and this blog post, I find myself confused about the appropriateness of PSIS-LOO for my model class and would be grateful if someone could clarify this for me.

My model violates the exchangeability assumption required by LOO/WAIC, which makes me think it's inappropriate. On the other hand, I don't actually care about predicting the future, only about the model's ability to detect meaningful structure, which speaks in favour of LOO.

Understanding that PSIS-LOO is just an approximation to exact LOO-CV, I wonder if the following line of thinking is correct: "Because I consider LOO-CV an appropriate measure of model goodness (as I don't care about predicting the future), I should also consider PSIS-LOO to be appropriate."
Or does the lack of exchangeability in my model imply that the approximation itself will be poor?
A practical matter: I use ArviZ's loo function to compute PSIS-LOO. Even if PSIS-LOO is appropriate in principle, does the ArviZ implementation compute it correctly for a time series, where the order of observations matters? Or does it silently return a misleading estimate?
Thanks a lot!