Leave-one-group-out CV

I hope you don’t mind me hijacking the thread for a question about leave-one-group-out cross-validation that’s been bugging me.

Suppose that I want to model N observations nested in J groups (for example, N responses from J participants in a psychological study). Going back to @avehtari's blog post (https://andrewgelman.com/2018/08/03/loo-cross-validation-approaches-valid/), this example would correspond to the school case rather than the state case, and one would want to estimate the model's prediction accuracy for a new participant.

Assuming that observations within each group are conditionally independent given the model parameters, couldn't I sum the pointwise log-likelihoods of all observations within a group, and then estimate the model's out-of-sample predictive accuracy for a new group by applying PSIS-LOO to the resulting J group-level (joint) log-likelihoods? I have read the relevant article and posts, but I have to admit that some of the details are beyond me. Is there a reason why this wouldn't be valid, or why k-fold stratified cross-validation would be preferable?
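To make the idea concrete, here is a rough sketch in Python of what I have in mind. The inputs (`log_lik`, `group`, and the random placeholder values) are purely hypothetical stand-ins for a fitted model's pointwise log-likelihood matrix and group indicator; the grouping step is just a column-wise sum within each group, and the ArviZ call at the end only illustrates where the J joint log-likelihoods would be handed to PSIS-LOO, with `reff=1.0` as a simplifying assumption.

```python
import numpy as np
import arviz as az

# --- Hypothetical placeholder inputs (stand-ins for a fitted model's output) ---
# log_lik: (S posterior draws, N observations) pointwise log-likelihood matrix
# group:   length-N vector of group ids in 0..J-1 (e.g. participant ids)
S, N, J = 2000, 500, 25
rng = np.random.default_rng(1)
log_lik = rng.normal(-1.0, 0.3, size=(S, N))
group = rng.integers(0, J, size=N)

# Sum the pointwise log-likelihoods within each group. If observations are
# conditionally independent given the parameters, each column of the result is
# the joint log-likelihood of one group, i.e. one "pseudo-observation".
group_log_lik = np.empty((S, J))
for j in range(J):
    group_log_lik[:, j] = log_lik[:, group == j].sum(axis=1)

# Hand the (S, J) matrix to PSIS-LOO as if the J groups were the data points.
# ArviZ expects shape (chain, draw, obs); reff=1.0 is a simplification here.
idata = az.from_dict(log_likelihood={"group": group_log_lik[None, :, :]})
loo_group = az.loo(idata, pointwise=True, reff=1.0)
print(loo_group)
```

If something like this were valid, I would expect the Pareto k diagnostics reported by that call to indicate whether the group-level importance sampling is reliable.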

Thank you in advance!