Group-level vs. individual-level model comparison

I am comparing 3 (quite different) models of cognitive mechanisms underlying the performance of clinical patients (schizophrenia) in an experimental task.
I have a few options as to how to perform model comparison. Here are the pros and cons as I understand them, but I’d appreciate some feedback.

  1. Ideally I would fit a multilevel mixture model where theta is conditioned on participant. In practice this is impossible to fit properly.

  2. I can run loo and stacking weights on the 3 multilevel models. Pros: pooling. Cons: many in the field would object to participants being assumed to be similar. (A sketch of options 2 and 3 is below, after this list.)

  3. I can run the models on each individual separately and do loo and stacking weights at the individual level. Pros: individual weights, so different models can be better for different individuals; no pooling. Cons: no pooling.

  4. I could extract pointwise loo scores, group them by participant and do a post-hoc individual-level model comparison. Pros: pooling and individual weights. Cons: pooling; a bit convoluted.
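For options 2 and 3, this is roughly what I have in mind (a minimal sketch in R, assuming brms-style fits; fit1, fit2, fit3 and fits_by_id are placeholder names, not my actual objects):

```r
library(brms)
library(loo)

## Option 2: loo + stacking weights on the three full multilevel models
loo1 <- loo(fit1)
loo2 <- loo(fit2)
loo3 <- loo(fit3)
loo_model_weights(list(m1 = loo1, m2 = loo2, m3 = loo3), method = "stacking")

## Option 3: per-participant fits, stacking weights within each participant
## (fits_by_id[[id]] is assumed to be a list of the three models fitted to
## that participant's data alone)
weights_by_id <- lapply(fits_by_id, function(fits) {
  loo_model_weights(lapply(fits, loo), method = "stacking")
})
```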

At the moment I have implemented 2 and 3, with partially complementary results.

I think you’ve summarised the situation quite well. If I understand 4 correctly, it feels wrong (maybe @avehtari could provide some more thoughts).

At what level is this a concern? I see two possibilities:

  1. The pooling would assign similar model coefficients to different patients, which is problematic (IMHO this is easy to argue against, as you can put a wide prior on the between-patient variability)
  2. The idea of one model explaining all patients is problematic. In this case, you could do posterior predictive checks to see if this is actually the case. The fact that the model fits well obviously does not necessarily imply that the patients are similar, just that your model is useful.
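For the posterior predictive checks by participant, something along these lines would be a start (a minimal sketch, assuming a brms fit `fit` and a data frame `d` with the outcome in `d$y` and a participant id in `d$participant`; all names are placeholders):

```r
library(brms)
library(bayesplot)

# Replicated data from the posterior predictive distribution (draws x observations)
yrep <- posterior_predict(fit, ndraws = 200)

# Compare the observed per-participant mean with the replicated ones
ppc_stat_grouped(y = d$y, yrep = yrep, group = d$participant, stat = "mean")
```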

On a more philosophical note, I like the idea of checking whether the qualitative features of the model fit the data. In this sense loo, stacking or whatever you use may actually be misleading; there is some nice discussion of this by Danielle Navarro: OSF.

Finally, are you sure you can’t fit model 1? Maybe that’s something people here can help you with…


Thanks for the answer!

  1. Pooling vs. no pooling
    I agree. It’s just something that came up again and again when talking to psychiatry researchers. I have also tested no-pooling models explicitly against multilevel ones, and they fared much worse, but the critique still came. But yes, posterior predictive checks by participant are something to work on.

  2. Extracting individual loo scores from multilevel model
    I’d also be happy to hear more from @avehtari as to how the loo scores of individual participants in the multilevel model could be interpreted and what they could be used for.

  3. Mixture model
    One of the models at stake is pretty hard to fit (divergences and chains stalling). After discussions here (e.g. [Clarification] Divergences during sample_prior = "only" model fitting) we managed to fit it with no divergences by using a simplex prior on the between-subject variance across parameters. Adding that model to a mixture model has so far just generated endless divergences.

  4. Qualitative model checking: yes! We had Danielle in Aarhus a couple of weeks ago (talk here: https://youtu.be/tNkmsAOn7aU) and I am converted :-)

I have read your question a few times, and I don’t understand what you are trying to do. Can you explain a bit more? Meanwhile, see also the tutorial on CV for hierarchical models: https://avehtari.github.io/modelselection/rats_kcv.html

Aki

Thanks for the answer! The question is:

Can the pointwise (per data point) loo estimates be interpreted? E.g. if I extract the loo estimates from model 1 for all the data points concerning participant 1, and the same for model 2, can I then compare the models for participant 1 alone using those estimates? McElreath, in the new edition of his Statistical Rethinking book, explores which data points are easier to explain in one model compared to another, but does so only qualitatively.

I know this is not an ideal procedure, but it is still interesting to know whether and how the pointwise loo estimates can be interpreted and used to compare models for single data points or clusters of data points (e.g. one participant, or one stimulus).
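Concretely, I mean something like this (a rough sketch, assuming loo objects loo1 and loo2 from the two models and a participant index d$participant in the same observation order; names are placeholders):

```r
library(loo)
library(dplyr)

# Pointwise elpd differences between model 1 and model 2, one per data point
pointwise_diff <- data.frame(
  participant = d$participant,
  diff = loo1$pointwise[, "elpd_loo"] - loo2$pointwise[, "elpd_loo"]
)

# Sum the differences within each participant (> 0 favours model 1)
pointwise_diff %>%
  group_by(participant) %>%
  summarise(elpd_diff = sum(diff),
            se_diff   = sd(diff) * sqrt(n()))
```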

Thanks, this clarified the question.

Yes, you can do this comparison. The comparison is conditional on the model and the other data used to update the posterior, so it is indicative of which model would be good for each individual; but if you then create another model that uses model 1 for some participants and model 2 for other participants, the predictive performance for participant 1 can change. A further complication arises if the differences between the predictive performance estimates are small and there are several choices to be made; then it would be better to integrate instead of selecting.

Thanks! This is very helpful and makes perfect sense.

Hi, I have a related situation, so I thought maybe it makes most sense to tag on to this (sorry for the naive questions).

  1. Can I use the stacking weights as my reported model comparison in a paper? I.e. is the advantage over just summing up the loo scores for each model that stacking takes into account that one model might fit better for some people and another model for others? My problem is that model 1 fits better for most people, but model 2 fits hugely better for a few, so if I just sum up the loo scores it looks like model 2 is better.
  2. If my two models have the same number of parameters with exactly the same priors, do I still need to compute loo, or can I just use the log likelihood? (I was thinking of this because, e.g., for AIC the difference from the log likelihood is just a correction for the number of parameters.) Or is this wrong for a hierarchical model, since even though the models are set up to be the same, they might have different numbers of effective parameters because of the hierarchy?
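For point 2, I guess the thing to look at would be something like this (a toy sketch; fit1 and fit2 are placeholders for my two models): loo reports p_loo, the effective number of parameters, which could differ between hierarchical models even when the nominal parameter counts and priors are identical.

```r
library(loo)

loo1 <- loo(fit1)
loo2 <- loo(fit2)

loo1$estimates["p_loo", ]   # effective number of parameters, model 1
loo2$estimates["p_loo", ]   # effective number of parameters, model 2

# If these differ, comparing raw log likelihoods is not enough;
# compare elpd_loo instead
loo_compare(loo1, loo2)
```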