I am trying to figure out whether for my current project there could be a reason not to model a hierarchy:

I have collected data from 400 people on a computer task. I have analysed the data with a non-hierarchical model (a logistic regression reparameterized so that each person's 'noisiness'/'inverse temperature' is estimated separately from the regression weights), which suggested that, given an appropriate transformation, the parameter estimates are more or less normally distributed.

As a next step I want to link these parameter estimates to responses participants have made on questionnaires about psychiatric symptoms. For this I have used a regression analysis predicting the behavioural measures from the questionnaire scores.

My question now is: Should I include the behavioural measures that I obtain from the hierarchical or the non-hierarchical model? They are correlated, but not perfectly (r ≈ 0.8). And how would I decide? One hesitation I have about the hierarchical model is that each individual person's estimate will be influenced to some extent by the behaviour of other people, which might not be a desirable feature.

I do realise that an alternative would be - instead of running two types of models - to build one large model that explains both the behaviour in the computer task and the questionnaire responses at the same time. I'm hesitant to do that as a first step because it would be quite unusual for the field. Also, because this becomes a very large model, it seems to take a very long time to fit. Ideally I'd do it as well as the approach above and find the same result.

I’d be very grateful for any input on this
Jacquie

Hi,
since nobody else replied, I will give it a try, though my expertise on the topic is questionable, so only accept my suggestions if you can verify for yourself that they make sense in your case :-)

I think the only universal reason not to model a hierarchy is when the hierarchy does not make sense in the real world (e.g. putting a common hierarchical prior on temperature and population density). But there may also be application-specific reasons. I think the vibe here at the forums is that hierarchy tends to be super useful in almost all cases. One way to think about hierarchy is that it greatly reduces the variance of your estimates, as well as their total mean squared error, at the cost of a small additional bias (all estimates are shrunk towards the population mean).
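To make the bias/variance point concrete, here is a toy sketch in Python (not your actual task model - just normally distributed per-person parameters with noisy observations). It compares no-pooling estimates with an empirical-Bayes-style partially pooled estimator; the amount of shrinkage and the variance decomposition are illustrative assumptions, not anything from your data.

```python
import numpy as np

# Toy illustration: each of 400 "people" has a true parameter drawn from a
# normal population; we observe a few noisy trials per person and compare
# no-pooling estimates with partially pooled ones.
rng = np.random.default_rng(0)
n_people, n_trials = 400, 5
true_theta = rng.normal(0.0, 1.0, n_people)            # population sd = 1
obs = true_theta[:, None] + rng.normal(0.0, 2.0, (n_people, n_trials))

# No pooling: each person's estimate is their own sample mean.
no_pool = obs.mean(axis=1)

# Partial pooling (empirical-Bayes style): shrink each sample mean toward the
# grand mean by a factor based on within- vs between-person variance.
sigma2_within = 2.0**2 / n_trials                      # variance of a sample mean
sigma2_between = max(no_pool.var() - sigma2_within, 1e-9)
shrink = sigma2_between / (sigma2_between + sigma2_within)
partial_pool = no_pool.mean() + shrink * (no_pool - no_pool.mean())

mse = lambda est: np.mean((est - true_theta) ** 2)
print(f"MSE no pooling:      {mse(no_pool):.3f}")
print(f"MSE partial pooling: {mse(partial_pool):.3f}")
```

Each individual pooled estimate is biased towards the grand mean, yet the total mean squared error across people goes down - exactly the trade-off described above.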

I think you should start with posterior predictive checks, which let you see whether there is important disagreement between your model and the data. You can look, for example, at per-person predictions or at the predicted within- and between-person variance. Note that the hierarchical model is (usually - depending on the specific priors) more constrained, so if neither model disagrees with your data, the hierarchical one is IMHO to be preferred on Occam's-razor-type grounds.
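A posterior predictive check on the between-person variance could be sketched like this in Python. Note that `post_mu`, `post_tau`, and `post_sigma` here stand in for posterior draws you would obtain from your actual fitted hierarchical model - I simply fabricate them for illustration:

```python
import numpy as np

# Sketch of a posterior predictive check on the between-person variance.
rng = np.random.default_rng(1)
n_draws, n_people, n_trials = 1000, 400, 20
obs = rng.normal(rng.normal(0, 1, n_people)[:, None], 1.5, (n_people, n_trials))

# Placeholder "posterior draws"; in practice these come from your fitted model.
post_mu = rng.normal(0.0, 0.05, n_draws)
post_tau = np.abs(rng.normal(1.0, 0.05, n_draws))
post_sigma = np.abs(rng.normal(1.5, 0.05, n_draws))

obs_stat = obs.mean(axis=1).var()          # observed between-person variance

rep_stats = np.empty(n_draws)
for d in range(n_draws):
    # Simulate a replicated dataset from one posterior draw.
    theta_rep = rng.normal(post_mu[d], post_tau[d], n_people)
    y_rep = rng.normal(theta_rep[:, None], post_sigma[d], (n_people, n_trials))
    rep_stats[d] = y_rep.mean(axis=1).var()

# Posterior predictive p-value: values near 0 or 1 flag disagreement
# between the model and the data on this particular statistic.
ppp = (rep_stats >= obs_stat).mean()
print(f"observed stat: {obs_stat:.3f}, PP p-value: {ppp:.2f}")
```

The same template works for any statistic you care about (per-person means, within-person variance, tail quantiles); you would just swap out `obs_stat` and the replicated statistic.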

With that said, it is IMHO good practice to actually do both and include the alternative analyses in an appendix, in the spirit of multiverse analysis. Then, if a reader disagrees about which model is better, they can easily check how much the results change under a different model.

You could also use loo, or cross-validation more generally, to assess the predictive performance of the models and choose the one that generalizes better.
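The idea behind this comparison can be sketched without any Stan machinery: hold out some trials per person, form both kinds of estimates from the remaining trials, and compare held-out predictive error. This is a crude stand-in for loo, with made-up data and a simple MSE score instead of the log predictive density:

```python
import numpy as np

# Sketch: compare no-pooling vs partial-pooling estimates by how well they
# predict held-out trials (a simple stand-in for loo / cross-validation).
rng = np.random.default_rng(2)
n_people, n_trials = 400, 6
theta = rng.normal(0, 1, n_people)
y = theta[:, None] + rng.normal(0, 2, (n_people, n_trials))

train, test = y[:, :3], y[:, 3:]           # split trials within each person

no_pool = train.mean(axis=1)
s2_within = 2.0**2 / 3                     # variance of a 3-trial mean
s2_between = max(no_pool.var() - s2_within, 1e-9)
shrink = s2_between / (s2_between + s2_within)
partial = no_pool.mean() + shrink * (no_pool - no_pool.mean())

# Held-out predictive error for each set of estimates.
mse = lambda est: np.mean((test - est[:, None]) ** 2)
print(f"held-out MSE, no pooling:      {mse(no_pool):.3f}")
print(f"held-out MSE, partial pooling: {mse(partial):.3f}")
```

In a real analysis you would use the pointwise log-likelihood and PSIS-LOO (e.g. via the loo package) rather than a trial split, but the logic - prefer the model that predicts unseen data better - is the same.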

Note that - on the philosophical level - the reasoning can also be reversed: you would certainly expect people to be at least a little bit similar to each other, so assuming they are independent could easily be seen as undesirable.

I am not sure I understand you here, but wouldn't a middle ground be to predict the actual task output by regressing on the questionnaire responses? (Or is that what you had in mind originally?) This, however, doesn't absolve you of the hierarchy vs. non-hierarchy choice.

This is an excellent point. Given that, when I fit the data individually and plot histograms of the participants' parameters, they do look normally distributed, the right thing really seems to be to capture this directly in the model.