Calculating Intra-Cluster Correlation (ICC) for the Group-Level Effect in a Multinomial GLMM


#1

Hosmer et al. (2013: 327) advise that in a logistic regression model with a single random (group-level) effect, the so-called intra-cluster correlation is an estimate of the proportion of the overall variability accounted for by the random (group-level) effect. The formula that they provide for computing this statistic is \frac{\sigma^2}{\sigma^2 + \frac{1}{3}\pi^2}, where \sigma is the estimated standard deviation of the random (group-level) effect. How do I apply this formula to a multinomial model with C = 4 categories and one random effect per non-baseline category? The raw frequencies of the categories are c_1 = 688 (baseline category), c_2 = 747, c_3 = 667, and c_4 = 437, respectively.
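For concreteness, the formula works out like this on the latent logistic scale (a minimal sketch; the \sigma value below is a placeholder, not an estimate from my actual model):

```python
import math

def latent_icc(sigma: float) -> float:
    """Hosmer et al.'s latent-scale ICC for a logistic model:
    sigma^2 / (sigma^2 + pi^2/3), where sigma is the estimated SD
    of the random (group-level) effect."""
    return sigma**2 / (sigma**2 + math.pi**2 / 3)

# Placeholder SD, just to illustrate the shape of the calculation:
print(round(latent_icc(1.0), 3))  # 0.233
```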

If left to my own devices I’d probably calculate the average of the three ICCs, weighted by the overall counts of the three non-baseline categories. But I’d rather not rely on my own vague and error-prone mathematical intuitions.


Hosmer, D. W., Lemeshow, S. & Sturdivant, R. X. (2013). Applied logistic regression (3rd ed.). Hoboken, N.J.: Wiley.

#2

I’m not sure what you mean by “per regression equation”. Is it just a discrete (as opposed to truly multinomial) regression?

If you want the posterior standard deviation of a parameter, you need to do that outside of Stan using the posterior draws.


#3

Sorry, that must have been an erroneous wording (I’ve edited it now). It’s multinomial. With 4 outcome categories, the model is composed of 3 binary logistic regressions, each comparing one non-reference outcome with the reference outcome. So for each parameter specified in the model formula, there are three parameters (one for each component logistic regression). What I want to do is calculate a “summary statistic” of the ICC for the entire multinomial model, using the three individual ones.


#4

Why not use multi-logit? It’s the usual approach for multiple categories. There’s a discussion in the manual regression chapter.

That doesn’t answer this question, but presumably what you’re trying to work out is reduction in variance from the predictors.


#5

There is an icc() function in the sjstats package, which can be used for stanreg or brms objects.
However, for non-Gaussian models, it is recommended to calculate the ICC based on the posterior predictive distribution (I think this was stated by @Bob_Carpenter or @bgoodri). You can use the ppd argument to calculate the ICC based on the posterior predictions.


#6

Here’s the link to the discussion.


#7

Thanks for the input, strengejacke. I tried the package you’re referring to. And indeed, its author defines ICC identically to what I’ve understood it to mean: “the proportion of the variance explained by the grouping structure in the population”. But the output of the icc() function is cryptic to me:

```
icc(mod, ppd = TRUE)

# Random Effect Variances and ICC

Family: categorical (logit)
Conditioned on: all random effects

## Variance Ratio (comparable to ICC)
Ratio: 0.01  HDI 89%: [-0.10 0.13]

## Variances of Posterior Predicted Distribution
Conditioned on fixed effects: 1.10  HDI 89%: [0.97 1.22]
Conditioned on rand. effects: 1.11  HDI 89%: [1.07 1.14]

## Difference in Variances
Difference: 0.01  HDI 89%: [-0.11 0.14]
```

A ratio comparable to ICC is 0.01? That would mean next to no ICC, implying that the random effect has little explanatory power. But this is not true – even just comparing the deviances of the two models yields a difference of over 600 points on three degrees of freedom! Also, the random-effects model predicts 62% of the quaternary responses correctly, compared to 57% for the fixed-effects model. The group-level effect doesn’t seem inconsequential to me.

Also, running icc(mod, ppd = FALSE) yields:

```
# Random Effect Variances and ICC

Family: categorical (logit)

## trigger
          ICC: 0.50  HDI 89%: [0.36 0.65]
Between-group: 1.07  HDI 89%: [0.46 1.66]

## Residuals
Within-group: 1.00  HDI 89%: [1.00 1.00]
```

This says the ICC is 0.5, i.e. the random effect accounts for half the variance. This, on the other hand, sounds very extreme – none of the three component logistic regressions constituting this multinomial model seems to have an ICC that high: calculating them individually using Hosmer et al.’s formula (see first post) yields 0.24, 0.25, and 0.47, respectively. Their average (weighted by sample size) is 0.30.
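For what it’s worth, the weighted average I mention is just this (a quick sketch; the per-category ICCs and the counts are the ones quoted in this thread):

```python
# Per-category ICCs from Hosmer et al.'s latent-scale formula (quoted above)
iccs = [0.24, 0.25, 0.47]
# Counts of the three non-baseline categories: c_2, c_3, c_4
counts = [747, 667, 437]

# ICC averaged over the component logistic regressions,
# weighted by each category's sample size
weighted = sum(i * n for i, n in zip(iccs, counts)) / sum(counts)
print(round(weighted, 2))  # 0.3
```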

Which of the two values seen above is the estimated ICC? The 0.01 or the 0.50 – or neither?

Hmm. If I had to guess, I’d guess it’s the second one, calculated with ppd = FALSE. That’s where the output correctly names the random effect (‘trigger’) whereas the output of ppd = TRUE just uses the generic phrase “all random effects”. But still – 50 percent?