Intraclass correlation coefficient of non-linear parameter

Hi all!

I am trying to estimate the intraclass correlation coefficient (ICC; or repeatability, R, as termed in ecology) of a nonlinear parameter at the individual level after fitting a model with brms. My data are spring molt progression rates of individual bighorn sheep. I have multiple molt estimates per individual per year (≈7 per id-year), which allows estimating a random intercept of year nested within id for the inflection point of a nonlinear logistic growth curve (see phi1 below). A simplified version of my code follows:

```r
# Define model priors
prior1 <-
  prior(normal(1, 0.5), nlpar = "phi2") +
  prior(normal(175, 8), nlpar = "phi1") +
  prior(cauchy(2, 3), class = "sd", group = "id", coef = "Intercept", nlpar = "phi1") +
  prior(cauchy(8, 4), class = "sd", group = "id:year", coef = "Intercept", nlpar = "phi1") +
  prior(student_t(3, 0, 2.5), class = "sigma")

# Fit model (jj is Julian date)
final <- brm(
  bf(moltProp ~ 1 / (1 + exp((phi1 - jj) / exp(phi2))),
     phi2 ~ 1,
     phi1 ~ 1 + (1 | id / year),
     nl = TRUE),
  data = molt, prior = prior1,
  iter = 8000, warmup = 4000, cores = 4, chains = 4
)
```

To estimate an ICC at the individual level for a linear multilevel model with (1|ID), I would simply use the standard deviation estimated for the ID random intercept (sd_id__Intercept) and the model residual standard deviation (sigma):

```r
hyp <- "sd_id__Intercept^2 / (sd_id__Intercept^2 + sigma^2) = 0"
(hyp <- hypothesis(final, hyp, class = NULL))
```

However, in the case of the nonlinear model, (1|ID/Year) is estimated for phi1 only, whereas the sigma returned in the summary is on the scale of moltProp (the response variable). Am I correct in thinking that sd_id__phi1_Intercept and sd_id:year__phi1_Intercept represent among- and within-individual variation for phi1, respectively? If so, could I estimate an ICC for phi1 using the following code?

```r
hyp <- "sd_id__phi1_Intercept^2 / (sd_id__phi1_Intercept^2 + sd_id:year__phi1_Intercept^2) = 0"
(hyp <- hypothesis(final, hyp, class = NULL))
```

If not, how could an ICC be estimated for phi1 at the individual level? In other words, is there a way to correctly extract or estimate within-individual variance for phi1 only?
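For reference, the same ratio could also be computed directly from the posterior draws instead of via `hypothesis()`. This is only a sketch: the draw names below are assumed from the (1|id/year) structure and should be checked against `colnames(as.data.frame(final))`.

```r
# Equivalent computation from the posterior draws
# (draw names assumed from the model above; verify them first)
draws    <- as.data.frame(final)
v_id     <- draws[["sd_id__phi1_Intercept"]]^2
v_idyear <- draws[["sd_id:year__phi1_Intercept"]]^2
icc_phi1 <- v_id / (v_id + v_idyear)
quantile(icc_phi1, probs = c(0.025, 0.5, 0.975))
```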

Data to run code are shared as a CSV file.
molt.csv (46.8 KB)

• Operating System: Windows
• brms Version: 2.15.0

Thank you very much!


I apologize to anyone who has taken time to try and run my code. I made a mistake in the data file I shared. Here is the correct data which allows the code to run smoothly:
Stan.csv (43.7 KB)


I didn’t try anything but I’m glad your problem is solved.

The problem isn’t solved actually. I simply shared the correct data to run the example code with the problem.

Oh, I thought running smoothly meant it worked, sorry.


A couple of questions:

• You say that year is nested within individual, but isn’t this a cross-classified model, i.e. all individuals experience all years? The standard deviation of individual-specific estimates is, therefore, for an average year while the standard deviation across years is for an average individual. The residual variance (`sigma^2` I assume in your model output) is the within-individual variance for an average year.

• You're modelling (I assume) `molt_prop` as a Gaussian variable, but it is necessarily bounded between 0 and 1. This probably isn't the best model for these data; I would consider something like a count model with an offset term or a beta likelihood.

In general, ICCs are not well-defined for non-linear models (as far as I understand) because the variances of level-2 effects (e.g. individual-specific intercepts) are not on the same scale as the residual variance. In linear Gaussian multilevel models, the individual-specific effects describe the variability of the conditional mean of your response variable, whereas non-linear models do not have this clear interpretation. In your case, you also have log-scale individual- and year-specific effects (because `phi1` is log-scale), but a residual variance for a Gaussian response variable on the original scale.

One way I think you could get around this is to use simulation (although I haven't tested this). Using the non-linear formula above, generate N posterior samples for the conditional mean (or the rate parameter in a count model etc.) where representative samples of the random effects are included by adding `rnorm(N, 0, sd_id__Intercept)`. You'd then have a representative set of samples for the conditional mean whose variance will reflect the between-individual variation. You could then use the traditional ICC formula, i.e. `var(simulations) / (var(simulations) + sigma^2)`.


Thank you so much!

1. I had modelled year nested within individual because I thought this would control for pseudo-replication within individual-years. I have multiple estimates of molt progression for each individual (≈7) within each year. I also have three years of study, with most individuals experiencing all three years, meaning most individuals have ≈21 molt estimates in the dataset. If I understand correctly, I used (1|id/year) as an equivalent to (1|id) + (1|idyear), where idyear would be a variable representing each individual-year in the dataset. I guess I could simply use (1|id) with a fixed effect of year as a factor, but I wonder if this is OK given the pseudo-replication I just described?

2. This has also puzzled me. I thought of using a beta likelihood, but it does not work with data containing many true zeros and ones. I could look into a count model with an offset term; however, molt proportion is not a count and varies continuously between no molt (i.e., 0) and complete molt (i.e., 1) every season. The equation I currently use forces lower and upper asymptotes of zero and one, respectively, on the logistic growth curve. Therefore, I think my model does a fairly good job of modelling the progression of molt proportion while accounting for its bounded nature. Posterior predictive checks and conditional effects also seem reasonable.

The idea of estimating the between-individual variance with simulations sounds very good and relatively simple. I’ll have to give it a try!

@Benjamin.Larue

I think your syntax was correct for the `brms` model, it was more the interpretation of the model I was getting at. Observations are nested within individuals and nested within years, but individuals are not nested within years in the traditional hierarchical sense because individuals experience all years. You could have split it up into the random effect of individual that reflects individual-specific effects for an average year, the random effect of year that reflects the year-specific effects for an average individual, and an interaction between individual and year that would give you the individual-year-specific effects. But I could have misunderstood your data structure.
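For concreteness, the split-up parameterisation I'm describing would look something like the following. This is an untested sketch reusing your variable names and priors; the prior on the `id:year` term would need its `group` label to match.

```r
# Cross-classified sketch: separate id, year, and id-by-year terms
# replace the nested (1|id/year) of the original model
final_cc <- brm(
  bf(moltProp ~ 1 / (1 + exp((phi1 - jj) / exp(phi2))),
     phi2 ~ 1,
     phi1 ~ 1 + (1 | id) + (1 | year) + (1 | id:year),
     nl = TRUE),
  data = molt, prior = prior1,
  iter = 8000, warmup = 4000, cores = 4, chains = 4
)
```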

My concern with your Gaussian likelihood was that you are predicting the mean of the Gaussian with your non-linear model, but there is nothing stopping the posterior predictive distribution (i.e. adding in the residual variance) making predictions outside of (0,1), in theory at least.

In terms of simulation, I ran a very simple and contrived example to make it more concrete:

```r
set.seed(12345)

# n data points per individual
N_i <- 5
# n individuals
N_j <- 500
N <- N_i * N_j
id <- rep(1:N_j, each = N_i)
alpha <- 0
beta <- 0.5
sigma <- 2
ICC <- 0.4
sigma_j <- sqrt(ICC * sigma^2 / (1 - ICC))
u <- rnorm(N_j, 0, sigma_j)
x <- rnorm(N)
eta <- alpha + beta * x + u[id]
y <- rnorm(N, eta, sigma)
hist(y, breaks = 50)

d <- data.frame(id = id, y = y, x = x)

fit <- brms::brm(
  y ~ 1 + x + (1 | id),
  data = d,
  cores = 4
)

draws <- as.data.frame(fit)

## estimate ICC via simulation
# for each individual, predict their average value,
# returning an S x N_j matrix (S = number of posterior samples)
preds <- sapply(1:N_j, function(i) draws$b_Intercept + draws[, grep("r_id", colnames(draws))][, i])
# for each row of the posterior samples, compute the variance across individuals
sim_var <- apply(preds, 1, var)
# compute ICC
icc_sim <- sim_var / (sim_var + draws$sigma^2)

# plot ICC estimates against the true value
plot(density(icc_sim), lwd = 3, main = "", xlab = "icc")
lines(density(draws$sd_id__Intercept^2 / (draws$sd_id__Intercept^2 + draws$sigma^2)), lty = 2, lwd = 3)
abline(v = ICC, col = "steelblue2", lwd = 3, lty = 3)
legend(
  "topright",
  legend = c("simulation", "model", "real"),
  lty = c(1, 2, 3)
)
```

Edit to add: for the count model, I was assuming you might have access to the raw data the proportions derive from.


@cgoold
Thank you so much for the explanations and the example!

I’ve been testing the simulation approach with my model in the last few days. I have positive results in that I can compute more realistic ICCs that are not <0.01 or >0.99 (contrary to other approaches I had tried). Despite the positive results, I still have a few concerns. In the example given above (or any other linear model), inter-individual variance does not change when the intercept changes or when parameters are added to the formula. For instance, in the linear example you shared:

`preds <- sapply(1:N_j, function(i) draws$b_Intercept + draws[,grep("r_id", colnames(draws))][,i])`

gives the same results as:

`preds <- sapply(1:N_j, function(i) draws[,grep("r_id", colnames(draws))][,i]) # no intercept`
or
`preds <- sapply(1:N_j, function(i) 10000 + draws[,grep("r_id", colnames(draws))][,i]) # arbitrary intercept of 10000`

In the case of my non-linear model, I needed inter-individual variance on the scale of the response variable (i.e. moltProp) so I could use my model's sigma estimate to compute an ICC. To get inter-individual variance on the right scale, I used the following:

`draws2 = as.data.frame(final)`
`draws2Test = draws2[,grep("r_id_", colnames(draws2))]`
`N_j = 85`
`preds <- sapply(1:N_j, function(i) (1 / (1 + exp(draws2Test/exp(draws2$b_phi2_Intercept))))[,i])`

I think this gives a good approximation of inter-individual variance on the moltProp scale at the inflection point. However, I notice that inter-individual variance shrinks rapidly when I simulate dates further from the inflection point, as in this example where I move 20 days away from it:

`preds <- sapply(1:N_j, function(i) (1 / (1 + exp((draws2Test + 20)/exp(draws2$b_phi2_Intercept))))[,i])`

This reduction makes sense given that the molt proportion of more and more individuals approaches the asymptote of one, thereby reducing inter-individual variance. On the other hand, I believe the sigma estimate of my model reflects the full range of dates in the data, meaning that if I use a simulated inter-individual variance taken at the inflection point only, the ICC will be overestimated. I wonder how to deal with this?
My first idea was to simulate inter-individual variance across the whole range of dates (approximately 50 days, or 25 days on either side of the inflection point) and then average the estimates, but I am not sure this is OK?
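In code, that averaging idea would look something like the following rough sketch, reusing the `draws2Test` and `draws2` objects from above (the ±25-day range is only an approximation of my data):

```r
# Between-individual variance of predicted molt proportion at each
# offset from the inflection point, then averaged over the date range
offsets <- -25:25   # days relative to the inflection point
var_by_day <- sapply(offsets, function(d) {
  preds <- sapply(1:N_j, function(i)
    (1 / (1 + exp((draws2Test + d) / exp(draws2$b_phi2_Intercept))))[, i])
  apply(preds, 1, var)          # variance across individuals, per draw
})
sim_var_avg <- rowMeans(var_by_day)   # average over dates, per draw
icc_avg <- sim_var_avg / (sim_var_avg + draws2$sigma^2)
```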

Any thoughts on how to solve this problem would be greatly appreciated!

`brms` also has a `zero_one_inflated_beta` family that might handle your data.

I was thinking similarly to @JLC and @cgoold that the response distribution is probably an important factor here. Though your mean function is constrained to between 0 and 1, your normal variance means that your model predicts values outside of (0,1). A zero-one-inflated beta model, or a beta model with censored responses, might be more justifiable.
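As an untested sketch of what that might look like, keeping your nonlinear mean function (which is already bounded in (0,1), so an identity link is used for the mean); the priors would need re-specifying for this likelihood, so I've left them out:

```r
# Zero-one-inflated beta sketch: same mean function, new likelihood
final_zoib <- brm(
  bf(moltProp ~ 1 / (1 + exp((phi1 - jj) / exp(phi2))),
     phi2 ~ 1,
     phi1 ~ 1 + (1 | id / year),
     nl = TRUE),
  data = molt,
  family = zero_one_inflated_beta(link = "identity"),
  iter = 8000, warmup = 4000, cores = 4, chains = 4
)
```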

Also, when using a constant variance model, wouldn’t this constant variance assumption have a strong influence on the estimate for sigma, and therefore on the ICC?