 # Multivariate multilevel distributional model + random correlations

• Operating System: Mac
• brms Version: 2.8.8

Say I have longitudinal data on two response variables. The number of measurement occasions t is reasonably large, say t = 100. I then fit a multivariate intercepts-only model where the two random intercepts are themselves allowed to correlate. Since I also care about individual differences in sigma, I impose a hierarchical structure on that as well and allow correlations among all 4 random parameters with the |x| syntax.

bfa <-
  bf(a ~ 1 + (1 |x| id),
     sigma ~ 1 + (1 |x| id))

bfb <-
  bf(b ~ 1 + (1 |x| id),
     sigma ~ 1 + (1 |x| id))

fit1 <-
  brm(data = my_data,
      mvbf(bfa, bfb))


My main substantive question concerns the correlation of a and b within participants (i.e., within id). My initial thought was that this would be the correlation between b_sigma_a_Intercept and b_sigma_b_Intercept, but that doesn’t seem quite right. Substantively, the within-lag correlations between a and b will differ across participants.

So, shouldn’t that correlation itself be random? If my reasoning isn’t too wacky, is this possible in brms?

I don’t really see the lag being modeled somewhere in the model. What are a and b exactly and could you perhaps write down your model using mathematical notation?

I suppose the language of lags unnecessarily muddied the water. Let’s retract that part.

Say a and b are anxiety and depression ratings. They’re collected once daily for 100 days across a good number of participants. I’m interested in person-specific correlations between the two.

Eventually, the model would have predictors for the two variables (e.g., time, day of the week, workday…). But that’s down the road.

I imagine the correlations between the two as having a population mean and person-specific deviations. So I suppose I’m looking for something like \rho_{[a, b]_\text{id}}.
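To put that a bit more formally (just a sketch of what I have in mind; g is a placeholder for any link function mapping the real line to (-1, 1)):

$$
\rho_{id} = g^{-1}\!\left(\beta_0 + u_{id}\right), \qquad u_{id} \sim \operatorname{Normal}(0, \sigma_u^2)
$$

where \beta_0 is the population-level mean correlation on the link scale and u_{id} is the person-specific deviation from it.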

I see. I think this would indeed come down to predicting the (residual) correlations of the multivariate model, but this is not yet possible in brms. See https://github.com/paul-buerkner/brms/issues/254

Thanks! Looks like I have some reading to do.

I think I gave you the wrong link above. I just corrected it.

So if I’m following the updated link correctly, you’re hoping to take cues from @bgoodri’s comments on that thread to allow for partial pooling across correlation matrices as part of the brms version 3.0 update.

UPDATED: My first post contained unnecessary code parts in the Stan program; these have now been removed.

In case it’s of interest, I have taken a (probably completely wrong) shot at the bivariate case of this problem, and I’d be curious to hear whether what I’m doing makes sense. Specifically, I took a bivariate distributional brms model and modified the Stan code to predict the correlations between F1 and F2.

The model is meant to analyze vowel formants F1 and F2 (the two acoustic dimensions along which vowels are distinguished), for which we are aiming to predict whether accent (whether the talker is a native or non-native speaker of the language) affects the distribution of the vowel in F1-F2 space. For simplicity’s sake I’m assuming vowels have bivariate Gaussian distributions in F1-F2 space, and I’ll focus on one vowel.

Here is the bivariate model of F1 and F2, with linear predictors for the means and standard deviations but not for the correlations between F1 and F2 (Talker is nested within Accent, since each talker is either a native or a non-native speaker of the language):

bf_F1 =
  bf(F1 ~ 1 + Accent + (1 | p | Talker)) +
  lf(sigma ~ 1 + Accent + (1 | q | Talker)) + gaussian()

bf_F2 =
  bf(F2 ~ 1 + Accent + (1 | p | Talker)) +
  lf(sigma ~ 1 + Accent + (1 | q | Talker)) + gaussian()

formula = bf_F1 + bf_F2


I basically want to extend the model to also include:

rho ~ 1 + Accent + (1 | Talker)

Here’s the modified Stan code and some simulated data:

SimulatedF1F2.csv (81.1 KB)

Following Bloome and Schrage’s (ordinary, not multilevel) model, the linear predictor for the correlations is on the logit scale (converted into correlations via inv_logit(x) * 2 - 1), though I am wondering whether this has unwanted downsides (feedback welcome). Another note that might be helpful: I stuck as close as possible to the original brms-generated Stan code, including some redundancies that I assume stem from brms’s goal of generating maximally general code.
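For what it’s worth, inv_logit(x) * 2 - 1 is algebraically identical to tanh(x / 2), i.e. the inverse of (twice) Fisher’s atanh link for correlations. A quick check (Python only for illustration; the model itself is in Stan):

```python
import math

def inv_logit(x):
    """Standard logistic function, mapping the real line to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def to_correlation(x):
    """Map an unbounded linear predictor to a correlation in (-1, 1)."""
    return inv_logit(x) * 2.0 - 1.0

# The transform coincides with tanh(x / 2) everywhere:
for x in (-3.0, -0.5, 0.0, 0.5, 3.0):
    assert abs(to_correlation(x) - math.tanh(x / 2.0)) < 1e-12

# A zero linear predictor maps exactly to a correlation of 0:
assert to_correlation(0.0) == 0.0
```

One consequence of this link is that effects on the correlation scale get compressed as the correlation approaches ±1, which may or may not be what you want.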

And here is some example code to hijack brms’ make_standata() function to prep the input for the model and run it (for this particular example):

library(brms)   # for make_standata()
library(rstan)  # for stan()

brm_cor = function(formula, data, filename, ...) {
  if (file.exists(filename)) {
    # reuse the cached fit rather than refitting
    m = readRDS(filename)
  } else {
    standata.new = make_standata(formula, data)
    # remove residual-correlation-related input
    standata.new$nrescor = NULL
    # copy over everything from F1 (same as F2) to the input for the correlations
    standata.new$N_F1F2 = standata.new$N_F1
    standata.new$K_cor_F1F2 = standata.new$K_F1
    standata.new$X_cor_F1F2 = standata.new$X_F1
    standata.new$N_3 = standata.new$N_1
    standata.new$M_3 = standata.new$M_1
    standata.new$J_3_F1F2 = standata.new$J_1_F1
    standata.new$Z_3_cor_F1F2_1 = standata.new$Z_1_F1_1
    standata.new$NC_3 = standata.new$NC_1

    m = stan(
      file = "Distributional-Lobanov F1F2-cor-i.stan",
      data = standata.new,
      chains = 4,
      warmup = 1000,
      iter = 2000,
      cores = 4,
      ...
    )
    saveRDS(m, filename)
  }

  return(m)
}

bf_F1 =
  bf(F1 ~ 1 + Accent + (1 | p | Talker)) +
  lf(sigma ~ 1 + Accent + (1 | q | Talker)) + gaussian()

bf_F2 =
  bf(F2 ~ 1 + Accent + (1 | p | Talker)) +
  lf(sigma ~ 1 + Accent + (1 | q | Talker)) + gaussian()

m.simulated.cov = brm_cor(
  formula = bf_F1 + bf_F2,
  data = d,
  filename = "../models/Distributional-SimulatedData-cor.rds"
)


The model infers the same means and variances as the brms model without correlations (so far so good). Qualitatively, the correlations also seem to be recovered correctly (with shrinkage to 0, it seems). I’m still running simulations to test whether what I did makes sense.

Novice question: is some of the functionality desired above available in the HMSC package? Perhaps it has ideas for implementing this in brms?

And here is an example. I simulated 40 talkers (20 per accent) with 40 simulated vowel productions per talker, and the following statistics (the first element is the value for Accent 1, the second the value for Accent 2; mu1 refers to the mean of cue F1, etc.):

mu1 = c(-1.5, 1.5)
mu2 = c(-1.5, -.5)
sigma1 = c(.5, 1)
sigma2 = c(.3, .3)
rho12 = c(0, -.7)
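In case it helps anyone replicate this, here is a minimal sketch of that data-generating process (in Python only for illustration; the actual simulation was presumably done in R and also included talker-level variation, which this sketch omits, so the function and variable names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-accent parameters from the post: (Accent 1, Accent 2)
mu1, mu2 = (-1.5, 1.5), (-1.5, -0.5)
sigma1, sigma2 = (0.5, 1.0), (0.3, 0.3)
rho12 = (0.0, -0.7)

def simulate_accent(a, n_talkers=20, n_tokens=40):
    """Draw n_talkers * n_tokens bivariate-Gaussian (F1, F2) pairs for accent index a."""
    cov = np.array([
        [sigma1[a] ** 2,                      rho12[a] * sigma1[a] * sigma2[a]],
        [rho12[a] * sigma1[a] * sigma2[a],    sigma2[a] ** 2],
    ])
    mean = np.array([mu1[a], mu2[a]])
    return rng.multivariate_normal(mean, cov, size=n_talkers * n_tokens)

d2 = simulate_accent(1)  # Accent 2
print(np.corrcoef(d2[:, 0], d2[:, 1])[0, 1])  # should be near -0.7
```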


I get output:

F1 mean in Accent 2, mu = 1.02560410447052
F1 mean in Accent 1, mu = -1.83847794291701
Difference in F1 mean, diff(mu) = 2.86408204738753

F2 mean in Accent 2, mu = -0.424986547061549
F2 mean in Accent 1, mu = -1.30599309751299
Difference in F2 mean, diff(mu) = 0.881006550451442

Variability along F1 in Accent 2, sd = 1.13850891134698
Variability along F1 in Accent 1, sd = 0.556999243128987
Difference in variability along F1, diff(sd) = 0.58150966821799

Variability along F2 in Accent 2, sd = 0.436877492880983
Variability along F2 in Accent 1, sd = 0.280503100538621
Difference in variability along F2, diff(sd) = 0.156374392342362

F1-F2 correlation of Accent 2, r = -0.675749303462245
F1-F2 correlation of Accent 1, r = 0.0126401142165298
Difference in F1-F2 correlation, diff(r) = -0.688389417678775

Thank you for posting this paper. I will look into it. A first glance suggests that it is indeed relevant and might subsume the problem I’m trying to address (and apparently does so outside of Stan).