I have a dataset of ~600 participants, each measured on four dichotomous outcomes. These are clinically meaningful (e.g. death), and the outcomes are expected to be correlated. The model will ultimately inform a patient-level simulation for a health economic model, so capturing this correlation is important. From reading this forum and a quick lit scan (e.g. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6290916/), I was thinking a multivariate GLM linked by patient-level random effects might be the solution, modeled for example as:
```r
o1 <- bf(out1 ~ (1 | p | id))
o2 <- bf(out2 ~ (1 | p | id))
m1 <- brm(o1 + o2...)
```
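For concreteness, a fuller sketch of what I am fitting is below. The `bernoulli()` family, the `out3`/`out4` names, and the `dat` data frame are filled in by me as placeholders; the key piece is the shared `| p |` ID, which ties the patient-level intercepts together so brms estimates their correlations:

```r
library(brms)

# one formula per dichotomous outcome; the shared "| p |" grouping ID
# makes the four patient-level intercepts correlated in the model
o1 <- bf(out1 ~ (1 | p | id))
o2 <- bf(out2 ~ (1 | p | id))
o3 <- bf(out3 ~ (1 | p | id))  # placeholder outcome names
o4 <- bf(out4 ~ (1 | p | id))

m1 <- brm(o1 + o2 + o3 + o4,
          data   = dat,          # placeholder data frame
          family = bernoulli())  # assumed, since outcomes are dichotomous
```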
This works in the sense that I end up with correlated predictions, but I am finding standard errors to be about 2-10 times as large as in the univariate models. Using `posterior_predict` with `allow_new_levels` on a new data set, the correlated model gives me average rates across the sample with 95% intervals from 0.04 to 0.72. The same interval for the univariate model is 0.17 to 0.27.
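For reference, this is roughly how I am generating those new-data predictions (the new-data frame, ID values, and outcome name are placeholders; for a multivariate model `posterior_predict` returns a draws x patients x outcomes array, so I average over patients within each draw):

```r
# hypothetical new cohort with unseen patient IDs, so the random
# intercepts are drawn from their estimated population distribution
nd <- data.frame(id = paste0("new", 1:100))

pp <- posterior_predict(m1, newdata = nd, allow_new_levels = TRUE)

# average rate across the sample for one outcome, per posterior draw
rate_draws <- apply(pp[, , "out1"], 1, mean)
quantile(rate_draws, c(0.025, 0.975))  # the 95% interval I quoted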
I feel like the issue here is that most of these multivariate GLMs are applied to longitudinal scenarios, and the approach may not translate to one observation per patient, but I thought I'd give a quick post here before giving up.