Multivariate random-effects meta-analysis in brms accounting for different time points

I’m drafting a protocol for a meta-analysis of outcomes extracted from randomized clinical trials (RCTs), with no individual-level patient data available. There are five clinical outcomes of interest, which will likely be available from different trials (anticipating between three and ten trials) at different time points; not all trials will report all outcomes at all time points. For some, but not all, outcomes, study-level correlations are available from a previous meta-analysis. The goal is to estimate a treatment effect per outcome (comparing t1 with t2; in the toy data below, all outcome values are assumed to be t1 - t2) but not necessarily per time point, i.e., time can be “marginalized” over (I’m not entirely sure this is the correct term).

I’d like to use multivariate meta-analysis, as described, for example, in the NICE DSU technical documentation (Multivariate meta-analysis TSD | NICE Decision and Technical Support Unit | The University of Sheffield), preferably in brms, but I’m struggling with the implementation for the study-outcome-time structure.

After reading through previous posts here, including on random-effects structure in multilevel MA (Dependency and random effect structure in a multilevel meta-analysis), multivariate MA syntax (Brms: Multivariate meta-analysis syntax), and an (unanswered) multivariate multilevel MA question (Multivariate Multilevel Meta-Regression with brms), as well as further Googling and reading through the brms docs and Solomon Kurz’s blog posts, I’ve come up with the code below. Based on my understanding, this should:

  • Implement a random effect per outcome-study combination
  • “Marginalize” over time points, which, based on my admittedly limited understanding, implies exchangeability over time and therefore compound symmetry (an assumption I’m willing to make, not least because I don’t think there will be enough data to model time explicitly)
  • Imply correlation across outcomes (where I can’t use the published correlations directly, but could investigate them in sensitivity analyses)
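To check my own reasoning on the second point: because rows from the same study-outcome pair at different weeks share one and the same random effect, the implied between-study covariance between two such rows is the full tau² for that outcome (between-study correlation 1 across time points), with sampling error only on the diagonal. A quick base-R sketch of the implied marginal covariance for one study reporting one outcome at two time points (tau and the SEs are made-up numbers, not estimates):

```r
tau_a <- 0.3            # assumed between-study SD for outcome "a"
se    <- c(0.20, 0.25)  # assumed sampling SEs at, say, weeks 12 and 16

# Marginal covariance of the two observed effects under the model:
# the shared random effect contributes tau_a^2 everywhere,
# sampling error adds se^2 on the diagonal only
Sigma <- matrix(tau_a^2, nrow = 2, ncol = 2) + diag(se^2)
# diagonal: tau^2 + se^2 = 0.13 and 0.1525; off-diagonal: tau^2 = 0.09
```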

I’m rather uncertain about this, however, and would appreciate feedback on both the statistical rationale and the code implementation below (ideally also pointers to additional sources/references, if any).

library(brms)

set.seed(1)  # make the toy data reproducible

data <- data.frame(
  study = as.factor(c("s1", "s1", "s1", "s1", "s1", "s1", "s1", "s1", "s2", "s2", "s2", "s2", "s2", "s3", "s3", "s3", "s3")),
  outcome = as.factor(c("a", "a", "a", "b", "b", "c", "d", "d", "a", "a", "b", "b", "c", "a", "b", "b", "d")),
  weeks = c(12, 16, 24, 4, 12, 12, 16, 24, 12, 16, 12, 24, 24, 16, 12, 16, 24),
  effect_mean = runif(n = 17, min = -1, max = 1),
  effect_sd = runif(n = 17, min = 0, max = 1)  # standard errors of the effect estimates
)
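As a quick sanity check on the study-outcome-time structure (which outcomes each study reports, and at how many time points each):

```r
# Cross-tabulate rows by study and outcome; cell counts are the
# number of time points at which that study reports that outcome
xtabs(~ study + outcome, data = data)
#      outcome
# study a b c d
#    s1 3 2 1 2
#    s2 2 2 1 0
#    s3 1 2 0 1
```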

# Published study-level correlations, only available for some outcome pairs;
# not used directly in the model below, but earmarked for sensitivity analyses
corr_a_b <- 0.6
corr_a_d <- 0.8
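For the sensitivity analysis using the published correlations, one option I’m considering (outside brms) is to build an approximate within-study covariance matrix and fit the multivariate model with metafor::rma.mv(). A rough sketch, where I assume the covariance between two estimates from the same study at the same week is corr * se_i * se_j, and zero for pairs without a published correlation (the lookup structure and zero-imputation are my own assumptions, not established practice):

```r
library(metafor)

# Assumed lookup of published correlations by outcome pair (others treated as 0)
corrs <- list("a.b" = corr_a_b, "a.d" = corr_a_d)

# Approximate within-study covariance matrix: pairs from the same study and
# the same week get cov = corr * se_i * se_j; all other off-diagonals are 0
n <- nrow(data)
V <- diag(data$effect_sd^2)
for (i in seq_len(n - 1)) {
  for (j in (i + 1):n) {
    if (data$study[i] == data$study[j] && data$weeks[i] == data$weeks[j]) {
      key <- paste(sort(c(as.character(data$outcome[i]),
                          as.character(data$outcome[j]))), collapse = ".")
      r <- if (!is.null(corrs[[key]])) corrs[[key]] else 0
      V[i, j] <- V[j, i] <- r * data$effect_sd[i] * data$effect_sd[j]
    }
  }
}

# Multivariate random-effects MA with known within-study covariance
sens_fit <- rma.mv(effect_mean, V, mods = ~ 0 + outcome,
                   random = ~ outcome | study, struct = "UN", data = data)
```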

formula <- bf(
  # Known standard errors per estimate; sigma = FALSE fixes the residual SD
  # at zero, so heterogeneity is captured by the study-level random effects
  effect_mean | se(effect_sd, sigma = FALSE) ~
    0 + outcome +              # one pooled effect per outcome
    (0 + outcome |i| study)    # correlated outcome-specific deviations per study
)

priors <- c(
  # Pooled effects
  set_prior("normal(0, 1)", class = "b"),
  
  # Between-study SDs
  set_prior("normal(0, 0.5)", class = "sd"),
  
  # Outcome correlation (between-study)
  set_prior("lkj(1)", class = "cor", group = "study")
)
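And, for completeness, this is how I’d call brm() with the above (the sampler settings are placeholders, not recommendations):

```r
# Hedged sketch of the fit; adapt_delta raised because random-effects
# meta-analyses with few studies often produce divergent transitions
fit <- brm(
  formula = formula,
  data = data,
  prior = priors,
  chains = 4,
  cores = 4,
  seed = 1,
  control = list(adapt_delta = 0.95)
)
summary(fit)
```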