Dependency and random effect structure in a multilevel meta-analysis

I am using brms to conduct a meta-analysis of unstandardized mean differences in an area where studies report multiple effect sizes from the same sample and further report multiple experiments in a single paper. The structure of the data looks something like this:

effect  sample  article  diff  diff_se
1       1       1        10    3
2       1       1         6    4
3       2       1         8    2
4       3       2         9    4
5       4       3         1    3
6       4       3         2    3
7       4       3        -3    4

The reason for multiple measurements within a given sample varies from study to study: some studies include multiple tasks measuring the same construct, whereas others include multiple measures from the same task with different stimuli (e.g., faces, objects, words). I plan to address these dependencies using a multilevel meta-analysis with random effects for sample and article, as follows:

model1 = brm(diff | se(diff_se) ~ 1 + (1|sample) + (1|article), data = dat)

I use mildly informative priors, etc., but have left them out for simplicity. This is my first multilevel meta-analysis, so I wanted to ask for advice on two things.

First, it seems people vary with respect to the random effect structure in these models. For example, some seem to use random effects for effect and article, but not sample. I suppose it would be possible to even include random effects for all three. Is there any statistical reason to prefer one structure over another?

Second, am I correct that including multilevel structure like this should handle dependencies that might exist between the effects measured within a particular sample? We don’t know the precise covariance between dependent effects, so I do not think it is possible to conduct a multivariate meta-analysis.

I think you need to set sigma = TRUE inside se()? Otherwise sigma is fixed to zero, and with no effect-level term in your formula there is nothing to capture residual heterogeneity beyond the known sampling errors.
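For example, the same model with that change (se() accepts a sigma argument in brms):

model1 = brm(diff | se(diff_se, sigma = TRUE) ~ 1 + (1|sample) + (1|article), data = dat)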

Otherwise looks good to me.

You may include (1|sample) + (1|article) + (1|effect) and then test this model against other possibilities, for instance using loo.
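A sketch of such a comparison (the model names are placeholders; add_criterion() and loo_compare() come with brms):

m1 <- brm(diff | se(diff_se) ~ 1 + (1|sample) + (1|article) + (1|effect), data = dat)
m2 <- brm(diff | se(diff_se) ~ 1 + (1|article) + (1|effect), data = dat)
m1 <- add_criterion(m1, "loo")
m2 <- add_criterion(m2, "loo")
loo_compare(m1, m2)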

You are correct that multilevel structure accounts for dependencies of effects measured on the same sample.

About to raise this thread from the dead…
I was using the technique described in this thread for a paper with a colleague, and a reviewer claimed that inclusion of a random effect does not account for dependencies of the included effects (“…even if this approach handles correlated effect sizes, it does not handle correlated variance estimates”). They have recommended I abandon brms and use rma.mv in metafor along with the robust() function to apply a robust variance estimator like the one recommended by Hedges, Tipton, and Johnson (2010). I cannot seem to find anything in the Bayesian meta-analysis literature about robust variance estimators. Is this a valid concern, and if so, any chance you could point me in the right direction?

I don’t think this argument is super valid, to be honest (but I may be wrong), provided you have been using a three-level meta-analysis with brms, that is, one random-effects term per study and one per effect size (assuming multiple effect sizes per study). Robust variance estimation is not a term you will find in the Bayesian literature, as it is mostly a concern in frequentist statistics. If I remember correctly, it essentially downweights multiple effect sizes from the same study, thus reducing the influence of studies with many effect sizes.

I would also suggest running rma.mv in the way the reviewer recommends and comparing the results. If nothing relevant changes, you can just report that it does not matter. If something relevant changes, report the rma.mv results as a sensitivity analysis.
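For reference, a sketch of that check in metafor (the nesting mirrors the article/sample/effect structure from the opening post):

library(metafor)

res <- rma.mv(yi = diff, V = diff_se^2,
              random = ~ 1 | article/sample/effect, data = dat)
robust(res, cluster = dat$article)  # cluster-robust (RVE) standard errors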

The issue of modelling dependent effect sizes has been raised frequently on the R meta-analysis mailing list, so I also started to wonder how to address these concerns when using Bayesian packages. Nowadays, Wolfgang Viechtbauer, the creator of the metafor package, strongly recommends constructing a variance-covariance matrix, so it is possible that reviewers will raise this issue more often. Wolfgang argues that without the full variance-covariance matrix the meta-analytic estimates will be biased. In my experience, the results based on the full vs. diagonal matrix differ only slightly. However, such anecdotal evidence probably won’t convince reviewers.

Recently, I had a similar problem to @nostatisfaction in that a full variance-covariance matrix was not available, so I wanted to offer another possible solution. We could start by imputing the variance-covariance matrix under the assumption of a constant correlation between dependent effect sizes; the clubSandwich package provides a function for this. Then we could use the fcor term in brms to supply the variance-covariance matrix (V). The se argument can then be dropped, as the variance-covariance matrix already contains the sampling variances on its diagonal. I ran corresponding models on one data set in brms and metafor, and the estimates were very similar.
Your basic model would look something like this when accounting for the nested structure:
model1 = brm(diff ~ 1 + (1|article/sample/effect) + fcor(V), data = dat, data2 = list(V = V))
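A fuller sketch of the whole approach (the constant correlation r = 0.5 is purely an assumption that you would vary in sensitivity analyses; note also that brms scales the fcor matrix by sigma, so sigma is fixed to 1 here so that the residual covariance equals V itself):

library(clubSandwich)
library(brms)

# impute a block-diagonal sampling variance-covariance matrix, assuming a
# constant correlation of r = 0.5 among effects from the same sample
V <- impute_covariance_matrix(vi = dat$diff_se^2, cluster = dat$sample,
                              r = 0.5, return_list = FALSE)
V <- as.matrix(V)

model1 <- brm(diff ~ 1 + (1 | article/sample/effect) + fcor(V),
              data = dat, data2 = list(V = V),
              # fix sigma to 1 so the residual covariance is V, not sigma^2 * V
              prior = prior(constant(1), class = "sigma"))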
Perhaps @paul.buerkner could confirm whether it seems sensible.

Interesting thread. I wonder if fitting a multivariate (to account for the correlation between effect sizes from the same sample) multilevel (to account for the hierarchical structure) model is the best approach here.

Something like:

model1 = brm(diff | se(diff_se) ~ 1 + (0 + effect | article/sample), data = dat)

What do you think, Paul?

I would be careful in calling it “the best approach” here but it definitely looks like a sensible approach to me.

Good point. One could also add the random intercepts:

model1 = brm(diff | se(diff_se) ~ 1 + (1 + effect | article/sample), data = dat)