Hi, I have some doubts about the brms implementation of a multilevel meta-analysis. Basically, I have n independent studies, but some of them are nested within the same paper (and so share similar methods and, of course, authors). So I decided to treat each study (substudy) as nested within the paper-level variable study. I attach my dataset meta_clean.

load("meta_reprex")
library(brms)
mod_prior <- c(
  prior(normal(0, 2), class = "b", coef = "Intercept"),
  prior(cauchy(0, 1), class = "sd")
)
fit <- brm(
  eff_size_g | se(eff_size_se_g) ~ 0 + Intercept + (1 | study/substudy),
  data = meta_reprex,
  prior = mod_prior,
  cores = parallel::detectCores(),
  chains = 6,
  iter = 6000,
  sample_prior = TRUE,
  save_pars = save_pars(all = TRUE),
  control = list(adapt_delta = 0.99)
)

Now I have some doubts related to:

Given that each study's weight is computed substudy-wise, how can I reconstruct the study-level weight?

Following @mattipost, I have reconstructed each study's effect size by computing the overall intercept + the deviation of the single study. However, if I want to reconstruct the substudy estimates, can I simply use the same approach?

The last question regards the use of fitted, predict, etc. Even with a fitted meta-analytic model, could I use predict to see the predicted value for a single study, or to predict a new average study?

Sorry for not responding earlier; your question is relevant and well written.

Overall, your approach (nesting study and substudy) is quite sensible. The dataset seems to not have been properly attached, so I can't check it, but I would expect there to be some issues with fitting (which you seem to have encountered, as you have increased adapt_delta), because most studies would (I guess) only have one substudy. If you think it is sensible, there are tricks to add the "between substudy" variability only to studies that actually have substudies and avoid having this term for single-substudy studies; just ask, I don't want to spend time writing those if you don't think it would be useful to you.

Back to your questions:

Yes, this should work just as well. In fact, I think the answer to your other questions lies exactly in using predictions from the model instead of focusing on the coefficients. I.e., to reconstruct the effects of the original studies, you can directly use posterior_epred (or fitted) with the original dataset, and this will take care of all the structure you have in your data. For predicting a new study you actually get two options: either predict a new substudy of an already published study, or a completely new study.
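For instance, a sketch assuming the fit and data from the original post (the study name "Smith2020" and substudy labels are just placeholders; check your actual levels):

```r
library(brms)

# Reconstruct the effects of the original (sub)studies: with the original
# data, posterior_epred includes both the study and substudy deviations.
epred_orig <- posterior_epred(fit, newdata = meta_reprex)

# Option 1: a new substudy of an already published study.
# The study-level deviation is reused; a new substudy deviation is drawn.
nd_sub <- data.frame(study = "Smith2020", substudy = "new_sub",
                     eff_size_se_g = 0.1)
epred_sub <- posterior_epred(fit, newdata = nd_sub,
                             allow_new_levels = TRUE,
                             sample_new_levels = "gaussian")

# Option 2: a completely new study (new levels at both hierarchy levels).
nd_study <- data.frame(study = "new_study", substudy = "new_sub",
                       eff_size_se_g = 0.1)
epred_study <- posterior_epred(fit, newdata = nd_study,
                               allow_new_levels = TRUE,
                               sample_new_levels = "gaussian")
```

Note that the se variable has to be present in newdata even for posterior_epred, although it does not affect the expected value.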

I am not completely sure what you mean by "weight", but maybe you could formulate a prediction task that answers this question as well.

Thank you for your detailed response. I'm sorry about the dataset upload; maybe something went wrong and I did not realize it.

Yes, my primary concern is the fact that with brms, even when fitting a multilevel model (in this case the meta-analysis) with n studies and a random intercept per study, you can obtain the posterior for each paper.
In this case I was not sure whether using fitted or the Matti Vuorre approach gives the same results (in conceptual terms), given the nested structure.

Regarding the weights question: in meta-analysis (also in brms, of course), each study is weighted by its standard error. Given that I have clusters of studies, does the weight need to be calculated at the study level or at the cluster level?

If I understood both approaches correctly, it should (e.g. in your case you need to add the intercept and both group-level deviations). Also note that the hypothesis function in brms makes a lot of those tasks simpler (it not only tests a hypothesis, but also returns the samples of the formula you wrote for the hypothesis). If you find that the results differ, there is an issue (feel free to ask here if you see something weird).
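As a sketch of the "add the intercept and the deviations" computation done directly on the draws (parameter names depend on your data; "Smith2020" is a placeholder, and variables(fit) lists the actual names):

```r
library(posterior)

# One column per parameter of the fitted model.
draws <- as_draws_df(fit)

# Reconstructed study-level effect: overall intercept + that study's
# deviation. For a substudy estimate you would additionally add the
# corresponding r_study:substudy[...] term.
study_eff <- draws$b_Intercept + draws$`r_study[Smith2020,Intercept]`
quantile(study_eff, c(0.025, 0.5, 0.975))
```

The hypothesis() route should give equivalent results, since it evaluates the same kind of expression over the posterior draws.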

Oh, I think I understand (but I am honestly a bit out of my depth with this question, so please check my reasoning).

brms does not work with the concept of weight directly. It just assumes that the observed effect in each study is drawn from a normal distribution with mean mu (the linear predictor in brms, in your case the intercept + both random effects) and standard deviation given by the se term. Now, according to Wikipedia, for a simple meta-analysis (a single random effect) this turns out to be equivalent to weighting the studies by their precision (inverse squared standard error) and applying an appropriate correction. In the two-level structure you have, you could either claim that the weight stays the same but the correction becomes more involved, or you could somehow try to correct the weight to make the original correction work... But I honestly don't think any of this makes a lot of sense.
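To make the single-level correspondence concrete, here is a tiny self-contained example of classic random-effects inverse-variance weights (the numbers are made up; tau plays the role of the between-study SD):

```r
# Classic random-effects inverse-variance weights, for comparison with
# what brms does implicitly through the se() term and the group-level SD.
se  <- c(0.10, 0.15, 0.30)   # example standard errors of three studies
tau <- 0.20                  # example between-study SD
w   <- 1 / (se^2 + tau^2)    # precision including heterogeneity
round(w / sum(w), 2)         # -> 0.46 0.37 0.18: imprecise studies count less
```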

You could presumably make some sense of the total sd of each study around the global intercept (i.e. sqrt(sd_study^2 + sd_substudy^2)).
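On the draws this would look something like the following (parameter names as brms creates them for (1 | study/substudy); check with variables(fit) if they differ in your fit):

```r
library(posterior)

draws <- as_draws_df(fit)

# Posterior of the total SD around the global intercept, combining
# both levels of the hierarchy.
sd_total <- sqrt(draws$sd_study__Intercept^2 +
                 draws$`sd_study:substudy__Intercept`^2)
quantile(sd_total, c(0.025, 0.5, 0.975))
```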

One final note is that your model assumes the variation of substudies within each study is the same (e.g. two research groups may produce different mean effects, but they will have the same sd of effects around their own mean). I am not sure whether this is desirable; on the other hand, it would likely be hard to estimate a per-study SD, so this is probably a sensible compromise.
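If one did want to relax this assumption, the gr() term in brms has a by argument that estimates a separate group-level SD for each level of a factor. A sketch only, reusing the formula from the original post; with just a handful of substudies per study these SDs would be very poorly identified:

```r
# Hypothetical relaxation: a separate substudy SD per study.
# Probably hard to fit; the shared-SD model is usually preferable here.
fit_by <- brm(
  eff_size_g | se(eff_size_se_g) ~ 0 + Intercept +
    (1 | study) + (1 | gr(substudy, by = study)),
  data = meta_reprex,
  prior = mod_prior
)
```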

Thank you very much! A lot of useful information!
Just for being sure and to summarise the situation:

fitted() or posterior_epred() gives me the estimated value for that study and its credible interval. This is the same as computing the estimated intercept (the overall effect) + the deviation of the single study.

brms, unlike lme4 for example, can handle a grouping factor with a single observation per level, like using (1 | study) in a meta-analysis where study has one row per level (as in a simple meta-analysis). In my case I am simply taking into account the fact that substudies within the same paper are more alike than others.

the meta-analysis weight of each study is not simply an inverse-variance weight in this nested structure; rather, there is an se for each substudy and the model takes this into account. However, the weight at the study level is not easily available.

fitted or posterior_epred will (in their default setting) give you values that are the same as intercept + deviation of the study + deviation of the substudy, i.e. all model terms will be included.
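Which terms are included can be controlled via re_formula, e.g. (assuming the fit and data from the original post):

```r
# Default: all group-level terms (study + substudy deviations) included.
posterior_epred(fit, newdata = meta_reprex)

# Only the study-level deviation, ignoring the substudy deviations.
posterior_epred(fit, newdata = meta_reprex, re_formula = ~ (1 | study))

# No group-level terms: the overall (population-level) effect only.
posterior_epred(fit, newdata = meta_reprex, re_formula = NA)
```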

Yes, but unfortunately even in the Bayesian context, hierarchies with very few branches can cause problems. Luckily, Stan (and hence brms) will signal such problems via divergences during sampling; if you got no warnings, you should be good.

This is something I am not sure about, although it sounds plausible to me. Actually, you should be able to compute the weight at the study level the same way you would for a single-level meta-analysis (which I do not know how to do), but this weight wouldn't directly apply to the substudies, so it is a bit less clear whether it would be a useful quantity.