Estimating separate group level sigmas in Bayesian meta-analysis?

Hi,

I would like to perform a Bayesian meta-analysis and am currently comparing multiple pooling options, similar to the radon example in Andrew Gelman's books. I have data from 47 groups, with sample sizes varying between 2 and 49. When I use the common hierarchical model, everything works fine:

Partial_pooling_model <- brm(data ~ 1 + (1|WWTP))   (1)
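
Spelled out (with a placeholder data frame d holding one row per observation, a response column data, and a grouping column WWTP), the call is roughly:

  # model (1): partial pooling of the group means, common residual sigma
  Partial_pooling_model <- brm(
    data ~ 1 + (1 | WWTP),
    data   = d,
    family = gaussian()
  )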

However, there are some groups with comparatively large sample sizes for which the data indicate that the residual sigma may actually be larger or smaller than the common sigma estimated by the partial pooling model, and in my case sigma actually matters for the subsequent calculations. Therefore, I wanted to estimate both the mean and the sd in a hierarchical framework and tried:

Partial_pooling_both_model <- brm(bf(data ~ 1 + (1|WWTP), sigma ~ 1 + (1|WWTP)))   (2)

The model converges without any divergent transitions or other error messages. What irritates me right now is that I expected both mu and sigma for the individual WWTPs to shrink towards their population means, but for some WWTPs the resulting sigma is actually larger than if I had fit a separate model to those data. The larger sigma in turn leads to somewhat unrealistically large prediction intervals.

My guess for this behavior is that the larger group-level sigma results from the group-level mean of the individual WWTP being shrunk towards the population mean, thus increasing the distance between the mean and the observations (but I am not sure). The more general question, however, is whether a model of the form (2) makes sense in the first place. I think it should, but maybe I have overlooked something, and I haven't found any discussion on this topic so far.
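
One way to check this guess would be to compare the per-WWTP sigmas from model (2) with the raw within-group standard deviations, roughly like this (a sketch; d and the column names are placeholders, and it assumes the same group ordering in both objects):

  library(dplyr)

  # group-specific sigma estimates from model (2); brms models sigma on the
  # log scale by default, so the coefficients have to be exponentiated
  sigma_model <- exp(coef(Partial_pooling_both_model)$WWTP[, "Estimate", "sigma_Intercept"])

  # raw within-group standard deviations and sample sizes
  raw_sd <- d %>%
    group_by(WWTP) %>%
    summarise(sd_raw = sd(data), n = n())

  cbind(raw_sd, sigma_model)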


I am slightly confused by your meta-analytic model. Is there any reason you are not using the se() addition term like

data | se(error) ~ …

See also https://vuorre.netlify.com/post/2016/2016-09-29-bayesian-meta-analysis/
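
For example (a sketch with placeholder column names: yi = effect size, sei = its standard error, study = grouping factor):

  # standard Bayesian meta-analysis: known sampling SEs via se(),
  # between-study heterogeneity as a group-level intercept
  meta_model <- brm(
    yi | se(sei) ~ 1 + (1 | study),
    data = d
  )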

Hi Paul,

thanks for your answer. After reading the post by Matti Vuorre, maybe my formulation of the problem as a "meta-analysis" was a misnomer. He describes as a special feature of meta-analysis that the within-group variances are considered to be known. If I understood correctly, the se() addition term refers to the standard error of the sampling distribution of the individual studies, which I, however, do not know.

My case is that I have 330 observations from 47 groups. However, because for some groups the sample size is very small (N = 2), and for some of them both measurements are below the limit of quantification, the within-group variance cannot be measured precisely. So, what I would like to do is to model unequal within-group variances, as described e.g. in Gelman and Hill (2007, p. 372):

"Modeling the varying variances.
Once we have a parameter that varies by group, it makes sense to model it, for example using a lognormal distribution:
*Bugs code: *

  •               y[i] ~ dnorm (y.hat[i], tau.y[school[i]])*
    
  •              tau.y[j] <- pow(sigma.y[j], -2)*
    
  •              sigma.y[j] ~ dlnorm (mu.lsigma.y, tau.lsigma.y)*
    

This extra stage of modeling allows us to better adjust for unequal variances when
the sample size within groups is small (so that within-group variances cannot be
precisely estimated individually)."

That is what I wanted to do with formulation (2) in the initial post.
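
My understanding is that the brms analogue of that BUGS snippet is exactly formulation (2): brms uses a log link for sigma by default, so the group-level intercepts on sigma play the role of the lognormal distribution on sigma.y above. As a sketch (placeholder data frame d):

  # model (2): hierarchical model for both the group means and the
  # group-specific (log) residual sds
  Partial_pooling_both_model <- brm(
    bf(data  ~ 1 + (1 | WWTP),
       sigma ~ 1 + (1 | WWTP)),
    data   = d,
    family = gaussian()
  )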

Did you figure this out? I've just started looking at the same thing, and from Paul's vignette (https://cran.r-project.org/web/packages/brms/vignettes/brms_distreg.html) I would have thought to code it the way you did?