Meta-analysis of proportions with brms

Hello everyone,

I would like to use brms to perform a meta-analysis of proportions (infection of implanted medical devices).

n_infections: number of infections per study
n_patients: number of patients per study
study: the name of the study
I expect a proportion of 7%

study <- paste0("study", 1:43)  # "study1" through "study43"

n_patients <- c(432, 255, 214, 295, 440, 509, 599, 289, 476, 269, 450, 500, 286, 31, 496, 244, 9677, 201, 99, 380, 7199, 720, 74, 111, 82,  95, 227, 17035, 159, 810, 91, 10819, 55, 407, 639, 289, 133, 68, 134, 349, 245, 458, 490)
n_infections <- c(40, 18, 18, 40, 40, 25, 7, 15, 15, 5, 24, 13, 27, 1, 12, 9, 50, 5, 3, 24, 338, 44, 14, 4, 2, 8, 13, 1039, 3, 61, 5, 380, 6, 45, 32, 24, 5, 1, 0, 31, 10, 32, 25)

my_data <- data.frame(study, n_patients, n_infections)

library(brms)

# specify the prior distribution of the overall effect size μ and the between-study heterogeneity τ
priors <- c(prior(normal(7, 2), class = Intercept),
            prior(cauchy(0, 1), class = sd))

# fit the model
brm1 <- brm(n_infections | trials(n_patients) ~ (1 | study),
            data = my_data,
            prior = priors,
            family = binomial(),
            iter = 5000)

Despite reading the brms manual and searching the internet, I am not sure how to do it.

Is this correct?

How would you do it?

Thank you very much in advance for your help.

  • Operating System: Windows 10 / RStudio 2022.07.1 Build 554
  • brms Version: 2.18.0

Hi Charles,

The first time I ever used brms was also to perform a meta-analysis on proportions (for me it was prevalence of sexual abuse in various populations). Your code looks completely fine to my amateur eyes. I was also lucky to get a reviewer who was interested in looking over my code. They gave their ok to that simple model too. You should be good!

Bear in mind though that the prior you are supplying to your intercept is on the log scale. Normal(7,2) does not correspond to a prior centered on 7% on the response probability scale.
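A quick check with R's inverse-link function makes the mismatch obvious:

```r
# Back-transform the prior mean of 7 to the probability scale
plogis(7)  # about 0.999, i.e. ~99.9% rather than 7%
```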

Hello Mzarchev,

Thank you very much for your input.
Which effect size prior should I use? What would you advise in this case?

Thank you for your help

@mzarchev makes a good point that your prior here is on the LINK scale, which defaults to “logit” (not log!) for the binomial distribution.

There are a few heuristics for working on the logit scale: 0=50%, -1.1=~25%, +1.1=~75%, -4 is almost never (<2%), +4 is almost always (>98%). Still, I personally have a really hard time thinking on the logit scale. I strongly suggest plotting your logit-scale prior on the response (=probability) scale to make sense of what it implies.
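These heuristics are easy to verify with R's inverse-logit function, plogis:

```r
# Inverse-logit of a few landmark values on the logit scale
plogis(0)     # 0.5
plogis(-1.1)  # ~0.25
plogis(1.1)   # ~0.75
plogis(-4)    # ~0.018, i.e. almost never
plogis(4)     # ~0.982, i.e. almost always
```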

There’s probably a more elegant way to do this, but I just simulate a large number of draws from the prior distribution, then back-transform using the inverse link function (here that’s plogis). For Normal(7, 2),

plot(density(plogis(rnorm(n = 1e6, mean = 7, sd = 2))), xlim = c(0, 1))

This prior has a mean on the probability scale of plogis(mean(rnorm(1e6, 7, 2))) = 99.91% (note the mean is taken before the transformation to avoid problems with nonlinear averaging/Jensen's inequality). This prior places extraordinarily low prior probability on your expected value. We can dial that down to 7% by applying the logit function qlogis(0.07) to get the value that will back-transform to 7% (about -2.6). Importantly, high standard deviations on Normal priors can do strange things when back-transformed through the inverse-logit function: they create sharply bimodal distributions with essentially all of the probability at zero or one and very little in between. A prior like Normal(-2.6, 10), for example, produces exactly this kind of bimodal density.
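Using the same simulation approach, the bimodality of Normal(-2.6, 10) on the probability scale is easy to see:

```r
# Back-transformed draws from a wide logit-scale prior pile up near 0 and 1
p <- plogis(rnorm(1e6, mean = -2.6, sd = 10))
plot(density(p), xlim = c(0, 1))

# Roughly two-thirds of the prior mass falls below 1% or above 99%
mean(p < 0.01 | p > 0.99)
```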

You should adjust the standard deviation to a value that corresponds with your prior uncertainty about the overall cross-study mean probability.

Finally, your among-study standard deviation is subject to the same back-transformation and as such can cause very similar problems when estimating study-level probabilities. It’s made all the more difficult because changing your among-study prior will affect how your within-study prior influences the outcome on the response scale. Perhaps the easiest way to visualize these interacting priors is with ‘prior predictive checks’. In brm, set sample_prior = "only", then visualize the individual study-level probabilities that the priors allow for to make sure that they are within reasonable bounds that you determine based on your domain knowledge.

Some code to help you with those prior predictive checks for a simple meta-analysis:


library(tidybayes)  # provides linpred_draws()
library(ggplot2)

# refit brm1 with sample_prior = "only" before running this
linpred_draws(brm1, newdata = my_data) %>%
  ggplot(aes(x = study, y = plogis(.linpred))) +
  stat_pointinterval()

My guess is that Cauchy(0, 1) will need to be dramatically tighter to prevent loading of prior probability at 1. (Remember: an effect of +/-4 on the link scale is HUGE on the response scale with a logit link.)
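For concreteness, a prior-predictive version of the model could look like this (the specific prior values below are illustrative placeholders, not recommendations):

```r
library(brms)

# Illustrative priors on the logit scale -- tune these to your own domain knowledge
priors2 <- c(prior(normal(-2.6, 1), class = Intercept),
             prior(normal(0, 0.5), class = sd))

# sample_prior = "only" ignores the data, so the draws reflect the priors alone
brm_prior <- brm(n_infections | trials(n_patients) ~ (1 | study),
                 data = my_data,
                 prior = priors2,
                 family = binomial(),
                 sample_prior = "only",
                 iter = 5000)
```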

Hello everyone,

Thank you for your help.

Being a novice in Bayesian meta-analysis, I struggle to understand your comments.
I did not think it would be so complicated to determine a prior.

In order to improve my model and prior selection, I performed a "frequentist" meta-analysis in R after selecting 44 studies.
I used the meta package with a generalised linear mixed model method and a logit transformation.
I found a random-effects pooled prevalence of 4.75%, 95% CI [3.8, 5.92], with tau² = 0.4820 (tau = 0.6942).
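On the logit scale that the brms priors use, these estimates correspond to roughly:

```r
# Logit-scale equivalents of the frequentist estimates
qlogis(0.0475)            # pooled prevalence: about -3.00
qlogis(c(0.038, 0.0592))  # 95% CI: about -3.23 to -2.77
sqrt(0.4820)              # tau: about 0.69 (already on the logit scale)
```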

Is this information useful to "build" the mu and tau priors?

Thank you very much again for your help.