Partial pooling treatment effects in a 2x2 RCT

Hi all, a common study design in medicine is the 2×2 clinical trial. That is, participants receive drug A, drug B, A+B, or placebo. The conventional instruction for analyzing such a study is to undertake several contrasts (A vs placebo, B vs placebo, A+B vs placebo, etc.) and use a Bonferroni correction to control the overall family-wise error rate.

I am not completely satisfied with this approach. Firstly, it substantially increases the risk of type-II error. Secondly, it is principally concerned with p values and NHST, and does not directly address the quantities of clinical interest: the treatment effect estimates and their 95% CIs.

Having been influenced by Andrew Gelman’s work on the use of hierarchical models in place of traditional multiple comparisons corrections, I have been thinking about whether one could apply a hierarchical model to this problem.

Specifically, the coefficients B_i for the treatment variable (the estimated treatment effects) would be assigned a Normal(0, sigma) distribution, and sigma would be assigned an exponential prior. This way, the estimated mean of the four treatment effects would be shrunk towards zero, and the individual treatment estimates would be partially pooled towards this mean.
Afterwards, specific contrasts (e.g. A+B vs placebo) could be made by sampling from the posterior.
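For concreteness, here is a minimal sketch of this idea in brms syntax. The data frame `dat`, the outcome column `y`, the exponential rate of 1, and the treatment level labels (`AB`, `placebo`) are all assumptions for illustration:

```r
library(brms)

# Treatment effects drawn from Normal(0, sigma), with sigma ~ Exponential(1):
fit <- brm(
  y ~ 1 + (1 | treatment),   # four partially pooled treatment levels
  data = dat,
  prior = set_prior("exponential(1)", class = "sd", group = "treatment")
)

# A specific contrast, e.g. A+B vs placebo, from the posterior draws
# (column names assume the treatment levels are coded "AB" and "placebo"):
draws <- as_draws_df(fit)
contrast <- draws$`r_treatment[AB,Intercept]` -
  draws$`r_treatment[placebo,Intercept]`
quantile(contrast, probs = c(0.025, 0.975))   # 95% credible interval
```

The grand intercept cancels in the contrast, so the difference of the group-level deviations equals the difference of the group means.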

My question is: would the partial pooling of treatment effects be reasonable in a study such as the 2x2 trial where treatment only has four levels? Is that enough levels to reliably estimate the variance? Are there further potential issues I am not seeing?


So I guess each subject has only one treatment (i.e., they receive A, B, AB, or P once)? If that’s the case, then I can’t see how partial pooling could help if you intend to use the subjects as an entity in the model. Or do you see A, B, AB, and P as some sort of within-subject measurements, even though they’re not?

Thank you, torkar, for the response. I must be missing something here. In fact, I’m completely confused!

Each subject receives only one treatment. And something crucial I forgot to mention: the data are longitudinal, so each individual will have multiple observations.
A conventional model in this context includes a varying intercept for each subject and an indicator variable for treatment (A, B, AB, or P). Traditionally, each treatment group is compared against the others and a multiple comparisons correction is performed.

But this study has limited power for performing these contrasts, so some ‘borrowing of information’ would be valuable. What I would like to do is consider the treatment effects as ‘exchangeable’ and assign them a common probability distribution, in order to partially pool the individual treatment effect estimates towards the overall mean treatment effect.

Is this nonsensical? Can partial pooling be used this way?

Well, if you do have multiple observations, and those are representative of your treatment effects, then is this what you want to do?

\mathrm{y} \sim \mathrm{distribution}(\mu)\\ \mu = \alpha + \beta_a A + \beta_b B + \beta_{ab} AB + \beta_p P + (1 | \mathrm{subject})

Thanks for your patience, torkar, while I get my head around this.
What I am thinking about is considering ‘treatment’ as a random effect. Assuming the data are in long format, here is the code in brms syntax:

Y ~ 1 + (1 | treatment/ID)

So there is no population-level coefficient for treatment; only the group-level effects are estimated. Is this completely wrongheaded?
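For reference, the nested grouping term above is shorthand (in lme4/brms formula convention) for two separate terms, which may make the intended structure easier to see:

```r
# (1 | treatment/ID) expands to:
Y ~ 1 + (1 | treatment) + (1 | treatment:ID)
```

The first term gives the partially pooled treatment-level intercepts; the second gives subject intercepts nested within treatment.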

So, for each treatment you have subjects nested? Hmm… Have you tried it out to see what will happen? Could be worthwhile checking :)

It seems this question was never resolved. I’ve encountered the same basic question in the course of my work, and I’ve attempted to formulate it here: mixed model - Exchangeability, causal inference, and partial pooling - Cross Validated

I’d love to see this thread picked up again and/or my SE question responded to.
