Guidelines on using meta-analysis to inform priors

Hi all, I’m in search of guidelines for incorporating meta-analytic evidence into a prior for the treatment effect in an RCT. Ideally, I would like to follow published ‘standard’ advice for this, as that would make it clear, to my mind, that the priors were not manipulated to ‘skew’ the results in one particular direction.

If we take a normally distributed prior, the mean can easily enough be taken to be the estimate from the meta-analysis. The variance, however, I’m unsure of. Using the standard error from the meta-analysis would seem very over-optimistic (given heterogeneity, differences in treatment formulation, differences in sample characteristics, evidence quality, etc.), and frankly would be too highly informative. One possible solution might be a mixture of the meta-analysis estimate with a ‘neutral’ prior centered at zero. But the appropriate choice of weights opens up even more questions.
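To make the mixture idea concrete, here is roughly what I have in mind, sketched with RBesT’s mixture utilities (the numbers and the 50/50 weight are purely illustrative):

```r
library(RBesT)

## Hypothetical meta-analysis result (illustrative numbers only)
ma_mean <- 0.30   # pooled effect estimate
ma_sd   <- 0.08   # its standard error

## Two-component normal mixture prior:
## weight w on the informative component, 1 - w on a neutral one at zero
w <- 0.5          # the contentious choice of weight
mix_prior <- mixnorm(informative = c(w, ma_mean, ma_sd),
                     neutral     = c(1 - w, 0, 1),
                     sigma = 1)  # reference scale, used for ESS calculations

summary(mix_prior)  # moments and quantiles of the mixture
plot(mix_prior)     # visualise the two components
```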

I am confident there must be writing on this - I have just been unsuccessful in locating anything because Bayesian meta-analysis itself has swamped the literature. So any thoughts or references would be greatly appreciated!

1 Like

I’m not aware of any official guidelines for this, but I’ll share a few thoughts based on my own experience. I’ve used meta-analysed RCT data as priors for Bayesian analyses of RCTs, but only as sensitivity analyses after using relatively weak priors (centred on no difference and including all plausible effect sizes without providing much information) in the primary analyses.
In the cases I’ve worked with, the previously available evidence (from a meta-analysis) has been limited, either due to few (usually small) trials or few events. Given that the data were limited and only used in sensitivity analyses, I felt that it was OK to use the meta-analysis result with its SE directly as a prior.
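In brms-style code, the two priors might look something like this (just a sketch; the data frame, variable names, and numbers are all hypothetical):

```r
library(brms)

## Primary analysis: weak prior centred on no difference, wide enough to
## include all plausible effect sizes (log odds ratio scale assumed here)
weak_prior <- prior(normal(0, 1.5), class = "b", coef = "trt")

## Sensitivity analysis: meta-analysis estimate and SE used directly
## (hypothetical log OR of -0.25 with SE 0.10)
ma_prior <- prior(normal(-0.25, 0.10), class = "b", coef = "trt")

fit_primary <- brm(outcome ~ trt, family = bernoulli(),
                   data = trial_data, prior = weak_prior)
fit_sens    <- brm(outcome ~ trt, family = bernoulli(),
                   data = trial_data, prior = ma_prior)
```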

One solution (based on what’s already done in meta-analyses) is to fit a random-effects model, calculate the prediction interval, and base the prior on that. The prediction interval represents the range of effect sizes expected in future studies after incorporating between-study heterogeneity (https://bmjopen.bmj.com/content/6/7/e010247 ; https://www.ajodo.org/article/S0889-5406(20)30001-9/fulltext), instead of only presenting the uncertainty around the mean effect (like the CI). Thus, it’s a principled way to express this uncertainty that’s already used in meta-analyses, and it would be easy to use for a prior in a Bayesian analysis of an RCT.
One possible downside is that estimating tau in conventional meta-analyses is difficult with few studies, so it may lead to very wide prediction intervals in such cases. This should not be a big problem given that it then provides little information (reflecting the uncertainty) and has limited influence on the result.
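In code, the prediction-interval approach could look roughly like the following (an illustrative sketch; metafor is one common way to fit the random-effects model):

```r
library(metafor)

## dat: one row per study, with effect estimates yi and sampling variances vi
res <- rma(yi = yi, vi = vi, data = dat)  # random-effects model (REML default)

k    <- res$k        # number of studies
mu   <- res$b[1]     # pooled estimate
se   <- res$se       # standard error of the pooled estimate
tau2 <- res$tau2     # estimated between-study variance

## 95% prediction interval (Higgins et al. approach, t with k - 2 df)
half <- qt(0.975, df = k - 2) * sqrt(tau2 + se^2)
c(mu - half, mu + half)

## One simple prior choice: a normal with the same mean, with SD chosen
## so that the prior's central 95% interval matches the prediction interval
prior_sd <- half / qnorm(0.975)
```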

If you find any more definitive guidelines on this, please share.

4 Likes

Hey there! Sorry, I don’t have any “standard” advice on this one. What @AGranholm wrote sounds good to me. Another (simpler) way might be to consider a Student-T prior, where you take the location as the mean and the scale as the SE of the effect, and then set the df (or nu) parameter to reflect your (subjective) confidence in the result – note that with nu = 1 your prior is really wide (it’s a Cauchy). Maybe you can even use a (hierarchical) prior on nu.
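In brms, such a prior could be specified along these lines (location and scale are placeholders for your meta-analysis estimate and SE):

```r
library(brms)

## Student-T prior: location = meta-analysis mean, scale = its SE,
## df (nu) expressing confidence in the result. nu = 1 is a Cauchy
## (very heavy tails); large nu approaches the normal.
t_prior <- prior(student_t(4, 0.30, 0.08), class = "b", coef = "trt")
```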

Cheers,
Max

2 Likes

Thank you both for your insights.

@AGranholm I have yet to find any ‘official’ guidelines on this either, which is strange given that this is one of the primary strengths of Bayesian methods in medical research (the capacity to take advantage of prior evidence like meta-analyses). For my project, I initially specified a prior based on the meta-analysis estimate and SE, but I found that in my case it was too informative. I could see a place for it as a sensitivity analysis. I suppose this is a kind of ‘updating’ of the meta-analysis with the present study results?
The prediction interval seems like a principled way to include previous evidence, without overwhelming the current trial data. A bit more laborious but a very useful idea. Thanks!

@Max_Mantei thanks a lot for your thoughts. The Student-T prior is probably a better idea than my informed+neutral mixture, and more intelligible to readers. The fundamental challenge remains the same with this approach, though. In the absence of clear guidelines, I have to pick a df parameter that makes sense to me. There is nothing to stop me from playing with several different values of df and presenting the one that looks most favourable (a kind of Bayesian ‘p-hacking’). Referencing standard guidelines would, I think, (partially) take that out of the equation.

This paper is a good example of what I’m talking about, except it doesn’t extend into using meta-analyses. @harrelfe I hope that you and your co-authors can build on these guidelines - they are very helpful!

3 Likes

To reframe this “problem”: specifying a prior is a great way to make your assumptions explicit, and it gives you the opportunity to talk/write about them. I do think it is often not too hard to come up with priors that others find reasonable. As always, a good way is to actually simulate from the prior and see what it implies. This might be a good starting point for setting reasonable nu parameters for your Student-T priors.
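For example, a quick simulation for a few values of nu (again with placeholder location and scale):

```r
## Simulate from Student-T priors with different df to see what they imply
loc <- 0.30; scl <- 0.08  # placeholder meta-analysis mean and SE
for (nu in c(1, 3, 10, 30)) {
  draws <- loc + scl * rt(1e5, df = nu)
  cat(sprintf("nu = %2d: 95%% prior interval [%5.2f, %4.2f], P(effect < 0) = %.3f\n",
              nu, quantile(draws, 0.025), quantile(draws, 0.975),
              mean(draws < 0)))
}
```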

But you are right, it’s easier (and often more convincing) to point to standard procedure.

Hope you’ll find a good way to deal with this! :)

2 Likes

Oh I agree completely. Priors are very useful. I don’t want to do away with them, or use them thoughtlessly; I just want to be confident that I’m doing things in a principled manner. In the meantime, the Student-T prior and the random-effects prediction interval are both very compelling options.

1 Like

I think there are two importantly distinct inferential goals here that require somewhat different handling of the prior. Given that the meta-analysis already exists, you might be repeating the study to see whether the previous results replicate (i.e. you come from a position of not necessarily trusting the previous results), or you might be collecting more data to narrow the posterior uncertainty from the meta-analysis (i.e. you fundamentally believe that the meta-analysis, and the data underlying it, are sound).

In the latter case, I think the prediction-interval approach of @AGranholm sounds good. If you wanted to go full Bayesian on this, you could fit the meta-analysis and your new trial jointly in one hierarchical model, estimating the new study’s effect and the meta-analytic parameters in a single step. But maybe that would be overkill!
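Here is a sketch of that joint, one-step idea in brms, treating your new trial as just another study in the hierarchy (data frame and variable names are hypothetical, and it assumes aggregate effect estimates per study):

```r
library(brms)

## all_studies: one row per trial (historical studies plus the new one),
## with effect estimate yi, its standard error sei, and a study identifier
fit <- brm(yi | se(sei) ~ 1 + (1 | study),
           data = all_studies, family = gaussian())

## The new trial's study-specific effect (pooled mean plus its random
## effect) is then shrunk towards the meta-analytic mean, with the
## heterogeneity estimated jointly from all studies at once.
```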

On the other hand, if you are genuinely concerned that the meta-analysis won’t replicate, then it doesn’t make sense to me to use the meta-analysis posterior to strongly inform your prior. Or rather, how closely your prior tracks the meta-analysis prediction interval for the study-specific effect sizes is effectively a measure of your degree of belief in the correctness of the meta-analysis itself (in hand-wavy terms, this behaves like your “prior” on how reliable the meta-analysis is).

2 Likes

Just by happenstance I ran across this today:

Seems to be exactly what you’re looking for? Tagging @wds15.

2 Likes

Thanks for your inputs here. Some really clarifying ideas.

I think I am somewhere between the two inferential positions you describe - I trust the meta-analysis and underlying data to a degree, but think it is overly confident in its conclusions given the high heterogeneity. In your terms, using it directly would overstate my degree of belief in the correctness of the meta-analysis.

In fact, the meta-analysis finds that the pooled data are very inconsistent with the null (95% CI entirely well above zero; small p-value), so the evidence is treated as compelling. But when I calculated the prediction interval, I was surprised to find that it was a lot wider and quite substantially crossed zero, which goes to show how overstated meta-analytic results can be. If a new study were performed, there’s about a 10% chance that its average treatment effect would be harmful!
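For anyone curious, the arithmetic behind that figure looks roughly like this (with hypothetical numbers, not my actual results):

```r
## Hypothetical meta-analysis summaries
mu <- 0.25; se <- 0.06; tau2 <- 0.03; k <- 8

## SD of the predictive distribution for the effect in a new study
pred_sd <- sqrt(tau2 + se^2)

## Probability that a new study's effect falls below zero
## (t-based predictive distribution with k - 2 df)
pt((0 - mu) / pred_sd, df = k - 2)  # roughly 0.1 with these numbers
```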

Very interested in this MAP R package - I will check out their approach. Thanks!

First, if you use informative priors you can never avoid bias! Whether the bias matters is a different - and hard to answer - question. All you can do is communicate what led to the prior, as the prior becomes part of the analysis just like the data. To make things worse, we often have only a few studies for the meta-analysis, and then the modelling assumptions used to derive the prior do make a difference to some degree.

You should start with our vignette

https://cran.r-project.org/web/packages/RBesT/vignettes/introduction.html

and maybe also read the references therein. The literature on meta-analysis is overwhelming, that’s true. The big difference between RBesT and other work is that it allows you to very easily assess the design properties (operating characteristics) of a trial that actually uses the prior in its analysis. That is a common question when planning clinical trials with it.
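A bare-bones sketch of the RBesT workflow for a continuous endpoint (argument values are illustrative - see the vignette for guidance on choosing them):

```r
library(RBesT)
set.seed(1)

## hist_data: one row per historical study, with effect estimate est,
## standard error se, and a study label
map_mcmc <- gMAP(cbind(est, se) ~ 1 | study,
                 data = hist_data, family = gaussian,
                 tau.dist = "HalfNormal", tau.prior = 0.5,
                 beta.prior = 2)

map  <- automixfit(map_mcmc)            # parametric mixture approximation
rmap <- robustify(map, weight = 0.2,    # add a vague component to guard
                  mean = 0, sigma = 2)  # against prior-data conflict
ess(rmap)                               # effective sample size of the prior
```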

2 Likes

This is an interesting question. I work in evidence synthesis and can give a perspective on how your planned analysis may be treated if included in a systematic review. (I.e., I’m not trying to answer your question about how to choose a prior, but give you another perspective.)

The main issue is that an analysis of RCT data that uses informative priors constructed from “all” previous such trials would likely be perceived as biased and excessively precise, and your study might be downrated for risk of bias and/or certainty of evidence. (I understand of course that inducing “bias” and “excess” precision is exactly why one would choose an informative prior, but many systematic reviewers are unfamiliar with Bayesian methods and would be immediately skeptical.) Your analysis might be omitted from meta-analysis, because including it with the studies you used to construct your prior would effectively double count the participants included in those earlier trials, leading to excessively precise meta-analytical estimates. Or, the analyst might impute a frequentist CI and adjust it to remove the double counting, so that the “new” evidence from your trial could be included in their meta-analysis. Or, the systematic reviewers might ask for the IPD and simply reanalyze your trial data.

From my evidence synthesis point of view, I would actually prefer to see what your new evidence says about the research question in isolation, even if your CI/CrI is wide due to a small sample size or whatever is leading you to want an informative prior. It is important for systematic reviewers to investigate between-study heterogeneity (e.g., to infer something about effect moderation).

My suggestion (which may or may not be feasible) is to prespecify the primary analysis to exclude evidence from other studies (e.g., to do a frequentist analysis or to use very weak priors), and then perhaps plan a secondary analysis that includes the previous evidence (either a regular meta-analysis or a Bayesian analysis with priors constructed from the previous evidence). However, if it is likely that your trial will quickly be incorporated into a systematic review and meta-analysis, you might be wasting research effort by doing the meta-analysis yourself.

I hope this perspective has been useful. I’m aware that when one has a hammer, every problem looks like a nail. This cuts both ways: as someone who loves Stan, I’m tempted to apply it and the Bayesian approach everywhere; as a meta-analyst, I feel every RCT should be conducted to facilitate systematic reviews! Everything is a trade-off.

5 Likes