Updating meta-analysis and stopping rules


Some recent work on so-called “living” systematic reviews has led to a lot of discussion about the danger of falling into traps analogous to taking multiple looks at trial data during recruitment. As a result, I have come across papers detailing applications of trial sequential analysis, conditional power, or alpha spending within the meta-analysis paradigm. In each case, adapting the supplied code to my own analyses seems like a substantial undertaking.

I know that a common Bayesian approach is simply to base stopping rules on the posterior predictive distribution, and I am wondering whether I am missing a reason that wouldn’t work in the meta-analysis context. With a random-effects model, couldn’t I accomplish the same goal as TSA by checking the probability that a new trial would find meaningfully conflicting results? Since this takes only a line or two of code, I assume I must be missing something.
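To make the idea concrete, here is a minimal sketch of what I have in mind, with entirely made-up data: approximate the posterior predictive distribution for a new trial under a normal random-effects model (tau^2 plugged in via DerSimonian–Laird, flat prior on the mean), and estimate the probability that a hypothetical new trial’s estimate would land on the other side of zero from the pooled estimate. The effect sizes, standard errors, and the “conflict” criterion are all illustrative assumptions, not a definitive implementation.

```python
import numpy as np

# Hypothetical data: observed effects and standard errors from 5 trials.
# All values are illustrative.
y = np.array([-0.35, -0.10, -0.42, 0.05, -0.28])
se = np.array([0.20, 0.25, 0.15, 0.30, 0.18])

rng = np.random.default_rng(1)

# DerSimonian-Laird estimate of the between-trial variance tau^2.
w = 1.0 / se**2
mu_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - mu_fe) ** 2)
df = len(y) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)

# Approximate posterior for the mean effect (flat prior, tau^2 plugged in).
w_re = 1.0 / (se**2 + tau2)
mu_hat = np.sum(w_re * y) / np.sum(w_re)
mu_var = 1.0 / np.sum(w_re)

# Posterior predictive for a NEW trial's observed estimate: draw the mean,
# add between-trial heterogeneity, then add sampling error for a new trial
# with an assumed standard error of 0.20.
n_draws = 100_000
se_new = 0.20
mu_draws = rng.normal(mu_hat, np.sqrt(mu_var), n_draws)
theta_new = rng.normal(mu_draws, np.sqrt(tau2))
y_new = rng.normal(theta_new, se_new)

# One possible definition of "meaningfully conflicting": the new trial's
# estimate falls on the opposite side of zero from the pooled estimate.
p_conflict = np.mean(np.sign(y_new) != np.sign(mu_hat))
print(round(p_conflict, 3))
```

In a fully Bayesian version the draws of the mean and tau^2 would come from an actual posterior (e.g. MCMC) rather than the plug-in normal approximation used here, but the predictive-probability logic is the same couple of lines at the end.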


This is a bit of a late follow-up, but Andrew just linked an old discussion of his in a blog post:

Others on this list know a lot more than I do about this kind of statistical analysis.

(I’ve always found simulation a good way to answer questions like these.)
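For instance, one simple simulation along those lines: generate many cumulative meta-analyses under the null (true effect zero, no heterogeneity), test at every interim look with a naive fixed-effect 95% CI, and count how often zero is excluded at any look. All the settings below (number of trials, per-trial standard error) are arbitrary assumptions just to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate cumulative meta-analyses under the null and count how often a
# naive fixed-effect 95% CI excludes zero at ANY of the interim looks.
n_sims = 2000
max_trials = 20
se_trial = 0.2  # assumed common standard error per trial

any_hit = 0
for _ in range(n_sims):
    y = rng.normal(0.0, se_trial, max_trials)
    k = np.arange(1, max_trials + 1)
    cum_mean = np.cumsum(y) / k          # pooled estimate after each trial
    cum_se = se_trial / np.sqrt(k)       # its standard error
    z = cum_mean / cum_se
    if np.any(np.abs(z) > 1.96):
        any_hit += 1

rate = any_hit / n_sims
print(round(rate, 3))  # well above the nominal 0.05
```

With 20 looks the familywise error rate ends up several times the nominal 5%, which is exactly the repeated-looks problem the TSA-style corrections are meant to address.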


Thanks Bob. I saw the post and am still trying to process it. I like the simulation suggestion, and I think that is the road I will end up going down.