Some recent work on so-called “living” systematic reviews has led to much discussion about falling into traps analogous to taking multiple looks at trial data during recruitment. As a result, I have come across several papers detailing applications of trial sequential analysis, conditional power, or alpha spending within the meta-analysis paradigm. In each case, it seems like a substantial undertaking to adapt the supplied code to my own analyses.
I know that a common Bayesian approach is simply to base stopping rules on the posterior predictive distribution, and I am wondering whether I am missing something about why that wouldn’t work in the meta-analysis context. If I have a random-effects model, couldn’t I accomplish the same goal as TSA by checking the probability that a new trial would find meaningfully conflicting results? Since this can be accomplished with just one or two lines of code, I assume I must be missing something.
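For concreteness, here is a minimal sketch of what I have in mind. The posterior draws of the pooled mean `mu` and between-study SD `tau` are simulated stand-ins here (in practice they would come from an MCMC fit of the random-effects model), and the “meaningful conflict” threshold of 0 is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for MCMC posterior draws from a Bayesian random-effects
# meta-analysis: mu is the pooled mean effect, tau the between-study SD.
mu_draws = rng.normal(loc=0.30, scale=0.08, size=10_000)
tau_draws = np.abs(rng.normal(loc=0.10, scale=0.03, size=10_000))

# Posterior predictive draws for the true effect in a hypothetical
# new trial: theta_new | mu, tau ~ Normal(mu, tau^2)
theta_new = rng.normal(loc=mu_draws, scale=tau_draws)

# Probability that a new trial's effect would "meaningfully conflict"
# with the pooled result -- here, arbitrarily, falling below 0.
p_conflict = np.mean(theta_new < 0.0)
print(f"P(new trial effect < 0) = {p_conflict:.3f}")
```

If that predictive probability is small enough, the intuition is that further trials are unlikely to overturn the conclusion, which is the same question TSA seems to be asking in frequentist terms.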