Bayes factors for a series of experiments using posteriors as priors

Hey all,

I have a rather conceptual but still specific question for a current project. This question is likely to come up again in my work, and I think it is quite relevant for my field (psychology/neuroscience) in terms of how we currently collect and present evidence. Because of that, and because I want to get this right for a pre-registration of the study, it would be great to get some input.

In sum, I want to provide evidence that an outcome is a quadratic (in fact U-shaped [1]) function of my predictor. This is my model:

brm(y ~ x + I(x*x) + (1 | subject) + (1 | stimulus))

There are two cases for which I have the same prediction: in one case the response distribution (family) is Gaussian, and in the second case it is Bernoulli.
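
Concretely, I imagine two calls along these lines (the data frame names are just placeholders):

library(brms)

# Same linear predictor, two response families (a sketch, not final code)
fit_gauss <- brm(y ~ x + I(x*x) + (1 | subject) + (1 | stimulus),
                 data = dat_gauss, family = gaussian())
fit_bern  <- brm(y ~ x + I(x*x) + (1 | subject) + (1 | stimulus),
                 data = dat_bern, family = bernoulli())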

Our theory predicts that the coefficient on I(x*x) is positive, and I want to quantify the evidence for/against that. I have already run three experiments with the same design and data structure. Unfortunately, when I started this study I wasn't aware of Stan or brms, so I analysed the data with frequentist mixed models, but the specific hypothesis and predictions were pre-registered there as well. In that analysis, two out of the three experiments show 'significant' quadratic effects. Now that I know brms exists, I want to re-analyse the data and run a fourth experiment.

My original plan was/is to use the posterior distributions of my beta parameters (intercept, beta1 & beta2) as priors for the subsequent experiments. Is that feasible? How would I have to model the dependency between the betas for this to work? I think that would be the neatest way for the article that I want to write. Should I maybe use poly() instead of I() so that the terms are orthogonal?
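
To make my question concrete, here is the kind of thing I had in mind, assuming I precompute x2 = x^2 as a column (so the coefficient has a simple name) and approximate each marginal posterior as a normal. I'm aware this throws away the correlations between the betas, which is exactly the part I'm unsure about:

# Sketch of posteriors-as-priors via normal approximations of the marginals
fit1 <- brm(y ~ x + x2 + (1 | subject) + (1 | stimulus), data = exp1)
post <- as.data.frame(fit1)  # posterior draws; columns b_x, b_x2, ...

priors2 <- c(
  set_prior(sprintf("normal(%f, %f)", mean(post$b_x), sd(post$b_x)),
            class = "b", coef = "x"),
  set_prior(sprintf("normal(%f, %f)", mean(post$b_x2), sd(post$b_x2)),
            class = "b", coef = "x2")
)
fit2 <- brm(y ~ x + x2 + (1 | subject) + (1 | stimulus),
            data = exp2, prior = priors2)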

I briefly talked to Paul Buerkner during StanCon in Cambridge and he hinted that I could just analyse all experiments in one go, but I struggle with how to incorporate the random intercepts for subject and stimulus that are nested within the experiments. Is this model

brm(y ~ x + I(x*x) + (x + I(x*x) | experiment) + (1 | subject) + (1 | stimulus))

what I am looking for? If I choose this, how do I evaluate the evidence from a future experiment? Do I just re-run the model with that data included? Is that (because of the different hierarchical structure) equivalent to using posteriors as priors?

In order to quantify and report the evidence, I know I can just examine the posterior distribution and check whether zero is included in the 95 % credible interval, but the journal/reviewers are unlikely to accept this and will want to see Bayes factors. How can I use the previous evidence to generate priors for my last experiment? And is it permissible to retroactively choose a weakly informative prior, or even a super-vague prior as specified here, for the analysis of the already existing data?
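
For what it's worth, this is what I gathered from the brms docs about getting Bayes factors, again assuming a precomputed x2 column (my guess, not yet tested on my data):

# hypothesis() needs prior draws for Savage-Dickey point-null BFs
fit4 <- brm(y ~ x + x2 + (1 | subject) + (1 | stimulus),
            data = exp4, sample_prior = "yes")
hypothesis(fit4, "x2 > 0")  # directional evidence ratio (posterior odds)
hypothesis(fit4, "x2 = 0")  # Savage-Dickey density ratio at zero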

It would be great if I could get some feedback on this because it's my first Stan/brms project and I want to do similar things in the future.

[1] Here, it could be interesting to fit two lines based on a breakpoint, as Simonsohn suggests here, but it's completely over my head and I have no idea how to implement this with my data structure in a hierarchical Bayesian model.
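
From skimming the brms vignette on non-linear models, my (possibly naive) guess would be something like the sketch below, with omega as the breakpoint; the parameterisation and the priors are guesses on my part:

# Purely a guess at a hinge/two-lines model; step() is a Stan function
bform <- bf(
  y ~ b0 + b1 * (x - omega) * step(omega - x) +
       b2 * (x - omega) * step(x - omega),
  b0 + b1 + b2 ~ 1 + (1 | subject) + (1 | stimulus),
  omega ~ 1,
  nl = TRUE
)
bprior <- prior(normal(0, 5), nlpar = "b0") +
  prior(normal(0, 5), nlpar = "b1") +
  prior(normal(0, 5), nlpar = "b2") +
  prior(normal(0, 1), nlpar = "omega")  # scale depends on the range of x
fit_bp <- brm(bform, data = exp1, prior = bprior)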

Between my long-ago frequentist days and my current Bayesian days, I spent a few years espousing likelihood ratios as endpoints of statistical analyses. With LRs, you could combine evidence from multiple studies by simply multiplying their LRs together. From what I gather, BFs are very similar to LRs mathematically and philosophically, so is it possible you can simply multiply your BFs?
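
For what it's worth, the factorisation I have in mind (a sketch, with M_0 and M_1 denoting the two models being compared) is

\mathrm{BF}(Y_1, Y_2) = \frac{p(Y_1, Y_2 \mid M_1)}{p(Y_1, Y_2 \mid M_0)} = \frac{p(Y_1 \mid M_1)}{p(Y_1 \mid M_0)} \times \frac{p(Y_2 \mid Y_1, M_1)}{p(Y_2 \mid Y_1, M_0)}

so multiplying per-experiment BFs is only exact if the second factor is computed with priors updated on Y_1 (i.e., posteriors as priors) rather than with the original priors.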

If you use model 1's posterior as model 2's prior, it is the same thing as just combining the data from experiment 1 with the data from experiment 2.
As in:

p(\theta, Y_1, Y_2) = p(Y_2 \mid \theta)\, p(\theta \mid Y_1)\, p(Y_1) = p(Y_1, Y_2 \mid \theta)\, p(\theta)

(assuming Y_1 and Y_2 are conditionally independent given \theta).

So if you're going to use a posterior as a prior in a second model, you may as well just combine all the data and estimate it in one go.
This is also exactly equivalent to a fixed-effects meta-analysis.

Paul's suggestion includes a random effect of experiment. If the experiments differ in any way, this would probably be the preferred option. It is akin to a random-effects meta-analysis: you assume there can be minor (or, I guess, major) differences between the experiments, and you get a distribution of effects across experiments.

A word of caution: if you run the model you proposed, make sure that subject IDs are unique across studies; that is, if 'subject 1' appears in both experiment 1 and experiment 2 but they are not the same person, they should have distinct IDs. Alternatively, you can use (1 | experiment:subject). Likewise with stimulus: if stimulus 1 differs across studies, use unique IDs, or tell brms the stimuli are nested within experiment via (1 | experiment:stimulus).
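
Concretely, with the nesting made explicit (the data frame name is a placeholder):

# One model over all experiments; subject and stimulus nested in experiment
fit_all <- brm(
  y ~ x + I(x*x) + (x + I(x*x) | experiment) +
      (1 | experiment:subject) + (1 | experiment:stimulus),
  data = all_experiments
)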

Thank you, that is really helpful! I think going for a random effect is sensible in terms of getting a better idea of the uncertainty.

However, when I fit the model with experiment as a random effect, I sometimes get a small number of divergent transitions after warmup (around 5), even with adapt_delta = 0.9999, which I don't get if I don't include this term. How problematic are these in that case?