Hey all,

I have a rather conceptual but still specific question about a current project. This question is likely to come up again in my work, and I think it is quite relevant for my field (psychology/neuroscience) in terms of how we currently collect and present evidence. Because I want to get this right for a pre-registration of the study, it would be great to get some input.

In sum, I want to provide evidence that an outcome is a quadratic (in fact U-shaped1) function of my predictor. This is my model:

`brm(y ~ x + I(x*x) + (1 | subject) + (1 | stimulus))`

There are two cases for which I have the same prediction: in one case the outcome family is Gaussian, and in the other it is Bernoulli.
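In brms both cases would use the same formula and differ only in the `family` argument. A minimal sketch, assuming a data frame `dat` (a placeholder name) with columns `y`, `x`, `subject`, and `stimulus`; `I(x^2)` is equivalent to `I(x*x)`:

```r
library(brms)

# Continuous outcome: Gaussian family (identity link)
fit_gauss <- brm(y ~ x + I(x^2) + (1 | subject) + (1 | stimulus),
                 data = dat, family = gaussian())

# Binary outcome: Bernoulli family (logit link)
fit_bern <- brm(y ~ x + I(x^2) + (1 | subject) + (1 | stimulus),
                data = dat, family = bernoulli())
```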

Our theory predicts that the coefficient on I(x*x) is positive, and I want to quantify the evidence for/against that. I've already run three experiments with the same design and data structure. Unfortunately, when I started this study I wasn't aware of Stan or brms, so I analysed the data with frequentist mixed models, but the specific hypothesis and predictions were also pre-registered. In that analysis, two of the three experiments show ‘significant’ quadratic effects. Now that I know brms exists, I want to re-analyse the data and run a fourth experiment.

My original plan was/is to use the posterior distribution of my beta parameters (intercept, beta1 & beta2) as the prior for each subsequent experiment. Is that feasible? How do I have to model the dependency between the betas so that this works? I think that would be the neatest approach for the article I want to write. Should I use poly() instead of I() so that the terms are orthogonal?
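For what it's worth, `poly(x, 2)` does build orthogonal columns, so the linear and quadratic terms are uncorrelated in the design matrix; and a crude version of the posterior-as-prior idea would be to summarise each marginal posterior by a normal (which drops the posterior correlations between the betas). A sketch, assuming a fitted model `fit1` from experiment 1; the coefficient label "IxE2" is what brms uses for an `I(x^2)` term, but check `rownames(fixef(fit1))` for your model:

```r
# Quick base-R check that poly() columns are orthonormal
set.seed(1)
x <- rnorm(100)
round(crossprod(poly(x, 2)), 10)   # identity matrix: orthogonal terms

# Posterior-as-prior: summarise the quadratic term's marginal posterior
# by a normal and pass it as the prior for the next experiment.
# NOTE: this ignores the correlations between intercept, beta1 and beta2.
b2 <- fixef(fit1)["IxE2", ]
prior2 <- set_prior(sprintf("normal(%f, %f)",
                            b2["Estimate"], b2["Est.Error"]),
                    class = "b", coef = "IxE2")
```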

I briefly talked to Paul Buerkner at StanCon in Cambridge and he hinted that I could just analyse all experiments in one go, but I'm struggling with how to incorporate the random intercepts for subject and stimulus, which are nested within the experiments. Is this model

`brm(y ~ x + I(x*x) + (x + I(x*x) | experiment) + (1 | subject) + (1 | stimulus))`

what I am looking for? If I choose this, how do I evaluate the evidence from a future experiment? Do I just re-run the model with that data included? Is that (because of the different hierarchical structure) equivalent to using posteriors as priors?
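If the combined model is the way to go, my understanding is that evaluating a future experiment would just mean refitting with its rows appended, e.g. via `update()`. A sketch, where `fit_combined`, `dat_exp123`, and `dat_exp4` are placeholder names:

```r
# Append the new experiment's rows (same columns, new experiment label)
all_data <- rbind(dat_exp123, dat_exp4)

# Refit the combined multilevel model on the extended data set
fit_all <- update(fit_combined, newdata = all_data)

# Per-experiment intercepts and (quadratic) slopes
coef(fit_all)$experiment
```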

To quantify and report the evidence, I know I can just examine the posterior distribution and check whether zero is included in the 95% credible interval, but the journal/reviewers are unlikely to accept this and will want to see Bayes factors. How can I use the previous evidence to generate priors for my last experiment? And is it permissible to retroactively choose a weakly informative prior, or even a super-vague prior as specified here, for the analysis of the already existing data?
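As far as I can tell, brms can produce a Savage–Dickey Bayes factor via `hypothesis()`, provided the coefficient has a proper prior and the model was fit with `sample_prior = "yes"`. A sketch (same placeholder `dat` and the `IxE2` label as above):

```r
fit <- brm(y ~ x + I(x^2) + (1 | subject) + (1 | stimulus),
           data = dat, family = gaussian(),
           prior = set_prior("normal(0, 1)", class = "b"),
           sample_prior = "yes")

hypothesis(fit, "IxE2 > 0")   # directional evidence ratio
hypothesis(fit, "IxE2 = 0")   # Savage-Dickey density ratio at zero
```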

It would be great if I could get some feedback on this, because it's my first Stan/brms project and I want to do similar things in the future.

1 Here it could also be interesting to fit two lines joined at a breakpoint, as Simonsohn suggests here, but that's completely over my head and I have no idea how to implement it with my data structure in a hierarchical Bayesian model.