# Rule of thumb for converting a range of priors from linear to logistic regression

I have trial-level data from a study in which participants responded to a series of stimuli, and a predictor of interest. For the sake of this example, let’s call it the size of the stimuli.

There is a null effect of size on both reaction time and accuracy, so I was asked to compute a Bayes Factor for that predictor.

I don’t have a clear way to choose an informed prior, so I am computing the Bayes Factor across a range of priors, using the rstanarm package in R.

In the analysis of reaction time, I compute the Bayes Factor for size across a range of priors: normal distributions centred at 0 with SDs ranging from 0.10 to 5. I do this with the `stan_lmer()` function.

I want to do the same for accuracy, analyzed using `stan_glmer(family = "binomial")`. My question is whether it makes sense to use the same range of priors as in the analysis of reaction time.

You should do some prior predictive checks to get a feel for the range of outcomes implied by a given set of priors.

But as someone with decent experience with 2AFC data and logit models thereof, 0.1 to 5 is a pretty good range of values. When I do accuracy models, I generally use `normal(0, 1)` as the prior for the magnitude of effects.
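To give a feel for what such a prior predictive check looks like on the probability scale, here is a minimal numeric sketch (plain Python/numpy rather than R, and a hypothetical chance-level intercept, purely to show the logic): draw slopes from normal(0, sd) and push them through the inverse logit to see the accuracies each prior implies.

```python
import numpy as np

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
baseline_logit = 0.0  # hypothetical intercept at chance (p = 0.5)

for sd in (0.1, 1.0, 5.0):
    beta = rng.normal(0.0, sd, size=100_000)     # prior draws for the size slope
    p = inv_logit(baseline_logit + beta)         # implied accuracy after a 1-unit increase in size
    extreme = np.mean((p < 0.05) | (p > 0.95))   # prior mass on near-deterministic accuracy
    print(f"sd = {sd:>3}: central 95% of implied accuracy "
          f"[{np.quantile(p, 0.025):.2f}, {np.quantile(p, 0.975):.2f}]; "
          f"P(accuracy outside [.05, .95]) = {extreme:.2f}")
```

With sd = 5, roughly half the prior mass implies accuracies below 5% or above 95%, which is one reason to keep slope priors on the logit scale fairly tight.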

I have to ask: were you specifically directed by a reviewer to generate BFs? Another approach would be to say that you don’t buy into the philosophy of inference implied by BFs and instead argue that the posterior itself best encodes the results.

Thanks for the feedback! The reviewer asked us to use a Bayesian regression to check the robustness of the null result.

When you mention using the posterior, would this be something like choosing a generic prior and then checking how much of the posterior falls outside a region of practical equivalence?

Some folks literally just show the posterior and/or report some quantiles thereof. If you feel you can define a “region of practical equivalence” (ROPE), then sure, you can also report the percentage of the posterior that falls in that range. For brevity when discussing the posterior in a results/discussion section, an alternative I’ve used is an explicitly trichotomized summary: call the effect different from zero if the 95% credible interval excludes zero, equivalent to zero if the 50% credible interval includes zero, and uncertain otherwise.
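As a concrete sketch of that trichotomized rule plus a ROPE percentage, using synthetic draws as a stand-in for the posterior of the size coefficient (in rstanarm you would pull the real draws with `as.matrix(fit)`; Python/numpy here just to show the arithmetic, and the ROPE bounds are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for posterior draws of the size coefficient.
draws = rng.normal(0.02, 0.10, size=4000)

lo95, hi95 = np.quantile(draws, [0.025, 0.975])
lo50, hi50 = np.quantile(draws, [0.25, 0.75])

if lo95 > 0 or hi95 < 0:
    verdict = "different from zero"   # 95% CI excludes zero
elif lo50 <= 0 <= hi50:
    verdict = "equivalent to zero"    # 50% CI includes zero
else:
    verdict = "uncertain"

rope = (-0.1, 0.1)  # hypothetical region of practical equivalence
pct_in_rope = np.mean((draws > rope[0]) & (draws < rope[1]))
print(verdict, f"with {pct_in_rope:.1%} of the posterior inside the ROPE")
```

The same few lines translate directly to R once `draws` comes from a fitted `stan_glmer()` model.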

Oh interesting! I know that priors really affect Bayes Factors. Is the same true for this sort of analysis? Also, do you have anything I can cite for this approach?

I’ve certainly seen it asserted that this approach (typically termed the “estimation” approach, to contrast with the “evidence” approach embodied by BFs) is less sensitive to priors than BFs, especially with priors in the “weakly informative” range.
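That assertion is easy to demonstrate in a toy conjugate model. Below is a hypothetical normal-mean example (an observed effect of 0.05 with standard error 0.10; none of these numbers come from your data), where the Savage–Dickey ratio gives BF01 in favour of the null. The posterior barely moves between prior SDs of 1 and 5, while BF01 changes several-fold.

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sd):
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

y_hat, se = 0.05, 0.10  # hypothetical near-null estimate and its standard error

results = {}
for tau in (0.1, 1.0, 5.0):                    # prior SDs spanning the 0.1-5 range
    post_var = 1 / (1 / se**2 + 1 / tau**2)    # conjugate normal-normal update
    post_mean = post_var * y_hat / se**2
    # Savage-Dickey: BF01 = posterior density at 0 / prior density at 0
    bf01 = normal_pdf(0, post_mean, sqrt(post_var)) / normal_pdf(0, 0, tau)
    results[tau] = (post_mean, bf01)
    print(f"tau = {tau:>3}: posterior mean {post_mean:.3f} "
          f"(sd {sqrt(post_var):.3f}), BF01 = {bf01:.1f}")
```

In this toy case the posterior means under tau = 1 and tau = 5 differ by less than 0.001, while BF01 goes from roughly 9 to roughly 44: the prior scale drives the evidence summary far more than the estimate.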

Thanks again!

We’re still getting a version through peer review, but see my dissertation here for an example of an estimation-based report on 2AFC data.