Evidence ratios when using default priors



I have fitted a multilevel model with an interaction between three factors (a 3×2×2 factorial design predicting EEG amplitudes from several experimental conditions, i.e., categorical variables), with varying slopes and intercepts. I am using the default priors, I have a lot of data, and the model converges very nicely.

Now I want to test a couple of hypotheses about the differences between conditions. Most of my hypotheses are directional (e.g., in condition A the amplitude is higher than in condition B), and I use the hypothesis function for this. I have a couple of questions about the function:

  1. In some cases the evidence ratio is infinite. Is there a way to still get a number in that case? If not, I am not sure how to report this result in a paper.

  2. In one of the comparisons (difference between conditions at baseline) I have a point hypothesis (there should be no difference between two conditions at baseline). The help for the hypothesis function stresses that we should have proper priors when testing point hypotheses. Do you think that using default priors is the wrong way to go in that case? Apart from this issue, I am happy with the weak default priors in this model: I have a lot of data and no strong prior beliefs, and I think the priors are overwhelmed by my data.

  3. When reporting evidence ratios, for both directional and point hypotheses, what can be cited as a reference? I can’t find any references in the help of the hypothesis function.

  4. When testing a hypothesis about a difference between two conditions, do you think that reporting the evidence ratio is enough, or would you suggest reporting something else as well?

Thank you very much for your help, and for the amazing package :)


  1. Infinite means 100% posterior probability (more precisely, 100% of posterior samples) in the direction you are testing.
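For a directional hypothesis, the evidence ratio is p / (1 - p), where p is the posterior probability of the hypothesis, so it becomes infinite as soon as every posterior draw satisfies the hypothesis. A minimal sketch (the parameter name "a" is hypothetical; replace it with one of your model's coefficients):

```r
H <- hypothesis(fit, "a > 0")   # "a" is a hypothetical parameter name
p <- mean(H$samples[, 1] > 0)   # posterior probability that a > 0
p / (1 - p)                     # the evidence ratio; Inf when p == 1
```

In a paper one can then report the posterior probability instead, e.g. that it exceeds 1 - 1/N, with N the number of posterior draws.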

  2. If you specify a hypothesis of the form “a = b”, the evidence ratio is a Bayes factor (computed via the Savage–Dickey density ratio), and the default priors of brms should never be used to compute Bayes factors of point hypotheses, as the latter depend strongly on the prior. Maybe you should consider another way of testing this hypothesis. Did you think about ROPEs (regions of practical equivalence) yet?
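As a sketch of the ROPE idea: instead of testing a = b exactly, check how much posterior mass of the difference falls inside an interval of practically equivalent values. The coefficient name b_conditionB and the ROPE bounds of ±0.1 below are illustrative assumptions; choose bounds that are meaningful for your EEG amplitudes.

```r
# Posterior draws of the (hypothetical) baseline contrast coefficient
draws <- as.data.frame(fit)$b_conditionB
# Proportion of posterior mass inside the ROPE [-0.1, 0.1];
# values near 1 support practical equivalence
mean(abs(draws) < 0.1)
```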

  3. If you are testing directional hypotheses, reporting the (equivalent) posterior probabilities instead of the evidence ratio might be more intuitive. Example:

H <- hypothesis(fit, "a > 0")
# proportion of posterior samples satisfying a > 0,
# i.e. the posterior probability of the hypothesis
colMeans(H$samples > 0)
  4. I would always (at least) report the posterior mean of the difference as well as its 95% (or whatever level) credible interval.
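The posterior mean and credible interval of a contrast can be computed directly from the draws; a sketch with hypothetical coefficient names b_conditionA and b_conditionB:

```r
draws <- as.data.frame(fit)  # one column per model parameter
# Difference between two (hypothetical) condition coefficients
diff <- draws$b_conditionA - draws$b_conditionB
mean(diff)                       # posterior mean of the difference
quantile(diff, c(0.025, 0.975))  # 95% credible interval
```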


Hi Paul,

Thanks for the quick response! I wanted to avoid the a = b (Bayes factor) situation, but it feels weird to test a directional hypothesis (a > b) to show that there is no difference at baseline. I also tested these using a ROPE, so I’ll probably go for that then.