Why does hypothesis() give strong evidence ratios for both 'xxx = 0' and 'xxx > 0'?

Hello folks,
I am sorry, as I think there are two issues in one: first, I'm really not familiar with Bayesian hypothesis testing, and second, I haven't found a good explanation of the hypothesis() function that handles brms models.

So I have a brms linear model with two binary categorical variables, A and B.
I would like to test whether the response at A=1, B=1 is greater than at A=1, B=0 (i.e., 1:1 > 1:0).
So I test my hypothesis with this code

hypothesis(model1, "A1:B1 + B1 = 0")
hypothesis(model1, "A1:B1 + B1 > 0")

I figured out that this is the linear combination I need in order to test specifically whether A1:B1 > A1:B0: under dummy coding, the expected values at (A=1, B=1) and (A=1, B=0) differ exactly by B1 + A1:B1. A minimal sketch of the setup is below.
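For context, here is a hedged sketch of how such a model could be fit so that both hypotheses can be tested; the outcome y, the data frame mydata, and the normal(0, 1) prior are assumptions for illustration, not the original model:

library(brms)

# Assumed minimal setup: a Gaussian outcome y and two binary factors A and B.
# sample_prior = "yes" keeps prior draws, which hypothesis() needs in order
# to compute the evidence ratio of the point hypothesis "= 0".
model1 <- brm(
  y ~ A * B,
  data = mydata,
  prior = prior(normal(0, 1), class = "b"),
  sample_prior = "yes"
)

# E[y | A=1, B=1] = Intercept + A1 + B1 + A1:B1
# E[y | A=1, B=0] = Intercept + A1
# Their difference is B1 + A1:B1, hence the tested combination:
hypothesis(model1, "A1:B1 + B1 = 0")
hypothesis(model1, "A1:B1 + B1 > 0")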

What I don’t understand is that I get positive results for both, like these:

Hypothesis     Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob Star
A1:B1+B1=0         0.19      0.12    -0.05     0.44      95.57      0.99
A1:B1+B1>0         0.19      0.12    -0.01     0.40      15.92      0.94

In Bayesian statistics, do I need to test whether the effect is different from 0 before testing whether it is greater or less than 0, or is this foolish?
As I understand it, the calculations in the hypothesis() function are not the same for a one-sided hypothesis (> 0 or < 0) as for = 0, but I don't understand how to interpret them.

Could you give me any advice?
I have read some publications, but it is still unclear (because I don’t fully get what’s going on in hypothesis()).

Thanks for any help

Bumping this up because it is really puzzling me.

I think the first paragraph in the Details section of the documentation of the hypothesis function in brms is pretty good on this point (?brms::hypothesis). Is there something specific there that you don’t understand or have further questions about?

Hello,
Thank you for your response.
I have read this section, and many different forum threads.
I think where I was confused is that, when considering the evidence ratio, a previous publication proposes: “Jeffreys recommends that odds greater than 3 be considered “some evidence,” odds greater than 10 be considered “strong evidence,” and odds greater than 30 be considered “very strong evidence” for one hypothesis over another.”

I thought that was the gold standard. Under such thresholds, both the null and the non-null hypothesis would be considered supported here.

But what we realized later is that, in fact, the evidence ratio should be much higher to be considered convincing. As mentioned here.
An ER of 19 corresponds to a posterior-odds ratio of 0.95/0.05 (see the sketch below).
I know that interpretation is obviously subjective. But for the null-effect hypothesis, the usual method is to verify that 0 is not in the 95% or 90% credible interval.
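To make that concrete, here is a small sketch (my own illustration, not brms internals) of how a directional evidence ratio converts to a posterior probability:

# For a directional hypothesis, the evidence ratio is the posterior odds
# p / (1 - p), where p is the posterior probability that the hypothesis holds.
er_to_prob <- function(er) er / (1 + er)
er_to_prob(19)     # 0.95
er_to_prob(15.92)  # ~0.94, matching the Post.Prob column above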

As mentioned in Rouder et al., the criteria of 3, 10, and 30 are better suited to Bayes factors, whereas other thresholds should be used for evidence ratios.
This was not an obvious feature of hypothesis(), and there aren't many pages discussing this essential difference in reading its outputs either.
We probably still have a lot of frequentist biases in our way of validating hypotheses, and coming from a non-mathematical background, that makes it difficult to adapt to hypothesis testing with hypothesis().

Thanks for your help

Note that in a Bayesian setting, we are almost always certain that the true effect size is not zero. The true effect could be very small, and we could be uncertain about its direction, but we still know that it isn't literally zero. This is why the point-null hypothesis needs to be handled differently from one-tailed hypotheses. In the one-tailed case, it is straightforward to speak of the posterior probability that the hypothesis is true, and the output of hypothesis() is based on that probability. In the point-null case, the posterior probability that the hypothesis is true is always zero, so hypothesis() instead compares the density of the posterior to that of the prior at the point value (the Savage-Dickey density ratio).
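As an illustration (a rough sketch of the underlying quantities, not the exact brms implementation), assuming draws and prior_draws hold posterior and prior samples of the tested combination A1:B1 + B1:

# One-sided "A1:B1 + B1 > 0": the evidence ratio is the posterior odds.
p <- mean(draws > 0)
evid_ratio_onesided <- p / (1 - p)

# Point null "A1:B1 + B1 = 0": Savage-Dickey density ratio, i.e. the
# posterior density at 0 divided by the prior density at 0. This is why
# the model must be fit with sample_prior = "yes".
dens_at_zero <- function(x) {
  d <- density(x)                 # kernel density estimate
  approx(d$x, d$y, xout = 0)$y    # interpolate its value at 0
}
evid_ratio_point <- dens_at_zero(draws) / dens_at_zero(prior_draws)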

Oh, thank you for this clarification, it makes sense indeed.
Thanks