I am trying to understand how to specify a ROPE (region of practical equivalence) when fitting a logistic model.
Let’s say I would like to detect whether the effect of a dummy-coded predictor (e.g., intervention) corresponds to an odds ratio larger than 1.20 or smaller than 0.80. I want to examine whether the intervention increases or decreases the outcome value by 20% or more; if not, the intervention does not make a meaningful difference.

In this case:
ln(1.20) = 0.1823215568
ln(0.8) = -0.2231435513

So, I should set my ROPE as (-0.2231435513, 0.1823215568).
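As a quick arithmetic check of those bounds (not tied to any particular package), they are just the natural logs of the odds-ratio limits:

```python
import math

# ROPE bounds on the log-odds scale, from the odds-ratio limits 0.8 and 1.2
lower = math.log(0.8)   # about -0.2231
upper = math.log(1.2)   # about  0.1823
print(lower, upper)
```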

Am I understanding this right? I feel this makes sense, but I did not encounter such a simple guideline, so I wonder if I might be misunderstanding something.

So, does the log(1.1) in the post refer to ln(1.1)?

Then, I guess my above approach is on the right track, but the interpretation should be:

I want to examine whether the intervention increases/decreases the ODDS of the outcome value (i.e., the success/failure ratio) by 20% or more; if not, the intervention does not make a meaningful difference.

I may be struggling because English is not my first language and I am not strong at math. If somebody can confirm my understanding, I would greatly appreciate it.

Hey, @a_t. I think part of the difficulty here, which is part of the difficulty that often comes up with logistic regression, is that there are many ways to talk about the results. For example, at times you are talking about odds ratios, and at other times you are referencing percentages. Let’s get specific. How exactly do you want to express your outcome? As an odds ratio? As a difference in probabilities? As one probability expressed as a percent change relative to another probability? Something else?

Me, for example, I always prefer probability contrasts, which is one probability minus another probability. I believe this is sometimes called a risk difference (though the jargon of “risk” is a poor fit for my discipline). Other folks, however, love those odds ratios.

Thank you for your response, @Solomon! I truly appreciate you and this community, as I am learning Bayesian modeling alone without having anybody near me to ask such questions.

My initial idea was as an odds ratio.
By stating

increases the odds of the outcome value by 20% or more

I intended to express the odds ratio (1.2 times or greater) comparing the intervention to a control condition (e.g., no intervention).

I am using brms code something like the following: fit = brm(outcome ~ intervention + pretest_outcome + (1 | participant) + (intervention | item), family = "bernoulli", ...)
where intervention was dummy coded as 0 for control and 1 for intervention. outcome is a correct/incorrect response on a cognitive task (e.g., math questions) after the treatment (intervention vs. no intervention). We did the same test for the outcome variable as a pretest beforehand, hence pretest_outcome in the model.

So maybe I should have stated it like this: I want to examine whether the intervention increases/decreases the ODDS of the outcome value (i.e., the success/failure ratio) by a factor of 1.2 or more compared to the control condition; if not, the intervention does not make a meaningful difference.

Then, setting the ROPE to be:
upper: ln(1.20) = 0.1823215568
lower: ln(0.8) = -0.2231435513
Am I understanding this right?
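As a sketch of what that ROPE check amounts to once you have posterior draws for the intervention coefficient (the draws below are simulated stand-ins; in practice you would extract them from the fitted brms model):

```python
import math
import random

random.seed(1)
# Hypothetical posterior draws for the intervention coefficient (log-odds scale)
draws = [random.gauss(0.05, 0.15) for _ in range(4000)]

# ROPE bounds from the odds-ratio limits 0.8 and 1.2
rope_lower, rope_upper = math.log(0.8), math.log(1.2)

# Proportion of the posterior that falls inside the ROPE
inside = sum(rope_lower <= d <= rope_upper for d in draws) / len(draws)
print(f"Proportion of posterior inside ROPE: {inside:.2f}")
```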

Me, for example, I always prefer probability contrasts, which is one probability minus another probability. I believe this is sometimes called a risk difference (though the jargon of “risk” is a poor fit for my discipline). Other folks, however, love those odds ratios.

So, with this approach, we need to know the baseline odds to start with, right? In my experiment, I do not know beforehand how well the control condition performs on the outcome. Even in such a case, can I still use probability contrasts?
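To see why the baseline matters here, a small numeric illustration (the baseline probabilities below are made up): the same odds ratio of 1.2 implies a different probability contrast depending on the control condition's probability.

```python
def prob_after_or(p0, odds_ratio):
    """Convert a baseline probability and an odds ratio into the implied new probability."""
    odds0 = p0 / (1 - p0)          # baseline odds
    odds1 = odds_ratio * odds0     # odds after applying the odds ratio
    return odds1 / (1 + odds1)     # back to the probability scale

for p0 in (0.2, 0.5, 0.8):
    p1 = prob_after_or(p0, 1.2)
    print(f"baseline {p0:.1f} -> {p1:.3f}, contrast {p1 - p0:+.3f}")
```

A fixed odds ratio maps to a larger probability contrast near a baseline of 0.5 and a smaller one near the extremes, which is why a probability-contrast ROPE cannot be derived from an odds ratio alone.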