In my field, it is common to report odds ratios as estimates from logistic regression. I have now fitted a logistic regression model (bernoulli family, binary outcome) in brms, and get the following “estimates” from summary():

estimate = 1.84
95% CI = 0.43 - 3.55

These are on the log-odds scale, and the 95% CI lies entirely above zero (“significant”, in frequentist / NHST terms).

If I want to transform the estimates to the odds-ratio scale, and I exponentiate the values from the posterior distribution and then calculate a summary statistic like the mean / median as the “point estimate”, the results look like this:

Odds Ratio: 5.98
95% CI = 0.72 - 25.56

i.e., the 95% interval now covers the “null” value of 1, or in freq. terminology: non-significant. I know this is because I don’t have a single value but a distribution of values, and the mean of an exponentiated distribution is not the same as the exponentiated mean of a distribution. But how should I deal with this issue?
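The mean/median asymmetry is easy to reproduce numerically. A minimal sketch in Python/NumPy, with made-up normal draws standing in for the posterior (not the actual brms output from above):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical posterior draws of a log-odds coefficient
draws = rng.normal(loc=1.84, scale=0.8, size=100_000)

# The mean does NOT commute with exp(): by Jensen's inequality,
# the mean of the exponentiated draws exceeds exp() of the mean.
print(np.exp(draws.mean()))    # exp of the posterior mean
print(np.exp(draws).mean())    # mean of the exponentiated draws (larger)

# The median DOES commute, because exp() is strictly increasing.
print(np.exp(np.median(draws)))
print(np.median(np.exp(draws)))
```

So whether you summarize before or after exponentiating matters for the mean, but (up to interpolation noise) not for the median.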

Paul suggested transforming first, and then summarizing:

which leads to the problems I have here. So my question is, if you want to report “Odds Ratios” from Bayesian logistic regression, what approach would you suggest? I think that readers who are less familiar with Bayesian methods are confused by the fact that the results are both “significant” and “not significant” at the same time…

I’ve always either stuck to the logit scale when reporting confidence intervals, or used one of the effect interpretation libraries to transform the estimated effects to the probability scale. I have not presented confidence intervals for odds ratios, mainly because the OR scale is asymmetric and looks awful when plotted. But this has all been in the frequentist framework.

I usually plot odds ratios and their CIs on a log scale (like scale_x_log10() in ggplot2), so they look symmetric. But the issue here is presenting the results (estimate and CI) in tables, so I can’t sail around the problem that way. ;-)

Marginal effects plots, where the effects are shown on the probability scale, are planned as well. Here I indeed do not have this problem…

I usually use the median as the point estimate, so in my case the exponentiated point estimate does not change, no matter whether I calculate the median before or after transforming the posterior distribution. But since the posterior distribution is slightly skewed (see pic below), the HDI covers the region of practical equivalence when I first exponentiate and then calculate the HDI, but does not cover it when I first calculate the HDI and then exponentiate its endpoints.
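This HDI behaviour can also be demonstrated directly: the HDI is not invariant under monotone transforms, because exp() reshapes the density (skews it to the right), which shifts the *narrowest* interval toward smaller values. A sketch in Python with a simple sliding-window HDI over hypothetical log-odds draws (not the OP’s posterior):

```python
import numpy as np

def hdi(samples, prob=0.95):
    """Narrowest interval containing `prob` of the samples
    (sliding window over the sorted draws)."""
    x = np.sort(samples)
    n = len(x)
    k = int(np.ceil(prob * n))           # draws the interval must contain
    widths = x[k - 1:] - x[: n - k + 1]  # width of every candidate interval
    i = int(np.argmin(widths))
    return x[i], x[i + k - 1]

rng = np.random.default_rng(1)
draws = rng.normal(1.0, 0.8, size=100_000)  # hypothetical log-odds draws

lo, hi = hdi(draws)                  # HDI on the log-odds scale
lo_or, hi_or = hdi(np.exp(draws))    # HDI computed after exponentiating

# The two routes disagree: exponentiating the log-scale HDI endpoints
# gives a different interval than the HDI of the exponentiated draws,
# which is pulled toward smaller odds-ratio values.
print(np.exp(lo), np.exp(hi))
print(lo_or, hi_or)
```

So whether a given interval excludes 1 (or overlaps a ROPE) can genuinely depend on the scale on which the HDI is computed.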

I wonder if Paul has a mathematical explanation for recommending what he does… whether the odds ratios are somehow ‘closer to the truth’ than the logits, given that the data itself consists of binomial probabilities, not logits, and the OR scale is an intermediate step between the two.

<awaits a math guru’s answer>

Meanwhile, I’d report the estimated effect of that particular predictor regardless of the scale used. It’s the point estimate that matters most after all, not the amount of uncertainty around it.

Just to help other readers: The OP is using highest density intervals (the narrowest interval to contain x% of the data) and not equal-tailed intervals (the middle x% of the data).

I only know of HDIs from Kruschke. Do you know what he says about summarizing transformations?

Would you say that this issue is particularly relevant for HDIs? Indeed, when using “simple quantiles”, e.g. via brms::posterior_interval(), the CI endpoints always stay “positive” on the same side of the null:

Yes, quantile-based credible intervals work like the median. If you exponentiate the posterior draws, the order of the draws does not change (the median of exp is exp of the median, the 2.5% percentile of exp is exp of the 2.5% percentile, …). I think if you want to do something like a hypothesis test, looking at percentiles and equal-tailed credible intervals is more consistent. The HDI is probably better if you want to convey uncertainty around the parameter estimate.
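That rank-preservation argument can be verified directly — a quick Python sketch with made-up draws (any strictly increasing transform would behave the same way):

```python
import numpy as np

rng = np.random.default_rng(2)
draws = rng.normal(1.84, 0.8, size=4000)  # hypothetical log-odds draws

# exp() is strictly increasing, so the ranks of the draws are unchanged:
assert (np.argsort(draws) == np.argsort(np.exp(draws))).all()

# Hence percentile summaries commute with the transform. With
# method="lower", np.percentile returns an actual draw, so the
# equality is exact rather than approximate:
for p in (2.5, 50.0, 97.5):
    assert np.exp(np.percentile(draws, p, method="lower")) == \
           np.percentile(np.exp(draws), p, method="lower")
```

This is exactly why an equal-tailed interval that excludes 0 on the log-odds scale always maps to one that excludes 1 on the OR scale, while an HDI need not.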