How to deal with/interpret Credible Intervals including zero

Greetings all,

I have a rather naive question about credible intervals that include zero. As part of my learning process, I am applying some of my own data in a Bayesian framework. In the traditional framework, confidence intervals that include zero are described as ‘non-significant’. How does this work in the Bayesian framework? Honestly, I am not sure how to interpret the term ‘non-significant’. Does it mean ‘the analysis is all wrong/useless and we should not use it’, or ‘since it is non-significant, there is no relationship between the predictors and the outcome’? In the second case, how do we explain differences between different predictors?

Meanwhile, I applied a negative binomial model, based on the output of check_distribution():

check_distribution(m1)
# Distribution of Model Family

Predicted Distribution of Residuals

 Distribution Probability
       normal         75%
    bernoulli          6%
         beta          6%

Predicted Distribution of Response

               Distribution Probability
 neg. binomial (zero-infl.)         56%
              beta-binomial         22%
                  lognormal         19%

For instance:

# fit with rstanarm::stan_glm(), summarise with bayestestR::describe_posterior()
m1 <- stan_glm(n ~ 1 + groups, family = neg_binomial_2(link = "log"),
               iter = 10000, QR = TRUE, data = alternation_freq)
describe_posterior(m1, ci = 0.89)

Parameter         | Median |        89% CI |     pd |          ROPE | % in ROPE |  Rhat |      ESS
-----------------------------------------------------------------------------------------------------------------
(Intercept)       |   4.94 | [ 3.43, 6.80] |   100% | [-0.10, 0.10] |        0% | 1.001 |  7194.00
Group1            |  -0.38 | [-2.86, 2.28] | 61.91% | [-0.10, 0.10] |     6.47% | 1.001 | 10819.00
Group2            |   1.13 | [-1.35, 3.72] | 80.28% | [-0.10, 0.10] |     3.86% | 1.001 | 10686.00
Group3            |   0.01 | [-2.63, 2.48] | 50.43% | [-0.10, 0.10] |     6.68% | 1.000 | 10852.00
Group4            |   0.22 | [-2.42, 2.69] | 56.89% | [-0.10, 0.10] |     6.35% | 1.000 | 10841.00
Group5            |  -0.49 | [-3.12, 1.93] | 64.91% | [-0.10, 0.10] |     6.15% | 1.000 | 11083.00
Group6            |  -0.18 | [-2.60, 2.42] | 55.92% | [-0.10, 0.10] |     7.14% | 1.001 | 11625.00
Group7            |   0.28 | [-2.16, 2.95] | 58.74% | [-0.10, 0.10] |     6.54% | 1.000 | 10902.00
Group8            |   0.35 | [-2.21, 2.83] | 60.67% | [-0.10, 0.10] |     6.55% | 1.001 | 12263.00
Group9            |  -0.28 | [-2.81, 2.26] | 59.14% | [-0.10, 0.10] |     6.71% | 1.001 | 10009.00
Group10           |  -0.07 | [-2.68, 2.46] | 52.39% | [-0.10, 0.10] |     6.99% | 1.000 | 10405.00

The correct interpretation of “significant/insignificant” is an often debated and misunderstood topic that would be hard to do justice in a forum post. If you want to dive into it, a good place to start is the back-and-forth between Andrew Gelman, Sander Greenland, Deborah Mayo, and others. As to your question: provided you trust your model and data collection, a simple Bayesian summary is to report the posterior probability that an effect exceeds some practically relevant threshold (not necessarily 0, though it could be); see the sketch below. How you then use this, and whether it carries a causal interpretation, will depend on the experimental goals and design.
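For example (a minimal sketch, assuming the m1 fit from above; the coefficient label "groupsGroup2" and the 0.5 threshold are placeholders you would replace with your own):

draws <- as.data.frame(m1)            # posterior draws, one column per parameter
colnames(draws)                       # check the actual coefficient labels first
mean(draws[["groupsGroup2"]] > 0)     # Pr(effect > 0) on the log scale
mean(draws[["groupsGroup2"]] > 0.5)   # Pr(effect > 0.5); 0.5 is only illustrative

These probabilities are computed on the log (linear-predictor) scale, so a threshold of 0.5 corresponds to a multiplicative change of exp(0.5) ≈ 1.65 in the expected count.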


In my case, the outcome is count data per group. So a CI that includes zero might mean there are no differences between groups?

Also, since the regression coefficients express each group's difference from the intercept (the reference level), how do we assess the data across all groups together?

For instance, considering the results above, the second row shows the variation from the intercept to Group1, right? Can marginal effects based on this result show the overall variation among all groups? Something like the sketch below is what I have in mind.
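(A rough sketch, assuming the m1 fit and the alternation_freq data from above; the 89% interval just mirrors describe_posterior(), and rstanarm's posterior_epred() is used here as one way to get per-group expected counts rather than contrasts against the intercept.)

newdat <- data.frame(groups = sort(unique(alternation_freq$groups)))
ep <- posterior_epred(m1, newdata = newdat)                        # draws x groups, expected counts
summ <- t(apply(ep, 2, quantile, probs = c(0.055, 0.5, 0.945)))    # median and 89% interval per group
rownames(summ) <- as.character(newdat$groups)
round(summ, 1)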