Setting ROPE values on a model of count data

I have a data set of counts of various bird species in three national parks. My aim is to see if they’re declining in any of those areas. I’m using brms to fit a hierarchical model (I give some more detail of the model below*).

My question is: what’s the best way to assess whether my coefficients on the dates are negative, which would indicate a decline?

I’ve seen some authors argue for using a ROPE, but I’d appreciate your thoughts on what interval to specify when I’m only interested in whether there’s a decline. This is complicated by the fact that I’m working on the log scale, given the negative binomial error structure.

Here’s my fixed effect output:

Population-Level Effects: 
               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept         -0.13      0.57    -1.21     1.10 1.00     2424     2469
ndate             -0.00      0.01    -0.02     0.01 1.00     2684     2657
NPRuaha           -0.09      0.69    -1.48     1.32 1.00     2840     2476
NPSelous           0.54      0.86    -1.24     2.14 1.00     3085     2712
SeasonWet          0.14      0.16    -0.17     0.45 1.00     5330     3321
CarcassPres1       0.62      0.27     0.11     1.18 1.00     4709     2861
ndate:NPRuaha     -0.01      0.01    -0.03     0.00 1.00     2836     2964
ndate:NPSelous    -0.02      0.01    -0.05     0.01 1.00     2702     2574
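
For a sense of scale: because the model uses a log link and ndate is in months, these slopes translate to multiplicative changes in expected counts. A quick sketch using just the posterior means above (point estimates only, ignoring the full posteriors):

```r
# Convert a log-scale slope to a percent change per month and per year,
# using the posterior mean for Selous from the table above.
slope_selous <- -0.00 + (-0.02)            # ndate + ndate:NPSelous
pct_per_month <- (exp(slope_selous) - 1) * 100
pct_per_year  <- (exp(12 * slope_selous) - 1) * 100
round(c(pct_per_month, pct_per_year), 1)   # roughly -2% per month, -21% per year
```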

Here’s the model:

library(brms)
library(bayestestR)

#' set normal priors on the intercept and park coefficients
newprior_1 <-
  c(
    set_prior("normal(0,1)", class = "b", coef = "Intercept"),
    set_prior("normal(0,1)", class = "b", coef = "NPRuaha"),
    set_prior("normal(0,1)", class = "b", coef = "NPSelous")
  )

#' fit the model with tighter normal priors
nbglm_norm_prior_tight <-
  brm(
    AWB ~  0 + Intercept + # this allows control of prior on the intercept
      ndate * NP + Season + CarcassPres +
      (1 | NP / StandardTransect) +
      offset(log(Tlength)),
    family = negbinomial,
    data = mydata,
    chains = 4,
    control = list(adapt_delta = 0.999, max_treedepth = 15),
    prior = newprior_1,
    save_pars = save_pars(all = TRUE)
  )

My use of ROPE is drawn from the equivalence_test function:

#' extract the posterior draws, then run a test using ROPE
draws <- posterior::as_draws_df(nbglm_norm_prior_tight)

bayestestR::equivalence_test(draws$b_Intercept)
bayestestR::equivalence_test(draws$b_ndate + draws$`b_ndate:NPRuaha`)
bayestestR::equivalence_test(draws$b_ndate + draws$`b_ndate:NPSelous`)
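
One way I've considered picking the ROPE bounds is to decide what yearly change in counts is practically negligible and convert it to the per-month log scale (the ±1% per year below is purely an illustrative assumption, not a recommendation):

```r
# Assumption: a change of up to 1% per year in expected counts is negligible.
# ndate is in months and the model is on the log scale, so convert accordingly.
negligible_annual <- 0.01
rope_half_width <- abs(log(1 - negligible_annual)) / 12   # per-month, log scale
rope <- c(-rope_half_width, rope_half_width)
round(rope, 5)   # approximately -0.00084 to 0.00084
```

These bounds could then be supplied via the range argument of bayestestR::equivalence_test().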

*I modelled the counts as a function of time (in months) interacting with park, the presence of carrion, the season (wet or dry), and a random effect of transect nested within park. I also included the log length of the transect as an offset to account for unequal sampling effort.

Full data set:

mydata <- structure(list(AWB = c(7, 66, 15, 44, 22, 60, 45, 32, 30, 33, 
14, 0, 45, 39, 39, 24, 37, 66, 37, 60, 18, 3, 25, 13, 34, 38, 
58, 0, 12, 6, 33, 2, 34, 18, 75, 20, 4, 9, 15, 4, 0, 21, 50, 
24, 21, 9, 5, 87, 13, 43, 1, 19, 13, 1, 28, 56, 18, 42, 13, 2, 
53, 16, 37, 51, 79, 5, 49, 11, 34, 91, 30, 2, 0, 15, 3, 57, 5, 
18, 31, 14, 56, 72, 35, 94, 10, 45, 8, 29, 33, 34, 8, 53, 54, 
24, 5, 21, 11, 27, 83, 23, 24, 4, 10, 13, 17, 11, 51, 6), ndate = c(0, 
0, 0, 0, 0, 14, 14, 14, 14, 14, 14, 19, 19, 19, 19, 20, 20, 25, 
25, 25, 25, 25, 25, 32, 32, 33, 33, 33, 33, 33, 38, 38, 38, 38, 
38, 38, 39, 39, 39, 39, 39, 41, 41, 42, 42, 42, 42, 43, 43, 43, 
44, 46, 46, 46, 46, 51, 51, 51, 51, 51, 51, 54, 54, 54, 55, 56, 
56, 56, 58, 58, 61, 61, 61, 61, 61, 63, 63, 66, 66, 67, 67, 67, 
68, 68, 68, 68, 68, 71, 72, 73, 78, 78, 79, 79, 89, 89, 89, 90, 
90, 91, 91, 91, 96, 96, 96, 96, 97, 97), nyear = c(2013, 2013, 
2013, 2013, 2013, 2014, 2014, 2014, 2014, 2014, 2014, 2015, 2015, 
2015, 2015, 2015, 2015, 2015, 2015, 2015, 2015, 2015, 2015, 2016, 
2016, 2016, 2016, 2016, 2016, 2016, 2016, 2016, 2016, 2016, 2016, 
2016, 2016, 2016, 2016, 2016, 2016, 2017, 2017, 2017, 2017, 2017, 
2017, 2017, 2017, 2017, 2017, 2017, 2017, 2017, 2017, 2017, 2017, 
2017, 2017, 2017, 2017, 2018, 2018, 2018, 2018, 2018, 2018, 2018, 
2018, 2018, 2018, 2018, 2018, 2018, 2018, 2018, 2018, 2019, 2019, 
2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 
2020, 2020, 2020, 2020, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 
2021, 2021, 2021, 2021, 2021, 2021, 2021), NP = structure(c(1L, 
1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 1L, 
1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 
2L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 
2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 1L, 3L, 3L, 
2L, 2L, 2L, 3L, 3L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 3L, 3L, 
3L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 1L, 1L, 3L, 3L, 2L, 2L, 2L, 
1L, 1L, 3L, 3L, 3L, 2L, 2L, 2L, 3L, 3L, 1L), .Label = c("Katavi", 
"Ruaha", "Selous"), class = "factor"), Season = c("Dry", "Dry", 
"Dry", "Dry", "Dry", "Dry", "Dry", "Dry", "Dry", "Dry", "Dry", 
"Wet", "Wet", "Wet", "Wet", "Wet", "Wet", "Dry", "Dry", "Dry", 
"Dry", "Dry", "Dry", "Wet", "Wet", "Wet", "Wet", "Wet", "Wet", 
"Wet", "Wet", "Dry", "Dry", "Dry", "Dry", "Dry", "Dry", "Dry", 
"Dry", "Dry", "Dry", "Wet", "Wet", "Wet", "Wet", "Wet", "Wet", 
"Wet", "Wet", "Wet", "Wet", "Dry", "Dry", "Dry", "Dry", "Dry", 
"Dry", "Dry", "Dry", "Dry", "Dry", "Wet", "Wet", "Wet", "Wet", 
"Wet", "Wet", "Wet", "Dry", "Dry", "Dry", "Wet", "Dry", "Dry", 
"Dry", "Dry", "Dry", "Wet", "Wet", "Wet", "Wet", "Wet", "Wet", 
"Wet", "Wet", "Wet", "Wet", "Dry", "Dry", "Dry", "Wet", "Wet", 
"Wet", "Wet", "Wet", "Wet", "Wet", "Wet", "Wet", "Wet", "Wet", 
"Wet", "Dry", "Dry", "Dry", "Dry", "Dry", "Dry"), CarcassPres = structure(c(1L, 
1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L), .Label = c("0", 
"1"), class = "factor"), StandardTransect = structure(c(4L, 3L, 
1L, 5L, 7L, 5L, 1L, 8L, 6L, 4L, 3L, 8L, 5L, 6L, 1L, 4L, 3L, 4L, 
3L, 5L, 1L, 8L, 6L, 8L, 6L, 5L, 1L, 8L, 6L, 1L, 5L, 5L, 1L, 6L, 
4L, 3L, 8L, 6L, 5L, 1L, 8L, 4L, 3L, 5L, 1L, 8L, 6L, 5L, 1L, 6L, 
8L, 5L, 1L, 8L, 6L, 4L, 3L, 5L, 1L, 8L, 6L, 3L, 4L, 10L, 2L, 
1L, 5L, 6L, 2L, 10L, 5L, 8L, 8L, 1L, 6L, 3L, 4L, 4L, 3L, 9L, 
2L, 10L, 5L, 1L, 7L, 5L, 7L, 2L, 10L, 9L, 4L, 3L, 10L, 2L, 5L, 
1L, 7L, 4L, 3L, 2L, 10L, 9L, 5L, 7L, 1L, 2L, 9L, 4L), .Label = c("Jongomero", 
"Kidai", "LakeChada", "LakeKatavi", "Lunda", "Magangwe", "Mbagi-Mdonya", 
"Mpululu", "Msolwa", "Mtemere"), class = "factor"), Tlength = c(35.2, 
86.7, 93, 75, 27.2, 74.4, 93, 10.3, 45.8, 35.2, 78.2, 10.3, 71, 
45.8, 93, 35.2, 63.9, 35.2, 77.9, 86.6, 93, 10.3, 45.8, 10.3, 
45.8, 68.9, 93, 10.3, 45.8, 93, 86.7, 90.5, 93, 45.8, 35.2, 81.6, 
10.3, 45.8, 88.2, 93, 10.3, 35.2, 64.6, 82.3, 93, 10.3, 45.8, 
77.9, 93, 45.8, 10.3, 90.3, 93, 10.3, 45.8, 35.2, 77.4, 87.5, 
93, 10.3, 45.8, 66, 35.2, 71.2, 85.7, 93, 87.5, 45.8, 85.5, 69.6, 
97.8, 10.3, 10.3, 93, 45.8, 86.6, 35.2, 35.2, 71.9, 77.9, 88.5, 
80, 85.2, 93, 56.1, 85.5, 56.1, 97.6, 81.8, 79.7, 35.2, 71.1, 
81.9, 53.8, 86.5, 68.7, 60.7, 78.7, 56.6, 66.9, 71.8, 79.2, 82.6, 
71.8, 92.4, 85.6, 78.5, 77.3)), row.names = c(NA, -108L), class = "data.frame")

ROPE is for when you want to know whether there is a lot of posterior mass near some value. If you want to look at the sign of the coefficient, you can just compute the probability that the coefficient is negative (e.g. mean(coefficient < 0)). You can extract the posterior draws from the brms object and use the posterior package: Tools for Working with Posterior Distributions • posterior


Thanks for the response. This may sound stupid, but why is the average of the coefficient draws that are less than zero the same as the probability that it is less than zero? I’d have thought you’d need to include the whole density rather than cutting it off.

You see, coefficient < 0 will output a binary vector, like so:
TRUE TRUE FALSE TRUE FALSE FALSE TRUE. R converts this to 1 1 0 1 0 0 1. Taking the average is, mathematically,

\bar{\delta}_k = \frac{1}{M}\sum_{i=1}^M \mathbb{I}\left(\beta_k^{(i)} \leq 0\right),

where \beta_k^{(i)} is the i-th posterior draw of the k-th coefficient. This is a good (consistent, unbiased) estimate of the probability \operatorname{Pr}\left(\beta_k \leq 0 \mid \text{data}\right). Note that it does use the whole density, in the sense that it averages over many draws.
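
You can see this numerically with simulated draws (a toy normal "posterior" here, not your model):

```r
set.seed(1)
beta_draws <- rnorm(1e5, mean = -0.5, sd = 1)   # stand-in posterior draws
p_neg_mc <- mean(beta_draws < 0)                # Monte Carlo estimate
p_neg_exact <- pnorm(0, mean = -0.5, sd = 1)    # exact Pr(beta < 0) for this normal
c(p_neg_mc, p_neg_exact)                        # both close to 0.69
```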
