Varying standard deviations for varying intercepts?

I’m interested in modelling how the variability of varying intercepts (e.g., one for each year) changes over time (year), having accounted for several fixed effects.

The data I’m working with are sightings of different species (one count per species per year when sampling occurred), alongside a measure of sampling effort. Our research questions are: 1) how do species differ in inter-annual variability, and 2) does inter-annual variability itself change over time?

To begin, I’m using a multilevel (random intercepts) approach to model annual encounter rates of different species, adjusting for sampling effort (as an offset term) and long-term trends in abundance (as a smooth term with a low basis dimension).

Here is a basic formula for the original varying effects model, which seems to be running well. Note that the smooth term has been simplified here:

\begin{aligned} Encounters &\sim Poisson(\lambda) \\[1em] log(\lambda) &= log(Effort) + s(Year) + \alpha_{[Year]} \\[1em] \alpha_{[Year]} &\sim Normal(\overline{\alpha}, \sigma) \\[1em] \overline{\alpha} &\sim Normal(0,1.5) \\ \sigma &\sim Exponential(1) \\ \end{aligned}
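For reference, here is a minimal brms sketch of that model (assuming a data frame `dat` with columns `Encounters`, `Effort`, and `Year`; the basis dimension `k = 5` is just a placeholder for the simplified smooth):

```r
library(brms)

fit <- brm(
  Encounters ~ offset(log(Effort)) + s(Year, k = 5) + (1 | Year),
  data   = dat,
  family = poisson(),
  prior  = c(
    prior(normal(0, 1.5), class = "Intercept"),  # alpha-bar
    prior(exponential(1), class = "sd")          # sigma
  )
)
```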

To address research question 1, we are then comparing the standard deviations (\sigma) of each species’ random intercepts to understand how species differ in inter-annual variability.
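Concretely, the posterior draws of \sigma can be pulled from each species’ fitted model and compared (sketch below assumes one fitted brms model per species, stored in a hypothetical list `fits`; `sd_Year__Intercept` is brms’s default name for this parameter):

```r
library(posterior)

# Posterior draws of the between-year SD from each species' model
sigma_draws <- lapply(fits, function(f) as_draws_df(f)$sd_Year__Intercept)

# e.g. posterior medians, one per species
sapply(sigma_draws, median)
```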

As a next step, I want to explore whether, within species, this variability is itself growing or shrinking through time. One approach would be to take differences between the posterior random intercepts of consecutive years and model these “year-to-year differences” as a function of time. I’m wondering if there is a more elegant way to model this “effect of year on the variability or unpredictability of encounters” that would make good use of the data and not assume that consecutive years are the correct scale at which to think about the process.
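For concreteness, the year-to-year-differences idea would look something like this (assuming a fitted brms model `fit` as above; `r_Year[...]` is brms’s naming for the per-year offsets, and the regex extraction may need adjusting for other model structures):

```r
# Matrix of posterior draws (rows) by year intercepts (columns)
draws <- as.matrix(fit, variable = "^r_Year\\[", regex = TRUE)

# Differences between consecutive years, per posterior draw
diffs <- draws[, -1] - draws[, -ncol(draws)]

# |diffs| (or diffs^2) could then be modelled as a function of year
```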

One idea I’ve had is to run a second multilevel model where we smuggle a time effect into the sigma term for each species’ random intercepts. This would mean allowing each annual intercept to have its own standard deviation, and (I think?) the resulting \beta coefficient would represent the temporal trend in variability (after accounting for sampling effort and abundance trends). Something like the following:

\begin{aligned} Encounters &\sim Poisson(\lambda) \\[1em] log(\lambda) &= log(Effort) + s(Year) + \alpha_{[Year]} \\[1em] \alpha_{[Year]} &\sim Normal(\overline{\alpha}, \sigma_{[Year]}) \\[1em] log(\sigma_{[Year]}) &= \overline{\sigma} + \beta \cdot Year \\[1em] \overline{\alpha} &\sim Normal(0,1.5) \\ \overline{\sigma} &\sim Exponential(1) \\ \beta &\sim Normal(0,1) \end{aligned}
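On the implementation side, I think something close to this can be written in brms using its non-linear syntax, by hand-rolling the non-centred parameterisation \alpha_{[Year]} = exp(logsigma_{[Year]}) \cdot z_{[Year]} with z_{[Year]} \sim Normal(0, 1), where the exp() keeps each standard deviation positive. A rough, untested sketch (`Year_c` is a hypothetical centred year covariate; exact syntax may need adjusting):

```r
library(brms)

bform <- bf(
  Encounters ~ mu + exp(logsigma) * z,    # alpha_year = exp(logsigma_year) * z_year
  mu ~ offset(log(Effort)) + s(Year, k = 5),
  z ~ 0 + (1 | Year),                     # z_year, with its SD fixed to 1 below
  logsigma ~ 1 + Year_c,                  # log(sigma_year) = sigma-bar + beta * Year_c
  nl = TRUE
)

priors <- c(
  prior(constant(1), class = "sd", group = "Year", nlpar = "z"),
  prior(normal(0, 1), nlpar = "logsigma")
)

fit2 <- brm(bform, data = dat, family = poisson(), prior = priors)
```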

My questions are:

  1. Does this approach seem suitable for addressing research question 2? Or is there a preferable way to model change in variability over time, having accounted for a couple of fixed effects?

  2. If this approach does seem worthwhile, would it be possible to implement the “varying sigma as a function of year” effect in brms, or would it need to be done in Stan?

Any thoughts are welcome – thanks in advance!