Improving state-space fishery population model

I’m fitting about as simple a model as there is in population dynamics: a logistic population model with observation and process error, where we have observed catches and an index of abundance from a fishery and want to estimate population parameters such that, given the catch history, the model fits the abundance index. But I’m having some trouble making it converge properly across a broad range of fisheries. Specifically, while I can get a good fit (and convergence) for one fishery with some fiddling with the hyperparameters, when I try to use the same model to fit data from many different fisheries it runs into all kinds of divergence and max_treedepth problems. The answer may simply be that I need to develop some routines for adjusting the hyperparameters for specific fisheries, or just be more patient.

I’m curious, though, whether the community has any recommendations on ways to improve the overall structure of the model. In particular, I’ve never actually come across a dynamic population model on any of the Stan forums, so I’m wondering whether there are tricks/nuances to using Stan in these circumstances (i.e. fitting a population model that evolves over time) that I should be using. I’ve included a link to the data and the stanfit object below. The observation error (sigma_obs) appears to be a particular problem, and I’m wondering if there’s a better transformation for it.

https://drive.google.com/open?id=1A4a185kDUMHnu3tYeqkAF_4eCIyA2-l4

data {
  int<lower = 1> nt; // number of time steps to run
  vector[nt] catch_t; // observed catch
  vector[nt] log_index; // observed log population index
}

parameters {
  real log_r; // log intrinsic growth rate
  real<lower = 1> k_mult; // multiplier on mean catch to define carrying capacity
  real<upper = 0> log_q; // log catchability coefficient
  real<lower = 0> sigma_obs; // observation error
  real<lower = 0> sigma_r; // process error
  vector[nt] log_rec_dev; // recruitment deviates
}

transformed parameters {
  real k; // carrying capacity
  real r; // intrinsic growth rate
  real q; // catchability coefficient
  real temp; // candidate biomass before the non-negativity check
  real penalty; // accumulated penalty for crashing the population
  vector[nt] b_t; // biomass
  vector[nt] index_t; // predicted population index
  vector[nt] rec_dev; // multiplicative, bias-corrected recruitment deviates

  rec_dev = exp(sigma_r * log_rec_dev - sigma_r^2 / 2);
  q = exp(log_q);
  r = exp(log_r);
  k = k_mult * (mean(catch_t) * 4 / r);
  b_t = rep_vector(1e-3, nt);
  index_t = rep_vector(1e-3, nt);
  b_t[1] = k * rec_dev[1];
  penalty = 1e-3;

  for (t in 2:nt) {
    temp = ((b_t[t - 1] + b_t[t - 1] * r * (1 - b_t[t - 1] / k)) - catch_t[t - 1]) * rec_dev[t];
    if (temp <= 0) { // if catches produce a negative population
      penalty += temp^2;
      b_t[t] = 1e-3;
    } else {
      b_t[t] = temp;
    }
  } // close time loop

  index_t = b_t * q; // calculate estimated population index
}

model {
  log_index ~ normal(log(index_t), sigma_obs);
  log_r ~ normal(log(0.2), 2);
  k_mult ~ normal(2, 5);
  log_q ~ normal(-6, 0.5);
  sigma_obs ~ cauchy(0, 2.5);
  sigma_r ~ normal(0.2, 0.001); // constraining tightly for now
  log_rec_dev ~ normal(0, 1);
  target += -penalty; // penalty for crashing the population
}


Hi Dan,
Did you ever get a solution to this old issue? I’m having similar issues with the sigma params as you’ve described.
I’ve been using gamma priors on the precision, as recommended in the literature; they are marginally less problematic than the Cauchy you have, but I’m still running into convergence issues.
Thanks,
Chris

If anyone ever follows up on this, here are my solutions.
First, read Dan Ovando’s blog post on the topic, which has lots of tricks for getting efficient convergence on these types of models: https://www.weirdfishes.blog/blog/fitting-bayesian-models-with-stan-and-r/
The solutions I came to were:

  1. Set initial values that make sense for the model (Stan defaults to uniform random values drawn from -2 to 2 on the unconstrained scale).

  2. Use a non-centered parameterization for the process error, so the deviates are sampled on a standard-normal scale and then scaled by the SD (https://mc-stan.org/docs/2_18/stan-users-guide/reparameterization-section.html).

  3. Use exponential priors on the SDs of the observation and process errors. Choosing tight enough priors (for identifiability) really helps, especially for the process error. A sketch of 2 and 3 follows after this list.
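For 2 and 3, here’s a minimal sketch of what I mean, written against the blocks of the model at the top of the thread (the name log_rec_dev_raw and the exponential rates are illustrative placeholders, not tuned values):

  parameters {
    // ... other parameters as in the model above ...
    real<lower = 0> sigma_obs; // observation error SD
    real<lower = 0> sigma_r; // process error SD
    vector[nt] log_rec_dev_raw; // standard-normal "raw" recruitment deviates
  }

  transformed parameters {
    // non-centered parameterization: the deviates are standard-normal draws scaled
    // by sigma_r, which avoids the funnel between sigma_r and the deviates
    vector[nt] rec_dev = exp(sigma_r * log_rec_dev_raw - sigma_r^2 / 2);
  }

  model {
    log_rec_dev_raw ~ normal(0, 1); // implies the log deviates are normal(0, sigma_r)
    // exponential priors on the SDs themselves, tight enough to stay identifiable
    sigma_obs ~ exponential(10); // prior mean 0.1 on the log-index scale
    sigma_r ~ exponential(10);
  }
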
I still don’t understand how the case studies in the older literature ever achieved convergence with the super-broad priors they were using… (‘uninformative’ gamma priors on precision).

Glad to hear the blog post helps! I should update that thing someday. @vianeylb has been leading the charge on setting up a Stan ecology community that hopefully can host some examples along these lines. Creating a logistic population model example for that is on my to-do list for someday…

Along with the suggestions you put up, a few things that help are:

  1. Giving the model a gradient to push against when the population starts to crash, rather than just saying biomass must be greater than or equal to 0 (see the first sketch after this list).

  2. Estimate a vector of fishing mortality rates and use those to fit to the catch data, rather than treating the catch data as constants in the model. This ensures that biomass never goes below 0 (assuming F is between 0 and 1). It requires estimating a lot of fishing mortality rates, which can slow things down, and it requires specifying a prior on the observation error for the catches as well as a prior on the fishing mortality rates. If you’re including process error and observation error on the index of abundance, you’ll probably need to just fix the observation error of the catches at some low level. If you’re estimating process error, then you need to give some thought to the ratio of the process error to the variance of the fishing mortality rates, since letting both be totally free will cause problems. You can, for example, allow process error to be pretty large but constrain fishing mortality rates to a random-walk-style process with minimal variation year to year (see the second sketch after this list).

  3. Consider prior predictive tuning. This has helped us a lot for batch running of surplus production models. Suppose you set some generic and wide lognormal prior on the carrying capacity k. Since a bunch of those values will be close to zero, it’s likely that a large part of the prior parameter space will result in the population crashing, making life difficult for the model. What you can do is generate a bunch of draws from the prior conditional on the model (think of it as the post-model-pre-data distribution), where the model is the combination of the functional form of the population model (e.g. logistic growth) and the catch history that is treated as a constant (i.e. it’s not in the likelihood). From there, you can adjust your priors toward values that don’t result in the population crashing, which is the one thing we know didn’t happen (assuming the population still exists and never stopped existing). For example, you can use this process to find priors on the growth rate r and on k that, generally speaking, don’t crash the population. This won’t get rid of crashes entirely, since there may still be combinations of r, k, and process error values that produce population levels smaller than the catches, but it will help the model spend more time in feasible parts of the parameter space. To take the example from the Stan documentation linked above, just as it wouldn’t make sense to have a prior on soccer scores that has the model spend a bunch of time around values like 1e6, we want to avoid priors in the population model that spend a bunch of time with biomass < 0 (the third sketch after this list shows one way to set this up).
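For 1, here’s a rough sketch of what I mean, using the time loop from the model at the top of the thread (the 5% depletion threshold and the weight of 1000 are arbitrary knobs to tune, not recommendations):

  for (t in 2:nt) {
    temp = ((b_t[t - 1] + b_t[t - 1] * r * (1 - b_t[t - 1] / k)) - catch_t[t - 1]) * rec_dev[t];
    // smooth quadratic penalty that turns on once depletion drops below 5% of k,
    // so the sampler feels a push back before biomass actually goes negative
    penalty += 1000 * square(fmin(0, temp / k - 0.05));
    b_t[t] = fmax(temp, 1e-3 * k); // soft floor keeps log(index_t) defined
  }

The model block would keep target += -penalty; exactly as before.
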
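For 2, here’s a sketch of the fishing mortality version (f_t, catch_hat, the fixed 0.05 SD on the catch likelihood, and the 0.1 random-walk SD are all illustrative names/values, not the “right” ones):

  parameters {
    // ... other parameters as in the model above ...
    vector<lower = 0, upper = 1>[nt] f_t; // exploitation rate in each year
  }

  transformed parameters {
    vector[nt] b_t; // biomass
    vector[nt] catch_hat; // model-predicted catch
    b_t[1] = k * rec_dev[1];
    catch_hat[1] = f_t[1] * b_t[1];
    for (t in 2:nt) {
      // removals are a fraction of biomass, so catches can never exceed the stock
      b_t[t] = (b_t[t - 1] + b_t[t - 1] * r * (1 - b_t[t - 1] / k) - catch_hat[t - 1]) * rec_dev[t];
      catch_hat[t] = f_t[t] * b_t[t];
    }
  }

  model {
    // catches now enter the likelihood instead of being constants;
    // the observation error on the catches is fixed at a small value
    log(catch_t) ~ normal(log(catch_hat), 0.05);
    // random-walk prior keeps year-to-year changes in F modest relative to process error
    f_t[2:nt] ~ normal(f_t[1:(nt - 1)], 0.1);
    // ... index likelihood and other priors as before ...
  }
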
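For 3, here’s a standalone sketch of the prior-predictive simulation: it draws r and k from candidate priors (the lognormals below are placeholders), pushes the fixed catch history through the logistic model, and flags whether the trajectory crashed. Running it with the fixed-parameter sampler (e.g. algorithm = "Fixed_param" in rstan) and looking at the share of draws with crashed == 1 gives you something concrete to check as you tighten the priors:

  data {
    int<lower = 1> nt; // number of time steps
    vector[nt] catch_t; // observed catch, treated as a constant
  }

  generated quantities {
    real r_sim = lognormal_rng(log(0.2), 1); // candidate prior on r
    real k_sim = lognormal_rng(log(10 * mean(catch_t)), 1); // candidate prior on k
    vector[nt] b_sim; // simulated biomass trajectory
    int crashed = 0; // did this draw drive the population below zero?
    b_sim[1] = k_sim;
    for (t in 2:nt) {
      b_sim[t] = b_sim[t - 1] + b_sim[t - 1] * r_sim * (1 - b_sim[t - 1] / k_sim)
                 - catch_t[t - 1];
      if (b_sim[t] <= 0) {
        crashed = 1;
        b_sim[t] = 1e-6; // keep the trajectory defined so later steps still run
      }
    }
  }
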

But yeah, I agree we need more worked examples of ecological problems where broad default priors and model setups don’t work too well!