Help with normalizing constant

Hello - I'm trying to fit a fish passage model. After some exposure to BUGS years ago, this is my first jump into Stan. I'm having some issues and hoping the community can provide some help.

I am fitting a fish passage model based on dx.doi.org/10.1139/cjfas-2014-0554 (but just for a single year at this point). I am fitting a negative binomial model and it runs (though not all parameters look very good), but the issue is that I can't figure out how to include the normalizing constant (psi in the R code below; phi_sum in the Stan code). In my Stan code I have a '25' in place of where that parameter should go, but when I substitute phi_sum for the 25 I get the error:
"Chain 1: Rejecting initial value:
Chain 1: Error evaluating the log probability at the initial value.
Chain 1: Exception: neg_binomial_2_log_lpmf: Log location parameter[1] is nan, but must be finite! (in 'modela7fc42ebcdfc_naive_single3' at line 41)"

  • Adding inits doesn’t seem to solve the issue.

Any help would be appreciated. Thank you.
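
For reference, the expected count on day $i$ is

$$
\hat{c}_i = \frac{r \, \exp\!\left(-\frac{(x_i - p)^2}{2\sigma^2}\right)}{\psi_{\text{sum}}}, \qquad
\psi_{\text{sum}} = \sum_{j=1}^{N} \exp\!\left(-\frac{(x_j - p)^2}{2\sigma^2}\right),
$$

where $r$ is the total run size, $p$ the peak escapement day, and $\sigma$ the spread of the arrival curve; $\psi_{\text{sum}}$ is the normalizing constant (psi in the R code, phi_sum in the Stan code).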

First, I simulate some data:

#######  SIMULATE DATA IN R
library(ggplot2)

r <- 10000     # total run size
p <- 250       # peak escapement day (DOY)
sigma <- 10    # spread of the arrival curve
d <- 200:300   # days of year

set.seed(111)
psi <- sum(exp(-((d - p)^2) / (2 * sigma^2)))       # normalizing constant
run <- r * exp(-((d - p)^2) / (2 * sigma^2)) / psi  # expected daily counts
run_error <- rlnorm(length(run), log(run), 0.2)     # add lognormal noise
single_sim <- data.frame(julian = d, Count = round(run_error, 0))

ggplot(data = single_sim, aes(x = julian, y = Count)) +
  geom_segment(aes(x = julian, xend = julian, y = 0, yend = Count), alpha = 0.75) +
  theme_bw()

#########################################################
# Set up sampling conditions
##################################
##### Set STAN conditions
warmups <- 1000
total_iterations <- 2000
max_treedepth <-  10
n_chains <-  1
n_cores <- 4
adapt_delta <- 0.95

### Format data for stan model
sim_dat <- with(single_sim, list(y = Count, x = julian, N = nrow(single_sim)))

### Run stan model
library(rstan)
library(posterior)

# Inits are only needed for the declared parameters; quantities like r, phi,
# c_hat, mu, and y_rep are derived from them.
fit <- stan(file = 'naive_single3.stan', data = sim_dat,
            chains = n_chains,
            warmup = warmups,
            iter = total_iterations,
            cores = n_cores,
            refresh = 250,
            control = list(max_treedepth = max_treedepth,
                           adapt_delta = adapt_delta),
            init = list(list(log_r = 5, p = 230, sigma_arrival = 5,
                             reciprocal_phi = 0.1)))

########## STAN MODEL CODE ###########

data {
  int N;      // Number of observations (101)
  int y[N];   // Vector of observations
  vector[N] x;// Vector of DOY
}
 
// The parameters we are going to estimate in our model
parameters {
  real<lower=0> p;  
  real<lower=0.01> log_r;
  real<lower=0> sigma_arrival;
  real<lower=0> reciprocal_phi;
}
 
transformed parameters {
  real r = exp(log_r);
  real phi = 1. / reciprocal_phi;
  // vector[N] phi2;
  // real<lower=0> phi_sum;
  vector[N] c_hat;

  for (i in 1:N) {
    // '25' is a placeholder for the normalizing constant phi_sum
    c_hat[i] = r * exp(-(pow((x[i] - p), 2) / (2 * pow(sigma_arrival, 2)))) / 25;
  }

  // for (i in 1:N) {
  //   phi2[i] = exp(-(pow((x[i] - p), 2) / (2 * pow(sigma_arrival, 2))));
  // }

  // phi_sum = sum(phi2);
}
 
model {
  // Priors
  reciprocal_phi ~ cauchy(0., 3);
  log_r ~ normal(log(10000), 5);
  y ~ neg_binomial_2_log(c_hat, phi);
}

generated quantities {
  vector[N] mu;
  // vector[N] log_lik;
  vector[N] y_rep;
  mu = exp(c_hat);
  for (i in 1:N) {
    // log_lik[i] = neg_binomial_2_log_lpmf(y[i] | c_hat[i], phi);
    y_rep[i] = neg_binomial_2_log_rng(c_hat[i], phi);  // c_hat is a log location
  }
}

I believe this is due to the order of your operations. A key thing to remember with Stan is that statements within a block are executed in order, from top to bottom.

In your transformed parameters block, you're attempting to use phi_sum in place of the 25 before you've defined what phi_sum is:

for (i in 1:N) {
  c_hat[i] = r * exp(-(pow((x[i] - p), 2) / (2 * pow(sigma_arrival, 2)))) / phi_sum;
}

for (i in 1:N) {
  phi2[i] = exp(-(pow((x[i] - p), 2) / (2 * pow(sigma_arrival, 2))));
}

phi_sum = sum(phi2);

At the point where phi_sum is used as the normalising constant, it still holds its uninitialised value of NaN, so the result of the division is also NaN, which is what causes the error you're seeing.

Try moving the construction of phi2 and phi_sum above the construction of c_hat and see if that resolves the error; a sketch follows below.
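
For example, here's a minimal sketch of the reordered transformed parameters block, keeping your variable names and leaving the rest of the model unchanged:

transformed parameters {
  real r = exp(log_r);
  real phi = 1. / reciprocal_phi;
  vector[N] phi2;
  real<lower=0> phi_sum;
  vector[N] c_hat;

  // Build the unnormalised arrival curve first...
  for (i in 1:N) {
    phi2[i] = exp(-(pow((x[i] - p), 2) / (2 * pow(sigma_arrival, 2))));
  }

  // ...then its sum...
  phi_sum = sum(phi2);

  // ...so phi_sum is already defined when c_hat divides by it.
  for (i in 1:N) {
    c_hat[i] = r * phi2[i] / phi_sum;
  }
}

Note that c_hat can reuse phi2[i] rather than recomputing the exponential.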


That was indeed the issue - thank you so much.

My 'order doesn't matter' thinking came from my past experience with BUGS, where models are declarative and ordering really doesn't matter.

I spent quite a few hours trying to bound parameters to solve the issue, when just moving a line up a few spots solved the problem. Thank you very much; you saved my weekend.

I really appreciate the help.


No worries! A couple more tips for your model. Firstly, you can take advantage of Stan’s vectorised operations to make things a bit simpler.

So instead of:

for (i in 1:N) {
  phi2[i] = exp(-(pow((x[i]-p),2)/(2*pow(sigma_arrival,2))));
}

You could use:

phi2 = exp(-square(x - p) / (2 * square(sigma_arrival)));

This pays off even more for c_hat, where you can just write:

c_hat = r * phi2 / sum(phi2);

Additionally, when using the neg_binomial_2_log distribution (or any of the other _log count distributions), you shouldn't exponentiate the location argument beforehand, as the exponentiation is handled internally in a more numerically stable way.

So instead of:

  vector[N] phi2 = exp(-square(x-p) / (2*square(sigma_arrival)));
  vector[N] c_hat = r * phi2 / sum(phi2);

You would use:

  vector[N] log_phi2 = -square(x-p) / (2*square(sigma_arrival));
  vector[N] c_hat = log_r + log_phi2 - log_sum_exp(log_phi2);

Note also the use of the log_r parameter directly (should that also follow a lognormal distribution?).
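
Putting those pieces together, a minimal sketch of the revised transformed parameters and model blocks (keeping your parameter names) would be:

transformed parameters {
  real phi = 1. / reciprocal_phi;
  // Log of the unnormalised arrival curve
  vector[N] log_phi2 = -square(x - p) / (2 * square(sigma_arrival));
  // Log of the normalised expected count: log(r * phi2 / sum(phi2))
  vector[N] c_hat = log_r + log_phi2 - log_sum_exp(log_phi2);
}

model {
  // Priors
  reciprocal_phi ~ cauchy(0., 3);
  log_r ~ normal(log(10000), 5);
  y ~ neg_binomial_2_log(c_hat, phi);  // c_hat is already on the log scale
}

In generated quantities you'd then draw y_rep with neg_binomial_2_log_rng(c_hat[i], phi), since c_hat is now a log location.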

Hope that helps!


Hi Andrew,

That helped a great deal, thank you. I knew about the vectorization; I was leaving it out while I got used to the indexing, since I'm going to build this into a hierarchical model (as in the paper) and just wanted to keep things straight. The biggest mistake you saved me from was the log issue. I can't believe I missed that; I was spending a lot of time wondering whether it was the priors or something else that kept my estimates from recovering the simulated values.

Now the model works great and recovers the simulated values (r = 10000, p = 250, sigma = 10):

  variable       mean    median  sd
1 p              250.    250.    0.187
2 log_r            9.23    9.23  0.0291
3 sigma_arrival   10.1    10.1   0.111

Thanks again. Your help was invaluable and much appreciated! And thank you for explaining the reasoning behind your suggestions; I learned a lot.

No worries, happy to help!