JAGS to Stan conversion log(0) probability error

Hello, I am trying to convert an existing JAGS model into a Stan model. I believe the two models are identical, but when I run the Stan model I get the following error:
“Log probability evaluates to log(0), i.e. negative infinity.
Chain 1: Stan can’t start sampling from this initial value.”
and after 100 attempts it fails.

The Stan code is below:

// Poisson model example
 data {
   // Define variables in data
   // Number of observations (an integer)
   int<lower=0> T;
   int<lower=0> X;
   int<lower=0> C;
   // Number of beta parameters
   int<lower=0> p;
   int<lower=0> pop[X,T,C];
  
   // Covariates
   real PC[X,3];

   // Count outcome
   int<lower=0> y[X,T,C];
 }

 parameters {
   // Define parameters to estimate
   real beta[T,C,p];
   real mu_beta[T,p];
   real<lower=0> tau_beta[T,p];
   real<lower=0> tau_mu[p];
   real<lower=0> tau_u[X];
   real epsl_x[X,T,C];
   
      // offset
 }

 transformed parameters {
   real lp[X,T,C];
   // real mu[N];
   real<lower=0> mx[X,T,C];

   for (t in 1:T) {
     for (x in 1:X) {
       for (a in 1:C) {
         // Linear predictor
         lp[x,t,a] = beta[t,a,1]*PC[x,1] + beta[t,a,2]*PC[x,2] + beta[t,a,3]*PC[x,3] + epsl_x[x,t,a];

         mx[x,t,a] = exp(lp[x,t,a]);
       }
     }
   }
 }

 model {
   // Prior part of Bayesian inference
   // Flat prior for mu (no need to specify if non-informative)

   // Likelihood part of Bayesian inference
   for (t in 1:T) {
     for (x in 1:X) {
       for (a in 1:C) {
         y[x,t,a] ~ poisson(pop[x,t,a] * mx[x,t,a]);
       }
     }
   }

   for (t in 1:T) {
     for (a in 1:C) {
       for (j in 1:p) {
         beta[t,a,j] ~ normal(mu_beta[t,p], tau_beta[t,p]^2);
       }
     }
   }

   // hyper priors
   for (j in 1:p) {
     for (t in 1:2) {
       mu_beta[t,j] ~ normal(0, tau_mu[j]^2);
     }
     for (t in 3:T) {
       mu_beta[t,j] ~ normal(2*mu_beta[t-1,j] - mu_beta[t-2,j], tau_mu[j]^2); // AR(2) function for PC coefficients
     }
   }

   // prior
   for (a in 1:C) {
     for (x in 1:X) {
       for (t in 1:T) {
         epsl_x[x,t,a] ~ normal(0, tau_u[x]^2); // for constant in logmx (small variance)
       }
     }
   }

   for (x in 1:X) {
     tau_u[x] ~ uniform(0, 0.1); // epsilon prior is very informative (don't want it to vary much)
   }

   for (j in 1:p) {
     tau_mu[j] ~ uniform(0, 40);
     for (t in 1:T) {
       tau_beta[t,j] ~ uniform(0, 40);
     }
   }
 }

And the working JAGS model I copied is:

## pooled LC-type model with 1-3 PCs
## smoothing parameters through time with non-informative priors
model{
  for (x in 1:X){ #age
    for (t in 1:T){ #time
      for (s in 1:S){ #state
        for (a in 1:n.a[s]){ #counties within state
          y.xtas[x,t,a,s] ~ dpois(mu.xtas[x,t,a,s]) #deaths are poisson distributed
          yrep.xtas[x,t,a,s] ~ dpois(mu.xtas[x,t,a,s]) #draws from posterior predictive distribution (to construct PIs)
          mu.xtas[x,t,a, s] <- pop.xtas[x,t,a, s]*mx.xtas[x,t,a, s] # pop x mortality rate
          mx.xtas[x,t,a,s] <- exp(logmx.xtas[x,t,a,s])
        }
        for (a in n.a[s]+1:n.amax){ #this is required when states have different number of counties (nothing estimated, just getting rid of errors)
          mu.xtas[x,t,a, s] <- 0
        }
      }
    }
  }
  
  for (x in 1:X){ #age
    for (t in 1:T){
      for (s in 1:S){
        for (a in 1:n.a[s]){
          logmx.xtas[x,t,a,s]<- beta.tas[t,a,s,1]*Yx[x,1] + beta.tas[t,a,s,2]*Yx[x,2] + beta.tas[t,a,s,3]*Yx[x,3] +u.xtas[x,t,a,s]
        }
      }
    }
  }
  
  #priors
  for (s in 1:S){
    for (a in 1:n.a[s]){
      for(p in 1:P){
        for(t in 1:T){
          beta.tas[t,a,s,p] ~ dnorm(mu.beta[s,t,p], tau.beta[s,t,p])
        }
      }
    }
    for(t in 1:T){
      for (a in n.a[s]+1:n.amax){
        for(p in 1:P){
          beta.tas[t,a,s,p] <- 0
        }
      }
      for(p in 1:P){
        tau.beta[s,t,p] <- pow(sigma.beta[s,t,p], -2)
        sigma.beta[s,t,p] ~ dunif(0,40)
      }
    }
  }
  
  for(s in 1:S){
    for (a in 1:n.a[s]){
      for(x in 1:X){
        for(t in 1:T){
          u.xtas[x,t,a,s] ~ dnorm(0, tau.u[x,s])
        }
      }
    }
    
    for(t in 1:2){
      
      for(p in 1:P){
        mu.beta[s,t,p] ~ dnorm(0, tau.mu[s,p])
      }
      
      
    }
    for(t in 3:(T)){
      
      for(p in 1:P){
        mu.beta[s,t,p] ~ dnorm(2*mu.beta[s,t-1,p] - mu.beta[s,t-2,p], tau.mu[s,p])
      }
      
    }
    
    for(p in 1:P){
      tau.mu[s,p] <- pow(sigma.mu[s,p], -2)
      sigma.mu[s,p] ~ dunif(0,40)
    }
    
      for(x in 1:X){
        tau.u[x,s] <- pow(sigma.u[x,s], -2)
        sigma.u[x,s] ~ dunif(0,0.1)
      }
  }
  
  
}

Can anyone identify the issue? Thank you!

real<lower=0> tau_u[X];
tau_u[x] ~ uniform(0,0.1);

In Stan, a parameter's declared constraints need to match the support of the distribution it is given. This is one thing that can cause the error you're getting.

If tau_u is randomly initialized outside of (0, 0.1), which the <lower=0> declaration allows (the sampling statement itself does not constrain where the parameter can be initialized), the probability density out there is zero and you'll get a log(0) error.
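
For example, one fix (a sketch, assuming you want to keep the uniform priors) is to declare upper bounds that match the priors' support; only the declarations below change, and the rest of the parameters block stays as it is:

 parameters {
   // ... other declarations unchanged ...
   real<lower=0, upper=0.1> tau_u[X];      // matches tau_u ~ uniform(0, 0.1)
   real<lower=0, upper=40> tau_mu[p];      // matches tau_mu ~ uniform(0, 40)
   real<lower=0, upper=40> tau_beta[T,p];  // matches tau_beta ~ uniform(0, 40)
 }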

There might be other things, but start there.

If that doesn't work, the fastest way to find the error is to comment out sections of the model block and see whether the behavior changes (and then refine the search from there).
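
To see the mechanism in isolation, here is a tiny toy program (an illustration, not your model) that reproduces the same failure: with only <lower=0>, Stan initializes the unconstrained value uniformly on (-2, 2) and maps it through exp(), so the parameter starts somewhere in roughly (0.135, 7.4), which is always outside (0, 0.1), and the uniform(0, 0.1) density there is zero.

 parameters {
   real<lower=0> tau;               // bound does not match the prior's support
   // real<lower=0, upper=0.1> tau; // declaring this instead lets sampling start
 }
 model {
   tau ~ uniform(0, 0.1);           // zero density whenever tau > 0.1
 }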
