Target() == -inf at the beginning of model block

I’m trying to debug my model, for which Stan sometimes says the initial parameters have a log likelihood of -inf. This shouldn’t be the case, since I can verify the log likelihood with my own implementation (I’m almost certain both are correct). When I put print(target()) on the first line of my model block, it prints -inf. This confuses me because I assumed target was always 0 at the beginning of the block, so I have no idea where the problem comes from. Is the Jacobian for the unconstrained parameters added to target before the block runs? If so, what does it mean for it to be -inf?
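To make my question concrete, here is my current (possibly wrong) understanding of what target() holds at the top of the model block, as a small Python sketch. For a lower-bounded parameter, Stan maps the unconstrained value y to kappa = kappa_min + exp(y), so the log-Jacobian of the transform is just y, which would always be finite:

```python
# Sketch of my understanding (may be wrong): for a lower-bounded parameter,
# Stan uses kappa = kappa_min + exp(y), so the log-Jacobian added to target
# before the model block runs is log|d(kappa)/dy| = log(exp(y)) = y.
def lower_bound_log_jacobian(y):
    return y

# At the top of the model block, target() should then be the sum of these
# terms over all constrained parameters -- finite for any finite y.
unconstrained = [0.2, -1.5, 3.0]  # made-up values, just for illustration
print(sum(lower_bound_log_jacobian(y) for y in unconstrained))  # ≈ 1.7
```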

Important things to note:

  • I let Stan adapt the step size and the metric once, then fix those values for subsequent runs (I call Stan multiple times);
  • the issue only seems to occur when adaptation is disabled (could that be the reason?).

Here is an example for which this happens:

n:30
beta_:2.5
theta:[0.513179,-2.96086,1.55316,0.15685,1.22294,0.0186857,-0.302652,1.39564,-0.460454,0.953593,-0.755759,-2.83643,-1.77762,0.979781,1.3331,1.88611,-1.54583,-1.75944,1.80086,-1.96354,1.21745,1.00603,0.952342,-1.22556,-2.93655,-0.201251,1.43891,-1.09986,2.05508,-0.68469]
edge:[0,1,0,0,0,0,0,1,0,0,1,0,0,1,1,0,0,0,0,0,1,0,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,1,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,0,0,0,0,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,0,1,0,0,0,0,1,1,0,1,0,0,0,0,1,1,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,1,0,1,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,1,1,0,0,0,0,1,1,0,1,0,0,1,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,1,1,0,1,0,0,0,1,0,0,0,0,1,0,1,1,0,0,0,1,0,1,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,1,1,1,0,0,0,1,0,1,1,0,1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,1,0,0,0,1,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0]
kappa_min:1.03527
gamma_:2.5
radius_div_mu:77.3876

and here is the model:

functions {
  real angular_distance(real theta1, real theta2) {
    return pi()-abs(pi()-abs(theta1-theta2));
  }
}

data {
  int<lower=3> N;
  real<lower=0> beta_;

  array[N] real<lower=-pi(), upper=pi()> theta;
  array[(N*(N-1))%/%2] int<lower=0, upper=1> edge;

  real<lower=0> kappa_min;
  real<lower=0> gamma_;
  real<lower=0> radius_div_mu;
}

transformed data {
  array[(N*(N-1))%/%2] real<lower=0, upper=pi()> distances;

  // flatten the upper triangle of the pairwise angular distances
  int r = 1;
  for (i in 1:N-1) {
    for (j in i+1:N) {
      distances[r] = angular_distance(theta[i], theta[j]);
      r += 1;
    }
  }
}

parameters {
  array[N] real<lower=kappa_min> kappa;
}

model {
  print("at beginning of model: ", target());

  // power-law term on each kappa
  for (i in 1:N) {
    target += -gamma_ * log(kappa[i]);
  }

  // connection log-likelihood for each pair (same flattened index as distances)
  int k = 1;
  for (i in 1:N-1) {
    for (j in i+1:N) {
      if (edge[k] == 1) {
        if (distances[k] > 0) {
          target += -log1p_exp(beta_*(log(distances[k]) + log(radius_div_mu/kappa[i]/kappa[j])));
        }
      }
      else {
        target += -log1p_exp(-beta_*(log(distances[k]) + log(radius_div_mu/kappa[i]/kappa[j])));
      }
      k += 1;
    }
  }
}
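For cross-checking, the per-pair term can be reproduced outside Stan. Here is a rough Python sketch of the same loop body (the helper names are mine, not part of the model); it also shows one place where the model-block statements themselves could produce -inf, namely a zero distance on a non-edge:

```python
import math

def log1p_exp(x):
    """Numerically stable log(1 + exp(x)), matching Stan's log1p_exp."""
    return x + math.log1p(math.exp(-x)) if x > 0 else math.log1p(math.exp(x))

def pair_term(edge, dist, kappa_i, kappa_j, beta_, radius_div_mu):
    """Contribution of one (i, j) pair, mirroring the model-block loop."""
    if dist <= 0:
        # log(dist) would be -inf: the edge == 1 branch skips the pair,
        # but the edge == 0 branch contributes -log1p_exp(+inf) = -inf.
        return 0.0 if edge == 1 else float("-inf")
    x = beta_ * (math.log(dist) + math.log(radius_div_mu / kappa_i / kappa_j))
    return -log1p_exp(x) if edge == 1 else -log1p_exp(-x)
```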

Thanks for the help!

I haven’t gotten around to trying print(target()) at the beginning of one of my own models yet, but a quick question: is there a reason you’re negating all of the log values before adding them to target? Typically target takes log probabilities, not negative log probabilities, but it could be that I just don’t know how your model works.

It’s specific to my model: the likelihood is a product over terms of the form \frac{1}{1+x^b}, so the log probabilities are often negative. The same is true for a normal distribution, whose log density is -\frac{1}{2}\log(2\pi \sigma^2) - \frac{(x-\mu)^2}{2\sigma^2}.
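For example, a quick Python check (not part of the model) shows the normal log density is negative at the mode of a standard normal:

```python
import math

# Log-density of N(mu, sigma) at x; negative values are fine for target.
def normal_lpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

print(normal_lpdf(0.0, 0.0, 1.0))  # ≈ -0.9189, i.e. log(1/sqrt(2*pi))
```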