target() == -inf at the beginning of the model block

I’m trying to debug my model: Stan sometimes reports that the initial parameters have a log likelihood of -inf. However, this can't be right, since I can verify the value with my own log-likelihood function (I’m almost certain both implementations are correct). When I put print(target()) on the first line of my model block, I see that target() is already -inf there. This confuses me because I assumed the target was always 0 at the beginning of the block, so I have no idea what the problem is. Is the Jacobian for the unconstrained parameters added before the block runs? If so, what does it mean for it to be -inf?
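To make the question concrete, here is my understanding of what happens for a lower-bounded parameter (this is a standalone Python sketch of the constraining transform described in the Stan reference manual, not code from my model, so treat it as an assumption about the internals):

```python
import math

# For a parameter declared real<lower=kappa_min>, Stan samples an
# unconstrained value y and sets kappa = kappa_min + exp(y).
# The log-Jacobian of that transform is log|d kappa / d y| = y,
# and it is added to the target before the model block runs.

def unconstrain(kappa, kappa_min):
    """Map a constrained value back to Stan's unconstrained scale."""
    return math.log(kappa - kappa_min) if kappa > kappa_min else float("-inf")

def log_jacobian(y):
    """log |d kappa / d y| for kappa = kappa_min + exp(y): just y itself."""
    return y

kappa_min = 1.0
print(log_jacobian(unconstrain(2.5, kappa_min)))        # finite: log(1.5)
print(log_jacobian(unconstrain(kappa_min, kappa_min)))  # -inf: init exactly on the bound
```

If that picture is right, target() at the top of the block is the sum of these Jacobian terms, so it would be -inf only if some unconstrained initial value is itself -inf, e.g. an initial kappa placed exactly on the bound.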

Important things to note:

  • I’m letting Stan adapt the step size and the metric once, then I fix those values (I call Stan multiple times);
  • the issue only seems to occur when there is no adaptation (could that be why?).

Here is an example for which this happens:


and here is the model:

functions {
  real angular_distance(real theta1, real theta2) {
    return pi() - abs(pi() - abs(theta1 - theta2));
  }
}
data {
  int<lower=3> N;
  real<lower=0> beta_;

  array[N] real<lower=-pi(), upper=pi()> theta;
  array[(N*(N-1))%/%2] int<lower=0, upper=1> edge;

  real<lower=0> kappa_min;
  real<lower=0> gamma_;
  real<lower=0> radius_div_mu;
}
transformed data {
  array[(N*(N-1))%/%2] real<lower=0, upper=pi()> distances;

  int r = 1;
  for (i in 1:N-1) {
    for (j in i+1:N) {
      distances[r] = angular_distance(theta[i], theta[j]);
      r += 1;
    }
  }
}
parameters {
  array[N] real<lower=kappa_min> kappa;
}
model {
  print("at beginning of model: ", target());
  for (i in 1:N) {
    target += -gamma_ * log(kappa[i]);
  }

  int k = 1;
  for (i in 1:N-1) {
    for (j in i+1:N) {
      if (edge[k] == 1) {
        if (distances[k] > 0) {
          target += -log1p_exp(beta_ * (log(distances[k]) + log(radius_div_mu / kappa[i] / kappa[j])));
        }
      } else {
        target += -log1p_exp(-beta_ * (log(distances[k]) + log(radius_div_mu / kappa[i] / kappa[j])));
      }
      k += 1;
    }
  }
}
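For context, the two target += terms are log p and log(1 - p) for a connection probability p = 1/(1 + x^beta), where x = distance * radius_div_mu / (kappa_i * kappa_j). Here is a standalone Python sketch of the same expressions (with made-up numbers) showing that the two cases are complementary:

```python
import math

def log1p_exp(u):
    # Numerically stable log(1 + exp(u)), matching Stan's log1p_exp.
    return max(u, 0.0) + math.log1p(math.exp(-abs(u)))

def edge_logliks(d, kappa_i, kappa_j, beta, radius_div_mu):
    # x = d * radius_div_mu / (kappa_i * kappa_j); connection prob p = 1 / (1 + x^beta).
    logx = math.log(d) + math.log(radius_div_mu / kappa_i / kappa_j)
    ll_edge = -log1p_exp(beta * logx)      # log p       (edge present)
    ll_no_edge = -log1p_exp(-beta * logx)  # log (1 - p) (edge absent)
    return ll_edge, ll_no_edge

ll1, ll0 = edge_logliks(0.5, 2.0, 3.0, 4.0, 1.5)
print(math.exp(ll1) + math.exp(ll0))  # ≈ 1.0: the probabilities sum to one
```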

Thanks for the help!

I haven’t gotten around to trying that print statement at the beginning of one of my own models yet, but a quick question: is there a reason you’re negating all of the log terms before adding them to target? Typically target takes log probabilities, not negative log probabilities, but it could be that I just don’t know how your model works.

It’s specific to my model: the likelihood is a product over terms of the form \frac{1}{1+x^b}, so the log probabilities are often negative. The same happens for a normal distribution, whose log density is -\frac{1}{2}\log(2\pi \sigma^2) - \frac{(x-\mu)^2}{2\sigma^2}.
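As a quick numeric illustration that negative contributions to target are perfectly normal (a standalone Python sketch, not part of the model):

```python
import math

def normal_logpdf(x, mu, sigma):
    # log of the normal density: -(1/2) log(2 pi sigma^2) - (x - mu)^2 / (2 sigma^2)
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

# Even at the mode, the standard normal log density is about -0.9189 < 0.
print(normal_logpdf(0.0, 0.0, 1.0))
```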