Loo package and map_rect

I am trying to interface the loo package with a Stan model that relies on map_rect. From the loo package documentation, I understand that one needs to output the log-likelihood for each observation so that extract_log_lik can access it.

However, it's not entirely obvious to me how to output the observation-wise log-likelihoods from each shard. For example, consider the following logistic regression example provided in the map_rect chapter of the User's Guide.

functions {
  vector lr(vector beta, vector theta, real[] x, int[] y) {
    real lp = bernoulli_logit_lpmf(y | beta[1]
                                       + to_vector(x) * beta[2]);
    return [lp]';
  }
}
data {
  int y[12];
  real x[12];
}
transformed data {
  // K = 3 shards
  int ys[3, 4] = { y[1:4], y[5:8], y[9:12] };
  real xs[3, 4] = { x[1:4], x[5:8], x[9:12] };
  vector[0] theta[3];
}
parameters {
  vector[2] beta;
}
model {
  beta ~ std_normal();
  target += sum(map_rect(lr, beta, theta, xs, ys));
}

From my understanding, the lr function returns a single log-likelihood corresponding to a particular shard, and these shard-wise log-likelihoods are then summed in the model block.

My initial intuition is to modify the lr function to return a vector of observation-wise log-likelihoods (instead of a single log-likelihood per shard), as in:

functions {
  vector lr(vector beta, vector theta, real[] x, int[] y) {
    int N = num_elements(y);
    vector[N] log_lik;
    for (n in 1:N) {
      log_lik[n] = bernoulli_logit_lpmf(y[n] | beta[1] + x[n] * beta[2]);
    }
    return log_lik;
  }
}

However, I am not sure whether this is reasonable from a performance perspective, and I am not sure how best to concatenate log_lik across the shards so that it can be passed to loo and also fed to target +=.
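For concreteness, something like the following (untested) is what I have in mind, using the modified lr above: since map_rect concatenates the shard results in order, the full log_lik vector could be stored in transformed parameters so that extract_log_lik can find it, and then summed in the model block.

transformed parameters {
  // map_rect concatenates the results from the shards in order,
  // so log_lik[1:4] corresponds to y[1:4], log_lik[5:8] to y[5:8], etc.
  vector[12] log_lik = map_rect(lr, beta, theta, xs, ys);
}
model {
  beta ~ std_normal();
  target += sum(log_lik);
}

But I don't know whether saving log_lik as a transformed parameter like this is wasteful, or whether there is a better place to put it.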

I would appreciate it if someone could suggest a solution using the toy example above. Thank you.

Usually, the observation-wise log-likelihoods are computed after sampling, in the generated quantities block. As generated quantities are evaluated only once per iteration and not for every leapfrog step, the computation time is usually much smaller than the sampling time. If you have a very large number of observations, it is probably more memory efficient to do the parallel computation of log_lik outside of Stan. See the loo.function() method, the cores argument, and possibly also the loo vignette "Using Leave-one-out cross-validation for large data".
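For the toy example above, a minimal sketch of that approach (assuming the model block stays exactly as in the User's Guide example) would be to add a generated quantities block that recomputes the observation-wise log-likelihood directly:

generated quantities {
  vector[12] log_lik;
  for (n in 1:12) {
    // evaluated once per saved iteration, not per leapfrog step
    log_lik[n] = bernoulli_logit_lpmf(y[n] | beta[1] + x[n] * beta[2]);
  }
}

extract_log_lik() can then pick up log_lik from the fit as usual.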