Memory errors with Laplace approximation in cmdstanr

I am getting intermittent memory errors when using the $laplace() method in cmdstanr.

Chain 1 terminate called after throwing an instance of 'std::bad_alloc'
Chain 1   what():  std::bad_alloc

This seems to indicate I have run out of memory, but the models and computations don't seem big enough for that.

I am not sure of the specific trigger, but all the models that have crashed in this way have used matrix_exp(). For example, the following toy code reproduces it.

mod_string <- "data {
  matrix[2,2] Q;
  int<lower=0> N;
  array[N] int<lower=0, upper=1> y;
}
parameters {
  real<lower=0, upper=1> theta;
}
model {
  matrix[2,2] P;
  // repeated matrix_exp calls to stress memory usage
  for (i in 1:10000) {
    P = matrix_exp(Q);
  }
  theta ~ beta(1, 1);
  y ~ bernoulli(theta);
}"

mod <- cmdstanr::cmdstan_model(cmdstanr::write_stan_file(mod_string))
set.seed(1)
stan_data <- list(N = 100, y = rbinom(100, size = 1, prob = 0.5),
                  Q = rbind(c(-1, 1), c(1, -1)))
fit <- mod$laplace(data = stan_data, seed = 123, draws = 100000)

When either the number of Laplace draws or the number of matrix_exp() computations gets big enough, the crash happens after a certain number of draws (which could be as low as 1000 for models larger than the above). I wouldn't have expected the Ps to be stored between iterations of the for loop.

For comparison, the MCMC and variational methods in cmdstanr work fine, and rstan's optimizing(..., draws = ) runs instantly (though I am aware that works differently, without calculating a Jacobian).

This is on Windows, with current versions of everything.


I can confirm that memory use does steadily increase as the Laplace sampling continues for this model. Curious.
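If anyone else wants to check this, a rough way to watch a process's resident set size from a Unix-like shell (the PID here is a placeholder; on Windows, Task Manager or PowerShell's Get-Process serves the same purpose):

```shell
# Print the resident set size (KB) of a process once per second.
# $$ (this shell's own PID) is used only so the snippet is runnable;
# substitute the PID of the running chain to watch it grow.
PID=$$
for i in 1 2 3; do
  ps -o rss= -p "$PID"
  sleep 1
done
```

For the model above, the reported RSS climbs steadily while the Laplace draws are being generated.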

This will be fixed by "Clean up var memory in laplace_sample" (stan-dev/stan PR #3324 by WardBrian on GitHub).

Thanks for reporting!


Great, thanks for addressing so quickly!