Stan on multiple cores occasionally crashing Linux without overwhelming memory

I’m trying to fit a nonlinear model on not particularly many observations, and it occasionally crashes my computer, forcing me to reset it. Running on 1 core always seems to work, 2 cores sometimes crashes, and 3 cores crashes fairly regularly: it’s stochastic, which is the most annoying part of all. I would have expected that I was overwhelming RAM, but I’m running a pretty powerful machine (16 virtual cores and 64GB RAM), so I don’t think that’s it. Any help would be very much appreciated. I’m fairly new to working on Linux, so I might be doing something stupid somewhere.

I’m running Pop!_OS 20.04 LTS, and using brms_2.13.9 through RStudio, though I encounter the issue using either the rstan_2.19.2 or cmdstanr_0.1.3 (cmdstan-2.24.1) backend. The model I’m fitting isn’t really that large: it’s a nonlinear model fitted to 1000 observations from 50 groups, with 5 parameters per group, plus their associated SDs etc., which comes to 822 parameters total.

I opened htop to watch RAM usage while fitting yesterday, and had the computer crash while it was open. I had to take a photo of the screen with my phone as the computer had hung.

… so, not even 5% memory usage, and only 3 cores active.

I’m a bit stumped. If anyone has any suggestions, I’d be thrilled.

Thanks so much in advance!

Can you paste a snippet showing how you call your model? That might help us debug this.

I can definitely do that! Maybe I should have done so sooner. Sorry about that.

logtwotcm_prior <- c(
  set_prior("normal(-3, 0.2)", nlpar = "logk1"),
  set_prior("normal(-1.5, 0.2)", nlpar = "logvnd"),
  set_prior("normal(1, 0.2)", nlpar = "logbpnd"),
  set_prior("normal(-4, 0.2)", nlpar = "logk4"),
  set_prior("normal(-2, 1)", nlpar = "logvb"),
  set_prior("normal(0, 0.2)", nlpar = "logk1", class = "sd"),
  set_prior("normal(0, 0.2)", nlpar = "logvnd", class = "sd"),
  set_prior("normal(0, 0.2)", nlpar = "logbpnd", class = "sd"),
  set_prior("normal(0, 0.2)", nlpar = "logk4", class = "sd"),
  set_prior("normal(0, 0.2)", nlpar = "logvb", class = "sd"),
  set_prior("normal(0, 0.005)", class = "sigma"))

logtwotcm_fit_formula <- bf( meas_tac ~ twotcm_log_stan(logk1, logvnd, logbpnd,
                                                       logk4, logvb, MidTime, 
                                    lambda1_pfc, lambda2_pfc, lambda3_pfc, 
                                    A1_pfc, A2_pfc, A3_pfc, tstar_pfc, 
                                    lambda1_tot, lambda2_tot, lambda3_tot, 
                                    A1_tot, A2_tot, A3_tot, 
                                    tstar_tot, indicator),
     # Nonlinear variables
     logk1 + logvnd + logbpnd + logk4 + logvb ~ 1 + (1|m|ID),
     # Nonlinear fit
     nl = TRUE)

logtwotcm_fit <- brm(
  formula = logtwotcm_fit_formula,
  data = modeldat,
  prior = logtwotcm_prior,
  stanvars = stanvar(scode = two_compartment_log_stan,
                     block = "functions"),
  chains = 3,
  cores = 2,
  backend = "cmdstanr")

I’m not sure if I can share the model function definition itself just yet, but it’s pretty straightforward. It defines the real variables, exponentiates the log variables, and then runs a very long analytical solution to a pharmacokinetic model. It’s just one “line”, spread over about 20-30 lines on the screen.

As I said, the crashes are stochastic: it works some of the time and fails at other times, and with more cores it fails more regularly.

Thanks in advance for any help. I’m very happy to run any kinds of checks which might be useful - I just don’t know what these could be.

And if you run with

chains = 3,
cores = 1

it runs fine?

Yup. I’ve yet to have a crash with cores=1. It occasionally crashes with cores=2; and with cores=3 it crashes quite frequently (probably >50% of the time, though I actually can’t remember if it has worked even once).

This is just a prototype implementation for now on small simulated datasets. The plan is to scale this model up to bigger datasets with a more complicated hierarchical structure, so I’d then worry about cores=1 failing too. Otherwise, I might just have run these as single-chain models and stuck them together.

Ok, thanks. Let’s first check whether the issue is at the Stan level or the brms level.

To do that, you have to generate the Stan code and data and run cmdstanr separately. You can use make_stancode (see the thread “How to convert "standata" to "json"?” for a snippet on how to quickly transform the data for cmdstanr).

If that still crashes, then it’s something weird going on in the Stan core; otherwise it’s something that happens after Stan runs.

SUMMARY: still crashes out.

I tried it using cmdstanr as follows:

# Code
stanc <- make_stancode(logtwotcm_fit_formula,
  data = modeldat,
  prior = logtwotcm_prior,
  stanvars = stanvar(scode = two_compartment_log_stan,
                     block = "functions"))

# Data
stand <- make_standata(logtwotcm_fit_formula,
    data = modeldat,
    prior = logtwotcm_prior,
    stanvars = stanvar(scode = two_compartment_log_stan,
                       block = "functions"))

# Data list
stand_list <- list()
for (t in names(stand)) {
  stand_list[[t]] <- stand[[t]]
}
# Saving code
stanc_f <- cmdstanr::write_stan_file(stanc, basename = "cmdstanr_test.stan")

# Model
mod <- cmdstanr::cmdstan_model(stanc_f)

# Sample
fit <- mod$sample(
  data = stand_list,
  chains = 4,
  parallel_chains = 4,
  refresh = 500
)
I used 4 cores just to make sure it would actually freeze if it was going to, and it did. htop says I had <7.5GB RAM used out of 62.6GB.
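Incidentally, it looks like cmdstanr also offers write_stan_json() for dumping the data to a file instead of passing the list directly. A sketch (I haven’t verified that going via JSON changes anything about the crash):

```r
# Sketch: write the brms-generated data out as JSON for cmdstanr.
# make_standata() returns a classed list, so strip it to a plain list first.
stand_list <- lapply(stand, identity)
cmdstanr::write_stan_json(stand_list, "cmdstanr_test_data.json")

# sample() can then read the data from the JSON file instead
fit <- mod$sample(data = "cmdstanr_test_data.json", chains = 4,
                  parallel_chains = 4, refresh = 500)
```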

So then it’s probably something in the Stan code. But my Linux-fu is not good enough to diagnose what’s going on.

One potential thing to try: this is simulated data. I could send the data and full code privately to someone to try it out on a different version of Linux. I could also try to get it working on my Windows machine, but that’ll take some time and fiddling as I wasn’t able to get cmdstanr working on there when I last tried.

(also, thanks so much for all the help thus far!)

What if you inject some couts into the sampling library to see where it crashes?

I’d be happy to try this, but I don’t really know where to start. If you could give me some pointers, I could try. I don’t have any experience with C++, so I’m pretty clueless on how to go about doing this.

Yeah, that or a bug in the Stan backend (math most likely).

Feel free to DM me and I can try this on Linux easily.

Awesome - thank you! I’ve emailed everything over. Fingers crossed that you can reproduce the issue.

How quickly did it fail for you? Instantly, in warmup or in sampling?

Also, can you post the outputs of

make --version
g++ --version

Re warmup vs sampling: I’m not sure when using cmdstanr directly, but I didn’t see any progress updates after the first sampling message. When using cmdstanr through brms with 3 chains, the crash sometimes happened early and sometimes later, so I don’t think it mattered. But I’ll try it again a few times later today after a meeting.

Re make and g++ (and clang++):

GNU Make 4.2.1
Built for x86_64-pc-linux-gnu
Copyright (C) 1988-2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

g++ (Ubuntu 9.3.0-10ubuntu2) 9.3.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO

clang version 10.0.0-4ubuntu1 
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin

And my .R/Makevars is as follows:

#CXX14FLAGS=-O3 -march=native -mtune=native

#CXX14 = g++ -fPIC
CXX14 = clang++ -fPIC

CXXFLAGS=-O3 -std=c++1y -mtune=native -march=native -Wno-unused-variable -Wno-unused-function

CXX14FLAGS=-O3 -std=c++1y -mtune=native -march=native -Wno-unused-variable -Wno-unused-function

Actually, that’s something to try: using g++ instead of clang. I’ll give that a shot too!

After changing the Makevars to use g++, I just tried running 6 chains on 6 cores, and my computer froze again after between 50 and 150 samples on all the chains, so it was during warmup this time in any case. Though I don’t know whether cmdstanr uses the compiler specified in the Makevars, or whether you change it some other way?

No, cmdstanr does not use Makevars, and if you didn’t touch anything it probably picked up g++ (which I believe is the default on Linux).

You can switch to a different compiler for cmdstan with

cmdstan_make_local(cpp_options = list("CXX"="clang++"))
rebuild_cmdstan(cores = 4)

This seems to run fine on my Ubuntu machine, though… All chains ran just fine (finishing in between 3500 and 4100 seconds), so I’m not sure what to make of all this.

I am using the exact same compiler, make, and cmdstan version. Argh… I’ll give this some more thought.


I ran this with the model/data you sent.


file_path <- file.path("cmdstanr_test.stan")
mod <- cmdstan_model(file_path, compile = TRUE)

fit <- mod$sample(data = "cmdstanr_test_data.json", 
                  chains = 4,
                  parallel_chains = 4,
                  refresh = 100)

Maybe try fixing the seed, so we can see whether this pops up for you with a specific seed?

Well, that’s awesome that you’re running all the same versions of everything, but rather frustrating that we’re getting different results.

I tried different seeds and got different results. With seed=42, my PC hung very early - about 20 seconds after it started sampling.

fit <- mod$sample(data = "cmdstanr_test_data.json", 
                  chains = 4,
                  parallel_chains = 4,
                  refresh = 100, seed=42)

With seed=12345, it ran for 15 minutes or so, and then started spamming the console with

*** recursive gc invocation
*** recursive gc invocation
*** recursive gc invocation

Is that the garbage collector being invoked?

It looks like someone using prophet hit this in the past. You may need to reinstall some dependencies, or try the devtools::clean_dll() that was mentioned in the thread below


Thanks so much for the suggestion!

I tried devtools::clean_dll(), and it doesn’t do much: it just gives me an error about looking for a root directory with a DESCRIPTION file. That command is for deleting compiled files when writing an R package; in this case, I’m calling cmdstanr from RStudio and not from within a package, so there’s nothing to clean out. Thanks for finding the old prophet issue, though!

Since the error is occurring when running cmdstanr directly, I presume that it’s something that cmdstan depends on that’s causing the issues. Regarding dependencies then, I tried changing from g++ to clang++ in cmdstan. I got slightly different results: I ran 6 chains, 6 cores and seed=42, and during warmup, while some chains were steadily progressing, I got back “Chain 1 finished unexpectedly”, then “Chain 3 finished unexpectedly”, and then my computer hung again. I never saw this with g++.

So something is causing some of these chains to fail, and changing the C++ compiler doesn’t necessarily fix it, though it might make things a little more resilient, as now some of the chains fail in a visible way. Are there other dependencies that I might cycle through reinstalling? I guess I could try reinstalling make, but are there others you’d recommend?

Another possibility could be that my model is badly specified (I’m simultaneously having convergence issues, and am busy drafting another question about them). Could improving my model definition be enough to stop Stan from freezing my machine?

Do you know any Python? Maybe try the same model with CmdStanPy?


Assuming that python --version reports 3.x; if not, use python3:

python -m pip install cmdstanpy
python -m cmdstanpy.install_cmdstan
# create myfile.stan (following an example in the docs)
# use rstan to create the data file, e.g. myfile.rdata
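The driver script itself would then be short. A sketch (argument names follow recent CmdStanPy — older versions used cores= rather than parallel_chains=, so check your version’s docs — and the file names are just the ones used earlier in this thread):

```python
from cmdstanpy import CmdStanModel

# compile the model (or reuse the cached executable)
model = CmdStanModel(stan_file="cmdstanr_test.stan")

# run 4 chains in parallel, reading data from the JSON file
fit = model.sample(
    data="cmdstanr_test_data.json",
    chains=4,
    parallel_chains=4,
    seed=42,
)
print(fit.summary())
```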

If you want to access the csv files later, add this to your python file

fit.save_csvfiles(".") # save to local folder