Hi,
this appears to be a bug/limitation of the current cmdstanr implementation, which relies on jsonlite::write_json. Could you try building a small reproducible example (e.g. by simulating a large dataset) and filing an issue at Issues · stan-dev/cmdstanr · GitHub?
I think the only workaround that does not require code changes to cmdstanr is to write the JSON file yourself (you can inspect the expected format by writing a smaller dataset first) in a way that does not require building large strings in memory. Then you can call the model executable directly (see e.g. 4 MCMC Sampling | CmdStan User’s Guide) and use cmdstanr::read_cmdstan_csv or cmdstanr::as_cmdstan_fit to read the results back into R.
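To illustrate what I mean by writing the JSON yourself, here is a minimal sketch that streams the data file to disk in fixed-size chunks, so no single huge string is ever built in memory. This assumes your data is just a scalar `N` and one long numeric vector `y` — the function name and the chunk size are my own illustrative choices, and you would need to adapt the layout to whatever variables your model's data block actually declares:

```r
# Sketch: stream a Stan-style JSON data file to disk in chunks,
# avoiding construction of one large in-memory string.
# Assumes the data block is just `int N;` and `vector[N] y;` -- adapt as needed.
write_stan_json_chunked <- function(path, N, y, chunk = 1e6L) {
  con <- file(path, open = "w")
  on.exit(close(con))
  cat('{\n"N": ', N, ',\n"y": [\n', sep = "", file = con)
  for (s in seq(1L, N, by = chunk)) {
    idx <- s:min(s + chunk - 1L, N)
    # trailing comma after every chunk except the last, to keep the JSON valid
    tail_sep <- if (max(idx) < N) "," else ""
    cat(paste(y[idx], collapse = ","), tail_sep, "\n", sep = "", file = con)
  }
  cat("]\n}\n", file = con)
}
```

You could then run the compiled model from the command line against that file (roughly `./model sample data file=data.json output file=output.csv`, per the CmdStan guide linked above) and load the CSV with cmdstanr::as_cmdstan_fit("output.csv").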
A similar problem was discussed here: Brms limited memory issue while running on 15M data points (unfortunately without a solution). The same issue with jsonlite was noted at R, convert large dataset into JSON - Stack Overflow (once again without a solution).