Limits to JSON conversion for Large Data (R character strings are limited to 2^31-1 bytes)

My R data list contains several objects totaling about 7 GB in R; the largest element is the independent-variable (predictor) matrix, which is 5 million x 157. When I try to sample in CmdStanR, I get the following error:

Error in collapse(tmp, inner = FALSE, indent = indent) :
R character strings are limited to 2^31-1 bytes

This happens in CmdStanR before any sampling occurs, so it appears there is a limit on the amount of data R can convert to JSON. I was able to reproduce the error with the following command (my list is data_stan_all):

write_stan_json(data_stan_all, file = file.path(dir_out, "data_stan_all.json"))

That gives the same error as above.

My desktop has 128 GB of RAM, so this is not a memory limitation. Others outside the Stan ecosystem appear to hit the same limit when converting large data to JSON. I see the error on both Windows and WSL.

I'm not sure if others have recommendations. For now, I am cutting the data down to size by randomly selecting rows.
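Concretely, the subsampling I'm doing looks roughly like this; `X` and `y` are placeholder names for the predictor matrix and outcome vector, and the helper is just a sketch:

```r
# Sketch of the row-subsampling workaround; X and y are placeholder names
# for the 5M x 157 predictor matrix and the corresponding outcome vector.
subsample_rows <- function(X, y, n_keep, seed = 1) {
  set.seed(seed)                      # reproducible subsample
  keep <- sample(nrow(X), n_keep)     # random row indices, without replacement
  list(N = n_keep, K = ncol(X),
       X = X[keep, , drop = FALSE], y = y[keep])
}
```

This keeps the Stan data list structure intact (N, K, X, y) while shrinking the JSON below the string limit.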


This appears to be a bug/limitation of the current cmdstanr implementation, which relies on jsonlite::write_json. Could you try building a small reproducible example (e.g. by simulating a large dataset) and filing an issue at Issues · stan-dev/cmdstanr · GitHub?

I think the only workaround that does not require code changes to cmdstanr is to write the JSON file yourself (you can inspect the expected format by writing a smaller dataset) in a way that avoids constructing very large strings. Then you can call the model executable directly (see e.g. 4 MCMC Sampling | CmdStan User's Guide) and use cmdstanr::read_cmdstan_csv or cmdstanr::as_cmdstan_fit to read the results back into R.
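A rough sketch of such a hand-rolled writer, assuming jsonlite is available: `write_stan_json_chunked` is a hypothetical helper, not part of cmdstanr, and its output should be compared against write_stan_json() on a small dataset before trusting it for real sampling.

```r
library(jsonlite)

# Hypothetical helper: stream each element of the data list to the file
# separately -- and large matrices one block of rows at a time -- so no
# single intermediate R string approaches the 2^31-1 byte limit.
write_stan_json_chunked <- function(data, file, block = 10000L) {
  con <- file(file, open = "w")
  on.exit(close(con))
  writeLines("{", con)
  nms <- names(data)
  for (i in seq_along(nms)) {
    x <- data[[i]]
    comma <- if (i < length(nms)) "," else ""
    if (is.matrix(x) && nrow(x) > block) {
      # Emit the big matrix as a JSON array of row arrays, `block` rows
      # per toJSON() call, keeping every intermediate string small.
      cat(sprintf('"%s": [', nms[i]), file = con)
      for (start in seq(1L, nrow(x), by = block)) {
        end <- min(start + block - 1L, nrow(x))
        chunk <- as.character(toJSON(x[start:end, , drop = FALSE], digits = NA))
        # Drop the outer [ ] so consecutive chunks join into one array.
        cat(substr(chunk, 2L, nchar(chunk) - 1L),
            if (end < nrow(x)) "," else "", sep = "", file = con)
      }
      cat("]", comma, "\n", sep = "", file = con)
    } else {
      # Unbox length-1 vectors so scalars come out as plain numbers,
      # mirroring what write_stan_json() does for int/real data.
      val <- if (length(x) == 1L && is.null(dim(x))) unbox(x) else x
      cat(sprintf('"%s": %s%s\n', nms[i],
                  as.character(toJSON(val, digits = NA)), comma),
          file = con)
    }
  }
  writeLines("}", con)
}
```

The block size is a tuning knob: each chunk of `block` rows x 157 columns serializes independently, so the peak string size stays bounded regardless of the total row count.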

A similar problem was discussed here: Brms limited memory issue while running on 15M data points (unfortunately without a solution). The same problem was noted for jsonlite at R, convert large dataset into JSON - Stack Overflow (again without a solution).


Has this bug/limitation been solved yet?