Alternative .csv reader



Lol, this is C++ that’s too messy at the moment to be a PR… this thread was me fishing to see if Ben thought the complexity of some (cleaned-up) C++ might be worth the potential speed-up in reading cmdstan .csv files in rstan. I think the answer was that they thought the current R code could be made faster, so I didn’t pursue it.

The code is under ‘’ under ‘inst/include/zoom*’ and if you install that package you can run it with ‘stannis:::read_cmdstan_csv()’.

Next chance I get to update it I’m going to refactor the abundant typedefs into classes and add an intermediate binary serialization step so that you can access individual parameters without loading the entire sample into memory.
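Just to sketch what I mean by the intermediate binary step (the names `write_draws`/`read_param` and the layout are hypothetical, not the package's actual code): dump each parameter’s draws as raw doubles into one file, keep a small (offset, length) index per parameter, and then a single parameter can be read back with a seek instead of loading the whole sample.

```cpp
// Hypothetical sketch of an intermediate binary serialization step:
// each parameter's draws are appended as raw doubles, and a small
// index of (offset, count) per parameter lets you read one parameter
// back without touching the rest of the sample.
#include <cstdio>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

struct Slice { std::uint64_t offset; std::uint64_t n; };

// Append one parameter's draws and record where they landed.
void write_draws(std::FILE* f, std::map<std::string, Slice>& index,
                 const std::string& name, const std::vector<double>& draws) {
    long pos = std::ftell(f);
    index[name] = { static_cast<std::uint64_t>(pos), draws.size() };
    std::fwrite(draws.data(), sizeof(double), draws.size(), f);
}

// Load a single parameter by seeking to its slice.
std::vector<double> read_param(std::FILE* f,
                               const std::map<std::string, Slice>& index,
                               const std::string& name) {
    const Slice& s = index.at(name);
    std::vector<double> out(s.n);
    std::fseek(f, static_cast<long>(s.offset), SEEK_SET);
    std::fread(out.data(), sizeof(double), s.n, f);
    return out;
}
```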


The peak memory usage in R isn’t a very reliable measure for how much memory is required, since it depends on how frequently garbage collection is being done. With my CSV speedups, it did not change peak memory usage much in an unconstrained setting, but it was actually able to run with less available memory (tested by using ulimit to cap memory).

The new code is faster, but not as fast as yours. There is definitely still room to speed up the CSV reading, but some of the performance difference is also the extra work that read_stan_csv does in creating the stanfit object, so I don’t think it would be a 3x speedup.
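For what it’s worth, a lot of the CSV-reading time tends to be text-to-double conversion. Parsing a whole line in one pass with `std::strtod` avoids stream-extraction overhead; this helper is purely illustrative, not the code from either implementation being compared:

```cpp
// Illustrative fast-path for converting one comma-separated line of
// numbers into doubles: a single strtod loop over the raw buffer,
// avoiding istringstream extraction overhead.
#include <cstdlib>
#include <string>
#include <vector>

std::vector<double> parse_csv_line(const std::string& line) {
    std::vector<double> out;
    const char* p = line.c_str();
    char* end = nullptr;
    while (*p != '\0') {
        double v = std::strtod(p, &end);
        if (end == p) break;      // nothing parsed; stop
        out.push_back(v);
        p = end;
        if (*p == ',') ++p;       // step over the separator
    }
    return out;
}
```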

I looked briefly at extending Rstan to allow for this sort of backend. You could make the entire sample data frame a memory mapped matrix that is stored on disk, and then you wouldn’t have to load anything into memory until it is needed by downstream analysis. The OS can handle loading the relevant pages from disk into memory. So you could process very large stanfit objects with minimal memory overhead.
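The idea, sketched with POSIX mmap directly (the `MappedSample` type and the layout assumption of one contiguous block of doubles per parameter are mine, for illustration only):

```cpp
// Sketch of the memory-mapped approach: the whole sample is one binary
// file of doubles, and a "parameter" is just a pointer offset into the
// mapping, so the OS pages draws in only when they are actually touched.
// Error handling is minimal and munmap is omitted for brevity.
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstddef>

struct MappedSample {
    const double* data = nullptr;
    std::size_t n_doubles = 0;

    bool open_file(const char* path) {
        int fd = ::open(path, O_RDONLY);
        if (fd < 0) return false;
        struct stat st;
        if (fstat(fd, &st) != 0) { ::close(fd); return false; }
        void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        ::close(fd);                 // the mapping survives the close
        if (p == MAP_FAILED) return false;
        data = static_cast<const double*>(p);
        n_doubles = static_cast<std::size_t>(st.st_size) / sizeof(double);
        return true;
    }

    // A "view" of one parameter: no copy, just an offset into the map.
    const double* param(std::size_t offset) const { return data + offset; }
};
```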

Unfortunately, the mmap package in R didn’t seem to support creating multiple vectors indexing into different points in a single large mmap object, so it was going to require rather extensive changes to either the rstan or the mmap package to support this.


What else does rstan do? My output is reshaped to be identical (per-parameter arrays with equivalent dimensions to rstan). It’s not also calculating R-hats or something, is it?

boost::iostreams looks like it makes this relatively painless, so I was going to go that route.


Yeah, that’s worth doing, but I’ll wait till the Ubuntu I’m on decides to update to the new R, since there are supposedly speedups coming :)


It’s not calculating R-hat, but it does a bunch of string munging on the parameter names that seemed to take up a fair amount of time. I don’t remember all the details, but you can see the slow steps using the R profiling tools.

I’m not familiar with boost::iostreams. It may be the same as the mmap format - which is just the raw binary representation of the floats. The cool thing about mmap is that the OS can manage the memory overhead, and modern OSs are really good at this. When the data is needed, the OS loads the page into memory, and if memory is needed by R or another process, the page gets removed from memory.

Compared to writing out to files with streams, I think it would be comparable in the initial loading of the CSV, but it could be cleaner when loading and working on the saved object, since you wouldn’t need to specify in advance which parameters to load - they would just be pulled into memory on demand. I think it can also be made a little cleaner this way because you can have one big disk object for the whole model, rather than a separate file for each parameter.

Yes, R 3.5 implemented buffered input, which makes the scan function run much faster.


Huh, I recall something about that, but not why the munging might be necessary.

boost::iostreams is a library; memory-mapped files are one of its features:


The munging has to do with converting vectors and matrices from the flat format in the CSV file to the proper format for the stanfit class.
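To make that concrete (a minimal sketch in C++, though the real rstan code does this in R): a cmdstan CSV header stores something like `beta.2.3` for element [2,3] of a matrix, so building per-parameter arrays means splitting every header into a base name plus indices.

```cpp
// Sketch of the header munging: split a cmdstan-style column name like
// "beta.2.3" into a base name ("beta") and its indices ({2, 3}).
// Scalars like "lp__" have no dots and get an empty index vector.
#include <cstddef>
#include <cstdlib>
#include <string>
#include <vector>

struct ParamName {
    std::string base;        // e.g. "beta"
    std::vector<int> index;  // e.g. {2, 3}; empty for scalars
};

ParamName split_header(const std::string& header) {
    ParamName p;
    std::size_t dot = header.find('.');
    p.base = header.substr(0, dot);  // whole string when no dot found
    while (dot != std::string::npos) {
        std::size_t next = header.find('.', dot + 1);
        p.index.push_back(
            std::atoi(header.substr(dot + 1, next - dot - 1).c_str()));
        dot = next;
    }
    return p;
}
```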


That part I do already too, so I think it’s a fair comparison.