After running my model in CmdStanPy I get a pretty big CSV file of 1.1 GB. I am trying to load it using ArviZ with:

```python
data = az.from_cmdstan(posterior="path-to-csv")
data = az.from_cmdstan(posterior=["path-to-csv", "parameter-name"])
```
Both ways take hours and many GB of memory (at this point even 20 GB is not enough and the Python process crashes).
Is there a more efficient way to load the sampling output data?
Hi, have you tried the current master branch of ArviZ? I recently updated some parts of the processing.
Could you create an issue in the ArviZ repo about this, so we can make it faster?
Hi, thanks for answering! I only downloaded Python 3.7.0 yesterday, so is it right to assume I have the most up-to-date version? I'd be happy to create an issue! Just found the link, thanks!
To install the development version, run:

```
pip install git+https://github.com/arviz-devs/arviz
```

If that doesn't work, you can go to the ArviZ GitHub page and open a new issue on the Issues tab. Just describe your problem, and also add a link to this discussion.
Didn’t work. Opened an issue. Thank you!