Hello all,
After running my model with cmdstanpy I get a pretty big CSV file (about 1.1 GB). I am now trying to load it with arviz using: data = az.from_cmdstan(posterior="path-to-csv")
or with data = az.from_cmdstan(posterior=["path-to-csv", "parameter-name"])
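For context, the full snippet is roughly the following (the path is just a placeholder for my actual output file):

    import arviz as az

    # Placeholder for the ~1.1 GB CSV produced by the cmdstanpy run
    csv_path = "path-to-csv"

    # Convert the Stan CSV output into an InferenceData object
    data = az.from_cmdstan(posterior=csv_path)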
Both ways take hours and use many GB of memory (at this point even 20 GB is not enough and the Python process crashes).
Is there a more efficient way to load the sampling output data?
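In the meantime, a workaround I am experimenting with (just a sketch, assuming the usual CmdStan CSV layout where lines starting with "#" are comments and the header row holds the flattened parameter names such as theta.1, theta.2) is to pull out only the columns of one parameter with pandas before doing anything else:

    import pandas as pd

    csv_path = "path-to-csv"   # placeholder for the actual output file
    param = "theta"            # placeholder name of the parameter I care about

    # Read just the header row to get all flattened column names
    header = pd.read_csv(csv_path, comment="#", nrows=0).columns

    # Keep only the columns that belong to the chosen parameter
    cols = [c for c in header if c == param or c.startswith(param + ".")]

    # Load only those columns, which should keep memory use well below a full load
    draws = pd.read_csv(csv_path, comment="#", usecols=cols)

That obviously only gives me raw draws for one parameter rather than a full InferenceData object, so I would still prefer a proper arviz-based solution.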
Hi, thanks for answering! I installed arviz with pip for Python 3.7.0 only yesterday, so is it safe to assume it is the most up-to-date version? I’d be happy to create an issue! Just found the link, thanks!
If that doesn’t work, you can go to the arviz GitHub page and open a new issue on the Issues tab. Just describe your problem, and also add a link to this discussion.