After running my model in cmdstanpy I get a pretty big CSV file of about 1.1 GB. I am trying to load it with ArviZ, either as:
data = az.from_cmdstan(posterior="path-to-csv")
or (trying to restrict it to a single parameter):
data = az.from_cmdstan(posterior=["path-to-csv", "parameter-name"])
Both ways take hours and use many GB of memory (even 20 GB is not enough and the Python process crashes).
Is there a more memory-efficient way to load the sampling output?
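One memory-light workaround, sketched below under the assumption that you only need a few parameters: stream the CmdStan CSV line by line, skip the `#` comment lines CmdStan interleaves, and keep only the column(s) of interest instead of materializing the whole 1.1 GB file. The function name `load_param_column` is hypothetical, not part of any library:

```python
import csv

def load_param_column(path, param):
    """Stream one parameter's draws from a CmdStan output CSV.

    CmdStan CSVs mix '#'-prefixed comment lines (config, adaptation
    info, timing) with one header row and the draw rows.  Reading
    row by row keeps memory proportional to one column, not the
    whole file.
    """
    values = []
    with open(path) as fh:
        # Filter out comment lines before handing lines to csv.reader.
        reader = csv.reader(line for line in fh if not line.startswith("#"))
        header = next(reader)           # column names, e.g. lp__, theta, ...
        idx = header.index(param)       # position of the requested parameter
        for row in reader:
            values.append(float(row[idx]))
    return values
```

The resulting list (one per chain file) can then be wrapped into an InferenceData with `az.from_dict(posterior={"theta": [values]})`, which avoids `az.from_cmdstan` parsing every column. This is a sketch, not a drop-in replacement: it assumes a scalar parameter and ignores vector/matrix parameters, whose columns CmdStan names like `theta.1`, `theta.2`.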