I have experience using Stan in R, but now I am starting to use Stan through Python. I see that there has been a major update in PyStan 3, which is the version I installed. There are two things that are annoying me in starting to use PyStan, both different from RStan, so I'm asking whether there are ways around them.
The first is that when I compile a model with the following line, the compiled model always seems to be stored in a cache.
model = stan.build(program_code=model_code, data=stan_data)
This is annoying because during development I usually modify and recompile the model several times, but with this default I have to remember to run a specific line to delete the cache before each recompile, i.e.:
httpstan.cache.delete_model_directory(posterior.model_name)
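To avoid forgetting that step, I've been wrapping the two calls in a small helper; a minimal sketch (`fresh_build` and the injectable `delete_fn`/`build_fn` parameters are my own names, not part of the PyStan API; the injection is there only so the flow can be exercised without a Stan toolchain):

```python
# My own development wrapper, not a PyStan feature: delete the cached
# build for the previous model name, then rebuild from program_code.
def fresh_build(program_code, data, old_model_name=None,
                delete_fn=None, build_fn=None):
    if delete_fn is None:
        import httpstan.cache
        delete_fn = httpstan.cache.delete_model_directory
    if build_fn is None:
        import stan
        build_fn = stan.build
    if old_model_name is not None:
        delete_fn(old_model_name)  # drop the stale cached compile first
    return build_fn(program_code=program_code, data=data)
```

But this still feels like working around the cache rather than turning it off.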
I couldn't find an option to prevent PyStan from automatically caching my model. Caching is useful once I'm done tweaking the model, but not before.
The second, related annoyance is that I need to pass the data when compiling the model. In my R workflow, when I was done tweaking the model I'd save a compiled version and then run different datasets through the same compiled model. This seemed efficient, since a single compilation let me run many different analyses. It's less clear to me why someone would want to automatically cache a compiled model with a fixed dataset.
In my current RStan version, I did this two-step analysis with the following lines:
model = rstan::stan_model("./path/to/file.stan")
posteriors = rstan::sampling(model, data=stan_data)
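The closest I've found in PyStan 3 is to rebuild per dataset, since the data is bound at build time. A sketch of what I'm doing now (`fit_datasets` and the injectable `build_fn` are my own names, not PyStan API; the injection only lets the loop run without a Stan toolchain):

```python
# My own helper, not a PyStan feature: because stan.build takes the data,
# each dataset currently gets its own build call before sampling.
def fit_datasets(program_code, datasets, build_fn=None, **sample_kwargs):
    if build_fn is None:
        import stan
        build_fn = stan.build
    fits = []
    for data in datasets:
        posterior = build_fn(program_code=program_code, data=data)
        fits.append(posterior.sample(**sample_kwargs))  # one fit per dataset
    return fits
```

This repeats the (cached) build step for every dataset, which is exactly what I'd like to avoid.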
So my two questions are: Is it possible to turn off the automatic caching of compiled models? Is it possible to run a compiled model with a new dataset?
Thanks!