Deleting model fits to prevent full hard drive

I am working with Stan models in a simulation study, using PyStan, where I fit the same model multiple times with different values of random_seed. I noticed that after fitting, the fit is saved to my cache folder under httpstan/4.4.2/models/"model_name"/fits/"fit_name".

The problem I run into is that my hard drive gets cluttered with these files, which I don't need. I have tried clearing the folders containing these files manually, but since I am using parallelization, I cannot simply delete entire folders while other fits are still running.

Is there a way to delete these fit files after I retrieve the posterior samples that I want, or to keep Stan from saving them in the first place?
I tried using the delete_fit function from httpstan.cache, which requires you to specify an identifier for the model (e.g. model_name), which is easy to obtain, and an identifier for the fit (e.g. fit_name), which I am not sure how to obtain (there is a calculate_fit_name function in httpstan.fits, but I cannot get it to work).
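In the meantime, a workaround that avoids the httpstan API entirely is to delete the fit file directly on disk, using the directory layout observed above. This is only a sketch under assumptions: the cache root is platform-dependent (on Linux it is typically ~/.cache), and the httpstan version segment ("4.4.2" here) must match your installation; the model and fit names below are placeholders.

```python
# Sketch: remove one cached httpstan fit file after the draws are extracted.
# Assumed on-disk layout (as observed in the question):
#   <cache_root>/httpstan/<version>/models/<model_name>/fits/<fit_name>
import pathlib
import tempfile


def delete_cached_fit(cache_root: pathlib.Path, httpstan_version: str,
                      model_name: str, fit_name: str) -> bool:
    """Delete a single fit file; return True if a file was removed.

    Deleting one file (rather than the whole fits/ folder) should be
    safe under parallelization, since each worker only touches its own fit.
    """
    fit_path = (cache_root / "httpstan" / httpstan_version / "models"
                / model_name / "fits" / fit_name)
    if fit_path.is_file():
        fit_path.unlink()
        return True
    return False


# Demonstration against a throwaway directory that mimics the layout;
# "abc123" and "fit_1" are hypothetical model/fit identifiers.
with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    fits_dir = root / "httpstan" / "4.4.2" / "models" / "abc123" / "fits"
    fits_dir.mkdir(parents=True)
    (fits_dir / "fit_1").write_bytes(b"draws")
    print(delete_cached_fit(root, "4.4.2", "abc123", "fit_1"))  # True
    print((fits_dir / "fit_1").exists())                        # False
```

The remaining difficulty is the same as with delete_fit: you still need the fit's file name, e.g. by listing the fits/ directory for your model before and after sampling, since I have not found a documented way to obtain it from the PyStan fit object.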

Operating system: Centos linux release 7.9.2009 (core)
Python version: 3.8.11
PyStan version: 3.1.1
Compiler: gcc 10.2.1

I don't think that is exposed anywhere on the PyStan side, so it sounds like a really hard problem. I don't remember if there is a way to turn that off.

Maybe try CmdStanPy?

I haven't found anything yet, but I can't imagine nobody has run into this problem before, since running with different seeds saves every fit by default. Also, since functions exist to delete fits, there must be a way to delete them after they are created?