I have a set of models that work very well in brms / cmdstanr and an Ubuntu environment. After a few thousand warmup iterations and several thousand additional sampling iterations, the posterior describes the thousands of parameters of interest very well. It is lovely.
Except: memory. Sampling many thousands of parameters many thousands of times appears to be an expensive thing to do, and the resulting object sizes mean that after running a few of these models, I run out of memory in my R session. This is a problem I would like to solve, and it is one I cannot solve simply by allocating more computing resources. I need to solve it with the resources I have, in a way that does not let the perfect become the enemy of the good. The models already use a non-centered parameterization and reasonably informative half-normal priors on variance components.
One option, if I don’t mind sacrificing diagnostics, would be to discard the warmup iterations. This helps a fair amount.
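Concretely, with the cmdstanr backend I mean something like this (a sketch; `my_formula` and `my_data` are placeholders, and I believe `save_warmup` is simply passed through to cmdstanr, where it is already the default):

```r
# Sketch: drop warmup draws from the saved fit (my_formula / my_data are
# placeholders). With the cmdstanr backend, save_warmup = FALSE is, I believe,
# already the default, but it can be stated explicitly.
library(brms)

fit <- brm(
  my_formula, data = my_data,
  backend = "cmdstanr",
  save_warmup = FALSE,                  # don't keep warmup iterations in the object
  chains = 4, warmup = 2000, iter = 4000
)
```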
Another option would be to lower the number of (saved) posterior draws and raise adapt_delta in the hope of sampling more efficiently. This helps a bit, but not very much: there seems to be only so much you can accomplish by sampling less, more carefully.
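That is, something along these lines (again a sketch with placeholder names; the particular numbers are illustrative):

```r
# Sketch: fewer saved post-warmup draws, sampled more carefully.
fit <- brm(
  my_formula, data = my_data,
  backend = "cmdstanr",
  chains = 4,
  warmup = 1000, iter = 2000,          # only 1000 post-warmup draws per chain
  thin = 2,                            # keep every second draw
  control = list(adapt_delta = 0.99)   # slower steps, fewer divergences
)
```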
Are there any additional memory savings to be had, perhaps by reusing previously compiled models, using a different compiler / compiler configuration, or by somehow reducing the number of draws returned from the posterior while substantially retaining the information they contain? If anyone has found some winning strategies, I’d love to hear about them. Thanks.
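For concreteness, these are the kinds of things I have in mind (a rough sketch; `fit1`, the variable names, and the file name are placeholders):

```r
# Rough sketch of the kinds of savings I'm asking about.
library(posterior)

# Reuse the already-compiled model for a new data set instead of recompiling:
fit2 <- update(fit1, newdata = other_data, recompile = FALSE)

# Keep only the draws of the parameters I actually need, then drop the big object:
draws <- as_draws_df(fit1, variable = c("b_Intercept", "sd_group__Intercept"))
draws <- thin_draws(draws, thin = 4)   # trade information for memory
rm(fit1); gc()

# Or cache each fit on disk and reload it only when needed:
fit3 <- brm(my_formula, data = my_data, backend = "cmdstanr", file = "fit3")
```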
Is the problem the size of the Stan object?
If so, one follow-up question is: are you interested in all of your parameters?
If the object size is the problem and you don’t actually need all of those parameters, then one thing you can do is not save all of them in the model. You can move quantities from the transformed parameters block to the model block, or you can use {} to create “auxiliary” (local) blocks inside the transformed parameters block. Variables declared inside these local blocks don’t get saved.
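For example, a minimal sketch (not your model, just to illustrate which quantities end up in the output):

```stan
// Minimal sketch: only mu_bar is written to the output.
// eta lives in a local { } block or in the model block, so it is never saved.
data {
  int<lower=1> N;
  vector[N] x;
  vector[N] y;
}
parameters {
  real alpha;
  real beta;
  real<lower=0> sigma;
}
transformed parameters {
  real mu_bar;                            // saved
  {
    vector[N] eta = alpha + beta * x;     // local: not saved
    mu_bar = mean(eta);
  }
}
model {
  vector[N] eta = alpha + beta * x;       // model-block local: not saved either
  alpha ~ normal(0, 5);
  beta ~ normal(0, 5);
  sigma ~ normal(0, 2);
  y ~ normal(eta, sigma);
}
```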
From your question, it sounds as though you need more than a few of these models loaded in R simultaneously. I wonder whether this is truly the case, or whether there might be a workaround for whatever you are doing downstream that requires multiple models at once.
Could you say a little bit more about why you want multiple models in your environment at once?
An alternative way of modeling a multinomial logistic outcome, particularly when you want different equations for different levels of the outcome, is to fit a series of binomial (logistic) regressions against a common pivot category and then use a softmax to map the results back to the probability scale. To do this, you fit one logistic regression per non-pivot level, predict the full original set of observations on the logit (link) scale, and then apply the softmax across all of those predictions. If each of those fits starts requiring a lot of memory, you can run out, at least if you work within a single session.
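Roughly, the workflow looks like this (a sketch with placeholder names; `dat`, `outcome`, `x1`, `x2`, and the pivot level are made up):

```r
# Sketch of the pivot + softmax workflow: one binomial model per non-pivot
# outcome level, each fit against the common pivot category.
library(brms)

pivot        <- "catA"
levels_other <- setdiff(levels(dat$outcome), pivot)

fits <- lapply(levels_other, function(k) {
  dat_k      <- subset(dat, outcome %in% c(pivot, k))
  dat_k$is_k <- as.integer(dat_k$outcome == k)
  brm(is_k ~ x1 + x2, family = bernoulli(), data = dat_k,
      backend = "cmdstanr", file = paste0("fit_", k))   # cache each fit to disk
})

# Posterior-mean linear predictors for *all* observations, per level;
# the pivot category gets a column of zeros, then softmax row-wise.
eta   <- sapply(fits, function(f) colMeans(posterior_linpred(f, newdata = dat)))
eta   <- cbind(0, eta)
probs <- exp(eta) / rowSums(exp(eta))
colnames(probs) <- c(pivot, levels_other)
```

(Averaging the linear predictors before the softmax is a simplification for the sketch; to propagate posterior uncertainty you would apply the softmax draw by draw instead.)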