Apologies in advance: there is probably a relevant bit of documentation for this, but I have not been able to find a conclusive answer. Perhaps someone here has had the same experience or can point me to the relevant guidelines.
I am updating my meta-analysis with the Stan-based package baggr (GitHub - wwiecek/baggr: R package for Bayesian meta-analysis models, using Stan) and, for the first time in a couple of years, I am actually recoding the models. That means I spend a lot of time re-compiling the models within the project and then calling sampling(). My workflow is essentially to run
devtools::load_all(quiet = TRUE)
every time I update the .stan files, and then run unit tests. (I always use the quiet argument to suppress the flood of uninformative compiler warnings.)
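For concreteness, the loop I repeat many times a day looks roughly like this (the test filter is just an example of how I narrow things down):

```r
# edit the .stan files, then:
devtools::load_all(quiet = TRUE)   # recompiles the Stan models in-place
devtools::test(filter = "models")  # run the unit tests that actually call sampling()
```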
Re-compilation works very well, but I am noticing that when I run these models after load_all(), sampling within each chain is 10-20 times slower than calling the same functions from the installed version of the package. Chains that should take a couple of seconds can take a minute. I assume this is because load_all() does not use src/Makevars and therefore doesn't compile the code with optimisation?
Is the advice to compile the code with extra flags? If so, what is the cleanest way to do it? Or is there another explanation?
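To make the question concrete, this is the sort of thing I imagine the answer might involve; the flag names and the pkgbuild/withr calls below are pure guesswork on my part, just to illustrate what I mean by "extra flags":

```r
# Guess 1: compare the flags pkgbuild injects, since (I think) load_all() compiles via pkgbuild
pkgbuild::compiler_flags(debug = TRUE)   # what I suspect load_all() ends up using
pkgbuild::compiler_flags(debug = FALSE)  # what I'd expect for a normal install

# Guess 2: force optimisation flags just for the load_all() compilation step
withr::with_makevars(
  c(CXX14FLAGS = "-O3", CXX17FLAGS = "-O3"),
  devtools::load_all(quiet = TRUE)
)
```

If one of these is the recommended route (or a terrible idea), that would already answer my question.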
I am working on Windows 11 (ouch), on a fresh, clean installation with the latest versions of R and all packages. The package itself is a "clean" rstantools setup.