I know we’re adding a bunch of new libraries that require compiling a separate target — some of which we’re writing, some of which are external packages whose source we include in our tree. Things like MPI, GPU support, SUNDIALS, etc.
How do these things get added upstream? It seems a little crazy that PyStan and RStan also have to add separate build-system machinery every time we do this… Is this the case?
I think we want to be able to do this an arbitrary number of times with new object files or libraries (to reduce compile times, support threading, and follow generally good C++ design), but I’m worried about the upstream cost. Can someone outline the process and suggest how we can reduce the burden there, so that it’s easy to add newly built and linked libraries in Math / Stan / CmdStan?
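To make the question concrete, here is a hedged sketch (not the actual Stan makefiles; names like `libfoo` are invented) of what adding one more prebuilt library tends to look like in a GNU make setup, and where the downstream cost shows up:

```make
# Hypothetical sketch: compiling a new sub-library into a static
# archive. File names are illustrative, not real Stan targets.
FOO_OBJS = lib/foo/a.o lib/foo/b.o

# Compile each translation unit once...
lib/foo/%.o : lib/foo/%.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@

# ...archive the objects into a static library...
lib/libfoo.a : $(FOO_OBJS)
	ar rcs $@ $^

# ...and then every downstream consumer (CmdStan, RStan, PyStan) has
# to grow a matching link flag, which is where the upstream cost is.
LDLIBS += -Llib -lfoo
```

Each such library multiplies the number of build systems that have to learn about it, which is the burden the question is about.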
Tagging @Bob_Carpenter, @syclik, @bgoodri
This is currently the case. Model building at the interface level differs depending on how each interface interacts with its platform.
It’s one of the reasons CmdStan is the easiest to build: it has no additional build-system requirements. We could build it from the command line without make if we wanted.
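As a hedged illustration of that point, here is roughly what `make` does for a CmdStan model, spelled out by hand. The paths and flags are illustrative approximations, not CmdStan’s exact invocation; set `CMDSTAN` to a real checkout to try it.

```shell
# Sketch: the two steps behind building a CmdStan model without make.
CMDSTAN="${CMDSTAN:-}"
if [ -n "$CMDSTAN" ]; then
  # 1. Translate the Stan program to C++ with the stanc compiler.
  "$CMDSTAN/bin/stanc" bernoulli.stan --o=bernoulli.hpp
  # 2. Compile and link in one go against the bundled headers
  #    (Eigen, Boost, etc. live under the math library's lib/ dir).
  g++ -O3 -I "$CMDSTAN/src" -I "$CMDSTAN/stan/src" \
      -I "$CMDSTAN/stan/lib/stan_math" \
      -include bernoulli.hpp "$CMDSTAN/src/cmdstan/main.cpp" -o bernoulli
  msg="built bernoulli"
else
  msg="CMDSTAN not set; recipe shown in comments only"
fi
echo "$msg"
```

The point is that the whole recipe is a compiler invocation, which is exactly what the interface build systems cannot assume.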
I’ve seen how R and Python handle builds, and it’s not trivial. They both have easier ways to get C bindings, but templated C++ is tricky. There’s lots of room for improvement, I think. It’ll take some digging to get it done well.
Gotcha. It seems like it would be nice for them to be able to piggyback off CmdStan’s build system work somehow, but maybe that’s too much of a holy grail.
Also, R packages are strongly discouraged from using GNU make-isms (although rstanarm uses them). If CMake could spit out a BSD-make-compliant makefile, that could be helpful.
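For anyone unfamiliar with the distinction, here is an illustrative fragment (not taken from rstanarm) showing the kind of GNU make-isms that break under BSD make, next to portable equivalents:

```make
# GNU-only: $(wildcard), ifeq, and := are not in POSIX/BSD make.
SOURCES := $(wildcard *.cpp)
ifeq ($(OS),Windows_NT)
  EXTRA_LIBS = -lws2_32
endif

# Portable: list sources explicitly and use plain = assignment.
SOURCES = foo.cpp bar.cpp
```

Avoiding the first style (or having a generator emit the second) is what BSD-make compliance would require.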
That was indeed part of the original design of the makefiles, but it hasn’t held up. When it comes down to it, building for each platform is platform-specific. Especially in RStan: @bgoodri and @maverick have found creative build solutions to enable additional features, and those don’t translate directly to CmdStan or PyStan (yet).