Clang++ Ubuntu problems


When googling around for how people use make, it seems common and maybe even conventional to use CPPFLAGS in the way that we use them (e.g. this stackoverflow q/a).

-stdlib=libc++ is not unrelated to our projects; it was required to enable C++11 on Trusty across projects.

It seems like make is not really meant for this single-makefile-multiple-project task; all of its implicit targets and default macros assume you want the same flags across targets (with some weird C++ variants). I wonder if there is some other higher level option we could use - like separate makefiles for separate projects or libraries? Could we do that? For example, imagine giving CVODES, the gtest library, the math tests, the stan tests, and the stan compiler all their own makefiles? The math tests could call the CVODES and gtest makefiles recursively… Just brainstorming, curious what you think about this.
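A minimal sketch of what that composition could look like, with illustrative directory and target names (not our actual layout), where each sub-project owns its makefile and the top level just delegates:

```make
# Hypothetical top-level makefile; each sub-project keeps its own
# makefile and flags, and recursive make ties them together.
all: cvodes gtest math-tests

cvodes:
	$(MAKE) -C lib/cvodes

gtest:
	$(MAKE) -C lib/gtest

# The math tests depend on the CVODES and gtest builds and can
# still receive flags injected from the top level.
math-tests: cvodes gtest
	$(MAKE) -C test/math CXXFLAGS="$(CXXFLAGS)"

.PHONY: all cvodes gtest math-tests
```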


It sure is unrelated to a g++ build.

I agree that we need something more modular, and my proposal was to make the lower-level makefiles easier to include. I could also see doing something like you’re suggesting that would make the makefiles composable. For reproducibility in my own projects I recently adopted pydoit, which might be a good choice since we’re already using Python to run our tests; it integrates well with command-line tools, so it could defer, say, the CVODES build to a CVODES makefile while still making it possible to inject flags where needed. Short-term I would just fix up the makefiles, but long-term moving to something like pydoit sounds great.
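To make the deferring concrete, here is a hypothetical `dodo.py` sketch (task and path names are made up): pydoit tasks are plain functions returning dicts, and an action can simply shell out to an existing makefile, so a sub-project build stays in make while doit tracks the dependency graph.

```python
# Hypothetical dodo.py; "lib/cvodes" and "test/unit" are
# illustrative paths, not our actual layout.

def task_cvodes():
    """Defer the CVODES build to its own makefile."""
    return {
        "actions": ["make -C lib/cvodes"],
        "targets": ["lib/cvodes/libsundials_cvodes.a"],
    }

def task_math_tests():
    """Math unit tests; run only after the cvodes task."""
    return {
        "actions": ["make -C test/unit"],
        "task_dep": ["cvodes"],
    }
```

Running `doit math_tests` would then build CVODES first, and flags could be injected by templating the action strings.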


:P That’s not the issue here; it only got added to a g++ build on your local branch when you switched the ordering of the includes. I think maybe you were joking and we’re on the same page, though?

What do you think pydoit brings to the table over project-specific makefiles? Its website seems to indicate it’s not necessarily used for builds but for more general tasks. I generally prefer fewer frameworks when possible, though I’m open-minded. I am also extremely interested in Bazel, as it seems like it could substantially improve our CI and test times by giving us assurances about incremental builds and letting us rely on them. I’m not sure 1) how much work that’d be to set up or 2) how much of the supposed benefit we’d actually reap in, e.g., the weird generated probability-distribution tests that take 9+ hours.

I’m not sure how high the ROI is on figuring out which compiler flags are necessary for which projects and splitting them out, especially given the need to test across ~4 OSes and 3 compilers, but it could be worth it for readability and make it easier to split out into separate makefiles (or another multi-project solution that could have additional benefits).


Sort of joking, but it does poison the g++ build of CmdStan unless you set both CXX and CC: CmdStan uses CXX and sets it from CC in its defaults, but it uses CC when it reads the stan-dev/math/make/detect_cc makefile… that’s just stuff power users or new devs shouldn’t have to untangle to set a compiler.
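A simplified sketch of the failure mode as I understand it (not the actual CmdStan makefiles, just the shape of the interaction):

```make
# Defaults, CmdStan-style: CXX is derived from CC.
CC ?= clang++
CXX ?= $(CC)

# ...but the compiler-detection include keys off CC, not CXX:
#   include make/detect_cc
# So `make CXX=g++` still sees CC = clang++ in detect_cc and adds
# clang-only flags like -stdlib=libc++ to a g++ build.
# Only `make CC=g++ CXX=g++` sets both variables consistently.
```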


Python is used by a broader range of data-science type people who can probably read a simple makefile but would balk at the complexity we have. So since we have Python as a dev dependency already, we aren’t paying much of a cost in terms of dependencies (pydoit is a simple Python module) but we could potentially make our build process accessible to a much broader range of people who are interested in Stan development.

We do a bunch of stuff that’s not a standard build task. For example we have (in stan-dev/stan at least) models that are used to generate C++, compiled into executables, and then run for unit testing. Ideally we want to do more of that, for example when doing statistical testing on releases. Rather than stretching make, I would prefer to pick a tool that’s meant for those sorts of generalized tasks.
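That generate-compile-run chain maps naturally onto doit tasks. A hypothetical sketch (the command lines and paths are illustrative placeholders, not the real test harness):

```python
# Hypothetical dodo.py fragment: each stage declares its inputs
# (file_dep) and outputs (targets), so doit only reruns what changed.

def task_generate_cpp():
    """Generate C++ from a Stan model."""
    return {
        "actions": ["bin/stanc --o=test/model.hpp test/model.stan"],
        "file_dep": ["test/model.stan"],
        "targets": ["test/model.hpp"],
    }

def task_compile_model():
    """Compile the generated C++ into a test executable."""
    return {
        "actions": ["g++ -O2 -o test/model test/model_main.cpp"],
        "file_dep": ["test/model.hpp"],
        "targets": ["test/model"],
    }

def task_run_model():
    """Run the executable as a unit test."""
    return {
        "actions": ["./test/model"],
        "file_dep": ["test/model"],
    }
```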

Bazel seems like a good example here because one of the ways it can promise all the benefits it promises is that you must manually specify dependencies. I know that it has modules for things like protobuf that auto-generate C++, but those are serious projects done at least in part by people internal to Google, and I’m not sure we have the people-power to devote to writing modules with equivalent functionality for Stan files. Maybe we could; I’ve never gotten far enough in understanding Bazel to figure it out. I do understand Bazel could be used to speed up our builds, and I’ve noticed that it’s taken an increasing amount of time to get our builds through Travis/etc… (or even done on dev desktops, for that matter), so maybe it’s worth it. But it’s not a clear win over something more generic at the top level, since pydoit can easily defer to make for a sub-project (Bazel probably could too, but IDK).

This part I’m not sure about, especially when it comes to ROI. I would like to understand this stuff better, mostly because the scientific projects I do require builds of both data-processing pipelines and software for reproducibility.

I do think the splitting will be easier than making all the tests pass, so we can see how hard the latter is.