Building MPI

Hi!

One of the last issues with the current MPI prototype has been that we are forced to use dynamic linking. This makes it necessary to help the linker find the dynamic libraries (otherwise the executables simply won't start). So far I have required users to configure the linker themselves, but I now suggest that we hard-code the paths to the dynamic libraries instead (I found this out after spending more time than I ever intended reading the macOS docs on linking).

On macOS this works with the command install_name_tool, which I had never used before. See the code here if you are interested. On Linux things are supposed to be easier: I have added the options to the linker, but I have not yet been able to confirm that it indeed works on Linux.
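For illustration, here is roughly what the two mechanisms look like on the command line; the library names and paths below are placeholders, not the ones our build actually uses:

# macOS: rewrite the install name stored in the executable so it points at
# the absolute path of the built Boost MPI dylib (placeholder paths)
install_name_tool -change libboost_mpi.dylib /path/to/lib/libboost_mpi.dylib ./foo

# Linux: embed the library directory as an rpath at link time instead
g++ foo.o -L/path/to/lib -Wl,-rpath,/path/to/lib -lboost_mpi -lboost_serialization -o foo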

In short, this means we can deploy MPI in such a way that we build our own Boost MPI and hard-code the paths of the dynamic libraries into all of our binaries. This should work out of the box for most users and make life much simpler, since users won't need to worry about linker configuration. It also means that developers should now find it really easy to get the branch up and running (it's well worth it if you have a model which takes too long to fit).

Does that approach make sense? Any suggestions for how to change it?

Best,
Sebastian

I think it makes sense to try to build MPI into the StanHeaders shared object rather than the rstan shared object. The StanHeaders shared object is dynamically linked on Windows and statically linked everywhere else.


Then when the model gets built, it links to the StanHeaders shared object which has already been loaded into R’s memory

This works for CVODES, so I am going to assume it will work for MPI.

@bgoodri: Are you saying that CVODES is linked in as a dynamic library in R? If so, then we can certainly do the same for MPI.

BTW, isn't dynamic linking faster when building the final binary? If so, then we could consider having CmdStan also link in CVODES dynamically with hard-coded paths.

The main question of my post was whether people are fine with using hard-coded paths for dynamic linking of the MPI libs in CmdStan.

For Windows, StanHeaders builds a DLL from CVODES. Otherwise, CRAN recommends static libraries. Either way, the StanHeaders shared object is loaded into R's memory when rstan is loaded, and the user's stanmodel is linked against it after it is compiled. So, in a manner of speaking, the linking is always dynamic in the sense that it happens at runtime. Anyway, I don't think it is much of an issue for R, and if necessary we can programmatically find where StanHeaders is installed.

For CmdStan, I don’t have a strong opinion. Can we compile Boost MPI with -fPIC?

No idea what -fPIC does… so I can’t comment.

Yes we can.
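For what it's worth, Boost's own build system accepts extra compiler flags, so something along these lines should do it. This is only a sketch of a manual Boost build, not the recipe our stan-mpi target actually runs:

# from the Boost source directory (manual sketch)
./bootstrap.sh --with-libraries=mpi,serialization
echo "using mpi ;" >> project-config.jam
./b2 link=static cxxflags=-fPIC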

Hi!

So to continue the MPI build discussion, let me describe how it works at the moment:

  • By default no MPI whatsoever is enabled in Stan
  • If a user wants MPI, then they first have to run
make stan-mpi

This will build Boost Build and the MPI libraries (along with whatever else is needed); the binaries land in lib/boost/staged/lib (or similar, I forgot). On Mac, specific additional commands are executed after compiling the libraries to make dynamic linking with hard-coded paths possible, so that the user does not have to configure their dynamic link loader in any way. On Linux, dynamic linking with hard-coded paths already works by switching on specific linker options whenever we build a Stan program. We should probably add an if statement to the above make command which triggers on Windows and says "Sorry! No MPI on Windows".

  • The above make command finishes with a message which instructs the user to add the statement include make/mpi to their make/local file. MPI is then enabled for Stan (a minimal sketch of this configuration follows this list).

  • The make/mpi file configures the build system to pick up the MPI libraries and also switches on the -DSTAN_HAS_MPI compiler flag. Whenever this flag is given, the MPI code is enabled and the map_rect implementation switches to the MPI implementation.

  • All the user then needs to do is compile their Stan programs as usual and launch them with mpirun. So a model built from foo.stan, which was previously started as ./foo ..., becomes mpirun -np #CPUs ./foo… although details depend on the system.

  • In case the user has compiled a Stan model with MPI but starts it without mpirun, the binary will still work as is; MPI then simply falls back to using a single core.
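As a minimal sketch, and keeping in mind that the exact contents of make/mpi may differ, the user-facing configuration boils down to something like this:

# in make/local (in the location the stan-mpi output points you to):
include make/mpi

# make/mpi then, among other things, switches on the MPI code path by
# defining the compiler flag:
#   CXXFLAGS += -DSTAN_HAS_MPI
# and adds the include/link settings for the Boost MPI and
# serialization libraries.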

Does that logic make sense to roll out?

Best,
Sebastian


Seems reasonable to me. Would people on clusters with special versions of MPI be able to use them? It would be nice if we made that relatively easy; it seems like it could mean just skipping the Boost MPI build and not adding that directory to LD_LIBRARY_PATH, or however we were going to get the system to see it.

I think the approach should work for everyone. The Boost build system is supposed to be very clever at figuring out what is needed (so a simple MacPorts MPI installation on a desktop or some odd cluster config should both work). Note that I am proposing to hard-code the paths of the dynamic libraries precisely so that we avoid the need to mess with LD_LIBRARY_PATH.

And if a user has such a special system that Boost Build fails, or they do not want to use it, then they can always just put whatever they need into make/local. All they need to make sure is that (a) the compiler finds a working Boost MPI and (b) the -DSTAN_HAS_MPI flag is defined. That's it.
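Roughly, such a hand-rolled make/local could look like this; the paths are placeholders and the variable names are an assumption about our makefiles, not the exact ones:

# make/local sketch for a system-provided MPI and Boost installation
CC = mpic++                                      # MPI compiler wrapper picks the base compiler
CXXFLAGS += -DSTAN_HAS_MPI -I/opt/boost/include  # enable the MPI map_rect code path
LDFLAGS += -L/opt/boost/lib -lboost_mpi -lboost_serialization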

Hi, I've recently had a go at building the concept-mpi-2 branch on Ubuntu 16.10 and hit an error after running make build.

It's a long error message (attached as a text file) but appears to originate in stansummary.cpp:

bin/cmdstan/stansummary.o: In function `boost::archive::detail::oserializer<boost::mpi::detail::mpi_datatype_oarchive, stan::math::mpi_stop_worker>::~oserializer()':
stansummary.cpp:(.text._ZN5boost7archive6detail11oserializerINS_3mpi6detail21mpi_datatype_oarchiveEN4stan4math15mpi_stop_workerEED2Ev[_ZN5boost7archive6detail11oserializerINS_3mpi6detail21mpi_datatype_oarchiveEN4stan4math15mpi_stop_workerEED5Ev]+0x8): undefined reference to `boost::archive::detail::basic_oserializer::~basic_oserializer()'

Can you suggest a possible cause?

For reference, my process has been:

git clone --recursive https://github.com/stan-dev/cmdstan
git checkout feature/proto-mpi
cd stan/lib/stan_math
git checkout feature/concept-mpi-2
git merge feature/issue-736-boost-mpi-sources
cd ~/cmdstan
# Point $(MATH) in make/local to cmdstan/stan/lib/stan_math/ 
make stan-mpi
make build

make stan-mpi went without a hitch and make/local on the cmdstan proto-mpi branch already has include $(MATH)make/mpi.

I gave building the MPI branch a go late last year (with little success) and have to say this has been much smoother so far. Thanks for all your hard work!

mpi_build_error.txt (229.3 KB)

The linker does not find the built Boost serialization library. You need to do:

cd $(MATH)
make stan-mpi
# now follow the printed instructions which tell you to modify make/local (the one living in stan-math)
cd $(CMDSTAN)
make build

Then it should work. I have not yet tested the hard-coded dynamic library paths on Linux, though, but they should work.

Thanks Sebastian.

The instructions of make stan-mpi say to add include make/mpi to make/local.

make/local in $(MATH) and $(CMDSTAN) both have include $(MATH)make/mpi and the error persists.

@wds15, thanks for leading me here!

Since we’re at the math library, maybe we can just make the target the library file name? Is there anything to prevent us from doing that?

We need the Boost MPI and Boost serialization libraries, so a single target name does not suffice here… but maybe you could call the target boost-mpi and have it mean that Boost MPI and its dependencies are to be built.

Hmm… sounds like it does not work yet on Linux.

You can set up your LD_LIBRARY_PATH variable to include $(MATH)/lib/boost/staged/libs (or similar). I need to follow up on this for Linux. It should work.
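As a stopgap, something like the following should do; the exact directory name may differ, as I said:

# replace <MATH> with the path to your stan-math checkout
export LD_LIBRARY_PATH=<MATH>/lib/boost/staged/libs:$LD_LIBRARY_PATH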

In the math library, we just need the dependencies as targets. If there are two, then that’s what we should have.

The issue is that we defer the building of Boost MPI and Boost serialization to the Boost build system. So technically we should have a single target in stan-math, because Boost Build builds both libs in a single go. That was the idea behind stan-mpi.

There’s nothing wrong with that.

Now that we're talking about the build, I'm suggesting we use make the way it's designed: instead of a phony target, call out the two targets explicitly (so no stan-mpi target).
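To make the suggestion concrete, here is a sketch of what I mean; the file names, paths, and test target are made up for illustration:

# hypothetical explicit targets for the two built libraries
BOOST_MPI_LIB = lib/boost/staged/lib/libboost_mpi.a
BOOST_SER_LIB = lib/boost/staged/lib/libboost_serialization.a

$(BOOST_MPI_LIB) $(BOOST_SER_LIB):
	# invoke Boost Build here, as the stan-mpi target does today

# anything that needs MPI then depends on the two libraries directly
test/unit/math/some_mpi_test: $(BOOST_MPI_LIB) $(BOOST_SER_LIB)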

Maybe help me understand how it actually gets built?

If I have a test that needs MPI:

  • do I need to include additional headers? (I think no because we’re already including the boost library in the compilation step using -I)
  • do I need to link against 2 libraries?
  • are there any other compiler flags that I need to include?

Is there anything else? That’s what I think needs to happen, so for those tests, we should just add additional dependencies to those two libraries. Then we should add two targets for those two libraries that build those libraries.

I see where you are coming from, but it is not that easy. So in steps:

  • When using MPI it is strongly recommended to use the mpic++ compiler wrapper instead of the usual g++ / clang++ compiler. The effect of using mpic++ is that it automatically picks the right base compiler and adds the right -I and -L flags plus the libraries needed to link against the MPI installation (which is not the Boost MPI).

  • Alternatively to using mpic++, you can usually execute mpic++ --showme, which tells you which flags are being set. OpenMPI recommends using the mpic++ command, though.
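For example (OpenMPI syntax; MPICH uses -show instead, and the exact output depends on the local installation):

# print the full command line mpic++ would use, without compiling anything
mpic++ --showme
# or only the compile / link flags
mpic++ --showme:compile
mpic++ --showme:link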

Your other questions:

Besides my points above, no, because the boost MPI is already in scope.

You need to link against the MPI installation and the boost MPI + boost serialization library, yes.

None that I am aware of.
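Putting that together, compiling and linking an MPI-enabled test would look roughly like this; the file names are placeholders, the usual Stan include flags are omitted, and mpic++ supplies the MPI installation's own -I/-L/libraries by itself:

# sketch: compile and link an MPI-enabled test in one go (placeholder names)
mpic++ -DSTAN_HAS_MPI some_mpi_test.cpp \
    -Llib/boost/staged/lib -lboost_mpi -lboost_serialization \
    -o some_mpi_test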

Hmm… we really want the Boost build system to build the MPI libraries. The Boost build system will help us deal with the different MPI installations which we need to support. The MPI installation itself (like OpenMPI, MPICH, or Intel MPI) is expected to already be installed on the system.