Building MPI

Hi!

So to continue the MPI build discussion, let me describe how it works at the moment:

  • By default no MPI whatsoever is enabled in Stan
  • If a user wants MPI, they first have to call
make stan-mpi

This builds Boost.Build and the Boost MPI libraries (along with whatever else is needed); the binaries land in lib/boost/staged/lib (or similar, I forgot). On Mac, additional commands are executed after compiling the libraries which make dynamic linking work via hard-coded paths, so the user does not have to configure their dynamic link loader in any way. On Linux, the hard-coded dynamic linking already works by switching on specific linker options whenever we build a Stan program (see the sketch just below). We should probably add an if statement to the above make command which triggers on Windows and says “Sorry! No MPI on Windows”.
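
For concreteness, this is the kind of linker setup I mean. It is just a sketch: the exact flags, paths, and library names in the actual makefiles may differ, and $(STAN_HOME) is only a placeholder for the Stan directory.

    # Linux: bake the library search path into the binary at link time, so the
    # dynamic loader finds libboost_mpi.so without any LD_LIBRARY_PATH setup
    LDFLAGS += -Wl,-rpath,$(STAN_HOME)/lib/boost/staged/lib

    # Mac (run as part of the build): give the built dylib an absolute install
    # name, so linked binaries resolve it without any DYLD_LIBRARY_PATH setup
    install_name_tool -id $(STAN_HOME)/lib/boost/staged/lib/libboost_mpi.dylib \
                      $(STAN_HOME)/lib/boost/staged/lib/libboost_mpi.dylib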

  • The above make command finishes with a message instructing the user to put the line include make/mpi into their make/local file. With that, MPI is enabled for Stan.

  • The make/mpi file configures the build system to pick up the MPI libraries and also switches on the -DSTAN_HAS_MPI compiler flag. Whenever that flag is set, the MPI code is enabled and the map_rect implementation switches to the MPI implementation (a sketch of such a make/mpi is below, after the list).

  • All the user then needs to do is compile their Stan programs as usual and launch them with mpirun. Starting the model built from foo.stan changes from ./foo ... to mpirun -np #CPUs ./foo ..., although the details depend on the system (see the example after the list).

  • If the user has compiled a Stan model with MPI but starts it without mpirun, the binary still works as is; MPI then simply falls back to using a single core.
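
To make the configuration step concrete, here is roughly what I have in mind for make/mpi. This is only a sketch: the variable names, the mpicxx wrapper, and the library list are illustrative, the actual file may look different (and on top of this come the platform-specific linker options from above).

    # user's make/local
    include make/mpi

    # make/mpi (sketch): enable the MPI code path and link against the staged
    # Boost MPI libraries (Boost.MPI also needs Boost.Serialization)
    CXX       = mpicxx              # or set the MPI include/link flags by hand
    CXXFLAGS += -DSTAN_HAS_MPI
    LDFLAGS  += -L$(STAN_HOME)/lib/boost/staged/lib
    LIBS     += -lboost_mpi -lboost_serialization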
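
And the user-facing workflow would then look like this (again just an illustration; the exact mpirun invocation depends on the MPI installation and the system):

    # build the model from foo.stan as usual; make/local pulls in make/mpi
    make foo

    # launch with MPI across several cores (-np gives the number of CPUs)
    mpirun -np 4 ./foo ...

    # or launch without mpirun: the same binary runs on a single core
    ./foo ...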

Does that logic make sense to roll out?

Best,
Sebastian
