So we are assuming sequential processing on the master side, no load-balancing concern (or we leave that to users), and the need for a slave-side listening state. In that case, would a simple MPI skeleton like the one below be easier to reason about? (I'm using plain MPI rather than boost::mpi, but the idea is the same, and I'm also skipping data scattering as well as serialization & deserialization.)
Also, do we need an mpi namespace within math?
#include <mpi.h>

#define MPI_WORK_TAG 1
#define MPI_EXIT_TAG 2

void master();
void slave();

int main(int argc, char** argv) {
  int rank;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  (rank == 0) ? master() : slave();
  MPI_Finalize();
  return 0;
}

void master() {
  int njob, rank, data;
  double result;
  MPI_Status status;
  // slaves are ranks 1 .. njob-1
  MPI_Comm_size(MPI_COMM_WORLD, &njob);
  { // first job: send work to every slave, then collect all results
    for (rank = 1; rank < njob; ++rank) {
      data = 1; // first job
      MPI_Send(&data, 1, MPI_INT, rank, MPI_WORK_TAG, MPI_COMM_WORLD);
    }
    for (rank = 1; rank < njob; ++rank) {
      MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE,
               MPI_ANY_TAG, MPI_COMM_WORLD, &status);
    }
  }
  { // second job, same pattern
    for (rank = 1; rank < njob; ++rank) {
      data = 2; // second job
      MPI_Send(&data, 1, MPI_INT, rank, MPI_WORK_TAG, MPI_COMM_WORLD);
    }
    for (rank = 1; rank < njob; ++rank) {
      MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE,
               MPI_ANY_TAG, MPI_COMM_WORLD, &status);
    }
  }
  {
    // next job
  }
  // release every slave from its listening loop
  for (rank = 1; rank < njob; ++rank) {
    MPI_Send(NULL, 0, MPI_INT, rank, MPI_EXIT_TAG, MPI_COMM_WORLD);
  }
}

void slave() {
  double result;
  int data;
  MPI_Status status;
  while (1) {
    MPI_Recv(&data, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
    if (status.MPI_TAG == MPI_EXIT_TAG) break;
    result = double(data) * double(data); // hard work
    MPI_Send(&result, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
  }
}
The intention is to keep the MPI calls at the same level, and to use the tag to flag the listening state. Will this design achieve the goal? If not, what am I missing here?
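One more thought: if load balancing ever does become our concern rather than the users', the same tag scheme extends to on-demand dispatch without touching slave() at all. The master hands the next job to whichever slave just reported a result (via status.MPI_SOURCE) instead of sending to everyone in lockstep. A minimal sketch, assuming the same WORK/EXIT tags as above and a hypothetical count of njobs independent jobs (master_on_demand and njobs are names I made up for illustration):

```cpp
#include <mpi.h>

#define MPI_WORK_TAG 1
#define MPI_EXIT_TAG 2

// On-demand master: hand out jobs one at a time as slaves finish,
// instead of in synchronized rounds. Pairs with the same slave() loop.
void master_on_demand(int njobs) {
  int nproc, rank, next_job = 1;
  double result;
  MPI_Status status;
  MPI_Comm_size(MPI_COMM_WORLD, &nproc);
  // Seed every slave with one job (or fewer, if njobs < number of slaves).
  for (rank = 1; rank < nproc && next_job <= njobs; ++rank, ++next_job)
    MPI_Send(&next_job, 1, MPI_INT, rank, MPI_WORK_TAG, MPI_COMM_WORLD);
  // Whichever slave answers first gets the next job.
  for (int done = 0; done < njobs; ++done) {
    MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE,
             MPI_ANY_TAG, MPI_COMM_WORLD, &status);
    if (next_job <= njobs) {
      MPI_Send(&next_job, 1, MPI_INT, status.MPI_SOURCE,
               MPI_WORK_TAG, MPI_COMM_WORLD);
      ++next_job;
    }
  }
  // Release all slaves from their listening state.
  for (rank = 1; rank < nproc; ++rank)
    MPI_Send(NULL, 0, MPI_INT, rank, MPI_EXIT_TAG, MPI_COMM_WORLD);
}
```

The listening-state design stays identical; only the master's send/recv ordering changes.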
EDIT: typo