
Coupling MPI codes using MUSCLE

Example Application

As an example "Hello World" application that shows coupling MPI codes via MUSCLE we will use an extremely simplistic and naive simulation of the Large Hadron Collider (LHC) experiment. The application would model only two accelerators rings:

  • Proton Synchrotron Booster (PSB) - the small one,
  • Large Hadron Collider (LHC) - the big one.

The aforementioned accelerators are modeled as separate submodels (MUSCLE kernels) and are implemented using the "MPI Ring" code (a minimal sketch of the ring logic follows the list). In our quasi-simulation:

  • a single proton is inserted into the PSB at an energy of PSB:InitialEnergy,
  • it is accelerated (by PSB:DeltaEnergy) whenever it passes a ring node,
  • until it reaches an energy of PSB:MaxEnergy,
  • then the proton is transferred from the PSB into the LHC,
  • where it is accelerated further until its energy reaches LHC:MaxEnergy (and the simulation stops).
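
For illustration, the following is a minimal sketch of how such a ring can be expressed in MPI. The function, tag, and variable names here are hypothetical (they do not come from the example sources), and the sketch assumes at least two MPI processes per ring:

    #include <mpi.h>

    #define TAG_RUN  0
    #define TAG_DONE 1

    /* One accelerator ring: the proton "hops" from MPI process to MPI
       process and gains deltaE at every node until it reaches maxE. */
    static double Ring_Run(double energy, double deltaE, double maxE)
    {
            int rank, size, next, prev;
            MPI_Status status;

            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);
            next = (rank + 1) % size;
            prev = (rank + size - 1) % size;

            if (rank == 0) /* inject the proton into the ring */
                    MPI_Send(&energy, 1, MPI_DOUBLE, next, TAG_RUN, MPI_COMM_WORLD);

            for (;;) {
                    MPI_Recv(&energy, 1, MPI_DOUBLE, prev, MPI_ANY_TAG,
                             MPI_COMM_WORLD, &status);
                    if (status.MPI_TAG == TAG_DONE) { /* pass the stop token on and quit */
                            MPI_Send(&energy, 1, MPI_DOUBLE, next, TAG_DONE, MPI_COMM_WORLD);
                            break;
                    }
                    energy += deltaE;                 /* the proton passes this node */
                    if (energy >= maxE) {             /* target energy reached: stop the ring */
                            MPI_Send(&energy, 1, MPI_DOUBLE, next, TAG_DONE, MPI_COMM_WORLD);
                            MPI_Recv(&energy, 1, MPI_DOUBLE, prev, TAG_DONE,
                                     MPI_COMM_WORLD, &status);
                            break;
                    }
                    MPI_Send(&energy, 1, MPI_DOUBLE, next, TAG_RUN, MPI_COMM_WORLD);
            }
            return energy; /* every rank ends up holding the final energy */
    }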

MPI Kernels as dynamic libraries

This approach follows the original MUSCLE philosophy, which relies on the Java Native Interface / Java Native Access mechanisms to integrate C/C++ codes as MUSCLE kernels.

A new method, public void executeDirectly(), is available in the CaController class. On processes with non-zero rank, only this method is called instead of the normal MUSCLE routines; the process with rank 0 is started in the usual way. Portals cannot be attached to slave processes (i.e. processes with non-zero rank). The default implementation of executeDirectly() simply calls execute().
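
In practice this means every rank ends up calling into the same native code: execute() does so on rank 0 and executeDirectly() on the remaining ranks. Below is a hedged sketch of such a JNI entry point; the Java class and function names are hypothetical:

    #include <jni.h>
    #include <mpi.h>

    /* Hypothetical JNI entry point, reached from execute() on rank 0 and
       from executeDirectly() on all other ranks. */
    JNIEXPORT void JNICALL
    Java_examples_mpiring_PSB_runRing(JNIEnv *env, jobject self)
    {
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            if (rank == 0) {
                    /* Only rank 0 may use MUSCLE portals and read parameters. */
            }
            /* All ranks take part in the MPI part of the kernel. */
    }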

Compilation

Running

Limitations

  • Any MUSCLE API routine may ONLY be called by the rank 0 process. If you need any parameters to be available to all MPI processes, use the MPI_Bcast function, e.g. (as in the provided example; a usage sketch follows this list):
    void Ring_Broadcast_Params(double *deltaE, double *maxE)
    {
            int rc; /* keep the MPI calls outside assert() so they survive NDEBUG builds */
            rc = MPI_Bcast(deltaE, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
            assert(rc == MPI_SUCCESS);
            rc = MPI_Bcast(maxE, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
            assert(rc == MPI_SUCCESS);
    }
    
  • A separate Java Virtual Machine is started for every MPI process, which significantly increases the memory footprint of the whole application.
  • Many MPI implementations exploit low-level optimization techniques (like Direct Memory Access) that may crash the Java Virtual Machine.
  • Using MPI to start many Java Virtual Machines, each of which loads a native dynamic-link library that later calls MPI routines, is something few people do. In case of problems you might not find any help (you have been warned! ;-).
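
As referenced in the first bullet above, a minimal usage sketch of Ring_Broadcast_Params(): rank 0 obtains the parameter values from MUSCLE and then shares them with all ranks. The property getter below is an assumed, hypothetical helper (consult the MUSCLE API for the actual call):

    /* Usage sketch: get_property_as_double() is a hypothetical helper,
       not a documented MUSCLE call. */
    void Ring_Init_Params(double *deltaE, double *maxE)
    {
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            if (rank == 0) {
                    *deltaE = get_property_as_double("DeltaEnergy"); /* hypothetical */
                    *maxE   = get_property_as_double("MaxEnergy");   /* hypothetical */
            }
            Ring_Broadcast_Params(deltaE, maxE); /* now every rank has both values */
    }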

MPI Kernels as standalone executables

Compilation

Running

Limitations

  • Any MUSCLE API routine may ONLY be called by the rank 0 process. If you need any parameters to be available to all MPI processes, use the MPI_Bcast function, e.g. (as in the provided example; a skeleton of a standalone kernel executable follows this list):
    void Ring_Broadcast_Params(double *deltaE, double *maxE)
    {
            int rc; /* keep the MPI calls outside assert() so they survive NDEBUG builds */
            rc = MPI_Bcast(deltaE, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
            assert(rc == MPI_SUCCESS);
            rc = MPI_Bcast(maxE, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
            assert(rc == MPI_SUCCESS);
    }
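
As referenced above, a hedged skeleton of a standalone MPI kernel executable. The MUSCLE connection and parameter calls are omitted, since their exact form depends on the MUSCLE version in use:

    #include <mpi.h>

    void Ring_Broadcast_Params(double *deltaE, double *maxE); /* as defined above */

    int main(int argc, char **argv)
    {
            int rank;
            double deltaE = 0.0, maxE = 0.0;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            if (rank == 0) {
                    /* Rank 0 only: connect to MUSCLE and read the ring
                       parameters (calls omitted; see the MUSCLE docs). */
            }
            Ring_Broadcast_Params(&deltaE, &maxE); /* share them with all ranks */

            /* ... run the ring computation on all ranks ... */

            MPI_Finalize();
            return 0;
    }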
    
