Coupling MPI codes using MUSCLE
Example Application
MPI Kernels as dynamic libraries
This approach follows the original MUSCLE philosophy, which relies on the Java Native Interface/Access (JNI/JNA) mechanism to integrate C/C++ code into the kernels.
A new method, public void executeDirectly(), is available in the CaController class. On processes with non-zero rank only this method is called, instead of the normal MUSCLE routines; the process with rank 0 is started in the usual way. Portals cannot be attached to slave processes (i.e. to processes with non-zero rank). The default implementation of executeDirectly() simply calls execute().
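As a rough sketch, a kernel for this approach could override executeDirectly() so that every MPI rank ends up in the same native routine, while only rank 0 goes through the regular MUSCLE life cycle. The RingMPIKernel class, the runRing() native method, the ring_mpi library name and the import shown below are hypothetical; the exact class and method signatures may differ between MUSCLE versions.

import muscle.core.kernel.CaController; // package/class name assumed, adjust to your installation

public class RingMPIKernel extends CaController {

    static {
        // Load the native dynamic library containing the MPI code
        // (hypothetical library name).
        System.loadLibrary("ring_mpi");
    }

    // Native entry point implemented in the C/C++ library and entered by
    // every MPI rank (hypothetical method).
    private native void runRing();

    @Override
    public void execute() {
        // Rank 0 only: MUSCLE portals and other API calls are allowed here,
        // before and after entering the native MPI code.
        runRing();
    }

    @Override
    public void executeDirectly() {
        // Ranks != 0: called instead of the normal MUSCLE routines; no
        // portals may be attached here. The default implementation would
        // just call execute().
        runRing();
    }
}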
Compilation
Running
Limitations
- Any MUSCLE API routine may be called ONLY by the rank 0 process. If you need any parameters to be available to all MPI processes, use the MPI_Bcast function, e.g. (as in the provided example):
void Ring_Broadcast_Params(double *deltaE, double *maxE)
{
   assert( MPI_Bcast(deltaE, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD) == MPI_SUCCESS);
   assert( MPI_Bcast(maxE, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD) == MPI_SUCCESS);
}
- A separate Java Virtual Machine is started for every MPI process, which significantly increases the memory footprint of the whole application.
- Many MPI implementations exploit low-level optimization techniques (like Direct Memory Access) that may cause the Java Virtual Machine to crash.
- Using MPI to start many Java Virtual Machines, each of which loads a native dynamic-link library that later calls MPI routines, is something that very few people do. In case of problems you might not find any help (you have been warned! ;-).
MPI Kernels as standalone executables
Compilation
Running
Limitations
- Any MUSCLE API routine may be called ONLY by the rank 0 process. If you need any parameters to be available to all MPI processes, use the MPI_Bcast function, e.g. (as in the provided example):
void Ring_Broadcast_Params(double *deltaE, double *maxE)
{
   assert( MPI_Bcast(deltaE, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD) == MPI_SUCCESS);
   assert( MPI_Bcast(maxE, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD) == MPI_SUCCESS);
}