Changes between Version 20 and Version 21 of C++ API
Timestamp: 09/12/13 12:11:32
C++ API
=== MUSCLE and MPI ===

When using MPI, MUSCLE calls only perform their operations in rank 0; calls from other ranks are ignored. This means that the data should be gathered with MPI before a `MUSCLE_Send` and broadcast with MPI after a `MUSCLE_Receive`. The functions `MUSCLE_Kernel_Name`, `MUSCLE_Get_Property`, and `MUSCLE_Will_Stop` cannot give a meaningful result in ranks other than rank 0, so calling them from other ranks results in undefined behavior and should be avoided. If needed, their result can be propagated to the other ranks with MPI in your code.

The `MUSCLE_Barrier` set of functions eases the integration of MUSCLE with MPI. Most MPI functions (including barrier, broadcast and gather) use a polling mechanism while they wait for communication to happen. This uses all the available CPU power, but it somewhat reduces the latency of the operation. With MUSCLE, however, submodels other than the MPI submodel often need to do some computing, and a polling MPI operation slows those other submodels down immensely. Therefore, MUSCLE has its own barrier operation, which has a higher latency than `MPI_Barrier` but does not use any CPU resources. Since only rank 0 of the process ever receives data from MUSCLE, and a receive must wait for another submodel to send the message, a receive is a good point to call a barrier. If multiple receives follow each other, the barrier only needs to be called after the last one.

…
}}}
This paradigm is used in `src/cpp/examples/simplempi/sender.c`.
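
As an illustration of the gather-before-send and broadcast-after-receive pattern, below is a minimal sketch of one coupling step in an MPI submodel. It is not the actual `simplempi` example: the port names (`state_in`, `state_out`), the chunk size, and the exact signatures of the MUSCLE C calls (`MUSCLE_Init`, `MUSCLE_Receive`, `MUSCLE_Send`, `MUSCLE_Will_Stop`, `MUSCLE_Finalize`) and the `MUSCLE_DOUBLE` datatype constant are assumptions here; check the MUSCLE headers and `src/cpp/examples/simplempi/` for the real ones. The MPI calls are standard.

{{{
#include <stdlib.h>
#include <mpi.h>
#include <cmuscle.h>   /* assumed header name for the MUSCLE C API */

#define CHUNK 16       /* doubles handled per rank; illustration only */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    MUSCLE_Init(&argc, &argv);   /* like all MUSCLE calls, this only acts in rank 0 */

    size_t total = (size_t)CHUNK * nprocs;
    /* Full state buffer; its contents are only meaningful in rank 0
       until they are scattered over the ranks. */
    double *state = (double *)malloc(total * sizeof(double));
    double local[CHUNK];
    int will_stop = 0;

    while (!will_stop) {
        if (rank == 0) {
            /* Only rank 0 receives from MUSCLE (assumed signature:
               MUSCLE_Receive(port, buffer, &count, datatype)). The other
               ranks would normally wait in a MUSCLE barrier here instead
               of polling inside MPI_Scatter; see the text above. */
            MUSCLE_Receive("state_in", state, &total, MUSCLE_DOUBLE);
        }

        /* Distribute the received data from rank 0 over all ranks. */
        MPI_Scatter(state, CHUNK, MPI_DOUBLE, local, CHUNK, MPI_DOUBLE,
                    0, MPI_COMM_WORLD);

        /* Each rank does its share of the work (placeholder computation). */
        for (int i = 0; i < CHUNK; i++)
            local[i] += 1.0;

        /* Collect the results in rank 0 before handing them to MUSCLE. */
        MPI_Gather(local, CHUNK, MPI_DOUBLE, state, CHUNK, MPI_DOUBLE,
                   0, MPI_COMM_WORLD);

        if (rank == 0) {
            MUSCLE_Send("state_out", state, total, MUSCLE_DOUBLE);
            will_stop = MUSCLE_Will_Stop();   /* meaningful in rank 0 only */
        }
        /* Propagate the stopping condition to the other ranks. */
        MPI_Bcast(&will_stop, 1, MPI_INT, 0, MPI_COMM_WORLD);
    }

    free(state);
    MUSCLE_Finalize();
    MPI_Finalize();
    return 0;
}
}}}

In a real submodel a `MUSCLE_Barrier` call would normally be added around the receive so that the non-root ranks wait without using CPU, following the barrier paradigm referenced above; the barrier API itself is not sketched here and is best copied from the `simplempi` example code.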