The next section describes what steps are needed to transform the existing coupling done via shared memory into one exploiting the MUSCLE framework. We are motivated by:
 * Both MHD and MC can run concurrently, exchanging data at the beginning of each time step. This brings the potential of introducing a new level of parallelism, as in the current code the MHD and MC simulations are called one after another, sequentially.
 * Previous tests show that the MC code is much more resource demanding, while having potential for greater scalability than the MHD part. Using MUSCLE it is possible to assign a different number of resources to each kernel (e.g. 12 cores for the MHD code, 120 cores for the MC one).
 * The MC code is in the process of GPU-enabling; one may want to run the two modules on different heterogeneous resources (e.g. MC on a GPU cluster, the base MHD code on an Intel Nehalem cluster).
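The concurrent, per-time-step exchange described in the first point can be sketched with two kernels running in parallel and connected by a one-way channel. This is a minimal illustration of the coupling pattern only, not the MUSCLE API; all names (`run_mhd`, `run_mc`, `N_STEPS`) and the placeholder arithmetic are assumptions for the sake of the example.

```python
# Minimal sketch of the intended pattern, NOT the actual MUSCLE API:
# two kernels run concurrently and exchange data at the start of each step.
import threading
import queue

N_STEPS = 3                   # illustrative number of coupled time steps
to_mc = queue.Queue()         # stands in for a one-way conduit MHD -> MC
results = queue.Queue()

def run_mhd():
    state = 0.0
    for _ in range(N_STEPS):
        to_mc.put(state)      # hand the current MHD field to the MC kernel
        state += 1.0          # advance the MHD solution (placeholder)

def run_mc():
    total = 0.0
    for _ in range(N_STEPS):
        total += to_mc.get()  # MC consumes MHD data at the start of each step
    results.put(total)        # placeholder Monte Carlo result

mhd = threading.Thread(target=run_mhd)
mc = threading.Thread(target=run_mc)
mhd.start(); mc.start()       # both kernels execute concurrently
mhd.join(); mc.join()
coupled_total = results.get()
print(coupled_total)          # 0.0 + 1.0 + 2.0 = 3.0
```

In contrast to the current sequential calls, the MC kernel here starts its work for step n as soon as the step-n data arrives, while the MHD kernel already advances step n+1; a coupling framework additionally lets the two sides run on different resource pools.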
| 15 | |
In the end we will try to verify the above hypotheses in production runs.