Currently, submitting a co-allocated MUSCLE application is only possible using the XML !JobProfile description (compare [[QCG-SimpleClient]]). Besides the different job description format, you have to suffix the `qcg-sub` command with the `QCG` keyword:
{{{
$ qcg-sub muscle.xml QCG
}}}
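Progress of the submitted job can then be tracked with the usual QCG-SimpleClient commands (see [[QCG-SimpleClient]]); `<jobId>` below stands for the identifier printed by `qcg-sub`:
{{{
$ qcg-list
$ qcg-info <jobId>
}}}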
== Example (Fusion - Transport Turbulence Equilibrium) ==
* Install your application on every cluster you wish to use
* Register it on every cluster using the [http://apps.man.poznan.pl/trac/qcg-computing/wiki/ComunityModules QCG Community Modules (QCE)] mechanism:
{{{
qcg-module-create -g plggmuscle Fusion/Turbulence
}}}
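Afterwards you can check that the module is visible using the standard Environment Modules command (this assumes the community modules directory is already on your MODULEPATH):
{{{
$ module avail Fusion
}}}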
The module must bear the same name on every cluster. Inside the module you can set or prepend any environment variable and declare dependencies on other modules, e.g.:
{{{
#%Module 1.0

proc ModulesHelp { } {
    puts stderr "\tName: Fusion/Turbulence"
    puts stderr "\tVersion: 0.1"
    puts stderr "\tMaintainer: plgmamonski"
}

module-whatis "Fusion/Turbulence, 0.1"

# load all needed modules
module add muscle2

# set a Tcl variable
set FUSION_KERNELS "/home/plgrid-groups/plggmuscle/fusionkernels"

# export it as an environment variable
setenv FUSION_KERNELS $FUSION_KERNELS

# add the native kernels to the PATH
prepend-path PATH ${FUSION_KERNELS}/bin

set curMod [module-info name]
if { [module-info mode load] } {
    puts stderr "$curMod load complete."
}
if { [module-info mode remove] } {
    puts stderr "$curMod unload complete."
}
}}}
In the module you can also set two environment variables interpreted by the MUSCLE framework:
* MUSCLE_CLASSPATH
* MUSCLE_LIBPATH
which define the Java classpath and the search path for dynamically loadable libraries, respectively. Thanks to this mechanism you can use a single, abstract CxA file that does not contain any site-specific paths. You can also load the module in an interactive QCG job:
{{{
bash-4.1$ module load Fusion/Turbulence
openmpi/openmpi-open64_4.5.2-1.4.5-2 load complete.
Fusion/Turbulence load complete.
bash-4.1$ muscle2 -ma -c $FUSION_KERNELS/cxa/testSimpleModelsB_shared.cxa.rb
Running both MUSCLE2 Simulation Manager and the Simulation
=== Running MUSCLE2 Simulation Manager ===
}}}
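For illustration, the modulefile shown above could extend both MUSCLE variables like this (a sketch only; the jar and library locations are hypothetical):
{{{
# add site-local Java kernels to the MUSCLE classpath (illustrative path)
append-path MUSCLE_CLASSPATH ${FUSION_KERNELS}/lib/kernels.jar
# add site-local native libraries to the MUSCLE library path (illustrative path)
append-path MUSCLE_LIBPATH ${FUSION_KERNELS}/lib
}}}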
* Prepare the XML job description, for instance (the element names below are indicative of the QCG-Broker !JobProfile structure; consult the schema documentation for the authoritative format):
{{{
<qcg-job>
    <task persistent="true">
        <requirements>
            <topology>
                <group>
                    <processesCount>1</processesCount>
                    <hostname>inula.man.poznan.pl</hostname>
                </group>
                <group>
                    <processesCount>1</processesCount>
                    <hostname>zeus.cyfronet.pl</hostname>
                </group>
            </topology>
        </requirements>
        <execution type="muscle">
            <cxa>FusionSimpleModels.cxa.rb</cxa>
            <arguments>--verbose</arguments>
            <stdout>gsiftp://qcg.man.poznan.pl/~/MAPPER/${JOB_ID}.output</stdout>
            <stderr>gsiftp://qcg.man.poznan.pl/~/MAPPER/${JOB_ID}.error</stderr>
            <stageIn>gsiftp://qcg.man.poznan.pl/~/MAPPER/FusionSimpleModels.cxa.rb</stageIn>
            <stageIn>gsiftp://qcg.man.poznan.pl/~/MAPPER/fusion-preprocess.sh</stageIn>
            <stageIn>gsiftp://qcg.man.poznan.pl/~/MAPPER/fusion-postprocess.sh</stageIn>
            <stageIn>gsiftp://qcg.man.poznan.pl/~/MAPPER/data</stageIn>
            <stageOut>gsiftp://qcg.man.poznan.pl/~/MAPPER/${JOB_ID}.out</stageOut>
            <modules>
                <module>Fusion/Turbulence</module>
            </modules>
            <preprocess>fusion-preprocess.sh</preprocess>
            <postprocess>fusion-postprocess.sh</postprocess>
        </execution>
        <executionTime>
            <executionDuration>P0Y0M0DT0H30M</executionDuration>
        </executionTime>
    </task>
</qcg-job>
}}}
* In the above example we:
 * run the simulation on two clusters, inula and zeus, using advance reservations created automatically by QCG-Broker in the co-allocation process (the `topology` section, with one `hostname`/`processesCount` group per cluster),
 * requested 30 minutes of maximum job walltime (the `executionDuration` element),
 * added pre- and post-processing scripts (`fusion-preprocess.sh` and `fusion-postprocess.sh`), staged in the CxA file and input data, and requested the `Fusion/Turbulence` module to be loaded on both clusters.
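Assuming the description was saved as `fusion.xml` (the file name is arbitrary), the job is submitted the same way as shown at the beginning of this page:
{{{
$ qcg-sub fusion.xml QCG
}}}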