
End-user Information

The main goal of the QosCosGrid middleware was to build a flexible, efficient and secure distributed IT system able to run large-scale simulations on distributed computing resources connected over local and wide area networks, in particular over Internet connections. From the development perspective, QosCosGrid supports three classes of use cases covering a wide range of possible applications: ANSI C (or similar) use cases that rely on the message-passing paradigm, Java use cases that take advantage of the ProActive library as the parallelization technology, and multi-scale use cases based on the MUSCLE library.

QCG OpenMPI

The Message Passing Interface (MPI) is the de facto standard for parallel applications that demand computational resources beyond what a single machine can provide. It gives end users both a programming interface consisting of simple communication primitives and an environment for spawning and monitoring MPI processes. A variety of implementations of the MPI standard are available, both commercial and open source. In QosCosGrid, the OpenMPI implementation of the MPI 2.0 standard was chosen as the basis for further enhancements. Of key importance were the inter-cluster communication techniques that deal with firewalls and Network Address Translation. In addition, the mechanism for spawning new processes in OpenMPI needed to be integrated with the QosCosGrid middleware. The extended version of the OpenMPI framework was named QCG-OMPI (where QCG stands for QosCosGrid). The extensions were three-fold: (1) internally, QCG-OMPI improves the MPI library with multiple connectivity techniques that enable, whenever possible, direct connections between MPI ranks located in remote clusters potentially separated by firewalls; (2) the MPI standard was extended to comply with the QosCosGrid semi-opportunistic approach by providing a new interface to describe the actual topology provided by the meta-scheduler; and (3) many MPI collective operations were upgraded to be hierarchy-aware and optimized for the Grid.
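As an illustration of the message-passing programming model referred to above, the sketch below is a minimal, generic MPI program in C in which rank 0 sends a token to every other rank. It uses only standard MPI calls, so it should compile with any MPI 2.0 implementation (including OpenMPI and, by extension, QCG-OMPI) using mpicc. The QCG-specific topology-description interface mentioned above is intentionally not shown, since its exact API is not documented on this page.

/* Minimal, generic MPI example (standard MPI 2.0 calls only).
 * The QCG-OMPI topology extensions described above are not used here. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, dest, token;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime       */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* id of this process (rank)   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes   */

    if (rank == 0) {
        /* rank 0 sends a token to every other rank ...                     */
        for (dest = 1; dest < size; dest++) {
            token = 100 + dest;
            MPI_Send(&token, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
        }
    } else {
        /* ... which each remote rank receives and reports                  */
        MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank %d of %d received token %d\n", rank, size, token);
    }

    MPI_Finalize();                          /* shut down the MPI runtime   */
    return 0;
}

Compiled with mpicc and launched through the usual MPI process starter, such a program runs unchanged whether the ranks are placed on a single cluster or, with QCG-OMPI, spread across clusters separated by firewalls and NAT.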

QCG ProActive

The existence of many Java-based legacy applications created the need for a framework that could offer parallel Java applications functionality similar to what MPI offers to parallel C/C++ or Fortran code. Instead of relying on existing Java bridges to MPI implementations, we decided to use the ProActive Parallel Suite. The library uses the standard Java RMI framework as a portable communication layer. With a reduced set of simple primitives, ProActive (version 3.9, as used in QosCosGrid) provides a comprehensive toolkit that simplifies the programming of Java applications distributed over local area networks, clusters, Internet grids and peer-to-peer intranets. To satisfy the requirements of complex system simulation applications and users, we developed extensions to the ProActive library (called QCG-ProActive) with the following goals: (1) to preserve standard ProActive library properties (i.e., to allow legacy ProActive applications to be seamlessly ported to QosCosGrid); (2) to provide end users with a consistent QCG Broker Job Profile schema as a single document used to describe the application parameters required for execution as well as resource requirements (in particular network topology and estimated execution time); and (3) to relieve end users of the need for direct (i.e., over SSH) access to remote clusters and machines.

MUSCLE support
