
QCG BES/AR Installation in PL-Grid

The QCG BES/AR service (the successor of the OpenDSP project) is an open source service acting as a computing provider, exposing on-demand access to computing resources and jobs over an HPC Basic Profile compliant Web Services interface. In addition, QCG BES/AR offers a remote interface for advance reservation management.

This document describes the installation of the QCG BES/AR service in the PL-Grid environment. The service should be deployed on a machine (physical or virtual) that:

  • has at least 1 GB of memory (recommended: 2 GB)
  • has 10 GB of free disk space (most of the space will be used by log files)
  • has any modern CPU (if you plan to use a virtual machine, dedicate one or two cores of the host machine to it)
  • is running Scientific Linux 5.5 (in most cases the provided RPMs should work with any operating system based on Red Hat Enterprise Linux 5.x, e.g. CentOS 5)
IMPORTANT:
The implementation name of the QCG BES/AR service is Smoa Computing, and this name is used throughout this guide.

Prerequisites

We assume that you have the Torque local resource manager and the Maui scheduler already installed. This would typically be a frontend machine (i.e. the machine where the pbs_server and maui daemons are running). If you want to install the Smoa Computing service on a separate submit host, you should read these notes. Moreover, the following packages must be installed before you proceed with the Smoa Computing installation.

  • Install the database backend (PostgreSQL):
      # yum install postgresql postgresql-server
    
  • UnixODBC and the PostgreSQL ODBC driver:
      # yum install unixODBC postgresql-odbc
    
  • Expat (needed by the BAT updater, a PL-Grid accounting module):
      # yum install expat-devel
    
  • The Torque devel package and the rpm-build package (needed to build DRMAA):
      # rpm -i torque-devel-your-version.rpm
      # yum install rpm-build
    

We assume that the X.509 host certificate (signed by the Polish Grid CA) and key are already installed in the following locations:

  • /etc/grid-security/hostcert.pem
  • /etc/grid-security/hostkey.pem

Most grid services and security infrastructures are sensitive to time skews. Thus we recommend installing a Network Time Protocol daemon or using any other solution that provides accurate clock synchronization.
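
For example, on Scientific Linux 5 the ntpd daemon can be installed and enabled as follows (a minimal sketch; any other clock synchronization mechanism is equally fine):

  yum install ntp
  # start the daemon now and enable it on every boot
  service ntpd start
  chkconfig ntpd on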

Configuring the WP4 queue

A sample Maui configuration that dedicates 8 machines to the exclusive use of Work Package 4:

  # WP4
  # all users by default can use only DEFAULT partition (i.e. ALL minus WP4)
  SYSCFG           PLIST=DEFAULT
  
  
  # increase priority of the plgrid-wp4-produkcja queue
  CLASSCFG[plgrid-wp4-produkcja] PRIORITY=90000
  # jobs submitted to the plgrid-wp4 queue can use the wp4 partition, and ONLY that partition (note the &)
  CLASSCFG[plgrid-wp4] PLIST=wp4&
  
  # devote some machines to the Work Package 4
  NODECFG[r512] PARTITION=wp4
  NODECFG[r513] PARTITION=wp4
  NODECFG[r514] PARTITION=wp4
  NODECFG[r515] PARTITION=wp4
  NODECFG[r516] PARTITION=wp4
  NODECFG[r517] PARTITION=wp4
  NODECFG[r518] PARTITION=wp4
  NODECFG[r519] PARTITION=wp4
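
After editing maui.cfg, restart Maui and check that the nodes were assigned to the wp4 partition (a quick check, assuming the Maui init script and client tools are installed; the Par column of diagnose -n shows each node's partition):

  service maui restart
  # e.g. for the first of the dedicated nodes
  diagnose -n | grep r512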

Now you also need to add the two queues to the Torque resource manager (one way of applying the qmgr directives below is shown after the listing):

  #
  # Create and define queue plgrid-wp4
  #
  create queue plgrid-wp4
  set queue plgrid-wp4 queue_type = Execution
  set queue plgrid-wp4 resources_max.walltime = 72:00:00
  set queue plgrid-wp4 resources_default.ncpus = 1
  set queue plgrid-wp4 resources_default.walltime = 72:00:00
  set queue plgrid-wp4 acl_group_enable = True
  set queue plgrid-wp4 acl_groups = plgrid-wp4
  set queue plgrid-wp4 acl_group_sloppy = True
  set queue plgrid-wp4 enabled = True
  set queue plgrid-wp4 started = True
  
  #
  # Create and define queue plgrid-wp4-produkcja
  #
  create queue plgrid-wp4-produkcja
  set queue plgrid-wp4-produkcja queue_type = Execution
  set queue plgrid-wp4-produkcja resources_max.walltime = 72:00:00
  set queue plgrid-wp4-produkcja resources_max.ncpus = 256
  set queue plgrid-wp4-produkcja resources_default.ncpus = 1
  set queue plgrid-wp4-produkcja resources_default.walltime = 72:00:00
  set queue plgrid-wp4-produkcja acl_group_enable = True
  set queue plgrid-wp4-produkcja acl_groups = plgrid-wp4
  set queue plgrid-wp4-produkcja acl_group_sloppy = True
  set queue plgrid-wp4-produkcja enabled = True
  set queue plgrid-wp4-produkcja started = True
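
The above listing consists of qmgr directives; one way to apply them is to save them to a file (the name wp4-queues.qmgr below is arbitrary) and feed it to qmgr on the pbs_server host:

  qmgr < wp4-queues.qmgr
  # verify the resulting queue definitions
  qmgr -c 'print queue plgrid-wp4'
  qmgr -c 'print queue plgrid-wp4-produkcja'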

Installation using the provided RPMs

  • Create the following users:
    • smoa_comp - needed by the Smoa Computing service
    • grms - the user that the GRMS (i.e. the QosCosGrid Broker service) will be mapped to
        useradd -d /opt/plgrid/var/log/smoa-comp/ -m smoa_comp 
        useradd -d /opt/plgrid/var/log/grms/ -m grms  
      
  • and the following group:
    • smoa_dev - this group is allowed to read the configuration and log files. Please add the Smoa services' developers to this group.
        groupadd smoa_dev
      
  • Install the PL-Grid (official) and QCG (testing) repositories:
    • QosCosGrid testing repository
       cat > /etc/yum.repos.d/qcg.repo << EOF
       [qcg]
       name=QosCosGrid YUM repository
       baseurl=http://fury.man.poznan.pl/qcg-packages/sl/x86_64/
       enabled=1
       gpgcheck=0
       EOF
      
    • Official PL-Grid repository
       rpm -Uvh http://software.plgrid.pl/packages/repos/plgrid-repos-2010-2.noarch.rpm
      
  • Install Smoa Computing using the YUM package manager:
      yum install smoa-comp
    
  • Set up the Smoa Computing database using the provided script:
      # /opt/plgrid/qcg/smoa/share/smoa-comp/tools/smoa-comp-install.sh
      Welcome to smoa-comp installation script!
      
      This script will guide you through process of configuring proper environment
      for running the Smoa Computing service. You have to answer few questions regarding
      parameters of your database. If you are not sure just press Enter and use the
      default values.
      
      Use local PostgreSQL server? (y/n) [y]: y
      Database [smoa_comp]: 
      User [smoa_comp]: 
      Password [smoa_comp]: MojeTajneHaslo
      Create database? (y/n) [y]: y
      Create user? (y/n) [y]: y
      
      Checking for system user smoa_comp...OK
      Checking whether PostgreSQL server is installed...OK
      Checking whether PostgreSQL server is running...OK
      
      Performing installation
      * Creating user smoa_comp...OK
      * Creating database smoa_comp...OK
      * Creating database schema...OK
      * Checking for ODBC data source smoa_comp...
      * Installing ODBC data source...OK
        
      Remember to add appropriate entry to /var/lib/pgsql/data/pg_hba.conf (as the first rule!) to allow user smoa_comp to
      access database smoa_comp. For instance:
      
      host    smoa_comp       smoa_comp       127.0.0.1/32    md5
      
      and reload Postgres server.
    

Add a new rule to the pg_hba.conf as requested:

  vim /var/lib/pgsql/data/pg_hba.conf 
  /etc/init.d/postgresql reload
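
You can verify that the new rule works by connecting over TCP as the smoa_comp user (you will be prompted for the password chosen during the installation):

  psql -h 127.0.0.1 -U smoa_comp -c 'SELECT 1;' smoa_comp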

Install the Polish Grid and PL-Grid SimpleCA certificates:

 wget https://dist.eugridpma.info/distribution/igtf/current/accredited/RPMS/ca_PolishGrid-1.38-1.noarch.rpm
 rpm -i ca_PolishGrid-1.38-1.noarch.rpm 
 wget http://software.plgrid.pl/packages/general/ca_PLGRID-SimpleCA-1.0-2.noarch.rpm
 rpm -i ca_PLGRID-SimpleCA-1.0-2.noarch.rpm 
 #install certificate revocation list fetching utility
 wget https://dist.eugridpma.info/distribution/util/fetch-crl/fetch-crl-2.8.5-1.noarch.rpm
 rpm -i fetch-crl-2.8.5-1.noarch.rpm
 #get fresh CRLs now
 /usr/sbin/fetch-crl 
 #install cron job for it
 cat > /etc/cron.daily/fetch-crl.cron << EOF
 #!/bin/sh
 
 /usr/sbin/fetch-crl
 EOF
 chmod a+x /etc/cron.daily/fetch-crl.cron

The Grid Mapfile

Manually created grid mapfile (for testing purposes only)

  #for testing purposes only: add a mapping for your own account
  echo '"MyCertDN" myaccount' >> /etc/grid-security/grid-mapfile

LDAP based grid mapfile

 #install grid-mapfile generator from PL-Grid repository
 yum install plggridmapfilegenerator
 #configure gridmapfilegenerator - remember to change url property to your local ldap replica
 cat > /opt/plgrid/plggridmapfilegenerator/etc/plggridmapfilegenerator.conf << EOF
 [ldap]
 url=ldaps://10.4.1.39
 #search base
 #base=dc=osrodek,dc=plgrid,dc=pl
 base=ou=People,dc=cyfronet,dc=plgrid,dc=pl
 #filter, specifies which users should be processed
 filter=plgridX509CertificateDN=*
 #timeout for execution of ldap queries
 timeout=10
 
 [output]
 format=^plgridX509CertificateDN, uid
 EOF
 #add the gridmapfile generator as a cron job
 cat > /etc/cron.hourly/gridmapfile.cron << EOF
 #!/bin/sh
 /opt/plgrid/plggridmapfilegenerator/bin/plggridmapfilegenerator.py -o /etc/grid-security/grid-mapfile
 EOF
 #set executable bit
 chmod a+x /etc/cron.hourly/gridmapfile.cron
 #try it!
 /etc/cron.hourly/gridmapfile.cron

Add appropriate rights for the smoa_comp and grms users in the Maui scheduler configuration file:

  vim /var/spool/maui/maui.cfg
  # primary admin must be first in list
  ADMIN1                root
  ADMIN2                grms
  ADMIN3                smoa_comp

Copy the service certificate and key into /opt/plgrid/qcg/smoa/etc/certs/. Remember to set appropriate rights on the key file.

  cp /etc/grid-security/hostcert.pem /opt/plgrid/qcg/smoa/etc/certs/smoacert.pem
  cp /etc/grid-security/hostkey.pem /opt/plgrid/qcg/smoa/etc/certs/smoakey.pem
  chown smoa_comp /opt/plgrid/qcg/smoa/etc/certs/smoacert.pem
  chown smoa_comp /opt/plgrid/qcg/smoa/etc/certs/smoakey.pem 
  chmod 0600 /opt/plgrid/qcg/smoa/etc/certs/smoakey.pem
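
To make sure that the copied certificate and private key actually match, you can compare their moduli (the two checksums must be identical):

  openssl x509 -noout -modulus -in /opt/plgrid/qcg/smoa/etc/certs/smoacert.pem | openssl md5
  openssl rsa -noout -modulus -in /opt/plgrid/qcg/smoa/etc/certs/smoakey.pem | openssl md5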

DRMAA library

The DRMAA library must be built from the source RPM:

  wget http://fury.man.poznan.pl/qcg-packages/sl/SRPMS/pbs-drmaa-1.0.6-2.src.rpm
  rpmbuild  --rebuild pbs-drmaa-1.0.6-2.src.rpm
  cd /usr/src/redhat/RPMS/x86_64/
  rpm -i pbs-drmaa-1.0.6-2.x86_64.rpm 

However, if you are using it for the first time, you should compile it with logging enabled:

  wget http://fury.man.poznan.pl/qcg-packages/sl/SRPMS/pbs-drmaa-1.0.6-2.src.rpm
  rpmbuild  --define 'configure_options --enable-debug' --rebuild pbs-drmaa-1.0.6-2.src.rpm
  cd /usr/src/redhat/RPMS/x86_64/
  rpm -i pbs-drmaa-1.0.6-2.x86_64.rpm
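
In both cases you can do a quick sanity check of the installed library (the path below is the one referenced later in the service configuration; all Torque dependencies should be resolved):

  ldd /opt/plgrid/qcg/smoa/lib/libdrmaa.so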

Next, you need to either:

  • configure the DRMAA library to use the Torque logs (RECOMMENDED). A sample DRMAA library configuration file (/opt/plgrid/qcg/smoa/etc/pbs_drmaa.conf):
      # pbs_drmaa.conf - Sample pbs_drmaa configuration file.
      
      wait_thread: 1,
      
      pbs_home: "/var/spool/pbs",
        
      cache_job_state: 600,
    

Note: Remember to mount the server log directory as described in the earlier note.

or

  • configure Torque to keep information about completed jobs (e.g. by running: qmgr -c 'set server keep_completed = 300').

It is possible to restrict users to a predefined queue by setting the default job category (in the /opt/plgrid/qcg/smoa/etc/pbs_drmaa.conf file):

  job_categories: {
        default: "-q plgrid",
  },

Restricting advance reservation

In some deployments enabling advance reservations for the whole cluster is not desirable. In such cases one can limit advance reservations to a particular partition by editing the /opt/plgrid/qcg/smoa/lib/smoa-comp/modules/python/reservation_maui.py file and changing the following line:

  cmd = "setres -x BYNAME -r PROCS=1"

to

  cmd = "setres -x BYNAME -r PROCS=1 -p wp4"

Service configuration

Edit the preinstalled service configuration file (/opt/plgrid/qcg/smoa/etc/smoa-compd.xml):

  <?xml version="1.0" encoding="UTF-8"?>
  <sm:SMOACore
        xmlns:sm="http://schemas.smoa-project.com/core/2009/01/config"
        xmlns="http://schemas.smoa-project.com/comp/2009/01/config"
        xmlns:smc="http://schemas.smoa-project.com/comp/2009/01/config"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        
        <Configuration>
                <sm:ModuleManager>
                        <sm:Directory>/opt/plgrid/qcg/smoa/lib/smoa-core/modules/</sm:Directory>
                        <sm:Directory>/opt/plgrid/qcg/smoa/lib/smoa-comp/modules/</sm:Directory>
                </sm:ModuleManager>
  
                <sm:Service xsi:type="smoa-compd" description="SMOA Computing">
                        <sm:Logger>
                                <sm:Filename>/opt/plgrid/var/log/smoa-comp/smoa-comp.log</sm:Filename>
                                <sm:Level>INFO</sm:Level>
                        </sm:Logger>
  
                        <sm:Transport>
                        <sm:Module xsi:type="sm:ecm_gsoap.service">
                           <sm:Host>frontend.example.com</sm:Host>
                           <sm:Port>19000</sm:Port>
                           <sm:KeepAlive>false</sm:KeepAlive>
                           <sm:Authentication>
                                   <sm:Module xsi:type="sm:atc_transport_gsi.service">
                                           <sm:X509CertFile>/opt/plgrid/qcg/smoa/etc/certs/smoacert.pem</sm:X509CertFile>
                                           <sm:X509KeyFile>/opt/plgrid/qcg/smoa/etc/certs/smoakey.pem</sm:X509KeyFile>
                                   </sm:Module>
                           </sm:Authentication>
                           <sm:Authorization>
                                   <sm:Module xsi:type="sm:atz_mapfile">
                                           <sm:Mapfile>/etc/grid-security/grid-mapfile</sm:Mapfile>
                                   </sm:Module>
                           </sm:Authorization>
                        </sm:Module>
                            <sm:Module xsi:type="smc:smoa-comp-service"/>
                        </sm:Transport>
                        
                        <sm:Module xsi:type="pbs_jsdl_filter"/>
                        <sm:Module xsi:type="atz_ardl_filter"/>
                        <sm:Module xsi:type="sm:general_python" path="/opt/plgrid/qcg/smoa/lib/smoa-comp/modules/python/monitoring.py"/>
  
                        <sm:Module xsi:type="submission_drmaa" path="/opt/plgrid/qcg/smoa/lib/libdrmaa.so"/>
                        <sm:Module xsi:type="reservation_python" path="/opt/plgrid/qcg/smoa/lib/smoa-comp/modules/python/reservation_maui.py"/>
                        
                        <sm:Module xsi:type="notification_wsn">
                                <sm:Module xsi:type="sm:ecm_gsoap.client">
                                                <sm:ServiceURL>http://localhost:19001/</sm:ServiceURL>
                                                        <sm:Authentication>
                                                                <sm:Module xsi:type="sm:atc_transport_http.client"/>
                                                        </sm:Authentication>
                                                <sm:Module xsi:type="sm:ntf_client"/>
                                </sm:Module>
                        </sm:Module>
                                
                        <sm:Module xsi:type="application_mapper">
                                <ApplicationMapFile>/opt/plgrid/qcg/smoa/etc/application_mapfile</ApplicationMapFile>
                        </sm:Module>
  
                        <Database>
                                <DSN>smoa_comp</DSN>
                                <User>smoa_comp</User>
                                <Password>smoa_comp</Password>
                        </Database>
  
                        <UnprivilegedUser>smoa_comp</UnprivilegedUser>
  
                        <FactoryAttributes>
                                <CommonName>klaster.plgrid.pl</CommonName>
                                <LongDescription>PL Grid cluster</LongDescription>
                        </FactoryAttributes>
                </sm:Service>
  
        </Configuration>
  </sm:SMOACore>

In most cases it should be enough to change only the following elements:

Transport/Module/Host
the hostname of the machine where the service is deployed
Transport/Module/Authentication/Module/X509CertFile and Transport/Module/Authentication/Module/X509KeyFile
the service private key and X.509 certificate (consult the Globus User Guide on how to generate a service certificate request, or use the host certificate/key pair). Make sure that the key and certificate are owned by the smoa_comp user and that the private key is not password protected (generating the certificate with the -service option implies this). If you installed the certificate and key files in the recommended locations, you do not need to edit these fields.
Module[type="smc:notification_wsn"]/Module/ServiceURL
the URL of the Smoa Notification Service (you can set it later, i.e. after installing the Smoa Notification service)
Module[type="submission_drmaa"]/@path
the path to the DRMAA library (libdrmaa.so). If you installed the DRMAA library using the provided source RPM, you do not need to change this path.
Database/Password
the smoa_comp database password
FactoryAttributes/CommonName
a common name of the cluster (e.g. reef.man.poznan.pl). You can use any name that is unique among all systems (e.g. the cluster name + the domain name of your institution)
FactoryAttributes/LongDescription
a human readable description of the cluster

Configuring BAT accounting module

In order to report resource usage to the central PL-Grid accounting service you must enable the bat_updater module. You can do this by including the following snippet in the aforementioned service configuration file (/opt/plgrid/qcg/smoa/etc/smoa-compd.xml). Please put the snippet just before the Database section:

  <sm:Module xsi:type="bat_updater">
        <BATServiceURL>tcp://acct.grid.cyf-kr.edu.pl:61616</BATServiceURL>
        <SiteName>psnc-smoa-plgrid</SiteName>
        <QueueName>test-jobs</QueueName>
  </sm:Module>

where:

  • BATServiceURL : the URL of the BAT accounting service
  • SiteName : the local site name as reported to the BAT service
  • QueueName : the queue name to which usage data is reported
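
Before restarting the service you may want to verify basic network connectivity to the accounting broker (a simple probe, assuming the nc (netcat) utility is installed):

  nc -z acct.grid.cyf-kr.edu.pl 61616 && echo "BAT broker reachable"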

Note on the security model

Smoa Computing can be configured with various authentication and authorization modules. However, in a typical deployment we assume that Smoa Computing is configured as in the above example, i.e.:

  • authentication is provided on the basis of the httpg protocol
  • authorization is based on the local grid-mapfile (see Users configuration).

Starting the service

As root type:

 # /etc/init.d/smoa-compd start

The service logs can be found in:

  /opt/plgrid/var/log/smoa-comp/smoa-comp.log
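
To confirm that the service came up correctly you can watch the log file and check that the service is listening on the configured port (19000 in the sample configuration above):

  tail -f /opt/plgrid/var/log/smoa-comp/smoa-comp.log
  # in another shell
  netstat -tlnp | grep 19000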

The service assumes that the following commands are in the standard search path:

  • pbsnodes
  • showres
  • setres
  • releaseres
  • checknode

If any of the above commands is not installed in a standard location (e.g. /usr/bin), you may need to edit the /opt/plgrid/qcg/smoa/etc/sysconfig/smoa-compd file and set the PATH variable appropriately, e.g.:

  # INIT_WAIT=5
  #
  # DRM specific options
  
  export PATH=$PATH:/opt/maui/bin

If you compiled DRMAA with logging switched on, you can also set the DRMAA logging level there:

  # INIT_WAIT=5
  #
  # DRM specific options

  export DRMAA_LOG_LEVEL=INFO

Stopping the service

The service can be stopped using the following command:

  # /etc/init.d/smoa-compd stop

Verifying the installation

  • For convenience you can add /opt/plgrid/qcg/smoa/bin and /opt/plgrid/qcg/smoa-dep/globus/bin/ to your PATH variable.
  • Edit the Smoa Computing client configuration file (/opt/plgrid/qcg/smoa/etc/smoa-comp.xml):
    • set the Host and Port to reflect the changes in the service configuration file (smoa-compd.xml).
       <?xml version="1.0" encoding="UTF-8"?>
       <sm:SMOACore
              xmlns:sm="http://schemas.smoa-project.com/core/2009/01/config"
              xmlns="http://schemas.smoa-project.com/comp/2009/01/config"
              xmlns:smc="http://schemas.smoa-project.com/comp/2009/01/config"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        
              <Configuration>
                      <sm:ModuleManager>
                               <sm:Directory>/opt/plgrid/qcg/smoa/lib/smoa-core/modules/</sm:Directory>
                               <sm:Directory>/opt/plgrid/qcg/smoa/lib/smoa-comp/modules/</sm:Directory>
                      </sm:ModuleManager>
        
                      <sm:Client xsi:type="smoa-comp" description="SMOA Computing client">
                              <sm:Transport>
                                      <sm:Module xsi:type="sm:ecm_gsoap.client">
                                              <sm:ServiceURL>httpg://frontend.example.com:19000/</sm:ServiceURL>
                                              <sm:Authentication>
                                                      <sm:Module xsi:type="sm:atc_transport_gsi.client"/>
                                              </sm:Authentication>
                                              <sm:Module xsi:type="smc:smoa-comp-client"/>
                                      </sm:Module>
                              </sm:Transport>
                      </sm:Client>
              </Configuration>
       </sm:SMOACore>
      
  • Initialize your credentials:
     $ grid-proxy-init 
     Your identity: /O=Grid/OU=QosCosGrid/OU=PSNC/CN=Mariusz Mamonski
     Enter GRID pass phrase for this identity:
     Creating proxy .................................................................. Done
     Your proxy is valid until: Wed Sep 16 05:01:02 2009
    
  • Query the SMOA Computing service:
  $ smoa-comp -G | xmllint --format - # xmllint is used only to pretty-print the result
      
      <bes-factory:FactoryResourceAttributesDocument xmlns:bes-factory="http://schemas.ggf.org/bes/2006/08/bes-factory">
        <bes-factory:IsAcceptingNewActivities>true</bes-factory:IsAcceptingNewActivities>
        <bes-factory:CommonName>IT cluster</bes-factory:CommonName>
  <bes-factory:LongDescription>IT department cluster for public use</bes-factory:LongDescription>
        <bes-factory:TotalNumberOfActivities>0</bes-factory:TotalNumberOfActivities>
        <bes-factory:TotalNumberOfContainedResources>1</bes-factory:TotalNumberOfContainedResources>
        <bes-factory:ContainedResource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="bes-factory:BasicResourceAttributesDocumentType">
            <bes-factory:ResourceName>worker.example.com</bes-factory:ResourceName>
            <bes-factory:CPUArchitecture>
                <jsdl:CPUArchitectureName xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl">x86_32</jsdl:CPUArchitectureName>
            </bes-factory:CPUArchitecture>
            <bes-factory:CPUCount>4</bes-factory:CPUCount><bes-factory:PhysicalMemory>1073741824</bes-factory:PhysicalMemory>
        </bes-factory:ContainedResource>
        <bes-factory:NamingProfile>http://schemas.ggf.org/bes/2006/08/bes/naming/BasicWSAddressing</bes-factory:NamingProfile> 
  <bes-factory:BESExtension>http://schemas.ogf.org/hpcp/2007/01/bp/BasicFilter</bes-factory:BESExtension>
        <bes-factory:BESExtension>http://schemas.smoa-project.com/comp/2009/01</bes-factory:BESExtension>
        <bes-factory:LocalResourceManagerType>http://example.com/SunGridEngine</bes-factory:LocalResourceManagerType>
        <smcf:NotificationProviderURL xmlns:smcf="http://schemas.smoa-project.com/comp/2009/01/factory">http://localhost:2211/</smcf:NotificationProviderURL>
     </bes-factory:FactoryResourceAttributesDocument>
    
  • Submit a sample job:
      $ smoa-comp -c -J /opt/plgrid/qcg/smoa/share/smoa-comp/doc/examples/jsdl/sleep.xml
      Activity Id: ccb6b04a-887b-4027-633f-412375559d73
    
  • Query its status:
      $ smoa-comp -s -a ccb6b04a-887b-4027-633f-412375559d73
      status = Executing
      $ smoa-comp -s -a ccb6b04a-887b-4027-633f-412375559d73
      status = Executing
      $ smoa-comp -s -a ccb6b04a-887b-4027-633f-412375559d73
      status = Finished
      exit status = 0
    
  • Create an advance reservation:
    • copy the provided sample reservation description file (expressed in ARDL, the Advance Reservation Description Language):
       $ cp /opt/plgrid/qcg/smoa/share/smoa-comp/doc/examples/ardl/oneslot.xml oneslot.xml
      
    • Edit oneslot.xml and modify the StartTime and EndTime to dates in the near future.
    • Create a new reservation:
       $ smoa-comp -c -D oneslot.xml
       Reservation Id: aab6b04a-887b-4027-633f-412375559d7d
      
    • List all reservations:
       $ smoa-comp -l
       Reservation Id: aab6b04a-887b-4027-633f-412375559d7d
       Total number of reservations: 1
      
    • Check which hosts were reserved:
       $ smoa-comp -s -r aab6b04a-887b-4027-633f-412375559d7d
       Reserved hosts:
       worker.example.com[used=0,reserved=1,total=4]
      
    • Delete the reservation:
       $ smoa-comp -t -r aab6b04a-887b-4027-633f-412375559d7d
       Reservation terminated.
      
    • Check if GridFTP is working correctly:
       $ globus-url-copy gsiftp://your.local.host.name/etc/profile profile
       $ diff /etc/profile profile
      

Configuring the firewall

In order to expose the QosCosGrid services externally you need to open the following ports in the firewall (an example iptables setup is shown after the list):

  • 19000 (TCP) - Smoa Computing
  • 19001 (TCP) - Smoa Notification
  • 2811 (TCP) - GridFTP server
  • 9000-9500 (TCP) - GridFTP port range (if you want to use a different port range, adjust the GLOBUS_TCP_PORT_RANGE variable in the /etc/xinetd.d/gsiftp file)
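
On Scientific Linux 5 the ports can be opened with iptables, for example (a sketch only; adjust it to your local firewall policy, in particular the chain name if your system uses the default RH-Firewall-1-INPUT chain):

  iptables -I INPUT -p tcp --dport 19000 -j ACCEPT
  iptables -I INPUT -p tcp --dport 19001 -j ACCEPT
  iptables -I INPUT -p tcp --dport 2811 -j ACCEPT
  iptables -I INPUT -p tcp --dport 9000:9500 -j ACCEPT
  # make the rules persistent across reboots
  service iptables save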

Maintenance

The historic usage information is stored in two relations of the Smoa Computing database: jobs_acc and reservations_acc. You can always archive old usage data to a file and delete it from the database using the psql client:

 $ psql -h localhost smoa_comp smoa_comp 
 Password for user smoa_comp: 
 Welcome to psql 8.1.23, the PostgreSQL interactive terminal.
  
 Type:  \copyright for distribution terms
      \h for help with SQL commands
      \? for help with psql commands
      \g or terminate with semicolon to execute query
      \q to quit

 smoa_comp=> \o jobs.acc
 smoa_comp=> SELECT * FROM jobs_acc where end_time < date '2010-01-10';
 smoa_comp=> \o reservations.acc
 smoa_comp=> SELECT * FROM reservations_acc where end_time < date '2010-01-10';
 smoa_comp=> \o
 smoa_comp=> DELETE FROM jobs_acc where end_time < date '2010-01-10';
 smoa_comp=> DELETE FROM reservations_acc where end_time < date '2010-01-10';
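
The same data can also be archived non-interactively, e.g. with pg_dump (a sketch; unlike the queries above it dumps the whole tables rather than a date range):

  pg_dump -h localhost -U smoa_comp -t jobs_acc smoa_comp > jobs_acc-backup.sql
  pg_dump -h localhost -U smoa_comp -t reservations_acc smoa_comp > reservations_acc-backup.sql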