
Benchmarks of QosCosGrid

QCG-Computing

The QCG-Computing service tests covered job submission and job management operations, which are typical tasks for this kind of service. Two types of tests were proposed, based on the following metrics:

  • response time,
  • throughput.

All the tests were performed using a program written specifically for this purpose on top of the SAGA C++ library. Two adaptors offered by SAGA C++ were used, namely:

  • gLite CREAM (based on glite-ce-cream-client-api-c) - for gLite (the CREAM-CE service),
  • OGSA BES (based on gSOAP) - for UNICORE and QosCosGrid (the QCG-Computing service).

The use of a common access layer minimized the risk of obtaining distorted results. For the same reason, the jobs were submitted to the same resource and did not require any data transfer.
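For illustration, the sketch below shows how a job can be submitted through this common SAGA C++ layer; the contact URL, the chosen executable and the error handling are assumptions made for the example and do not come from the benchmark sources.

#include <iostream>
#include <saga/saga.hpp>

int main()
{
    try {
        // Describe a trivial job that requires no data transfer.
        saga::job::description jd;
        jd.set_attribute(saga::job::attributes::description_executable, "/bin/date");

        // The same client code is used for every middleware; only the
        // contact URL (and therefore the selected adaptor) differs.
        // The URL scheme used here is illustrative.
        saga::job::service js(saga::url("any://qcg.grid.cyf-kr.edu.pl"));

        saga::job::job j = js.create_job(jd);
        j.run();

        std::cout << "job submitted" << std::endl;
    }
    catch (saga::exception const& e) {
        std::cerr << "SAGA error: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}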

Testbed

  • Client machine:
    • 8 cores (Intel(R) Xeon(R) CPU E5345),
    • 11 GB RAM,
    • Scientific Linux 5.3,
    • RTT from the client's machine to the cluster's frontend: about 12 ms.
  • Cluster Zeus (ranked 84th on the TOP500 list):
    • queueing system: Torque 2.4.12 + Maui 3.3,
    • about 800 nodes,
    • about 3-4k tasks present in the system,
    • Maui "RMPOLLINTERVAL": 3.5 minutes,
    • for the purpose of the tests, a special partition (WP4) was set aside: 64 cores / 8 nodes - 64 slots,
    • test users (plgtestm01-10 and 10 users from the plgridXXX pool) were assigned on an exclusive basis to the WP4 partition.
  • Service nodes (qcg.grid.cyf-kr.edu.pl, cream.grid.cyf-kr.edu.pl, uni-ce.grid.cyf-kr.edu.pl):
    • virtual machines (Scientific Linux 5.5),
    • QCG and UNICORE: 1 virtual core, 2GB RAM,
    • gLite CREAM: 3 virtual cores, 8 GB RAM.

Test 1 - Response Time

The main program creates N processes (each of which can use a different certificate) that invoke the function sustain_thread, and then waits for all of them to finish.

In general, the idea of the program is to keep jobs_per_thread jobs in the system for test_duration seconds, constantly polling for the state of all currently running or queued jobs (with the delays between calls drawn at random from a defined interval).

The following snippet shows the pseudocode of the function sustain_thread:

1. start_timer()
2. for i = 1 .. jobs_per_thread
  2a: submit_job(job[i])
3. while (current_time < test_duration) do
  3a: for i = 1 .. jobs_per_thread
  3a1: if (! is_finished(job[i].last_state))
    3a11: sleep((rand() / RAND_MAX) / SLEEP_COEF)
    3a12: query_state(job[i])
  3a2: if (is_finished(job[i].last_state))
    3a21: submit_job(job[i])
4. stop_timer()
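
A possible C++ rendering of this loop is sketched below. It assumes the SAGA C++ API used in the benchmark, plus helper functions submit_job and query_state (described next); their exact signatures, the parameter names and the bookkeeping are illustrative, not taken from the benchmark sources.

#include <cstdlib>
#include <ctime>
#include <unistd.h>
#include <vector>
#include <saga/saga.hpp>

// Assumed helpers, corresponding to the pseudocode below; both measure
// the duration of the SAGA call they wrap.
saga::job::job   submit_job(saga::job::service& svc);
saga::job::state query_state(saga::job::job& j);

static bool is_finished(saga::job::state s)
{
    return s == saga::job::Done || s == saga::job::Failed || s == saga::job::Canceled;
}

void sustain_thread(saga::job::service& svc, std::size_t jobs_per_thread,
                    int test_duration, double sleep_coef)
{
    std::vector<saga::job::job>   jobs;
    std::vector<saga::job::state> last_state;

    std::time_t start = std::time(0);

    // Step 2: submit the initial pool of jobs.
    for (std::size_t i = 0; i < jobs_per_thread; ++i) {
        jobs.push_back(submit_job(svc));
        last_state.push_back(query_state(jobs.back()));
    }

    // Step 3: keep the pool full until the test duration elapses.
    while (std::time(0) - start < test_duration) {
        for (std::size_t i = 0; i < jobs_per_thread; ++i) {
            if (!is_finished(last_state[i])) {
                // Randomized delay before the next poll, scaled by SLEEP_COEF.
                usleep(static_cast<useconds_t>(
                    1e6 * (static_cast<double>(std::rand()) / RAND_MAX) / sleep_coef));
                last_state[i] = query_state(jobs[i]);
            }
            if (is_finished(last_state[i])) {
                // A finished job is immediately replaced by a new one.
                jobs[i] = submit_job(svc);
                last_state[i] = query_state(jobs[i]);
            }
        }
    }
}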

The function submit_job(job):

1. start_timer()
2. job.job = service.create_job()
3. job.job.run()
4. stop_timer()
5. query_state(job)

The function query_state(job):

1. start_timer()
2. job.last_state = job.job.get_state()
3. stop_timer()
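
A sketch of how these two primitives might look on top of SAGA C++ is given below; the gettimeofday-based timer, the choice of executable and the per-call printing (instead of the aggregated statistics produced by the real program) are assumptions made for the example.

#include <iostream>
#include <sys/time.h>
#include <saga/saga.hpp>

// Illustrative wall-clock timer returning seconds.
static double now()
{
    timeval tv;
    gettimeofday(&tv, 0);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

// Submit one trivial job, timing create_job() + run().
// The benchmark additionally queries the state right after submission (step 5).
saga::job::job submit_job(saga::job::service& svc)
{
    saga::job::description jd;
    jd.set_attribute(saga::job::attributes::description_executable, "/bin/true");

    double t0 = now();
    saga::job::job j = svc.create_job(jd);
    j.run();
    std::cout << "submit_job took " << now() - t0 << " s" << std::endl;

    return j;
}

// Query the current state of one job, timing the get_state() call.
saga::job::state query_state(saga::job::job& j)
{
    double t0 = now();
    saga::job::state s = j.get_state();
    std::cout << "query_state took " << now() - t0 << " s" << std::endl;

    return s;
}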

At the end of the test, the average, minimal and maximal times of submitting a job (submit_job) and querying a job state (query_state) are printed. Additionally, the program displays the number of all submitted jobs, the number of successfully finished jobs (Done) and the number of jobs that finished with any other status (Canceled, Failed, Suspended). Finally, the number of failures, i.e. exceptions returned by the SAGA adaptors, is shown.

Notes

Pros:

  • The test reflects a typical situation in production environments:
    • approximately constant number of tasks,
    • "the task flow" (when one task is finished, another begins).
  • The program may be used to measure the overall capacity of the system.

Cons:

  • The measured submit time may be distorted (the service's response to a submit request does not necessarily mean that the job has already been submitted to the queueing system).

Plan of the tests

  • 50 tasks x 10 users = 500 tasks, 30 minutes, SLEEP_COEF = 10
  • 100 tasks x 10 users = 1000 tasks, 30 minutes, SLEEP_COEF = 10
  • 200 tasks x 10 users = 2000 tasks, 30 minutes, SLEEP_COEF = 10
  • 400 tasks x 10 users = 4000 tasks, 30 minutes, SLEEP_COEF = 10

Results

  • Average submit time of a single job

  (tasks x users)   QCG 2.0   UNICORE 6.3.2   gLite 3.2
  50x10             1.43      2.41            8.47
  100x10            1.49      1.24 (1)        8.45
  200x10            1.99      2.20            8.50
  400x10            1.96      - ([2])         8.24

Test 2 - Throughput
