Changeset 717 for papers/SMPaT-2012_DCWoRMS
- Timestamp: 12/28/12 19:23:36 (12 years ago)
- Location: papers/SMPaT-2012_DCWoRMS
- Files: 5 edited
papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.aux
r716 r717 94 94 \@writefile{lof}{\contentsline {figure}{\numberline {13}{\ignorespaces Frequency downgrading strategy}}{26}} 95 95 \newlabel{fig:70dfs}{{13}{26}} 96 \@writefile{toc}{\contentsline {section}{\numberline {6}DCWoRMS application/use cases}{26}}97 \newlabel{sec:coolemall}{{6}{26}}98 96 \@writefile{lot}{\contentsline {table}{\numberline {4}{\ignorespaces Energy usage [kWh] for different level of system load}}{27}} 99 97 \newlabel{loadEnergy}{{4}{27}} 100 98 \@writefile{lot}{\contentsline {table}{\numberline {5}{\ignorespaces Makespan [s] for different level of system load}}{27}} 101 99 \newlabel{loadMakespan}{{5}{27}} 100 \@writefile{toc}{\contentsline {section}{\numberline {6}DCWoRMS application/use cases}{27}} 101 \newlabel{sec:coolemall}{{6}{27}} 102 102 \bibcite{fit4green}{{1}{}{{}}{{}}} 103 \@writefile{toc}{\contentsline {section}{\numberline {7}Conclusions and future work}{28}}104 \newlabel{}{{7}{28}}105 103 \bibcite{CloudSim}{{2}{}{{}}{{}}} 106 104 \bibcite{DCSG}{{3}{}{{}}{{}}} … … 110 108 \bibcite{games}{{7}{}{{}}{{}}} 111 109 \bibcite{GreenCloud}{{8}{}{{}}{{}}} 110 \@writefile{toc}{\contentsline {section}{\numberline {7}Conclusions and future work}{29}} 111 \newlabel{}{{7}{29}} 112 112 \bibcite{sla}{{9}{}{{}}{{}}} 113 113 \bibcite{GSSIM}{{10}{}{{}}{{}}} -
papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.fdb_latexmk
r716 r717 1 1 # Fdb version 2 2 ["pdflatex"] 135671 0663"elsarticle-DCWoRMS.tex" "elsarticle-DCWoRMS.pdf" "elsarticle-DCWoRMS"2 ["pdflatex"] 1356716777 "elsarticle-DCWoRMS.tex" "elsarticle-DCWoRMS.pdf" "elsarticle-DCWoRMS" 3 3 "/usr/local/texlive/2010/texmf-dist/tex/context/base/supp-pdf.mkii" 1251025892 71625 fad1c4b52151c234b6873a255b0ad6b3 "" 4 4 "/usr/local/texlive/2010/texmf-dist/tex/generic/oberdiek/etexcmds.sty" 1267408169 5670 cacb018555825cfe95cd1e1317d82c1d "" … … 30 30 "/usr/local/texlive/2010/texmf-dist/tex/latex/psnfss/upsy.fd" 1137110629 148 2da0acd77cba348f34823f44cabf0058 "" 31 31 "/usr/local/texlive/2010/texmf-dist/tex/latex/psnfss/upzd.fd" 1137110629 148 b2a94082cb802f90d3daf6dd0c7188a0 "" 32 "elsarticle-DCWoRMS.aux" 135671 0665 7300 93160f416a8461d41a3c200dac702a78""33 "elsarticle-DCWoRMS.spl" 135671 06630 d41d8cd98f00b204e9800998ecf8427e ""34 "elsarticle-DCWoRMS.tex" 135671 0660 64538 98c98b811c1eb8e1409935f8ce2553dd""32 "elsarticle-DCWoRMS.aux" 1356716779 7300 0979c20bb2a905f6c68aacab6fdfbbcc "" 33 "elsarticle-DCWoRMS.spl" 1356716778 0 d41d8cd98f00b204e9800998ecf8427e "" 34 "elsarticle-DCWoRMS.tex" 1356716773 65095 81b0e5d65aaa665e50bfc482b89dbf68 "" 35 35 "elsarticle.cls" 1352447924 26095 ad44f4892f75e6e05dca57a3581f78d1 "" 36 36 "fig/70dfs.png" 1356617710 212573 e013d714dd1377384ed7793222210051 "" -
papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.tex
r716 r717 156 156 TODO - update 157 157 158 The remaining part of this paper is organized as follows. In Section~2 we give a brief overview of the current state of the art concerning modeling and simulation of distributed systems, like Grids and Clouds, in terms of energy efficiency. Section~3 discusses the main features of DCWoRMS. In particular, it introduces our approach to workload and resource management, presents the concept of energy efficiency modeling and explains how to incorporate a specific application performance model into simulations. Section~4 discusses energy models adopted within the DCWoRMS. In Section~5 we present some experiments that were performed using DCWoRMS utilizing real testbed nodes models to show vari us types of popular resource and scheduling technics allowing to decrease the total power consumption of the execution of a set of tasks. Section~6 focuses on the role of DCWoRMS within the CoolEmAll project. Final conclusions and directions for future work are given in Section~7.158 The remaining part of this paper is organized as follows. In Section~2 we give a brief overview of the current state of the art concerning modeling and simulation of distributed systems, like Grids and Clouds, in terms of energy efficiency. Section~3 discusses the main features of DCWoRMS. In particular, it introduces our approach to workload and resource management, presents the concept of energy efficiency modeling and explains how to incorporate a specific application performance model into simulations. Section~4 discusses energy models adopted within the DCWoRMS. In Section~5 we present some experiments that were performed using DCWoRMS utilizing real testbed nodes models to show various types of popular resource and scheduling technics allowing to decrease the total power consumption of the execution of a set of tasks. Section~6 focuses on the role of DCWoRMS within the CoolEmAll project. 
Final conclusions and directions for future work are given in Section~7. 159 159 160 160 \section{Related Work} … … 472 472 \begin {table}[ tp] 473 473 474 \begin{tabular}{l llllr}475 \hline 476 Characteristic & \multicolumn{4}{c}{Load intensity} & Distribution\\477 & 10 & 30 & 50 & 70\\474 \begin{tabular}{l c c c c r} 475 \hline 476 & \multicolumn{4}{c}{Load intensity} & \\ 477 Characteristic & 10 & 30 & 50 & 70 & Distribution\\ 478 478 \hline 479 479 Task Count & \multicolumn{4}{c}{1000} & constant\\ … … 481 481 Task Interval [s] & 3000 & 1200 & 720 & 520 & poisson\\ 482 482 \hline 483 \multirow{8}{*}{Number of cores to run} & \multicolumn{4}{c}{1} & uniform - 30\%\\483 \multirow{8}{*}{Number of cores to run} & \multicolumn{4}{c}{1} & uniform - 30\%\\ 484 484 & \multicolumn{4}{c}{2} & uniform - 30\%\\ 485 485 & \multicolumn{4}{c}{3} & uniform - 10\%\\ … … 490 490 & \multicolumn{4}{c}{8} & uniform - 5\%\\ 491 491 \hline 492 \multirow{5}{*}{Application type} & \multicolumn{4}{c}{Abinit} & uniform - 20\%\\492 \multirow{5}{*}{Application type} & \multicolumn{4}{c}{Abinit} & uniform - 20\%\\ 493 493 & \multicolumn{4}{c}{C-Ray} & uniform - 20\%\\ 494 494 & \multicolumn{4}{c}{Tar} & uniform - 20\%\\ … … 549 549 \end {table} 550 550 551 As mentioned, we assign tasks to nodes minimizing the value of expression: $(P-Pidle)*exec\_time$, where $P$ denotes observed power of the node running the particular application and $exec _time$ refers to the measured application running time. Based on the application and hardware profiles, we expected that Atom D510 would be the preferred node. Obtained scheduled, that is presented in the Gantt chart in Figure~\ref{fig:70eo} along with the energy and system usage, confirmed our assumptions. 
Atom D510 nodes are nearly fully loaded, while the least energy-favorable AMD nodes are used only when other ones are busy.551 As mentioned, we assign tasks to nodes minimizing the value of expression: $(P-Pidle)*exec\_time$, where $P$ denotes observed power of the node running the particular application and $exec\_time$ refers to the measured application running time. Based on the application and hardware profiles, we expected that Atom D510 would be the preferred node. Obtained scheduled, that is presented in the Gantt chart in Figure~\ref{fig:70eo} along with the energy and system usage, confirmed our assumptions. Atom D510 nodes are nearly fully loaded, while the least energy-favorable AMD nodes are used only when other ones are busy. 552 552 553 553 \subsubsection{Frequency scaling} 574 574 575 The last considered by us case is modification of the random strategy. We assume that tasks do not have deadlines and the only criterion which is taken into consideration is the total energy consumption. All the considered workloads have been executed on the testbed configured for three different possible frequencies of CPUs -- the lowest, medium and the highest one. 
The experiment was intended to check if the benefit of running the workload on less power-consuming frequency of CPU is not leveled by the prolonged time of execution of the workload. 576 577 575 The last considered by us case is modification of the random strategy. We assume that tasks do not have deadlines and the only criterion which is taken into consideration is the total energy consumption. In this experiment we configured the simulated infrastructure for the lowest possible frequencies of CPUs. The experiment was intended to check if the benefit of running the workload on less power-consuming frequency of CPU is not leveled by the prolonged time of execution of the workload. The values of the evaluated criteria are as follows: \textbf{workload completion time}: 1 065 356 s and \textbf{total energy usage}: 77,109 kWh. As we can see, for the given load of the system (70\%), the cost of running the workload that require almost twice more time, can not be compensate by the lower power draw. Moreover, as it can be observed on the charts in Figure~\ref{fig:70dfs} the execution times on the slowest nodes (Atom D510) visibly exceeds the corresponding values on other servers 576 578 577 \begin{figure}[h!] 579 578 \centering … … 582 581 \end{figure} 583 582 584 \textbf{total energy usage}: 77,108 kWh 585 \textbf{workload completion time}: 1 065 356 s 586 587 588 .... 589 583 584 As we were looking for the trade-off between total completion time and energy usage, we were searching for the workload load level that can benefit from the lower system performance in terms of energy-efficiency. For the frequency downgrading policy, we observed the improvement on the energy usage criterion only for the workload resulting in 10\% system load. 585 The following tables: Table~\ref{loadEnergy} and Table~\ref{loadMakespan} contain the values of evaluation criteria (total energy usage and makespan respectively) gathered for all investigated workloads. 
590 586 591 587 \begin {table}[h!] … … 594 590 \hline 595 591 & \multicolumn{5}{c}{Strategy}\\ 596 Load & R & R + NPM & EO & EO +NPM & DFS\\592 Load & R & R+NPM & EO & EO+NPM & DFS\\ 597 593 \hline 598 594 10\% & 241,337 & 37,811 & 239,667 & 25,571 & 239,278 \\ 599 30\% &89,853 & 38,059 & 88,823 & 25,595 .94& 90,545 \\595 30\% &89,853 & 38,059 & 88,823 & 25,595 & 90,545 \\ 600 596 50\% &59,112 & 36,797 & 58,524 & 26,328 & 76,020 \\ 601 70\% &46,883 & 36,705 & 46,30 62& 30,568 & 77,109 \\597 70\% &46,883 & 36,705 & 46,305 & 30,568 & 77,109 \\ 602 598 \hline 603 599 \end{tabular} … … 610 606 \hline 611 607 & \multicolumn{5}{c}{Strategy}\\ 612 Load & R & R + NPM & EO & EO +NPM & DFS\\608 Load & R & R+NPM & EO & EO+NPM & DFS\\ 613 609 \hline 614 610 10\% & 3 605 428 & 3 605 428 & 3 605 428 & 3 605 428 & 3 622 968 \\