Timestamp:
12/28/12 19:23:36 (12 years ago)
Author:
wojtekp
Message:
 
Location:
papers/SMPaT-2012_DCWoRMS
Files:
5 edited

  • papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.aux

    r716 r717  
     94  94  \@writefile{lof}{\contentsline {figure}{\numberline {13}{\ignorespaces Frequency downgrading strategy}}{26}}
     95  95  \newlabel{fig:70dfs}{{13}{26}}
     96      \@writefile{toc}{\contentsline {section}{\numberline {6}DCWoRMS application/use cases}{26}}
     97      \newlabel{sec:coolemall}{{6}{26}}
     98  96  \@writefile{lot}{\contentsline {table}{\numberline {4}{\ignorespaces Energy usage [kWh] for different level of system load}}{27}}
     99  97  \newlabel{loadEnergy}{{4}{27}}
    100  98  \@writefile{lot}{\contentsline {table}{\numberline {5}{\ignorespaces Makespan [s] for different level of system load}}{27}}
    101  99  \newlabel{loadMakespan}{{5}{27}}
        100  \@writefile{toc}{\contentsline {section}{\numberline {6}DCWoRMS application/use cases}{27}}
        101  \newlabel{sec:coolemall}{{6}{27}}
    102 102  \bibcite{fit4green}{{1}{}{{}}{{}}}
    103      \@writefile{toc}{\contentsline {section}{\numberline {7}Conclusions and future work}{28}}
    104      \newlabel{}{{7}{28}}
    105 103  \bibcite{CloudSim}{{2}{}{{}}{{}}}
    106 104  \bibcite{DCSG}{{3}{}{{}}{{}}}
    …
    110 108  \bibcite{games}{{7}{}{{}}{{}}}
    111 109  \bibcite{GreenCloud}{{8}{}{{}}{{}}}
        110  \@writefile{toc}{\contentsline {section}{\numberline {7}Conclusions and future work}{29}}
        111  \newlabel{}{{7}{29}}
    112 112  \bibcite{sla}{{9}{}{{}}{{}}}
    113 113  \bibcite{GSSIM}{{10}{}{{}}{{}}}
  • papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.fdb_latexmk

    r716 r717  
     1   1  # Fdb version 2
     2      ["pdflatex"] 1356710663 "elsarticle-DCWoRMS.tex" "elsarticle-DCWoRMS.pdf" "elsarticle-DCWoRMS"
         2  ["pdflatex"] 1356716777 "elsarticle-DCWoRMS.tex" "elsarticle-DCWoRMS.pdf" "elsarticle-DCWoRMS"
     3   3  "/usr/local/texlive/2010/texmf-dist/tex/context/base/supp-pdf.mkii" 1251025892 71625 fad1c4b52151c234b6873a255b0ad6b3 ""
     4   4  "/usr/local/texlive/2010/texmf-dist/tex/generic/oberdiek/etexcmds.sty" 1267408169 5670 cacb018555825cfe95cd1e1317d82c1d ""
    …
    30  30  "/usr/local/texlive/2010/texmf-dist/tex/latex/psnfss/upsy.fd" 1137110629 148 2da0acd77cba348f34823f44cabf0058 ""
    31  31  "/usr/local/texlive/2010/texmf-dist/tex/latex/psnfss/upzd.fd" 1137110629 148 b2a94082cb802f90d3daf6dd0c7188a0 ""
    32      "elsarticle-DCWoRMS.aux" 1356710665 7300 93160f416a8461d41a3c200dac702a78 ""
    33      "elsarticle-DCWoRMS.spl" 1356710663 0 d41d8cd98f00b204e9800998ecf8427e ""
    34      "elsarticle-DCWoRMS.tex" 1356710660 64538 98c98b811c1eb8e1409935f8ce2553dd ""
        32  "elsarticle-DCWoRMS.aux" 1356716779 7300 0979c20bb2a905f6c68aacab6fdfbbcc ""
        33  "elsarticle-DCWoRMS.spl" 1356716778 0 d41d8cd98f00b204e9800998ecf8427e ""
        34  "elsarticle-DCWoRMS.tex" 1356716773 65095 81b0e5d65aaa665e50bfc482b89dbf68 ""
    35  35  "elsarticle.cls" 1352447924 26095 ad44f4892f75e6e05dca57a3581f78d1 ""
    36  36  "fig/70dfs.png" 1356617710 212573 e013d714dd1377384ed7793222210051 ""
  • papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.tex

    r716 r717  
    156 156  TODO - update
    157 157
    158      The remaining part of this paper is organized as follows. In Section~2 we give a brief overview of the current state of the art concerning modeling and simulation of distributed systems, like Grids and Clouds, in terms of energy efficiency. Section~3 discusses the main features of DCWoRMS. In particular, it introduces our approach to workload and resource management, presents the concept of energy efficiency modeling and explains how to incorporate a specific application performance model into simulations. Section~4 discusses energy models adopted within the DCWoRMS. In Section~5 we present some experiments that were performed using DCWoRMS utilizing real testbed nodes models to show varius types of popular resource and scheduling technics allowing to decrease the total power consumption of the execution of a set of tasks. Section~6 focuses on the role of DCWoRMS within the CoolEmAll project. Final conclusions and directions for future work are given in Section~7.
        158  The remaining part of this paper is organized as follows. In Section~2 we give a brief overview of the current state of the art concerning modeling and simulation of distributed systems, like Grids and Clouds, in terms of energy efficiency. Section~3 discusses the main features of DCWoRMS. In particular, it introduces our approach to workload and resource management, presents the concept of energy efficiency modeling and explains how to incorporate a specific application performance model into simulations. Section~4 discusses energy models adopted within DCWoRMS. In Section~5 we present experiments performed using DCWoRMS with models of real testbed nodes to show various popular resource management and scheduling techniques that decrease the total power consumption of executing a set of tasks. Section~6 focuses on the role of DCWoRMS within the CoolEmAll project. Final conclusions and directions for future work are given in Section~7.
    159159 
    160160\section{Related Work} 
     
    472 472  \begin{table}[tp]
    473 473
    474      \begin{tabular}{lllllr}
    475      \hline
    476      Characteristic & \multicolumn{4}{c}{Load intensity} & Distribution\\
    477      & 10  & 30 & 50 & 70  \\
        474  \begin{tabular}{l c c c c r}
        475  \hline
        476  & \multicolumn{4}{c}{Load intensity} & \\
        477  Characteristic & 10 & 30 & 50 & 70 & Distribution\\
    478 478  \hline
    479 479  Task Count & \multicolumn{4}{c}{1000} & constant\\
    …
    481 481  Task Interval [s] & 3000 & 1200 & 720 & 520 & poisson\\
    482 482  \hline
    483      \multirow{8}{*}{Number of cores to run} & \multicolumn{4}{c}{1} & uniform- 30\%\\
        483  \multirow{8}{*}{Number of cores to run} & \multicolumn{4}{c}{1} & uniform - 30\%\\
    484 484  & \multicolumn{4}{c}{2} & uniform - 30\%\\
    485 485  & \multicolumn{4}{c}{3} & uniform - 10\%\\
    …
    490 490  & \multicolumn{4}{c}{8} & uniform - 5\%\\
    491 491  \hline
    492      \multirow{5}{*}{Application type} & \multicolumn{4}{c}{Abinit} & uniform- 20\%\\
        492  \multirow{5}{*}{Application type} & \multicolumn{4}{c}{Abinit} & uniform - 20\%\\
    493 493  & \multicolumn{4}{c}{C-Ray} & uniform - 20\%\\
    494 494  & \multicolumn{4}{c}{Tar} & uniform - 20\%\\
    …
    549 549  \end{table}
    550 550
    551      As mentioned, we assign tasks to nodes minimizing the value of expression: $(P-Pidle)*exec\_time$, where $P$ denotes observed power of the node running the particular application and $exec_time$ refers to the measured application running time. Based on the application and hardware profiles, we expected that Atom D510 would be the preferred node. Obtained scheduled, that is presented in the Gantt chart in Figure~\ref{fig:70eo} along with the energy and system usage, confirmed our assumptions. Atom D510 nodes are nearly fully loaded, while the least energy-favorable AMD nodes are used only when other ones are busy.
        551  As mentioned, we assign tasks to nodes minimizing the value of the expression $(P-P_{idle})*exec\_time$, where $P$ denotes the observed power of the node running the particular application and $exec\_time$ refers to the measured application running time. Based on the application and hardware profiles, we expected that Atom D510 would be the preferred node. The obtained schedule, presented in the Gantt chart in Figure~\ref{fig:70eo} along with the energy and system usage, confirmed our assumptions. Atom D510 nodes are nearly fully loaded, while the least energy-favorable AMD nodes are used only when the other ones are busy.
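The allocation criterion above can be sketched in a few lines of Python. This is an illustrative toy, not the DCWoRMS implementation: only the shape of the $(P-P_{idle})*exec\_time$ rule comes from the text, while all profile numbers and the third node name are invented placeholders.

```python
# Toy sketch of the energy-aware allocation rule (NOT the DCWoRMS scheduler).
# For each candidate node we score the application by the energy it adds
# above the node's idle draw, (P - P_idle) * exec_time, and pick the minimum.
# All profile numbers are invented; "Intel i7" is a placeholder node name.

PROFILES = {
    # node name: observed power P [W], idle power [W], measured runtime [s]
    "Atom D510":  {"P": 30.0, "P_idle": 20.0, "exec_time": 1200.0},
    "AMD Fusion": {"P": 40.0, "P_idle": 25.0, "exec_time": 900.0},
    "Intel i7":   {"P": 90.0, "P_idle": 45.0, "exec_time": 400.0},
}

def extra_energy(profile):
    """Energy the application adds above idle: (P - P_idle) * exec_time [J]."""
    return (profile["P"] - profile["P_idle"]) * profile["exec_time"]

def pick_node(profiles):
    """Return the node with the lowest extra-energy score for this application."""
    return min(profiles, key=lambda name: extra_energy(profiles[name]))
```

With these invented profiles the slow but frugal Atom D510 wins the score even though it runs the task three times longer than the fastest node, mirroring the schedule described above.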
    552 552
    553 553  \begin{figure}[h!]
    …
    569 569  \end{figure}
    570 570
    571      Estimated \textbf{total energy usage} of the system is 30,568 kWh. As we can see, this approach significantly improved the value of this criterion, comparing to all of the previous policies. Moreover, the proposed allocation strategy does not worsen the \textbf{workload completion time} criterion, where the resulting value is equal to 533 820 s.
        571  The estimated \textbf{total energy usage} of the system is 30,568 kWh. As we can see, this approach significantly improved the value of this criterion compared to the previous policies. Moreover, the proposed allocation strategy does not worsen the \textbf{workload completion time} criterion; the resulting value is equal to 533 820 s.
    572 572
    573 573  \subsubsection{Frequency scaling}
    574 574
    575      The last considered by us case is modification of the random strategy. We assume that tasks do not have deadlines and the only criterion which is taken into consideration is the total energy consumption. All the considered workloads have been executed on the testbed configured for three different possible frequencies of CPUs – the lowest, medium and the highest one. The experiment was intended to check if the benefit of running the workload on less power-consuming frequency of CPU is not leveled by the prolonged time of execution of the workload.
    576
    577
        575  The last case we consider is a modification of the random strategy. We assume that tasks do not have deadlines and the only criterion taken into consideration is the total energy consumption. In this experiment we configured the simulated infrastructure for the lowest possible CPU frequencies. The experiment was intended to check whether the benefit of running the workload at a less power-consuming CPU frequency is not outweighed by the prolonged execution time of the workload. The values of the evaluated criteria are as follows: \textbf{workload completion time}: 1 065 356 s and \textbf{total energy usage}: 77,109 kWh. As we can see, for the given load of the system (70\%), the cost of running the workload, which requires almost twice as much time, cannot be compensated by the lower power draw. Moreover, as can be observed in the charts in Figure~\ref{fig:70dfs}, the execution times on the slowest nodes (Atom D510) visibly exceed the corresponding values on the other servers.
        576
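The trade-off above can also be checked arithmetically from the identity $E = P_{avg} \cdot T$: dividing each reported energy by its makespan yields the average power the run actually drew. A minimal sketch using the two reported (energy, makespan) pairs, assuming the comma in the energy figures is a decimal separator (the helper name is ours):

```python
# Back-of-the-envelope check: energy = average power x time, so each reported
# (energy, makespan) pair implies the run's average power draw.
# Energies are read with a decimal comma, i.e. "77,109 kWh" -> 77.109 kWh.

def implied_avg_power_w(energy_kwh, time_s):
    """Average power [W] implied by total energy [kWh] and duration [s]."""
    return energy_kwh * 3_600_000.0 / time_s  # 1 kWh = 3.6e6 J

# Frequency-downgraded (DFS) run at 70% load: 77.109 kWh over 1 065 356 s.
dfs_power = implied_avg_power_w(77.109, 1_065_356)
# Energy-optimized (EO + NPM) run at 70% load: 30.568 kWh over 533 820 s.
eo_power = implied_avg_power_w(30.568, 533_820)
```

The downgraded run draws no lower an average power (the random allocation keeps more nodes busy) while taking roughly twice as long, so it cannot win on total energy at this load, consistent with the conclusion above.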
    578 577  \begin{figure}[h!]
    579 578  \centering
    …
    582 581  \end{figure}
    583 582
    584      \textbf{total energy usage}: 77,108 kWh
    585      \textbf{workload completion time}: 1 065 356 s
    586
    587
    588      ....
    589
        583
        584  Looking for a trade-off between total completion time and energy usage, we searched for the workload level that can benefit, in terms of energy efficiency, from the lower system performance. For the frequency downgrading policy, we observed an improvement in the energy usage criterion only for the workload resulting in 10\% system load.
        585  Table~\ref{loadEnergy} and Table~\ref{loadMakespan} contain the values of the evaluation criteria (total energy usage and makespan, respectively) gathered for all investigated workloads.
    590 586
    591 587  \begin{table}[h!]
    …
    594 590  \hline
    595 591  & \multicolumn{5}{c}{Strategy}\\
    596      Load & R & R + NPM & EO & EO + NPM & DFS\\
        592  Load & R & R+NPM & EO & EO+NPM & DFS\\
    597 593  \hline
    598 594  10\% & 241,337 & 37,811 & 239,667 & 25,571 & 239,278 \\
    599      30\% & 89,853 & 38,059 & 88,823 & 25,595.94 & 90,545 \\
        595  30\% & 89,853 & 38,059 & 88,823 & 25,595 & 90,545 \\
    600 596  50\% & 59,112 & 36,797 & 58,524 & 26,328 & 76,020 \\
    601      70\% & 46,883 & 36,705 & 46,3062 & 30,568 & 77,109 \\
        597  70\% & 46,883 & 36,705 & 46,305 & 30,568 & 77,109 \\
    602 598  \hline
    603 599  \end{tabular}
     
    610 606  \hline
    611 607  & \multicolumn{5}{c}{Strategy}\\
    612      Load & R & R + NPM & EO & EO + NPM & DFS\\
        608  Load & R & R+NPM & EO & EO+NPM & DFS\\
    613 609  \hline
    614 610  10\% & 3 605 428 & 3 605 428 & 3 605 428 & 3 605 428 & 3 622 968 \\