Changeset 720 for papers


Timestamp: 12/29/12 15:06:57
Author: wojtekp
Message:
Location: papers/SMPaT-2012_DCWoRMS
Files: 1 added, 5 edited

  • papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.aux

    r718 r720

      \@writefile{lof}{\contentsline {figure}{\numberline {13}{\ignorespaces Frequency downgrading strategy}}{26}}
      \newlabel{fig:70dfs}{{13}{26}}
    - \@writefile{lot}{\contentsline {table}{\numberline {4}{\ignorespaces Energy usage [kWh] for different level of system load}}{27}}
    + \@writefile{lof}{\contentsline {figure}{\numberline {14}{\ignorespaces Schedules obtained for Random strategy (left) and DFS strategy (right) for 10\% of system load}}{27}}
    + \newlabel{fig:dfsComp}{{14}{27}}
    + \@writefile{lot}{\contentsline {table}{\numberline {4}{\ignorespaces Energy usage [kWh] for different level of system load. R - Random, R+NPM - Random + node power management, EO - Energy optimization, EO+NPM - Energy optimization + node power management, DFS - Dynamic Frequency Scaling}}{27}}
      \newlabel{loadEnergy}{{4}{27}}
    - \@writefile{lot}{\contentsline {table}{\numberline {5}{\ignorespaces Makespan [s] for different level of system load}}{27}}
    - \newlabel{loadMakespan}{{5}{27}}
    - \@writefile{toc}{\contentsline {section}{\numberline {6}DCWoRMS application/use cases}{27}}
    - \newlabel{sec:coolemall}{{6}{27}}
    + \@writefile{lot}{\contentsline {table}{\numberline {5}{\ignorespaces Makespan [s] for different level of system load. R - Random, R+NPM - Random + node power management, EO - Energy optimization, EO+NPM - Energy optimization + node power management, DFS - Dynamic Frequency Scaling}}{28}}
    + \newlabel{loadMakespan}{{5}{28}}
    + \@writefile{toc}{\contentsline {section}{\numberline {6}DCWoRMS application/use cases}{28}}
    + \newlabel{sec:coolemall}{{6}{28}}
      \bibcite{fit4green}{{1}{}{{}}{{}}}
      \bibcite{CloudSim}{{2}{}{{}}{{}}}
    + \@writefile{toc}{\contentsline {section}{\numberline {7}Conclusions and future work}{29}}
    + \newlabel{}{{7}{29}}
      \bibcite{DCSG}{{3}{}{{}}{{}}}
      \bibcite{DCD_Romonet}{{4}{}{{}}{{}}}

      \bibcite{Ghislain}{{6}{}{{}}{{}}}
      \bibcite{games}{{7}{}{{}}{{}}}
    - \@writefile{toc}{\contentsline {section}{\numberline {7}Conclusions and future work}{29}}
    - \newlabel{}{{7}{29}}
      \bibcite{GreenCloud}{{8}{}{{}}{{}}}
      \bibcite{sla}{{9}{}{{}}{{}}}
  • papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.fdb_latexmk

    r719 r720  
      # Fdb version 2
    - ["pdflatex"] 1356772502 "elsarticle-DCWoRMS.tex" "elsarticle-DCWoRMS.pdf" "elsarticle-DCWoRMS"
    + ["pdflatex"] 1356789938 "elsarticle-DCWoRMS.tex" "elsarticle-DCWoRMS.pdf" "elsarticle-DCWoRMS"
      "/usr/local/texlive/2010/texmf-dist/tex/context/base/supp-pdf.mkii" 1251025892 71625 fad1c4b52151c234b6873a255b0ad6b3 ""
      "/usr/local/texlive/2010/texmf-dist/tex/generic/oberdiek/etexcmds.sty" 1267408169 5670 cacb018555825cfe95cd1e1317d82c1d ""

      "/usr/local/texlive/2010/texmf-dist/tex/latex/psnfss/upsy.fd" 1137110629 148 2da0acd77cba348f34823f44cabf0058 ""
      "/usr/local/texlive/2010/texmf-dist/tex/latex/psnfss/upzd.fd" 1137110629 148 b2a94082cb802f90d3daf6dd0c7188a0 ""
    - "elsarticle-DCWoRMS.aux" 1356772505 7300 77089d653ebaaabee96ed90d7881bfd3 ""
    - "elsarticle-DCWoRMS.spl" 1356772503 0 d41d8cd98f00b204e9800998ecf8427e ""
    - "elsarticle-DCWoRMS.tex" 1356772502 65393 e1002ec8f4e2fa90cf08425cf83997ea ""
    + "elsarticle-DCWoRMS.aux" 1356789941 7837 5e91641a0d7007551f7b0fa7dbed8f0b ""
    + "elsarticle-DCWoRMS.spl" 1356789939 0 d41d8cd98f00b204e9800998ecf8427e ""
    + "elsarticle-DCWoRMS.tex" 1356789934 66354 8fcf12eb6ee32d4660a4487151e8dc8e ""
      "elsarticle.cls" 1352447924 26095 ad44f4892f75e6e05dca57a3581f78d1 ""
      "fig/70dfs.png" 1356617710 212573 e013d714dd1377384ed7793222210051 ""

      "fig/airModel.png" 1353405890 41411 f33639119a59ae1d2eabb277137f0042 ""
      "fig/arch.png" 1353403503 184917 61b6fddc71ce603779f09b272cd2f164 ""
    + "fig/dfsComp.png" 1356777108 463823 66bdecf7e173c8da341c4e74dc7d8027 ""
      "fig/jobsStructure.png" 1353403491 128220 3ee11e5fa0d14d8265671725666ef6f7 ""
      "fig/power-fans.png" 1354275938 26789 030a69cecd0eda7c4173d2a6467b132b ""
  • papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.tex

    r719 r720  
      
      \begin {table}[h!]
    -
    + \centering
      \begin{tabular}{llr}
      \hline

      Atom D510 64 Bit & 2 GB & 4 \\
      \hline
    - \multicolumn{3}{c}{Storage} \\
    - Type & Size & Connection  \\
    - \hline
    - Storage Head 520 & 16 x 300 GB SSD & 2 x 10 Gbit/s CX4 \\
    - \hline
    + %\multicolumn{3}{c}{Storage} \\
    + %Type & Size & Connection  \\
    + %\hline
    + %Storage Head 520 & 16 x 300 GB SSD & 2 x 10 Gbit/s CX4 \\
    + %\hline
      \end{tabular}
      \caption {\label{testBed} RECS system configuration}

      
      \begin {table}[ tp]
    -
    + \centering
      \begin{tabular}{l c c c c r}
      \hline
     
      \end{figure}
      
    - In this version of experiment we neglected additional cost and time necessary to change the power state of resources. As can be observed in the power consumption chart in the Figure~\ref{fig:70rnpm}, switching of unused nodes led to decrease of the total energy consumption. As expected, with respect to the makespan criterion, both approaches perform equally reaching \textbf{workload completion time}: 533 820 s. However, the pure random strategy was significantly outperformed in terms of energy usage, by the policy with additional node power management with it \textbf{total energy usage}: 36,705 kWh. The overall energy savings reached 22\%.
    + In this version of the experiment we neglected the additional cost and time necessary to change the power state of resources. As can be observed in the power consumption chart in Figure~\ref{fig:70rnpm}, switching off unused nodes led to a decrease in the total energy consumption. As expected, with respect to the makespan criterion, both approaches perform equally, reaching a \textbf{workload completion time} of 533 820 s. However, the pure random strategy was significantly outperformed in terms of energy usage by the policy with additional node power management, whose \textbf{total energy usage} was 36,705 kWh. The overall energy savings reached 22\%.
      
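For illustration only (not part of the changeset and not the DCWoRMS API): a rough Java sketch of the node power management policy referred to above, i.e. switching off nodes that are left without work, with the cost and time of power-state transitions neglected exactly as in the experiment. All class and method names below are invented; DCWoRMS itself is Java-based, hence the language choice.

    import java.util.*;

    // Hypothetical sketch of a "switch off unused nodes" policy (invented names, not DCWoRMS code).
    class NodePowerManagementSketch {

        static class Node {
            final String name;
            boolean busy = false;       // is a task currently running on this node?
            boolean poweredOn = true;   // current power state
            Node(String name) { this.name = name; }
        }

        // Called after every task start/completion event: idle nodes are powered off,
        // nodes that received work are powered back on. Transition cost and time are
        // ignored, matching the simplifying assumption made in the experiment.
        static void managePower(Collection<Node> nodes) {
            for (Node n : nodes) {
                n.poweredOn = n.busy;
            }
        }

        public static void main(String[] args) {
            List<Node> nodes = Arrays.asList(new Node("node-1"), new Node("node-2"));
            nodes.get(0).busy = true;   // only node-1 has work
            managePower(nodes);
            for (Node n : nodes) {
                System.out.println(n.name + " poweredOn=" + n.poweredOn);
            }
        }
    }

Under this zero-cost assumption the policy can only lower energy use without affecting the makespan, which is consistent with the equal completion times and the 22\% energy saving reported above.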
      \subsubsection{Energy optimization}
     
      \end {table}
      
    - As mentioned, we assign tasks to nodes minimizing the value of expression: $(P-Pidle)*exec\_time$, where $P$ denotes observed power of the node running the particular application and $exec\_time$ refers to the measured application running time. Based on the application and hardware profiles, we expected that Atom D510 would be the preferred node. Obtained scheduled, that is presented in the Gantt chart in Figure~\ref{fig:70eo} along with the energy and system usage, confirmed our assumptions. Atom D510 nodes are nearly fully loaded, while the least energy-favorable AMD nodes are used only when other ones are busy.
    + As mentioned, we assign tasks to nodes minimizing the value of the expression $(P - P_{idle}) \cdot exec\_time$, where $P$ denotes the observed power of the node running the particular application and $exec\_time$ refers to the measured application running time. Based on the application and hardware profiles, we expected that Atom D510 would be the preferred node. The obtained schedule, presented in the Gantt chart in Figure~\ref{fig:70eo} along with the energy and system usage, confirmed our assumptions. Atom D510 nodes are nearly fully loaded, while the least energy-favourable AMD nodes are used only when other ones are busy.
      
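For illustration only: a minimal Java sketch of the allocation criterion described above, selecting the node that minimizes (P - Pidle) * exec_time on the basis of application and hardware profiles. The class, the method and the numeric profiles are invented for this sketch and are neither the DCWoRMS plugin API nor measurements from the paper.

    import java.util.*;

    // Hypothetical sketch (invented names): pick the node with the smallest (P - Pidle) * exec_time.
    class EnergyOptimizationSketch {

        static class Profile {
            final double power;      // P: observed power [W] while running the application
            final double idlePower;  // Pidle: idle power [W] of the node
            final double execTime;   // measured execution time [s] of the application on this node
            Profile(double power, double idlePower, double execTime) {
                this.power = power;
                this.idlePower = idlePower;
                this.execTime = execTime;
            }
        }

        // Returns the name of the node minimizing (P - Pidle) * exec_time.
        static String selectNode(Map<String, Profile> profiles) {
            String best = null;
            double bestCost = Double.POSITIVE_INFINITY;
            for (Map.Entry<String, Profile> e : profiles.entrySet()) {
                Profile p = e.getValue();
                double cost = (p.power - p.idlePower) * p.execTime;
                if (cost < bestCost) {
                    bestCost = cost;
                    best = e.getKey();
                }
            }
            return best;
        }

        public static void main(String[] args) {
            // Illustrative numbers only, not the measurements from the paper.
            Map<String, Profile> profiles = new LinkedHashMap<>();
            profiles.put("Atom D510", new Profile(25, 20, 1200));
            profiles.put("Intel i7",  new Profile(70, 40, 400));
            profiles.put("AMD",       new Profile(90, 50, 700));
            System.out.println("Selected node: " + selectNode(profiles));   // prints "Atom D510"
        }
    }

With these made-up numbers the Atom D510 entry gives the smallest product, mirroring the expectation stated in the text that Atom D510 would be the preferred node.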
      \begin{figure}[h!]
     
      
      
    - The next strategy is similar to the previous one, so making the assignment of task to the node, we still take into consideration application and hardware profiles, but in that case we assume that the system supports possibility of switching off unused nodes. In this case the minimal energy consumption is achieved by assigning the task to the node for which the product of power consumption and time of execution is minimal. In other words we minimized the following expression: "$P*exec\_time$.
    - Contrary to the previous strategy, we expected Intel I7 nodes to be allocated first. Generated Gantt chart is compatible with our expectations.
    + The next strategy is similar to the previous one: when assigning a task to a node, we still take into consideration the application and hardware profiles, but this time we assume that the system supports switching off unused nodes. In this case the minimal energy consumption is achieved by assigning the task to the node for which the product of power consumption and execution time is minimal; in other words, we minimized the expression $P \cdot exec\_time$.
    + Contrary to the previous strategy, we expected Intel I7 nodes to be allocated first. The generated Gantt chart is consistent with our expectations.
      
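Again purely illustrative: when unused nodes can be switched off, their idle power no longer has to be paid in any case, so the criterion changes from (P - Pidle) * exec_time to P * exec_time. The sketch below reuses the invented numbers from the previous sketch; none of the names or values come from the paper.

    import java.util.*;

    // Hypothetical sketch: allocation criterion for systems that can power off unused nodes.
    class EnergyOptimizationWithSwitchOffSketch {

        // Cost of running the task on a node when idle nodes can be switched off:
        // the full power P is charged for the whole execution time.
        static double cost(double power, double execTime) {
            return power * execTime;   // P * exec_time
        }

        public static void main(String[] args) {
            // Same illustrative numbers as in the previous sketch: {power [W], execution time [s]}.
            Map<String, double[]> candidates = new LinkedHashMap<>();
            candidates.put("Atom D510", new double[]{25, 1200});
            candidates.put("Intel i7",  new double[]{70, 400});
            candidates.put("AMD",       new double[]{90, 700});

            String best = null;
            double bestCost = Double.POSITIVE_INFINITY;
            for (Map.Entry<String, double[]> e : candidates.entrySet()) {
                double c = cost(e.getValue()[0], e.getValue()[1]);
                if (c < bestCost) {
                    bestCost = c;
                    best = e.getKey();
                }
            }
            System.out.println("Selected node: " + best);   // prints "Intel i7"
        }
    }

With the full power charged for the whole execution time, the faster Intel i7 node now yields the smaller product, matching the expectation in the text that Intel I7 nodes would be allocated first.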
      \begin{figure}[h!]
     
      \end{figure}
      
    - Estimated \textbf{total energy usage} of the system is 30,568 kWh. As we can see, this approach significantly improved the value of this criterion, comparing to the previous policies. Moreover, the proposed allocation strategy does not worsen the \textbf{workload completion time} criterion, where the resulting value is equal to 533 820 s.
    + The estimated \textbf{total energy usage} of the system is 30,568 kWh. As we can see, this approach significantly improved the value of this criterion compared to the previous policies. Moreover, the proposed allocation strategy does not worsen the \textbf{workload completion time} criterion, for which the resulting value is equal to 533 820 s.
      
      \subsubsection{Frequency scaling}
     
      
      As we were looking for a trade-off between total completion time and energy usage, we searched for the system load level at which lower system performance can be beneficial in terms of energy efficiency. For the frequency downgrading policy, we observed an improvement in the energy usage criterion only for the workload resulting in 10\% system load.
    +
    + Figure~\ref{fig:dfsComp} shows the schedules obtained for the Random and DFS strategies. One should easily note that the
    + \begin{figure}[h!]
    + \centering
    + \includegraphics[width = 12cm]{fig/dfsComp.png}
    + \caption{\label{fig:dfsComp} Schedules obtained for Random strategy (left) and DFS strategy (right) for 10\% of system load}
    + \end{figure}
    +
    +
      The following tables, Table~\ref{loadEnergy} and Table~\ref{loadMakespan}, contain the values of the evaluation criteria (total energy usage and makespan, respectively) gathered for all investigated workloads.
      
     
      \hline
      \end{tabular}
    - \caption {\label{loadEnergy} Energy usage [kWh] for different level of system load}
    + \caption {\label{loadEnergy} Energy usage [kWh] for different levels of system load. R - Random, R+NPM - Random + node power management, EO - Energy optimization, EO+NPM - Energy optimization + node power management, DFS - Dynamic Frequency Scaling}
      \end {table}
      
     
      \hline
      \end{tabular}
    - \caption {\label{loadMakespan} Makespan [s] for different level of system load}
    + \caption {\label{loadMakespan} Makespan [s] for different levels of system load. R - Random, R+NPM - Random + node power management, EO - Energy optimization, EO+NPM - Energy optimization + node power management, DFS - Dynamic Frequency Scaling}
      \end {table}
    +
    + One should easily note that the gain from switching off unused nodes decreases with increasing workload density. In general, for a highly loaded system such a policy is not applicable, due to the cost related to the switching process and the relatively small benefits.
    +
    + ...
      
      \section{DCWoRMS application/use cases}\label{sec:coolemall}