Timestamp: 12/29/12 15:06:57 (12 years ago)
Location: papers/SMPaT-2012_DCWoRMS
Files: 1 added, 5 edited
Legend: unmodified lines carry both old and new line numbers; "-" marks lines removed in the new revision, "+" marks lines added.
papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.aux (r718 → r720)

 94  94    \@writefile{lof}{\contentsline {figure}{\numberline {13}{\ignorespaces Frequency downgrading strategy}}{26}}
 95  95    \newlabel{fig:70dfs}{{13}{26}}
 96      - \@writefile{lot}{\contentsline {table}{\numberline {4}{\ignorespaces Energy usage [kWh] for different level of system load}}{27}}
     96  + \@writefile{lof}{\contentsline {figure}{\numberline {14}{\ignorespaces Schedules obtained for Random strategy (left) and DFS strategy (right) for 10\% of system load}}{27}}
     97  + \newlabel{fig:dfsComp}{{14}{27}}
     98  + \@writefile{lot}{\contentsline {table}{\numberline {4}{\ignorespaces Energy usage [kWh] for different level of system load. R - Random, R+NPM - Random + node power management, EO - Energy optimization, EO+NPM - Energy optimization + node power management, DFS - Dynamic Frequency Scaling}}{27}}
 97  99    \newlabel{loadEnergy}{{4}{27}}
 98      - \@writefile{lot}{\contentsline {table}{\numberline {5}{\ignorespaces Makespan [s] for different level of system load }}{27}}
 99      - \newlabel{loadMakespan}{{5}{27}}
100      - \@writefile{toc}{\contentsline {section}{\numberline {6}DCWoRMS application/use cases}{27}}
101      - \newlabel{sec:coolemall}{{6}{27}}
    100  + \@writefile{lot}{\contentsline {table}{\numberline {5}{\ignorespaces Makespan [s] for different level of system load. R - Random, R+NPM - Random + node power management, EO - Energy optimization, EO+NPM - Energy optimization + node power management, DFS - Dynamic Frequency Scaling}}{28}}
    101  + \newlabel{loadMakespan}{{5}{28}}
    102  + \@writefile{toc}{\contentsline {section}{\numberline {6}DCWoRMS application/use cases}{28}}
    103  + \newlabel{sec:coolemall}{{6}{28}}
102 104    \bibcite{fit4green}{{1}{}{{}}{{}}}
103 105    \bibcite{CloudSim}{{2}{}{{}}{{}}}
    106  + \@writefile{toc}{\contentsline {section}{\numberline {7}Conclusions and future work}{29}}
    107  + \newlabel{}{{7}{29}}
104 108    \bibcite{DCSG}{{3}{}{{}}{{}}}
105 109    \bibcite{DCD_Romonet}{{4}{}{{}}{{}}}
…
107 111    \bibcite{Ghislain}{{6}{}{{}}{{}}}
108 112    \bibcite{games}{{7}{}{{}}{{}}}
109      - \@writefile{toc}{\contentsline {section}{\numberline {7}Conclusions and future work}{29}}
110      - \newlabel{}{{7}{29}}
111 113    \bibcite{GreenCloud}{{8}{}{{}}{{}}}
112 114    \bibcite{sla}{{9}{}{{}}{{}}}
papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.fdb_latexmk (r719 → r720)

  1   1    # Fdb version 2
  2      - ["pdflatex"] 1356772502 "elsarticle-DCWoRMS.tex" "elsarticle-DCWoRMS.pdf" "elsarticle-DCWoRMS"
      2  + ["pdflatex"] 1356789938 "elsarticle-DCWoRMS.tex" "elsarticle-DCWoRMS.pdf" "elsarticle-DCWoRMS"
  3   3    "/usr/local/texlive/2010/texmf-dist/tex/context/base/supp-pdf.mkii" 1251025892 71625 fad1c4b52151c234b6873a255b0ad6b3 ""
  4   4    "/usr/local/texlive/2010/texmf-dist/tex/generic/oberdiek/etexcmds.sty" 1267408169 5670 cacb018555825cfe95cd1e1317d82c1d ""
…
 30  30    "/usr/local/texlive/2010/texmf-dist/tex/latex/psnfss/upsy.fd" 1137110629 148 2da0acd77cba348f34823f44cabf0058 ""
 31  31    "/usr/local/texlive/2010/texmf-dist/tex/latex/psnfss/upzd.fd" 1137110629 148 b2a94082cb802f90d3daf6dd0c7188a0 ""
 32      - "elsarticle-DCWoRMS.aux" 1356772505 7300 77089d653ebaaabee96ed90d7881bfd3 ""
 33      - "elsarticle-DCWoRMS.spl" 1356772503 0 d41d8cd98f00b204e9800998ecf8427e ""
 34      - "elsarticle-DCWoRMS.tex" 1356772502 65393 e1002ec8f4e2fa90cf08425cf83997ea ""
     32  + "elsarticle-DCWoRMS.aux" 1356789941 7837 5e91641a0d7007551f7b0fa7dbed8f0b ""
     33  + "elsarticle-DCWoRMS.spl" 1356789939 0 d41d8cd98f00b204e9800998ecf8427e ""
     34  + "elsarticle-DCWoRMS.tex" 1356789934 66354 8fcf12eb6ee32d4660a4487151e8dc8e ""
 35  35    "elsarticle.cls" 1352447924 26095 ad44f4892f75e6e05dca57a3581f78d1 ""
 36  36    "fig/70dfs.png" 1356617710 212573 e013d714dd1377384ed7793222210051 ""
…
 41  41    "fig/airModel.png" 1353405890 41411 f33639119a59ae1d2eabb277137f0042 ""
 42  42    "fig/arch.png" 1353403503 184917 61b6fddc71ce603779f09b272cd2f164 ""
     43  + "fig/dfsComp.png" 1356777108 463823 66bdecf7e173c8da341c4e74dc7d8027 ""
 43  44    "fig/jobsStructure.png" 1353403491 128220 3ee11e5fa0d14d8265671725666ef6f7 ""
 44  45    "fig/power-fans.png" 1354275938 26789 030a69cecd0eda7c4173d2a6467b132b ""
papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.tex (r719 → r720)

426 426
427 427    \begin {table}[h!]
428 428    \centering
429 429    \begin{tabular}{llr}
430 430    \hline
…
436 436    Atom D510 64 Bit & 2 GB & 4 \\
437 437    \hline
438      - \multicolumn{3}{c}{Storage} \\
439      - Type & Size & Connection \\
440      - \hline
441      - Storage Head 520 & 16 x 300 GB SSD & 2 x 10 Gbit/s CX4 \\
442      - \hline
    438  + %\multicolumn{3}{c}{Storage} \\
    439  + %Type & Size & Connection \\
    440  + %\hline
    441  + %Storage Head 520 & 16 x 300 GB SSD & 2 x 10 Gbit/s CX4 \\
    442  + %\hline
443 443    \end{tabular}
444 444    \caption {\label{testBed} RECS system configuration}
…
471 471
472 472    \begin {table}[tp]
473 473    \centering
474 474    \begin{tabular}{l c c c c r}
475 475    \hline
…
529 529    \end{figure}
530 530
531      - In this version of experiment we neglected additional cost and time necessary to change the power state of resources. As can be observed in the power consumption chart in the Figure~\ref{fig:70rnpm}, switching of unused nodes led to decrease of the total energy consumption. As expected, with respect to the makespan criterion, both approaches perform equally reaching \textbf{workload completion time}: 533 820 s. However, the pure random strategy was significantly outperformed in terms of energy usage, by the policy with additional node power management with it \textbf{total energy usage}: 36,705 kWh. The overall energy savings reached 22\%.
    531  + In this version of experiment we neglected additional cost and time necessary to change the power state of resources. As can be observed in the power consumption chart in the Figure~\ref{fig:70rnpm}, switching of unused nodes led to decrease of the total energy consumption. As expected, with respect to the makespan criterion, both approaches perform equally reaching \textbf{workload completion time}: 533 820 s. However, the pure random strategy was significantly outperformed in terms of energy usage, by the policy with additional node power management with its \textbf{total energy usage}: 36,705 kWh. The overall energy savings reached 22\%.
532 532
533 533    \subsubsection{Energy optimization}
…
549 549    \end {table}
550 550
551      - As mentioned, we assign tasks to nodes minimizing the value of expression: $(P-Pidle)*exec\_time$, where $P$ denotes observed power of the node running the particular application and $exec\_time$ refers to the measured application running time. Based on the application and hardware profiles, we expected that Atom D510 would be the preferred node. Obtained scheduled, that is presented in the Gantt chart in Figure~\ref{fig:70eo} along with the energy and system usage, confirmed our assumptions. Atom D510 nodes are nearly fully loaded, while the least energy-favorable AMD nodes are used only when other ones are busy.
    551  + As mentioned, we assign tasks to nodes minimizing the value of expression: $(P-Pidle)*exec\_time$, where $P$ denotes observed power of the node running the particular application and $exec\_time$ refers to the measured application running time. Based on the application and hardware profiles, we expected that Atom D510 would be the preferred node. Obtained schedule, that is presented in the Gantt chart in Figure~\ref{fig:70eo} along with the energy and system usage, confirmed our assumptions. Atom D510 nodes are nearly fully loaded, while the least energy-favourable AMD nodes are used only when other ones are busy.
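The paragraph revised at line 551 describes the energy-optimization placement rule: pick the node minimizing $(P-Pidle)*exec\_time$. A minimal sketch of that selection logic follows, in Python rather than DCWoRMS's own (Java) plugin interface; the node names echo the testbed table above, but every power and runtime value is a hypothetical placeholder, not a measurement from the paper.

{{{#!python
# Sketch of the energy-optimization placement rule: choose the node minimizing
# (P - P_idle) * exec_time, i.e. the energy a task adds on top of idle power,
# which is the relevant cost when idle nodes stay powered on anyway.
# All numbers below are hypothetical placeholders.

NODE_PROFILES = {
    # node type: (P_running [W], P_idle [W], exec_time [s]) -- placeholders
    "AMD":       (27.0, 20.0, 1500.0),
    "Intel I7":  (35.0, 22.0,  900.0),
    "Atom D510": (23.0, 19.0, 1800.0),
}

def marginal_energy(node: str) -> float:
    """Energy above idle consumed by running the task on this node [J]."""
    p, p_idle, exec_time = NODE_PROFILES[node]
    return (p - p_idle) * exec_time

def pick_node(free_nodes: list[str]) -> str:
    """Free node with the lowest marginal energy for this task."""
    return min(free_nodes, key=marginal_energy)

print(pick_node(["AMD", "Intel I7", "Atom D510"]))  # -> Atom D510 here
}}}

With these placeholder profiles the Atom D510 entry wins, matching the revised text's expectation that Atom nodes fill up first while the AMD nodes are used only when everything else is busy.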
552 552
553 553    \begin{figure}[h!]
…
560 560
561 561
562      - The next strategy is similar to the previous one, so making the assignment of task to the node, we still take into consideration application and hardware profiles, but in that case we assume that the system supports possibility of switching off unused nodes. In this case the minimal energy consumption is achieved by assigning the task to the node for which the product of power consumption and time of execution is minimal. In other words we minimized the following expression: "$P*exec\_time$.
563      - Contrary to the previous strategy, we expected Intel I7 nodes to be allocated first. Generated Gantt chart is compatible with our expectations.
    562  + The next strategy is similar to the previous one, so making the assignment of task to the node, we still take into consideration application and hardware profiles, but in that case we assume that the system supports possibility of switching off unused nodes. In this case the minimal energy consumption is achieved by assigning the task to the node for which the product of power consumption and time of execution is minimal. In other words we minimized the following expression: $P*exec\_time$.
    563  + Contrary to the previous strategy, we expected Intel I7 nodes to be allocated first. Generated Gantt chart is consistent with our expectations.
564 564
565 565    \begin{figure}[h!]
…
569 569    \end{figure}
570 570
571      - Estimated \textbf{total energy usage} of the system is 30,568 kWh. As we can see, this approach significantly improved the value of this criterion, comparing to the previous policies. Moreover, the proposed allocation strategy does not worsen the \textbf{workload completion time} criterion, where the resulting value is equal to 533 820 s.
    571  + Estimated \textbf{total energy usage} of the system is 30,568 kWh. As we can see, this approach significantly improved the value of this criterion, comparing to the previous policies. Moreover, the proposed allocation strategy does not worsen the \textbf{workload completion time} criterion, for which the resulting value is equal to 533 820 s.
572 572
573 573    \subsubsection{Frequency scaling}
…
583 583
584 584    As we were looking for the trade-off between total completion time and energy usage, we were searching for the workload load level that can benefit from the lower system performance in terms of energy-efficiency. For the frequency downgrading policy, we observed the improvement on the energy usage criterion only for the workload resulting in 10\% system load.
    585  +
    586  + Figure~\ref{fig:dfsComp} shows schedules obtained for Random and DFS strategy. One should easily note that the
    587  + \begin{figure}[h!]
    588  + \centering
    589  + \includegraphics[width = 12cm]{fig/dfsComp.png}
    590  + \caption{\label{fig:dfsComp} Schedules obtained for Random strategy (left) and DFS strategy (right) for 10\% of system load}
    591  + \end{figure}
    592  +
    593  +
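The frequency-scaling hunk reports an energy improvement only for the 10\% load workload. A toy model makes the trade-off behind that result concrete: lowering the frequency cuts power but stretches execution time, so per-task energy E(f) = P(f) * t(f) falls only while the power saving outweighs the longer runtime. The cubic power model and all constants below are assumptions for illustration, not values from the paper.

{{{#!python
# Toy DFS trade-off: at frequency f the task runs longer, t(f) = t_base * f_base / f,
# while node power drops, modelled here as P(f) = P_IDLE + C * f**3 (an assumed
# cubic dynamic-power approximation). Energy per task is E(f) = P(f) * t(f).
# All constants are illustrative placeholders.

P_IDLE = 20.0                  # W, placeholder idle power
C = 3.0                        # W / GHz^3, placeholder dynamic-power coefficient
T_BASE, F_BASE = 900.0, 2.0    # 900 s at a nominal 2.0 GHz (placeholders)

def energy_kwh(f_ghz: float) -> float:
    t = T_BASE * F_BASE / f_ghz        # execution time stretches at lower f
    p = P_IDLE + C * f_ghz ** 3        # power falls at lower f
    return p * t / 3.6e6               # J -> kWh

for f in (2.0, 1.6, 1.2, 0.8):
    print(f"{f:.1f} GHz: {energy_kwh(f):.4f} kWh")
}}}

Under these constants the minimum sits at an intermediate frequency; on a busy system the stretched schedule also keeps nodes powered for longer, which is consistent with the observation that only the lightly loaded workload benefits from downgrading.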
585 594    The following tables: Table~\ref{loadEnergy} and Table~\ref{loadMakespan} contain the values of evaluation criteria (total energy usage and makespan respectively) gathered for all investigated workloads.
586 595
…
598 607    \hline
599 608    \end{tabular}
600      - \caption {\label{loadEnergy} Energy usage [kWh] for different level of system load }
    609  + \caption {\label{loadEnergy} Energy usage [kWh] for different level of system load. R - Random, R+NPM - Random + node power management, EO - Energy optimization, EO+NPM - Energy optimization + node power management, DFS - Dynamic Frequency Scaling}
601 610    \end {table}
602 611
…
614 623    \hline
615 624    \end{tabular}
616      - \caption {\label{loadMakespan} Makespan [s] for different level of system load }
    625  + \caption {\label{loadMakespan} Makespan [s] for different level of system load. R - Random, R+NPM - Random + node power management, EO - Energy optimization, EO+NPM - Energy optimization + node power management, DFS - Dynamic Frequency Scaling}
617 626    \end {table}
    627  +
    628  + One should easily note that gain from switching off unused nodes decreases with the increasing workload density. In general, for the highly loaded system such policy does not find an application due to the cost related to this process and relatively small benefits.
    629  +
    630  + ...
618 631
619 632    \section{DCWoRMS application/use cases}\label{sec:coolemall}
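The new caption legend names the five policies compared in the tables, whose data rows are elided from this changeset view. The figures quoted in the revised paragraphs still allow a quick consistency check; the sketch below assumes the comma in "36,705 kWh" is a decimal separator (36.705 kWh), which the magnitude of a several-day testbed run suggests.

{{{#!python
# Arithmetic check on the energy figures quoted in the changed paragraphs,
# reading "36,705 kWh" with a decimal comma (36.705 kWh).

npm_energy = 36.705                 # kWh, Random + node power management
savings = 0.22                      # reported saving vs plain Random
baseline = npm_energy / (1 - savings)
print(f"implied plain-Random energy: {baseline:.2f} kWh")   # ~47.06 kWh

eo_npm_energy = 30.568              # kWh, energy optimization + switch-off
print(f"EO+NPM saving vs that baseline: {1 - eo_npm_energy / baseline:.1%}")  # ~35%
}}}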