Changeset 720 for papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.tex
Timestamp: 12/29/12 15:06:57
File: 1 edited
Legend: unmodified lines are shown unmarked, added lines are prefixed with +, removed lines with -.
papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.tex
r719 → r720

@@ 426-430 @@
 
 \begin{table}[h!]
 \centering
 \begin{tabular}{llr}
 \hline

@@ 436-444 @@
 Atom D510 64 Bit & 2 GB & 4 \\
 \hline
-\multicolumn{3}{c}{Storage} \\
-Type & Size & Connection \\
-\hline
-Storage Head 520 & 16 x 300 GB SSD & 2 x 10 Gbit/s CX4 \\
-\hline
+%\multicolumn{3}{c}{Storage} \\
+%Type & Size & Connection \\
+%\hline
+%Storage Head 520 & 16 x 300 GB SSD & 2 x 10 Gbit/s CX4 \\
+%\hline
 \end{tabular}
 \caption{\label{testBed} RECS system configuration}

@@ 471-475 @@
 
 \begin{table}[tp]
 \centering
 \begin{tabular}{l c c c c r}
 \hline

@@ 529-533 @@
 \end{figure}
 
-In this version of experiment we neglected additional cost and time necessary to change the power state of resources. As can be observed in the power consumption chart in the Figure~\ref{fig:70rnpm}, switching of unused nodes led to decrease of the total energy consumption. As expected, with respect to the makespan criterion, both approaches perform equally reaching \textbf{workload completion time}: 533 820 s. However, the pure random strategy was significantly outperformed in terms of energy usage, by the policy with additional node power management with it \textbf{total energy usage}: 36,705 kWh. The overall energy savings reached 22\%.
+In this version of experiment we neglected additional cost and time necessary to change the power state of resources. As can be observed in the power consumption chart in the Figure~\ref{fig:70rnpm}, switching of unused nodes led to decrease of the total energy consumption. As expected, with respect to the makespan criterion, both approaches perform equally reaching \textbf{workload completion time}: 533 820 s. However, the pure random strategy was significantly outperformed in terms of energy usage, by the policy with additional node power management with its \textbf{total energy usage}: 36,705 kWh. The overall energy savings reached 22\%.
 
 \subsubsection{Energy optimization}

@@ 549-553 @@
 \end{table}
 
-As mentioned, we assign tasks to nodes minimizing the value of expression: $(P-Pidle)*exec\_time$, where $P$ denotes observed power of the node running the particular application and $exec\_time$ refers to the measured application running time. Based on the application and hardware profiles, we expected that Atom D510 would be the preferred node. Obtained scheduled, that is presented in the Gantt chart in Figure~\ref{fig:70eo} along with the energy and system usage, confirmed our assumptions. Atom D510 nodes are nearly fully loaded, while the least energy-favorable AMD nodes are used only when other ones are busy.
+As mentioned, we assign tasks to nodes minimizing the value of expression: $(P-Pidle)*exec\_time$, where $P$ denotes observed power of the node running the particular application and $exec\_time$ refers to the measured application running time. Based on the application and hardware profiles, we expected that Atom D510 would be the preferred node. Obtained schedule, that is presented in the Gantt chart in Figure~\ref{fig:70eo} along with the energy and system usage, confirmed our assumptions. Atom D510 nodes are nearly fully loaded, while the least energy-favourable AMD nodes are used only when other ones are busy.
 
 \begin{figure}[h!]
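The allocation rule discussed in the hunk above is compact enough to sketch. The following Python fragment is a hypothetical illustration of the described energy-optimization policy, not code from DCWoRMS; the node names come from the paper's testbed, but the power and runtime values are invented placeholders standing in for the measured application and hardware profiles.

{{{#!python
# Sketch (assumed data): pick the node minimizing (P - P_idle) * exec_time,
# i.e. the energy the application adds on top of the node's idle draw.
profiles = {
    # node: (P_watts, P_idle_watts, exec_time_seconds) -- illustrative only
    "Atom D510":  (26.0, 20.0, 820.0),
    "Intel i7":   (60.0, 40.0, 250.0),
    "AMD Fusion": (40.0, 30.0, 650.0),
}

def marginal_energy(node: str) -> float:
    """Extra energy (joules) of running the task on `node`, above its idle draw."""
    p, p_idle, t = profiles[node]
    return (p - p_idle) * t

best = min(profiles, key=marginal_energy)
print(best)  # with these placeholder profiles: Atom D510, as the paper expects
}}}

Under this criterion the idle power is treated as a sunk cost (nodes stay powered on regardless of placement), which is why a low-wattage Atom D510 can win even though it runs the task far longer.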
@@ 560-565 @@
 
 
-The next strategy is similar to the previous one, so making the assignment of task to the node, we still take into consideration application and hardware profiles, but in that case we assume that the system supports possibility of switching off unused nodes. In this case the minimal energy consumption is achieved by assigning the task to the node for which the product of power consumption and time of execution is minimal. In other words we minimized the following expression: "$P*exec\_time$.
-Contrary to the previous strategy, we expected Intel I7 nodes to be allocated first. Generated Gantt chart is compatible with our expectations.
+The next strategy is similar to the previous one, so making the assignment of task to the node, we still take into consideration application and hardware profiles, but in that case we assume that the system supports possibility of switching off unused nodes. In this case the minimal energy consumption is achieved by assigning the task to the node for which the product of power consumption and time of execution is minimal. In other words we minimized the following expression: $P*exec\_time$.
+Contrary to the previous strategy, we expected Intel I7 nodes to be allocated first. Generated Gantt chart is consistent with our expectations.
 
 \begin{figure}[h!]

@@ 569-573 @@
 \end{figure}
 
-Estimated \textbf{total energy usage} of the system is 30,568 kWh. As we can see, this approach significantly improved the value of this criterion, comparing to the previous policies. Moreover, the proposed allocation strategy does not worsen the \textbf{workload completion time} criterion, where the resulting value is equal to 533 820 s.
+Estimated \textbf{total energy usage} of the system is 30,568 kWh. As we can see, this approach significantly improved the value of this criterion, comparing to the previous policies. Moreover, the proposed allocation strategy does not worsen the \textbf{workload completion time} criterion, for which the resulting value is equal to 533 820 s.
 
 \subsubsection{Frequency scaling}

@@ 583-585 → 583-594 @@
 
 As we were looking for the trade-off between total completion time and energy usage, we were searching for the workload load level that can benefit from the lower system performance in terms of energy-efficiency. For the frequency downgrading policy, we observed the improvement on the energy usage criterion only for the workload resulting in 10\% system load.
+
+Figure~\ref{fig:dfsComp} shows schedules obtained for Random and DFS strategy. One should easily note that the
+\begin{figure}[h!]
+\centering
+\includegraphics[width = 12cm]{fig/dfsComp.png}
+\caption{\label{fig:dfsComp} Schedules obtained for Random strategy (left) and DFS strategy (right) for 10\% of system load}
+\end{figure}
+
+
 The following tables: Table~\ref{loadEnergy} and Table~\ref{loadMakespan} contain the values of evaluation criteria (total energy usage and makespan respectively) gathered for all investigated workloads.

@@ 598-601 → 607-610 @@
 \hline
 \end{tabular}
-\caption{\label{loadEnergy} Energy usage [kWh] for different level of system load}
+\caption{\label{loadEnergy} Energy usage [kWh] for different level of system load. R - Random, R+NPM - Random + node power management, EO - Energy optimization, EO+NPM - Energy optimization + node power management, DFS - Dynamic Frequency Scaling}
 \end{table}
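For contrast, here is the same sketch adapted to the second policy from the hunks above: once unused nodes can be switched off, idle power is no longer a sunk cost, so the criterion becomes the full product $P*exec\_time$. Again a hypothetical illustration using the same placeholder profiles, not DCWoRMS code.

{{{#!python
# Sketch (assumed data): with node power management, minimize total energy
# P * exec_time rather than the marginal (P - P_idle) * exec_time.
profiles = {
    # node: (P_watts, P_idle_watts, exec_time_seconds) -- illustrative only
    "Atom D510":  (26.0, 20.0, 820.0),
    "Intel i7":   (60.0, 40.0, 250.0),
    "AMD Fusion": (40.0, 30.0, 650.0),
}

def total_energy(node: str) -> float:
    """Total energy (joules) of running the task on `node`."""
    p, _p_idle, t = profiles[node]
    return p * t

print(min(profiles, key=total_energy))  # here: Intel i7, matching the expectation above
}}}

The switch in the preferred node (Atom D510 under the first rule, Intel I7 under this one) is exactly the change in behaviour that the paper's two Gantt charts illustrate.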
@@ 614-619 → 623-632 @@
 \hline
 \end{tabular}
-\caption{\label{loadMakespan} Makespan [s] for different level of system load}
+\caption{\label{loadMakespan} Makespan [s] for different level of system load. R - Random, R+NPM - Random + node power management, EO - Energy optimization, EO+NPM - Energy optimization + node power management, DFS - Dynamic Frequency Scaling}
 \end{table}
+
+One should easily note that gain from switching off unused nodes decreases with the increasing workload density. In general, for the highly loaded system such policy does not find an application due to the cost related to this process and relatively small benefits.
+
+...
 
 \section{DCWoRMS application/use cases}\label{sec:coolemall}
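The headline numbers quoted in the changed paragraphs can be cross-checked with one line of arithmetic. Reading the comma in "36,705 kWh" and "30,568 kWh" as a decimal separator (an assumption about the authors' notation):

{{{#!python
# Relative saving of the energy-optimization policy over Random + node power
# management, using the two totals quoted in the diff above (decimal-comma
# values read as 36.705 kWh and 30.568 kWh -- an assumption).
random_npm = 36.705
energy_opt = 30.568
print(f"{(random_npm - energy_opt) / random_npm:.1%}")  # ~16.7%
}}}

So the energy-optimization policy buys roughly a further 16.7% on top of the 22% that node power management alone achieved over the pure random strategy, at an unchanged makespan of 533 820 s.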