Changeset 1080 for papers


Timestamp:
06/10/13 04:03:13
Author:
ariel
Message:

version_for_submission

Location:
papers/SMPaT-2012_DCWoRMS
Files:
3 edited

  • papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.aux

    r1079 r1080  
    3939\newlabel{fig:jobsStructure}{{2}{8}} 
    4040\@writefile{toc}{\contentsline {subsection}{\numberline {3.3}Resource modeling}{8}} 
     41\citation{e2dc13} 
    4142\citation{d2.2} 
    4243\@writefile{toc}{\contentsline {subsection}{\numberline {3.4}Energy management concept in DCworms}{9}} 
     
    5556\newlabel{eq:E}{{1}{12}} 
    5657\citation{d2.2} 
     58\citation{e2dc13} 
    5759\@writefile{toc}{\contentsline {subsection}{\numberline {4.1}Power consumption models}{13}} 
    5860\newlabel{sec:power}{{4.1}{13}} 
    5961\newlabel{eq:ohm-law}{{2}{13}} 
    60 \@writefile{lof}{\contentsline {figure}{\numberline {4}{\ignorespaces Average power usage with regard to CPU frequency - Linpack (\emph  {green}), Abinit (\emph  {purple}), Namd (\emph  {blue}) and Cpuburn (\emph  {red}).  }}{14}} 
     62\@writefile{lof}{\contentsline {figure}{\numberline {4}{\ignorespaces Average power usage with regard to CPU frequency - Linpack ($\bullet $), Abinit and Namd ($\times $).  }}{14}} 
    6163\newlabel{fig:power_freq}{{4}{14}} 
    6264\@writefile{lof}{\contentsline {figure}{\numberline {5}{\ignorespaces  Power in time for the highest frequency}}{14}} 
    6365\newlabel{fig:fans_P}{{5}{14}} 
    64 \@writefile{toc}{\contentsline {subsubsection}{\numberline {4.1.1}Static approach}{14}} 
     66\@writefile{toc}{\contentsline {subsubsection}{\numberline {4.1.1}Static approach}{15}} 
    6567\newlabel{eq:static}{{3}{15}} 
    6668\@writefile{toc}{\contentsline {subsubsection}{\numberline {4.1.2}Resource load}{15}} 
    67 \newlabel{eq:dynamic}{{4}{15}} 
    6869\citation{fit4green_scheduler} 
     70\newlabel{eq:dynamic}{{4}{16}} 
    6971\newlabel{eq:modelLoad}{{7}{16}} 
    7072\@writefile{toc}{\contentsline {subsubsection}{\numberline {4.1.3}Application specific}{16}} 
     
    8385\newlabel{testBed}{{1}{18}} 
    8486\@writefile{toc}{\contentsline {subsection}{\numberline {5.2}Evaluated applications}{18}} 
    85 \@writefile{toc}{\contentsline {subsection}{\numberline {5.3}Methodology}{18}} 
     87\@writefile{toc}{\contentsline {subsection}{\numberline {5.3}Methodology}{19}} 
    8688\@writefile{toc}{\contentsline {subsection}{\numberline {5.4}Models}{19}} 
    8789\newlabel{sec:models}{{5.4}{19}} 
    8890\@writefile{lot}{\contentsline {table}{\numberline {2}{\ignorespaces  Workload characteristics}}{20}} 
    8991\newlabel{workloadCharacteristics}{{2}{20}} 
    90 \@writefile{lot}{\contentsline {table}{\numberline {3}{\ignorespaces  $P_{cpubase}$ values in Watts}}{20}} 
    91 \newlabel{nodeBasePowerUsage}{{3}{20}} 
     92\@writefile{lot}{\contentsline {table}{\numberline {3}{\ignorespaces  $P_{cpubase}$ values in Watts}}{21}} 
     93\newlabel{nodeBasePowerUsage}{{3}{21}} 
    9294\@writefile{lot}{\contentsline {table}{\numberline {4}{\ignorespaces  $P_{app}$ values in Watts}}{21}} 
    9395\newlabel{appPowerUsage}{{4}{21}} 
    9496\@writefile{lot}{\contentsline {table}{\numberline {5}{\ignorespaces  Power models error in \%}}{21}} 
    9597\newlabel{expPowerModels}{{5}{21}} 
    96 \@writefile{toc}{\contentsline {subsection}{\numberline {5.5}Resource management policies evaluation}{21}} 
     98\@writefile{toc}{\contentsline {subsection}{\numberline {5.5}Resource management policies evaluation}{22}} 
    9799\@writefile{toc}{\contentsline {subsubsection}{\numberline {5.5.1}Random approach}{22}} 
    98100\@writefile{lof}{\contentsline {figure}{\numberline {6}{\ignorespaces  Comparison of energy usage for Random (left) and Random + switching off unused nodes strategy (right)}}{22}} 
     
    103105\@writefile{lof}{\contentsline {figure}{\numberline {7}{\ignorespaces  Energy usage optimization strategy}}{24}} 
    104106\newlabel{fig:70eo}{{7}{24}} 
    105 \@writefile{lof}{\contentsline {figure}{\numberline {8}{\ignorespaces  Energy usage optimization + switching off unused nodes strategy}}{24}} 
    106 \newlabel{fig:70eonpm}{{8}{24}} 
    107 \@writefile{toc}{\contentsline {subsubsection}{\numberline {5.5.3}Downgrading frequency}{25}} 
    108 \@writefile{lof}{\contentsline {figure}{\numberline {9}{\ignorespaces  Frequency downgrading strategy}}{25}} 
    109 \newlabel{fig:70dfs}{{9}{25}} 
     107\@writefile{toc}{\contentsline {subsubsection}{\numberline {5.5.3}Downgrading frequency}{24}} 
     108\@writefile{lof}{\contentsline {figure}{\numberline {8}{\ignorespaces  Energy usage optimization + switching off unused nodes strategy}}{25}} 
     109\newlabel{fig:70eonpm}{{8}{25}} 
     110\@writefile{lof}{\contentsline {figure}{\numberline {9}{\ignorespaces  Frequency downgrading strategy}}{26}} 
     111\newlabel{fig:70dfs}{{9}{26}} 
    110112\@writefile{lof}{\contentsline {figure}{\numberline {10}{\ignorespaces  Schedules obtained for Random strategy (left) and Random + lowest frequency strategy (right) for 10\% of system load}}{26}} 
    111113\newlabel{fig:dfsComp}{{10}{26}} 
    112 \@writefile{toc}{\contentsline {subsubsection}{\numberline {5.5.4}Summary}{26}} 
     114\@writefile{toc}{\contentsline {subsubsection}{\numberline {5.5.4}Summary}{27}} 
    113115\@writefile{lot}{\contentsline {table}{\numberline {7}{\ignorespaces  Energy usage [kWh] for different level of system load. R - Random, R+NPM - Random + node power management, EO - Energy optimization, EO+NPM - Energy optimization + node power management, R+LF - Random + lowest frequency}}{27}} 
    114116\newlabel{loadEnergy}{{7}{27}} 
     
    119121\newlabel{eq:modelStatic}{{10}{28}} 
    120122\@writefile{toc}{\contentsline {paragraph}{Dynamic}{28}} 
    121 \newlabel{eq:modelLoadApp}{{11}{28}} 
     123\newlabel{eq:modelLoadApp}{{11}{29}} 
    122124\@writefile{toc}{\contentsline {paragraph}{Application}{29}} 
    123125\@writefile{lot}{\contentsline {table}{\numberline {9}{\ignorespaces  Comparison of energy usage estimations [kWh] obtained for investigated power consumption models. R - Random, R+NPM - Random + node power management, EO - Energy optimization, EO+NPM - Energy optimization + node power management, R+LF - Random + lowest frequency}}{29}} 
    124126\newlabel{modelsResults}{{9}{29}} 
    125 \@writefile{lot}{\contentsline {table}{\numberline {10}{\ignorespaces  Comparison of accuracy [\%] obtained for investigated power consumption models. R - Random, R+NPM - Random + node power management, EO - Energy optimization, EO+NPM - Energy optimization + node power management, R+LF - Random + lowest frequency}}{29}} 
    126 \newlabel{modelsAccuracy}{{10}{29}} 
     127\@writefile{lot}{\contentsline {table}{\numberline {10}{\ignorespaces  Comparison of accuracy [\%] obtained for investigated power consumption models. R - Random, R+NPM - Random + node power management, EO - Energy optimization, EO+NPM - Energy optimization + node power management, R+LF - Random + lowest frequency}}{30}} 
     128\newlabel{modelsAccuracy}{{10}{30}} 
    127129\@writefile{toc}{\contentsline {section}{\numberline {6}Conclusions and future work}{30}} 
    128130\bibcite{fit4green}{{1}{}{{}}{{}}} 
     
    130132\bibcite{e2dc12}{{3}{}{{}}{{}}} 
    131133\bibcite{CloudSim}{{4}{}{{}}{{}}} 
     134\newlabel{}{{6}{31}} 
    132135\bibcite{DCSG}{{5}{}{{}}{{}}} 
    133136\bibcite{SimGrid}{{6}{}{{}}{{}}} 
    134137\bibcite{DCD_Romonet}{{7}{}{{}}{{}}} 
    135 \newlabel{}{{6}{31}} 
    136138\bibcite{networks}{{8}{}{{}}{{}}} 
    137139\bibcite{Ghislain}{{9}{}{{}}{{}}} 
     
    159161\bibcite{pue}{{31}{}{{}}{{}}} 
    160162\bibcite{sgi}{{32}{}{{}}{{}}} 
    161 \providecommand\NAT@force@numbers{}\NAT@force@numbers 
     163\global\NAT@numberstrue 
  • papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.tex

    r1079 r1080  
    7676% \biboptions{} 
    7777 
     78\usepackage[T1]{fontenc} 
     79\usepackage[utf8]{inputenc} 
    7880 
    7981\journal{Simulation Modelling Practice and Theory} 
     
    119121\ead{ariel@man.poznan.pl} 
    120122 
    121 \author[psnc]{W.~Piatek} 
     123\author[psnc]{W.~Piątek} 
    122125 
    123126\author[psnc]{T.~Piontek} 
     
    125128\author[psnc]{A.~Przybyszewski} 
    126129 
    127 \author[psnc,put]{J.~Weglarz} 
     130\author[psnc,put]{J.~Węglarz} 
    128131 
    129132\cortext[cor1]{Corresponding author, tel/fax: +48618582187/+48618582151} 
     
    171174Therefore, there is a need for simulation tools and models that approach the problem from the perspective of end users and take into account all the factors that are critical to understanding and improving the energy efficiency of data centers, in particular, hardware characteristics, applications, management policies, and cooling. 
    172175These tools should support data center designers and operators by answering questions about how specific application types, levels of load, hardware specifications, physical arrangements, cooling technology, etc. impact overall data center energy efficiency.  
    173 There are various tools that allow simulation of computing infrastructures, like SimGrid\cite{SimGrid}. On one hand they include advanced packages for modeling heat transfer and energy consumption in data centers \cite{ff} or tools concentrating on their financial analysis \cite{DCD_Romonet}. On the other hand, there are simulators focusing on computations such as CloudSim \cite{CloudSim}. The CoolEmAll project aims to integrate these approaches and enable advanced analysis of data center efficiency taking into account all these aspects \cite{e2dc12}\cite{coolemall}. 
     176There are various tools that allow simulation of computing infrastructures, like SimGrid~\cite{SimGrid}. Some of them include advanced packages for modeling heat transfer and energy consumption in data centers \cite{ff}, or concentrate on financial analysis \cite{DCD_Romonet}. On the other hand, there are simulators focusing on computations, such as CloudSim \cite{CloudSim}. The CoolEmAll project aims to integrate these approaches and enable advanced analysis of data center efficiency taking all these aspects into account \cite{e2dc12}\cite{coolemall}. 
    174177 
    175178One of the results of the CoolEmAll project is the Data Center Workload and Resource Management Simulator (DCworms) which enables modeling and simulation of computing infrastructures to estimate their performance, energy consumption, and energy-efficiency metrics for diverse workloads and management policies. 
     
    177180To demonstrate DCworms capabilities we evaluate the impact of several resource management policies on the overall energy-efficiency of specific workloads executed on heterogeneous resources. 
    178181 
    179 The remaining part of this paper is organized as follows. In Section~2 we give a brief overview of the current state of the art concerning modeling and simulation of distributed systems, such as Grids and Clouds, in terms of energy efficiency. Section~3 discusses the main features of DCworms. In particular, it introduces our approach to workload and resource management, presents the concept of energy efficiency modeling and explains how to incorporate a specific application performance model into simulations. Section~4 discusses energy models adopted within the DCworms. In Section~5 we assess the energy models by comparison of simulation results with real measurements. We also present experiments that were performed using DCworms to show various types of resource and scheduling technics allowing decreasing the total energy consumption of the execution of a set of tasks. Final conclusions and directions for future work are given in Section~6. 
     182The remaining part of this paper is organized as follows. In Section~2 we give a brief overview of the current state of the art concerning modeling and simulation of computing systems in terms of energy efficiency. Section~3 discusses the main features of DCworms. In particular, it introduces our approach to workload and resource management, presents the concept of energy efficiency modeling and explains how to incorporate a specific application performance model into simulations. Section~4 discusses energy models adopted within DCworms. In Section~5 we assess the energy models by comparing simulation results with real measurements. We also present experiments performed using DCworms to show various resource management and scheduling techniques that allow decreasing the total energy consumption of executing a set of tasks. Final conclusions and directions for future work are given in Section~6. 
    180183 
    181184\section{Related Work}\label{sota} 
     
    195198GreenCloud, CloudSim and DCworms are released as Open Source under the GPL. DCSG Simulator is available under the OSL V3.0 open-source license, however, it can be only accessed by the DCSG members. 
    196199 
    197 Summarizing, DCworms stands out from other tools due to the flexibility in terms of data center equipment and structure definition. 
    198 Moreover, it allows to associate the energy consumption not only with the current power state and resource utilization but also with the particular set of applications running on it. Moreover, it does not limit the user in defining various types of resource management polices. The main strength of CloudSim lies in implementation of the complex scheduling and task execution schemes involving resource virtualization techniques. However, the energy efficiency aspect is limited only to the VM management. The GreenCloud focuses on data center resources with particular attention to the network infrastructure and the most popular energy management approaches. DCSG simulator allows taking into account also non-computing devices, nevertheless it seems to be hardly customizable to specific workloads and management policies. 
     200The main strength of CloudSim lies in the implementation of complex scheduling and task execution schemes involving resource virtualization techniques. However, its energy efficiency aspect is limited to VM management, whereas DCworms enables modeling these aspects in various environments, focusing on high performance computing infrastructures. GreenCloud focuses on data center resources with particular attention to the network infrastructure and the most popular energy management approaches. The DCSG simulator also takes non-computing devices into account; nevertheless, it seems hard to customize for specific workloads and management policies. Summarizing, DCworms stands out from other tools due to its flexibility in terms of data center equipment and physical structure definition. Moreover, it allows associating energy consumption not only with the current power state and resource utilization but also with the particular set of applications running on the resources. It does not limit the user in defining various types of resource management policies.  
    199201 
    200202\section{DCworms} 
     
    248250 
    249251DCworms allows researchers to take into account energy efficiency and thermal issues in distributed computing experiments. This can be achieved by means of appropriate models and profiles. In general, the main goal of the models is to emulate the behavior of the real computing resources, while profiles support models by providing data essential for the energy usage calculations. Introducing particular models into the simulation environment is possible by choosing or implementing dedicated energy plugins that contain methods to calculate the power usage of resources, their temperature and system air throughput values. The presence of detailed resource usage information, a description of the current resource energy and thermal state, and a functional energy management interface enables the implementation of energy-aware scheduling algorithms. Resource energy consumption and thermal metrics become in this context an additional criterion in the resource management process. Scheduling plugins are provided with dedicated interfaces, which allow them to collect detailed information about computing resource components and to affect their behavior. 
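To make the plugin concept concrete, below is a minimal sketch of what such an energy plugin interface could look like. This is not the actual DCworms API (all class and method names here are hypothetical, and DCworms itself is not written in Python); it only illustrates the callback structure described above.

        # A hypothetical sketch of the energy-plugin idea described above.
        # This is NOT the DCworms API; names and signatures are illustrative only.
        class EnergyPlugin:
            """Callbacks a simulator could invoke for energy-related estimates."""

            def power_usage(self, resource_state, utilization):
                """Return estimated power draw [W] for a resource in a given state."""
                raise NotImplementedError

            def temperature(self, power_draw, airflow):
                """Return estimated outlet temperature [deg C]."""
                raise NotImplementedError

            def air_throughput(self, fan_state):
                """Return estimated air throughput [m^3/h] for the current fan state."""
                raise NotImplementedError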
    250 The following subsection presents the general idea behind the power management concept in DCworms. Detailed description of the approach to thermal and air throughput simulations can be found in \cite{d2.2}. 
     252The following subsection presents the general idea behind the power management concept in DCworms. A detailed description of the approach to thermal and air throughput simulations can be found in \cite{e2dc13}\cite{d2.2}. 
    251253 
    252254 
     
    289291\section{Modeling of energy consumption in DCworms} 
    290292 
    291 DCworms is an open framework in which various models and algorithms can be investigated as presented in Section \ref{sec:apps}. In this section, we discuss possible approaches to modeling that can be applied to simulation of energy-efficiency of distributed computing systems. In general, to facilitate the simulation process, DCworms provides some basic implementation of power consumption, air throughput and thermal models. We introduce power consumption models as examples and validate part of them by experiments in real computing system (in Section \ref{sec:experiments}). Description of thermal models and corresponding experiments was presented in \cite{e2dc13}. 
     293DCworms is an open framework in which various models and algorithms can be investigated, as presented in Section \ref{sec:apps}. In this section, we discuss possible approaches to modeling that can be applied to the simulation of the energy-efficiency of distributed computing systems. In general, to facilitate the simulation process, DCworms provides a basic implementation of power consumption, air throughput and thermal models. We introduce power consumption models as examples and validate some of them by experiments on a real computing system in Section \ref{sec:experiments}. A description of thermal models and corresponding experiments was presented in \cite{e2dc13}. 
    292294 
    293295The most common question explored by researchers who study the energy-efficiency of distributed computing systems is how much energy $E$ these systems require to execute workloads. In order to obtain this value the simulator must calculate the values of power $P_i(t)$ and load $L_i(t)$ in time for all $m$ computing nodes, $i=1..m$. The load function may depend on the specific load models applied. In more complex cases it can even be defined as a vector of different resource usages in time. In a simple case the load can be either idle or busy, but even then an estimation of job processing times $p_j$ is needed to calculate the total energy consumption. The total energy consumption of computing nodes is given by (\ref{eq:E}): 
     
    300302The power function may depend on the load and states of resources, or even on specific applications, as explained in Section~\ref{sec:power}. The total energy can also be complemented by adding the constant power usage of components that do not depend on the load or state of resources.  
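As a worked illustration of the calculation described above, the sketch below integrates piecewise-constant per-node power traces over time to obtain the total energy $E$; the traces and values are hypothetical placeholders, and the paper's actual equation (\ref{eq:E}) is elided from this diff.

        # Hypothetical illustration: total energy E as the integral of per-node
        # power P_i(t) over time, assuming piecewise-constant power traces.
        def total_energy_kwh(power_traces):
            """power_traces: one list per node of (duration_s, power_w) segments."""
            joules = sum(duration * power
                         for node in power_traces
                         for duration, power in node)
            return joules / 3.6e6  # 1 kWh = 3.6e6 J

        # Example: two nodes, each busy for an hour and then idle for an hour.
        traces = [[(3600, 150.0), (3600, 60.0)],   # node 1: busy, then idle
                  [(3600, 140.0), (3600, 55.0)]]   # node 2
        print(total_energy_kwh(traces))            # -> 0.405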
    301303 
    302 In large computing systems which are often characterized by high computational density, total energy consumption of computing nodes is not the only result interesting for researchers. Temperature distribution is getting more and more important as it affects the energy consumption of cooling devices, which can reach even half of a total data center energy use. In order to obtain accurate values of temperatures heat transfer simulations based on the Computational Fluid Dynamics (CFD) methods  have to be performed. These methods require as an input (i.e. boundary conditions) a heat dissipated by IT hardware and air throughput generated by fans at servers' outlets. Another approach is based on simplified thermal models that without costly CFD calculations provide rough estimations of temperatures. DCworms enables the use of both approaches. In the former, the output of simulations including power usage of computing nodes in time and air throughput at node outlets can be passed to CFD solver. Details addressing this integration issues are introduced in \cite{d2.2}. 
    303 %This option is further elaborated in Section \ref{sec:coolemall}. Simplified thermal models required by the latter approach are proposed in \ref{sec:thermal}. 
    304  
     304In large computing systems, which are often characterized by high computational density, the total energy consumption of computing nodes is not the only result of interest to researchers. Temperature distribution is becoming more and more important as it affects the energy consumption of cooling devices, which can reach even half of the total data center energy use. In order to obtain accurate values of temperatures, heat transfer simulations based on Computational Fluid Dynamics (CFD) methods have to be performed. These methods require as an input (i.e. boundary conditions) the heat dissipated by IT hardware and the air throughput generated by fans at servers' outlets. Another approach is based on simplified thermal models that, without costly CFD calculations, provide rough estimations of temperatures. DCworms enables the use of both approaches. In the former, the output of simulations, including power usage of computing nodes in time and air throughput at node outlets, can be passed to a CFD solver. Details addressing these integration issues are introduced in \cite{d2.2}\cite{e2dc13}. 
    305305 
    306306\subsection{Power consumption models}\label{sec:power} 
     
    313313\includegraphics[width=6cm]{fig/power_default.png} 
    314314\caption{Average power usage with regard to CPU frequency 
    315 - Linpack (\emph{green}), Abinit (\emph{purple}), Namd (\emph{blue}) and Cpuburn (\emph{red}). \label{fig:power_freq} 
     315- Linpack ($\bullet$), Abinit and Namd ($\times$).  
     316\label{fig:power_freq} 
    316317} 
    317318% 
     
    352353\end{equation} 
    353354 
    354 Within DCworms we built in a static approach model that uses common resource states that affect power usage which are the CPU power states. Hence, with each node power state, understood as a possible operating state (p-state), we associated a power consumption value that derives from the averaged values of measurements obtained for different types of application. We distinguish also an idle state. Therefore, the current power usage of the node can be expressed as: $P = P_{idle} + P_{f}$ where $P$ denotes power consumed by the node, $P_{idle}$ is a power usage of node in idle state and $P_{f}$ stands for power usage of CPU operating at the given frequency level. Additionally, node power states are taken into account to reflect no (or limited) power usage when a node is off. 
     355Within DCworms we built in a static approach model that uses common resource states affecting power usage, namely the CPU power states. Hence, with each node power state, understood as a possible operating state (p-state), we associated a power consumption value derived from the averaged measurements obtained for different types of applications. P-states correspond to predefined frequencies (and associated voltages) of the CPU, which are known in advance. We also distinguish an idle state, which is a specific p-state. Therefore, the current power usage of the node can be expressed as: $P = P_{idle} + P_{f}$, where $P$ denotes the power consumed by the node, $P_{idle}$ is the power usage of the node in the idle state and $P_{f}$ stands for the power usage of the CPU operating at the given frequency level. Additionally, node power states are taken into account to reflect no (or limited) power usage when a node is off. 
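A minimal sketch of this static model, assuming hypothetical placeholder values for $P_{idle}$ and the per-frequency p-state power $P_{f}$ (the real values are obtained from measurements, cf. the tables in Section 5):

        # Hypothetical sketch of the static model P = P_idle + P_f.
        # Power values below are placeholders, not measured data.
        P_IDLE = 50.0                              # W, node in idle state
        P_F = {1.8: 30.0, 2.2: 45.0, 2.6: 65.0}    # W, CPU at given frequency [GHz]

        def static_power(node_on, frequency_ghz):
            """Return node power [W] under the static (p-state based) model."""
            if not node_on:
                return 0.0       # switched-off node: no (or limited) power usage
            return P_IDLE + P_F[frequency_ghz]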
    355356 
    356357\subsubsection{Resource load}  
     
    401402 
    402403where $P$ denotes the power consumed by the node executing the given application, $P_{idle}$ is the power usage of the node in the idle state, $L$ is the current utilization level of the node, $P_{cpubase}$ stands for the power usage of a fully loaded CPU working at the lowest frequency, $c$ is the constant factor indicating the increase of power consumption with respect to the frequency increase, $f$ is the current frequency, $f_{base}$ is the lowest available frequency within the given CPU and $P_{app}$ denotes the additional power usage derived from executing a particular application ($P_{app}$ is a constant appointed experimentally for each application in order to extract the part of power consumption that is independent of the load and specific to the particular type of task). 
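A hedged sketch tying these parameters together; the exact functional form is given by the paper's equation, which is elided from this diff, so the frequency-scaling term below is only an assumed instance:

        # Hedged sketch of the application-specific power model described above.
        # The linear frequency-scaling term is an assumption for illustration only.
        def app_specific_power(p_idle, load, p_cpubase, c, f, f_base, p_app):
            """Estimate node power [W] while executing a particular application."""
            freq_scaling = 1.0 + c * (f - f_base) / f_base  # assumed form
            return p_idle + load * p_cpubase * freq_scaling + p_app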
    403  
    404 %\subsection{Air throughput models}\label{sec:air} 
    405  
    406 %The DCworms comes with the following air throughput models. 
    407 %By default, air throughput estimations are performed according to the first one. 
    408  
    409 %\textbf{Static} model refers to a static definition of air throughput states. According to this approach, output air flow depends only on the present air cooling working state and the corresponding air throughput value. Each state change triggers the calculations and updates the current air throughput value. This strategy requires only a basic air throughput profile definition. 
    410  
    411 %\textbf{Space} model allows taking into account a duct associated with the investigated air flow. On the basis of the given fan rotation speed and the obstacles before/behind the fans, the output air throughput can be roughly estimated. To this end, additional manufacturer's specifications will be required, including resulting air velocity values and fan duct dimensions. Thus, it is possible to estimate the air flow level not only referring to the current fan operating state but also with respect to the resource and its subcomponent placement. More advanced scenario may consider mutual impact of several air flows. 
    412  
    413 %\subsection{Thermal models}\label{sec:thermal} 
    414  
    415 %\begin{figure}[tbp] 
    416 %\centering 
    417 %\includegraphics[width = 8cm]{fig/temp-fans.png} 
    418 %\caption{\label{fig:tempModel} Temperature in time for highest frequency} 
    419 %\end{figure} 
    420  
    421 %The following models are supported natively. By default, the static strategy is applied. 
    422  
    423 %\textbf{Static} approach follows the changes in heat, generated by the computing system components and matches the corresponding temperature according to the specified profile. Since it tracks the power consumption variations, corresponding values must be delivered, either from power consumption model or on the basis of user data. Replacing the appropriate temperature values with function based on the defined material properties and/o experimentally measured values can easily extend this model. 
    424  
    425 %\textbf{Ambient} model allows taking into account the surrounding cooling infrastructure. It calculates the device temperature as a function adopted from the static approach and extends it with the influence of cooling method. The efficiency of cooling system may be derived from the current air throughput value. 
    426404 
    427405\section{Experiments and evaluation}\label{sec:experiments} 
     
    722700Referring to Table~\ref{loadEnergy}, one can easily note that the gain from switching off unused nodes decreases with increasing workload density. In general, for a highly loaded system such a policy does not find an application due to the cost related to this process and the relatively small benefits. Another interesting conclusion refers to the poor result of the Random strategy combined with the frequency downgrading approach. The lack of improvement on the energy usage criterion for higher system loads can be explained by the relatively small or no benefit obtained from prolonging the task execution, and thus maintaining the node in a working state. The cost of longer workload completion cannot be compensated by the very small energy savings derived from the lower operating state of the node. The greater criteria values for the higher system load result from longer intervals between submissions of successive tasks, and thus longer workload execution. Based on Table~\ref{loadMakespan}, one should note that differences in workload completion times are relatively small for all evaluated policies, except the Random + lowest frequency approach. 
    723701 
    724 We also demonstrated differences between power usage models. They span from rough static approach to accurate application specific models. However, the latter can be difficult or even infeasible to use, as it requires real measurements for specific applications beforehand. This issue can be partially resolved by introducing application profiles and classification, which can deteriorate the accuracy though. This issue is begin studied more deeply within CoolEmAll project.  
     702We also demonstrated differences between power usage models. They span from a rough static approach to accurate application-specific models. However, the latter can be difficult or even infeasible to use, as it requires real measurements for specific applications beforehand. This issue can be partially resolved by introducing application profiles and classification, which can, however, deteriorate the accuracy. This issue is being studied more deeply within the CoolEmAll project.  
    725703 
    726704\subsection{Verification of models}  
     
    810788In this paper we presented a Data Center Workload and Resource Management Simulator (DCworms) which enables modeling and simulation of computing infrastructures to estimate their performance, energy consumption, and energy-efficiency metrics for diverse workloads and management policies. DCworms provides broad options of customization and combines detailed applications and workloads modeling with simulation of data center resources including their power usage and thermal behavior. 
    811789We showed its energy-efficiency related features and proposed three methods of power usage modeling: static (fully defined by resource state), dynamic (defined by a function of parameters such as CPU frequency and load), and mapping (based on power usage of specific applications). We compared the results of simulations to measurements of real servers and showed differences in the accuracy and usability of these models.  
    812 We also demonstrated DCworms capabilities to implement various resource management policies including workload scheduling and node power management.  The experimental studies we conducted shown that their impact on overall energy-efficiency depends on a type of servers, their power usage in idle time, possibility of switching off nodes as well as level of load. 
     790We also demonstrated DCworms capabilities to implement various resource management policies including workload scheduling and node power management. DCworms supports various types of computing; however, in this paper we concentrated on the scheduling of batch jobs in queueing systems typical for computing centers such as PSNC. The experimental studies we conducted showed that their impact on overall energy-efficiency depends on the type of servers, their power usage in idle time, the possibility of switching off nodes, as well as the level of load. 
    813791DCworms is a part of the Simulation, Visualisation and Decision Support (SVD) Toolkit being developed within the CoolEmAll project. The aim of this toolkit is, based on data center building blocks defined by the project, to analyze the energy-efficiency of data centers taking into account various aspects such as heterogeneous hardware architectures, applications, management policies, and cooling. DCworms will take as an input the open models of the data center building blocks and application profiles. DCworms will be applied to the evaluation of resource management approaches. These policies may include a wide spectrum of energy-aware strategies such as workload consolidation, dynamic switching off of nodes, DVFS and thermal-aware methods. The output of simulations will include the distribution of servers' power usage in time along with estimations of server outlet air flow. These data will be passed to Computational Fluid Dynamics (CFD) simulations using the OpenFOAM solver and to advanced 3D visualization. In this way users will be able to study the energy-efficiency of a data center from a detailed analysis of workloads and policies to the impact on heat transfer and overall energy consumption.  
    814 Thus, future work on DCworms will focus on more precise power, air-throughput, and thermal models. Additional research directions will include modeling application execution phases, adding predefined common HPC and cloud management policies and application performance and resource power models. 
     792Thus, future work on DCworms will focus on more precise power, air-throughput, and thermal models. Additional research directions will include modeling application execution phases and adding predefined common management policies as well as application performance and power usage models for clouds. 
    815793 
    816794\section*{Acknowledgement} 
     
    856834\bibitem{fit4green} A. Berl, E. Gelenbe, M. di Girolamo, G. Giuliani, H. de Meer, M.-Q. Dang, K. Pentikousis. Energy-Efficient Cloud Computing. The Computer Journal, 53(7), 2010. 
    857835 
    858 \bibitem{e2dc13} M. vor dem Berge, G. Da Costa, M. Jarus, A. Oleksiak, W. Piatek, E. Volk. Modeling Data Center Building Blocks for Energy-efficiency and Thermal Simulations. 2nd International Workshop on Energy-Efficient Data Centres, Berkeley, 2013. 
     836\bibitem{e2dc13} M. vor dem Berge, G. Da Costa, M. Jarus, A. Oleksiak, W. Piątek, E. Volk. Modeling Data Center Building Blocks for Energy-efficiency and Thermal Simulations. 2nd International Workshop on Energy-Efficient Data Centres, e-Energy 2013 conference, Berkeley, US, May 2013. 
    859838 
    860839\bibitem{e2dc12} M. vor dem Berge, G. Da Costa, A. Kopecki, A. Oleksiak, J-M. Pierson, T. Piontek, E. Volk, S. Wesner. Modeling and Simulation of Data Center Energy-Efficiency in CoolEmAll. Energy Efficient Data Centers, Lecture Notes in Computer Science Volume 7396, 2012, pp 25-36. 