Timestamp:
06/03/13 16:14:18 (12 years ago)
Author:
wojtekp
Message:
 
Location:
papers/SMPaT-2012_DCWoRMS
Files:
5 edited

  • papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.aux

    r1069 r1071  
    11\relax  
     2\emailauthor{ariel@man.poznan.pl}{A.~Oleksiak\corref {cor1}} 
     3\Newlabel{cor1}{1} 
    24\citation{koomey} 
    35\citation{pue} 
     
    2426\newlabel{sota}{{2}{3}} 
    2527\citation{GSSIM} 
    26 \@writefile{toc}{\contentsline {section}{\numberline {3}DCworms}{5}} 
    2728\citation{GSSIM} 
    2829\@writefile{lof}{\contentsline {figure}{\numberline {1}{\ignorespaces  DCworms architecture}}{6}} 
    2930\newlabel{fig:arch}{{1}{6}} 
     31\@writefile{toc}{\contentsline {section}{\numberline {3}DCworms}{6}} 
    3032\@writefile{toc}{\contentsline {subsection}{\numberline {3.1}Architecture}{6}} 
    3133\citation{GWF} 
     
    4345\@writefile{toc}{\contentsline {paragraph}{\textbf  {Power profile}}{10}} 
    4446\@writefile{toc}{\contentsline {paragraph}{\textbf  {Power consumption model}}{10}} 
    45 \@writefile{toc}{\contentsline {paragraph}{\textbf  {Power management interface}}{10}} 
    4647\@writefile{lof}{\contentsline {figure}{\numberline {3}{\ignorespaces  Power consumption modeling}}{11}} 
    4748\newlabel{fig:powerModel}{{3}{11}} 
     49\@writefile{toc}{\contentsline {paragraph}{\textbf  {Power management interface}}{11}} 
    4850\@writefile{toc}{\contentsline {subsection}{\numberline {3.5}Application performance modeling}{11}} 
    4951\newlabel{sec:apps}{{3.5}{11}} 
    5052\citation{GSSIM} 
    5153\citation{e2dc13} 
    52 \citation{d2.2} 
    5354\@writefile{toc}{\contentsline {section}{\numberline {4}Modeling of energy consumption in DCworms}{12}} 
    5455\newlabel{eq:E}{{1}{12}} 
    55 \@writefile{lof}{\contentsline {figure}{\numberline {4}{\ignorespaces Average power usage with regard to CPU frequency - Linpack (\emph  {green}), Abinit (\emph  {purple}), Namd (\emph  {blue}) and Cpuburn (\emph  {red}).  }}{13}} 
    56 \newlabel{fig:power_freq}{{4}{13}} 
     56\citation{d2.2} 
    5757\@writefile{toc}{\contentsline {subsection}{\numberline {4.1}Power consumption models}{13}} 
    5858\newlabel{sec:power}{{4.1}{13}} 
     59\newlabel{eq:ohm-law}{{2}{13}} 
     60\@writefile{lof}{\contentsline {figure}{\numberline {4}{\ignorespaces Average power usage with regard to CPU frequency - Linpack (\emph  {green}), Abinit (\emph  {purple}), Namd (\emph  {blue}) and Cpuburn (\emph  {red}).  }}{14}} 
     61\newlabel{fig:power_freq}{{4}{14}} 
    5962\@writefile{lof}{\contentsline {figure}{\numberline {5}{\ignorespaces  Power in time for the highest frequency}}{14}} 
    6063\newlabel{fig:fans_P}{{5}{14}} 
    61 \newlabel{eq:ohm-law}{{2}{14}} 
    62 \@writefile{toc}{\contentsline {subsection}{\numberline {4.2}Static approach}{14}} 
     64\@writefile{toc}{\contentsline {subsubsection}{\numberline {4.1.1}Static approach}{14}} 
     65\newlabel{eq:static}{{3}{15}} 
     66\@writefile{toc}{\contentsline {subsubsection}{\numberline {4.1.2}Resource load}{15}} 
     67\newlabel{eq:dynamic}{{4}{15}} 
    6368\citation{fit4green_scheduler} 
    64 \newlabel{eq:static}{{3}{15}} 
    65 \@writefile{toc}{\contentsline {subsection}{\numberline {4.3}Resource load}{15}} 
    66 \newlabel{eq:dynamic}{{4}{15}} 
    6769\newlabel{eq:model}{{7}{16}} 
    68 \@writefile{toc}{\contentsline {subsection}{\numberline {4.4}Application specific}{16}} 
     70\@writefile{toc}{\contentsline {subsubsection}{\numberline {4.1.3}Application specific}{16}} 
    6971\citation{e2dc12} 
    70 \citation{abinit} 
    7172\newlabel{eq:app}{{8}{17}} 
    7273\@writefile{toc}{\contentsline {section}{\numberline {5}Experiments and evaluation}{17}} 
     
    7576\@writefile{lot}{\contentsline {table}{\numberline {1}{\ignorespaces  RECS system configuration}}{17}} 
    7677\newlabel{testBed}{{1}{17}} 
     78\citation{abinit} 
    7779\citation{cray} 
    7880\citation{linpack} 
  • papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.fdb_latexmk

    r1069 r1071  
    11# Fdb version 2 
    2 ["pdflatex"] 1370260512 "elsarticle-DCWoRMS.tex" "elsarticle-DCWoRMS.pdf" "elsarticle-DCWoRMS"  
     2["pdflatex"] 1370268172 "elsarticle-DCWoRMS.tex" "elsarticle-DCWoRMS.pdf" "elsarticle-DCWoRMS"  
    33  "/usr/local/texlive/2010/texmf-dist/tex/context/base/supp-pdf.mkii" 1251025892 71625 fad1c4b52151c234b6873a255b0ad6b3 "" 
    44  "/usr/local/texlive/2010/texmf-dist/tex/generic/oberdiek/etexcmds.sty" 1267408169 5670 cacb018555825cfe95cd1e1317d82c1d "" 
     
    3030  "/usr/local/texlive/2010/texmf-dist/tex/latex/psnfss/upsy.fd" 1137110629 148 2da0acd77cba348f34823f44cabf0058 "" 
    3131  "/usr/local/texlive/2010/texmf-dist/tex/latex/psnfss/upzd.fd" 1137110629 148 b2a94082cb802f90d3daf6dd0c7188a0 "" 
    32   "elsarticle-DCWoRMS.aux" 1370260514 8344 4fae76ffb960809b0662b0a5380c1cf5 "" 
    33   "elsarticle-DCWoRMS.spl" 1370260512 0 d41d8cd98f00b204e9800998ecf8427e "" 
    34   "elsarticle-DCWoRMS.tex" 1370260508 86035 d15304aaf50305115c88a8a70ebe8af4 "" 
     32  "elsarticle-DCWoRMS.aux" 1370268174 8439 dbc176d690cea90096674f10a42788ec "" 
     33  "elsarticle-DCWoRMS.spl" 1370268172 0 d41d8cd98f00b204e9800998ecf8427e "" 
     34  "elsarticle-DCWoRMS.tex" 1370268168 77625 f037212fb0f5e25440dd03fc93f3c095 "" 
    3535  "elsarticle.cls" 1352447924 26095 ad44f4892f75e6e05dca57a3581f78d1 "" 
    3636  "fig/70dfsGantt.png" 1370005142 138858 edc557d8862a7825f2941d8c69c30659 "" 
  • papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.tex

    r1069 r1071  
    111111%% \address[label2]{<address>} 
    112112 
    113 %\author{Krzysztof Kurowski, Ariel Oleksiak, Wojciech Piatek, Tomasz Piontek, Andrzej Przybyszewski, Jan Węglarz} 
    114  
    115113 
    116114\author[psnc]{K.~Kurowski} 
    117115 
    118 %\ead{krzysztof.kurowski@man.poznan.pl} 
    119  
    120 \author[psnc]{A.~Oleksiak} 
    121  
    122 %\ead{ariel@man.poznan.pl} 
     116 
     117\author[psnc]{A.~Oleksiak\corref{cor1}} 
     118 
     119\ead{ariel@man.poznan.pl} 
    123120 
    124121\author[psnc]{W.~Piatek} 
    125122 
    126 %\ead{piatek@man.poznan.pl} 
    127  
    128123\author[psnc]{T.~Piontek} 
    129124 
     
    132127\author[psnc,put]{J.~Weglarz} 
    133128 
    134 %\cortext[cor1]{Corresponding author} 
     129\cortext[cor1]{Corresponding author, tel/fax: +48618582187/+48618582151} 
    135130 
    136131\address[psnc]{Poznan Supercomputing and Networking Center, Noskowskiego~10, Poznan, Poland} 
     
    171166 
    172167The rising popularity of large-scale computing infrastructures has caused a rapid development of data centers. Nowadays, data centers are responsible for around 2\% of global energy consumption, a demand comparable to that of the aviation industry \cite{koomey}. Moreover, in many current data centers the actual IT equipment uses only half of the total energy, whereas most of the remainder is required for cooling and air movement, resulting in poor Power Usage Effectiveness (PUE) \cite{pue} values. Large energy needs and significant $CO_2$ emissions mean that issues related to cooling, heat transfer, and IT infrastructure location are studied more and more carefully during the planning and operation of data centers. 
    173 %Even if we take ecological and footprint issues aside, the amount of consumed energy can impose strict limits on data centers. First of all, energy bills may reach millions euros making computations expensive.  
    174 %Furthermore, available power supply is usually limited so it also may reduce data center development capabilities, especially looking at challenges related to exascale computing breakthrough foreseen within this decade. 
    175168 
    176169For these reasons, many efforts have been undertaken to measure and study the energy efficiency of data centers. Some projects focus on data center monitoring and management \cite{games}\cite{fit4green}, whereas others address the energy efficiency of networks \cite{networks} or of distributed computing infrastructures such as grids \cite{fit4green_carbon_scheduler}. Additionally, vendors offer a wide spectrum of energy-efficient solutions for computing and cooling \cite{sgi}\cite{colt}\cite{ecocooling}. However, a wide variety of solutions and configuration options can be applied when planning new or upgrading existing data centers. 
     
    278271 
    279272The presence of detailed resource usage information, a description of the current resource energy state, and a functional energy management interface enables the implementation of energy-aware scheduling algorithms. Resource energy consumption becomes, in this context, an additional criterion in the scheduling process, which uses various techniques to decrease energy consumption, e.g. workload consolidation, moving tasks between resources to reduce the number of running nodes, dynamic power management, and lowering CPU frequency. 
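The consolidation technique mentioned above can be sketched as a simple best-fit placement heuristic: each task goes to the busiest node that still fits it, so lightly used nodes can be drained and powered off. This is an illustrative sketch, not DCworms code; the task and node representations are assumptions.

```python
# Illustrative workload-consolidation heuristic (not DCworms code):
# place each task on the node with the LEAST remaining free capacity
# that can still hold it, keeping other nodes free to be switched off.

def consolidate(tasks, nodes):
    """tasks: list of core demands; nodes: dict name -> free cores.
    Returns {task_index: node_name}; mutates the free-core counts."""
    placement = {}
    for i, demand in enumerate(tasks):
        candidates = [n for n, free in nodes.items() if free >= demand]
        if not candidates:
            continue  # task waits; no extra node is switched on for it
        best = min(candidates, key=lambda n: nodes[n])  # tightest fit
        nodes[best] -= demand
        placement[i] = best
    return placement
```

A real scheduler would combine this with the power models described later, using the estimated energy of each candidate placement as the ranking criterion.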
    280  
    281 %\subsubsection{Air throughput management concept} 
    282  
    283 %The presence of an air throughput concept addresses the issue of resource air-cooling facilities provisioning. Using the air throughput profiles and models allows anticipating the air flow level on output of the computing system component, resulting from air-cooling equipment management. 
    284  
    285 %\paragraph{\textbf{Air throughput profile}} 
    286 %The air throughput profile, analogously to the power profile, allows specifying supported air flow states. Each air throughput state definition consists of an air flow value and a corresponding power draw. It can represent, for instance, a fan working state. In this way, associating the air throughput profile with the given computing resource, it is possible to describe mounted air-cooling devices. 
    287 %Possibility of introducing additional parameters makes the air throughput description extensible for new specific characteristics. 
    288  
    289 %\paragraph{\textbf{Air throughput model}} 
    290 %Similar to energy consumption models, the user is provided with a dedicated interface that allows him to describe the resulting air throughput of the computing system components like cabinets or server fans. The general idea of the air throughput modeling is shown in Figure~\ref{fig:airModel}. Accordingly, air flow estimations are based on detailed information about the involved resources, including their air throughput states.  
    291  
    292 %\begin{figure}[tbp] 
    293 %\centering 
    294 %\includegraphics[width = 8cm]{fig/airModel.png} 
    295 %\caption{\label{fig:airModel} Air throughput modeling} 
    296 %\end{figure} 
    297  
    298 %\paragraph{\textbf{Air throughput management interface}} 
    299 %The DCworms delivers interfaces that provide access to the air throughput profile data, allows acquiring detailed information concerning current air flow conditions and changes in air flow states. The availability of these interfaces support evaluation of different cooling strategies. 
    300  
    301  
    302  
    303 %\subsubsection{Thermal management concept} 
    304  
    305 %The primary motivation behind the incorporation of thermal aspects in the DCworms is to exceed the commonly adopted energy use-cases and apply more sophisticated scenarios. By the means of dedicated profiles and interfaces, it is possible to perform experimental studies involving temperature-aware workload placement. 
    306  
    307 %\paragraph{\textbf{Thermal profile}} 
    308 %Thermal profile expresses the thermal specification of resources. It consists of the definition of the thermal design power (TDP), thermal resistance and thermal states that describe how the temperature depends on dissipated heat. For the purposes of more complex experiments, introducing of new, user-defined characteristics is supported. The aforementioned values may be provided for all computing system components distinguishing them, for instance, according to their material parameters and/or models. 
    309  
    310 %\paragraph{\textbf{Temperature estimation model}} 
    311 %Thermal profile, complemented with the temperature measurement model implementation may introduce temperature sensors simulation. In this way, users have means to approximately predict the temperature of the simulated objects by taking into account basic thermal characteristics as well as the estimated impact of cooling devices. However, the proposed approach assumes some simplifications that ignore heating and cooling dynamics understood as a heat flow process. 
    312  
    313 %Figure~\ref{fig:tempModel} summarizes relation between model and profile and input data. 
    314  
    315 %\begin{figure}[tbp] 
    316 %\centering 
    317 %\includegraphics[width = 8cm]{fig/tempModel.png} 
    318 %\caption{\label{fig:tempModel} Temperature estimation modeling} 
    319 %\end{figure} 
    320  
    321 %\paragraph{\textbf{Thermal resource management interface}} 
    322 %As the temperature is highly dependent on the dissipated heat and cooling capacity, thermal resource management is performed via a power and air throughput interface. Nevertheless, the interface provides access to the thermal resource characteristics and the current temperature values 
    323  
    324273 
    325274\subsection{Application performance modeling}\label{sec:apps} 
     
    395344 
    396345 
    397 \subsection{Static approach}  
     346\subsubsection{Static approach}  
     346\subsubsection{Static approach}  
    398347The static approach is based on a static definition of resource power usage. This model calculates the total amount of energy consumed by the computing system as a sum of the energy consumed by all its components (processors, disks, power adapters, etc.). More advanced versions of this approach assume a definition of resource states along with the corresponding power usage. The model follows changes of resource power states and sums up the amounts of energy defined for each state. 
    399348In this case, specific values of power usage are defined for all $n$ discrete states, as shown in (\ref{eq:static}):  
     
    405354Within DCworms we built in a static model that uses the most common resource states affecting power usage, namely the CPU power states. Hence, with each node power state, understood as a possible operating state (p-state), we associate a power consumption value derived from the averaged measurements obtained for different types of applications. We also distinguish an idle state. Therefore, the current power usage of the node can be expressed as $P = P_{idle} + P_{f}$, where $P$ denotes the power consumed by the node, $P_{idle}$ is the power usage of the node in the idle state, and $P_{f}$ stands for the power usage of the CPU operating at the given frequency level. Additionally, node power states are taken into account to reflect no (or limited) power usage when a node is off. 
    406355 
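A minimal sketch of the static model just described, where each p-state maps to a fixed power draw and $P = P_{idle} + P_{f}$; the idle and per-frequency wattages below are hypothetical placeholders, not measured DCworms data.

```python
# Static power model sketch: P = P_idle + P_f, plus an off state.
# All wattages here are illustrative assumptions.

P_IDLE = 40.0  # W, assumed idle power of the node

# assumed CPU power draw (W) at each frequency level (p-state)
P_FREQ = {2.6e9: 65.0, 2.0e9: 48.0, 1.4e9: 33.0}

def node_power(state, freq=None):
    """Return the node's current power usage in watts."""
    if state == "off":
        return 0.0                 # no power usage when the node is off
    if state == "idle":
        return P_IDLE
    return P_IDLE + P_FREQ[freq]   # P = P_idle + P_f

def energy(intervals):
    """Sum energy (J) over (duration_s, state, freq) intervals,
    following the model's state changes over time."""
    return sum(d * node_power(s, f) for d, s, f in intervals)
```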
    407 \subsection{Resource load}  
     356\subsubsection{Resource load}  
    408357The resource load model extends the static power state description and enhances it with real-time resource usage, most often simply the processor load. In this way it enables a dynamic estimation of power usage based on the resource's basic power usage and state (defined by the static resource description) as well as on the resource load. For instance, it allows distinguishing between the amount of energy used by idle processors and by processors at full load. In this manner, energy consumption is directly connected with the power state and describes the average power usage of the resource working in its current state. 
    409358In this case, specific values of power usage are defined for all pairs of state and load values (discretized to $l$ levels), as shown in (\ref{eq:dynamic}):  
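The state/load lookup this model describes can be sketched as a small table indexed by p-state and discretized load level; the p-state names, wattages, and number of levels are assumed for illustration.

```python
# Resource load model sketch: power is looked up per (p-state,
# load-bucket) pair. All values below are illustrative assumptions.

L = 4  # number of discrete load levels (load in [0, 1] split into L)

# assumed power draw (W) per p-state, one entry per load bucket
POWER = {
    "P0": [45.0, 70.0, 90.0, 105.0],
    "P1": [42.0, 60.0, 75.0, 85.0],
}

def load_bucket(load):
    """Map a load in [0, 1] to one of the L discrete levels."""
    return min(int(load * L), L - 1)

def power_usage(pstate, load):
    """Dynamic power estimate from static state data plus load."""
    return POWER[pstate][load_bucket(load)]
```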
     
    436385 
    437386 
    438 \subsection{Application specific}  
     387\subsubsection{Application specific}  
    439388The application-specific model allows expressing differences in the amount of energy required for executing various types of applications on diverse computing resources. It considers all defined system elements (processors, memory, disk, etc.) that are significant for total energy consumption. Moreover, it also assumes that each of these components can be utilized in a different way during the experiment and thus has a different impact on total energy consumption. To this end, specific characteristics of resources and applications are taken into consideration. Various approaches are possible, including making the estimated power usage dependent on defined classes of applications, on the ratio between CPU-bound and IO-bound operations, etc. 
    440389In this case, power usage is an arbitrary function of state, load, and application characteristics, as shown in (\ref{eq:app}):  
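One possible shape of such a function, sketched with assumed per-class CPU and memory weights; the classes, weights, and wattages are illustrative, not values from the paper.

```python
# Application-specific model sketch: power is a function of load and
# application characteristics. All constants are assumptions.

P_IDLE = 40.0     # W, assumed node idle power
P_CPU_MAX = 65.0  # W, assumed CPU power at full load

# assumed (cpu_weight, mem_weight) per example application class,
# e.g. reflecting the ratio of CPU-bound to IO-bound operations
APP_CLASS = {
    "cpu_bound": (1.00, 0.05),
    "io_bound":  (0.35, 0.10),
}

def app_power(load, app_class, mem_util, p_mem_max=8.0):
    """Estimate node power (W) for a given application class,
    CPU load in [0, 1], and memory utilization in [0, 1]."""
    cpu_w, mem_w = APP_CLASS[app_class]
    return (P_IDLE
            + cpu_w * load * P_CPU_MAX        # CPU-dependent share
            + mem_w * mem_util * p_mem_max)   # memory-dependent share
```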
     
    792741 
    793742 
    794 %\section{DCworms application/use cases}\label{sec:coolemall} 
    795  
    796 %DCworms in CoolEmAll, integration with CFD 
    797  
    798 %... 
    799  
    800 %Being based on the GSSIM framework, that has been successfully applied in a substantial number of research projects and academic studies, DCworms with its sophisticated energy extension has become an essential tool for studies of energy efficiency in distributed environments. For this reason, it has been adopted within the CoolEmAll project as a component of Simulation, Visualisation and Decision Support (SVD) Toolkit. In general the main goal of CoolEmAll is to provide advanced simulation, visualisation and decision support tools along with blueprints of computing building blocks for modular data centre environments. Once developed, these tools and blueprints should help to minimise the energy consumption, and consequently the CO2 emissions of the whole IT infrastructure with related facilities. The SVD Toolkit is designed to support the analysis and optimization of IT modern infrastructures. For the recent years the special attention has been paid for energy utilized by the data centers which considerable contributes to the data center operational costs.  Actual power usage and effectiveness of energy saving methods heavily depends on available resources, types of applications and workload properties. Therefore, intelligent resource management policies are gaining popularity when considering the energy efficiency of IT infrastructures. 
    801 %Hence, SVD Toolkit integrates also workload management and scheduling policies to support complex modeling and optimization of modern data centres. 
    802  
    803 %The main aim of DCworms within CoolEmAll project is to enable studies of dynamic states of IT infrastructures, like power consumption and air throughput distribution, on the basis of changing workloads, resource model and energy-aware resource management policies. 
    804 %In this context, DCworms takes into account the specific workload and application characteristics as well as detailed resource parameters. It will benefit from the CoolEmAll benchmarks and classification of applications and workloads. In particular various types of workload, including data centre workloads using virtualization and HPC applications, may be considered. The knowledge concerning their performance and properties as well as information about their energy consumption and heat production will be used in simulations to study their impact on thermal issues and energy efficiency. Detailed resource characteristics, will be also provided according to the CoolEmAll blueprints. Based on this data, workload simulation will support evaluation process of various resource management approaches. These policies may include a wide spectrum of energy-aware strategies such as workload consolidation/migration, dynamic switching off nodes, DVFS and thermal-aware methods. In addition to typical approaches minimizing energy consumption, policies that prevent too high temperatures in the presence of limited cooling (or no cooling) may also be analyzed. Moreover, apart from the set of predefined strategies, new approaches can easily be applied and examined. 
    805 %The outcome of the workload and resource management simulation phase is a distribution of power usage and air throughput for the computing models specified within the SVD Toolkit. These statistics may be analyzed directly by data centre designers and administrators and/or provided as an input to the CFD simulation phase. The former case allows studying how the above metrics change over time, while the latter harness CFD simulations to identify temperature differences between the computing modules, called hot spots. The goal of this scenario is to visualise the behavior of the temperature distribution within a server room with a number of racks for different types of executed workloads and for various policies used to manage these workloads. 
    806743 
    807744 