Timestamp:
06/03/13 18:36:18 (12 years ago)
Author:
wojtekp
Message:
 
File:
1 edited

  • papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.tex

    r1074 r1075  
    48 48  \usepackage{multirow} 
    49 49  %% The amsthm package provides extended theorem environments 
    50     %% \usepackage{amsthm} 
       50  %% \usepackage{amsthm}   
    51 51   
    52 52  %% The lineno packages adds line numbers. Start line numbering with 
     
    165 165  \section{Introduction} 
    166 166   
    167 Rising popularity of large-scale computing infrastructures caused quick development of data centers. Nowadays, data centers are responsible for around 2\% of the global energy consumption making it equal to the demand of aviation industry \cite{koomey}. Moreover, in many current data centers the actual IT equipment uses only half of the total energy whereas most of the remaining part is required for cooling and air movement resulting in poor Power Usage Effectiveness (PUE) \cite{pue} values. Large energy needs and significant $CO_2$ emissions caused that issues related to cooling, heat transfer, and IT infrastructure location are more and more carefully studied during planning and operation of data centers. 
        167  The rising popularity of large-scale computing infrastructures has caused rapid development of data centers. Nowadays, data centers are responsible for around 2\% of the global energy consumption, making it equal to the demand of the aviation industry \cite{koomey}. Moreover, in many current data centers the actual IT equipment uses only half of the total energy, whereas most of the remaining part is required for cooling and air movement, resulting in poor Power Usage Effectiveness (PUE) \cite{pue} values. The large energy needs of data centers have led to increased interest in cooling, heat transfer, and IT infrastructure location during the planning and operation of data centers. 
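For reference, PUE is the ratio of total facility energy to the energy delivered to the IT equipment, so IT equipment drawing only about half of the total corresponds to a PUE of roughly 2 (an ideal facility approaches 1), as the simple relation below shows:

\[
\mathrm{PUE}=\frac{E_{\mathrm{facility}}}{E_{\mathrm{IT}}}\approx\frac{1}{0.5}=2
\]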
    168 168   
    169 169  For these reasons many efforts have been undertaken to measure and study the energy efficiency of data centers. Some projects focus on data center monitoring and management \cite{games}\cite{fit4green}, whereas others address the energy efficiency of networks \cite{networks} or of distributed computing infrastructures such as grids \cite{fit4green_carbon_scheduler}. Additionally, vendors offer a wide spectrum of energy-efficient solutions for computing and cooling \cite{sgi}\cite{colt}\cite{ecocooling}. However, a large variety of solutions and configuration options can be applied when planning new or upgrading existing data centers. 
     
    177 177  To demonstrate DCworms capabilities we evaluate the impact of several resource management policies on the overall energy efficiency of specific workloads executed on heterogeneous resources. 
    178 178   
    179 The remaining part of this paper is organized as follows. In Section~2 we give a brief overview of the current state of the art concerning modeling and simulation of distributed systems, such as Grids and Clouds, in terms of energy efficiency. Section~3 discusses the main features of DCworms. In particular, it introduces our approach to workload and resource management, presents the concept of energy efficiency modeling and explains how to incorporate a specific application performance model into simulations. Section~4 discusses energy models adopted within the DCworms. In Section~5 we assess the energy models by comparison of simulation results with real measurements. We also present experiments that were performed using DCworms to show various types of resource and scheduling technics allowing decreasing the total energy consumption of the execution of a set of tasks. In Section~6 we explain how to integrate workload and resource simulations with heat transfer simulations within the CoolEmAll project. Final conclusions and directions for future work are given in Section~7. 
        179  The remainder of this paper is organized as follows. In Section~2 we give a brief overview of the current state of the art concerning modeling and simulation of distributed systems, such as Grids and Clouds, in terms of energy efficiency. Section~3 discusses the main features of DCworms. In particular, it introduces our approach to workload and resource management, presents the concept of energy efficiency modeling, and explains how to incorporate a specific application performance model into simulations. Section~4 discusses the energy models adopted within DCworms. In Section~5 we assess these energy models by comparing simulation results with real measurements. We also present experiments performed using DCworms that show how various resource and scheduling techniques can decrease the total energy consumption of executing a set of tasks. Final conclusions and directions for future work are given in Section~6. 
    180 180   
    181 181  \section{Related Work}\label{sota} 
     
    322 322   
    323 323  \begin{equation} 
    324 P=C\cdot V_{core}^{2}\cdot f\label{eq:ohm-law} 
    325 \end{equation} 
    326  
    327 with $C$ being the processor switching capacitance, $V_{core}$ the 
    328 current P-State's core voltage and $f$ the frequency. Based on the 
        324  P=C\cdot V^{2}\cdot f\label{eq:ohm-law} 
        325  \end{equation} 
        326   
        327  with $C$ being the processor switching capacitance, $V$ the 
        328  current P-State's voltage and $f$ the frequency. Based on the 
    329 329  above equation it is suggested that although the reduction of frequency 
    330 330  causes an increase in the time of execution, the reduction of frequency 
    331 also leads to the reduction of $V_{core}$ and thus the power savings 
    332 from the $P\sim V_{core}^{2}$ relation outweigh the increased computation 
    333 time. However, experiments performed on several HPC servers show that this dependency does not reflect theoretical shape and is often close to linear as presented in Figure \ref{fig:power_freq}. This phenomenon can be explained by impact of other component than CPU and narrow range of available voltages. A good example of impact by other components is power usage of servers with visible influence of fans as illustrated in Figure \ref{fig:fans_P}. 
        331  also leads to the reduction of $V$ and thus the power savings 
        332  from the $P\sim V^{2}$ relation outweigh the increased computation 
        333  time. However, experiments performed on several HPC servers show that this dependency does not follow the theoretical shape and is often close to linear, as presented in Figure \ref{fig:power_freq} for the Actina Solar 212 server equipped with a 4-core Intel Xeon 5160 of the ``Woodcrest'' family, which provides four P-States (2.0, 2.3, 2.6 and 3.0~GHz). This phenomenon can be explained by the impact of components other than the CPU and by the narrow range of available voltages. A good example of the impact of other components is the power usage of servers with a visible influence of fans, as illustrated in Figure \ref{fig:fans_P}. 
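A back-of-the-envelope derivation (a simplified sketch assuming a purely compute-bound task, so that execution time scales as $t=k/f$, and negligible static power) makes the theoretical argument explicit:

\[
E = P\cdot t = \left(C\,V^{2}f\right)\cdot\frac{k}{f} = k\,C\,V^{2}
\]

Under these idealized assumptions the energy per task depends only on $V^{2}$, so lowering the frequency pays off whenever it permits a lower core voltage; the measurements above deviate from this picture precisely because static power, fans and other components do not scale with $V^{2}f$.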
    334 334   
    335 335  For these reasons, DCworms allows users to define dependencies between power usage and resource states (such as CPU frequency) in the form of tables or arbitrary functions using energy estimation plugins. 
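As a purely hypothetical illustration of the table-based variant (the interface name, method signature and wattage values below are assumptions for this sketch and do not correspond to the actual DCworms plugin API), such a plugin could interpolate power usage from measured frequency/power pairs:

import java.util.TreeMap;

/** Hypothetical sketch only; interface and method names are assumed, not the real DCworms API. */
interface PowerEstimationPlugin {
    /** Estimated power usage in watts for a given CPU frequency in MHz. */
    double estimatePowerUsage(double frequencyMHz);
}

class TableBasedPowerPlugin implements PowerEstimationPlugin {
    /** Measured frequency [MHz] -> power [W] pairs; the wattages are placeholders. */
    private final TreeMap<Double, Double> table = new TreeMap<>();

    TableBasedPowerPlugin() {
        // Example P-States roughly matching those mentioned above (2.0-3.0 GHz); power values are invented.
        table.put(2000.0, 150.0);
        table.put(2333.0, 160.0);
        table.put(2667.0, 170.0);
        table.put(3000.0, 180.0);
    }

    @Override
    public double estimatePowerUsage(double frequencyMHz) {
        // Clamp to the table range, then linearly interpolate between neighbouring entries.
        Double lo = table.floorKey(frequencyMHz);
        Double hi = table.ceilingKey(frequencyMHz);
        if (lo == null) return table.firstEntry().getValue();
        if (hi == null) return table.lastEntry().getValue();
        if (lo.equals(hi)) return table.get(lo);
        double weight = (frequencyMHz - lo) / (hi - lo);
        return table.get(lo) + weight * (table.get(hi) - table.get(lo));
    }
}

For example, estimatePowerUsage(2500.0) would return a value interpolated between the 2333 MHz and 2667 MHz entries; the same idea extends to arbitrary functions (e.g. a fitted polynomial in place of the lookup table), which is the second option mentioned above.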