Changeset 1075 for papers/SMPaT-2012_DCWoRMS
- Timestamp: 06/03/13 18:36:18 (12 years ago)
- Location: papers/SMPaT-2012_DCWoRMS
- Files: 5 edited
papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.aux
r1072 → r1075:

    \newlabel{sota}{{2}{3}}
    \citation{GSSIM}
  + \@writefile{toc}{\contentsline {section}{\numberline {3}DCworms}{5}}
    \citation{GSSIM}
    \@writefile{lof}{\contentsline {figure}{\numberline {1}{\ignorespaces DCworms architecture}}{6}}
    \newlabel{fig:arch}{{1}{6}}
  - \@writefile{toc}{\contentsline {section}{\numberline {3}DCworms}{6}}
    \@writefile{toc}{\contentsline {subsection}{\numberline {3.1}Architecture}{6}}
    \citation{GWF}
  …
    \@writefile{toc}{\contentsline {paragraph}{\textbf {Power profile}}{10}}
    \@writefile{toc}{\contentsline {paragraph}{\textbf {Power consumption model}}{10}}
  + \@writefile{toc}{\contentsline {paragraph}{\textbf {Power management interface}}{10}}
    \@writefile{lof}{\contentsline {figure}{\numberline {3}{\ignorespaces Power consumption modeling}}{11}}
    \newlabel{fig:powerModel}{{3}{11}}
  - \@writefile{toc}{\contentsline {paragraph}{\textbf {Power management interface}}{11}}
    \@writefile{toc}{\contentsline {subsection}{\numberline {3.5}Application performance modeling}{11}}
    \newlabel{sec:apps}{{3.5}{11}}
papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.fdb_latexmk
r1074 → r1075:

    # Fdb version 2
  - ["pdflatex"] 1370276354 "elsarticle-DCWoRMS.tex" "elsarticle-DCWoRMS.pdf" "elsarticle-DCWoRMS"
  + ["pdflatex"] 1370277362 "elsarticle-DCWoRMS.tex" "elsarticle-DCWoRMS.pdf" "elsarticle-DCWoRMS"
    "/usr/local/texlive/2010/texmf-dist/tex/context/base/supp-pdf.mkii" 1251025892 71625 fad1c4b52151c234b6873a255b0ad6b3 ""
    "/usr/local/texlive/2010/texmf-dist/tex/generic/oberdiek/etexcmds.sty" 1267408169 5670 cacb018555825cfe95cd1e1317d82c1d ""
  …
    "/usr/local/texlive/2010/texmf-dist/tex/latex/psnfss/upsy.fd" 1137110629 148 2da0acd77cba348f34823f44cabf0058 ""
    "/usr/local/texlive/2010/texmf-dist/tex/latex/psnfss/upzd.fd" 1137110629 148 b2a94082cb802f90d3daf6dd0c7188a0 ""
  - "elsarticle-DCWoRMS.aux" 1370276357 8339 a94618b5dd90767aa78e5b5a9204ca6a ""
  - "elsarticle-DCWoRMS.spl" 1370276355 0 d41d8cd98f00b204e9800998ecf8427e ""
  - "elsarticle-DCWoRMS.tex" 1370276354 78516 344c70da16c02031427f01279de0847a ""
  + "elsarticle-DCWoRMS.aux" 1370277364 8339 315dc8d5b2ff46b5c8e14b8345c629e0 ""
  + "elsarticle-DCWoRMS.spl" 1370277362 0 d41d8cd98f00b204e9800998ecf8427e ""
  + "elsarticle-DCWoRMS.tex" 1370277361 78435 4c2001980a7f9d73913d460315631798 ""
    "elsarticle.cls" 1352447924 26095 ad44f4892f75e6e05dca57a3581f78d1 ""
    "fig/70dfsGantt.png" 1370005142 138858 edc557d8862a7825f2941d8c69c30659 ""
papers/SMPaT-2012_DCWoRMS/elsarticle-DCWoRMS.tex
r1074 → r1075:

    \usepackage{multirow}
    %% The amsthm package provides extended theorem environments
  - %% \usepackage{amsthm} 
  + %% \usepackage{amsthm}

    %% The lineno packages adds line numbers. Start line numbering with
  …
    \section{Introduction}

  - Rising popularity of large-scale computing infrastructures caused quick development of data centers. Nowadays, data centers are responsible for around 2\% of the global energy consumption making it equal to the demand of aviation industry \cite{koomey}. Moreover, in many current data centers the actual IT equipment uses only half of the total energy whereas most of the remaining part is required for cooling and air movement resulting in poor Power Usage Effectiveness (PUE) \cite{pue} values. Large energy needs and significant $CO_2$ emissions caused that issues related to cooling, heat transfer, and IT infrastructure location are more and more carefully studied during planning and operation of data centers.
  + Rising popularity of large-scale computing infrastructures caused quick development of data centers. Nowadays, data centers are responsible for around 2\% of the global energy consumption making it equal to the demand of aviation industry \cite{koomey}. Moreover, in many current data centers the actual IT equipment uses only half of the total energy whereas most of the remaining part is required for cooling and air movement resulting in poor Power Usage Effectiveness (PUE) \cite{pue} values. Large energy needs of data centers led to increased interest in cooling, heat transfer, and IT infrastructure location during planning and operation of data centers.

    For these reasons many efforts were undertaken to measure and study energy efficiency of data centers.
    There are projects focused on data center monitoring and management \cite{games}\cite{fit4green} whereas others on energy efficiency of networks \cite{networks} or distributed computing infrastructures, like grids \cite{fit4green_carbon_scheduler}. Additionally, vendors offer a wide spectrum of energy efficient solutions for computing and cooling \cite{sgi}\cite{colt}\cite{ecocooling}. However, a variety of solutions and configuration options can be applied planning new or upgrading existing data centers.
  …
    To demonstrate DCworms capabilities we evaluate impact of several resource management policies on overall energy-efficiency of specific workloads executed on heterogeneous resources.

  - The remaining part of this paper is organized as follows. In Section~2 we give a brief overview of the current state of the art concerning modeling and simulation of distributed systems, such as Grids and Clouds, in terms of energy efficiency. Section~3 discusses the main features of DCworms. In particular, it introduces our approach to workload and resource management, presents the concept of energy efficiency modeling and explains how to incorporate a specific application performance model into simulations. Section~4 discusses energy models adopted within the DCworms. In Section~5 we assess the energy models by comparison of simulation results with real measurements. We also present experiments that were performed using DCworms to show various types of resource and scheduling technics allowing decreasing the total energy consumption of the execution of a set of tasks. In Section~6 we explain how to integrate workload and resource simulations with heat transfer simulations within the CoolEmAll project. Final conclusions and directions for future work are given in Section~7.
  + The remaining part of this paper is organized as follows. In Section~2 we give a brief overview of the current state of the art concerning modeling and simulation of distributed systems, such as Grids and Clouds, in terms of energy efficiency. Section~3 discusses the main features of DCworms. In particular, it introduces our approach to workload and resource management, presents the concept of energy efficiency modeling and explains how to incorporate a specific application performance model into simulations. Section~4 discusses energy models adopted within the DCworms. In Section~5 we assess the energy models by comparison of simulation results with real measurements. We also present experiments that were performed using DCworms to show various types of resource and scheduling technics allowing decreasing the total energy consumption of the execution of a set of tasks. Final conclusions and directions for future work are given in Section~6.

    \section{Related Work}\label{sota}
  …
    \begin{equation}
  - P=C\cdot V_{core}^{2}\cdot f\label{eq:ohm-law}
  + P=C\cdot V^{2}\cdot f\label{eq:ohm-law}
    \end{equation}

  - with $C$ being the processor switching capacitance, $V_{core}$ the
  - current P-State's core voltage and $f$ the frequency. Based on the
  + with $C$ being the processor switching capacitance, $V$ the
  + current P-State's voltage and $f$ the frequency. Based on the
    above equation it is suggested that although the reduction of frequency
    causes an increase in the time of execution, the reduction of frequency
  - also leads to the reduction of $V_{core}$ and thus the power savings
  - from the $P\sim V_{core}^{2}$ relation outweigh the increased computation
  - time. However, experiments performed on several HPC servers show that this dependency does not reflect theoretical shape and is often close to linear as presented in Figure \ref{fig:power_freq}. This phenomenon can be explained by impact of other component than CPU and narrow range of available voltages. A good example of impact by other components is power usage of servers with visible influence of fans as illustrated in Figure \ref{fig:fans_P}.
  + also leads to the reduction of $V$ and thus the power savings
  + from the $P\sim V^{2}$ relation outweigh the increased computation
  + time. However, experiments performed on several HPC servers show that this dependency does not reflect theoretical shape and is often close to linear as presented in Figure \ref{fig:power_freq} for Actina Solar 212 server equipped with a 4 core Intel Xeon 5160 of ``Woodcrest'' family having four P-States (2.0, 2.3, 2.6 and 3.0GHz). This phenomenon can be explained by impact of other component than CPU and narrow range of available voltages. A good example of impact by other components is power usage of servers with visible influence of fans as illustrated in Figure \ref{fig:fans_P}.

    For these reasons, DCworms allows users to define dependencies between power usage and resource states (such as CPU frequency) in the form of tables or arbitrary functions using energy estimation plugins.
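The equation changed in this hunk is the standard CMOS dynamic-power relation P = C·V²·f, and the surrounding text argues that measured server power is often closer to linear in frequency, which is why DCworms also accepts table-based power models. The sketch below contrasts the two views numerically. Only the four P-State frequencies come from the text; the capacitance, per-P-State voltages, and "measured" wattages are assumed values for illustration, not data from the paper.

```python
# Illustrative sketch: ideal dynamic-power formula vs. a table-based model
# of the kind a DCworms energy estimation plugin could supply.

def dynamic_power(c, v, f):
    """Ideal CMOS dynamic power: P = C * V^2 * f (arbitrary scaled units)."""
    return c * v * v * f

def table_power(f, table):
    """Piecewise-linear interpolation over (frequency, power) pairs,
    i.e. a lookup-table power model built from measurements."""
    pts = sorted(table.items())
    if f <= pts[0][0]:
        return pts[0][1]
    for (f0, p0), (f1, p1) in zip(pts, pts[1:]):
        if f <= f1:
            return p0 + (p1 - p0) * (f - f0) / (f1 - f0)
    return pts[-1][1]

P_STATES = [2.0, 2.3, 2.6, 3.0]  # GHz, the four P-States from the text
# Assumed core voltage per P-State and effective capacitance (illustrative):
VOLTAGES = {2.0: 1.00, 2.3: 1.05, 2.6: 1.10, 3.0: 1.20}
C = 30.0
# Hypothetical measured node power (W) per frequency -- near-linear in f:
MEASURED = {2.0: 150.0, 2.3: 158.0, 2.6: 166.0, 3.0: 177.0}

for f in P_STATES:
    ideal = dynamic_power(C, VOLTAGES[f], f)
    meas = table_power(f, MEASURED)
    print(f"{f:.1f} GHz: ideal model {ideal:6.1f}, table model {meas:6.1f} W")
```

Because the formula multiplies a falling V² by a falling f, it predicts disproportionately large savings at low P-States, while the table model (standing in for real measurements) changes much more gently; feeding measured tables into the simulator sidesteps the gap between the two, which is the motivation the text gives for the plugin mechanism.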