Programme of Invited Talks
Energy-Aware Autonomic Resource Allocation in Multitier Virtualized Environments, Danilo Ardagna
In recent years, the energy consumption of Information Technology (IT) infrastructures has been steadily increasing. By the end of 2012, up to 40% of IT budgets is expected to be consumed by energy costs. From an environmental point of view, IT accounts for 2% of global CO2 emissions, polluting as much as global air traffic. Reducing energy usage is one of the primary goals of green computing, a new discipline and practice of using computing resources with a focus on the impact of IT on the environment.
A significant amount of work has been done to achieve power reduction in hardware devices (e.g., in mobile systems, to extend battery life). Nowadays, low-power techniques and energy-saving mechanisms are also being introduced in data center environments. In such systems, software is accessed as a service and computational capacity is provided on demand to many customers who share a pool of IT resources. Energy savings can be obtained by dynamically allocating computing resources among running applications and trading off application performance levels against energy consumption. Since customers' access rates change significantly within a single business day, energy-aware resource allocation is a challenging problem, and techniques able to control the system at multiple time scales are needed.
The aim of the seminar is to introduce a unifying framework for data center resource management that exploits, as actuation mechanisms, the allocation of virtual machines (VMs) to servers, load balancing, capacity allocation, server power state tuning, and dynamic voltage/frequency scaling. Resource management is modeled as an NP-hard mixed integer nonlinear programming problem and solved by a local search procedure. To validate its effectiveness, the proposed model is compared to top-performing state-of-the-art techniques. The evaluation is based on simulation and on real experiments performed in a prototype environment. Synthetic as well as realistic workloads and a number of different scenarios of interest will be considered.
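To make the local search idea concrete, the following is a minimal sketch of a local-search heuristic for VM-to-server allocation. The linear power model (idle power plus a utilization-proportional term) and all numeric parameters are illustrative assumptions, not the model or procedure from the talk; the talk's formulation also spans load balancing, power states, and DVFS, which this sketch omits.

```python
# Minimal local-search sketch: move VMs between servers to reduce total
# power. Power model and figures are illustrative assumptions.

def power(load, capacity, p_idle=100.0, p_peak=200.0):
    """Power drawn by one server under a linear utilization model."""
    if load == 0:
        return 0.0  # empty servers are assumed switched off
    return p_idle + (p_peak - p_idle) * load / capacity

def total_power(assign, demands, capacity):
    loads = {}
    for vm, srv in assign.items():
        loads[srv] = loads.get(srv, 0) + demands[vm]
    return sum(power(l, capacity) for l in loads.values())

def local_search(demands, n_servers, capacity):
    # start from a naive spread-out placement
    assign = {vm: i % n_servers for i, vm in enumerate(demands)}
    improved = True
    while improved:
        improved = False
        for vm in demands:
            best, best_cost = assign[vm], total_power(assign, demands, capacity)
            for srv in range(n_servers):
                trial = dict(assign, **{vm: srv})
                # respect the capacity constraint of the target server
                load = sum(demands[v] for v, s in trial.items() if s == srv)
                cost = total_power(trial, demands, capacity)
                if load <= capacity and cost < best_cost:
                    best, best_cost = srv, cost
            if best != assign[vm]:
                assign[vm] = best
                improved = True
    return assign
```

Under this cost model the search naturally consolidates lightly loaded VMs onto fewer servers, since an idle server's fixed power cost dominates its utilization-dependent cost.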
Intelligent Offload to Increase Smartphone Battery Lifetime, Ranveer Chandra
Mobile devices are severely battery constrained. While smartphone capability has increased manifold in the last fifteen years, battery energy density has only doubled. We have been exploring several techniques to improve battery lifetime, and in this talk I will discuss one such technique: offloading. In particular, by offloading computation from the main processor to a lower-power core, the network interface, or the cloud, we can put portions of the mobile device to sleep, thereby saving a significant amount of energy.
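The core trade-off behind cloud offloading can be captured in a back-of-the-envelope energy comparison: offload when the energy to compute locally exceeds the energy to transmit the task's input. The power and rate figures below are illustrative assumptions, not measurements from the talk.

```python
# Sketch of the offload-or-not energy trade-off. All power/rate figures
# are illustrative assumptions for a hypothetical phone.

def local_energy_j(cycles, cpu_power_w=2.0, cpu_speed_hz=1.5e9):
    """Energy to compute the task on the phone's main processor."""
    return cpu_power_w * cycles / cpu_speed_hz

def offload_energy_j(bytes_to_send, radio_power_w=1.0, bandwidth_bps=5e6):
    """Energy to transmit the task's input over the radio."""
    return radio_power_w * (bytes_to_send * 8) / bandwidth_bps

def should_offload(cycles, bytes_to_send):
    # Offloading pays off when computing costs more energy than sending.
    return local_energy_j(cycles) > offload_energy_j(bytes_to_send)
```

In this simple model, compute-heavy tasks with small inputs favor offloading, while data-heavy tasks with little computation are better run locally.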
Data Center Performance and Power Management, Yuan Chen
Data centers are very expensive to operate due to the power and cooling requirements of IT equipment. Rising energy costs, regulatory requirements and social concerns over greenhouse gas emissions amplify the importance of energy efficiency. However, energy efficiency is for naught if the data center cannot deliver IT services according to predefined performance goals, as performance violations result in lost business revenue. Thus, an important question in data center resource management is how to correctly provision resources such that performance requirements are met while minimizing energy consumption. This talk discusses several approaches in both theory and practice to improve the overall efficiency of data center operations.
- Application performance and power modeling
  - Modeling application-level performance for multi-tier applications in virtualized server environments
  - Modeling and monitoring power consumption and sustainability of applications and services in data centers
- Control and optimization of data center workload and resource management
  - Application-level performance management in virtualized server environments
  - Dynamic provisioning of IT resources for large web server farms
- Integrated management of IT, cooling, and power supply in data centers
  - Integration of IT management and cooling infrastructure management
  - Integrating power supply, especially renewable, and cooling supply with IT workload and resource management
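The provisioning question raised above, meeting a performance requirement with as few resources as possible, can be illustrated with a deliberately simple queueing-based sizing rule: treat each server as an M/M/1 queue fed an equal share of the traffic, and pick the fewest servers whose response time meets the SLO. This model choice is an assumption for illustration, not the talk's actual method.

```python
# Sketch: smallest number of active servers meeting a response-time SLO,
# assuming each server is an M/M/1 queue with a 1/m share of arrivals.

def min_servers(arrival_rate, service_rate, slo_seconds, max_servers=1000):
    for m in range(1, max_servers + 1):
        per_server = arrival_rate / m
        if per_server < service_rate:                 # stability check
            resp = 1.0 / (service_rate - per_server)  # M/M/1 mean response time
            if resp <= slo_seconds:
                return m
    raise ValueError("SLO unreachable with max_servers")
```

The rule makes the performance/energy tension explicit: tightening the SLO raises the server count (and hence power draw) even when total demand is unchanged.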
Dynamic Energy Management with/for Smart ICT, Erol Gelenbe
ICT is becoming one of the main culprits of CO2 emissions, already on a par with air travel, and soon probably doing even more damage. At the same time ICT is offering cleaner substitutes to emissions in areas such as transport because of the potential for substituting on-line activities for physical activities, such as working at home rather than commuting to an office. Furthermore ICT offers the potential to manage energy more efficiently and for better matching supply and demand, and substituting renewable energy sources in the place of fossil fuels. This lecture will address some of these questions from the perspective of classical computer systems performance engineering and show how some of our well established methods can be used to understand the trade-offs and help improve the outcomes.
Energy Saving and Performance in Service Centers, Isi Mitrani
We consider the problem of managing a service center where it is desirable to keep power consumption low, while at the same time maintaining high performance levels. These conflicting objectives are addressed by designating one or more blocks of servers as 'reserves', to be powered up and down when the demand increases above or falls below certain thresholds. The questions of how to choose the parameters of the operating policy are answered by analyzing suitable queueing models. These may allow customers to defect, i.e., to leave the system when their waiting times are too large. Simple and easily implementable heuristics are proposed and numerical results are presented.
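The reserve-block idea is a hysteresis policy, which can be sketched in a few lines: power the reserves up when the queue exceeds an upper threshold U, and power them down only when it falls below a lower threshold D < U, so that the block does not flap on and off. The thresholds below are illustrative, not values from the analysis.

```python
# Sketch of the hysteresis policy for a reserve block of servers.
# Thresholds (up=20, down=5) are illustrative assumptions.

def reserve_state(queue_lengths, up=20, down=5):
    """Trace the on/off state of the reserve block over a queue-length path."""
    on, trace = False, []
    for q in queue_lengths:
        if not on and q > up:
            on = True      # demand surged past U: power the reserves up
        elif on and q < down:
            on = False     # demand fell below D: power them down again
        trace.append(on)
    return trace
```

The gap between the two thresholds is what the queueing analysis must size: too narrow and servers cycle wastefully; too wide and the reserves stay powered during long lulls.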
Performance Control for Complex Web Applications, Guillaume Pierre
Dynamic resource provisioning aims at maintaining the end-to-end response time of a web application within a pre-defined range (Service Level Objective, or SLO). Provisioning resources for applications composed of multiple services remains a challenge. When the SLO is violated, one must decide which service(s) should be re-provisioned for optimal effect. We propose to assign an SLO only to the front-end service. Other services are not given any particular response time objectives. Services are autonomously responsible for their own provisioning operations and collaboratively negotiate performance objectives with each other to decide which service(s) to re-provision. After presenting the resource provisioning techniques themselves, I will discuss their application and implementation in the context of the ConPaaS runtime environment for elastic Cloud applications.
How to Tame Burstiness and Save Power in Multi-Tiered Systems, Evgenia Smirni
Burstiness (i.e., sudden surges) in user demands in enterprise systems that operate under the multi-tiered paradigm is a common phenomenon that leads to over-provisioning: the system is configured with excess hardware to meet peak user demands, often resulting in excessive (and unnecessary) power costs. In this talk, we present Fastrack, a parameter-free algorithm for dynamic resource provisioning that uses simple statistics to promptly distil information about changes in workload burstiness. This information, coupled with the application's end-to-end response times and system bottleneck characteristics, guides resource allocation, which proves to be effective under a broad variety of application burstiness profiles and bottleneck scenarios. Extensive simulations illustrate Fastrack's robustness for consistently meeting predefined service level objectives while minimizing power usage.
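One "simple statistic" commonly used to quantify workload burstiness is the index of dispersion for counts: the variance-to-mean ratio of arrivals per time window, which is near 1 for Poisson traffic and much larger for bursty traffic. Whether Fastrack uses exactly this statistic is not claimed here; the sketch is only meant to show how cheaply such a signal can be computed online.

```python
# Index of dispersion for counts: a simple, parameter-free burstiness
# signal. ~1 for Poisson-like traffic, >> 1 for bursty traffic.

def index_of_dispersion(arrival_counts):
    n = len(arrival_counts)
    mean = sum(arrival_counts) / n
    var = sum((c - mean) ** 2 for c in arrival_counts) / n
    return var / mean
```

A provisioning controller can watch this ratio over a sliding window and react to a jump well before mean utilization alone would reveal the surge.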
Joint work with Andrew Caniff, Lei Lu, Ningfang Mi, and Ludmila Cherkasova. A technical paper on this work appeared at ITC'10 where it received the Best Student Paper Award.
Autonomic Exploration of Trade-offs between Power and Performance in Disk Drives, Evgenia Smirni
Over-provisioning is a standard capacity planning practice that leads to disk drives that operate mostly under very low utilization (as low as single-digit utilization) but that consume disproportionate amounts of power. Methodologies that place the disk drive into a low power mode during idle times can assist in conserving power. This is a challenging problem because the performance of future jobs cannot be compromised, yet there is no knowledge of future disk arrivals. In this talk we explore this problem by examining the ranges and trade-offs of possible power savings and performance within a set of enterprise storage traces. We demonstrate the difficulty of obtaining significant power savings even in traces where overall utilization is less than 5% and explore the feasibility of popular schemes such as workload shaping for power savings. We also propose an autonomic algorithm that suggests when and for how long a power savings mode should be activated, given a user-provided acceptable performance degradation target. The robustness of the algorithm is illustrated via extensive experimentation.
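The shape of such an autonomic decision can be sketched as a threshold policy: spin the disk down after T seconds of idleness, and pick the smallest T whose predicted fraction of penalized requests stays within the user's degradation target. This is a simplified stand-in for the talk's algorithm; the candidate thresholds and the idle-gap statistic are illustrative assumptions.

```python
# Sketch: pick an idle-time threshold for entering a power-saving mode,
# subject to a user-provided degradation target. Illustrative only.

def penalized_fraction(idle_gaps, threshold):
    """Fraction of idle periods whose next request hits a sleeping disk."""
    hits = sum(1 for gap in idle_gaps if gap > threshold)
    return hits / len(idle_gaps)

def pick_threshold(idle_gaps, target, candidates=(1, 2, 5, 10, 30, 60)):
    for t in sorted(candidates):
        if penalized_fraction(idle_gaps, t) <= target:
            return t   # smallest threshold meeting the degradation target
    return None        # no candidate is safe: stay in active mode
```

Estimating `idle_gaps` from a recent window of the trace is what makes the policy autonomic: as the arrival pattern drifts, the chosen threshold drifts with it.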
Joint work with Alma Riska, Xenia Mountrouidou, and Feng Yan. The talk will be based on publications that appeared at ICAC'10, MASCOTS'11, ICPE'11, ICPE'12, and ERSS'11.
Markov fluid models for energy and performance analysis, Miklos Telek
The Markov fluid model is a flexible tool for describing the behavior of systems with hybrid (discrete and continuous) state spaces. One of the continuous variables characterizing the system behavior can be the energy level; this modeling approach thus allows a combined analysis of system performance and energy consumption. The talk surveys the background of Markov fluid models and introduces some application examples.
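As a toy illustration of the modeling idea (not an example from the talk), consider a two-state chain where a BUSY state drains a battery at one rate and an IDLE state recharges it at another: the discrete chain modulates the drift of the continuous fluid level. All rates and switching probabilities below are assumed for illustration.

```python
# Toy Markov fluid model: a two-state chain modulates the drift of a
# continuous energy level. Rates and probabilities are illustrative.
import random

def simulate(steps, dt=0.1, p_switch=0.05, rates=(-2.0, 1.0), e0=50.0, seed=1):
    random.seed(seed)
    state, energy, path = 0, e0, []   # state 0 = BUSY (drain), 1 = IDLE (recharge)
    for _ in range(steps):
        if random.random() < p_switch:
            state = 1 - state                        # BUSY <-> IDLE
        energy = max(0.0, energy + rates[state] * dt)  # fluid level evolves
        path.append(energy)
    return path
```

The analytic machinery surveyed in the talk replaces such simulation with exact distributions of the fluid level, but the simulated path shows the hybrid dynamics at a glance.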
Algorithmic challenges for greening data centers, Adam Wierman
Given the significant energy consumption of data centers, improving their energy efficiency is an important social problem. However, energy efficiency is necessary but not sufficient for sustainability, which demands reduced usage of energy from fossil fuels. In this talk, I will describe some recent work highlighting the algorithmic challenges associated with "greening" data centers. We will focus on two applications:
- dynamic resizing within a data center; and
- geographical load balancing across an Internet-scale system.
In both contexts I will present our new algorithms, which provide significantly improved performance guarantees compared with the "standard" approaches using Receding Horizon Control. Additionally, if time allows, I will briefly discuss our recent progress toward the implementation and evaluation of these algorithms in industry data centers.
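The objective underlying dynamic resizing can be sketched as a simple cost function: at each time step a schedule pays an operating cost per active server plus a switching cost for every server powered on. The algorithms in the talk and the Receding Horizon Control baseline optimize variants of such an objective; the cost coefficients here are illustrative assumptions.

```python
# Sketch of the dynamic-resizing cost model: operating cost plus a
# switching cost for powering servers on. Coefficients are illustrative.

def resizing_cost(servers, demand, op_cost=1.0, switch_cost=6.0):
    """Total cost of a capacity schedule; servers[t] must cover demand[t]."""
    total, prev = 0.0, 0
    for x, d in zip(servers, demand):
        assert x >= d, "schedule must meet demand"
        total += op_cost * x                     # energy while running
        total += switch_cost * max(0, x - prev)  # toggling servers on
        prev = x
    return total
```

For a demand trace [2, 0, 2], keeping both servers on throughout ([2, 2, 2]) can be cheaper than tracking demand exactly ([2, 0, 2]) once the switching cost is large enough; deciding when to ride out a lull is precisely what makes the online problem hard.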
Energy procurement in the presence of intermittent sources, Adam Wierman
The increasing penetration of intermittent, unpredictable renewable energy sources, such as wind energy, poses significant challenges for the utility companies trying to incorporate renewable energy into their portfolios. As a result, there is considerable discussion about how electricity markets should be restructured in order to facilitate the integration of renewable energy into the grid. Suggestions include adding additional markets, moving markets closer to real time, etc. In this talk, I will discuss how the optimal energy procurement of a utility company changes as a result of an increasing penetration of intermittent renewable resources, and what impact this should have on electricity market structure. The work I will present is joint with Sachin Adlakha and Jayakrishnan Nair.