Demytro Dyachuk


  • Courses: 4
  • Reviews: 4

Biography

University of Saskatchewan - Computer Science



Experience

  • University of Saskatchewan

    Sessional Instructor

    CMPT 350.3 Web-Programming (1 term)
    CMPT 115.3 Principles of Computer Science (3 terms)

  • University of Saskatchewan

    Research Assistant

    Main research areas:
    - optimizing response time of service-oriented systems (7 papers)
    - ensuring QoS of service compositions by means of scheduling (4 papers)
    - efficient resource utilization by cloud applications (1 paper)
    - improving energy efficiency of data centers (4 papers)

  • Untab.io

    Co-Founder

    Demytro worked at Untab.io as a Co-Founder

  • Demonware

    Capacity Planning Engineer

    - capacity planning for data centers, clusters, and services
    - demand forecasting
    - designing BI for capacity planning purposes
    - bottleneck detection
    - resource utilization analysis and reporting
    - cost modeling
    - data center inventory modeling

  • Pax Automa

    Co-Founder

    Demytro worked at Pax Automa as a Co-Founder

  • TRLabs

    Graduate Researcher

    Developed a method for migrating legacy two-tier ASP.NET applications to a Service-Oriented Architecture. The main goal of the method is to minimize the time, and consequently the cost, associated with the migration. The method was successfully tested with an online market research application used by iTracks (http://www.itracks.com/).

Education

  • University of Saskatchewan

    Ph.D.

    Computer Science

  • University of Saskatchewan

    M.Sc. (not completed due to a transfer to a Ph.D.)

    Computer Science

  • Chernivci National 'Juriy Fedkovyc' University

    M.Sc.

    Applied Mathematics
    Dissertation topic: A Grid System for Performing Distributed Computations in the Presence of Unreliable Nodes.

Publications

  • A Solution to Resource Underutilization for Web Services Hosted in the Cloud

    The 11th International Symposium on Distributed Objects, Middleware, and Applications (DOA'09)

    The service market is experiencing continuous growth, as services allow new and existing applications to be enhanced quickly and easily. However, hosting services according to the common on-premise model is not sufficient for dealing with erratic, spike-prone service loads. A more promising approach is hosting services in the cloud (utility computing), which enables dynamic resource allocation. The latter provides an opportunity to meet average response time requirements even in the case of long-term fluctuating loads. Unfortunately, in the presence of short-term fluctuations, resource utilization has to stay under 50% in order to achieve response times of the same order as job sizes. In this work we propose compensating for the underutilization caused by hosting low-latency services by allocating the remaining resources to time-insensitive service requests. The solution combines load balancing with admission control and the scheduling of application server threads. The proposed approach is evaluated by means of experiments with a prototype on Amazon's EC2. The experimental results show that server utilization can be increased without penalizing low-latency requests.
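The under-50% utilization figure in this abstract follows from basic queueing behavior: in an M/M/1 queue the mean response time is the mean service time divided by (1 − utilization), so it is already twice the job size at 50% load. A minimal sketch of that relationship (not code from the paper; the function name and numbers are illustrative):

```python
def mm1_response_time(service_time, utilization):
    # Mean response time of an M/M/1 queue: T = s / (1 - rho).
    # At rho = 0.5 the response time is already 2x the job size,
    # and it grows without bound as rho approaches 1.
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)

for rho in (0.5, 0.8, 0.95):
    print(rho, mm1_response_time(1.0, rho))
```

Keeping response times "of the same order as job sizes" therefore forces utilization below 0.5, which is exactly the spare capacity the paper proposes to reclaim with time-insensitive requests.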

  • On Allocation Policies for Power and Performance

    E2GC2 2010: Energy Efficient Grids, Clouds and Clusters Workshop at IEEE Grid 2010 Conference

    With the increasing popularity of Internet-based services and applications, power efficiency is becoming a major concern for data center operators, as high electricity consumption not only increases greenhouse gas emissions but also increases the cost of running the server farm itself. In this paper we address the problem of maximizing the revenue of a service provider by means of dynamic allocation policies that run the minimum number of servers necessary to meet users' performance requirements. The results of several experiments executed using Wikipedia traces are described, showing that the proposed schemes work well even if the workload is non-stationary. Since any resource allocation policy requires the use of forecasting mechanisms, various schemes for compensating for errors in the load forecasts are presented and evaluated.
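The "minimum number of servers" idea can be sketched as a capacity search: given a forecast arrival rate and a response-time target, find the smallest server count whose per-server load meets the target, with a headroom factor padding the forecast against errors. Everything below (the names, the even-split M/M/1 approximation, the headroom parameter) is an illustrative assumption, not the paper's actual policy:

```python
import math

def servers_needed(arrival_rate, service_rate, target_response, headroom=1.0):
    # Smallest n such that, splitting the (padded) forecast load evenly
    # across n independent M/M/1 servers, mean response time meets the
    # target. headroom > 1 over-provisions against forecast errors.
    lam = arrival_rate * headroom
    n = max(1, math.ceil(lam / service_rate))  # stability floor
    while True:
        rho = (lam / n) / service_rate          # per-server utilization
        if rho < 1.0:
            t = (1.0 / service_rate) / (1.0 - rho)
            if t <= target_response:
                return n
        n += 1

# e.g. 90 req/s total, 10 req/s per server, 200 ms mean-response target
print(servers_needed(90, 10, 0.2))
```

Raising `headroom` trades electricity for robustness, which is the forecast-error compensation the abstract alludes to.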

  • Maximizing Cloud Providers Revenues via Energy Aware Allocation Policies

    IEEE Cloud 2010: the 2010 International Conference on Cloud Computing

    Cloud providers, like Amazon, offer their data centers' computational and storage capacities for lease to paying customers. High electricity consumption, associated with running a data center, not only reflects on its carbon footprint but also increases the costs of running the data center itself. This paper addresses the problem of maximizing the revenues of Cloud providers by trimming down their electricity costs. As a solution, allocation policies based on dynamically powering servers on and off are introduced and evaluated. The policies aim at satisfying the conflicting goals of maximizing the users' experience while minimizing the amount of consumed electricity. The results of numerical experiments and simulations are described, showing that the proposed scheme performs well under different traffic conditions.

  • Balancing Electricity Bill and Performance in Server Farms with Setup Costs

    Future Generation Computer Systems. The International Journal of Grid Computing and eScience

    High electricity consumption, associated with running Internet-scale server farms, not only reflects on the data center's greenhouse gas emissions but also increases the cost of running the data center itself. In this paper, we consider the problem of maximizing the revenues of service providers running large-scale data centers subject to setup costs by reducing their electricity bill, while considering the fact that clients consuming the offered services have finite, non-deterministic patience. As a solution, we present and evaluate the performance of allocation policies which, in the context of both one- and two-tiered systems, dynamically switch servers on and off according to changes in user demand. The algorithms we present aim at maximizing the users' experience while minimizing the amount of electricity required to run the IT infrastructure in spite of non-stationary traffic which cannot be predicted with absolute accuracy. The results of several experiments are presented, showing that the proposed schemes perform well under different traffic conditions.

  • Optimizing Cloud Providers Revenues Via Energy Efficient Server Allocation

    Elsevier (Sustainable Computing)

    Cloud providers, like Amazon, offer their data centers' computational and storage capacities for lease to paying customers. High electricity consumption not only reflects on the data center's carbon footprint but also increases the costs of running the data center itself. We examine the problem of managing a server farm in a way that attempts to maximize the net revenue earned by a cloud provider by renting servers to customers according to a typical Platform-as-a-Service model. As a solution, allocation policies based on dynamically powering servers on and off are introduced and evaluated. The policies aim at satisfying the conflicting goals of maximizing the users' experience while minimizing the amount of consumed electricity. Special emphasis is given to cases where user demand is time-varying and cannot be predicted with absolute accuracy. To deal with that, allocation policies resilient to forecasting errors, as well as a method for finding the parameters leading to the highest revenues, are introduced. The results of several experiments are described, showing that the proposed scheme performs well under different traffic conditions.

  • Profit-Aware Server Allocation for Green Internet Services

    MASCOTS 2010: The 18th Annual Meeting of the IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems

    A server farm is examined in which a number of servers are used to offer a service to impatient customers. Every completed request generates a certain amount of profit, running servers consume electricity for power and cooling, and waiting customers might leave the system before receiving service if they experience excessive delays. A dynamic allocation policy aiming at satisfying the conflicting goals of maximizing the quality of the users' experience while minimizing the cost for the provider is introduced and evaluated. The results of several experiments are described, showing that the proposed scheme performs well under different traffic conditions.
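The trade-off this abstract describes, profit from completed requests against the electricity cost of the servers kept running, can be written down in one line; the function and the numbers below are illustrative assumptions, not the paper's model:

```python
def net_revenue(completed_requests, profit_per_request,
                servers_on, cost_per_server_hour, hours):
    # Profit from served requests minus the electricity bill for the
    # servers kept running; abandoned (impatient) requests earn nothing.
    earned = completed_requests * profit_per_request
    power_bill = servers_on * cost_per_server_hour * hours
    return earned - power_bill

# Running more servers completes more requests (fewer abandonments) but
# raises the bill; the allocation policy searches for the middle ground.
print(net_revenue(100000, 2, 5, 3, 24))
```

The dynamic policy's job is to pick `servers_on` over time so this quantity stays maximized as demand shifts.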
