
QoS assurance and control of large scale distributed component based systems

by Nilabja Roy




Institution: Vanderbilt University
Department: Computer Science
Degree: PhD
Year: 2010
Keywords: queuing models; component based systems; analytical model; simulation model; performance engineering; resource allocation; colored petri net; lookahead control; workload modeling
Record ID: 1887810
Full text PDF: http://etd.library.vanderbilt.edu/available/etd-12022010-131755/


Abstract

Large-scale distributed component-based applications provide a number of different services to their clients. Such applications typically serve a huge number of concurrent clients and must provide an acceptable Quality of Service (QoS). A deployment domain composed of several machines is used to host these applications; the application components are distributed across the machines and communicate among themselves. An important objective of the owner of such a deployment is to handle as many clients as possible at any given time, which maximizes the revenue earned. This must be done while keeping costs down and providing every customer a minimum level of QoS. Cost can be reduced by minimizing the number of machines used and by using less power.

This thesis works toward a solution to this problem and proposes novel application component placement heuristics that ensure the overall resources of the domain are utilized in the best possible way. The intuition behind this work is that components are the smallest elements of an application from the perspective of resource usage; by distributing the components judiciously across the machines, it is possible to ensure that a minimum of resources is wasted. The work uses a three-phase strategy. In the first phase, the resource requirements of each component are identified using profiling and workload modeling techniques. In the second phase, a detailed performance estimation of the application is carried out using analytical methods. In the third and final phase, heuristics are proposed that use the component resource requirements and the performance estimation methods to place the components across the machines so that the placement wastes the fewest resources.

The final part of this work applies these techniques in the context of modern data center planning. The most important challenge in modern data centers is to support large customer bases with high performance expectations. The incoming workload is highly variable, with periodic increases and decreases. If resources are allocated for the average workload, performance suffers during peak load, while planning for the peak keeps resources idle during lighter load. Cloud computing is an emerging trend that allows the elastic configuration of resources, where machines can be acquired and released on demand. This work proposes a dynamic capacity planning framework for cost minimization based on a look-ahead control algorithm that combines performance modeling, workload forecasting, and cost optimization to plan resource allocation in a dynamic environment. The results show how resources can be allocated just in time as the workload fluctuates. The dissertation also presents the various ways resources are allocated as the different cost components change.
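
To make the three-phase idea concrete, the sketch below shows a minimal, hypothetical version of the placement step: a first-fit-decreasing heuristic that packs components onto machines, with a simple M/M/1 response-time estimate standing in for the analytical performance models described in the abstract. The component names, demands, capacities, and rates are illustrative assumptions, not the dissertation's actual algorithms or data.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue; infinite if the server saturates."""
    if arrival_rate >= service_rate:
        return float("inf")
    return 1.0 / (service_rate - arrival_rate)


def place_components(components, machine_capacity, arrival_rate, max_response_time):
    """Greedy first-fit-decreasing placement (illustrative sketch only).

    components        -- {name: cpu demand per request}, hypothetical units
    machine_capacity  -- CPU capacity of each (identical) machine
    arrival_rate      -- client request rate offered to the application
    max_response_time -- per-machine QoS bound used as an admission check
    Returns a list of machines, each a dict {name: demand}.
    """
    machines = []
    # Heaviest components first: the classic first-fit-decreasing order.
    for name, demand in sorted(components.items(), key=lambda kv: -kv[1]):
        placed = False
        for machine in machines:
            used = sum(machine.values())
            if used + demand <= machine_capacity:
                # Approximate the machine's service rate as capacity over total
                # demand and reject the move if the M/M/1 estimate breaks the bound.
                service_rate = machine_capacity / (used + demand)
                if mm1_response_time(arrival_rate, service_rate) <= max_response_time:
                    machine[name] = demand
                    placed = True
                    break
        if not placed:
            machines.append({name: demand})  # open a new machine
    return machines


if __name__ == "__main__":
    demo = {"web": 0.20, "auth": 0.10, "catalog": 0.35, "checkout": 0.25, "search": 0.15}
    layout = place_components(demo, machine_capacity=1.0,
                              arrival_rate=1.5, max_response_time=2.0)
    for i, machine in enumerate(layout):
        print(f"machine {i}: {machine}")
```

The look-ahead capacity planning framework can be sketched in the same spirit. The fragment below is a greedy, per-interval approximation of a receding-horizon plan: for each forecasted workload interval it enumerates candidate machine counts, keeps only those meeting an M/M/1-style response-time bound, and picks the cheapest when running cost and a reconfiguration penalty are combined. All parameters are hypothetical placeholders for the dissertation's performance model, forecaster, and cost optimizer.

```python
def plan_capacity(forecast, per_machine_rate, max_response_time,
                  machine_cost, reconfig_cost, current_machines):
    """Choose a machine count per future interval (illustrative sketch only)."""
    plan = []
    prev = current_machines
    for demand in forecast:                      # forecasted arrival rate per interval
        best_n, best_cost = None, float("inf")
        for n in range(1, 100):                  # small candidate machine counts
            per_server = demand / n
            if per_server >= per_machine_rate:
                continue                         # machine would saturate
            if 1.0 / (per_machine_rate - per_server) > max_response_time:
                continue                         # QoS bound violated
            cost = n * machine_cost + abs(n - prev) * reconfig_cost
            if cost < best_cost:
                best_n, best_cost = n, cost
        plan.append(best_n)
        prev = best_n
    return plan
```

The reconfiguration term penalizes acquiring or releasing machines between intervals, which is what distinguishes look-ahead planning from sizing each interval in isolation: a short demand spike may not justify paying twice for reconfiguration.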