The Zen of Resource Provisioning in a Highly-Virtualized, Service-Driven Data Center
on 02-27-2013 03:10 PM - last edited on 10-28-2013 09:35 PM by bcm1
Service-driven organizations are compelled to make the leap to customer-driven organizations by building a highly scalable and available data center that delivers guaranteed service on demand. Open, programmable networks, automated management, and flexible virtualized infrastructure are the essential requirements for responding at a moment's notice to increasingly complex applications and user demands.
The on-demand nature of the data center requires the ability to act proactively or reactively, depending on the workload, and then to distribute resources seamlessly within the existing infrastructure. To better characterize these requirements, we will discuss two of the most practical solutions: dynamic resource provisioning and distributed resource management.
Dynamic provisioning, the ability to automatically spin up new instances of application resources as workload conditions demand, is a key requirement for fully realizing the benefits of a highly virtualized data center. Ideally, the goal is not only to provision and de-provision virtual compute resources or simply move applications and data around, but to actively monitor and direct traffic while dynamically managing network resources. To accommodate such an elastic environment, the supporting network services within it must also adapt. When the network tier can't change as rapidly as the resources behind it, the data center becomes more vulnerable to critical failures and unable to meet demand. To control this highly dynamic environment, you need to know how applications are performing, how they are being delivered, and how traffic is being controlled and directed to the available resources.
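To make this concrete, here is a minimal sketch of the kind of decision logic a dynamic provisioning engine might apply. The metric names and threshold values are illustrative assumptions, not taken from any particular product:

```python
from dataclasses import dataclass

@dataclass
class PoolMetrics:
    """Hypothetical snapshot of an application pool's health."""
    active_connections: int
    avg_response_ms: float
    instance_count: int

def provisioning_decision(m: PoolMetrics,
                          max_conns_per_instance: int = 500,
                          max_response_ms: float = 250.0) -> str:
    """Return 'scale_out', 'scale_in', or 'hold' based on workload.

    Scale out when per-instance load or response time exceeds a
    threshold; scale in when the pool is clearly underutilized.
    """
    per_instance = m.active_connections / max(m.instance_count, 1)
    if per_instance > max_conns_per_instance or m.avg_response_ms > max_response_ms:
        return "scale_out"
    if m.instance_count > 1 and per_instance < max_conns_per_instance * 0.3:
        return "scale_in"
    return "hold"
```

In a real deployment these metrics would be fed by the application delivery tier in real time, and the decision would trigger provisioning actions rather than just return a label.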
Distributed resource management uses a specific calculation model to determine whether a virtual resource cluster is balanced. This functionality serves exactly what the virtualized data center needs, as workload is shared evenly across the hosts in the cluster. If one VM requires more resources than the average of the other hosts, the system restores balance by redistributing the load across the cluster. Inevitably, application administrators need both application performance data and the metrics used to calculate underutilization or overutilization of resources in order to tune the environment for optimal results. Unless the administrator understands which metrics are used, how each service uses them, and what actions to take, scaling out the management and distribution of all the different sets of application resources becomes a struggle.
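One simplified sketch of such a calculation model, assuming each host's load is expressed as a utilization ratio, is to treat the standard deviation of those ratios as the cluster's imbalance score. The 5% balance threshold below is purely illustrative:

```python
from statistics import pstdev

def cluster_imbalance(host_loads: list[float]) -> float:
    """Standard deviation of per-host load ratios.

    0.0 means every host carries an identical share of the workload;
    larger values mean the cluster is increasingly skewed.
    """
    return pstdev(host_loads)

def is_balanced(host_loads: list[float], threshold: float = 0.05) -> bool:
    """True when the imbalance score is within the configured tolerance."""
    return cluster_imbalance(host_loads) <= threshold
```

When `is_balanced` returns `False`, a resource manager would typically evaluate candidate VM migrations and pick the one that reduces the imbalance score the most.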
Managing data center resources and utilizing them effectively while preserving application performance in a virtualized environment has been both the core objective of, and an ongoing challenge for, any virtual infrastructure management software. A VM resource management system alone may not feasibly scale to the number and diversity of hosts and VMs supported by today's modern cloud service providers. What happens when there are interdependencies among the VM hosts, the applications, the network, and the management systems, but no tightly integrated API through which this heterogeneous virtual environment can communicate or share messages? Other challenges include communication constraints between applications and underlying systems, lack of integration between the network and virtual machine resources, and limited visibility into user demands and application behaviors. It's like a blind date: nobody knows what to expect, but everyone is eager to make the best possible impression.
What we have to consider is an infrastructure component that is tightly integrated to enable the automation, migration, and scale of applications while increasing visibility across the compute, network, and application delivery tiers. It needs to combine the application management intelligence of the application delivery network tier with a scalable, business-level policy engine to automate application resource provisioning and the management of infrastructure resources. To characterize this further, the infrastructure component acts as a broker between the application delivery network and the underlying application resources, simplifying the on-demand provisioning of application and network resources within a virtualized data center. This application resource broker ensures optimal application performance by dynamically adding and removing application resources as demand requires. The broker works in tandem with the application delivery functions, responding to real-time changes in traffic demand, application response time, traffic load, and infrastructure capacity across both the compute and network infrastructure. As demand reaches the configured threshold for an application, the system initiates provisioning actions to ensure that the necessary and appropriate application resources are available to meet the defined Service Level Agreements (SLAs). Together, this holistic approach will increase data center availability and drive service innovation through automated, self-service provisioning models that quickly adapt to changing conditions based on infrastructure performance, application need, and user demand.
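As a rough illustration, the broker's SLA check can be reduced to comparing each monitored signal against its configured threshold. The signal names below are hypothetical, standing in for the real-time measurements described above:

```python
def breach_sla(signals: dict[str, float], sla: dict[str, float]) -> bool:
    """True if any monitored signal exceeds its configured SLA limit.

    `signals` holds current measurements (e.g. response time, load);
    `sla` maps the same signal names to their configured thresholds.
    A missing signal is treated as 0, i.e. not breaching.
    """
    return any(signals.get(name, 0.0) > limit for name, limit in sla.items())
```

In practice a breach would not just return `True`; it would kick off the provisioning workflow so that capacity is added before users feel the degradation.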
There are many core cloud use cases where this functionality plays a significant role, especially enabling cloud bursting for a hybrid cloud service, or enabling business continuity across globally distributed data centers by automating application resource mobility in the event of a disaster. In the business continuity use case, the broker can seamlessly redirect both new and active users when VMs migrate between data centers, thus avoiding the risk of a single data center failure. Because the broker is tightly integrated with VM resource management, it can automatically detect VM movement across sites and ensure an undisrupted end-user experience by redirecting client sessions to the right VM cluster in a fully transparent manner.
To accommodate the changing cloud environment and varied management requirements, the broker also needs to integrate seamlessly with custom and third-party virtual management suites and open orchestration frameworks through a combination of northbound APIs (NBAPI) and standards-based application messaging protocols such as the Advanced Message Queuing Protocol (AMQP).
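For illustration, a provisioning event destined for such a message bus might be serialized like this. The field names are invented for the sketch and do not reflect any published schema:

```python
import json
import uuid
from datetime import datetime, timezone

def make_provision_message(app: str, action: str, count: int) -> str:
    """Serialize a hypothetical provisioning event for a message bus.

    An orchestration framework consuming the bus (e.g. over AMQP)
    would parse this payload and act on it. Field names are illustrative.
    """
    event = {
        "id": str(uuid.uuid4()),                              # correlation id
        "timestamp": datetime.now(timezone.utc).isoformat(),  # event time
        "application": app,
        "action": action,                                     # "provision" or "deprovision"
        "instance_count": count,
    }
    return json.dumps(event)
```

Keeping the payload as plain, versionable JSON is what lets custom and third-party tools participate without sharing code with the broker itself.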
At a minimum, the application resource broker needs these five critical elements to fully support your dynamic, service-driven data center and handle the increased complexity of cloud-based infrastructure:
Simplified administration with automated device discovery allows quick detection and registration of newly added network devices or network resources so they can participate in elastic application provisioning.
Automated, on-demand provisioning uses a policy-based decision engine to automatically provision additional application VM instances and application delivery resources to service user demand, and to de-provision them when demand subsides.
Compound rules and custom actions enable the creation of customized policies in the provisioning process and provide the ultimate flexibility in handling unique infrastructure management needs.
Real-time application-centric monitoring automatically collects and stores historical performance metrics for application workloads, participating VMs, and network devices to better understand application health and aid in future capacity planning needs and billing initiatives.
A REST-based Application Programming Interface (API) enables automated management of the application resource broker's functionality in a RESTful manner from custom and third-party management or orchestration tools.
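To illustrate that last element, here is a sketch of how an orchestration tool might construct (but not send) a RESTful call against such a broker. The endpoint host, resource path, and payload shape are all hypothetical:

```python
import json
from urllib import request

BASE = "https://broker.example.com/api/v1"  # hypothetical broker endpoint

def build_scale_request(app: str, instances: int) -> request.Request:
    """Construct, without sending, a REST call to set an application's
    desired instance count. The path and body are illustrative only."""
    body = json.dumps({"instances": instances}).encode()
    return request.Request(
        url=f"{BASE}/applications/{app}/capacity",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
```

An orchestration tool would dispatch this request with `urllib.request.urlopen` (plus authentication), letting the same provisioning actions described above be driven programmatically instead of from a console.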
Please post a comment about the challenges you face in achieving a self-service, on-demand deployment model in the data center, and the significance of having an infrastructure component or application broker to aid your objectives.
For the next topic, I will discuss cloud bursting as the enabler of hybrid cloud services, along with the components that help achieve ideal resource orchestration and facilitate extensible programmability and customization.