vADC Docs

AutoScaling Docker applications with SteelApp

by brian.gautreau on 10-21-2014 06:34 PM

AutoScaling in SteelApp

AutoScaling enables a SteelApp Traffic Manager to scale a pool of back-end nodes up or down based on application response time. An obvious use case is a website whose traffic varies over the course of a day: as server load increases, application response time rises, so the number of web or application servers is increased to handle the additional load; conversely, when the load drops, servers are removed because they are no longer needed. This is especially useful in environments where customers pay for each of their compute resources (i.e. chargeback). The AutoScaling functionality is enabled through a Cloud API in SteelApp, and this previous Splash post covered some of the basics of creating a custom Cloud API.

Docker

Docker is a framework for building, packaging, and deploying applications. It works closely with Linux Containers and stores everything a container needs in a reusable form. Docker and Linux Containers have many use cases, but returning to the web application already mentioned: a Docker container would have everything the web application needs built in, such as Apache libraries and binaries, HTML files, and database connectors. When the Docker application is deployed, its details, such as memory, image file location, and networking, are supplied to it and reported back by Docker. This lets SteelApp query Docker for the information needed to build a Pool.
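As a sketch of that hook-back step, the snippet below pulls the fields a load-balancer pool would care about out of a container inspect result. The `sample` dict is a trimmed, hypothetical version of what Docker's `GET /containers/<id>/json` call returns; real responses carry many more fields.

```python
def pool_node_from_inspect(info):
    """Extract the details a Pool needs from a Docker container inspect result."""
    return {
        "name": info["Name"].lstrip("/"),            # Docker prefixes names with "/"
        "ip": info["NetworkSettings"]["IPAddress"],  # container's internal IP address
        "created": info["Created"],                  # creation timestamp
    }

# Trimmed, hypothetical inspect response for illustration only.
sample = {
    "Name": "/web-app-01",
    "Created": "2014-10-21T18:34:00Z",
    "NetworkSettings": {"IPAddress": "172.17.0.5"},
}

print(pool_node_from_inspect(sample))
```

In the attached script, the same lookup happens against a live Docker host rather than a canned dict.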



Gluing everything together

There are a few things that need to be enabled and configured to let SteelApp and Docker play well together:

  1. Docker must be listening for REST API calls.
  2. A Docker image must be available that is deployable and networkable.
  3. A Cloud API plugin must be installed in SteelApp.

When the two APIs are put together, they form the framework for building the scalable application pool. The scale-up flow starts when SteelApp needs a node for a Pool: SteelApp tells Docker to create a new container, Docker creates it, SteelApp tells Docker to start it, and once the container is running, SteelApp retrieves its IP address from Docker and adds it to the Pool. When SteelApp decides that it no longer needs a node, it asks Docker about the running containers, finds the oldest one, and tells Docker to destroy it. There can be more or fewer steps in the process, such as a “holding period” before a container is destroyed, but it is otherwise fairly straightforward.
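The scale-down decision above can be sketched in a few lines. Here containers are represented as dicts shaped like the entries Docker's `GET /containers/json` call returns, each carrying a Unix `Created` timestamp; the IDs and timestamps are made up for illustration.

```python
def oldest_container(containers):
    """Return the container that has been running longest (the scale-down candidate)."""
    return min(containers, key=lambda c: c["Created"])

# Hypothetical container listing; "Created" is a Unix timestamp.
containers = [
    {"Id": "aaa", "Created": 1413916440},
    {"Id": "bbb", "Created": 1413920040},
    {"Id": "ccc", "Created": 1413912840},  # earliest-created, so first to go
]

print(oldest_container(containers)["Id"])  # → ccc
```

Destroying the oldest container first is one simple policy; a "holding period" could be layered on top by filtering out containers younger than some threshold before taking the minimum.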

When it is boiled down, the SteelApp Traffic Manager’s Cloud API only has a few tasks to execute: create instances, destroy instances, and get the status of instances. Each task involves a few additional steps, but that is the core of what it needs to do, and Docker's API provides a way to access each of these functions.

These few tasks translate to some specific Docker API calls: listing all container statuses, fetching an individual container's details, starting a container, stopping a container, and destroying a container.
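Expressed as HTTP requests against the Docker Remote API, those calls look roughly like the following. `DOCKER_HOST` is a placeholder address; the paths match the Remote API of the era, though exact version prefixes vary by Docker release.

```python
DOCKER_HOST = "http://docker-host:2375"  # hypothetical Docker daemon address

def list_containers():
    """All container statuses, including stopped ones."""
    return "GET %s/containers/json?all=1" % DOCKER_HOST

def inspect_container(cid):
    """Individual container details (IP address, image, timestamps, ...)."""
    return "GET %s/containers/%s/json" % (DOCKER_HOST, cid)

def start_container(cid):
    return "POST %s/containers/%s/start" % (DOCKER_HOST, cid)

def stop_container(cid):
    return "POST %s/containers/%s/stop" % (DOCKER_HOST, cid)

def remove_container(cid):
    """Destroy a container that has been stopped."""
    return "DELETE %s/containers/%s" % (DOCKER_HOST, cid)

print(inspect_container("abc123"))
```

The attached script issues these requests against a live Docker host; the functions here only build the method-and-URL strings to show the mapping.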

The attached Python script provides some basic AutoScaling functionality and only needs the appropriate Docker host set (a variable on line 239) to get started. The image ID and name prefix can be specified during Pool setup. Additional parameters can be added using a separate options file; that is not covered in this post, but can be understood from the previous Splash article.

Additional Reading

Docker API Docs

SteelApp Traffic Manager Docs

SteelApp for Application Delivery Control | Riverbed

Feature Brief: Stingray's Autoscaling capability