
Using Stingray Traffic Manager in the Dimension Data (OpSource) Cloud

by markbod on 05-10-2013 09:43 AM

Stingray Traffic Manager is a pure software ADC, and as such has the ultimate flexibility of being able to migrate effortlessly between physical, virtual and cloud environments. Recently I have been taking a look at how easy it is to install and run Stingray within the Dimension Data (OpSource) cloud platform.

Clustering

I'm running an HA pair of Stingrays (version 9.1) in OpSource without any problems. However, I needed to switch the cluster communication method from multicast to unicast. Creating a new HA cluster should work fine for version 9.2 and beyond, because the default method is now unicast.
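
If you want to script that change rather than use the Admin UI, it can be done against the Stingray REST API (enabled on port 9070 by default from version 9.1). The sketch below is illustrative only: the resource path and the "fault_tolerance" / "heartbeat_method" names are assumptions based on later Stingray versions, and the address and credentials are placeholders, so check the REST API guide for your release before relying on it.

```python
# Minimal sketch: switch cluster heartbeats from multicast to unicast via the
# Stingray REST API. Resource path and key names are assumptions; verify them
# against the REST API documentation for your Stingray version.
import requests

STM_ADMIN = "https://stingray-1.example.com:9070"   # hypothetical admin address
AUTH = ("admin", "password")                        # hypothetical credentials

url = STM_ADMIN + "/api/tm/1.0/config/active/global_settings"

# Fetch the current global settings, switch the heartbeat method to unicast,
# then write the settings back.
settings = requests.get(url, auth=AUTH, verify=False).json()
settings["properties"]["fault_tolerance"]["heartbeat_method"] = "unicast"
requests.put(url, json=settings, auth=AUTH, verify=False)
```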

Traffic IP Groups

Creating a Traffic IP group is simple enough and works as expected. Failover works well too, so creating always-available HA services in the OpSource Cloud is straightforward.
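
The same Traffic IP Group can also be created over the REST API if you prefer to automate the step. Treat this as a sketch only: the traffic_ip_groups path and the basic / ipaddresses / machines property names are assumptions drawn from later Stingray REST API versions, and the group name, IP address and hostnames are placeholders.

```python
# Hedged sketch: create a Traffic IP Group through the Stingray REST API.
# Path and property names are assumptions; verify against your version's docs.
import requests

STM_ADMIN = "https://stingray-1.example.com:9070"   # hypothetical admin address
AUTH = ("admin", "password")

tip_group = {
    "properties": {
        "basic": {
            "enabled": True,
            "ipaddresses": ["10.162.10.50"],             # the Traffic IP(s) to raise
            "machines": ["stingray-1.example.com",       # cluster members that can
                         "stingray-2.example.com"],      # host the Traffic IP
        }
    }
}

requests.put(
    STM_ADMIN + "/api/tm/1.0/config/active/traffic_ip_groups/opsource-tip",
    json=tip_group, auth=AUTH, verify=False,
)
```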

stm-tip.png

Once you have created your TIP Group, you will need to set up an additional NAT rule in the Cloud settings.

UI-NAT.png

I tested failing over a TIP within the cluster while connected through the public NAT address, and the failover completed without any noticeable interruption in service. That's a big tick in the HA box from me.
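
If you want to repeat that kind of test yourself, a simple probe that polls the public NAT address once a second is enough to show whether any requests are dropped while the Traffic IP moves between the two Stingrays. The address below is a placeholder.

```python
# Simple failover probe: poll the public NAT address and report any gaps in
# service while a Traffic IP is failed over between cluster members.
import time
import urllib.request

PUBLIC_NAT_ADDRESS = "http://203.0.113.10/"   # placeholder public address

for _ in range(300):                          # probe for about five minutes
    start = time.time()
    try:
        urllib.request.urlopen(PUBLIC_NAT_ADDRESS, timeout=2).read()
        print("%.3fs OK" % (time.time() - start))
    except Exception as exc:
        print("%.3fs FAILED: %s" % (time.time() - start, exc))
    time.sleep(1)
```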

Auto-Scaling

I created an auto-scaling driver for OpSource and ran some tests using a simple Magento e-commerce demo site. The driver and installation instructions can be found here: Dimension Data (OpSource) Auto-Scaling Driver

I created a Traffic IP Group, virtual server, and auto-scaled pool for Magento. The auto-scaling configuration had its minimum nodes set to 1, so there was always at least one node available in the pool.

stm-conf-summary-1.png

To test the driver I generated some load and waited for the auto-scaler to kick in and start provisioning a new node for me. My hysteresis value was 20 seconds, so the high load had to be sustained for 20 seconds before the auto-scaler would take any action; the hysteresis period stops brief spikes from causing unnecessary scaling actions.
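
For reference, the relevant pool auto-scaling settings can also be applied over the REST API. This is a hedged sketch only: the auto_scaling section and its key names (enabled, external, min_nodes, max_nodes, hysteresis) are assumptions based on later Stingray versions, and the pool name and values simply mirror the test described here.

```python
# Hedged sketch: apply auto-scaling settings to the Magento pool via the
# Stingray REST API. Section and key names are assumptions; check your docs.
import requests

STM_ADMIN = "https://stingray-1.example.com:9070"   # hypothetical admin address
AUTH = ("admin", "password")

pool = {
    "properties": {
        "auto_scaling": {
            "enabled": True,
            "external": True,     # use an external driver (the OpSource one)
            "min_nodes": 1,       # always keep at least one Magento node
            "max_nodes": 4,
            "hysteresis": 20,     # load must be sustained for 20s before acting
        }
    }
}

requests.put(
    STM_ADMIN + "/api/tm/1.0/config/active/pools/magento",
    json=pool, auth=AUTH, verify=False,
)
```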

stm-grow-log.png

Soon a new node provisioning action was initiated, and OpSource began deploying my node.

UI-grow.png

Once the node had been cloned and booted, the Auto-Scaler updated the pool and Stingray was load balancing across an additional node.

stm-conf-summary-2.png

I left the load generator running a little too long and ended up with three nodes in my pool. That's not a problem, however, because Stingray soon scaled back down to one node once the load stopped.

The OpSource driver is a little unusual in that I have given it an additional "destroy" flag, which determines what the driver should do on a scale-down action.

I would recommend setting the destroy flag in the auto-scaler configuration to "false". With that setting, rather than destroying the nodes, the driver simply powers them off. This makes for much quicker scale-up actions for nodes which have been used previously.
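
To illustrate what the flag does, here is a rough sketch of how the scale-down path of such a driver might branch on it. This is not the actual driver code (use the download linked above); the opsource_* helpers are hypothetical stand-ins for real Dimension Data (OpSource) cloud API calls.

```python
# Illustrative only: how an external auto-scaling driver's scale-down path
# might honour a "destroy" flag. The helpers are hypothetical stubs, not the
# real OpSource API.
def opsource_power_off(node_id):
    """Hypothetical stand-in for the OpSource 'power off server' API call."""
    print("powering off server %s" % node_id)

def opsource_delete_server(node_id):
    """Hypothetical stand-in for the OpSource 'delete server' API call."""
    print("deleting server %s" % node_id)

def scale_down(node_id, destroy=False):
    if destroy:
        # Permanently remove the server; the next scale-up pays the full
        # clone-and-boot cost again (several minutes).
        opsource_delete_server(node_id)
    else:
        # Recommended: just power the server off. A later scale-up only needs
        # a power-on, so new capacity arrives in seconds rather than minutes.
        opsource_power_off(node_id)
```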

UI-shrink-poweroff.png

The next time I needed to scale up the pool, the provisioning time was a matter of seconds (for a simple power-on), rather than the several minutes it can take for a full clone.

stm-grow-log-poweron.png

MSM and Global Load Balancing

I haven't played with either MSM or GLB in the OpSource cloud yet, but I see no reason why there should be any difficulty in doing so. Running a globally load-balanced service in multiple OpSource Cloud regions should be relatively simple. If you make use of Stingray's Geo Load Balancing feature, then you can ensure your clients are always serviced by their closest OpSource Cloud location.