Video: Stingray Traffic Manager and vFabric Application Director

by riverbed on ‎12-01-2012 05:21 PM - edited on ‎03-20-2017 11:03 AM by Community Manager (1,831 Views)

 

Vinay Reddy demonstrates Riverbed's Stingray Traffic Manager virtual application delivery controller in a VMware vFabric Application Director environment.

Video: Avoiding the Impact of Public Cloud Outages

by fmemon on ‎09-10-2013 05:06 PM - edited on ‎03-20-2017 10:59 AM by Community Manager (1,475 Views)

 

 

Vinay Reddy and Nick Amato discuss how you can avoid public cloud outages using Global Load Balancing (GLB) with the Stingray Traffic Manager.

Video: How to deploy Stingray in AWS VPC

by fmemon on ‎05-23-2013 12:36 PM - edited on ‎03-20-2017 10:56 AM by Community Manager (1,977 Views)

 

 

This video provides step-by-step instructions on how to properly deploy the Stingray Traffic Manager on an Amazon Web Services (AWS) Virtual Private Cloud (VPC), along with best practices.

Video: Global Load Balancing in Amazon AWS cloud with Stingray

by fmemon on ‎09-17-2013 03:55 PM - edited on ‎03-20-2017 10:53 AM by Community Manager (1,857 Views)

 

Vinay Reddy discusses setting up Global Load Balancing (GLB) with the Stingray Traffic Manager running in Amazon AWS.

Video: Deploy Stingray in Amazon AWS Cloud

by riverbed on ‎12-01-2012 05:12 PM - edited on ‎03-20-2017 10:38 AM by Community Manager (2,038 Views)

 

 

In this hands-on technical video, Vinay Reddy, Senior Technical Marketing Engineer at Riverbed Technology, takes you through a step-by-step demo of Stingray Traffic Manager in Amazon AWS Cloud, including:

  • Exploring AWS Marketplace
  • Launching Stingray from the Amazon console
  • Using the Amazon console to choose instance types and deployment configuration
  • Opening the Stingray admin console and preparing to configure nodes and virtual servers

Video: Introduction to Stingray Load Balancing

by riverbed on ‎12-01-2012 05:14 PM - edited on ‎03-20-2017 10:37 AM by Community Manager (2,357 Views)

 

 

This video gives a general overview of load balancing with Stingray, as well as recommendations on which load balancing algorithms to use depending on the situation.

Video: SSL Decryption with Stingray

by fmemon on ‎05-23-2013 12:33 PM - edited on ‎03-20-2017 10:13 AM by Community Manager (1,824 Views)

 

 

This video discusses what SSL Decryption with Stingray is, why to use it, and how to configure it.

 

Video: Stingray, OPNET, and FlyScript

by fmemon on ‎06-11-2013 06:14 PM - edited on ‎03-20-2017 10:06 AM by Community Manager (1,374 Views)

 

 

This video provides an overview on how you can use FlyScript and Stingray to automatically add the JavaScript snippet required by Riverbed OPNET BrowserMetrix to web pages.

Video: Introduction to TrafficScript

by fmemon on ‎06-27-2013 12:39 PM - edited on ‎03-20-2017 09:45 AM by Community Manager (1,621 Views)

 

 

This video provides a basic introduction to TrafficScript.
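As a flavour of the language (an illustrative sketch only; the pool name "images-pool" is hypothetical), a simple TrafficScript request rule can inspect the request path and choose a pool:

# Send requests for /images/ to a dedicated pool;
# all other requests fall through to the default pool.
$path = http.getPath();
if( string.startsWith( $path, "/images/" ) ) {
   pool.use( "images-pool" );
}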

Virtual Traffic Manager and Microsoft Lync 2013 Deployment Guide

by fmemon on ‎07-26-2013 02:59 PM - edited on ‎03-13-2017 05:03 PM by aannavarapu (3,416 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft Lync 2013.

Virtual Traffic Manager and Microsoft Lync 2010 Deployment Guide

by riverbed on ‎12-02-2012 02:49 PM - edited on ‎03-13-2017 05:01 PM by aannavarapu (3,139 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft Lync 2010.

Virtual Traffic Manager and Microsoft SharePoint 2013 Deployment Guide

by fmemon on ‎07-26-2013 03:07 PM - edited on ‎02-27-2017 03:53 PM by aannavarapu (2,600 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft SharePoint 2013.

 

Virtual Traffic Manager and Microsoft SharePoint 2010 Deployment Guide

by riverbed on ‎12-02-2012 01:51 PM - edited on ‎02-27-2017 03:52 PM by aannavarapu (1,566 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft SharePoint 2010.

 

Virtual Traffic Manager and Microsoft Exchange 2016 Deployment Guide

by aannavarapu ‎12-21-2016 09:33 AM - edited ‎02-27-2017 03:51 PM (501 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft Exchange 2016.

 

Virtual Traffic Manager and Microsoft Exchange 2013 Deployment Guide

by vreddy on ‎06-17-2013 04:04 PM - edited on ‎02-27-2017 03:50 PM by aannavarapu (4,600 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft Exchange 2013.

Virtual Traffic Manager and Microsoft Exchange 2010 Deployment Guide

by aannavarapu ‎12-16-2016 05:45 AM - edited ‎02-27-2017 03:48 PM (400 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft Exchange 2010.

A guide to Policy Based Routing with Stingray (Linux and VA)

by markbod on ‎03-22-2013 04:55 AM - edited on ‎02-08-2017 06:16 AM by Yousaf.Shah (4,101 Views)

What is Policy Based Routing?

 

Policy Based Routing (PBR) is simply the ability to choose a different routing policy based on various criteria, such as the last hop used, or the local IP address of the connection. As you may have guessed, PBR is only necessary where your Stingray Traffic Manager is multi-homed (i.e. has multiple default routes) and asymmetric routing is either not possible or not desired.

 

There are really only two types of multi-homing we commonly deal with in Stingray deployments. I am going to refer to them as "Multiple ISP" and "Multiple Link".

 

Multiple ISP

 

This is the simpler scenario, and it is seen when a Stingray is deployed in an infrastructure with two or more independent ISPs. The ISPs all provide different network ranges, and Stingray Traffic IP Groups are the end points for the addresses in those ranges. Stingray must choose the default gateway based on the local Traffic IP address of the connection.

 

Multiple Link

 

This is slightly more complicated because traffic destined for Stingray's Traffic IP can come in via a number of different gateways. Stingray must ensure that return traffic is sent out through the same gateway it arrived from. This is also known as "Auto-Last-Hop", and is achieved by keeping track of the Layer 2 MAC address associated with the connection.

 

 

Setting up Policy Based Routing on Stingray

 

This guide will show you how to set up a process within Stingray Traffic Manager (STM) such that a PBR policy is applied during software start up. The advantage of configuring Stingray this way is that there are no changes to the underlying OS configuration, and as such it is fully compatible with the Virtual Appliance as well as the software (Linux) version. The steps to set up the PBR are as follows...

 

  • Upload the dynamic-pbr.sh script to Catalogs -> Extra Files -> Actions Programs
  • Configure gateways.conf for your environment and upload it to Catalogs -> Extra Files -> Miscellaneous
  • Create a new action called "Dynamic PBR" in System -> Alerting -> Actions
      1. This should be a program action, and should execute the dynamic-pbr.sh script
  • Create a new event called "Dynamic PBR" in System -> Alerting -> Events
      1. You want to hook the software started event here

 

Step 1: Upload the dynamic-pbr.sh script

 

Navigate to Catalogs -> Extra Files -> Actions Programs and upload the dynamic-pbr.sh script found attached to this article.

 

actionProgs.png

 

Step 2: Configure the gateways.conf for your environment

 

When the dynamic-pbr.sh script is executed, it will attempt to load and process a file called gateways.conf from the miscellaneous files section of the catalog. You will need to create that configuration file.

 

config.jpeg.jpg

The configuration is a simple text file with a number of fields separated by white space. The first column should be either MAC (to indicate a “Multiple Link” config) or SRC (to indicate “Multiple ISP”).

 

If you are using the MAC method, then you only need to supply the IP address of each of your gateways and their Layer 2 MAC address.


Each MAC line should read “MAC <Gateway IP> <Gateway MAC>”.

 

If you are using the SRC method, then you should include the local source IP (this can be an individual Traffic IP or a subnet) and the Gateway IP. You should also include information on the local network if you need the Stingray to be able to access local machines other than the gateway, using two additional, optional columns: local subnet and device.

 

Each SRC line should read: “SRC <Local IP> <Gateway IP> <Local subnet> <local device>”.
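To make the SRC format concrete, here is a hypothetical shell sketch (this is NOT the dynamic-pbr.sh attached to this article, which also handles MAC lines and error cases) showing how SRC lines could map onto the underlying ip rule/ip route commands. It only prints the commands it would run:

```shell
# Hypothetical sketch: translate SRC lines from a gateways.conf-style
# file (read on stdin) into the "ip" commands that would implement the
# policy. Each SRC line gets its own routing table, plus a rule that
# selects that table by source address.
apply_pbr() {
  table=10                       # first custom table number (arbitrary choice)
  while read -r kind local_ip gw subnet dev; do
    case "$kind" in
      SRC)
        echo "ip rule add from $local_ip lookup $table"
        echo "ip route add default via $gw table $table"
        # Optional columns: local subnet and device, for reaching
        # local machines other than the gateway.
        [ -n "$subnet" ] && echo "ip route add $subnet dev $dev table $table"
        table=$((table + 1))
        ;;
    esac
  done
}
```

For example, feeding it the line "SRC 192.0.2.10 192.0.2.1" prints a source rule pointing at table 10 and a default route via 192.0.2.1 in that table.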

 

 

Step 3: Upload the gateways.conf

 

Once you have configured the gateways.conf for your environment, upload it to Catalogs -> Extra Files -> Miscellaneous.


miscFiles.png

 

Step 4: Create Dynamic PBR Action

 

Now that we have the script and configuration file uploaded to Stingray, the next step is to configure the alerting system to execute the script at software start-up. First we must create a new program action under System -> Alerting -> Manage Actions.

 

addAction.png

Create a new action called “Dynamic PBR” of type Program. In the edit action screen, you should then be able to select dynamic-pbr.sh from the drop down list.

ActionSettings.png

 

 

Step 5: Create Dynamic PBR Event

 

Now that we have an action, we need to create an event which hooks the “software is running” event. Navigate to System -> Alerting -> Manage Event Types and create a new Event Type called “Dynamic PBR”.

 

In the event list, select the software running event under General > Information Messages.

 

eventSettings.png

 

Step 6: Link the event to the action

 

Navigate back to the System -> Alerting page and link our new “Dynamic PBR” event type to the “Dynamic PBR” action.

 

AlertMapping.png

 

Finished

 

Now every time the Stingray software is started, the configuration from the gateways.conf will be applied.

 

How do I check the policy?

 

If you want to check what policy has been applied to the OS, you can do so on the command line. Either open the console or SSH into the Stingray Traffic Manager machine. The policy is applied by setting up a rule and a matching routing table for each of the lines in the gateways.conf configuration file. You can check the routing policy by using the iproute2 utility.

 

To check the routing rules, run: “ip rule list”.

 

There are three default rules/tables in Linux: rule 0 looks up the “local” table, rule 32766 looks up “main”, and rule 32767 looks up “default”. The rules are executed in order. The local rule (0) is maintained by the kernel, so you shouldn’t touch it. The main table (rule 32766) and default table (rule 32767) go last. The main table holds the main routing table of your machine and is the one returned by “netstat -rn”. The default table is usually empty. All other rules in the list are custom, and you should see a rule entry for each of the lines in your gateway configuration file.
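For illustration only (the addresses, priorities and table numbers here are hypothetical), on a machine with two SRC entries in its gateways.conf the output of “ip rule list” might look something like this:

0:      from all lookup local
32764:  from 192.0.2.10 lookup 10
32765:  from 198.51.100.10 lookup 11
32766:  from all lookup main
32767:  from all lookup default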

 

So where are the routes? The rules are processed in order, and each lookup points to a table. You can have up to 255 tables in Linux. The “main” table is actually table 254. To see the routes in a table you would use the “ip route list” command. Executing “ip route list table main” and “ip route list table 254” should return the same routing information.

 

You will note that the rules added by Stingray are referenced by their number only, so to look at one of your tables you would use its number. For example: “ip route list table 10”.

ip-rule-list.jpeg.jpg

Enjoy!

 

Updates

  • 20150317: Modified script to parse configuration files which use windows format line endings.

Installing Brocade vADC

by PaulWallace on ‎08-11-2015 02:25 AM - edited on ‎01-23-2017 09:03 AM by tstace (1,900 Views)

Installing Brocade vADC

 

We have created dedicated installation and configuration guides for each type of deployment option, as part of the complete documentation set for Brocade Virtual Traffic Manager (Brocade vTM). There are a number of different deployment options, depending on whether you want to install vTM as a virtual appliance, in a cloud, on a bare-metal hardware appliance, or on a private server using a software installation. Choose the target platform here, and follow the link to the designated "Getting Started" guide:

 

Installing as a Virtual Appliance?

Looking to run a trial on Microsoft Hyper-V, VMware, Xen, KVM/QEMU, or Oracle VM? Look here.

 

Installing in a Cloud?

For information on installing in Google Cloud Platform, Amazon EC2, or Microsoft Azure, look here.

 

Installing on a Bare-Metal Hardware Appliance?

To deploy the Brocade vTM appliance image on an approved server hardware platform, look here.

For hardware compatibility information, see the Brocade vTM Hardware Compatibility List.

 

Installing as Pure Software?

If you are installing onto a private server, or inside a VM running Linux or Solaris, look here.

 

Note that these links are valid for the Brocade vTM 17.1 software release. The most recent set of user documentation can always be found on Brocade.com, in the Document Library on the Brocade vTM product page. This includes detailed configuration, REST API and TrafficScript programming guides.

 

(Copies of the 17.1 "Getting Started" guides are attached to this article)

 

Brocade Virtual Application Delivery Controller (vADC) Product Documentation

by on ‎02-21-2013 03:40 AM - edited on ‎01-23-2017 08:56 AM by tstace (12,700 Views)

 

Looking for Installation and User Guides for Brocade vADC?

 

User documentation is no longer included in the software download package for Brocade Virtual Traffic Manager (vTM) and Brocade Services Director (SD). To locate the latest user documentation, see the Application Delivery Controllers product page at Brocade.com.

 

User documentation on Brocade.com:

The user documentation can be found on each product page: for example, go to the product page for Brocade Virtual Traffic Manager and click on the "Resource Library" button on the right, just below the main banner. The resource library opens, and includes a section on "Administration and User Guides" where you will find a complete list of installation and user documentation, as well as reference documentation for features such as TrafficScript, REST APIs, and more. Only the most recent version of the documentation is available at Brocade.com.

 

Looking for Traffic Manager Installation Guides?

We also provide specific Installation Guides for Brocade vTM, tailored to whether you are installing as a virtual appliance, in the cloud, on a bare-metal hardware appliance, or just as pure software. Alternatively, if you are new to Brocade vTM and looking to download and install it for the first time, check out the Getting Started community article, which will lead you through the steps to download, install and discover how Brocade vTM can transform the way you run applications.

 

Quick links to Traffic Manager Documentation - 17.1

 

Brocade Virtual Traffic Manager Deployment Guides

by on ‎03-26-2013 03:50 AM - edited on ‎12-21-2016 09:35 AM by aannavarapu (8,899 Views)

Brocade Virtual Traffic Manager: Feature Summary

by PaulWallace on ‎11-18-2016 07:15 AM (800 Views)

We have made it easier to see which features are offered in each model of Brocade Virtual Traffic Manager: there are two feature groups, common to both the fixed-size licenses used with Brocade vTM and the capacity-based licensing scheme used with the Brocade Services Director:
 

Advanced Edition: Includes the most common load-balancing capabilities, such as SSL/TLS offload, session persistence, service level monitoring, the simple TrafficScript Rule Builder, and support for IPv6 and HTTP/2; it also includes capabilities such as Global Load Balancing, Route Health Injection, and customisation using Brocade’s powerful TrafficScript scripting language and Java Extensions.
 
Enterprise Edition: Includes premium L7 services such as Web Content Optimization (WCO), Web Application Firewall (WAF), and FIPS compliance.
 

 

Model: Brocade vTM Bandwidth Options

Throughput:       50M, 400M, 1G, 3G, 5G, 10G, 20G, 40G, 80G
SSL/TLS TPS:      Uncapped
Functionality:    Choose from Advanced or Enterprise Editions
Deployment model: Choose from Software, Virtual Appliance or Bare Metal image
License Style:    Choose from Perpetual or Subscription

Edition availability by product (the Developer Edition is limited to 1 Mbps):

Product                      Advanced Edition  Enterprise Edition  Developer Edition
Brocade vTM                  Y                 Y                   Y
Brocade Services Director    Y                 Y                   -

Feature availability by edition:

Feature                            Advanced  Enterprise  Developer
Load Balancing                     Y         Y           Y
HTTP/2 Support                     Y         Y           Y
Content Routing                    Y         Y           Y
Health Monitoring                  Y         Y           Y
Simple TrafficScript Rule Builder  Y         Y           Y
SSL/TLS Offload                    Y         Y           Y
HTTP Compression                   Y         Y           Y
Event and Action System            Y         Y           Y
Service Protection                 Y         Y           Y
Analytics                          Y         Y           Y
HTTP Caching                       Y         Y           Y
Autoscale                          Y         Y           Y
XML Parsing                        Y         Y           Y
Bandwidth Management               Y         Y           Y
Rate Shaping                       Y         Y           Y
Service Level Monitoring           Y         Y           Y
TrafficScript                      Y         Y           Y
Java Extensions                    Y         Y           Y
Multi Site Manager                 Y         Y           Y
Global Load Balancing              Y         Y           Y
Route Health Injection             Y         Y           Y
Web Accelerator Express            -         Y           Y
Web Accelerator                    -         Y           Y
Web Application Firewall           -         Y           Y
Kerberos Constrained Delegation    -         Y           Y
FIPS                               -         Y           Y

 

 

Feature Summary

Each feature below is marked with its availability in the Advanced (Adv) and Enterprise (Ent) Editions.

Load Balancing (Adv: Y, Ent: Y)
Traffic Manager can use a wide variety of algorithms and techniques to balance load based on different criteria (e.g. it can send more requests to higher-spec machines). Servers can be drained for easy maintenance and uninterrupted service. The client never has to see a server fail.

HTTP/2 Support (Adv: Y, Ent: Y)
Faster web pages with support for HTTP/2 connections. HTTP/2 is a significant enhancement to the HTTP/1.1 standard: Traffic Manager can automatically negotiate an HTTP/2 connection with the client web browser, which may improve web page load time with techniques such as connection sharing, page request multiplexing and header compression. For even more advanced HTML and web content optimization, the optional Brocade Web Accelerator add-on module is available to create custom optimization profiles for individual applications.

Content Routing (Adv: Y, Ent: Y)
Use Traffic Manager to apply business policies to each request for custom routing decisions, applying HTTP pool selection routing based on L7 attributes such as URL and hostname. Content inspection allows rapid web changes such as the insertion of marketing tags, branding changes, and dynamic watermarking, procedures that may be difficult to achieve by modifying the application itself.

Session Persistence (Adv: Y, Ent: Y)
Ensures all requests from a client go to the same server, enabling application data to persist throughout a session without using cookies (e.g. an e-commerce shopping basket).

Health Monitoring (Adv: Y, Ent: Y)
Monitor the health and correct operation of servers with built-in and custom checks. Detect failures of servers and errors in applications, and route traffic away from these servers so that the performance of the application is not compromised and the user experience is maintained.

Simple TrafficScript Rule Builder (Adv: Y, Ent: Y)
Define rules to control applications with the TrafficScript Rule Builder, using an easy-to-use graphical user interface to create traffic rules and policies. Click and choose from drop-down menus to create simple conditions and actions.

SSL/TLS Offload (Adv: Y, Ent: Y)
Offloading SSL/TLS key exchanges and decryption to the Traffic Manager frees up the back-end servers to use their full resources for generating content and responding to user requests. Decryption on the Traffic Manager allows for deep packet inspection. Content can be re-encrypted for secure forwarding of requests to the back-end infrastructure.

HTTP Compression (Adv: Y, Ent: Y)
Traffic Manager can compress content returned to the client rather than have that workload undertaken by the back-end servers. Compression of content can result in bandwidth being used more efficiently, and offloading this workload from the back-end servers can enable them to serve requests faster.

Event Handling and Action System (Adv: Y, Ent: Y)
Configure appropriate responses for key infrastructure events, including email and SNMP alerts, syslog logging and custom user-supplied scripts.

Service Protection (Adv: Y, Ent: Y)
Traffic Manager can enforce an IP black/white list and limit the number of connections to a service. It can also enforce rules on HTTP content (e.g. enforce RFC compliance) and help protect against malicious attacks such as Denial of Service.

Real-Time Analytics (Adv: Y, Ent: Y)
Measures performance and load and gives a graphical representation of the results, which can identify bottlenecks and show where and when high loading occurs; this can be useful for identifying future upgrade needs.

HTTP Caching (Adv: Y, Ent: Y)
Traffic Manager can store copies of frequently-requested data on the Traffic Manager rather than the back-end servers, freeing them up to deliver newly requested content. This can reduce the need for additional servers as traffic grows and speed up the response to end user requests.

Autoscaling (Adv: Y, Ent: Y)
Ensure reliable application service delivery by automatically managing traffic changes in real time, distributing traffic among a pool of virtual servers. It can orchestrate the provisioning and rightsizing of applications, helping to migrate traffic across multiple virtual and cloud platforms.

Bandwidth Management (Adv: Y, Ent: Y)
You can limit the total bandwidth (kbits/sec) a set of connections can use, which can stop a popular site or application taking up so much bandwidth that other sites or applications become unavailable. This can enable service providers to enforce access limits based on criteria such as account type or location.

Rate Shaping (Adv: Y, Ent: Y)
Traffic Manager can restrict the number of requests (per minute or second) to a service, from either all or a set of clients. This can stop a small group of intensive users (including spiders) hogging a service, leading to a poor user experience for all users.

Service Level Monitoring (Adv: Y, Ent: Y)
Monitors the performance of a service or application and can issue an alert if it falls below a pre-determined level, such as going out of scope of an SLA.

TrafficScript (Adv: Y, Ent: Y)
TrafficScript is a sophisticated programming language integrated within the core of Traffic Manager that enables high-performance, highly-configurable control of traffic management policies. TrafficScript rules can control all aspects of how traffic is managed and can choose when and where to apply request rate shaping, bandwidth shaping, routing, compression, and caching to prioritize the most valuable users and deliver the best possible levels of service.

XML Parsing (Adv: Y, Ent: Y)
TrafficScript can also help parse complex XML data using XPath in order to make informed routing decisions based on embedded content. Traffic Manager also includes support for offloading and accelerating the translation between XML variants via XSL Transformations (XSLT).

Java Extensions (Adv: Y, Ent: Y)
Java Extensions can be used to re-use existing code libraries to implement business policies. You can write rules in any language that can target the JVM, including Java, Python, Ruby, and many others. You can use third-party libraries, and invoke business rules against specific transactions.

Multi-Site Capable (Adv: Y, Ent: Y)
Deploy services across multiple sites with location-specific configuration, and simplify the management of services from multiple datacenter locations.

Advanced Session Persistence (Adv: Y, Ent: Y)
Ensures all requests from a client go to the same server, enabling application data to persist throughout a session without using cookies (e.g. an e-commerce shopping basket). In addition to session persistence based on IP addressing, Advanced Persistence mechanisms can be leveraged via TrafficScript, including Named Node and Universal Persistence techniques.

Global Load Balancing (Adv: Y, Ent: Y)
Improve service availability by automatically failing over to an alternative datacenter or cloud deployment in the event of a catastrophic failure. Improve service performance with performance-sensitive load balancing and location-based traffic routing.

Route Health Injection (Adv: Y, Ent: Y)
Route Health Injection (RHI) helps to maintain service availability and low-latency networking by providing rapid service redirection to alternate service hosts.

Web Accelerator Express (Adv: -, Ent: Y)
Simple content optimization to accelerate the delivery of most web pages, requiring no configuration or tuning.

Web Accelerator (Adv: -, Ent: Y)
Advanced Web Content Optimization (WCO) technologies to accelerate page load times up to 4x for HTML applications, including Microsoft SharePoint, content management systems and cloud applications. WCO profiles can be customized for each application.

Web Application Firewall (Adv: -, Ent: Y)
A scalable Layer-7 Web Application Firewall (WAF) to apply business rules to your online traffic, inspect and block attacks such as SQL injection and cross-site scripting (XSS), and help achieve compliance with PCI-DSS, HIPAA, and other regulatory demands.

Kerberos Constrained Delegation (Adv: -, Ent: Y)
Support for Common Access Cards (CAC) to provide seamless access to services that use Kerberos for authentication.

FIPS (Adv: -, Ent: Y)
Embedded FIPS 140-2 Level 1 cryptographic module per FIPS 140-2 Implementation Guidance Section G.5 guidelines, to support deployments that require FIPS 140-2 Level 1 compliance.

 

Architectural Features

In addition, all Brocade vADC models have the following common architectural benefits:

Data Plane Acceleration
Data Plane Acceleration mode delivers high-performance L4 services, supporting linear scaling of CPS and throughput with additional CPU cores, unlike traditional kernel network stacks. More than one million L4 connection requests per second and up to 140 Gbps are achievable on supported platforms.

Scalability
Traffic Manager can scale horizontally and vertically very easily, across IT environments and different forms of infrastructure, ensuring that it can always scale up to match and support demand for an application or a service.

Clustering
Traffic Manager has unmatched scale and performance, and is able to scale up with the latest generation of multi-core CPUs, and scale out with N+M clustering for reliability and throughput.

RESTful Control API
Allows Traffic Manager to be configured and controlled by a third-party application, and simplifies administration of large/complex configurations. The Control API enables configuration changes to be automated (e.g. in response to an event).

 

 

Deployment Guide - Global Load Balancing with Parallel DNS

by aidan.clarke on ‎03-24-2013 06:48 PM - edited on ‎11-15-2016 01:48 AM by PaulWallace (7,900 Views)

This guide will walk you through deploying Global Server Load Balancing on Traffic Manager using the Global Load Balancing feature.  In this guide, we will be using the "company.com" domain.

 

DNS Primer and Concept of operations:

This document is designed to be used in conjunction with the Traffic Manager User Guide, available from Brocade.com.

 

Specifically, this guide assumes that the reader:

  • is familiar with load balancing concepts;
  • has configured local load balancing for the resources requiring Global Load Balancing on their existing Traffic Managers; and
  • has read the "Global Load Balancing" section of the Traffic Manager User Guide, in particular the "DNS Primer" and "About Global Server Load Balancing" sections.

 

Pre-requisite:

  •   You have a DNS sub-domain to use for GLB.  In this example we will be using "glb.company.com" - a sub domain of "company.com";
  •   You have access to create A records in the glb.company.com (or equivalent) domain; and
  •   You have access to create CNAME records in the company.com (or equivalent) domain.

 

Design:

Our goal in this exercise is to configure GLB to send users to their geographically closest DC, as pictured in the following diagram:

 

Design Goal
glb design2.png

We will be using an STM setup that looks like this to achieve this goal:

Detailed STM Design
glb STM design2.png

 

 

Stingray will present a DNS virtual server in each data centre.  This DNS virtual server will take DNS requests for resources in the "glb.company.com" domain from external DNS servers, forward the requests to an internal DNS server, and intelligently filter the records based on the GLB load balancing logic.

 

 

In this design, we will use the zone "glb.company.com".  The zone "glb.company.com" will have NS records set to the two Traffic IP addresses presented by vTM for DNS load balancing in each data centre (172.16.10.101 and 172.16.20.101).  This set up is done in the "company.com" domain zone setup.  You will need to set this up yourself, or get your DNS Administrator to do it.
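For reference, the matching delegation in the parent "company.com" zone might look something like this on an ISC BIND server (the record layout is illustrative; your DNS Administrator may structure it differently):

Sample delegation in the company.com zone

; Delegate glb.company.com to the vTM-hosted DNS virtual servers
glb       IN  NS     stm1.glb.company.com.
glb       IN  NS     stm2.glb.company.com.
; Glue records for the name servers inside the delegated zone
stm1.glb  IN  A      172.16.10.101
stm2.glb  IN  A      172.16.20.101
; Optional: alias the public name onto the GLB-managed record
www       IN  CNAME  www.glb.company.com.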

 

 

 

DNS Zone File Overview
glb DNS design2.png

 

On the DNS server that hosts the "glb.company.com" zone file, we will create two Address (A) records - one for each Web virtual server that the STMs are hosting in their respective data centres.

 

 

Step 0: DNS Zone file set up

Before we can set up GLB on Traffic Manager, we need to set up our DNS Zone files so that we can intelligently filter the results.

 

Create the GLB zone:

In our example, we will be using the zone "glb.company.com".  We will configure the "glb.company.com" zone to have two NameServer (NS) records.  Each NS record will be pointed at the Traffic IP address of the DNS Virtual Server as it is configured on vTM.  See the Design section above for details of the IP addresses used in this sample setup.

 

You will need an A record for each data centre resource you want Traffic Manager to GLB.  In this example, we will have two A records for the DNS host "www.glb.company.com".  On ISC BIND name servers, the zone file will look something like this:

Sample Zone File

 

 

;
; BIND data file for glb.company.com
;
 
$TTL    604800
@       IN      SOA     stm1.glb.company.com. info.glb.company.com. (
                             201303211322 ; Serial
                             7200         ; Refresh
                             120          ; Retry
                             2419200      ; Expire
                             604800       ; Default TTL
)

@ IN      NS      stm1.glb.company.com.
@ IN      NS      stm2.glb.company.com.
 
;
stm1 IN      A       172.16.10.101
stm2 IN      A       172.16.20.101
;
www IN      A 172.16.10.100
www IN      A 172.16.20.100

 

 

 

 

 

Pre-Deployment testing:

  - Using DNS tools such as DiG or nslookup (do not use ping as a DNS testing tool), make sure that you can query your "glb.company.com" zone and get both A records returned.  This means the DNS zone file is ready for your GLB logic.  In the following example, we are using the DiG tool on a Linux client to *directly* query the name servers that the STM is load balancing, to check that we are being served two A records for "www.glb.company.com".  We have added comments to the output below, marked with <--(i)--| :

Test Output from DiG
user@localhost$ dig @172.16.10.40 www.glb.company.com A

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.40 www.glb.company.com A
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19013
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com. IN A

;; ANSWER SECTION:
www.glb.company.com. 604800 IN A 172.16.20.100 <--(i)--| HERE ARE THE A RECORDS WE ARE TESTING
www.glb.company.com. 604800 IN A 172.16.10.100 <--(i)--|

;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm1.glb.company.com.
glb.company.com. 604800 IN NS stm2.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101

;; Query time: 0 msec
;; SERVER: 172.16.10.40#53(172.16.10.40)
;; WHEN: Wed Mar 20 16:39:52 2013
;; MSG SIZE  rcvd: 139
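
If you want to script this pre-deployment check rather than run dig by hand, the same direct query can be reproduced with a hand-built DNS packet.  The sketch below uses only the Python standard library; the header and question layout follow RFC 1035, and sending the packet straight to a specific name server (as `dig @172.16.10.40` does) is shown in the comment.  This is an illustrative helper, not part of any Stingray tooling:

```python
import struct

def build_dns_query(name, qtype=1, qid=0x1234):
    """Build a raw DNS query packet (qtype 1 = A record), per RFC 1035."""
    # Header: id, flags (recursion desired), 1 question, 0 other records
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed; the name ends with a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    # QTYPE (A = 1) and QCLASS (IN = 1)
    return header + qname + struct.pack(">HH", qtype, 1)

# Sending it straight to a name server, as `dig @172.16.10.40` does,
# bypasses the local resolver cache entirely:
#   import socket
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(build_dns_query("www.glb.company.com"), ("172.16.10.40", 53))

packet = build_dns_query("www.glb.company.com")
print(len(packet))  # header (12) + encoded name (21) + type/class (4)
```

Because the query goes directly to the authoritative server, the local resolver cache never sees it, which is exactly the property that makes dig suitable for this test.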

Step 1: GLB Locations

GLB uses locations to tell the STM where your resources reside.  First, create a GLB location for every data centre you want to provide GLB between.  In our example, we will use two locations, Data Centre 1 and Data Centre 2, named DataCentre-1 and DataCentre-2 respectively:

Creating GLB Locations
  1.   Navigate to "Catalogs > Locations > GLB Locations > Create new Location"
  2.   Create a GLB location called DataCentre-1
  3.   Select the appropriate Geographic Location from the options provided
  4.   Click Update Location
  5.   Repeat this process for "DataCentre-2" and any other locations you need to set up.
Google Chrome081.png
Google Chrome082.png

 

 

Step 2: Set up GLB service

First we create a GLB service so that STM knows how to distribute traffic using the GLB system:

Create GLB Service
  1. Navigate to "Catalogs > GLB Services > Create a new GLB service"
  2. Create your GLB Service.  In this example we will be creating a GLB service with the following settings; adjust the settings to match your environment:
    •   Service Name: GLB_glb.company.com
    •   Domains: *.glb.company.com
    •   Add Locations: Select "DataCentre-1" and "DataCentre-2"
GLB Service.png

 

Then we enable the GLB service:

 

Enable the GLB Service
  1. Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Basic Settings"
  2. Set "Enabled" to "Yes"
Enable GB Service.png

 

Next we tell the GLB service which resources are in which location:

 

Locations and Monitoring
  1. Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Locations and Monitoring"
  2. Add the IP addresses of the resources you will be doing GSLB between into the relevant location.  In my example I have allocated them as follows:
    • DataCentre-1: 172.16.10.100
    • DataCentre-2: 172.16.20.100
  3. Don't worry about the "Monitors" section just yet; we will come back to it.
Google Chrome084.png

 

 

Next we will configure the GLB load balancing mechanism:

Load Balancing Method
  1. Navigate to "GLB Services > GLB_glb.company.com > Load Balancing"

 

By default the load balancing algorithm will be set to "Adaptive" with a "Geo Effect" of 50%.  For this setup we will set the algorithm to "Round Robin" while we are testing.

 

Set GLB Load Balancing Algorithm
  1. Set the "load balancing algorithm" to "Round Robin"
GLB LB Method.png

 

The last step is to bind the GLB service "GLB_glb.company.com" to our DNS virtual server.

 

Binding GLB Service Profile
  1. Navigate to "Services > Virtual Servers > vs_GLB_DNS > GLB Services > Add new GLB Service"
  2. Select "GLB_glb.company.com" from the list and click "Add Service"
ADD GLB Service Binding.png

 

Step 3 - Testing Round Robin

Now that we have GLB applied to the "glb.company.com" zone, we can see GLB in action.  Using DNS tools such as dig or nslookup (again, do not use ping as a DNS testing tool), query your STM DNS virtual servers and watch what happens to requests for "www.glb.company.com".  The following is test output from the Linux dig command.  We have added comments to the output below, marked with <--(i)--| :

Testing
user@localhost $ dig @172.16.10.101 www.glb.company.com

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.101 www.glb.company.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17761
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com. IN A

;; ANSWER SECTION:
www.glb.company.com. 60 IN A 172.16.20.100 <--(i)--| DataCentre-2 response

;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm1.glb.company.com.
glb.company.com. 604800 IN NS stm2.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101

;; Query time: 1 msec
;; SERVER: 172.16.10.101#53(172.16.10.101)
;; WHEN: Thu Mar 21 13:32:27 2013
;; MSG SIZE  rcvd: 123


user@localhost $ dig @172.16.10.101 www.glb.company.com

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.101 www.glb.company.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9098
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com. IN A

;; ANSWER SECTION:
www.glb.company.com. 60 IN A 172.16.10.100 <--(i)--| DataCentre-1 response

;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm2.glb.company.com.
glb.company.com. 604800 IN NS stm1.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101

;; Query time: 8 msec
;; SERVER: 172.16.10.101#53(172.16.10.101)
;; WHEN: Thu Mar 21 13:32:27 2013
;; MSG SIZE  rcvd: 123
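
The alternating answers in the two dig runs above are exactly what Round Robin should produce: the GLB service hands out the location records in rotation on successive queries.  A toy Python model of that rotation, using the example addresses from the zone file (an illustration of the behaviour, not Stingray's implementation):

```python
from itertools import cycle

# The two A records from the example zone, one per data centre
records = ["172.16.10.100", "172.16.20.100"]
rotation = cycle(records)

# Each successive DNS query receives the next record in the rotation
answers = [next(rotation) for _ in range(4)]
print(answers)
```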

Step 4: GLB Health Monitors

Now that we have GLB running in round robin mode, the next step is to set up HTTP health monitors so that GLB knows whether the application in each data centre is available before sending customers to that data centre for access to the website:

Create GLB Health Monitors

  1. Navigate to "Catalogs > Monitors > Monitors Catalog > Create new monitor"
  2. Fill out the form with the following variables:
    • Name:   GLB_mon_www_AU
    • Type:    HTTP monitor
    • Scope:   GLB/Pool
    • IP or Hostname to monitor: 172.16.10.100:80
  3. Repeat for the other data centre:
    • Name:   GLB_mon_www_US
    • Type:    HTTP monitor
    • Scope:   GLB/Pool
    • IP or Hostname to monitor: 172.16.20.100:80
Google Chrome081.png
  1. Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Locations and Monitoring"
  2. In DataCentre-1, in the field labelled "Add new monitor to the list", select "GLB_mon_www_AU" and click Update.
  3. In DataCentre-2, in the field labelled "Add new monitor to the list", select "GLB_mon_www_US" and click Update.

Google Chrome084.pngGoogle Chrome082.png

 

Step 5: Activate your preferred GLB load balancing logic

Now that you have GLB set up and can detect application failures in each data centre, you can turn on the GLB load balancing algorithm that is right for your application.  You can choose between:

GLB Load Balancing Methods
  • Load
  • Geo
  • Round Robin
  • Adaptive
  • Weighted Random
  • Active-Passive

The online help has a good description of each of these load balancing methods.  You should take care to read it and select the one most appropriate for your business requirements and environment.

 

Step 6: Test everything

Once you have your GLB up and running, it is important to test it for all the failure scenarios you want it to cover.

Remember: failover that has not been tested is not failover...

 

Following is a test matrix that you can use to check the essentials:

Test #  Condition                                        Failure Detected By /        GLB Responded
                                                         Logic Implemented By         as Designed
1       All pool members in DataCentre-1 not available   GLB Health Monitor           Yes / No
2       All pool members in DataCentre-2 not available   GLB Health Monitor           Yes / No
3       Failure of STM1                                  GLB Health Monitor on STM2   Yes / No
4       Failure of STM2                                  GLB Health Monitor on STM1   Yes / No
5       Customers are sent to the geographically         GLB Load Balancing           Yes / No
        correct DataCentre                               Mechanism

 

Notes on testing GLB:

The reason we instruct you to use dig or nslookup in this guide, rather than a tool that also does a DNS resolution, like ping, is that dig and nslookup bypass your local host's DNS cache.  Cached DNS records will prevent you from seeing changes in the status of your GLB while the cache entries are valid.
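
A toy resolver cache makes the point concrete: while a cached entry is still within its TTL, the client never re-queries, so a GLB failover is invisible until the entry expires.  This is a hypothetical illustration; the timings and the failover moment are invented:

```python
class ToyDnsCache:
    """Minimal TTL cache showing why cached answers mask GLB changes."""

    def __init__(self):
        self._store = {}  # name -> (answer, expiry time)

    def lookup(self, name, resolve, now):
        entry = self._store.get(name)
        if entry and now < entry[1]:
            return entry[0]          # cached: GLB is never consulted
        answer, ttl = resolve(name)  # cache miss: ask the GLB name server
        self._store[name] = (answer, now + ttl)
        return answer

# GLB fails www over to DataCentre-2 at t=30, but the client cached
# the DataCentre-1 answer with a 60-second TTL at t=0.
zone = {"www.glb.company.com": ("172.16.10.100", 60)}
cache = ToyDnsCache()
first = cache.lookup("www.glb.company.com", zone.get, now=0)
zone["www.glb.company.com"] = ("172.16.20.100", 60)  # simulated failover
stale = cache.lookup("www.glb.company.com", zone.get, now=30)
fresh = cache.lookup("www.glb.company.com", zone.get, now=61)
print(first, stale, fresh)
```

This is also why GLB services are typically served with short TTLs (note the 60-second TTL on the GLB answers in the dig output above, versus 604800 on the static records).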

 

 

The Final Step - Create your CNAME:

Now that you have a working GLB entry for "www.glb.company.com", all that is left to do is to create or change the record for the real site "www.company.com" to be a CNAME for "www.glb.company.com".

Sample Zone File
;
; BIND data file for company.com
;

$TTL    604800
@       IN      SOA     ns1.company.com. info.company.com. (
                            201303211312 ; Serial
                            7200         ; Refresh
                            120          ; Retry
                            2419200      ; Expire
                            604800       ; Default TTL
)      
;
@ IN NS ns1.company.com.
;
; Here is our CNAME
www IN CNAME www.glb.company.com.

Brocade vADC Product Icons and Stencils

by PaulWallace on ‎11-14-2016 03:33 PM (800 Views)
I have zipped up some icons and Visio stencils that our technical teams use when they are creating diagrams. I have included different formats, including solid and sketch styles:
 
Attached are three files:
brocade-vadc-product-icons.zip - PNG files
brocade-vadc-visio-stencils-1.zip - Visio files
brocade-vadc-visio-stencils-2.zip - Visio files

 

brocade-sd-ex1.png brocade-sd-ex2.png
 
 

Virtual Web Application Firewall Signature (Baseline Version 201611100932)

by vWAF on ‎11-10-2016 01:33 AM (207 Views)

A new policy (baseline version 201611100932) for the Virtual Web Application Firewall is now available.

Change log:

  • Changed: replace statement - Reason: Tag this rule as MySQL extension
  • Changed: remote file inclusion - Reason: optimize rule
  • Changed: XSS via CSS expression - Reason: optimize pattern

The updated policy will be available for download in the product after a short delay.

Tech Tip: Perceptive Load Balancing

by fmemon on ‎02-21-2013 09:17 AM - edited on ‎11-07-2016 03:13 AM by PaulWallace (1,200 Views)

Stingray can load balance servers in a few different ways. Looking at a Pool's Load Balancing configuration page shows the different options:

 

lb2.png

 

They're all pretty straightforward except for Perceptive; how does that one work?  Perceptive can be thought of as Least Connections skewed to favor the servers with the fastest response time.  Perceptive factors both connection counts and response times into the load balancing decision to ensure that traffic is distributed evenly amongst the servers in a farm.  It is best understood through a few examples:

 

Heterogeneous Server Farm

 

A great scenario in which to use Perceptive is when your server farm is heterogeneous,  where some servers are more powerful than others.  The challenge is to ensure that the more powerful servers get a greater share of the traffic, but that the weaker servers are not starved.

 

Perceptive will begin by distributing traffic based on connection counts, like Least Connections.  This ensures that the weaker servers are getting traffic and not sitting idle.  As traffic increases the powerful servers will naturally be able to handle it better, leading to a disparity in response times.  This will trigger Perceptive to begin favoring those more powerful servers, as they are responding quicker, by giving them a greater share of the traffic.

 

Heterogeneous workloads

 

Another great scenario in which to use Perceptive is when your workload is heterogeneous, where some requests generate a lot more load on your servers than others.  As in the Heterogeneous Server Farm case, Perceptive will begin by distributing traffic like Least Connections.  When the workload becomes more heterogeneous, some servers will get bogged down with the more CPU-intensive requests and begin to respond more slowly.  This will trigger Perceptive to send traffic away from those servers, to the other servers that are not bogged down and are responding quicker.

 

Ramping up traffic to a new server

 

The perceptive algorithm introduces traffic to a new server (or a server that has returned from a failed state) gently. When a new server is added to a pool, the algorithm tries it with a single request, and if it receives a reply, gradually increases the number of requests it sends the new server until it is receiving the same proportion of the load as other equivalent nodes in the pool. The algorithm used to ramp up the load is adaptive, so it isn't possible to make statements of the sort "the load will be increased from 0 to 100% of its fair share over 2 minutes"; the rate at which the load is increased is dependent on the responsiveness of the server. So, for example, a new web server serving a small quantity of static content will very quickly be ramped up to full speed, whereas a Java application server that compiles JSPs the first time they are used (and so is slow to respond to begin with) will be ramped up more slowly.
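
As a mental model of that ramp-up, imagine the new server's share of traffic growing each time it responds successfully, capped at its fair share, and falling back when it fails to respond.  This is only an illustrative sketch with invented numbers; the real algorithm is adaptive and does not use fixed growth factors:

```python
def ramp_up(responses, fair_share=1.0, start=0.01, growth=2.0):
    """Grow a new server's traffic share on each successful response,
    capped at its fair share; drop back to the start on a failure."""
    share = start
    history = []
    for ok in responses:
        share = min(share * growth, fair_share) if ok else start
        history.append(share)
    return history

# A responsive static-content server reaches its full share quickly...
print(ramp_up([True] * 8))
# ...while a server that stalls early (e.g. compiling JSPs) restarts the ramp.
print(ramp_up([True, False, True, True]))
```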

 

Summary

 

The Perceptive load balancing algorithm factors both connection counts and response times into a two-step load balancing decision.  When there is little disparity in response times, traffic is distributed like Least Connections.  When there is a larger disparity in response times, Perceptive factors this in and favors the servers that are responding quicker, like Fastest Response Time.  Perceptive is great for handling heterogeneity in both the server farm and the workload, ensuring efficient load balancing across your server farm in either case.
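
A toy scoring function captures the spirit of that two-step decision: weight each server's connection count by its response time and pick the lowest score.  This is a hypothetical simplification for intuition only, not Stingray's actual Perceptive implementation:

```python
def pick_server(servers):
    """Pick the server with the lowest blended score: active connections
    weighted by mean response time (illustrative, not Stingray's code)."""
    return min(servers, key=lambda s: (s["conns"] + 1) * s["resp_ms"])

# Equal response times: the decision reduces to Least Connections
even = [
    {"name": "a", "conns": 4, "resp_ms": 20},
    {"name": "b", "conns": 2, "resp_ms": 20},
]
# Large response-time disparity: the faster server wins despite more connections
skewed = [
    {"name": "slow", "conns": 2, "resp_ms": 200},
    {"name": "fast", "conns": 6, "resp_ms": 20},
]
print(pick_server(even)["name"], pick_server(skewed)["name"])  # → b fast
```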

 

Read more

 

For a more detailed discussion of the load balancing capabilities of Stingray, check out Feature Brief: Load Balancing in Stingray Traffic Manager, and take a look at the video introduction: Video: Introduction to Stingray Load Balancing

 

vRealize Orchestrator Workflows for vADC

by mbodding on ‎11-01-2016 02:06 PM (602 Views)

The vADC Package

This article describes the installation, configuration, and usage of the vADC Package for VMware vRealize Orchestrator (vRO).

 

The package contains a number of workflows which can communicate with both the Brocade VTM, and the Brocade Services Director via REST APIs. The workflows support licensing and registration of newly deployed vTMs, and also pushing configuration to the vTMs themselves (either directly or via the Services Director).

 

Installing the Package

To install the vADC package, you will need to start the vRO client application and then switch to the “Administer” or “Design” view.

 

When in the correct view, navigate to the “Package” tab and click the import button.

 

Import the vadc package

 

Select the “com.brocade.vadc.package” file and click open.

 

vRO packages are signed by the certificate of the server on which they were created, so it’s likely that you will see a prompt to import/accept the certificate of the server that created the package (in this walkthrough, the lab server “vro.vmware.lan”). Click “import” to continue.

 

You will then see a list of the workflows which are included in the package. All of the workflows should be selected by default. Once you have confirmed that to be the case, simply click on the “import selected elements” button.

 

Select Elements

 

When the import completes, you should have com.brocade.vadc listed in your packages list. Navigate to the “Design” view, and you should now see a “Brocade.vADC” folder in the workflows listing.

 

Registering your REST Hosts

If you are using Brocade Services Director to license your vTMs, then the first thing you will want to do is register the Services Director REST API end-point. If however you don’t use Services Director, then you will want to start registering vTM REST end-points directly.

 

The next steps are slightly different in each scenario:

 

With Brocade Services Director

Execute the “vRO: Register Services Director” workflow first. Then run the “vRO: Register vTM via Services Director” workflow for each of your licensed vTMs. The vTM must be known to Services Director before you attempt to register it with vRO.

 

If you have a vTM which is not yet licensed through Services Director, then you will have to do one of the following:

 

  1. Use the “SD: License vTM” or “SD: Register vTM” workflows to license the vTM with Services Director, and then run the “vRO: Register vTM via Services Director” workflow to register the end-point for the vTM.
  2. If the vTM is not to be licensed with Services Director, then you can use the “vRO: Register vTM” workflow to register its API end-point directly.

 

Without Brocade Services Director

If you don’t use Services Director, then the REST API end-points must be set up directly with each vTM using the “vRO Register vTM” workflow.

 

Service Director Registration Walkthrough

The registration workflows are quite simple. They install the certificate from the Services Director or the vTM into vRO’s key store, and then they register the API end-point as a REST Host. In this example, we’re registering the Services Director, but the direct vTM screen is very similar.

 

Register Services Director

 

Fill out the form by providing a friendly name for the Services Director, the REST end-point (<hostname:port>) of the API, along with the username and password for accessing it.

 

When the workflow completes, the newly registered host should appear in the HTTP-REST section of your inventory.

 

Inventory

 

You can now use this REST Host in the “SD:*” workflows.

 

Licensing a vTM using the SD: License vTM workflow

Once you have your Services Director registered as a HTTP-REST object in your inventory, you can use it to license vTM instances.

 

License a vTM

 

Start up the “SD: License vTM” workflow and select your Services Director as the REST Host. The rest of the form can be completed by providing a “tag” or hostname for the vTM, along with the Feature Pack, Bandwidth Limit, and Owner.

 

If you are registering a “legacy” vTM (older than 10.1), then you will need to supply the vTM instance’s hostname in the address box. If it’s a “universal” licensed vTM (10.1 and newer), then it can be an IP address.

 

On the next screen you will need to flag the vTM if it is a “legacy” version.

 

License vTM 2

 

Obviously, if you intend to use the “vTM: *” configuration workflows with this vTM, then the REST API should be enabled.

 

vTM Registration Walkthrough

Now that you have a Services Director registered with vRO and the vTM is licensed through the Services Director, we can register the vTM end-point with vRO.

 

Kick off the “vRO: Register vTM via Services Director” workflow. There are only two parameters: the first is the Services Director (from your inventory), and the second is the instanceID or tag that you gave the vTM when it was licensed.

 

Register vTM Host

 

Once the vTM is registered, it will also appear in your HTTP-REST inventory.

 

vTM Rest Inventory

 

Configuring the vTM

In order to configure a vTM, it needs to be registered as a REST Host with vRO. This can be a direct registration, where vRO communicates with the vTM API directly, or an indirect one, where the communication goes via the Services Director.

 

The vADC package includes a number of workflows. See the workflows section for a complete list. We’re not going to go into great detail on each one, but rather go through a couple of examples.

 

vTM Configuration Example: Adding a Pool

 This is an example of adding a pool to a vTM using the “vTM: Add Pool” workflow.

 

Adding a Pool

 

This workflow needs a Rest Host as its first parameter. This should be the vTM which you have previously registered with vRO. You also need to supply:

 

  • A name
  • A list of nodes
  • The name of an existing persistence class (if needed)
  • The Load Balancing Algorithm
  • Whether to use Passive Monitoring
  • An array of Health Monitors
  • Whether to use SSL Encryption with the nodes
  • Whether to do strict SSL checking on the node certificates
  • An optional verbose flag

Click “Submit” to start the workflow. If all is well, it should complete successfully, and you should have a new pool on your vTM.
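
Behind the form, the workflow is driving the vTM REST API. For comparison, a direct call with no vRO involved would PUT a JSON pool definition to the config API. The sketch below builds (but does not send) such a request using only the Python standard library; the host name is invented, and the port (9070) and the properties/basic/nodes_table payload shape are assumptions based on the vTM 3.x REST API, so check the API guide for your version before relying on them:

```python
import json
import urllib.request

def add_pool_request(host, pool_name, nodes, api_version="3.3"):
    """Build (but do not send) a PUT request that would create a pool."""
    url = "https://{}:9070/api/tm/{}/config/active/pools/{}".format(
        host, api_version, pool_name)
    body = {
        "properties": {
            "basic": {
                # One entry per node: its address:port and initial state
                "nodes_table": [{"node": n, "state": "active"} for n in nodes],
            }
        }
    }
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

req = add_pool_request("vtm1.lab.local", "web-pool",
                       ["10.0.0.10:80", "10.0.0.11:80"])
print(req.get_method(), req.full_url)
```

Sending the request (with authentication added) should create or update the pool; the vRO workflow wraps the same call with credential handling and error reporting.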

 

Pool Created

 

Here ends example one.

 

vTM Configuration Example: Service Deploy

vRO workflows can be combined into larger workflow chains, and an example of such is provided with the package. Take a look at “Service: Deploy HTTP Service” and “Service: Remove HTTP Service” to see how these work. Below is the schema for the deploy workflow:

 

Service Deploy Schema

 

This workflow executes a chain of actions to create the following:

 

  1. Adds an HTTP Health Monitor
  2. Adds SSL Certificates (if SSL Offload is required)
  3. Adds a Session Persistence Class (if Persistence is required)
  4. Adds a Pool
  5. Adds a Traffic IP Group
  6. Adds a Virtual Server

If any of the steps in the chain fails, then the exception is caught and a rollback begins.
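
That catch-and-unwind pattern is easy to model generically: run each step, remember its undo action, and on the first failure undo the completed steps in reverse order. A hypothetical sketch (the step names echo the chain above, and the failure is simulated):

```python
def run_chain(steps):
    """Run (name, do, undo) steps in order; on the first failure,
    undo the already-completed steps in reverse and stop."""
    done = []
    for name, do, undo in steps:
        try:
            do()
        except Exception:
            for _, undo_fn in reversed(done):
                undo_fn()  # roll back what was already created
            return "rolled back at " + name
        done.append((name, undo))
    return "deployed"

log = []

def fail():
    raise RuntimeError("port in use")  # simulated API error

steps = [
    ("monitor", lambda: log.append("+monitor"), lambda: log.append("-monitor")),
    ("pool",    lambda: log.append("+pool"),    lambda: log.append("-pool")),
    ("vserver", fail,                           lambda: log.append("-vserver")),
]
result = run_chain(steps)
print(result, log)
```

Unwinding in reverse order matters because later objects (a virtual server) reference earlier ones (a pool), so they must be removed before their dependencies.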

 

Service Deploy 1

 

The workflow, when used interactively, paginates its inputs. The first section takes an “application ID”, which is then used as a prefix for all related resources, along with a Rest Host (your vTM), a Traffic IP Group, and the service port on which to listen.

 

Service Deploy 2

 

The second page is dedicated to SSL. If SSL Offload is enabled, then you must also provide a private key and certificate in PEM format.

 

The final page takes configuration for the pool. This includes the nodes, the persistence method if required, and options for SSL Encryption. You must also supply a Host header and path for use in the service’s HTTP Health Monitor.

 

Service Deploy 3

 

When executed successfully, this workflow will create all the objects necessary to deliver the service through vTM.

 

vTM Service

Here ends example two.

 

The Workflows

The workflows included can be split into several functional groups: vRO Registration, vTM Licensing, vTM Configuration, and Service Examples.

 

vRO Registration

The following workflows are used to register vTM and Service Director REST API end-points with vRO. The end-points are then used in SD Licensing, and vTM Configuration workflows.

 

vRO: Register Services Director

Register a REST API end-point for a Services Director. This can then be used in the vTM Licensing workflows below.

 

vRO: Register vTM

Register the REST API end-point of a vTM directly. You should use this if you do not have a Brocade Services Director or if you want to call the vTM’s API directly.

 

vRO: Register vTM via Services Director

Register a REST API end-point of a vTM, which is licensed and available via the Brocade Services Director.

 

vTM Licensing

The following workflows are used to assign and remove vTM licenses with the Brocade Services Director. vTMs which are configured to self-register on deployment (vTM 10.4 and above on supported platforms) can be approved/licensed with the “Register vTM” workflow. Other vTMs which have been deployed without self-registration should be licensed using the “License vTM” workflow.

 

SD: License vTM

License a vTM with the Services Director.

 

SD: Register vTM

Approve a self-registered vTM (10.4 and above) with the Services Director.

 

SD: Remove vTM

Mark a vTM as “deleted” in the Services Director inventory.

 

vTM Configuration

The workflows listed below all perform add/delete/modify operations on a vTM’s configuration. Each workflow requires a “Rest Host” resource, which is a vTM registered by either of the vRO vTM registration workflows above.

 

Workflows which add resources:

 

  • vTM: Add HTTP Monitor – Add an HTTP Monitor on the vTM.
  • vTM: Add Pool – Add a Server Pool on the vTM.
  • vTM: Add Rule – Add a TrafficScript Rule on the vTM.
  • vTM: Add Server Certificate – Add an SSL Server Certificate key pair on the vTM.
  • vTM: Add Session Persistence – Add a Session Persistence class on the vTM.
  • vTM: Add Traffic IP Group – Add a Traffic IP Group on the vTM.
  • vTM: Add VServer – Add a Virtual Server on the vTM.

All of the “add” workflows above have a corresponding “delete” workflow:

 

  • vTM: Del HTTP Monitor
  • vTM: Del Pool
  • vTM: Del Rule
  • vTM: Del Server Certificate
  • vTM: Del Session Persistence
  • vTM: Del Traffic IP Group
  • vTM: Del VServer

The Add and Delete workflows above don’t output anything on success, but will raise an Exception if they encounter a problem or an error. The following “listing” workflows all return an array of the elements which are being listed. For example the “vTM: List Pools” workflow will return a list of pools in an array/string.

 

  • vTM: List Health Monitors
  • vTM: List Pools
  • vTM: List Rules
  • vTM: List Server Certificates
  • vTM: List Session Persistence
  • vTM: List Traffic IP Groups
  • vTM: List VServers

There are also some “Get” workflows which retrieve the state of Nodes in a pool, and also the rules applied to a Virtual Server:

 

vTM: Get Pool Nodes

This returns a JSON string of the nodes_table from the API, and also three Array/String lists of the Active, Disabled, and Draining nodes.

 

vTM: Get VServer Rules

This returns three Array/String lists of the rules applied to a Virtual Server: Request rules, Response rules, and Completion Rules.

 

Finally, there are some workflows which modify the state of existing configuration objects:

 

  • vTM: Mod Pool Nodes – Set the Nodes in a Pool, both active and draining.
  • vTM: Mod Pool Draining Nodes – Drain or undrain the given nodes.
  • vTM: Mod VServer Rules – Set the lists of request, response, and completion rules.
  • vTM: Mod VServer Status – Mark a VServer as enabled or disabled.

Advanced Configuration Options

 

Service Director License Selection

The “SD: License vTM” workflow applies a license to the vTM as part of the instance creation process. The license name is taken from the “licence” attribute for 10.1 and newer vTMs, and from the “fla” attribute for legacy vTMs.

 

License Names

 

The defaults for these keys are: "universal_v3" and "legacy_9.3"

 

If you are using Services Director 2.5 or newer, then you should change this license name to universal_v4.

 

Also, if your legacy FLA is named differently, you will want to update this value to match.

 

REST API Versions

The API version used by default when registering REST Hosts is set for compatibility with Brocade Services Director 2.4 and newer, and vTM 9.9 and newer. To that end, the “vRO: Register Services Director” API version is set at 2.2, and the “vRO: Register vTM” workflows are set to version 3.3.

 

Updating the Brocade vTM GeoIP database

by apritchard on ‎03-27-2014 05:44 AM - edited on ‎09-21-2016 07:18 AM by PaulWallace (1,900 Views)

This document covers updating the built-in GeoIP database. See TechTip: Extending the Brocade vTM GeoIP database for instructions on adding custom entries to the database.

 

Brocade vTM GeoIP Update package.

 

Brocade now provides a GeoIP Update package, available in the Related Software section of the http://my.brocade.com portal. This package is updated approximately monthly, and its version number can be compared with the GeoIP database version in $ZEUSHOME/zxtm/etc/geo/version.

 

To use this update package:

  1. Download the GeoIP Update from http://my.brocade.com and navigate to:
    Downloads > Application Delivery Controllers > Virtual Traffic Manager > Related Software
        > Brocade Virtual Traffic Manager GeoIP Update 20160907
  2. Upload the package to the target vTM instance (System > Traffic Managers > Upgrade).
  3. Click 'Install this upgrade'. vTM applies the update and restarts.
  4. Verify the audit log shows the upgrade was successful.
  5. Repeat 2-4 for other cluster members.

 

On our AMI on EC2 the Upgrade UI option is not available, so you must copy the upgrade package to the instance, log in, and run /opt/zeus/zxtm/bin/upgrade, e.g.:

/opt/zeus/zxtm/bin/upgrade install geoip_update_20160907.tgz

 

If you upgrade to a new version of the Traffic Manager, it will switch to the GeoIP database version included in that release. If you had previously updated to a later version of the database, you will need to reapply the GeoIP update.

 

Using the MaxMind GeoIP City database.

 

The GeoIP database shipped with Brocade vTM is based on the MaxMind GeoLite City database. MaxMind also produce a commercially licensed database of IPv4 locations, GeoIP City, which is more detailed. If you are using version 9.6 or later you can switch to using this database.

 

First, take a copy of the folder $ZEUSHOME/zxtm/etc/geo and the file $ZEUSHOME/zxtmadmin/lib/perl/Zeus/ZXTM/CountryData.pm.

You can switch back to the original GeoIP database by restoring these and restarting the traffic manager.

 

  1. Download the "CSV with IP addresses in numeric format and separate table for locations" version of the GeoIP City database.
  2. Unzip the resulting archive. This will create a folder such as GeoIP-134_20140218 containing two CSV files with names like GeoIPCity-134-Blocks.csv and GeoIPCity-134-Location.csv.
  3. Download the country codes file from http://dev.maxmind.com/static/csv/codes/iso3166.csv
  4. Download the region codes file from http://dev.maxmind.com/static/csv/codes/maxmind/region.csv
  5. Run the conversion script, specifying paths to the 4 CSV files in the order: locations file, blocks file, country codes, region codes. e.g:
    $ZEUSHOME/zxtm/bin/process_geoip.pl GeoIP-134_20140218/GeoIPCity-134-Location.csv GeoIP-134_20140218/GeoIPCity-134-Blocks.csv iso3166.csv region.csv
  6. This will create a folder called output containing:
    • CountryData.pm
    • base_locations.txt
    • country_codes.txt
    • ip-to-location.bin
    • region_codes.txt
    It will also report on IP ranges whose Region Code is not found in the regions file.
    These will return a region code, but will return an empty string when asked for the region name.
  7. Overwrite $ZEUSHOME/zxtmadmin/lib/perl/Zeus/ZXTM/CountryData.pm
    with a copy of CountryData.pm if they differ.
  8. Replace the contents of $ZEUSHOME/zxtm/etc/geo with copies of the other files in the output folder
  9. Restart the traffic manager
  10. Repeat steps 7-9 for other cluster members

 

If you upgrade to a new version of the Traffic Manager, you will need to reapply these changes.

TechTip: Extending the Brocade vTM GeoIP database

by on ‎03-20-2013 09:12 AM - edited on ‎09-21-2016 07:02 AM by PaulWallace (2,699 Views)

Brocade Virtual Traffic Manager contains a GeoIP database that maps IP addresses to location - longitude and latitude, city, county and country.  The GeoIP database is used by the Global Load Balancing capability to estimate distances between remote users and local datacenters, and it is accessible using the geo.* TrafficScript and Java functions.

 

For example, to discover the 2-letter country code that a site visitor is accessing from, use the following:

 

$ip = request.getRemoteIP();
$countryCode = geo.getCountryCode( $ip );

 

The database that is included in Brocade vTM is derived from MaxMind's GeoLite City database and is updated with each Stingray release. It is also possible to update to a later version using an upgrade package, or to switch to the MaxMind GeoIP City database, see Updating the Brocade vTM GeoIP database for details.

 

What if the IP address I'm using is not recognized?

The database does not include locations for private IP address ranges (as per RFC1918), and other IP ranges may be missing or inaccurate if they were recently allocated or moved.  This tech tip explains how you can extend the internal GeoIP database to add or override IP address ranges.

 

Extending the Brocade vTM GeoIP database

Extensions to the database are stored in the file ZEUSHOME/zxtm/conf/locations.cfg

ZEUSHOME refers to the installation directory of your Traffic Manager, typically /opt/zeus or /usr/local/zeus.

 

The format of each extension is a single line as follows:

 

firstIP    lastIP    lat    lon    CC    RR    city

 

The following rules and definitions apply to each mapping:

 

  • The elements in the line are white-space separated
  • The IP range is inclusive, and the latitude (lat) and longitude (lon) are either "-" or decimal degrees.
  • CC and RR are either "-" or two-letter country and region codes, e.g. US TX for Texas. The special files ZEUSHOME/zxtm/etc/geo/country_codes.txt and ZEUSHOME/zxtm/etc/geo/region_codes.txt provide a full list of the relevant codes.
  • The city name can include spaces, e.g. San Francisco, but does not specifically have to refer to a city (any descriptive text is acceptable).
  • Only the first two fields are required.

 

Some example mappings are provided below:

 

192.168.0.1  192.168.0.128  52.1234  -0.5678  US  TX  Shiny new datacentre
172.16.0.1   172.16.255.255 -        -        -   -   Test VPN
99.98.97.96  99.98.97.99

 

 

Testing the IP Address mappings

 

You can test any changes with the following simple request rule:

 

$text = "";

$text .= whereis( "192.168.35.40" );   $text .= "\n";
$text .= whereis( "192.168.199.199" ); $text .= "\n";
$text .= whereis( "17.18.19.20" );     $text .= "\n";

http.sendResponse( "200", "text/plain", $text, "" );

sub whereis( $ip ) {
   return $ip . " is in:\n" .
      "  Country:     " . geo.getCountry( $ip ) . "\n" .
      "  CountryCode: " . geo.getCountryCode( $ip ) . "\n" .
      "  Region:      " . geo.getRegion( $ip ) . "\n" .
      "  City:        " . geo.getCity( $ip ) . "\n" .
      "  Long/Lat:    " . geo.getLongitude( $ip ) . '/' . geo.getLatitude( $ip ) . "\n";
}

 

Editing Brocade vTM Configuration Files

 

You can edit the ZEUSHOME/zxtm/conf/locations.cfg file directly.  The configuration system will notice that the file has changed and automatically load the location mappings defined within it.

 

Configuration is normally replicated automatically across a cluster, but not if you edit a configuration file directly in the file system.  You'll need to replicate the updated configuration in one of three ways:

 

  • If you make a configuration change using the UI, REST or SOAP APIs, the configuration on the target vTM is then replicated across the cluster.
  • You can manually initiate a configuration replication operation from the Diagnose -> Cluster Diagnosis page in the Admin Interface of the machine you have updated.
  • You can call the ZEUSHOME/zxtm/bin/replicate-config script on the local machine to replicate its configuration across the cluster.
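For example, after appending a mapping directly to locations.cfg on one traffic manager, the third option might look like this. This is a sketch only: the ZEUSHOME path and the example IP range are illustrative, so adjust them for your installation.

```shell
# Append a custom mapping to the local GeoIP extensions file,
# then push this machine's configuration out to the rest of the cluster.
ZEUSHOME=/usr/local/zeus
echo "10.0.0.1 10.0.0.255 - - - - Branch office VPN" >> $ZEUSHOME/zxtm/conf/locations.cfg
$ZEUSHOME/zxtm/bin/replicate-config
```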

Virtual Web Application Firewall Signature (Baseline Version 201608180911)

by vWAF on ‎08-18-2016 02:27 AM (501 Views)

A new policy (baseline version 201608180911) for the Virtual Web Application Firewall is now available.

Change log:

  • Changed: XSS via STYLE tag - Reason: normalize rule
  • Changed: HTML tag with href attribute - Reason: enhance rule
  • Changed: XSS via LINK tag - Reason: normalize rule
  • Changed: HTML tag with rel attribute - Reason: enhance rule
  • Changed: XSS via OBJECT tag - Reason: normalize rule
  • Changed: Detects <A HREF Link injection tricks - Reason: normalize rule
  • Changed: Catch IFRAME injections - Reason: normalize rule
  • Changed: XSS via BODY tag - Reason: fix rule
  • Changed: XSS via TABLE tag - Reason: normalize rule
  • Changed: XSS via DIV tag - Reason: normalize rule
  • Changed: XSS via META tag - Reason: normalize rule

The updated policy will be available for download in the product after a short delay.

Virtual Web Application Firewall Signature (Baseline Version 201607280841)

by vWAF on ‎07-28-2016 01:42 AM (400 Views)

A new policy (baseline version 201607280841) for the Virtual Web Application Firewall is now available.

Change log:

  • Added: HTTP Proxy Header attack

The updated policy will be available for download in the product after a short delay.

Virtual Web Application Firewall Signature (Baseline Version 201606021442)

by vWAF on ‎06-20-2016 01:33 AM (311 Views)

A new policy (baseline version 201606021442) for the Virtual Web Application Firewall is now available.

Change log:

  • Changed: XSS via IMG tag - Reason: simplify rule
  • Changed: XSS via LAYER tag - Reason: simplify rule
  • Changed: XSS via BASE tag - Reason: simplify rule

The updated policy will be available for download in the product after a short delay.

HowTo: Respond directly to DNS requests using libDNS.rts

by on ‎04-10-2013 08:21 AM - edited on ‎05-13-2016 05:40 AM by PaulWallace (1,336 Views)

This article uses the libDNS.rts TrafficScript library, as described in libDNS.rts: Interrogating and managing DNS traffic in Stingray.

 

In this example, we intercept DNS requests. If a client based in the UK seeks to resolve www.site.com, we respond directly with a CNAME record, directing them to resolve www.site.co.uk instead.

 

Request rule

 

import libDNS.rts as dns;

$request = request.get();
$packet = dns.convertRawDataToObject( $request, "udp" );

# Ignore unparsable packets and query responses to avoid
# attacks like the one described in CVE-2004-0789.
if( hash.count( $packet ) == 0 || $packet["qr"] == "1" ) {
   break;
}

$host = dns.getQuestion( $packet )["host"];
$country = geo.getCountry( request.getRemoteIP() );

if( $host == "www.site.com." && $country == "GB" ) {
   $packet = dns.addResponse( $packet, "answer",
      "www.site.com.", "www.site.co.uk.", "CNAME", "IN", "60", [] );
   $packet["qr"] = 1;

   request.sendResponse( dns.convertObjectToRawData( $packet, "udp" ) );
}

libDNS.rts: Interrogating and managing DNS traffic in Stingray

by on ‎04-10-2013 07:35 AM - edited on ‎05-13-2016 05:37 AM by PaulWallace (3,237 Views)

 UPDATE January 2015: libDNS version 2.1 released!

The new release is backward-compatible with previous releases and includes the following bug fixes:

  • Fixed multiple bugs in parsing of DNS records

 

The libDNS.rts library (written by Matthew Geldert and enhanced by Matthias Scheler) attached below provides a means to interrogate and modify DNS traffic from a TrafficScript rule, and to respond directly to DNS requests when desired.

 

You can use it to inspect DNS queries, rewrite requests and responses in transit, and construct DNS responses directly from a rule.

 

 

Note: This library allows you to inspect and modify DNS traffic as it is balanced by Stingray.  If you want to issue DNS requests from Stingray, check out the net.dns.resolveHost() and net.dns.resolveIP() TrafficScript functions instead.
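By contrast with the packet-level manipulation this library provides, issuing a lookup from Stingray itself is a single function call. A minimal sketch (the hostname is illustrative):

```trafficscript
# Resolve a hostname from within a rule using Stingray's built-in resolver
$ip = net.dns.resolveHost( "www.site.com" );
log.info( "www.site.com resolves to " . $ip );
```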

 

Overview

 

This library can be used when Stingray receives DNS lookups (UDP or TCP) on a virtual server; the virtual server can optionally use a pool to forward these requests to a real DNS server:

 


Typically, a listen IP address of the Stingray will be published as the NS record for a subdomain, so that clients explicitly direct DNS lookups to the Stingray device. Alternatively, you can run Stingray software in a transparent mode and intercept DNS lookups that are routed through Stingray, using the iptables configuration described in Transparent Load Balancing with Stingray Traffic Manager.

 

Installation

 

 

Step 1:

 

Configure Stingray with a virtual server listening on port 53; you can use the 'discard' pool to get started:

 


If you use 'dig' to send a DNS request to that virtual server (listening on 192.168.35.10 in this case), you won't get a response because the virtual server will discard all requests:

 

$ dig @192.168.35.10 www.zeus.com

; <<>> DiG 9.8.3-P1 <<>> @192.168.35.10 www.zeus.com

; (1 server found)

;; global options: +cmd

;; connection timed out; no servers could be reached

 

Step 2:

 

Create a new rule named libDNS.rts using the attached TrafficScript Source:

 

This rule is used as a library, so you don't need to associate it with a Virtual Server.

 

Then associate the following request rule with your virtual server:

 

import libDNS.rts as dns;

$request = request.get();
$packet = dns.convertRawDataToObject( $request, "udp" );

# Ignore unparsable packets and query responses to avoid
# attacks like the one described in CVE-2004-0789.
if( hash.count( $packet ) == 0 || $packet["qr"] == "1" ) {
   break;
}

log.info( "DNS request: " . lang.dump( $packet ) );

$host = dns.getQuestion( $packet )["host"];

$packet = dns.addResponse( $packet, "answer", $host, "www.site.com.", "CNAME", "IN", "60", [] );
$packet["qr"] = 1;

log.info( "DNS response: " . lang.dump( $packet ) );

request.sendResponse( dns.convertObjectToRawData( $packet, "udp" ) );

 

This request rule will catch all DNS requests and respond directly with a CNAME directing the client to look up 'www.site.com' instead:

 

$ dig @192.168.35.10 www.zeus.com

; <<>> DiG 9.8.3-P1 <<>> @192.168.35.10 www.zeus.com

; (1 server found)

;; global options: +cmd

;; Got answer:

;; ->>HEADER<

 

 

Documentation

 

This DNS library provides a set of helper functions to query and modify DNS packets.  Internally, a decoded DNS packet is represented as a TrafficScript data structure; when writing and debugging rules, it can be useful to call log.info( lang.dump( $packet ) ), as in the rule above, to inspect the internal structures.

 

Create a new DNS packet

 

$packet = dns.newDnsObject();

log.info( lang.dump( $packet ) );

 

Creates a data structure model of a DNS request/response packet. The default flags are set for an A record lookup. Returns the internal hash data structure used for the DNS packet.

 

Set the question in a DNS packet

 

$packet = dns.setQuestion( $packet, $host, $type, $class );

 

Sets the question in a data structure representation of a DNS packet.

 

  • $packet - the packet data structure to manipulate.
  • $host - the host to lookup.
  • $type - the type of RR to request.
  • $class - the class of the query (typically "IN").

 

Returns the modified packet data structure, or -1 if the type is unknown.
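For example, building a fresh query from scratch and checking for an unknown type might look like this (the hostname is illustrative):

```trafficscript
# Build a new A-record query for a host
$packet = dns.newDnsObject();
$packet = dns.setQuestion( $packet, "www.example.com.", "A", "IN" );

if( $packet == -1 ) {
   log.warn( "Unknown record type" );
   break;
}
```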

 

 

Get the question in the DNS packet

 

$q = dns.getQuestion( $packet );

$host = $q["host"];

$type = $q["type"];

$class = $q["class"];

 

Gets the question from a data structure representing a DNS request/response.  Returns a hash containing the host, type and class from the data structure's question section.

 

Add an answer to the DNS packet

 

$packet = dns.addResponse($packet, $section, $name, $host, $type, $class, $ttl, $additional);

 

Adds an answer to the specified RR section:

 

  • $packet - the packet data structure to manipulate.
  • $section - name of the section ("answer", "authority" or "additional") to add the answer to
  • $name, $host, $type, $class, $ttl, $additional - the answer data

 

Returns the modified packet data structure.

 

Remove an answer from the DNS packet

 

$packet = dns.removeResponse( $packet, $section, $answer );

while( $packet["additionalcount"] > 0 ) {
   $packet = dns.removeResponse( $packet, "additional", 0 );
}

 

Removes an answer from the specified RR section.

 

  • $packet - the packet data structure to manipulate.
  • $section - name of the section ("answer", "authority" or "additional") to remove the entry from
  • $answer - the array position of the answer to remove (0 removes the first).

 

Returns the modified packet data structure, or -1 if the specified array key is out of range.

 

Convert the data structure packet object to a raw packet

 

$data = dns.convertObjectToRawData( $packet, $protocol );
request.sendResponse( $data );

 

Converts a data structure into the raw data suitable to send in a DNS request or response.

 

  • $packet - data structure to convert.
  • $protocol - transport protocol of data ("udp" or "tcp").

 

Returns the raw packet data.

 

 

Convert a raw packet to an internal data structure object

 

$data = request.get();
$packet = dns.convertRawDataToObject( $data, $protocol );

 

Converts a raw DNS request/response to a manipulatable data structure.

 

  • $data - raw DNS packet
  • $protocol - transport protocol of data ("udp" or "tcp")

 

Returns a TrafficScript data structure (hash).

 

 

Kudos

 

Kudos to Matthew Geldert, original author of this library, and Matthias Scheler, author of version 2.0.

HowTo: Implement a simple DNS resolver using libDNS.rts

by on ‎04-10-2013 08:30 AM - edited on ‎05-13-2016 05:35 AM by PaulWallace (1,511 Views)

This article uses the libDNS.rts TrafficScript library, as described in libDNS.rts: Interrogating and managing DNS traffic in Stingray.

 

In this example, we intercept DNS requests and respond directly for known A records.

 

The request rule

 

import libDNS.rts as dns;

# Map domain names to lists of IP addresses they should resolve to
$ipAddresses = [
   "dev1.ha.company.internal." => [ "10.1.1.1", "10.2.1.1" ],
   "dev2.ha.company.internal." => [ "10.1.1.2", "10.2.1.2" ]
];

$packet = dns.convertRawDataToObject( request.get(), "udp" );

# Ignore unparsable packets and query responses to avoid
# attacks like the one described in CVE-2004-0789.
if( hash.count( $packet ) == 0 || $packet["qr"] == "1" ) {
   break;
}

$host = $packet["question"]["host"];

if( hash.contains( $ipAddresses, $host ) ) {
   foreach( $ip in $ipAddresses[$host] ) {
      $packet = dns.addResponse( $packet, "answer", $host, $ip, "A", "IN", "60", [] );
   }
   $packet["aa"] = "1";       # Make the answer authoritative
} else {
   $packet["rcode"] = "0011"; # Signal an NXDOMAIN error
}

$packet["qr"] = "1"; # Mark the packet as a response
$packet["ra"] = "1"; # Pretend that we support recursion

request.sendResponse( dns.convertObjectToRawData( $packet, "udp" ) );
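With the rule attached to a port-53 virtual server, you can check its behaviour with dig, just as in the installation steps of the library article. The virtual server IP below is illustrative; a lookup for one of the known names should return the A records from the map, and any other name should return NXDOMAIN.

```shell
$ dig @192.168.35.10 dev1.ha.company.internal
```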

Launching Brocade vTM on the Google Cloud Platform

by tstace ‎12-18-2015 01:49 AM - edited ‎03-16-2016 09:00 AM (2,598 Views)

Brocade vADC solutions are now supported on Google Cloud Platform, with hourly billing options for applications that need to scale on demand to match varying workloads. A range of Brocade Virtual Traffic Manager (Brocade vTM) editions is available, including the Brocade vTM Developer Edition and the Brocade Virtual Web Application Firewall (Brocade vWAF), offered both as a virtual machine image and as a software installation on a Linux virtual machine.

 

This article describes how to quickly create a new Brocade vTM instance through the Google Cloud Launcher. For additional information about the use and configuration of your Brocade vTM instance, see the product documentation available at https://www.brocade.com/vadc-docs.

 

Launching a Brocade vTM Virtual Machine Instance

 

To launch a new instance of the Brocade vTM virtual machine, use the GCE Cloud Launcher Web site. Type the following URL into your Web browser:

 

https://cloud.google.com/launcher

Browse or use the search tool to locate the Brocade package applicable to your requirements, then click the package icon to see the package detail screen.

 

Brocade vTM Package Details

 

To start the process of deploying a new instance, click Launch on Google Cloud Platform.

 

Before you can launch your new instance, you must provide some basic configuration.

 

To deploy a new Brocade vTM instance

 

1.  Choose the project you want to use with this instance, or create a new project.

 

Choose your project

 

2.  Click Continue to progress to the instance configuration page.

 

Brocade vTM Instance Details

3.  Type an identifying name for the instance, then select the desired geographic zone and machine type. Individual zones might have differing computing resources available and specific access restrictions. Contact your support provider for further details.


4.  Ensure the boot disk corresponds to your computing resource requirements. Brocade recommends not changing the default disk size, as this might affect the performance of your Brocade vTM.

 

5.  By default, GCE creates firewall rules to allow HTTP and HTTPS traffic, and to allow access to the Web-based Brocade vTM Admin UI on TCP port 9090. To instead restrict access to these services, untick the corresponding firewall checkboxes.

 

Note: If you disable access to TCP port 9090, you cannot access the Brocade vTM Admin UI to configure the instance.

 

6.  If you want to use IP Forwarding with this instance, click More and set IP forwarding to "On".

IP Forwarding

 

7.  Brocade vTM needs access to the Google Cloud Compute API, as indicated in the API Access section. Keep this option enabled to ensure your instance can function correctly.

 

8.  Click Deploy to launch the Brocade vTM instance.

 

The Google Developer Console confirms that your Brocade vTM instance is being deployed.

 

 

Next Steps

 

After your new instance has been created, you can proceed to configure your Brocade vTM software through its Admin UI.

 

To access the Admin UI for a successfully deployed instance, click Log into the admin panel.  

 

Brocade vTM Deployment Summary

 

When you connect to the Admin UI for the first time, Brocade vTM presents the Initial Configuration wizard. This wizard captures the networking, date/time, and basic system settings needed by your Brocade vTM software to operate normally.

 

For full details of the configuration process, and for instructions on performing various other administrative tasks, see the Brocade Virtual Traffic Manager: Cloud Services Installation and Getting Started Guide.

TrafficScript rule to protect against "Shellshock" bash vulnerability (CVE-2014-6271)

by mikeg_2 on ‎09-25-2014 08:36 AM - edited on ‎02-17-2016 04:53 AM by PaulWallace (2,106 Views)

 

The following TrafficScript rule rejects requests attempting to exploit the recently discovered vulnerability in bash (CVE-2014-6271, processing of trailing strings after function definitions in the values of environment variables):

 

# The most likely attack is via HTTP headers, as they become env variables
foreach( $header in hash.values( http.getHeaders() ) ) {
   if( string.contains( $header, "() {" ) ) {
      $vehicle = " HTTP header ";
      break;
   }
}

# Some apps might use query string parameters as environment variables as well:
$qs = http.getQueryString();
if( string.contains( $qs, "() {" ) ) {
   $vehicle .= ($vehicle ? "and query string " : " query string ");
}

if( http.getMethod() == "POST" ) {
   foreach( $value in hash.values( http.getFormParams() ) ) {
      if( lang.isArray( $value ) ) {
         $value = array.join( $value, ":" );
      }
      if( string.contains( $value, "() {" ) ) {
         $vehicle .= ($vehicle ? "and form param " : " form param ");
         break;
      }
   }
}

if( $vehicle ) {
   $badboy = request.getRemoteIP();
   $country = geo.getCountry( $badboy );
   log.warn( "Attempted CVE-2014-6271 attack via"
             . $vehicle . "from " . $badboy . " in " . $country );
   connection.discard();
}
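The checks above all key on the literal "() {" prefix that a Shellshock payload must carry to be interpreted as a bash function definition. As a quick sanity check of that signature, here is the same substring test applied in shell to the canonical probe string:

```shell
# The canonical Shellshock probe, as it would appear in an HTTP header value
probe='() { :; }; /bin/echo vulnerable'

# The TrafficScript rule flags any value containing the "() {"
# function-definition prefix; the equivalent substring check in shell:
case "$probe" in
  *"() {"*) echo "blocked" ;;
  *)        echo "allowed" ;;
esac
# prints "blocked"
```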


The rule above can be used to protect a web application that executes a vulnerable version of the bash command interpreter, such as CGI- or FastCGI-based applications.

 

Since the SteelApp Web UI is itself such an application, it is also vulnerable if the software is running in an environment where "/bin/sh" is a vulnerable version of bash (this might be the case if you have installed SteelApp on Linux, but is *NOT* the case if you are running the Riverbed-provided Virtual Appliances).  The rule above can of course be used to secure SteelApp's administration server as well.

 

To do that, change the admin server's port to, for example, 9091, restrict its listening socket to localhost, and create a loopback virtual server on port 9090 that uses the rule above.  This loopback virtual server's default pool must be SSL-encrypting and must contain the single node localhost:9091.

 

SteelApp Web App Firewall already has an updated baseline that detects the attack on bash, so if your web application is secured by SteelApp Web App Firewall you only need to install the baseline update.

 

 

For detailed information on how this vulnerability CVE-2014-6271 affects Riverbed products, please subscribe to this support knowledge base article: https://supportkb.riverbed.com/support/index?page=content&id=S24997

Virtual Traffic Manager and Microsoft Skype for Business Deployment Guide

by aannavarapu on ‎02-03-2016 06:35 PM (840 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager in Skype for Business architectures.

Virtual Traffic Manager and VMware Horizon View Servers Deployment Guide

by vreddy on ‎03-27-2013 02:20 PM - edited on ‎01-29-2016 01:18 PM by aannavarapu (2,474 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for VMware Horizon View Servers.

 

This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.

Preparing a USB Flash Drive for the Brocade vTM Appliance Image

by tstace on ‎12-16-2015 06:41 AM (817 Views)

Introduction

 

This article discusses how to prepare a bootable USB flash drive for use with the Brocade vTM appliance image.

 

To read more about the process of setting up the appliance image on your prepared USB flash drive, see the Brocade Virtual Traffic Manager: Appliance Image Installation and Getting Started Guide.

 

Erasing a USB flash drive

 

Brocade recommends first using the usb-creator-gtk tool to perform a full erase/format of the USB drive you want to use. This tool is available on most standard Linux-based workstations and includes a graphical user interface.

 

usb-creator-gtk

 
To erase a USB flash drive with usb-creator-gtk

 

  1. Insert a USB drive into your workstation.
  2. Select your USB drive in the "Disk to use" list.
  3. Click Erase Disk.

Alternatively, use any tool or command-line program that is able to fully erase your USB drive.

 

After you have completed this process, follow the instructions to set up the Brocade vTM appliance image in the Brocade Virtual Traffic Manager: Appliance Image Installation and Getting Started Guide.

 

Note: To deploy the Brocade vTM appliance image on a USB flash drive, the selected drive must be bootable. If the drive is not detected when you attempt to boot from it after following the procedure above, check with the drive vendor to verify its suitability.

Feature Brief: Application Acceleration with Brocade Virtual Traffic Manager

by on ‎02-26-2013 06:26 AM - edited on ‎12-16-2015 05:55 AM by PaulWallace (2,200 Views)

The Brocade Virtual Traffic Manager employs a range of protocol optimization and specialized offload functions to improve the performance and capacity of a wide range of networked applications.

 

  • TCP Offload applies to most protocol types and is used to offload slow client-side connections and present them to the server as if they were fast local transactions.  This reduces the duration of a connection, reducing server concurrency and allowing the server to recycle limited resources more quickly
  • HTTP Optimizations apply to HTTP and HTTPS protocols.  Efficient use of HTTP keepalives (including carefully limiting concurrency to avoid overloading servers with thread-per-connection or process-per-connection models) and upgrading client connections to the most appropriate HTTP protocol level will reduce resource usage and connection churn on the servers
  • Performance-sensitive Load Balancing selects the optimal server node for each transaction based on current and historic performance, and will also consider load balancing hints such as LARD to prefer the node with the hottest cache for each resource
  • Processing Offload: Highly efficient implementations of SSL, compression and XML processing offload these tasks from server applications, allowing them to focus on their core application code
  • Content Caching will cache static and dynamic content (discriminated by use of a 'cache key') and eliminate unnecessary requests for duplicate information from your server infrastructure
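For instance, caching of dynamic content is typically enabled by deriving a cache key from a request attribute. A minimal sketch, assuming per-session responses keyed on a session cookie named JSESSIONID (both the cookie name and the approach are illustrative):

```trafficscript
# Cache responses separately for each session rather than globally
$session = http.getCookie( "JSESSIONID" );
if( $session ) {
   http.cache.setkey( $session );
}
```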

 

Further specialised functions such as Web Content Optimization, Rate Shaping (Dynamic rate shaping slow applications) and Prioritization (Detecting and Managing Abusive Referers) give you control over how content is delivered so that you can optimize the end user experience.

 

The importance of HTTP optimization

 

There's one important class of applications where ADCs make a very significant performance difference using TCP offload, request/response buffering and HTTP keepalive optimization.

 

A number of application frameworks have fixed concurrency limits. Apache is the most notable (the worker MPM has a default limit of 256 concurrent processes); mongrel (Ruby) and others have a fixed number of worker processes; some Java app servers have an equivalent limit. These fixed concurrency limits are pragmatic: each TCP connection takes a concurrency slot, which corresponds to a heavyweight process or thread, and too many concurrent processes or threads will bring the server to its knees. This can easily be exploited remotely if the limit is not low enough.

 

The implication of this limit is that the server cannot service more than a certain number of TCP connections concurrently. Additional connections are queued in the operating system's listen queue until a concurrency slot is released. In most cases, an idle client keepalive connection can occupy a concurrency slot (hence the common performance-tuning advice for Apache that keepalives should be disabled or limited).

 

When you benchmark a concurrency-limited server over a fast local network, connections are established, serviced and closed rapidly. Concurrency slots are only occupied for a short period of time, connections are not queued for long, so the performance achieved is high.

 

However, when you place the same server in a production environment, the duration of connections is much greater (slow, lossy TCP; client keepalives) so concurrency slots are held for much longer. It's not uncommon to see an application server running in production at <10% utilization, but struggling to achieve 10% of the performance that was measured in the lab.

 

The solution is to put a scalable proxy in front of the concurrency-limited server to offload the TCP connection, buffer the request data, use connections to the server efficiently, offload the response, free up a concurrency slot and offload the lingering keepalive connection.

 

Customer Stories

 

 

"Since Traffic Manager was deployed, there has been a major improvement to the performance and response times of the site."

David Turner, Systems Architect, PLAY.COM

 

"With Traffic Manager we effortlessly achieved between 10-40 times improvement in performance over the application working alone."

Steve Broadhead, BroadBand Testing

 

"Traffic Manager has allowed us to dramatically improve the performance and reliability of the TakingITGlobal website, effectively managing three times as much traffic without any extra burden."

 Michael Furdyk, Director of Technology, TakingITGlobal

 

"The performance improvements were immediately obvious for both our users and to our monitoring systems – on some systems, we can measure a 400% improvement in performance."

Philip Jensen, IT Section Manager, Sonofon

 

"700% improvement in application response times… The real challenge was to maximize existing resources, rather than having to continually add new servers."

Kent Wright, Systems Administrator, QuantumMail

 

 

Read more

 

Tuning Brocade Virtual Traffic Manager

by on ‎02-21-2013 11:15 AM - edited on ‎12-16-2015 05:50 AM by PaulWallace (5,800 Views)

This technical brief describes recommended techniques for installing, configuring and tuning Stingray Traffic Manager.  You should also refer to the Stingray Product Documentation for detailed instructions on the installation process of Stingray software.

 

Getting started

 

 

Tuning Stingray Traffic Manager

 

 

Tuning the operating system kernel

 

The following instructions only apply to Stingray software running on a customer-supplied Linux or Solaris kernel:

 

Debugging procedures for Performance Problems

 

 

Load Testing

 

 

Conclusion

 

The Stingray software and the operating system kernels both seek to optimize the use of the resources available to them, and there is generally little additional tuning necessary except when running in heavily-loaded or performance-critical environments.

 

When tuning is required, the majority of tunings relate to the kernel and TCP stack and are common to all networked applications.  Experience and knowledge you have of tuning webservers and other applications on Linux or Solaris can be applied directly to Stingray tuning, and skills that you gain working with Stingray can be transferred to other situations.

 

Good background references include:

 

 

The importance of good application design

 

TCP and kernel performance tuning will only help to a small degree if the application running over HTTP is poorly designed.  Heavy-weight web pages with large quantities of referenced content and scripts will tend to deliver a poorer user experience and will limit the capacity of the network to support large numbers of users.

 

Initiatives such as Google’s PageSpeed and Yahoo’s YSlow seek to promote good practice in web page design in order to optimize performance and capacity.

 

Stingray Aptimizer Web Content Optimization capability applies best-practice rules for content optimization dynamically, as the content is delivered by the Stingray ADC.  It applies browser-aware techniques to reduce bandwidth and TCP round-trips (image, CSS, JavaScript and HTML minification, image resampling, CSS merging, image spriting) and it automatically applies URL versioning and far-future expires to ensure that clients cache all content and never needlessly request an update for a resource which has not changed.

 

Stingray Aptimizer is a general purpose solution that complements TCP tuning to give better performance and a better service level.  If you’re serious about optimizing web performance, you should apply a range of techniques from layer 2-4 (network) up to layer 7 and beyond to deliver the best possible end-user experience while maximizing the capacity of your infrastructure.

Introducing Zeusbench

by on ‎02-21-2013 03:29 AM - edited on ‎12-16-2015 05:50 AM by PaulWallace (3,351 Views)

The Brocade Virtual Traffic Manager includes a useful benchmarking tool named 'zeusbench' that we use for our internal performance testing and as a load generation tool on the training courses that we run. The very first incarnation of ZeusBench was donated to the Apache project a long time ago, and is now well known as ApacheBench. The new incarnation starts from a completely fresh codebase; it has a new 'rate' mode of operation, is less resource-intensive and is often more accurate in its reporting.

 

You’ll find ZeusBench in the admin/bin directory of your Traffic Manager installation ($ZEUSHOME). Run zeusbench with the --help option to display the comprehensive help documentation:

 

$ $ZEUSHOME/admin/bin/zeusbench --help

 

Using zeusbench

 

zeusbench generates HTTP load by sending requests to the desired server. It can run in two different modes:

 

Concurrency

 

When run with the -c N option, zeusbench simulates N concurrent HTTP clients. Each client makes a request, reads the response and then repeats, as quickly as possible.

 

Concurrency mode is very stable and will quickly push a web server to its maximum capacity if sufficient concurrent connections are used. It's very difficult to overpower a web server unless you select far too many concurrent connections, so it's a good way to get stable, repeatable transactions-per-second results. This makes it suitable for experimenting with different performance tunings, or looking at the effect of adding TrafficScript rules.

 

Rate

 

When run with the -r N option, zeusbench sends new HTTP requests at the specified rate (requests per second) and reads responses when they are returned.

 

Rate mode is suitable for testing whether a web server can cope with a desired transaction rate, but it is very easy to overwhelm a server with requests. It's great for testing how a service copes with a flash crowd: try running one zeusbench instance for background traffic, then fire off a second instance to simulate a short flash-crowd effect. Rate-based tests are highly variable; it's difficult to get repeatable results when the server is overloaded, and it's difficult to determine the maximum capacity of the server (use a concurrency test for that).

 

Comparing concurrency and rate

 

The charts below illustrate two zeusbench tests against the same service; one where the concurrency is varied, and one where the rate is varied:

 

Measuring transactions-per-second (left-hand axis, blue) and response times (right-hand axis, red) in concurrency- and rate-based tests

 

The concurrency-based tests apply load in a stable manner, so are effective at measuring the maximum achievable transactions-per-second. However, they can create a backlog of requests at high concurrencies, so the response time will grow accordingly.

 

The rate-based tests are less prone to creating a backlog of requests so long as the request rate is lower than the maximum transactions-per-second. For lower request rates, they give a good estimate of the best achievable response time, but they quickly overload the service when the request rate nears or exceeds the maximum sustainable transaction rate.
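The relationship between the two modes can be made precise with Little's Law: the number of requests in flight equals the request rate multiplied by the response time. The following sketch is illustrative arithmetic only; the helper names are ours, not zeusbench's:

```python
# Little's Law (L = lambda * W) links the two zeusbench modes:
# in-flight requests (concurrency) = request rate * response time.
# Helper names are our own, for illustration only.

def rate_from_concurrency(concurrency, response_time_s):
    """Throughput (req/s) a concurrency-mode test sustains."""
    return concurrency / response_time_s

def concurrency_from_rate(rate_rps, response_time_s):
    """Connections a rate-mode test keeps in flight."""
    return rate_rps * response_time_s

# 100 concurrent clients with 50 ms responses sustain 2000 req/s:
print(rate_from_concurrency(100, 0.050))   # 2000.0
# A 500 req/s rate test at 50 ms keeps only 25 requests in flight:
print(concurrency_from_rate(500, 0.050))   # 25.0
```

This also shows why rate tests collapse near capacity: once the response time grows, the implied concurrency grows with it, and the backlog snowballs.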

 

Controlling the tests

 

  1. The -n N option instructs zeusbench to run until it has sent N requests, then stop and report the results.
  2. The -t N option instructs zeusbench to run for N seconds, then stop and report the results.
  3. The -f option instructs zeusbench to run forever (or until you hit Ctrl-C, at which point zeusbench stops and reports the results).
  4. The -v option instructs zeusbench to report each second on the progress of the tests, including the number of connections started and completed, and the number of timeouts or unexpected error responses.

 

Keepalives

 

Keepalives can make a very large difference to the nature of a test. By default, zeusbench opens a TCP connection and uses it for one request and response before closing it. If zeusbench is instructed to use keepalives (with the -k flag), it will reuse TCP connections indefinitely; the -K N option can specify the number of times a connection is reused before it is closed.
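The effect of keepalives on connection churn is easy to estimate. This is illustrative arithmetic, not zeusbench output, and the helper name is our own:

```python
# Rough connection-churn estimate: TCP handshakes per second at a given
# request rate. requests_per_conn = 1 models the default (one request per
# connection); larger values model keepalive reuse.

def connection_rate(request_rate_rps, requests_per_conn=1):
    """New TCP connections opened per second."""
    return request_rate_rps / requests_per_conn

print(connection_rate(3000))      # 3000.0 handshakes/s without keepalives
print(connection_rate(3000, 10))  # 300.0 when each connection serves 10 requests
```

A tenfold drop in handshake rate can move the bottleneck entirely, which is why a keepalive and a non-keepalive run of the same test are not comparable.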

 

Other zeusbench options

 

We’ve just scratched the surface of the options that zeusbench offers. It gives you fine control over timeouts, the HTTP requests it issues, ramping concurrency or rate values over time, SSL parameters, and more.

 

Run

 

$ $ZEUSHOME/admin/bin/zeusbench --help

 

for the detailed help documentation

 

Running benchmarks

 

When running a benchmark, it is always wise to sanity-check the results you obtain by comparing several different measurements. For example, you can compare the results reported by zeusbench with the connection counts charted in the Traffic Manager Activity Monitor. Some discrepancies are inevitable, due to differences in per-second counting, or differences in data transfers and network traffic counts.
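A crude way to automate that sanity check is to compare two independent measurements and flag discrepancies above a tolerance. This is a hypothetical helper, not part of any Traffic Manager tooling:

```python
# Hypothetical helper: flag measurements from two independent sources
# (e.g. zeusbench output vs. the Activity Monitor) that diverge beyond
# a fractional tolerance.

def within_tolerance(a, b, tol=0.05):
    """True if two positive measurements agree to within tol (fractional)."""
    return abs(a - b) / max(a, b) <= tol

# zeusbench reports 9800 req/s, the Activity Monitor charts ~10000: acceptable.
print(within_tolerance(9800, 10000))   # True
# A 30% gap deserves investigation before trusting either number.
print(within_tolerance(7000, 10000))   # False
```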

 

Benchmarking requires a careful, scientific approach, and you need to understand what load you are generating and what you are measuring before drawing any detailed conclusions. Furthermore, the performance you measure over a low-latency local network may be very different to the performance you can achieve when you put a service online at the edge of a high-latency, lossy WAN.

 

For more information, check out the document: Tuning Brocade Virtual Traffic Manager

 

The article Dynamic Rate Shaping of Slow Applications gives a worked example of the use of zeusbench to determine how to rate-shape traffic to a slow application.

 

Also note that many Traffic Manager licenses impose a maximum bandwidth limit (development licenses are typically limited to 1 Mbit/s); this will obviously impede any benchmarking attempts. You can commence a managed evaluation and obtain a license key that gives unrestricted performance if required.

Load Testing Recommendations for Brocade Virtual Traffic Manager

on ‎02-21-2013 11:06 AM - edited on ‎12-16-2015 05:48 AM by PaulWallace (3,619 Views)

Load testing is a useful way to stress-test a deployment to find weaknesses or instabilities, and a useful way to compare alternative configurations to determine which is more efficient.  It should not be used for sizing calculations unless you can take great care to ensure that the synthetic load generated by the test framework is an accurate representation of real-world traffic.

 

One useful application of load testing is to verify whether a configuration change makes a measurable difference to the performance of the system under test.  You can usually infer that a similar effect will apply to a production system.

 

Introducing zeusbench

 

The zeusbench load testing tool is a load generator that Brocade vADC engineering uses for our own internal performance testing.  zeusbench can be found in $ZEUSHOME/admin/bin.   Use the --help option to display comprehensive help documentation.

 

Typical uses include:

 

  • Test the target using 100 users who each repeatedly request the named URL; each user will use a single dedicated keepalive connection.  Run for 30 seconds and report the result:

 

# zeusbench -t 30 -c 100 -k http://host:port/path

 

  • Test the target, starting with a request rate of 200 requests per second and stepping up by 50 requests per second every 30 seconds, to a maximum of 10 steps.  Run forever (until Ctrl-C), using keepalive connections; reuse each keepalive connection 3 times, then discard it.  Print verbose (per-second) progress reports:

 

# zeusbench -f -r 200,50,10,30 -k -K 3 -v http://host:port/path

 

For more information, please refer to Introducing Zeusbench

 

Load testing checklist


If you conduct a load-testing exercise, bear the following points in mind:

 

Understand your tests

 

Ensure that you plan and understand your test fully, and use two or more independent methods to verify that it is behaving the way that you intend.  Common problems to watch out for include:

 

  • Servers returning error messages rather than correct content; the test will only measure how quickly the server can generate errors;
  • Incorrect keepalive behavior; verify that connections are kept alive and reused as you intended;
  • Connection rate limits and concurrency control, which will limit the rate at which the traffic manager forwards requests to the servers;
  • SSL handshakes; most simple load tests perform a full SSL handshake for each request, and reusing SSL session data will significantly alter the result.

 

Verify that you have disabled or de-configured any features that could skew the test results.  Reduce the configuration to the simplest possible so that you can focus on the specific configuration options you intend to test.  Candidates to simplify include:

 

  • Access and debug logging;
  • IP Transparency (and any other configuration that requires iptables and conntrack);
  • Optimization techniques like compression or other web content optimization;
  • Security policies such as service protection policies or application firewall rules;
  • Unnecessary request and response rules;
  • Advanced load balancing methods (for simplicity, use round robin or least connections).

 

It’s not strictly necessary to create a production-identical environment if the goal of your test is simply to compare various configuration alternatives – for example, which rule is quicker.  A simple environment, even if suboptimal, will give you more reliable test results.

 

Run a baseline test and find the bottleneck

 

Perform end-to-end tests directly from client to server to determine the maximum capacity of the system and where the bottleneck resides.  The bottleneck is commonly either CPU utilization on the server or client, or the capacity of the network between the two.

 

Re-run the tests through the traffic manager, with a basic configuration, to determine where the bottleneck is now.  This will help you to interpret the results and focus your tuning efforts.  Measure your performance data using at least two independent methods – benchmark tool output, activity monitor, server logs, etc – to verify that your chosen measurement method is accurate and consistent.  Investigate any discrepancies and ensure that you understand their cause, and disable the additional instrumentation before you run the final tests.

 

Important: tests that do not overload the system can be heavily skewed by latency effects.  For example, a test that repeats the same fast request over a small number of concurrent connections will not overload the client, server or traffic manager, but introducing an additional hop (adding the traffic manager, for example) may double the latency and halve the performance result.  In production you will rarely see such an effect, because the additional latency of the traffic manager hop is negligible compared with the latency of clients on a slow network.
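The arithmetic behind this latency effect is simple: with requests serialised on each connection, an un-overloaded test measures roughly concurrency divided by round-trip time. A small sketch with made-up latencies and our own helper name:

```python
# Approximate throughput of a small, non-saturating test:
# transactions/sec = concurrent connections / round-trip time.

def tps(concurrency, rtt_s):
    """Transactions/sec when each connection serialises its requests."""
    return concurrency / rtt_s

direct  = tps(10, 0.001)   # 10 connections, 1 ms direct to the server
via_hop = tps(10, 0.002)   # same test with an extra 1 ms hop in the path
print(direct, via_hop)     # 10000.0 5000.0 -- the hop halves the result
```

Note that neither figure says anything about the capacity of the server or the hop; both tests are latency-bound, not load-bound.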

 

Understand the difference between concurrency and rate tests

 

zeusbench and other load testing tools can often operate in two different modes – concurrent connections tests (-c) and connection rate tests (-r).

 

The charts below illustrate two zeusbench tests against the same service; one where the concurrency is varied, and one where the rate is varied:

 

candr.jpg

Measuring transactions-per-second (left hand axis, blue) and response times (right hand axis, red) in concurrency and rate-based tests

 

The concurrency-based tests apply load in a stable manner, so are effective at measuring the maximum achievable transactions-per-second. However, they can create a backlog of requests at high concurrencies, so the response time will grow accordingly.

 

The rate-based tests are less prone to creating a backlog of requests so long as the request rate is lower than the maximum transactions-per-second. For lower request rates, they give a good estimate of the best achievable response time, but they quickly overload the service when the request rate nears or exceeds the maximum sustainable transaction rate.

 

Concurrency-based tests are often quicker to conduct (no binary-chop to find the optimal request rate) and give more stable results.  For example, if you want to determine if a configuration change affects the capacity of the system (by altering the CPU demands of the traffic manager or kernel), it’s generally sufficient to find a concurrency value that gives a good, near-maximum result and repeat the tests with the two configurations.

 

Always check dmesg and other OS logs

 

Resource starvation (file descriptors, sockets, internal tables) can affect load-testing results and may not be immediately obvious.  Make a habit of checking the system log and dmesg regularly.

 

Remember to tune and monitor your clients and servers as well as the Traffic Manager; many of the kernel tunables described above are also relevant to the clients and servers.

Getting Started with Brocade vADC

by PaulWallace ‎08-11-2015 06:49 AM - edited ‎12-16-2015 03:31 AM (1,900 Views)


 

vTM.png

Welcome to Brocade Application Delivery solutions! This article gives a quick five-step guide to getting started with Brocade Virtual Traffic Manager: how to download the Developer Edition of the software, install and configure your first services, and then request a full-performance 30-day license key to experience the full power of Brocade Application Delivery solutions.

 

1. Download Brocade vADC

The quickest way to get started with Brocade vADC is to download the Developer Edition of Brocade Virtual Traffic Manager. This will give you access to all the features and full programmability of Brocade vADC software, including add-on options for web application firewall (WAF) and web content optimization (WCO). The software needs no license key; it is limited to 1 Mbps/100 SSL tps throughput, but can be used to explore the software and develop applications and interfaces.
 

2. Install Brocade vADC

Grab either the software (Linux or Solaris) or Virtual Appliance (VMware, Xen or OracleVM) installation and follow the instructions in the appropriate Getting Started guide. We have created dedicated installation and configuration guides for each type of environment, as part of the documentation set for Brocade Virtual Traffic Manager: choose the right environment, install the software, and you are ready to go.
 
(You can find the complete set of user documentation here)
 

3. Discover Brocade vADC

These Brocade community pages include a wide range of resources to help you explore Brocade vADC solutions. From setting up simple services, through to remote management with REST and SOAP, and high-performance data-plane control using TrafficScript, Brocade gives you a comprehensive set of tools to differentiate and prioritise applications and services. See below for some good starting points!
 

4. Scale up Brocade vADC

Ready to scale up your performance? When you are ready to move to the next level, you can request a 30-day evaluation license to test the full power of Brocade Virtual Traffic Manager. You will be sent a license key by email, which you can upload into the Developer Edition - then you can run complete performance and load testing in your own data center or cloud platforms. This evaluation license key will include Brocade Virtual Traffic Manager, again with add-on options for web application firewall (WAF) and web content optimization (WCO).
 
 

5. More to explore on the Brocade Community

 
 

VMware View AlwaysOn Desktop Reference Architecture - TrafficScript Rule

by aannavarapu ‎08-18-2014 04:17 PM - edited ‎11-23-2015 04:18 PM (824 Views)

The following is a TrafficScript rule for load-balancing VMware View servers in an AlwaysOn architecture serving all VMware clients.

 

This rule is used as both a request and a response rule. More details on the usage and configuration of the Brocade Virtual Traffic Manager can be found in the Deployment Guide (link below).

 

This article is used as a reference in the Brocade Virtual Traffic Manager and VMware Horizon View Servers Deployment Guide

 

 

#// TS Rule for LB VMWare View

  

#// Set the Debug Level (possible values: 0,1,2,3)

#// 0 = Logging Off

#// 1 = Informational logging

#// 2 = Full Debug

#// 3 = Full Debug INCLUDING PASSWORDS - USE WITH EXTREME CARE

$debug = 2;

  

#// Rule to Direct Traffic Based on AD Group Membership 

#// Please declare the names of the pools you have configured, and ensure 

#// that the trafficscript!variable_pool_use Global setting is set to 'yes' 

 

#// What is the name of your VTM AD Authenticator

#// This can be setup under "Catalogs > Authenticators" in the Traffic Manager GUI.

$authenticator = "AD_AUTHENTICATOR"; 

 

#// What are the names of your pools to differentiate between Site A and Site B?

#// These are setup under "Services > Pools" in the Traffic Manager GUI.

$view_siteA_pool = "SITE-A-POOL";

$view_siteB_pool = "SITE-B-POOL";

 

#// Following are some View Specific variables that we need.

#// 'server' is the name of your view Connection Server

#// 'dns' is the DNS search suffix (single suffix only) used in your setup

#// 'domain' is the NT4 style name of your active directory

#// 'guid' is the GUID of the view connection server, and can be found by checking

#//     'HKLM\SOFTWARE\VMware, Inc.\VMware VDM\Node Manager\ConnectionServer Cluster GUID' with regedit.exe

#//    on your connection server.  

$view_info = 

    [ "guid" => "648a7ef8-dcba-abcd-efgh-58973ad94085", 

      "server" => "rvbdvTMs1", 

      "dns" => "brocade.local", 

      "domain" => "brocade" ]; 

 

 

#// Here we need to know about the names of the AD Groups to map to each Site

$siteA_AD_Groupname = "brcd_DC0";

$siteB_AD_Groupname = "brcd_DC1";

 

#// Here we need to know about the IP addresses of your SITEB Traffic Manager.  

#// Site B VTM IP addresses (up to two supported. If your setup requires more, please

#// contact your Brocade sales representative to arrange for Brocade Professional Services to provide a quote for customising this deployment guide.

 

$site_B_VTM1_IP = "10.10.10.6"; 

$site_B_VTM2_IP = "";   

  

#// Is this a request or response rule?

$rulestate = rule.getstate();

 

#// What is the Client's IP address - useful for logging:

$client_ip = request.getRemoteIP(); 

 

if ($rulestate == "REQUEST" ){

if( $debug > 0 ) {log.info("#*#* Start Request Section *#*#");}

#//We need to extract the tunnelID for persistence purposes then break out of the rule to let the tunnel through

$path = http.getPath();

if (string.startswith($path, "/ice/tunnel")){

    $tunnelID = http.getQueryString();

    if ($tunnelID){

      if ($debug > 0){log.info("Request from client: " . $client_ip . " Using tunnelID:". $tunnelID . " as persistence value");}

      connection.setPersistenceKey( $tunnelID );

      $choose_pool = data.get( $tunnelID );

      if( $debug > 0 ) { log.info( "Matching this to pool: ".$choose_pool ); }

      if( $choose_pool ) {

        pool.select( $choose_pool );

      }

    } else {

      log.error("Request from client: " . $client_ip . " Found no tunnelID:". $tunnelID . " in request for /ice/tunnel: This shouldn't happen, contact support.");

    }

    if ($debug > 1){log.info("Request from client: " . $client_ip . " Request is for /ice/tunnel - exiting rule to let it through");}

    #//Bypass Script if the path is for "/ice/tunnel" to let the desktop session start

    break;

}

 

 

$JSESSIONID = http.getCookie("JSESSIONID");

if($JSESSIONID){

  if ($debug > 0) {log.info("Request from client: " . $client_ip . " JSESSIONID=" .$JSESSIONID." found in the request...");}

  connection.setPersistenceKey($JSESSIONID);

  $choose_pool = data.get( $JSESSIONID );

  if( $debug > 0 ) { log.info( "Matching this to pool: ".$choose_pool ); }

  if( $choose_pool ) {

    pool.select( $choose_pool );

  }

} else {

  if ($debug > 0) {log.info("Request from client: " . $client_ip . " No JSESSIONID found in the request...");}

}

 

#//Bypass Script if the path is for / as it could be GLB health monitor 

if( $path == "/") { break;}

 

#// Collect the HTTP request headers and body

$req_headers = http.getRequest();

if( http.getHeader("Expect") != "100-continue" ){

      $req_body = http.getBody();

}

if ($debug > 1){log.info("Request from client: " . $client_ip . " HTTP Request is:\n" . $req_headers);}

if ($debug > 1){log.info("Request from client: " . $client_ip . " HTTP BODY is:\n" . $req_body);}

 

 

 

#// Reset flags to the needed defaults.

$username = ''; 

$password = ''; 

 

#// Inspect the request and see if it is something we are interested in:

#// syntax is xml.xpath.matchNodeSet( doc, nspacemap, query )

$is_xml = false;

if (!string.regexmatch($req_body, '\<\?xml',"i")){

  #//Document is _not_ an XML

                  if ($debug > 1){log.info("Request from client: " . $client_ip . " Request is NOT an XML document - exiting");}

                  $is_xml = false;

                  break;

} else {

  #//Document is an XML Doc

  $is_xml = true;

  if ($debug > 0){log.info("Request from client: " . $client_ip . " Request is an XML document");}

}

 

#// test to see if we have been sent a "<get-configuration>" request

$get_configuration = xml.xpath.matchNodeCount( $req_body, "", "//broker/get-configuration" );

if ($debug > 0){log.info("Request from client: " . $client_ip . " get-config is:" . $get_configuration);}

 

#// test to see if we have been sent a "<get-tunnel-connection>" request

$get_tunnel_connection = xml.xpath.matchNodeCount( $req_body, "", "//broker/get-tunnel-connection" );

if ($debug > 0){log.warn("Request from client: " . $client_ip . " get-tunnel-connection is:" . $get_tunnel_connection);}

 

#// test to see if we have been sent a "<do-submit-authentication>" request

$do_submit_authentication = xml.xpath.matchNodeCount( $req_body, "", "//broker/do-submit-authentication" );

if ($debug > 0){log.info("Request from client: " . $client_ip . " do-submit-authentication is:" . $do_submit_authentication);}

 

#// test to see if we have been sent a "<do-logout>" request

$do_logout = xml.xpath.matchNodeCount( $req_body, "", "//broker/do-logout" );

if ($debug > 0){log.info("Request from client: " . $client_ip . " do-logout is:" . $do_logout);}

 

#// test to see if we have been sent a "<get-tunnel-connection>" request

if ($get_tunnel_connection == 1 && $is_xml == true){

  if( $debug > 0 ){ log.info( "Request from client: " . $client_ip . " <get-tunnel-connection> identified from: " . $client_ip ); }

    connection.data.set("connection_state", "get-tunnel-connection");

 }

#// If we have a <get-configuration> query, we will send the first response and exit

if ($get_configuration == 1 && $is_xml == true){

  if( $debug > 0 ){ log.info( "Request from client: " . $client_ip . " <get-configuration> response - Sending first_response to client: " . $client_ip ); }

  sendFirstResponse($view_info, $debug);

  break;

}

#// If we have been sent authentication credentials, we will go to work

if ($do_submit_authentication == 1 && $is_xml == true){

  if( $debug > 0 ){ log.info( "Request from client: " . $client_ip . " <do-submit-authentication> identified from: " . $client_ip ); }

 

  $xml_user_credentials_data = xml.xpath.matchNodeSet($req_body,"","//broker/do-submit-authentication/screen/params/param/values/value/text()");

  $xml_user_credentials_fieldnames = xml.xpath.matchNodeSet($req_body,"","//broker/do-submit-authentication/screen/params/param/name/text()");

 

  #// we check that $xml_user_credentials_fieldnames contains "username, domain, password":

  if(string.regexmatch( $xml_user_credentials_fieldnames, "username, domain, password")){   

    if ($debug > 0){log.info("Request from client: " . $client_ip . " <do-submit-authentication>: extracted username, domain, password fields in submitted request.");}

    if ($debug > 1){log.info("Request from client: " . $client_ip . " <do-submit-authentication> extracted XML Fields: " .$xml_user_credentials_fieldnames);}

    if ($debug > 2){log.info("Request from client: " . $client_ip . " <do-submit-authentication> extracted XML Values: " .$xml_user_credentials_data);}

 

    #//lets extract the username and password:

    $credentials = string.split($xml_user_credentials_data,",");

    $username = $credentials[0];

    $password = $credentials[2];

    #// Currently we don't need the domain name, so we won't extract it, but it is here for future use if needed

    #//$cred_domain = $credentials[1];

    

    $auth = auth.query( $authenticator, $username, $password ); 

    #// We should check to ensure auth.query returned successfully:

    #/// If $auth returns 'Error' then it means something is wrong with the authenticator and the admin needs to investigate

    if( $auth['Error'] ) { 

        log.error( "Request from client: " . $client_ip ." Error with authenticator " . $authenticator . ": " . $auth['Error'] );

    }

   

    #// Lets extract the list of groups the user is a 'memberOf'

    $groups = $auth['memberOf']; 

    #// If there is only one group, "$auth['memberOf']" will return a string,

    #// not an array, so we need to force the $groups value to be an array

    $group_isArray = lang.isarray($groups);

    if ($group_isArray != true){

      if ($debug > 1){log.info("Connection From: " . $client_ip .": $auth['memberOf'] returned a single group, forcing $group to be an array");}

      $groups = lang.toArray($groups);

    }

 

    if ($debug > 1){log.info("Request from client: " . $client_ip ." Full Auth Info: " . lang.dump($auth));}

    if ($debug > 1){log.info("Request from client: " . $client_ip ." Group Info: " . lang.dump($groups));}

    

 

    #// Map Site B users to the Site B pool of servers

    foreach ( $group in $groups){

      if( $debug > 0 ) {log.info("$group is" . lang.dump($group));}

      

       if( string.contains( $group, $siteB_AD_Groupname ) ){ 

          if( $debug > 0) { log.info( "Request from client: " . $client_ip ." User: ".$username." member of SiteB Users group" );} 

          pool.select( $view_siteB_pool ); 

          break; 

       } else{

        if( $debug > 0) { log.info( "Request from client: " . $client_ip ." User: ".$username." is NOT a member of SiteB Users group" );} 

 

       }

    }

    #// Map Site A users to the Site A pool of servers

    foreach ( $group in $groups){

       if( string.contains( $group, $siteA_AD_Groupname ) ) { 

          if( $debug > 0 ) { log.info( "Request from client: " . $client_ip ." User: ".$username." member of Default SiteA Users group" ) ;} 

          pool.select( $view_siteA_pool );

          break; 

       } else{

        if( $debug > 0) { log.info( "Request from client: " . $client_ip ." User: ".$username." is NOT a member of SiteA Users group" );} 

 

       }

    }

  }

} #//end do-submit-authentication

 

 if( $debug > 0 ) {log.info("#*#* End Request Section *#*#");}

} #// End of REQUEST Section

 

sub sendFirstResponse( $info, $debug )  { 

 #//$first_response = '<?xml version="1.0"?><broker version="7.0"><configuration><result>ok</result><broker-guid>'.$info["guid"].'</broker-guid><broker-service-principal><type>kerberos</type><name>'.$info["server"].'$@'.$info["dns"].'</name></broker-service-principal><authentication><screen><name>windows-password</name><params><param><name>domain</name><values><value>'.$info["domain"].'</value></values></param></params></screen></authentication></configuration></broker>';

$first_response = '<?xml version="1.0"?><broker version="7.0"><set-locale><result>ok</result></set-locale><configuration><result>ok</result><broker-guid>'.$info["guid"].'</broker-guid><broker-service-principal><type>kerberos</type><name>'.$info["server"].'$@'.$info["dns"].'</name></broker-service-principal><authentication><screen><name>windows-password</name><params><param><name>domain</name><values><value>'.$info["domain"].'</value></values></param></params></screen></authentication></configuration></broker>';

  if( $debug > 1 ){ log.info( "first_response data:\n" .$first_response); } 

  http.sendResponse( "200 OK", "text/xml;charset=UTF-8", $first_response, "XFF: VTM_SiteA" ); 

}


if ($rulestate == "RESPONSE" ){

if( $debug > 0 ) {log.info("#*#* Start Response Section *#*#");}

$debug_node = connection.getNode();

$debug_pool = connection.getPool();

$debug_http_response_code=http.getResponseCode();

 

$resp_headers = http.getResponseHeaders();

if ($debug > 1){log.info("Response to: " . $client_ip . " HTTP Response Headers are:" . lang.dump($resp_headers));}

 

#//$resp_body = http.getResponseBody( 96 );  #// What is the nature of the response?

$content_type = http.getResponseHeader( "Content-Type" );

if ($debug > 1) { log.info("Content type of response is: " . $content_type ); }

if( string.startsWith( $content_type, "text/xml" ) ) {

  #// TODO: What if the response doesn't contain a </broker> tag?

  $resp_body = http.stream.readResponse( 4096, "</broker>" ); #// Limit was arbitrarily chosen

} else {

    if( $debug > 0 ) {log.warn( "This response was not XML - not extracting content for logging." );}

}

 

#// ASSUMPTION: Any XML response we care about (i.e. the one with the session ID in it) is less than 4096 bytes.

#//             If it's longer, then we'll just stream it to the client.

if( string.length( $resp_body ) < 4096 ) {

 

 if ($debug > 1) { log.info( "Grabbed response body:\n" . $resp_body ); }

 

 if( $debug > 0 ) {log.info("RESPONSE: connection was sent to node:" . $debug_node);}

 if( $debug > 0 ) {log.info("RESPONSE: connection was sent to pool:" . $debug_pool);}

 if( $debug > 0 ) {log.info("RESPONSE: Server Responded with code:" . $debug_http_response_code);}

   $response_cookies = http.getResponseCookies();

   

  if($response_cookies){

    if($response_cookies["JSESSIONID"]){

      $JSESSIONID = $response_cookies["JSESSIONID"];

      #// need to bypass the <set-locale> message

       if (string.regexmatch($resp_body, '\<\?xml',"i")){

          if( $debug > 0 ) {log.info("#### PARSING XML RESPONSE ###");}

          $set_locale = xml.xpath.matchNodeCount($resp_body,"","//broker/set-locale/result");

          if( $debug > 0 ) {log.info("#### SETLOCALE IS: ". $set_locale . " ###");}

          if (!$set_locale){

             connection.setPersistenceKey( $JSESSIONID );

             data.set( $JSESSIONID, $debug_pool );

             if ($debug > 0) {log.info("Response Rule - Request from client: " . $client_ip . " Response set JSESSIONID=" . $JSESSIONID .". Extracting and using for persistence key");}

          } else {

             if ($debug > 0) {log.info("Response Rule - Request from client: " . $client_ip . " Response set JSESSIONID=" . $JSESSIONID .". _NOT_ extracting and using for persistence key");}

          }

       } else {

          if( $debug > 0 ) {log.warn( "Response data did not seem to be XML." );}

      }

    }

  } else {

    if ($debug > 0) {log.info("Response Rule - Request from client: " . $client_ip . " No JSESSIONID found in response...");}

  }

  

  if (connection.data.get("connection_state") ==  "get-tunnel-connection"){

    if ($debug > 0){log.info("Response to: " . $client_ip . " HTTP Response Headers are:" . lang.dump($resp_headers));}

    if ($debug > 0){log.info("Response to: " . $client_ip . " HTTP Response BODY is:" . $resp_body);}

        

    #// Inspect the response and see if it is something we are interested in:

    #// syntax is xml.xpath.matchNodeSet( doc, nspacemap, query )

    if (string.regexmatch($resp_body, '\<\?xml',"i")){

      #//Document is an XML Doc

      if ($debug > 0){log.info("Request from client: " . $client_ip . " Response to <get-tunnel-connection> is an XML document");}

      $tunnelID = xml.xpath.matchNodeSet($resp_body,"","//broker/tunnel-connection/connection-id/text()");

      if ($tunnelID){

        if ($debug > 0) {log.info("Response Rule - Request from client: " . $client_ip . " Response set tunnelID=" . $tunnelID .". Extracting and using for persistence key");}

        connection.setPersistenceKey( $tunnelID );

        data.set( $tunnelID, $debug_pool );

      } 

    } else {

       log.warn( "Didn't think response body was XML." );

    } 

  } #// End of connection_state == get-tunnel-connection

 

  if ($debug > 1){log.info("Response to: " . $client_ip . " HTTP Response Headers are:" . lang.dump($resp_headers));}

  if ($debug > 1){log.info("Response to: " . $client_ip . " HTTP Response BODY is:" . $resp_body);}

}

 

if( $resp_body ) {

   http.stream.startResponse( http.getResponseCode(), $content_type, "" );

   http.stream.writeResponse( $resp_body );

   http.stream.continueFromBackend();

}

 

if( $debug > 0 ) {log.info("#*#* End Response Section *#*#");}

} #// End of RESPONSE Section

 

Queuing for Web Applications (libQueue)

by mbodding ‎11-10-2015 06:30 PM - edited ‎11-11-2015 03:17 PM (1,100 Views)

Overview

 

The libQueue TrafficScript library allows you to create and manage HTTP sessions on the Traffic Manager and to limit the number of sessions allowed through to the backend. With this library you can set a maximum number of serviceable sessions, and the vTM will ensure that no more than that number of sessions reach the backend servers.

 

The library allows you to name multiple queues if you have multiple applications running through the same vTM, and it uses counters to keep track of the number of sessions being created, re-used, and destroyed on the system.

 

Whenever the maximum number of users is reached, the vTM will start delivering holding pages, which can be configured to include the user's position in the queue. Ajax or a meta-refresh can be used to update the queue position periodically to keep users apprised of their progress.

 

Please read the comments in the libQueue library for more information on its configuration.

 

Usage

 

Once the library has been imported, call the init() function to initialise the defaults. You can then override any of the configuration parameters as required. The do_req() function determines whether the user can be sent through to the application or should be sent to the waiting room.

 

import libQueue as queue;

$config=[];
$houseKeeping=[];
$counters=[];

queue.init($config, $houseKeeping, $counters);

# =====================================================================
# == Configuration Overrides for this host
$config["poolName"] = "MyApplicationPool";
$config["queueName"] = "MyQueue";
$config["waitingRoomTemplatePage"] = "myApplicationWaitingRoom.html";
$config["waitingRoomTooBusyPage"] = "toobusy.html";
# =====================================================================

$pool = queue.do_req($config,$houseKeeping,$counters);
if ($pool == $config["poolName"] ) {
pool.use("www.demo.local");
} else {
pool.use("WaitingRoom");
}

 

The waitingRoomTemplatePage will be parsed for template variables whenever it is delivered to a user. This allows you to brand the page, and also to include information about the user's queue position in the response. The template variables currently available for the waiting room page are:

 

  • QUEUE_ID (the PID of the vTM child process managing this user)
  • SESSION_ID (the session ID of the current user)
  • QUEUE_POSITION (the user's current position in their queue)

All occurrences of the above text will be replaced with their values when the page is sent to the user.
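The substitution itself is plain token replacement. As an illustration only (the template text and values below are made up; the real substitution is performed by the libQueue library on the vTM), the idea in Python:

```python
# Sketch of the template-variable substitution performed on the waiting
# room page. The template text and values here are illustrative only.
def render_waiting_room(template, queue_id, session_id, queue_position):
    """Replace every occurrence of each template variable with its value."""
    substitutions = {
        "QUEUE_ID": str(queue_id),
        "SESSION_ID": session_id,
        "QUEUE_POSITION": str(queue_position),
    }
    for token, value in substitutions.items():
        template = template.replace(token, value)
    return template

page = render_waiting_room(
    "<p>You are number QUEUE_POSITION in the queue (session SESSION_ID).</p>",
    queue_id=12345, session_id="abc123", queue_position=7,
)
print(page)
```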

 

More Information

 

The vTM software is designed to scale vertically within a machine, and so it creates a single-threaded child process per core. This library tries to scale in the same manner, and so each child process creates and manages its own queue.

 

Each child is responsible for managing its own queue. This includes putting new user sessions into the queue, and running housekeeping tasks on that queue to expire timed-out sessions. When a queue is not empty, the child which owns the queue is also responsible for migrating users out of the queue and into the application.

 

The library needs to be configured with the number of cores available to the vTM, so that each child process can dequeue a proportionate number of users whenever migration occurs.
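As a sketch of the arithmetic only (not the library's actual code), the proportionate dequeue for each child might look like:

```python
def per_child_dequeue(free_slots, cores):
    """How many queued users each child process should migrate when
    free_slots application slots open up across the whole vTM.
    Rounds up so the cluster never under-fills the free slots."""
    return -(-free_slots // cores)  # ceiling division

# e.g. 10 free slots on an 8-core vTM: each child may migrate up to 2 users
print(per_child_dequeue(10, 8))
```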

 

Debugging

There is a debug function in the library, which can be set to any level between 0 and 5. However, it is recommended that debugging be enabled only during testing, and set to 0 for production use.

 

User Counters

The User Counters in the configuration can be used to visualise sessions being created, re-used, timed-out, and destroyed, for both the Application Pool and the Queue (Waiting Room). The default counter assignments are:

 

  1. Session New (a new session has been created)
  2. Session Update (a user's session has been extended)
  3. Session Expire (the housekeeping process has removed an expired session)
  4. Application Sessions (a session is assigned to the application pool)
  5. WaitingRoomSessions (a session is in the queue)
  6. WaitingRoomExpire (a user in the queue has timed out)
  7. WaitingRoomMigrated (a session has migrated from the queue to the application)
  8. SessionComplete (a user hit an exit point and the session was deleted)

 

Timeouts

There are three configuration parameters for timeouts. InitialTimeout is given to all new sessions and is there to prevent automated clients from using up real sessions. Once a session has been re-used, the standard timeout applies: either the sessionTimeout or the WaitingRoomTimeout, depending on where the user was assigned.

 

Application and WaitingRoom sizes

The script needs to be given the maximum number of application sessions allowed in the sessionLimit config value. When this limit is reached, users will be sent to the WaitingRoom (queue). There is also a waitingRoomLimit; when that is exceeded, the user will simply be given the WaitingRoomTooBusy page.
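The admission logic described above boils down to a three-way decision. A minimal Python sketch, using hypothetical counts (the real decision is made by the library's do_req() function):

```python
def admit(active_sessions, queued_sessions, session_limit, waiting_room_limit):
    """Decide where an incoming user goes, mirroring the limits described
    above: the application pool, the waiting room queue, or the too-busy page."""
    if active_sessions < session_limit:
        return "application"
    if queued_sessions < waiting_room_limit:
        return "waiting_room"
    return "too_busy"

print(admit(90, 0, 100, 50))    # under the session limit
print(admit(100, 10, 100, 50))  # application full, queue has space
print(admit(100, 50, 100, 50))  # both full
```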

 

Virtual Traffic Manager and Magento

by fmemon on ‎06-23-2014 12:14 PM - edited on ‎09-30-2015 12:41 PM by aannavarapu (908 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Magento.

 

This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.

Virtual Traffic Manager and SAP NetWeaver Deployment Guide

by pwallace_1 on ‎01-31-2014 08:08 AM - edited on ‎09-30-2015 12:41 PM by aannavarapu (1,327 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for SAP NetWeaver.

 

This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.

Virtual Traffic Manager Plugin for VMware vRealize Orchestrator Deployment Guide

by aannavarapu ‎09-30-2015 12:14 PM - edited ‎09-30-2015 12:40 PM (1,240 Views)

This document provides step-by-step instructions on how to deploy the Brocade Virtual Traffic Manager plugin in VMware's vRealize Orchestrator.

 

This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.

 

The vTM plugin is also attached to this page as a zip file. Please unzip it to get the plugin in the .dar format.

Virtual Traffic Manager and Oracle Glassfish Server Deployment Guide

by riverbed on ‎12-02-2012 01:56 PM - edited on ‎09-30-2015 12:40 PM by aannavarapu (1,335 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Oracle GlassFish Server.

 

This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.

Virtual Traffic Manager and Oracle WebLogic Applications (PeopleSoft and Blackboard) Deployment Guide

by riverbed on ‎12-02-2012 01:54 PM - edited on ‎09-30-2015 12:39 PM by aannavarapu (1,939 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Oracle WebLogic Applications. Sample applications that can be deployed using this document include Oracle's PeopleSoft and Blackboard's Academic Suite.

 

This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.

Virtual Traffic Manager and Oracle Application Server 10G Deployment Guide

by riverbed on ‎12-02-2012 02:00 PM - edited on ‎09-30-2015 12:39 PM by aannavarapu (1,351 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Oracle Application Server 10G.

 

This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.

Virtual Traffic Manager and Oracle Enterprise Manager 12c Deployment Guide

by vreddy on ‎03-26-2013 03:51 PM - edited on ‎09-30-2015 12:38 PM by aannavarapu (1,316 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Oracle Enterprise Manager 12c.

 

This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.

Virtual Traffic Manager and Oracle EBS 12.1 Deployment Guide

by vreddy on ‎03-26-2013 03:48 PM - edited on ‎09-30-2015 12:38 PM by aannavarapu (1,137 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Oracle EBS 12.1.

 

This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.

Virtual Traffic Manager and Microsoft IIS Deployment Guide

by riverbed on ‎12-02-2012 01:59 PM - edited on ‎09-30-2015 12:37 PM by aannavarapu (2,311 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft IIS.

 

This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.

Virtual Traffic Manager and Microsoft Intelligent Application Gateway Deployment Guide

by riverbed on ‎12-02-2012 01:58 PM - edited on ‎09-30-2015 12:37 PM by aannavarapu (1,390 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft Intelligent Application Gateway.

 

This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.

Virtual Traffic Manager and Microsoft Outlook Web Access Deployment Guide

by riverbed on ‎12-02-2012 02:06 PM - edited on ‎09-30-2015 12:36 PM by aannavarapu (1,533 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft Outlook Web Access.

 

This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.

Virtual Traffic Manager with Loggly

by fmemon on ‎02-13-2013 12:00 AM - edited on ‎09-30-2015 12:23 PM by aannavarapu (1,873 Views)

Loggly is a cloud-based log management service.  The idea with Loggly is that you direct all your applications, hardware, software, etc. to send their logs to Loggly.  Once all the logs are in the Loggly cloud you can:

  • Root cause and solve problems by performing powerful and flexible searches across all your devices and applications
  • Set up alerts on log events
  • Measure application performance
  • Create custom graphs and analytics to better understand user behavior and experience

 

Having your Virtual Traffic Manager (vTM) logs alongside your application logs will provide valuable information to help further analyze and debug your applications.  You can export both the vTM event log as well as the request logs for each individual Virtual Server to Loggly.

 

vTM Event Log

The vTM event log contains both error logs and informational messages.  To export the vTM Event Log to Loggly we will first create an Input into Loggly.  In the Loggly web interface navigate to Incoming Data -> Inputs and click on "+ Add Input".  The key field is the Service Type which must be set to Syslog UDP.

loggly1.jpg

After creating the input you'll be given a destination to send the logs to.  The next step is to tell the vTM to send logs to this destination.

loggly3.jpg

In the vTM web interface navigate to System > Alerting and select Syslog under the drop-down menu for All Events. Click Update to save the changes.

loggly 4.jpg

The final step is to click on Syslog and update the sysloghost to the Loggly destination.

loggly5.jpg

Virtual Server Request Logs

Connections to a virtual server can be recorded in request logs. These logs can help track the usage of the virtual server, and can record many different pieces of information about each connection.  To export virtual server request logs to Loggly, navigate to Services > Virtual Servers > (your virtual server) > Request Logging. First set log!enabled to Yes (it is not on by default). Then scroll down, set syslog!enabled to Yes, and set syslog!endpoint to the same destination as for the vTM event log.  Click Update to save the changes.

loggly6.jpg

Alternatively, you can create a separate input in Loggly for request logs if you don't want them mixed in with the event logs.

 

Making sure it works

An easy way to make sure it works is to modify the configuration, by creating and then deleting a virtual server, for example.  This will generate an event in the vTM event log.  In Loggly you should see the light turn green for this input.

loggly7.jpg
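You can also confirm the input is reachable by hand-crafting a syslog message from any machine. A small Python sketch (the destination host and port are placeholders for the values Loggly gives you when you create the input):

```python
import socket
from datetime import datetime

def send_test_syslog(host, port, message):
    """Send a single RFC 3164-style syslog message over UDP.
    <134> = facility local0 (16) * 8 + severity informational (6)."""
    timestamp = datetime.now().strftime("%b %d %H:%M:%S")
    payload = "<134>%s vtm-test stingray: %s" % (timestamp, message)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload.encode("ascii"), (host, port))
    sock.close()
    return payload

# send_test_syslog("logs.example.com", 514, "hello from the vTM")
```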

The Virtual Traffic Manager is designed to be flexible, being the only software application delivery controller that can be seamlessly deployed in private, public, and hybrid clouds.  And now by exporting your vTM logs you can take full advantage of the powerful analysis tools available within Loggly.

Virtual Traffic Manager and Microsoft Exchange 2010 Deployment Guide

by vreddy on ‎03-26-2013 03:44 PM - edited on ‎09-30-2015 10:41 AM by aannavarapu (2,302 Views)

This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft Exchange 2010.

 

This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.

Brocade vTM Kernel Modules for Linux Software

by on ‎02-22-2013 07:25 AM - edited on ‎09-23-2015 07:23 AM by PaulWallace (5,400 Views)

The Brocade vTM Kernel Modules may be installed on a supported Linux system to enable advanced networking functionality – IP Transparency and Multi-Hosted Traffic IP Addresses.

 

Note: The Kernel Modules are pre-installed in Brocade vTM Virtual Appliances, and in cloud images where they are applicable. The Kernel Modules are not available for Solaris.

 

The modules

 

IP Transparency Module (ztrans)

 

The IP Transparency Module enables support for IP transparency in Brocade Virtual Traffic Manager software. When used, Brocade vTM will set the source IP address of packets it transmits to a server to match the source address of the remote client. Refer to the User Manual (Brocade vTM Product Documentation) for details of how to configure IP transparency. ztrans is supported for kernel versions up to and including version 3.2.

 

Multi-hosted IP Module (zcluster)

 

The Multi-hosted IP Module allows a set of clustered Traffic Managers to share the same IP address. The module manipulates ARP requests to deliver connections to a multicast group that the machines in the cluster subscribe to. Responsibility for processing data is distributed across the cluster so that all machines process an equal share of the load. Refer to the User Manual (Brocade vTM Product Documentation) for details of how to configure multi-hosted Traffic IP addresses. zcluster is supported for kernel versions up to and including version 3.19.

 

Installation

 

Prerequisites

 

Your build machine must have the kernel header files and appropriate build tools to build kernel modules.

 

You may build the modules on one machine and copy them to an identical machine if you wish to avoid installing build tools and kernel headers on your production traffic manager.

 

Installation

 

Unpack the kernel modules tarball, and cd into the directory created:

 

# tar -xzf brocade_vtm_modules_installer-2.9.tgz

# cd brocade_vtm_modules_installer-2.9

 

Review the README within for late-breaking news and to confirm kernel version compatibility.

 

As root, run the installation script install_modules.pl to install all modules:

 

# ./install_modules.pl

 

To install only a specific set of modules, add the module names as parameters. For example, to install only the IP Transparency module:

 

# ./install_modules.pl ztrans

 

If installation is successful, restart the vTM software:

 

# $ZEUSHOME/restart-zeus

 

If the installation fails, please refer to the error message given, and to the distribution-specific guidelines in the README file inside the modules installer package.

 

Kernel Upgrades

 

If you upgrade your kernel, you will need to re-run the install_modules.pl script to re-install the modules once the kernel upgrade is complete.

 

Latest Packages

 

Packages for the kernel modules are now available via the normal Brocade vTM download service.

Brocade Virtual Traffic Manager Plugin for VMware vRealize Orchestrator

by aannavarapu ‎08-11-2015 11:14 AM - edited ‎08-11-2015 12:06 PM (2,500 Views)

This article describes the installation procedure for the vTM plugin for vRO and how it enables automating the most common configurations of a Traffic Manager in a vCenter environment. The plugin uses the SOAP API of the Traffic Manager to enable vRealize Orchestrator workflows. The workflows available are classified into CRUD operations including but not limited to adding, deleting and reading pool, node, rule and virtual server configurations. Additional workflows for attaching and detaching vTM instances are included in the plugin.

 

 


Note: The plugin can be downloaded from this article. It is attached as a zip file; unzip it to get the plugin in .dar format.

 

Virtual Traffic Manager Overview

 

Brocade Virtual Traffic Manager (vTM) is a software-based application delivery controller (ADC) designed to deliver faster and more reliable access to public web sites and private applications. vTM frees applications from the constraints of legacy, proprietary, hardware-based load balancers, which enables them to run on any physical, virtual, or cloud environment. With vADC products from Brocade, organizations can:

 

  • Make applications more reliable with local and global load balancing
  • Scale application servers by up to 3x by offloading TCP and SSL connection overhead
  • Accelerate applications by up to 4x by using web content optimization (WCO)
  • Secure applications from the latest application attacks, including SQL injection, XSS, CSRF, and more
  • Control applications effectively with built-in application intelligence and full-featured scripting engine

 

Virtual Traffic Manager offers much more than basic load balancing. It controls and optimizes end-user services by inspecting, transforming, prioritizing, and routing application traffic. The powerful TrafficScript® engine facilitates the implementation of traffic management policies that are unique to an application by allowing organizations to build custom functionality or to leverage existing features in Virtual Traffic Manager in a specialized way. With vTM, organizations can deliver:

 

Performance

Improve application performance for users by offloading encryption and compression from the web server, caching content dynamically, and reducing the number of TCP sessions on the application.

 

Reliability and scalability

Increase application reliability by load balancing traffic across web and application servers, balancing load across multiple data centers (private or public clouds), monitoring the response time of servers in real-time to decide the fastest way to deliver a service, protecting against traffic surges, and by managing the bandwidth and rate of requests used by different classes of traffic.

 

Advanced scripting and application intelligence

Manage application delivery more easily with fine-grained control of users and services using TrafficScript, an easy-to-use scripting language that can parse any user transaction, and take specific, real-time action based on user, application, request, or more. Development teams use TrafficScript to enable a point of control in distributed applications, while operations teams use it to quickly respond to changing business requirements or problems within an application before developers can fix it.

 

Application acceleration

Dramatically accelerate web-based applications and websites in real-time with optional web content optimization (WCO) functionality. It dynamically groups activities for fewer long distance round trips, resamples and sprites images to reduce bandwidth, and minifies JavaScript and combines style sheets to give the best possible response time for loading a web page on any browser or device.

 

Application-layer security

Enhance application security by filtering out errors in web requests, and protecting against external threats, with the option of a comprehensive Layer-7 firewall to defend against deliberate attacks.

 

Why vTM plugin for vRO

 

With businesses focusing more on automation and orchestration of IT services in today's hybrid deployments, attention towards product integrations using APIs has increased. The vTM plugin for vRO makes the core load balancing functions available to the vCenter environment in the form of workflows and actions using the SOAP API. In addition to automating the configuration aspects of load balancing, the plugin helps prevent misconfigurations. This helps businesses accelerate delivery and reduce IT costs while retaining quality.

 

Requirements

  • vTM Plugin DAR file (Version 1.0.0)
  • vRealize Orchestrator Server 
  • vRealize Orchestrator Client

 

Installing vTM Plugin

To install the Virtual Traffic Manager plugin for vRO:

 

  • Download the vTM plugin DAR file from this page
  • Log in to the vRealize Orchestrator UI and click on “Plug-ins” on the left frame
  • Scroll to the bottom of the right frame and click on the empty text-box for “Plug-in file”
  • Browse and locate the vTM plugin DAR file named “o11nplugin-brocade.dar”. Once selected, click Open and then “Upload and Install”
  • Click “Apply Changes”
  • Click “Startup Options” on the left frame and select “Restart service” to register the plugin

 plugin-install.png

 

Certificate Installation

In some cases, vRealize Orchestrator requires the self-signed certificates of the Virtual Traffic Managers to be imported before workflow interactions will succeed. It is therefore recommended to import them:

 

  • Log in to the vRealize Orchestrator UI and click on “Network” on the left frame
  • On the right frame of the UI, select the “SSL Trust Manager” tab
  • At the bottom of the page, in the “Import from URL” input, type the HTTPS admin UI address of the Virtual Traffic Manager and click Import
  • Click “Import” when asked for confirmation. Ensure that the Common Name of the certificate for a vTM matches how you connect to it (either IP or FQDN) from the workflows

 cert-install.png

 

Packaged Workflows

Once the Brocade vTM plugin is deployed using the vRO admin UI, the workflows packaged with the plugin can be run from the vRealize Orchestrator client.

 

  • Log in to the vRealize Orchestrator client software
  • Select the Workflows tab and expand the list of workflows to find the “Brocade” folder
  • The vTM plugin comes packaged with the following workflows, categorized as Create, Read, Update and Delete (CRUD) operations, plus a couple of workflows for inventory management in vRO. The list below describes what each workflow does

 

  • Add vTM to Inventory (Inventory Configuration): Attaches a vTM ADC to the vRO Inventory. Input: username, password, IP, port number. Output: none.
  • Remove vTM from Inventory (Inventory Configuration): Detaches a vTM ADC from the vRO Inventory. Input: vTM instance. Output: none.
  • Add Node to Pool (Create Operations): Adds the IP address and port number of a service as a node to a named pool. Input: pool, IP address, port number. Output: node.
  • Add Pool to vTM (Create Operations): Adds a pool to a vTM. At least one node needs to be entered while creating a pool. Input: vTM, pool name, IP address, port number. Output: pool.
  • Add Request Rule to Virtual Server (Create Operations): Adds a TrafficScript request rule to a virtual server. The Rule Text field takes the complete TrafficScript code as input; the enable option attaches the rule to the virtual server. Input: virtual server, rule name, rule text, enable, run frequency. Output: rule.
  • Add Response Rule to Virtual Server (Create Operations): Adds a TrafficScript response rule to a virtual server. The Rule Text field takes the complete TrafficScript code as input; the enable option attaches the rule to the virtual server. Input: virtual server, rule name, rule text, enable, run frequency. Output: rule.
  • Add Virtual Server to vTM (Create Operations): Adds a virtual server to a vTM. A default pool needs to be selected for the virtual server; by default, the virtual server binds to all IP addresses on the vTM. Input: vTM, port number, protocol, default pool, virtual server name.
  • Delete Node from Pool (Delete Operations): Deletes a selected node from a pool. Input: pool, node. Output: none.
  • Delete Pool from vTM (Delete Operations): Deletes a selected pool from a vTM. Input: vTM, pool. Output: none.
  • Delete Request Rule from Virtual Server (Delete Operations): Deletes a selected request rule from a virtual server. Input: virtual server, rule. Output: none.
  • Delete Response Rule from Virtual Server (Delete Operations): Deletes a selected response rule from a virtual server. Input: virtual server, rule. Output: none.
  • Delete Virtual Server from vTM (Delete Operations): Deletes a selected virtual server from a vTM. Input: vTM, virtual server. Output: none.
  • Get Nodes from Pool (Read Operations): Gets the list of nodes from a selected pool. Input: pool. Output: array of nodes.
  • Get Pools from vTM (Read Operations): Gets the list of pools from a vTM. Input: vTM. Output: array of pools.
  • Get Request Rules from Virtual Server (Read Operations): Gets the list of request rules from a selected virtual server. Input: virtual server. Output: array of request rules.
  • Get Response Rules from Virtual Server (Read Operations): Gets the list of response rules from a selected virtual server. Input: virtual server. Output: array of response rules.
  • Get Virtual Servers from vTM (Read Operations): Gets the list of virtual servers from a vTM. Input: vTM. Output: array of virtual servers.
  • Disable Node in Pool (Update Operations): Disables a selected node in a pool. Input: pool, node. Output: none.
  • Drain Node in Pool (Update Operations): Drains a selected node in a pool. Input: pool, node. Output: none.
  • Enable Node in Pool (Update Operations): Makes a node active in a pool. Input: pool, node. Output: none.
  • Enable Virtual Server (Update Operations): Enables or disables a selected virtual server. Input: vTM, virtual server, enable. Output: none.
  • Update LB Algorithm for Pool (Update Operations): Updates the load-balancing algorithm for a selected pool. Input: vTM, pool, LB type. Output: none.

 workflows.png

 

Tech Tip: Using the RESTful Control API with Perl - listpools

by ricknelson on ‎03-28-2013 03:04 PM - edited on ‎07-30-2015 01:31 PM by Community Manager (2,801 Views)

The following code uses Stingray's RESTful API to list all the pools defined on a cluster. The code is written in Perl. This example has more extensive comments than the following examples, and most of them are applicable to all the examples. The program does a single GET request for the list of pools, then loops through that list (each element of which is a hash) and outputs the pool name.

 

listpools.pl

 

#!/usr/bin/perl
 
 
use REST::Client;
 
use MIME::Base64;
 
use JSON;
 
 
print "Pools:\n";
 
 
# Stingray uses a self-signed certificate by default, so skip hostname verification
 
$ENV{'PERL_LWP_SSL_VERIFY_HOSTNAME'} = 0;
 
 
# Set up the connection
 
my $client = REST::Client->new();
 
 
# Set up the basic authorization header with the encoded UserId and Password.
 
# These need to match a UserId and Password for a Stingray user
 
$client->addHeader("Authorization", "Basic " . encode_base64("admin:admin"));
 
 
# Do the HTTP GET to get the lists of pools
 
$client->GET("/api/tm/1.0/config/active/pools");
 
 
# Deserialize the JSON response into a hash
 
my $response = decode_json($client->responseContent());
 
if ($client->responseCode() == 200) {
 
    # Obtain a reference to the children array
 
    my $poolArrayRef = $response->{children};
 
    foreach my $pool (@$poolArrayRef) {
 
        print $pool->{name} . "\n";
 
    }
 
} else {
 
    # We weren't able to connect to the Stingray or there was a problem with the request.
 
    # The most likely reasons for this are:
 
    # - the hostname of the Stingray instance is incorrect
 
    # - this client doesn't have network access to the Stingray instance or port 9070
 
    # - the RESTful API is disabled
 
    # - the RESTful API is using a different port
 
    # - the URL is incorrect
 
    print "Error: status=" . $client->responseCode() . " Id=" . $response->{error_id} . ": " . $response->{error_text} . "\n";
 
}

 

Running the example

 

This code was tested with Perl 5.14.2 and version 249 of the REST::Client module.

 

Run the Perl script as follows:

 

$ ./listpools.pl
Pools:
 
Pool1
 
Pool2
   

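The same request can be made from other languages. A rough Python equivalent of the Perl script above, using only the standard library (the hostname and credentials are placeholders; the URL, port 9070, and the JSON "children" shape follow the example above):

```python
import base64
import json
import ssl
import urllib.request

def pool_names(body):
    """Extract pool names from the API's {"children": [{"name": ...}]} shape."""
    return [child["name"] for child in body["children"]]

def list_pools(host, user="admin", password="admin"):
    """GET the active pool list from the vTM RESTful API (port 9070)."""
    url = "https://%s:9070/api/tm/1.0/config/active/pools" % host
    request = urllib.request.Request(url)
    creds = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    request.add_header("Authorization", "Basic " + creds)
    # The vTM uses a self-signed certificate by default, so skip verification
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(request, context=context) as response:
        return pool_names(json.load(response))

# for name in list_pools("stingray.example.com"):
#     print(name)
```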
 

Read More

 

Tech Tip: Using the RESTful Control API with Perl - listpoolnodes

by ricknelson on ‎03-28-2013 03:13 PM - edited on ‎07-30-2015 01:26 PM by Community Manager (1,109 Views)

The following code uses Stingray's RESTful API to list all the pools defined on a cluster and, for each pool, the nodes defined for that pool, including draining and disabled nodes. The code is written in Perl. This example builds on the previous listpools.pl example. The program does a GET request for the list of pools and then, while looping through that list, does a GET for each pool to retrieve its configuration parameters.

 

listpoolnodes.pl

 

#!/usr/bin/perl
 
 
use REST::Client;
 
use MIME::Base64;
 
use JSON;
 
 
print "Pools:\n\n";
 
 
# Stingray uses a self-signed certificate by default, so skip hostname verification
 
$ENV{'PERL_LWP_SSL_VERIFY_HOSTNAME'} = 0;
 
 
my $url = "/api/tm/1.0/config/active/pools";
 
# Set up the connection
 
my $client = REST::Client->new();
 
 
$client->addHeader("Authorization", "Basic " . encode_base64("admin:admin"));
 
 
# Request a list of pools
 
$client->GET($url);
 
 
# Decode the json response
 
my $response = decode_json($client->responseContent());
 
if ($client->responseCode() == 200) {
 
    # Obtain a reference to the children array
 
    my $poolArrayRef = $response->{children};
 
    foreach my $pool (@$poolArrayRef) {
 
        my $poolName = $pool->{name};
 
        $client->GET("$url/$poolName");
 
        my $poolConfig = decode_json $client->responseContent();
 
        if ($client->responseCode() == 200) {
 
            my $nodes = $poolConfig->{properties}->{basic}->{nodes};
 
            my $draining = $poolConfig->{properties}->{basic}->{draining};
 
            my $disabled = $poolConfig->{properties}->{basic}->{disabled};
 
            print "Pool: $poolName\n";
 
            print "    Nodes: ";
 
            foreach my $node (@$nodes) {
 
                print "$node ";
 
            }
 
            print "\n";
 
            if (scalar(@$draining) > 0) {
 
                print "    Draining Nodes: ";
 
                foreach my $node (@$draining) {
 
                    print "$node ";
 
                }
 
                print "\n";
 
            }
 
            if (scalar(@$disabled) > 0) {
 
                print "    Disabled Nodes: ";
 
                foreach my $node (@$disabled) {
 
                    print "$node ";
 
                }
 
                print "\n";
 
            }
 
            print "\n";
 
        } else {
 
            print "Error getting pool config: status=" . $client->responseCode() . " Id=" . $poolConfig->{error_id} . ": " . $poolConfig->{error_text} . "\n"
 
        }
 
    }
 
} else {
 
    print "Error getting list of pools: status=" . $client->responseCode() . " Id=" . $response->{error_id} . ": " . $response->{error_text} . "\n";
 
}

 

Running the example

 

This code was tested with Perl 5.14.2 and version 249 of the REST::Client module.

 

Run the Perl script as follows:

 

$ listpoolnodes.pl
Pools:
 
Pool1
    Nodes:  192.168.1.100 192.168.1.101
    Draining:  192.168.1.101
    Disabled:  192.168.1.102
 
Pool2
    Nodes:  192.168.1.103 192.168.1.104
 
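The per-pool lookup translates the same way. A small Python sketch of the fields the second GET reads (the JSON paths mirror the properties->basic paths in the Perl code; the example node values are made up):

```python
def pool_node_state(pool_config):
    """Pull the node lists out of one pool's configuration document,
    mirroring the properties -> basic paths used in the Perl example."""
    basic = pool_config["properties"]["basic"]
    return {
        "nodes": basic.get("nodes", []),
        "draining": basic.get("draining", []),
        "disabled": basic.get("disabled", []),
    }

# Illustrative pool configuration document (node addresses are made up)
example = {"properties": {"basic": {
    "nodes": ["192.168.1.100:80", "192.168.1.101:80"],
    "draining": ["192.168.1.101:80"],
    "disabled": [],
}}}
state = pool_node_state(example)
print(state["draining"])
```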
Read More

 

Load Balancing MS RDP with a Session Broker

by Community Manager ‎07-28-2015 12:36 PM - edited ‎07-29-2015 09:40 AM (1,401 Views)

A TrafficScript rule for load balancing MS Terminal Services when the Session Broker service is being used, as discussed here.

 

This rule parses the x.224 RDP routing token that is handed back by the terminal server when the user still has a brokered session running on a different host. When the Stingray Traffic Manager receives one of these x.224 routing tokens, it will honour it and send the client to the appropriate server in the Terminal Server pool.

 

There is a very good description of how MS Terminal Services uses the x.224 header here.
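To make the routing-token format concrete, here is a short Python sketch (illustrative only; the function name and the sample token are hypothetical) of the decoding that the TrafficScript rule below performs: the "msts=" cookie value carries the target IP address and port as little-endian integers, separated by dots.

```python
import ipaddress
import struct

def decode_msts_token(token: str) -> str:
    """Decode an 'msts=' routing token value ('<ip>.<port>.<reserved>') into 'ip:port'."""
    ip_le, port_le, _reserved = (int(part) for part in token.split("."))
    # Both fields are little-endian: repack them and re-read big-endian
    ip = str(ipaddress.IPv4Address(struct.unpack(">I", struct.pack("<I", ip_le))[0]))
    port = struct.unpack(">H", struct.pack("<H", port_le))[0]
    return "%s:%d" % (ip, port)

print(decode_msts_token("1677830336.15629.0000"))  # 192.168.1.100:3389
```

The TrafficScript below does the same byte reversal with string.reverse() and string.intToBytes() before handing the node to the persistence class.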

 

I have tested it pretty thoroughly in my AWS EC2 lab environment, but I am keen for feedback from anyone using it.

 

 

# Traffic Script for load balancing Microsoft Terminal Services Farms.

# This configuration requires the TS Farm to be set up using a server with the
# session broker role installed. See MS TechNet article for details of how
# to set this up: http://technet.microsoft.com/en-us/library/cc772418.aspx

# Note, in this configuration, you will not need to set up Round Robin DNS,
# instead, just point the farm name DNS entry at the Traffic Group IP you
# bind to the Virtual Server on the Stingray.

# NB: this configuration requires you to create a Persistence Class of
# "Named Node session persistence" and bind it to the pool. Put the name of
# your persistence pool into the "myPersistenceProfile" variable below

##########################################
$myPersistenceProfile = "tslb_namedNode";#
##########################################


### NOTHING TO EDIT BELOW THIS LINE
##########################################
# Check to see if this is a new connection or traffic from an existing flow
if (!connection.data.get("first_run")) {

    # Read TPKT Header, 4 bytes
    $tpkt_head = request.get(4);
    $tpkt_head = string.skip($tpkt_head, 2);

    # Bytes 3/4 are High/Low header length bits
    $rest_head = string.bytesToInt($tpkt_head) - 4;
    $total = $tpkt_head;

    if ($rest_head > 0) {
        # Read in rest of header
        $header = request.get(string.bytesToInt($total));
        string.skip($header, 13);



        $pos = string.find($header, string.hexdecode("0D0A"));
        if ($pos > 0) {
            $poss_token = string.left($header, $pos);
            if (string.regexmatch($poss_token, "[C|c]ookie:\\s(msts|mstshash)=(.+)$")) {
                $kval = $1;
                $vval = $2;
                # We have a cookie to parse
                if ($kval == "mstshash") {
                    # If we have an mstshash cookie, let the session be load balanced normally.
                } else if ($kval == "msts" && string.regexmatch($vval, "^(\\d+)\\.(\\d+)\\.\\d+$")) {
                    log.info("ip: ".$1);
                    log.info("port: ".$2);

                    # Here we parse the captured "msts=" cookie to extract the back end node info to route the connection
                    $ip = string.bytesToDotted(string.reverse(string.intToBytes($1, 4)));
                    $port = string.bytesToInt(string.reverse(string.intToBytes($2, 2)));
                    $node = $ip . ":" . $port; # NB: not IPv6 safe
                    connection.setPersistence($myPersistenceProfile);
                    connection.setPersistenceNode($node);
                } else {
                    log.info("discard");
                    connection.discard();
                }
            }
        }
    }
    # Set the connection run flag to prevent us from running this rule again on traffic from the same flow
    connection.data.set("first_run", "yes");
}

 

Tech Tip: Using the RESTful Control API with Perl

by on ‎03-05-2013 02:41 AM - edited on ‎07-22-2015 02:45 PM by PaulWallace (8,002 Views)

This article explains how to use Stingray's RESTful Control API with Perl.  It's a little more work than with Tech Tip: Using the RESTful Control API with Python - Overview, but once the basic environment is set up and the framework is in place, you can rapidly create scripts in Perl to manage Stingray's configuration.

 

Getting Started

 

The code examples below depend on several Perl modules that may not be installed by default on your client system: REST::Client, MIME::Base64 and JSON.

 

  • On a Linux system, the best way to pull these into the system perl is by using the system package manager (apt or rpm).
  • On a Mac (or a home-grown perl instance), you can install them using CPAN.

 

Preparing a Mac to use CPAN

 

Install the package 'Command Line Tools for Xcode' either from within Xcode or directly from https://developer.apple.com/downloads/.

 

Some of the CPAN build scripts indirectly seek out /usr/bin/gcc-4.2 and won't build if it is missing.  If gcc-4.2 is missing, the following should help:

 

$ ls -l /usr/bin/gcc-4.2

ls: /usr/bin/gcc-4.2: No such file or directory

$ sudo ln -s /usr/bin/gcc /usr/bin/gcc-4.2

 

Installing the perl modules

 

It may take 20 minutes for CPAN to initialize itself, download, compile, test and install the necessary perl modules:

 

$ sudo perl -MCPAN -e shell

cpan> install Bundle::CPAN

cpan> install REST::Client

cpan> install MIME::Base64

cpan> install JSON

 

Your first Perl REST client application

 

This application looks for a pool named 'Web Servers'.  It prints a list of the nodes in the pool, and then sets the first one to drain.

 

#!/usr/bin/perl


use REST::Client;

use MIME::Base64;

use JSON;


# Configurables

$poolname = "Web Servers";

$endpoint = "stingray:9070";

$userpass = "admin:admin";


# Older implementations of LWP check this to disable server verification

$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME}=0;


# Set up the connection

my $client = REST::Client->new( );


# Newer implementations of LWP use this to disable server verification

# Try SSL_verify_mode => SSL_VERIFY_NONE.  0 is more compatible, but may be deprecated

$client->getUseragent()->ssl_opts( SSL_verify_mode => 0 );


$client->setHost( "https://$endpoint" );

$client->addHeader( "Authorization", "Basic ".encode_base64( $userpass ) );


# Perform a HTTP GET on this URI

$client->GET( "/api/tm/1.0/config/active/pools/$poolname" );

die $client->responseContent() if( $client->responseCode() >= 300 );


# Add the node to the list of draining nodes

my $r = decode_json( $client->responseContent() );

print "Pool: $poolname:\n";

print "   Nodes:        " . join( ", ", @{$r->{properties}->{basic}->{nodes}} ) . "\n";

print "   Draining:     " . join( ", ", @{$r->{properties}->{basic}->{draining}} ) . "\n";


# If the first node is not already draining, add it to the draining list

$node = $r->{properties}->{basic}->{nodes}[0];

if( ! grep { $_ eq $node } @{$r->{properties}->{basic}->{draining}} ) {  # grep is portable; smartmatch (~~) is experimental in newer perls

    print "      Planning to drain: $node\n";

    push @{$r->{properties}->{basic}->{draining}}, $node;

}


# Now put the updated configuration

$client->addHeader( "Content-Type", "application/json" );

$client->PUT( "/api/tm/1.0/config/active/pools/$poolname", encode_json( $r ) );

die $client->responseContent() if( $client->responseCode() >= 300 );


my $r = decode_json( $client->responseContent() );

print "   Now draining: " . join( ", ", @{$r->{properties}->{basic}->{draining}} ) . "\n";

 

Running the script

 

$ perl ./pool.pl 

Pool: Web Servers:

   Nodes:        192.168.207.101:80, 192.168.207.103:80, 192.168.207.102:80

   Draining:     192.168.207.102:80

      Planning to drain: 192.168.207.101:80

   Now draining: 192.168.207.101:80, 192.168.207.102:80

 

Notes

 

This script was tested against two different installations of perl, with different versions of the LWP library.  It was necessary to disable SSL certificate checking using:

 

$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME}=0;

 

... with the older, and:

 

# Try SSL_verify_mode => SSL_VERIFY_NONE.  0 is more compatible, but may be deprecated

$client->getUseragent()->ssl_opts( SSL_verify_mode => 0 );

 

with the newer.  The older implementation failed when using SSL_VERIFY_NONE.  YMMV.

Tech Tip: Using the RESTful Control API with Python - Overview

by ricknelson on ‎02-27-2013 09:59 AM - edited on ‎07-22-2015 12:14 PM by PaulWallace (5,000 Views)

This article explains how to use Stingray's REST Control API using the excellent requests Python library.

 

There are many ways to install the requests library.  On my test client (MacOSX), the following was sufficient:

 

$ sudo easy_install pip

$ sudo pip install requests

 

Resources

 

The REST API gives you access to the Stingray Configuration, presented in the form of resources.  The format of the data exchanged using the Stingray RESTful API will depend on the type of resource being accessed:

 

  • Data for Configuration Resources, such as Virtual Servers and Pools, is exchanged in JSON format using the MIME type “application/json”.  When getting data on a resource with a GET request, the response will be JSON and must be deserialized (decoded) into a Python data structure.  When adding or changing a resource with a PUT request, the data must be serialized (encoded) from a Python data structure into JSON format.
  • Files, such as rules and those in the extra directory are exchanged in raw format using the MIME type of “application/octet-stream”.

 

Working with JSON and Python

 

The json module provides functions for JSON serializing and deserializing.  To take a Python data structure and serialize it into JSON format use json.dumps() and to deserialize a JSON formatted string into a Python data structure use json.loads().
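For example, a minimal round trip (the pool structure here is illustrative, shaped like a Stingray configuration resource):

```python
import json

pool = {"properties": {"basic": {"nodes": ["192.168.1.100:80"], "draining": []}}}

body = json.dumps(pool)        # serialize: Python dict -> JSON string (as sent in a PUT)
restored = json.loads(body)    # deserialize: JSON string -> Python dict (as read from a GET)

assert restored == pool
```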

 

Working with a RESTful API and Python

 

To make the programming easier, the program examples that follow utilize the requests library as the REST client. To use the requests library you first setup a requests session as follows, replacing <userid> and <password> with the appropriate values:

 

client = requests.Session()
client.auth = ('<userid>', '<password>')
client.verify = False

 

The last line stops the client from verifying that the certificate used by Stingray was issued by a trusted certificate authority, so that Stingray's self-signed certificate will be accepted.  Once the session is set up, you can make GET, PUT and DELETE calls as follows:

 

response = client.get(<URL>)
response = client.put(<URL>, data = <data>, headers = <headers>)
response = client.delete(<URL>)

 

The URL for Stingray RESTful API will be of the form:

 

https://<STM hostname or IP>:9070/api/tm/1.0/config/active/

 

followed by a resource type or a resource type and resource, so for example to get a list of all the pools from the Stingray instance, stingray.example.com, it would be:

 

https://stingray.example.com:9070/api/tm/1.0/config/active/pools

 

And to get the configuration information for the pool, “testpool” the URL would be:

 

https://stingray.example.com:9070/api/tm/1.0/config/active/pools/testpool

 

For most Python environments, it will probably be necessary to install the requests library.  For some Python environments it may also be necessary to install the httplib2 module.

 

Data Structures

 

JSON responses from a GET or PUT are deserialized into a Python dictionary that always contains one element.   The key to this element will be:

 

  • 'children' for lists of resources.  The value will be a Python list with each element in the list being a dictionary with the key, 'name', set to the name of the resource and the key, 'href', set to the URI of the resource.
  • 'properties' for configuration resources.  The value will be a dictionary with each key value pair being a section of properties with the key being set to the name of the section and the value being a dictionary containing the configuration values as key/value pairs.  Configuration values can be scalars, lists or dictionaries.
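A minimal sketch of walking both structures (the JSON bodies below are illustrative, shaped as just described):

```python
import json

# Illustrative list-of-resources response: one element keyed 'children'
pool_list = json.loads('{"children": [{"name": "pool1", '
                       '"href": "/api/tm/1.0/config/active/pools/pool1"}]}')
names = [child["name"] for child in pool_list["children"]]

# Illustrative configuration-resource response: one element keyed 'properties',
# containing a 'basic' section with configuration values
pool = json.loads('{"properties": {"basic": {"nodes": ["192.168.1.100:80"]}}}')
nodes = pool["properties"]["basic"]["nodes"]

print(names, nodes)
```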

 

Please see Feature Brief: Stingray's RESTful Control API for examples of these data structures; a tool such as the Chrome REST Console can be used to see what the actual data looks like.

 

Read More

 

Stingray Libraries and Add-Ons

by on ‎04-10-2013 03:09 AM - edited on ‎07-21-2015 05:07 PM by PaulWallace (2,603 Views)

This page indexes some useful libraries and add-ons for Stingray Traffic Manager.

 

Tools

 

 

TrafficScript Libraries

 

 

API Libraries

 

Stingray Traffic Manager - Cacti Template

by markbod on ‎06-07-2013 09:31 AM - edited on ‎07-21-2015 04:39 PM by PaulWallace (1,003 Views)

ChangeLog - Version 0.1 - 2013-06-07

This is the initial release of a Stingray Traffic Manager template for Cacti. I have been working on this for the past few days, but be warned that I am a complete Cacti n00b and so this may well be as useful as a chocolate teapot. However I unashamedly release it into the community, in the hope that (with your help) this template will grow and mature into something quite useful!

 

Watch out for the bugs!

Upgrading Stingray Traffic Manager Virtual Appliance

by on ‎02-22-2013 11:35 AM - edited on ‎07-15-2015 03:53 PM by rickl44 (4,500 Views)

These instructions describe how to upgrade Stingray Traffic Manager Virtual Appliance instances. For instructions on upgrading on other platforms, please refer to Upgrading Stingray Traffic Manager.

 

Before you start

 

There are a few things that have to be checked before an upgrade is attempted to make sure it goes smoothly:

 

  • Memory requirements: Make sure the machine has enough memory. Stingray Traffic Manager requires at the very least 1GB of RAM; 2GB or more are recommended. If the traffic manager to be upgraded has less memory, please assign more memory to the virtual machine.

 

  • Disk Space requirements: Ensure there is enough free disk space. For the upgrade to succeed, at least 500MB must be free on the root partition, and at least 300MB on the /logs partition.

 

The Unix command df shows how much space is available, for example:

 

root@stingray-1/ # df -k

Filesystem 1K-blocks   Used Available Use% Mounted on

/dev/sda5    1426384 839732    514764  62% /

varrun        517680     44    517636   1% /var/run

varlock       517680      0    517680   0% /var/lock

udev          517680     48    517632   1% /dev

devshm        517680      0    517680   0% /dev/shm

/dev/sda1     139985   8633    124125   7% /boot

/dev/sda8     621536  17516    604020   3% /logs

 

If the disks are too full, you have to free up some space. Please follow the suggestions in the topic Freeing disk space on the Virtual Appliance.
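As a rough sketch, the space check can be automated; the thresholds come from the requirements above, and the 'Available' figures are the illustrative values from the sample df output:

```python
# Minimum free space required for an upgrade, in KB (500MB on /, 300MB on /logs)
REQUIRED_KB = {"/": 500 * 1024, "/logs": 300 * 1024}

# 'Available' column from `df -k` for the relevant partitions (illustrative values)
available_kb = {"/": 514764, "/logs": 604020}

# The upgrade can proceed only if every partition meets its threshold
ok = all(available_kb[mount] >= need for mount, need in REQUIRED_KB.items())
print("enough free space for upgrade:", ok)
```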

 

Upgrading the Virtual Appliance

 

Riverbed software is stored on one of two primary partitions, and log files are stored on a separate disk partition.

 

Full Upgrades are required when you upgrade to a new major or minor version number, such as from 8.1 to 9.0, or 9.0 to 9.1.  Full upgrades include a new operating system installation.

 

A full upgrade is installed in the unused primary partition, configuration (including the /root directory) is migrated across and the bootloader updated to point to the new partition. You can edit the bootloader configuration to fall back to the other primary partition if you need to roll back to the previous instance.

 

Incremental Upgrades are required when you install a release with a new revision number, such as from 9.1 to 9.1r1.  The new software is added to the currently active primary partition. You can use the 'rollback' script to make a previous revision active.

 

Important note

 

If you wish to upgrade from one major.minor version to a later major.minor version with a later revision, you will need to upgrade in two steps: the full upgrade, and the subsequent incremental upgrade.

 

For example, suppose that you are running version 8.1r3 and you wish to upgrade to version 9.1r2.  You must perform the following two steps:

 

  • Perform a full upgrade from your current version to the closest major.minor version, i.e. a full upgrade to 9.1
  • Perform a subsequent incremental upgrade from 9.1 to 9.1r2.

 

Performing a Full Upgrade

 

Upgrades between major and minor versions (e.g. 8.1 to 9.0 or 9.0 to 9.1) can either be performed via Administration Server (when upgrading from version 9.0 or later) or using a command-line script (z-upgrade-appliance) to install the new version into a spare section of the hard disk. This process involves one reboot and the downtime associated with that reboot.

 

Any configuration changes made in the existing version after the upgrade has been run won't be preserved when the new version is started; you should reboot the appliance as soon as possible (using the System -> Reboot button in the UI or using the 'reboot' command).

 

Before upgrading it is prudent to have a backup of your configuration.

 

Command line method

  • Download the installation zpkg package from Riverbed Support. This will be a file called something like ZeusTM_91_Appliance-x86_64.zpkg.
  • Copy the file onto the appliance to the /logs partition using an scp or sftp client (e.g. psftp).
  • Log in to the appliance using an 'ssh' client (putty is a good choice) or the console; you can log in using any username that is in the admin group.
  • Check the disk space requirements explained above are still fulfilled after you've uploaded the package.
  • Once connected to the console of the appliance run: z-upgrade-appliance <filename>
  • Confirm that you want to upgrade the appliance.

 

Administration Server (upgrading from 9.0 or later)

Download the tgz upgrade package from Riverbed Support: Software -  Stingray Traffic Manager, go to the System -> Upgrade page, upload the upgrade tgz package, and follow the instructions.

 

Once complete, the current configuration will be migrated to the newly installed version, and this version will be automatically selected on the next reboot.

 

Performing an Incremental Upgrade

 

Upgrading revisions in the same product version (e.g. 9.0 to 9.0r2, or 9.0r2 to 9.0r3) are performed using the Administration Server. Download the tgz upgrade package from Riverbed Support: Software -  Stingray Traffic Manager, go to the System -> Upgrade page, upload the upgrade tgz package, and follow the instructions.

 

You will need to complete this process for each Appliance in your cluster.

 

Expected downtime for an upgrade will be a couple of seconds while the Traffic Manager software is restarted. On very rare occasions, it will be necessary to reboot the Appliance to complete the upgrade. The user interface will inform you if this is necessary when the upgrade is complete. You should ensure that the Appliance is rebooted at the most appropriate time.

Upgrading and reinstalling SteelApp Traffic Manager Virtual Appliances

by jbrookman on ‎07-31-2014 02:07 AM - edited on ‎07-09-2015 06:43 AM by Matt.Thomson (6,301 Views)

In many cases, it is desirable to upgrade a virtual appliance by deploying a virtual appliance at the newer version and importing the old configuration.  For example, the size of the traffic manager disk image was increased in version 9.7, and deploying a new virtual appliance lets a customer take advantage of this larger disk.  This article documents the procedure for deploying a new virtual appliance with the old configuration in common scenarios.

 

These instructions describe how to upgrade and reinstall Brocade Virtual Traffic Manager appliance instances (either in a cluster or standalone appliances). For instructions on upgrading on other platforms, please refer to Upgrading Stingray Traffic Manager.

 

Upgrading a standalone Virtual Appliance

 

This process will replace a standalone virtual appliance with another virtual appliance with the same configuration (including migrating network configuration). Note that the Brocade Virtual Traffic Manager Cloud Getting Started Guide contains instructions for upgrading a standalone EC2 instance from version 9.7 onwards; if upgrading from a version prior to 9.7 and using the Stingray Application Firewall (now Brocade Virtual Web Application Firewall) these instructions must be followed to correctly back up and restore any firewall configuration.

 

  1. Make a backup of the traffic manager configuration (See section "System > Backups" in the Brocade Virtual Traffic Manager User Manual), and export it.
  2. If you are upgrading from a  version prior to 9.7 and are using the Stingray Application Firewall, back up the Stingray Application Firewall configuration
    1. Log on to a command line
    2. Run /opt/zeus/stop-zeus
    3. Copy /opt/zeus/zeusafm/current/var/lib/config.db off the appliance.
  3. Shut down the original appliance.
  4. Deploy a new appliance with the same network interfaces as the original.
  5. If you backed up the application firewall configuration earlier, restore it here onto the new appliance, before you restore the traffic manager configuration:
    1. Copy the config.db file to /opt/zeus/stingrayafm/current/var/lib/config.db (overwriting the original)
    2. Check that the owner on the config.db file is root, and the mode is 0644.
  6. Import and restore the traffic manager configuration via the UI.
  7. If you have application firewall errors
    1. Use the Diagnose page to automatically fix any configuration errors
    2. Reset the Traffic Manager software.

 

Upgrading a cluster of Virtual Appliances (except Amazon EC2)

 

This process will replace the appliances in the cluster, one at a time, maintaining the same IP addresses. As the cluster will be reduced by one at points in the upgrade process, you should ensure that this is carried out at a time when the cluster is otherwise healthy, and of the n appliances in the cluster, the load can be handled by (n-1) appliances.

 

  1. Before beginning the process, ensure that any cluster errors have been resolved.
  2. Nominate the appliance which will be the last to be upgraded (call it the final appliance).  When any of the other machines needs to be removed from the cluster, it should be done using the UI on this appliance, and when a hostname and port are required to join the cluster, this appliance's hostname should be used.
  3. If you are using the Brocade Virtual Web Application Firewall first ensure that vWAF on the final appliance in the cluster is upgraded to the most recent version, using the vWAF updater.
  4. Choose an appliance to be upgraded, and remove the machine from the cluster:
    • If it is not the final appliance (nominated in step 2), this should be done via the UI on the final appliance
    • If it is the final appliance, the UI on any other machine may be used.
  5. Make a backup of the traffic manager configuration (System > Backups) on the appliance being upgraded, and export the backup.  This backup only contains the machine specific info for that appliance (networking config etc).
  6. Shut down the appliance, and deploy a new appliance at the new version.  When deploying, it needs to be given the identical hostname to the machine it's replacing.
  7. Log on to the admin UI of the new appliance, and import and restore the backup from step 5.
  8. If you are using the Brocade Virtual Web Application Firewall, accessing the Application Firewall tab in the UI will fail and there will be an error on the Diagnose page and an 'Update Configuration' button. Click the Update Configuration button once, then wait for the error to clear.  The configuration is now correct, but the admin server still needs to be restarted to pick up the configuration:

    # $ZEUSHOME/admin/rc restart

    Now, upgrade the application firewall on the new appliance to the latest version.
  9. Join into the cluster:
      • For all appliances except the final appliance, you must not select any of the auto-detected existing clusters.  Instead manually specify the hostname and port of the final appliance.
      • If you are using Brocade Virtual Web Application Firewall, there may be an issue where the config on the new machine hasn't synced the vWAF config from the old machine, and clicking the 'Update Application Firewall Cluster Status' button on the Diagnose page doesn't fix the problem. If this happens, firstly get the clusterPwd from the final appliance:
        1. # grep clusterPwd /opt/zeus/zxtm/conf/zeusafm.conf
          clusterPwd = <your cluster pwd>
        2. On the new appliance, edit /opt/zeus/zxtm/conf/zeusafm.conf (with e.g. nano or vi), and replace the clusterPwd with the final appliance's clusterPwd.
        3. The moment that file is saved, Brocade vWAF should get restarted, and the config should get synced to the new machine correctly.
      • When you are upgrading the final appliance, you should select the auto-detected existing cluster entry, which should now list all the other cluster peers.
      • Once a cluster contains multiple versions, configuration changes must not be made until the upgrade has been completed, and 'Cluster conflict' errors are expected until the end of the process.
  10. Repeat steps 4-9 until all appliances have been upgraded.

 

Upgrading a cluster of STM EC2 appliances

 

Because EC2 licenses are not tied to the IP address, it is recommended that new EC2 instances are deployed into a cluster before removing old instances.  This ensures that the capacity of the cluster is not reduced during the upgrade process.  This process is documented in the "Creating Traffic Manager Instances on Amazon EC2" chapter in the Brocade Virtual Traffic Manager Cloud Getting Started Guide.  The clusterPwd may also need to be fixed as above.

Tech Tip: Using the RESTful Control API with Ruby - listpools

by ricknelson on ‎03-26-2013 05:07 PM - edited on ‎07-14-2015 04:42 PM by PaulWallace (723 Views)

The following code uses Stingray's RESTful API to list all the pools defined on a cluster. The code is written in Ruby. This example has more extensive comments than the following examples, and most of these comments are applicable to all the examples. The program does a single GET request for the list of pools, loops through that list (each element of which is a hash), and outputs each pool name.

 

listpools.rb

 

require 'rest_client'

require 'base64'

require 'json'


puts "Pools:\n\n"


# Set the URL

url = 'https://stingray.example.com:9070/api/tm/1.0/config/active/pools'

# Setup the basic authorization header with the encoded Userid and Password.

# These need to match a UserId and Password for a Stingray user

auth = 'Basic ' + Base64.encode64('admin:admin')

begin

    # Do the HTTP GET to get the lists of pools

    response = RestClient.get(url, {:authorization => auth})

    data = JSON.parse(response.body) # Deserialize the JSON response into a hash

    pools = data['children']

    pools.each do |pool|

        puts pool['name']

    end

rescue => e

    # We weren't able to connect to the Stingray or there was a problem with the request.

    # The most likely reasons for this are:

    # - the hostname of the Stingray instance is incorrect

    # - this client doesn't have network access to the Stingray instance or port 9070

    # - the RESTful API is disabled

    # - the RESTful API is using a different port

    # - the URL is incorrect

    puts "Error getting pool list: URL=#{url} Error: #{e.message}"

end

 

Running the example

 

This code was tested with Ruby 1.9.3 and version 1.6.7 of the rest-client module.

 

Run the Ruby script as follows:

 

$ listpools.rb

Pools:

 

Pool1

Pool2

 

Read More

 

Tech Tip: Using the RESTful Control API with Ruby - listpoolnodes

by ricknelson on ‎03-26-2013 06:02 PM - edited on ‎07-14-2015 04:39 PM by PaulWallace (723 Views)

The following code uses Stingray's RESTful API to list all the pools defined for a cluster and, for each pool, the nodes defined for that pool, including draining and disabled nodes. The code is written in Ruby. This example builds on the previous listpools.rb example.  The program does a GET request for the list of pools and then, while looping through that list, does a GET for each pool to retrieve its configuration parameters.

 

listpoolnodes.rb

 

require 'rest_client'

require 'base64'

require 'json'


puts "Pools:\n\n"


url = 'https://stingray.example.com:9070/api/tm/1.0/config/active/pools'

auth = 'Basic ' + Base64.encode64('admin:admin')

begin

    # Do the HTTP GET to get the lists of pools

    response = RestClient.get(url, {:authorization => auth})

    data = JSON.parse(response.body)

    pools = data['children']

    pools.each do |pool|

        poolName = pool['name']

        begin

            # Do the HTTP GET to get the properties of a pool

            response = RestClient.get(url + '/' + URI.escape(poolName), {:authorization => auth})

            poolConfig = JSON.parse(response.body)

            # Since we are getting the properties for a pool we expect the first element to be 'properties'.

            # The value of the key 'properties' will be a hash containing property sections.  All the properties

            # that this program cares about are in the 'basic' section.  'nodes' is the array of all active or

            # draining nodes in this pool.  'draining' is the array of all draining nodes in this pool.  'disabled'

            # is the array of all disabled nodes in this pool.

            nodes = poolConfig['properties']['basic']['nodes']

            draining = poolConfig['properties']['basic']['draining']

            disabled = poolConfig['properties']['basic']['disabled']

            puts "Pool: #{poolName}"

            print "   Node: "

            nodes.each do |node|

                print node + ' '

            end

            puts

            if draining.length > 0

                print "   Draining: "

                draining.each do |node|

                    print node + ' '

                end

                puts

            end

            if disabled.length > 0

                print "   Disabled: "

                disabled.each do |node|

                    print node + ' '

                end

                puts

            end

            puts

        rescue => e

            puts "Error getting pool data&colon; URL=#{url + '/' + URI.escape(poolName)} Error: #{e.message}"

        end

    end

rescue => e

    puts "Error getting pool list: URL=#{url} Error: #{e.message}"

end

 

Running the example

 

This code was tested with Ruby 1.9.3 and version 1.6.7 of the rest-client module.

 

Run the Ruby script as follows:

 

$ listpoolnodes.rb

Pools:

 

Pool1

    Nodes:  192.168.1.100 192.168.1.101

    Draining:  192.168.1.101

    Disabled:  192.168.1.102

 

Pool2

    Nodes:  192.168.1.103 192.168.1.104

 

Read More

 

Feature Brief: Stingray's RESTful Control API

by ricknelson on ‎02-27-2013 09:42 AM - edited on ‎07-14-2015 04:31 PM by PaulWallace (5,201 Views)

Overview

 

Stingray's RESTful Control API allows HTTP clients to access and modify Stingray cluster configuration data.  For example, a program using standard HTTP methods can create or modify virtual servers and pools, or work with other Stingray configuration objects.

 

The RESTful Control API can be used by any programming language and application environment that supports HTTP.

 

Resources

 

The Stingray RESTful API is HTTP-based and published on port 9070.  Requests are made as standard HTTP requests, using the GET, PUT or DELETE methods.  Every RESTful call deals with a “resource”.  A resource can be one of the following:

 

  • A list of resources, for example, a list of Virtual Servers or Pools.
  • A configuration resource, for example a specific Virtual Server or Pool.
  • A file, for example a rule or a file from the extra directory.

 

Resources are referenced through a URI with a common directory structure.  For this first version of the Stingray RESTful API the URI for all resources starts with “/api/tm/1.0/config/active”, so for example to get a list of pools, the URI would be “/api/tm/1.0/config/active/pools” and to reference the pool named “testpool”, the URI would be “/api/tm/1.0/config/active/pools/testpool”.

 

When accessing the RESTful API from a remote machine, HTTPS must be used, but when accessing the RESTful API from a local Stingray instance, HTTP can be used.

 

By default, the RESTful API is disabled; when enabled, it listens on port 9070.  The RESTful API can be enabled and the port changed in the Stingray GUI by going to System->Security->REST API.

 

To complete the example, to reference the pool named “testpool” on the Stingray instance with a host name of “stingray.example.com”, the full URI would be “https://stingray.example.com:9070/api/tm/1.0/config/active/pools/testpool”.  To get a list of all the types of resources available you can access the URL “https://stingray.example.com:9070/api/tm/1.0/config/active”.

 

To retrieve the data for a resource you use the GET method, to add or change a resource you use the PUT method and to delete a resource you use the DELETE method.

 

Data Format

 

Data for resource lists and configuration resources are returned as JSON structures with a MIME type of "application/json".  JSON allows complex data structures to be represented as strings that can be easily passed in HTTP requests.  When the resource is a file, the data is passed in its raw format with a MIME type of "application/octet-stream".

 

For lists of resources the data returned will have the format:

 

{

     "children": [{

          "name": "",

          "href": "/api/tm/1.0/config/active/pools/"

     }, {

          "name": "",

          "href": "/api/tm/1.0/config/active/pools/"

     }]

}

 

For example, the list of pools, given two pools, “pool1” and “pool2” would be:

 

{

     "children": [{

          "name": "pool1",

          "href": "/api/tm/1.0/config/active/pools/pool1"

     }, {

          "name": "pool2",

          "href": "/api/tm/1.0/config/active/pools/pool2"

     }]

}
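A client can walk such a list with a few lines of code; the following Ruby sketch parses the two-pool response shown here (pasted inline rather than fetched over HTTP):

```ruby
require 'json'

# The resource list a GET on .../config/active/pools would return
body = <<END
{
     "children": [{
          "name": "pool1",
          "href": "/api/tm/1.0/config/active/pools/pool1"
     }, {
          "name": "pool2",
          "href": "/api/tm/1.0/config/active/pools/pool2"
     }]
}
END

# Each child carries the resource name and the URI to GET its configuration
children = JSON.parse(body)['children']
names = children.map { |child| child['name'] }
puts names.inspect   # ["pool1", "pool2"]
```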

 

For configuration resources, the data will contain one or more sections of properties, always with at least one section named "basic", and the property values can be of different types.  The format looks like:

 

{

     "properties": {

          "<section name>": {

               "<property name>": "<string value>",

               "<property name>": <numeric value>,

               "<property name>": <boolean value>,

               "<property name>": [<value>, <value>],

               "<property name>": {"<key>": <value>, "<key>": <value>}

          },

          "<section name>": {

               "<property name>": "<string value>",

               "<property name>": <numeric value>

          }

     }

}
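To make the shape concrete, here is a sketch in Ruby that builds the configuration resource for a pool and round-trips it through JSON (the section and property names are taken from the addpool examples elsewhere on this page):

```ruby
require 'json'

# Minimal configuration resource for a pool: one "basic" section with a
# list-valued "nodes" property, as used by the addpool examples
pool_config = {
  'properties' => {
    'basic' => {
      'nodes' => ['192.168.168.135:80']
    }
  }
}

# This JSON string is what a client would PUT to .../pools/<name>
body = JSON.generate(pool_config)
parsed = JSON.parse(body)
puts parsed['properties']['basic']['nodes'].inspect   # ["192.168.168.135:80"]
```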

 

Accessing the RESTful API

 

Any client or program that can handle HTTP requests can be used to access the RESTful API.  Basic authentication is used, with the usernames and passwords matching those used to administer Stingray.  To view the data returned by the RESTful API without having to do any programming, there are browser plug-ins that can be used; one that is available is the Chrome REST Console, and it is very helpful to have something like this during testing.  One nice thing about a REST API is that it is discoverable, so using something like the Chrome REST Console, you can walk the resource tree and see everything that is available via the RESTful API.  You can also add, change and delete data.  For more information on using the Chrome REST Console see: Tech Tip: Using Stingray's RESTful Control API with the Chrome REST Console

 

When adding or changing data, use the PUT method.  For configuration resources, the data sent in the request must be in JSON format and must match the data format returned when doing a GET on the same type of resource.  When adding a configuration resource you do not need to include all properties, just the minimum sections and properties required to add the resource; this will vary for each resource.  When changing data you only need to include the sections and properties that need to be changed.  To delete a resource, use the DELETE method.

 

Notes

 

  • An important caution when changing or deleting data is that this version of the RESTful API does not do data integrity checking.  The RESTful API will allow you to make changes that would not be allowed in the GUI or CLI.  For example, you can delete a Pool that is being used by a Virtual Server.  This means that when using the RESTful API, you should be sure to understand the data integrity requirements for the resources that you are changing or deleting and put validation in any programs you write.
  • This release of the RESTful API is not compatible with Multi-Site Manager, so both cannot be enabled at the same time.

 

Read more

 

Managing the growth of website content

by on ‎03-27-2013 05:03 AM - edited on ‎07-14-2015 04:16 PM by PaulWallace (600 Views)

As the content on a website grows, the structure of its URLs can change dramatically.  The addition of new applications and components can play havoc with the established URL space, and the development cost of supporting the old links that have been published in articles, referenced by search engines and bookmarked by users can be very high.

 

Stingray is in a great place to address this problem, and let your applications use the URL spaces most suited to them with little concern for backwards compatibility.  This article presents one technique you can employ to address this problem in a scalable and manageable manner.

 

A simple example

 

Let's start with a simple example; suppose you published content at the following URLs:

 

www.example.com/news.html
www.example.com/about.html
www.example.com/careers.html
www.example.com/demos.html

 

As your company grew, offices were added, more products were developed and content began to be broken out by location.  The original URL structure was no longer sustainable, and a deeper layer of content was necessary:

 

www.example.com/corporate/news.html
www.example.com/product/news.html
www.example.com/product/demos.html
www.example.com/about/cambridge.html
www.example.com/about/honalulu.html
www.example.com/careers/overview.html
www.example.com/careers/honalulu/engineering.html
www.example.com/careers/cambridge/research.html

 

The challenge is to serve out the best content to people who make requests to the old URLs.  You want a better solution than manually adding redirects to your webserver configuration file; ideally a solution that can be used by your web content team without intervention from IT.

 

A solution

 

Ideally, we would like to issue an HTTP redirect to the most appropriate page. This is, of course, simple using TrafficScript.

 

 

... but you want to avoid having to build and maintain a rule that looks like this:

 

$url = http.getPath();
 
if( $url == "/news.html" ) {
   http.redirect( "http://www.example.com/product/news.html" );
} else if( $url == "/about.html" ) {
   http.redirect( "http://www.example.com/about/cambridge.html" );
} else if ...
 
}

 

Wouldn't it be easier if you could maintain a file with a list of redirects, and Stingray could act on that?

 

/news.html http://www.example.com/product/news.html
/about.html http://www.example.com/about/cambridge.html

 

etc...?

 

You can use the ResourceTable libraries from HowTo: Store tables of data in TrafficScript - part 1 to help you do exactly that:

 

import ResourceTableSmall as table; 
 
$path = http.getPath(); 
 
$redirect = table.lookup( "redirects.txt", $path );
 
if( $redirect ) http.redirect( $redirect );

 

Managing the file of redirects

 

That leaves us with one problem - how best to manage the (albeit simple) file that contains the redirects?  The format is simple enough (space-separated key / value pairs, one per line) that anyone can edit it, but how do they get it into the Stingray configuration without using the complex and powerful Stingray Admin Interface?

 

There are a couple of simple approaches:

 

 

The REST approach is particularly attractive - you can use browser plugins like Chrome's REST Console to push configuration files into Stingray.

 

However, you may not want someone to use either approach directly, because it would imply giving them a full administrative logon to the Stingray cluster.  In that case, you could consider a simple command-line tool that they can use to upload the updated configuration file, using the RESTful Control API or the SOAP Control API (see Collected Tech Tips: Using the RESTful Control API and Collected Tech Tips: SOAP Control API examples).
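As a side note, the redirects file is simple enough to sanity-check before it is uploaded; here is a hypothetical validation sketch in Ruby (the space-separated format is the one shown above; the helper name is illustrative only):

```ruby
# Parse redirect rules of the form "old-path target-url", one per line,
# mirroring the lookup the TrafficScript rule performs
def parse_redirects(text)
  text.each_line.with_object({}) do |line, map|
    path, target = line.split(' ', 2)
    map[path] = target.strip if path && target
  end
end

redirects = parse_redirects(<<END)
/news.html http://www.example.com/product/news.html
/about.html http://www.example.com/about/cambridge.html
END

puts redirects['/news.html']   # http://www.example.com/product/news.html
```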

Tech Tip: Using the RESTful Control API with Ruby - startstopvs

by ricknelson on ‎03-27-2013 09:24 AM - edited on ‎07-14-2015 03:58 PM by PaulWallace (3,700 Views)

The following code uses Stingray's RESTful API to enable or disable a specific Virtual Server.   The code is written in Ruby.  This program checks to see if the Virtual Server "test vs" is enabled; if it is, it disables it, and if it is disabled, it enables it.  A GET is done to retrieve the configuration data for the Virtual Server and the "enabled" value in the "basic" properties section is checked.  This is a boolean value, so if it is true it is set to false, and if it is false it is set to true.  The changed data is then sent to the server using a PUT.

 

startstopvs.rb

 

require 'rest_client'

require 'base64'

require 'json'


vs = "test vs"

# Because there is a space in the virtual server name it must be escaped

url = 'https://stingray.example.com:9070/api/tm/1.0/config/active/vservers/' + URI.escape(vs)


auth = 'Basic ' + Base64.encode64('admin:admin')

begin

    # Get the config data for the virtual server

    response = RestClient.get(url, {:authorization => auth})

    # Decode the json response.  The result will be a hash

    vsConfig = JSON.parse(response.body)

    if vsConfig['properties']['basic']['enabled']

        # the virtual server is enabled, disable it.  We only need to send the data that we

        # are changing so create a new hash with just this data.

        newVSConfig = {'properties' => {'basic' => {'enabled' => false}}}

        puts "#{vs} is enabled.  Disable it."

    else

        # the virtual server is disabled, enable it

        newVSConfig = {'properties' => {'basic' => {'enabled' => true}}}

        puts "#{vs} is disabled.  Enable it."

    end

    # PUT the new data

    response = RestClient.put(url, JSON.generate(newVSConfig), {:content_type => :json, :authorization => auth})

rescue => e

    puts "Error: URL=#{url} Error: #{e.message}"

end

 

Running the example

 

This code was tested with Ruby 1.9.3 and version 1.6.7 of the rest-client module.

 

Run the Ruby script as follows:

 

$ startstopvs.rb

test vs is enabled. Disable it.

 

Notes

 

This program sends only the 'enabled' value to the server by creating a new hash with just this value in the 'basic' properties section.  Alternatively, the entire Virtual Server configuration could have been returned to the server with just the enabled value changed.  Sending just the data that has changed reduces the chances of overwriting another user's changes if multiple programs are concurrently accessing the RESTful API.

 

Read More

 

Tech Tip: Using the RESTful Control API with Ruby - addpool

by ricknelson on ‎03-27-2013 10:55 AM - edited on ‎07-14-2015 03:52 PM by PaulWallace (938 Views)

The following code uses Stingray's RESTful API to add a pool.   The code is written in Ruby.  This program creates a new pool, "rbtest", first doing a GET to make sure the pool doesn't already exist; if the pool doesn't exist, the pool is created by doing a PUT with just the minimum data needed to create a pool.  In this case the program creates a properties hash with just one node.  All other values will get default values when Stingray creates the pool.

 

addpool.rb

 

require 'rest_client'

require 'base64'

require 'json'


poolName = 'rbtest'

poolConfig = {'properties' => {'basic' => {'nodes' => ['192.168.168.135:80']}}}


url = 'https://stingray.example.com:9070/api/tm/1.0/config/active/pools/' + poolName

auth = 'Basic ' + Base64.encode64('admin:admin')

begin

    # First see if the pool already exists.  If it does not exist, a 404 will be returned,

    # which will cause a resource.not_found exception

    response = RestClient.get(url, {:authorization => auth})

    puts "Pool #{poolName} already exists"

rescue => e

    # If a 404 is returned then e.response will be a json object, otherwise it may not be

    if defined?(e.response)

        error = JSON.parse(e.response)

        if error['error_id'] == 'resource.not_found'

            begin

                # PUT the new data

                response = RestClient.put(url, JSON.generate(poolConfig), {:content_type => :json, :authorization => auth})

                if response.code == 201 # When creating a new resource we expect to get a 201

                    puts "Pool #{poolName} added"

                else

                    puts "Bad status code #{response.code} when adding pool"

                end

            rescue => e

                puts "Error: URL=#{url} Error: #{e.message}"

            end

        else

            puts "Error: URL=#{url} Error: #{e.message}"

        end

    else

        puts "Error: URL=#{url} Error: #{e.message}"

    end

end

 

Running the example

 

This code was tested with Ruby 1.9.3 and version 1.6.7 of the rest-client module.

 

Run the Ruby script as follows:

 

$ addpool.rb

Pool rbtest added

 

Notes

 

The only difference between doing a PUT to change a resource and a PUT to add a resource is the HTTP status code returned.  When changing a resource 200 is the expected status code and when adding a resource, 201 is the expected status code.

 

Read More

 

Tech Tip: Using the RESTful Control API with Ruby - deletepool

by ricknelson on ‎03-27-2013 11:21 AM - edited on ‎07-14-2015 03:48 PM by PaulWallace (606 Views)

The following code uses Stingray's RESTful API to delete a pool.  The code is written in Ruby.  This program deletes the "rbtest" pool created by the addpool.rb example.  To delete a resource you do an HTTP DELETE on the URI for the resource.  If the delete is successful, a 204 HTTP status code will be returned.

 

deletepool.rb

 

require 'rest_client'

require 'base64'

require 'json'


poolName = 'rbtest'

url = 'https://stingray.example.com:9070/api/tm/1.0/config/active/pools/' + poolName

auth = 'Basic ' + Base64.encode64('admin:admin')

begin

    # Try to delete the pool.  If it exists and is deleted we will get a 204. If it doesn't

    # exist we will get a 404, which will cause a resource.not_found exception

    response = RestClient.delete(url, {:authorization => auth})

    if response.code == 204

        puts "Pool #{poolName} deleted"

    else

        puts "Bad status code #{response.code} when deleting pool"

    end

rescue => e

    # If a 404 is returned then e.response will be a json object, otherwise it may not be

    if defined?(e.response)

        error = JSON.parse(e.response)

        if error['error_id'] == 'resource.not_found'

            puts "Pool #{poolName} not found"  

        else

            puts "Error: URL=#{url} Error: #{e.message}"

        end

    else

        puts "Error: URL=#{url} Error: #{e.message}"

    end

end

 

Running the example

 

This code was tested with Ruby 1.9.3 and version 1.6.7 of the rest-client module.

 

Run the Ruby script as follows:

 

$ deletepool.rb

Pool rbtest deleted

 

Read More

 

Tech Tip: Using the RESTful Control API with Ruby - addextrafile

by ricknelson on ‎03-27-2013 02:25 PM - edited on ‎07-14-2015 03:45 PM by PaulWallace (650 Views)

The following code uses Stingray's RESTful API to add a file to the extra directory.   The code is written in Ruby.  This program adds the file 'validserialnumbers' to the extra directory; if the file already exists, it will be overwritten.  If this is not the desired behavior, code can be added to check for the existence of the file, as was done in the addpool example.

 

addextrafile.rb

 

require 'rest_client'

require 'base64'

require 'json'


fileName = 'validserialnumbers'

url = 'https://stingray.example.com:9070/api/tm/1.0/config/active/extra/' + fileName

auth = 'Basic ' + Base64.encode64('admin:admin')


validSerialNumbers = <<END

123456

234567

345678

END


begin

    # For files, the MIME type is octet-stream
    response = RestClient.put(url, validSerialNumbers, {:content_type => 'application/octet-stream', :authorization => auth})

    # If the file already exists, it will be replaced with this version and 204 will be returned

    # otherwise 201 will be returned.    

    if response.code == 201 || response.code == 204

        puts "File #{fileName} added"

    else

        puts "Bad status code #{response.code} when adding file #{fileName}"

    end

rescue => e

    puts "Error: URL=#{url} Error: #{e.message}"

end

 

Running the example

 

This code was tested with Ruby 1.9.3 and version 1.6.7 of the rest-client module.

 

Run the Ruby script as follows:

 

$ addextrafile.rb

File validserialnumbers added

 

Notes

 

Since this is a file and not a configuration resource, JSON will not be used and the MIME type will be "application/octet-stream".  Another difference when dealing with files is how Stingray handles adding a file that already exists.  If the file already exists, Stingray will overwrite it and return an HTTP status code of 204.  If the file doesn't already exist, the HTTP status code will be 201.

 

Read More

 

Tech Tip: Using the RESTful Control API with Perl - startstopvs

by ricknelson on ‎03-28-2013 03:18 PM - edited on ‎07-14-2015 03:28 PM by PaulWallace (6,500 Views)

The following code uses Stingray's RESTful API to enable or disable a specific Virtual Server.   The code is written in Perl.  This program checks to see if the Virtual Server "test vs" is enabled; if it is, it disables it, and if it is disabled, it enables it.  A GET is done to retrieve the configuration data for the Virtual Server and the "enabled" value in the "basic" properties section is checked.  This is a boolean value, so if it is true it is set to false, and if it is false it is set to true.  The changed data is then sent to the server using a PUT.

 

startstopvs.pl

 

#!/usr/bin/perl


use REST::Client;

use MIME::Base64;

use JSON;

use URI::Escape;


# Since Stingray is using a self-signed certificate we don't need to verify it

$ENV{'PERL_LWP_SSL_VERIFY_HOSTNAME'} = 0;


my $vs = "test vs";

# Because there is a space in the virtual server name it must be escaped

my $url = "/api/tm/1.0/config/active/vservers/" . uri_escape($vs);


# Set up the connection

my $client = REST::Client->new();

$client->setHost("https://stingray.example.com:9070");

$client->addHeader("Authorization", "Basic " . encode_base64("admin:admin"));


# Get configuration data for the virtual server

$client->GET($url);


# Decode the json response.  The result will be a hash

my $vsConfig = decode_json $client->responseContent();


if ($client->responseCode() == 200) {

    if ($vsConfig->{properties}->{basic}->{enabled}) {

        # The virtual server is enabled, disable it.  We only need to send the data that we 

        # are changing so create a new hash with just this data.

        %newVSConfig = (properties => { basic => { enabled => JSON::false}});

        print "$vs is Enabled.  Disable it.\n";

    } else {

        # The virtual server is disabled, enable it.

        %newVSConfig = (properties => { basic => { enabled => JSON::true}});

        print "$vs is Disabled.  Enable it.\n";

    }

    $client->addHeader("Content-Type", "application/json");

    $client->PUT($url, encode_json(\%newVSConfig));

    $vsConfig = decode_json $client->responseContent();

    if ($client->responseCode() != 200) {

        print "Error putting virtual server config. status=" . $client->responseCode() . " Id=" . $vsConfig->{error_id} . ": " . $vsConfig->{error_text} . "\n";

    }

} else {

    print "Error getting virtual server config. status=" . $client->responseCode() . " Id=" . $vsConfig->{error_id} . ": " . $vsConfig->{error_text} . "\n";

}

 

Running the example

 

This code was tested with Perl 5.14.2 and version 249 of the REST::Client module.

 

Run the Perl script as follows:

 

$ startstopvs.pl

test vs is enabled. Disable it.

 

Notes

 

This program sends only the 'enabled' value to the server by creating a new hash with just this value in the 'basic' properties section.  Alternatively, the entire Virtual Server configuration could have been returned to the server with just the enabled value changed.  Sending just the data that has changed reduces the chances of overwriting another user's changes if multiple programs are concurrently accessing the RESTful API.

 

Read More

 

Tech Tip: Using the RESTful Control API with Perl - addpool

by ricknelson on ‎04-02-2013 02:32 AM - edited on ‎07-14-2015 03:25 PM by PaulWallace (1,243 Views)

The following code uses Stingray's RESTful API to add a pool.   The code is written in Perl. This program creates a new pool, "pltest", first doing a GET to make sure the pool doesn't already exist, and if the pool doesn't exist, the pool is created by doing a PUT with just the minimum data needed to create a pool.  In this case the program creates a properties hash with just one node.  All other values will get default values when Stingray creates the pool.

 

addpool.pl

 

#!/usr/bin/perl


use REST::Client;

use MIME::Base64;

use JSON;


# Since Stingray is using a self-signed certificate we don't need to verify it

$ENV{'PERL_LWP_SSL_VERIFY_HOSTNAME'} = 0;


my $poolName = 'pltest';

my %pool = (properties => {basic => {nodes => [ '192.168.168.135:80']}});

my $url = "/api/tm/1.0/config/active/pools/$poolName";


# Set up the connection

my $client = REST::Client->new();

$client->setHost("https://stingray.example.com:9070");

$client->addHeader("Authorization", "Basic " . encode_base64("admin:admin"));


#First see if the pool already exists

$client->GET($url);

if ($client->responseCode == 404) {

    $client->addHeader("Content-Type", "application/json");

    $client->PUT($url, encode_json(\%pool));

    my $poolConfig = decode_json $client->responseContent();

    if ($client->responseCode() == 201) { # When creating a new resource we expect to get a 201

        print "Pool $poolName added";

    } else {

        print "Error adding pool. status=" . $client->responseCode() . " Id=" . $poolConfig->{error_id} . ": " . $poolConfig->{error_text} . "\n";

    }

} else {

    if ($client->responseCode() == 200) {

        print "Pool $poolName already exists";

    } else {

        my $poolConfig = decode_json $client->responseContent();

        print "Error getting pool config. status=" . $client->responseCode() . " Id=" . $poolConfig->{error_id} . ": " . $poolConfig->{error_text} . "\n";

    }

}

 

Running the example

 

This code was tested with Perl 5.14.2 and version 249 of the REST::Client module.

 

Run the Perl script as follows:

 

$ addpool.pl

Pool pltest added

 

Notes

 

The only difference between doing a PUT to change a resource and a PUT to add a resource is the HTTP status code returned.  When changing a resource 200 is the expected status code and when adding a resource, 201 is the expected status code.

 

Read More

 

Tech Tip: Using the RESTful Control API with Perl - deletepool

by ricknelson on ‎04-02-2013 03:12 AM - edited on ‎07-14-2015 03:19 PM by PaulWallace (731 Views)

The following code uses Stingray's RESTful API to delete a pool.  The code is written in Perl.  This program deletes the "pltest" pool created by the addpool.pl example.  To delete a resource you do an HTTP DELETE on the URI for the resource.  If the delete is successful, a 204 HTTP status code will be returned.

 

deletepool.pl

 

#!/usr/bin/perl


use REST::Client;

use MIME::Base64;

use JSON;


# Since Stingray is using a self-signed certificate we don't need to verify it

$ENV{'PERL_LWP_SSL_VERIFY_HOSTNAME'} = 0;


my $poolName = 'pltest';

my $url = "/api/tm/1.0/config/active/pools/$poolName";


# Set up the connection

my $client = REST::Client->new();

$client->setHost("https://stingray.example.com:9070");

$client->addHeader("Authorization", "Basic " . encode_base64("admin:admin"));


# Try to delete the pool.  A 204 means it was deleted; a 404 means it did not exist

$client->DELETE($url);

if ($client->responseCode == 204) {

    print "Pool $poolName deleted";

} elsif ($client->responseCode == 404) {

    print "Pool $poolName not found";

} else {

    print "Error deleting pool $poolName. Status: " . $client->responseCode . " URL: $url";

}

 

Running the example

 

This code was tested with Perl 5.14.2 and version 249 of the REST::Client module.

 

Run the Perl script as follows:

 

$ deletepool.pl

Pool pltest deleted

 

Read More

 

Tech Tip: Using the RESTful Control API with Perl - addextrafile

by ricknelson on ‎04-02-2013 03:31 AM - edited on ‎07-14-2015 03:08 PM by PaulWallace (803 Views)

The following code uses Stingray's RESTful API to add a file to the extra directory.   The code is written in Perl.  This program adds the file 'validserialnumbers' to the extra directory; if the file already exists, it will be overwritten.  If this is not the desired behavior, code can be added to check for the existence of the file, as was done in the addpool example.

 

addextrafile.pl

 

#!/usr/bin/perl


use REST::Client;

use MIME::Base64;


#Since Stingray is using a self-signed certificate we don't need to verify it

$ENV{'PERL_LWP_SSL_VERIFY_HOSTNAME'} = 0;


my $fileName = 'validserialnumbers';

my $url = "/api/tm/1.0/config/active/extra/$fileName";

my $validSerialNumbers = <<END;

123456

234567

345678

END


# Set up the connection

my $client = REST::Client->new();

$client->setHost("https://stingray.example.com:9070");

$client->addHeader("Authorization", "Basic " . encode_base64("admin:admin"));

# For files, the MIME type is octet-stream

$client->addHeader("Content-Type", "application/octet-stream");


$client->PUT($url, $validSerialNumbers);

# If the file already exists, it will be replaced with this version and 204 will be returned

# otherwise 201 will be returned.

if ($client->responseCode() == 201 || $client->responseCode() == 204) {

    print "File $fileName added";

} else {

    print "Error adding file $fileName.  Status: " . $client->responseCode() . " URL: $url";

}

 

Running the example

 

This code was tested with Perl 5.14.2 and version 249 of the REST::Client module.

 

Run the Perl script as follows:

 

$ addextrafile.pl

File validserialnumbers added

 

Notes

 

Since this is a file and not a configuration resource, JSON will not be used and the MIME type will be "application/octet-stream".  Another difference when dealing with files is how Stingray handles adding a file that already exists.  If the file already exists, Stingray will overwrite it and return an HTTP status code of 204.  If the file doesn't already exist, the HTTP status code will be 201.

 

Read More

 

libLDAP.rts: a TrafficScript LDAP Library

by markbod on ‎12-18-2012 09:24 AM - edited on ‎07-14-2015 02:35 PM by PaulWallace (984 Views)

The libLDAP.rts library and supporting library files (written by Mark Boddington) allow you to interrogate and modify LDAP traffic from a TrafficScript rule, and to respond directly to an LDAP request when desired.

 

You can use the library to meet a range of use cases, as described in the document Managing LDAP traffic with libLDAP.rts.

 

Note: This library allows you to inspect and modify LDAP traffic as it is balanced by Stingray.  If you want to issue LDAP requests from Stingray, check out the auth.query() TrafficScript function for this purpose, or the equivalent Authenticating users with Active Directory and Stingray Java Extensions Java Extension.

 

Overview

 

A long, long time ago on a Traffic Manager far, far away, I (Mark Boddington) wrote some libraries for processing LDAP traffic in TrafficScript:

 

  • libBER.rts – This is a TrafficScript library which implements all of the Basic Encoding Rules (BER) functionality required for LDAP. It does not completely implement BER, though: LDAP doesn't use all of the available types, and this library doesn't implement the types LDAP doesn't require.
  • libLDAP.rts – This is a TrafficScript library of functions which can be used to inspect and manipulate LDAP requests and responses. It requires libBER.rts to encode the LDAP packets.
  • libLDAPauth.rts – This is a small library which uses libLdap to provide simple LDAP authentication to other services.

 

That library (version 1.0) mostly focused on inspecting LDAP requests. It was not particularly well suited to processing LDAP responses. Now, thanks to a Stingray PoC being run in partnership with the guys over at Clever Consulting, I've had cause to revisit this library and improve upon the original. I'm pleased to announce libLDAP.rts version 1.1 has arrived.

 

 

What's new in libLdap Version 1.1?

 

  • Lazy Decoding. The library now only decodes the envelope  when getPacket() or getNextPacket() is called. This gets you the MessageID and the Operation. If you want to process further, the other functions handle decoding additional data as needed.
  • New support for processing streams of LDAP Responses. Unlike requests, LDAP responses are typically made up of multiple LDAP messages. The library can now be used to process multiple packets in a response.
  • New SearchResult processing functions: getSearchResultDetails(), getSearchResultAttributes() and updateSearchResultDetails()

 

Lazy Decoding

 

Now that the decoding is lazier it means you can almost entirely bypass decoding for packets which you have no interest in. So if you only want to check BindRequests and/or BindResponses then those are the only packets you need to fully decode. The rest are sent through un-inspected (well except for the envelope).

 

Support for LDAP Response streams

 

We now have several functions to allow you to process responses which are made up of multiple LDAP messages, such as those for Search Requests. You can use a loop with the "getNextPacket($packet["lastByte"])" function to process each LDAP message as it is returned from the LDAP server. The LDAP packet hash now has a "lastByte" entry to help you keep track of the messages in the stream. There is also a new skipPacket() function to allow you to skip the encoder for packets which you aren't modifying.

 

Search Result Processing

 

With the ability to process response streams I have added a number of functions specifically for processing SearchResults. The getSearchResultDetails() function will return a SearchResult hash which contains the ObjectName decoded. If you are then interested in the object you can call getSearchResultAttributes() to decode the Attributes which have been returned. If you make any changes to the Search Result you can then call updateSearchResultDetails() to update the packet, and then encodePacket() to re-encode it. Of course if at any point you determine that no changes are needed then you can call skipPacket() instead.

 

Example - Search Result Processing

 

import libLDAP.rts as ldap;

$packet = ldap.getNextPacket(0);

while ( $packet ) {

   # Get the Operation

   $op = ldap.getOp($packet);


   # Are we a Search Result Entry?

   if ( $op == "SearchResultEntry" ) {

      $searchResult = ldap.getSearchResultDetails($packet);

      # Is the LDAPDN within example.com?

      if ( string.endsWith($searchResult["objectName"], "dc=example,dc=com") ) {


         # We have a search result in the tree we're interested in. Get the Attributes

         ldap.getSearchResultAttributes($searchResult);

         # Process all User Objects

         if ( array.contains($searchResult["attributes"]["objectClass"], "inetOrgPerson") ) {


            # Log the DN and all of the attributes

            log.info("DN: " . $searchResult["objectName"] );

            foreach ( $att in hash.keys($searchResult["attributes"]) ) {

               log.info($att . " = " . lang.dump($searchResult["attributes"][$att]) );

            }


            # Add the users favourite colour

            $searchResult["attributes"]["Favourite_Colour"] = [ "Riverbed Orange" ];


            # If the password attribute is included.... remove it

            hash.delete($searchResult["attributes"], "userPassword");


            # Update the search result

            ldap.updateSearchResultDetails($packet, $searchResult);


            # Commit the changes

            $stream .= ldap.encodePacket( $packet );

            $packet = ldap.getNextPacket($packet["lastByte"]);

            continue;

         }

      }

   }

   # Not an interesting packet. Skip and move on.

   $stream .= ldap.skipPacket( $packet );

   $packet = ldap.getNextPacket($packet["lastByte"]);

}

response.set($stream);

response.flush();

 

This example reads each packet in turn by calling getNextPa