vADC Docs

Feature Brief: Introduction to the Stingray Architecture

Published 03-12-2013

Stingray Traffic Manager operates as a layer 7 proxy.  It receives traffic on nominated IP addresses and ports and reads the client request.  After processing the request internally, Stingray selects a candidate 'node' (back-end server).  It writes the request using a new connection (or an existing keepalive connection) to that server and reads the server response.  Stingray processes the response, then writes it back to the client.
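The proxy cycle above (accept, read request, select node, forward, relay response) can be sketched in miniature. This is an illustrative Python sketch using raw sockets, not Stingray code; the node list and single-read handling are simplifying assumptions.

```python
# Minimal sketch of the layer 7 proxy cycle: read the client request,
# select a back-end node, forward the request, relay the response.
# (Illustrative only -- real proxies stream data and parse protocols.)

import socket
import threading

NODES = [("127.0.0.1", 9081)]  # hypothetical back-end nodes
_next = 0

def select_node():
    """Trivial round-robin selection of a candidate node."""
    global _next
    node = NODES[_next % len(NODES)]
    _next += 1
    return node

def handle(client):
    request = client.recv(65536)              # read the client request
    backend = socket.create_connection(select_node())
    backend.sendall(request)                  # write it to the chosen node
    response = backend.recv(65536)            # read the server response
    client.sendall(response)                  # write it back to the client
    backend.close()
    client.close()

def serve(listen_port):
    """Listen on the published port and hand each connection to a thread."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

A production proxy additionally reuses keepalive connections to nodes, as the paragraph above notes.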

[Image: stingray1.png]

Stingray performs a wide range of traffic inspection, manipulation and routing tasks, from SSL decryption and service protection, through load balancing and session persistence, to content compression and bandwidth management.  This article explains how each task fits within the architecture of Stingray.

Virtual Servers and Pools

The key configuration objects in Stingray are the Virtual Server and the Pool:

[Image: stingray2.png]

The Virtual Server manages the connections between the remote clients and Stingray. It listens for requests on the published IP address and port of the service.

The Pool manages the connections between Stingray and the back-end nodes (the servers which provide the service). A pool represents a group of back-end nodes.
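The relationship between the two objects can be modelled as a toy data structure. This is only an illustration of the concept, not Stingray's actual configuration schema; the names and addresses are hypothetical.

```python
# Toy model of the two key configuration objects: a Virtual Server
# listens on the published IP/port and references a Pool of nodes.
# (Conceptual illustration only, not Stingray's configuration format.)

from dataclasses import dataclass
from typing import List

@dataclass
class Pool:
    """A group of back-end nodes that provide the service."""
    name: str
    nodes: List[str]          # e.g. "host:port" entries

@dataclass
class VirtualServer:
    """Manages client connections and hands requests to a pool."""
    name: str
    listen_ip: str
    listen_port: int
    default_pool: Pool

web_pool = Pool("web", ["10.0.0.1:80", "10.0.0.2:80"])
vs = VirtualServer("public-www", "192.0.2.1", 80, web_pool)
```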

Everything else

All other data-plane (relating to client traffic) functions of Stingray are associated with either a virtual server or a pool:

[Image: stingray3.png]

Health Monitors run asynchronously, probing the servers with built-in and custom tests to verify that they are operating correctly.  If a server fails, it is taken out of service.
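The monitor behaviour can be sketched with a simple TCP connect test: nodes that fail the probe are taken out of the active set. This is illustrative only; Stingray ships richer built-in tests and supports custom ones.

```python
# Sketch of a health monitor: probe each node with a TCP connect test
# and partition the nodes into in-service and out-of-service sets.
# (Illustrative only -- real monitors run asynchronously on a schedule.)

import socket

def probe(node, timeout=1.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    host, port = node
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def monitor(nodes):
    """Split nodes into (healthy, failed) based on the probe."""
    healthy = [n for n in nodes if probe(n)]
    failed = [n for n in nodes if n not in healthy]
    return healthy, failed
```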

1. Virtual Server's processing (request)

The Virtual Server listens for TCP connections or UDP datagrams on its nominated IP and port.  It reads the client request and processes it:

  • SSL Decryption is performed by a virtual server. It references certificates and CRLs that are stored in the configuration catalog.

  • Service Protection is configured by Service Protection Classes which reside in the catalog. Service Protection defines which requests are acceptable, and which should be discarded immediately.

  • A Virtual Server then executes any Request Rules. These rules reside in the catalog. They can manipulate traffic, and select a pool for each request.
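In Stingray, request rules are written in TrafficScript; the Python function below is only an analogue of the kind of logic a request rule performs: inspect the request and select a pool for it. The pool names and routing conditions are hypothetical.

```python
# Analogue of a request rule: examine the request path and choose a
# pool, or return None to fall through to the virtual server's default
# pool. (Hypothetical routing logic, not TrafficScript.)

def request_rule(path: str):
    """Route API traffic and static assets to different pools."""
    if path.startswith("/api/"):
        return "api-pool"
    if path.endswith((".png", ".css", ".js")):
        return "static-pool"
    return None   # no selection: the default pool will be used
```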

2. Pool's processing

The request rules may select a pool to handle the request. If they complete without selecting a pool, the virtual server's 'default pool' is used:

  • The pool performs load-balancing calculations, as specified by its configuration. A number of load balancing algorithms are available.

  • A virtual server's request rule may have selected a session persistence class, or a pool may have a preferred session persistence class. In this case, the pool will endeavour to send requests in the same session to the same node, overriding the load-balancing decision. Session persistence classes are stored in the catalog and referenced by a pool or rule.

  • Finally, a pool may SSL-encrypt traffic before sending it to a back-end node. SSL encryption may reference client certificates, root certs and CRLs in the catalog to authenticate and authorize the connection.
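The pool-side decision above can be sketched as follows: a load-balancing algorithm (round-robin here, one of several Stingray offers) picks a node, but an active session persistence class overrides that choice by pinning each session to the node it first used. Illustrative only; the persistence-keying scheme is a simplifying assumption.

```python
# Sketch of pool processing: round-robin load balancing, overridden by
# session persistence when a session id is present. (Illustrative only.)

import itertools

class Pool:
    def __init__(self, nodes):
        self.nodes = nodes
        self._rr = itertools.cycle(nodes)   # round-robin state
        self._sessions = {}                 # session id -> pinned node

    def select_node(self, session_id=None):
        if session_id is not None:
            # Persistence overrides the load-balancing decision: requests
            # in the same session always go to the same node.
            if session_id not in self._sessions:
                self._sessions[session_id] = next(self._rr)
            return self._sessions[session_id]
        return next(self._rr)
```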

3. Virtual Server's processing (response)


The pool waits for a response from a back-end node, and may retry requests if an error is detected or a response is not received within a timeout period. When a response is received, it is handed back to the virtual server:

  • The virtual server may run Response Rules to modify the response, or to retry it if it was not acceptable. Response rules are stored in the catalog.

  • A virtual server may be configured to compress HTTP responses. Responses are only compressed if the remote client has indicated that it can accept compressed content.

  • The virtual server may be configured to write a log file entry to record the request and response. HTTP access log formats are available, and formats for other protocols can be configured.

  • A request rule may have selected a Service Level Monitoring class to monitor the connection time, or the virtual server may have a default class. These service level monitoring classes are stored in the catalog, and are used to detect poor response times from back-end nodes.

  • Finally, a virtual server may assign the connection to a Bandwidth Management Class. A bandwidth class is used to restrict the bandwidth available to a connection; these classes are stored in the catalog.
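The response-compression check described above can be sketched directly: the body is gzip-compressed only when the client's Accept-Encoding header indicates it can handle gzip. Illustrative only; Stingray's implementation also weighs content type and size.

```python
# Sketch of conditional response compression: honour the client's
# Accept-Encoding header before gzipping the body. (Illustrative only.)

import gzip

def maybe_compress(body: bytes, accept_encoding: str):
    """Return (body, content_encoding) for the response."""
    if "gzip" in accept_encoding.lower():
        return gzip.compress(body), "gzip"
    return body, "identity"
```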

Many of the more complex configuration objects are stored in the configuration catalog. These objects are referenced by a virtual server, pool or rule, and they can be used by a number of different services if desired.

Other configuration objects

Two other configuration objects are worthy of note:

  • Monitors are assigned to a pool, and are used to asynchronously probe back-end nodes to detect whether they are available or not. Monitors reside in the catalog.

  • Traffic IP Groups are used to configure the fault-tolerant behavior of Stingray. They define groups of IP addresses that are shared across a fault-tolerant cluster.

Configuration

Core service objects - Virtual Servers, Pools, Traffic IP Groups - are configured using the 'Services' part of the Stingray Admin server:

[Image: Screen Shot 2013-03-12 at 23.13.11.png]

Catalog objects and classes - Rules, Monitors, SSL certificates, Service Protection, Session Persistence, Bandwidth Management and Service Level Monitoring classes - are configured using the 'Catalogs' part of the Stingray Admin server:

[Image: Screen Shot 2013-03-12 at 23.13.35.png]

Most types of catalog objects are referenced by a virtual server or pool configuration, or by a rule invoked by the virtual server.

Read more

For more information on the key features, refer to the Product Briefs for Stingray Traffic Manager.