vADC Docs

Using Stingray Traffic Manager as a Webserver

Published 04-22-2013; edited 07-08-2015 by PaulWallace

Stingray Traffic Manager has a host of great, capable features to improve the performance and reliability of your web servers.  However, is there sometimes a case for putting the web content on Stingray directly, and using it as a webserver?

 

The role of a webserver in a modern application has shrunk over the years.  Now it's often just a front-end for a variety of application servers, performing authentication and serving simple static web content... not unlike the role of Stingray.  Often, its position in the application changes when Stingray is added:

 

[Image: webserver1.png]

 

Now, if you could move the static content on to Stingray Traffic Manager, wouldn't that help to simplify your application architecture still further? This article presents three ways to do just that:

 

A simple webserver in TrafficScript

 

TrafficScript can send back web pages without difficulty, using the http.sendResponse() function.  It can load web content directly from Stingray's Resource Directory (the Extra Files catalog).

 

Here's a simple TrafficScript webserver that intercepts all requests for content under '/static' and attempts to satisfy them with files from the Resource directory:

 

# We will serve static web pages for all content under this directory
$static = "/static/";

$page = http.getPath();
if( !string.startsWith( $page, $static )) break;

# Look for the file in the resource directory
$file = string.skip( $page, string.length( $static ));

if( resource.exists( $file )) {
   # Page found!
   http.sendResponse( 200, "text/html", resource.get( $file ), "" );
} else {
   # Page not found, send an error back
   http.sendResponse( 404, "text/html", "Not found", "" );
}

 

Add this code as a request rule on your virtual server and upload some files to Catalog > Extra Files.  You can then browse them from Stingray, using URLs beginning with /static.

 

This is a very basic example.  In particular, it does not support MIME types - it assumes everything is text/html.  For a more sophisticated TrafficScript example, see the Sending custom error pages article, which shows how to host an entire web page (images, CSS and all) that you can use as an error page if your webservers are down.
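As an illustration, the basic rule could be extended with a simple extension-based Content-Type lookup before sending the response. This is a minimal sketch only: the extension-to-type mapping below is illustrative and far from exhaustive, and it assumes the $file variable from the rule above:

```
# Sketch: choose a Content-Type from the file extension before responding.
# The extension list here is illustrative; extend it to match your content.
$type = "text/html";
if( string.endsWith( $file, ".css" ) ) {
   $type = "text/css";
} else if( string.endsWith( $file, ".js" ) ) {
   $type = "application/javascript";
} else if( string.endsWith( $file, ".png" ) ) {
   $type = "image/png";
} else if( string.endsWith( $file, ".jpg" ) ) {
   $type = "image/jpeg";
}

http.sendResponse( 200, $type, resource.get( $file ), "" );
```

A lookup like this covers the common cases; a production webserver would normally derive the type from a full MIME map rather than a hand-maintained list.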

 

However, the example also does not support directory indices... in fact, because the docroot is the Extra Files catalog, there's no (easy) way to manage a hierarchy of web content.  Wouldn't it be better if you could serve your web content directly from a directory on disk?

 

A more sophisticated Web Server in Java

 

The article Serving Web Content from Stingray using Java presents a more sophisticated web server written in Java.  It runs as a Java Extension and can access files in a nominated docroot outside of the Stingray configuration.  It supports mime types and directory indices.

 

Another webserver - in Python

 

Stingray can also run application code in Python.  The article PyRunner.jar: Running Python code in Stingray Traffic Manager describes an implementation that runs Python code on Stingray's local JVM, and the article Serving Web Content from Stingray using Python presents an alternative webserver written in Python.

 

Optimizing for Performance

 

Perhaps the most widely used feature in Stingray Traffic Manager, after basic load balancing, health monitoring and the like, is Content Caching.  Content Caching is a very easy and effective way to reduce the overhead of generating web content, whether the content is read from disk or generated by an application.  The content generated by our webserver implementations is fairly 'static' (does not change) so it's ripe for caching to reduce the load on our webservers.

 

There's one complication - you can't just turn content caching on and expect it to work in this situation.  That's because content caching hooks into two stages in the transaction lifecycle in Stingray:

 

  • Once all request rules have completed, Stingray will examine the current request and determine if there's a suitable response in the cache
  • Once all response rules have completed, Stingray will inspect the final response and decide whether it can be cached for future reuse

 

In our webserver implementations, the content is generated during the request processing step and written back to the client using http.sendResponse() (or equivalent).  We never run any response rules (so the content cannot be cached) and we never get to the end of the request rules (so we would not check the cache anyway).

 

The elegant solution is to create a virtual server in Stingray specifically to run the webserver extension.  The primary virtual server can forward traffic to your application servers, or to the internal 'webserver' virtual server as appropriate.  It can then cache the responses (and respond directly to future requests) without any difficulty:


[Image: webserver2.png]

 

The primary Virtual Server decides whether to direct traffic to the back-end servers, or to the internal web server using an appropriate TrafficScript rule:

 

  • Create a new HTTP virtual server - this will be used to run the webserver extension. Configure the virtual server to listen on localhost only, as it need not be reachable from anywhere else.  Choose an unused high port, such as 8080.

 

  • Add the appropriate TrafficScript rule to this virtual server to make it run the webserver extension.

 

  • Create a pool called 'Internal Web Server' that will direct traffic to this new virtual server. The pool should contain the node localhost:[port].

 

  • Extend the TrafficScript rule on the original virtual server to:

 

# Use the internal web server for static content
if( string.startsWith( http.getPath(), "/static" )) {
   pool.use( "Internal Web Server" );
}

 

  • Enable the web cache on the original virtual server.

 

Now, all traffic goes to the original virtual server as before. Requests for static pages are directed to the internal web server, and the content it returns will be cached. With this configuration, Stingray should be able to serve web pages as quickly as needed.

 

Don't forget that all the other Stingray features like content compression, logging, rate-shaping, SSL encryption and so on can all be used with the new internal web server. You can even use response rules to alter the static pages as they are sent out.
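For instance, a response rule along these lines could tag pages that came from the internal web server before they are sent out. This is a sketch only; the X-Served-By header name is a purely illustrative choice, not a Stingray convention:

```
# Response rule sketch: mark responses for static content so they can be
# identified downstream.  The header name here is an illustrative choice.
if( string.startsWith( http.getPath(), "/static" )) {
   http.setResponseHeader( "X-Served-By", "Stingray internal webserver" );
}
```

The same pattern works for more substantial rewrites, such as injecting a banner into the page body, since response rules see the full response before it leaves Stingray.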

 

Now, time to throw your web servers away?

 
