
Load Balancing Distribution Predictors with ServerIron

by pmorrissey, 05-29-2009; edited 10-30-2013 by bcm1


The ServerIron uses a parameter called the predictor to determine how to balance the client load across servers. You can fine-tune how traffic is distributed across multiple real servers by selecting one of the following load balancing metrics (predictors):

 

 

 

For script/code examples that enable the various predictors, see: Setting/Changing load-balancing distribution predictor (non-dynamic) and weight of servers.

 

Least Connections Predictor

 

Sends the request to the real server that currently has the fewest active connections with clients. For sites where a number of servers have similar performance, the least connections option smooths distribution if a server gets bogged down. For sites where the capacity of the servers varies greatly, the least connections option maintains an equal number of connections among all servers; as a result, servers that can process and terminate connections faster receive more connections than slower servers over time.

NOTE: The Least Connections predictor does not depend on the number of connections to individual ports on a real server; it depends on the total number of active connections to the server. The Least Connections predictor can be applied globally (for the entire ServerIron ADX) or locally per virtual server.
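
As an illustration of the selection logic just described (a minimal sketch, not the ServerIron's actual implementation), the following Python snippet picks the real server with the fewest active connections; the server names and connection counts are hypothetical.

# Minimal sketch of least-connections selection (illustrative only, not ADX code).
# The counts are total active connections per real server, not per port.
active_connections = {"server1": 12, "server2": 7, "server3": 7, "server4": 20}

def pick_least_connections(conn_counts):
    """Return the real server with the fewest active connections."""
    return min(conn_counts, key=conn_counts.get)

chosen = pick_least_connections(active_connections)
print(chosen)                      # server2 (7 active connections)
active_connections[chosen] += 1    # the new client connection is assigned to it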

 

Round Robin Predictor

 

Directs the service request to the next server in the list, treating all servers equally regardless of the number of connections. For example, in a configuration of four servers, the first request is sent to server1, the second request is sent to server2, the third is sent to server3, and so on. After all servers in the list have received one request, assignment begins with server1 again. If a server fails, SLB avoids sending connections to that server and selects the next server instead. The Round Robin predictor can be applied globally (for the entire ServerIron ADX) or locally per virtual server.
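
A minimal sketch of this rotation, assuming hypothetical server names and a simple set of failed servers (real SLB health checking on the ADX is more involved):

from itertools import cycle

# Illustrative round-robin rotation that skips failed servers (not ADX code).
servers = ["server1", "server2", "server3", "server4"]
failed = {"server3"}            # hypothetical health-check result
rotation = cycle(servers)

def next_server():
    """Return the next healthy server in round-robin order (None if all are down)."""
    for _ in range(len(servers)):
        candidate = next(rotation)
        if candidate not in failed:
            return candidate
    return None

print([next_server() for _ in range(6)])
# ['server1', 'server2', 'server4', 'server1', 'server2', 'server4']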

 

Weighted Round Robin Predictor

 

Like the Round Robin predictor, the Weighted Round Robin predictor treats all servers equally regardless of the number of connections or response time. However, it uses a configured weight value that determines how many times within a sequence each server is selected relative to the weighted values of the other servers. For example, in a simple configuration with two servers, where the first server has a weight of 4 and the second server has a weight of 2, selection occurs in the following sequence:

 

1. The first request is sent to Server1

2. The second request is sent to Server2

3. The third request is sent to Server1

4. The fourth request is sent to Server2

5. The fifth request is sent to Server1

6. The sixth request is sent to Server1

 

Notice that over this cycle of server connections, Server1, which has a weight of 4, was accessed four times, and Server2, which has a weight of 2, was accessed only twice. This cycle repeats as long as this predictor is in use. The Weighted Round Robin predictor can be applied globally (for the entire ServerIron ADX) or locally per virtual server.
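
The cycle above can be reproduced with a short sketch (illustrative only; the server names and weights come from the example): each server is given credits equal to its weight, and the selector sweeps the server list in order, skipping servers whose credits are exhausted, until the cycle restarts.

# Illustrative weighted round-robin cycle (not ADX code).
weights = {"Server1": 4, "Server2": 2}    # weights from the example above

def weighted_round_robin_cycle(weights):
    """Yield one full selection cycle in which each server appears 'weight' times."""
    credits = dict(weights)
    while any(credits.values()):
        for server, remaining in credits.items():
            if remaining:
                credits[server] -= 1
                yield server

print(list(weighted_round_robin_cycle(weights)))
# ['Server1', 'Server2', 'Server1', 'Server2', 'Server1', 'Server1']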

 

Weighted Predictor and Enhanced Weighted Predictor

 

Assigns a performance weight to each server. Weighted and Enhanced Weighted load balancing are similar to Least Connections, except that servers with a higher weight value receive a larger percentage of connections. You can assign a weight to each real server, and that weight determines the percentage of the current connections given to each server. NOTE: You must configure a weight for any real server that is bound to a VIP that is expected to load balance based on a Weighted or Enhanced Weighted predictor.

For example, in a configuration with five servers of various weights, the percentage of connections is calculated as follows:

 

• Weight server1 = 7

• Weight server2 = 8

• Weight server3 = 2

• Weight server4 = 2

• Weight server5 = 5

• Total weight of all servers = 24

 

The result is that server1 gets 7/24 of the current number of connections, server2 gets 8/24, server3 gets 2/24, and so on. If a new server, server6, is added with a weight of 10, the new server gets 10/34.

If you set the weight so that your fastest server gets 50 percent of the connections, it will get 50 percent of the connections at a given time. Because that server is faster than the others, it can complete more than 50 percent of the total connections overall, since it services connections at a higher rate. Thus, the weight is not a fixed ratio but adjusts as server capacity changes over time (for example, as servers are added).
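
The arithmetic in this example can be sketched as follows (purely illustrative): each server's share is its weight divided by the total weight, and every share shifts automatically when a server is added.

from fractions import Fraction

# Illustrative calculation of weighted connection shares (not ADX code).
weights = {"server1": 7, "server2": 8, "server3": 2, "server4": 2, "server5": 5}

def shares(weights):
    """Return each server's fraction of the current connections."""
    total = sum(weights.values())
    return {name: Fraction(w, total) for name, w in weights.items()}

print(shares(weights)["server1"])   # 7/24
weights["server6"] = 10             # adding a server changes every share
print(shares(weights)["server6"])   # 5/17, i.e. 10/34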

 

Weighted versus Enhanced

 

In the example above, with the Weighted predictor Server2 would receive the first 8 connections, while with Enhanced Weighted the connections are distributed round-robin until each server has reached its weight, at which point the cycle starts again.
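
The difference can be sketched as two orderings over the same weights (illustrative only; the exact ADX scheduling is not shown here, and the weights are taken from the example above): the Weighted ordering lets the heaviest server take its whole share first, while the Enhanced Weighted ordering interleaves servers round-robin until each has reached its weight.

# Illustrative contrast of Weighted vs Enhanced Weighted orderings (not ADX code).
weights = {"server1": 7, "server2": 8, "server3": 2, "server4": 2, "server5": 5}

def weighted_order(weights):
    """Burst ordering: the highest-weight server receives its whole share first."""
    order = []
    for name in sorted(weights, key=weights.get, reverse=True):
        order += [name] * weights[name]
    return order

def enhanced_weighted_order(weights):
    """Interleaved ordering: round-robin until each server reaches its weight."""
    credits, order = dict(weights), []
    while any(credits.values()):
        for name, remaining in credits.items():
            if remaining:
                credits[name] -= 1
                order.append(name)
    return order

print(weighted_order(weights)[:8])            # server2 takes the first 8 connections
print(enhanced_weighted_order(weights)[:8])   # servers alternate from the start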

 

Dynamic Weighted Predictor

 

Provides a dynamic weighted predictor that enables the ServerIron to make load balancing decisions using real-time server resource usage information, such as CPU utilization and memory consumption. The ServerIron retrieves this information through SNMP from MIBs available on the application servers.

To achieve this capability, a software process on the ServerIron polls MIBs on the real servers. (All of the real servers must run an SNMP agent daemon and support MIBs that can be queried by the SNMP manager on the ServerIron.)

 

You can fine-tune how traffic is distributed across these real servers by enabling a Dynamic Weighted predictor on the ServerIron. The Dynamic Weighted predictors can be applied globally (for the entire ServerIron ADX) or locally per virtual server.
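
Conceptually, the polling loop looks something like the sketch below. The snmp_get helper, the OID string, and the poll interval are placeholders rather than real ServerIron or agent details; the point is only that the weight table is refreshed from live SNMP data instead of being statically configured.

import random
import time

# Conceptual sketch of a dynamic-weighted polling loop (not ADX code).
REAL_SERVERS = ["server1", "server2", "server3"]   # hypothetical server names
RESOURCE_OID = "1.3.6.1.4.1.EXAMPLE"               # placeholder OID, not a real MIB object

def snmp_get(server, oid):
    """Stand-in for an SNMP GET against the server's agent; here it simply
    simulates a resource reading so the sketch runs without real agents."""
    return random.randint(1, 100)

def poll_dynamic_weights(servers, oid):
    """Build the weight table from each server's SNMP response value."""
    return {server: snmp_get(server, oid) for server in servers}

for _ in range(3):                                 # a real poller would loop continuously
    dynamic_weights = poll_dynamic_weights(REAL_SERVERS, RESOURCE_OID)
    print(dynamic_weights)                         # these weights then drive server selection
    time.sleep(1)                                  # hypothetical poll interval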

Dynamic-Weighted Direct

 

The SNMP response value from each real server is considered the direct performance weight of that server. Direct weighted load balancing is similar to Least Connections, except that servers with a higher weight value receive a larger percentage of connections. You can assign a weight to each real server, and that weight determines the percentage of the current connections given to each server. NOTE: You must configure a weight for any real server that is bound to a VIP that is expected to load balance based on a Dynamic-Weighted predictor.

 

For example, in a configuration with five servers of various weights, the percentage of connections is calculated as follows:

 

• Weight server1 = 7

• Weight server2 = 8

• Weight server3 = 2

• Weight server4 = 2

• Weight server5 = 5

• Total weight of all servers = 24

 

The result is that server1 gets 7/24 of the current number of connections, server2 gets 8/24, server3 gets 2/24, and so on. If a new server, server6, is added with a weight of 10, the new server gets 10/34.

If you set the weight so that your fastest server gets 50 percent of the connections, it will get 50 percent of the connections at a given time. Because this server is faster than the others, it can complete more than 50 percent of the total connections overall, since it services the connections at a higher rate. Thus, the weight is not a fixed ratio but adjusts to the server capacity over time.

 

Dynamic-Weighted Reverse

 

The SNMP response from each server is regarded as a reverse performance weight. Dynamic-Weighted Reverse load balancing is similar to Dynamic-Weighted Direct, except that servers with a lower weight value receive a larger percentage of connections; the weight for each real server determines the percentage of the current connections given to that server.
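
The two interpretations of the polled value can be contrasted with a small sketch (illustrative only, using hypothetical SNMP response values; the ADX's exact computation is not shown): Direct uses the responses as weights as-is, while one simple way to realize the Reverse behavior is to invert them so that lower responses attract a larger share of connections.

from fractions import Fraction

# Illustrative Dynamic-Weighted Direct vs Reverse shares (not ADX code).
snmp_responses = {"server1": 60, "server2": 30, "server3": 10}   # hypothetical values

def direct_shares(responses):
    """Higher response value -> larger share of connections."""
    total = sum(responses.values())
    return {name: Fraction(value, total) for name, value in responses.items()}

def reverse_shares(responses):
    """Lower response value -> larger share of connections (weights inverted)."""
    inverted = {name: Fraction(1, value) for name, value in responses.items()}
    total = sum(inverted.values())
    return {name: weight / total for name, weight in inverted.items()}

print(direct_shares(snmp_responses)["server3"])    # 1/10 of connections
print(reverse_shares(snmp_responses)["server3"])   # 2/3 of connections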

 

Links

 

 

 

 

Script/Code examples for the above: Setting/Changing load-balancing distribution predictor (non-dynamic) and weight of servers

 

 

Consult the product documentation for further information:

http://www.foundrynet.com/services/documentation/index2.html#SI
