Performance of ADX as NAT44

Hi

We are using ADX 1216-4 for NAT44.

A Brocade representative told us to expect about 3.5 Gbps of throughput for NAT44.

In practice, however, we see the CPUs saturating at less than 2.7 Gbps of traffic.

For example:

===

#sh int eth 17

10GigabitEthernet17 is up, line protocol is up

  Hardware is 10GigabitEthernet, address is 748e.f8aa.1b40 (bia 748e.f8aa.1b50)

  Configured speed 10Gbit, actual 10Gbit, configured duplex fdx, actual fdx

  Member of 1 L2 VLANs, port is tagged, port state is FORWARDING

  STP configured to OFF, priority is level0, flow control disabled

  mirror disabled, monitor disabled

  Not member of any active trunks

  Not member of any configured trunks

  Port name is ADX-IN

  MTU 1500 bytes, encapsulation ethernet

  IPv6 is disabled

  30 second input rate: 450845864 bits/sec, 121306 packets/sec, 4.70% utilization

  30 second output rate: 1305556696 bits/sec, 147495 packets/sec, 13.29% utilization

  15203971853 packets input, 8193664510119 bytes, 0 no buffer

  Received 44126 broadcasts, 0 multicasts, 15203927727 unicasts

  0 input errors, 0 CRC, 0 frame, 0 ignored

  0 runts, 0 giants, DMA received 0 packets

  18199859634 packets output, 19323357183734 bytes, 0 underruns

  Transmitted 389 broadcasts, 0 multicasts, 18199859245 unicasts

  0 output errors, 0 collisions, DMA transmitted 0 packets

telnet@ADX-NAT#sh cpu

peak: 97.2 percent busy at 3130 seconds ago

7239 sec avg: 66.7 percent busy

   1 sec avg: 63.6 percent busy

   5 sec avg: 63.9 percent busy

  60 sec avg: 63.6 percent busy

300 sec avg: 64.9 percent busy

ASB1/1 peak: 98.9% in 1h41m, last sec: 52.5%, 5 sec: 53.9%, 60 sec: 51.7%, 300 sec: 52.4%

ASB1/2 peak: 99.3% in 1h58m, last sec: 66.2%, 5 sec: 67.6%, 60 sec: 68.1%, 300 sec: 70.1%

ASB1/3 peak: 98.8% in 1h41m, last sec: 56.8%, 5 sec: 59.7%, 60 sec: 58.9%, 300 sec: 58.2%

ASB1/4 peak: 99.1% in 1h18m, last sec: 68.6%, 5 sec: 65.5%, 60 sec: 62.7%, 300 sec: 61.8%

===

Just 450 Mbps in + 1300 Mbps out (about 1750 Mbps aggregate) causes roughly 60% system CPU usage.

And here is another snapshot, taken at a different time:

===

#sh int eth 17

10GigabitEthernet17 is up, line protocol is up

  Hardware is 10GigabitEthernet, address is 748e.f8aa.1b40 (bia 748e.f8aa.1b50)

  Configured speed 10Gbit, actual 10Gbit, configured duplex fdx, actual fdx

  Member of 1 L2 VLANs, port is tagged, port state is FORWARDING

  STP configured to OFF, priority is level0, flow control disabled

  mirror disabled, monitor disabled

  Not member of any active trunks

  Not member of any configured trunks

  Port name is ADX-IN

  MTU 1500 bytes, encapsulation ethernet

  IPv6 is disabled

  30 second input rate: 680610320 bits/sec, 174224 packets/sec, 7.08% utilization

  30 second output rate: 1899152800 bits/sec, 215083 packets/sec, 19.33% utilization

  15010546615 packets input, 8100107178828 bytes, 0 no buffer

  Received 44026 broadcasts, 0 multicasts, 15010502589 unicasts

  0 input errors, 0 CRC, 0 frame, 0 ignored

  0 runts, 0 giants, DMA received 0 packets

  17961735710 packets output, 19064528437361 bytes, 0 underruns

  Transmitted 388 broadcasts, 0 multicasts, 17961735322 unicasts

  0 output errors, 0 collisions, DMA transmitted 0 packets

telnet@ADX-NAT#sh cpu

peak: 97.2 percent busy at 1579 seconds ago

7248 sec avg: 66.1 percent busy

   1 sec avg: 69.7 percent busy

   5 sec avg: 69.9 percent busy

  60 sec avg: 68.9 percent busy

300 sec avg: 68.0 percent busy

ASB1/1 peak: 98.9% in 1h15m, last sec: 68.0%, 5 sec: 69.9%, 60 sec: 73.4%, 300 sec: 74.1%

ASB1/2 peak: 99.3% in 1h32m, last sec: 91.5%, 5 sec: 92.5%, 60 sec: 95.2%, 300 sec: 94.2%

ASB1/3 peak: 99.1% in 1h54m, last sec: 77.5%, 5 sec: 77.5%, 60 sec: 80.6%, 300 sec: 80.2%

ASB1/4 peak: 99.1% in 52m37s, last sec: 76.2%, 5 sec: 83.3%, 60 sec: 82.5%, 300 sec: 82.0%

===

Here 680 Mbps in + 1900 Mbps out (about 2580 Mbps aggregate) nearly saturates one of the cores (ASB1/2 at around 95%), and customers see packet loss and reduced speeds.

We are using 99 NAT IP pools, each with a single public IP. Several blocks of private IPs are steered to each pool via ACLs.
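
Here is a small Python sketch (conceptual only, not ADX configuration) of how this mapping works: each private customer block is tied to its own single-IP pool, so a customer always appears behind the same public address. All prefixes, addresses, and names below are made-up examples.

===

# Conceptual model only -- not ADX configuration. Addresses are examples.
import ipaddress

# Hypothetical mapping: private customer block -> dedicated public NAT IP
# (one entry per single-IP pool; in our case there are 99 of these).
POOL_MAP = {
    ipaddress.ip_network("10.0.1.0/24"): ipaddress.ip_address("198.51.100.1"),
    ipaddress.ip_network("10.0.2.0/24"): ipaddress.ip_address("198.51.100.2"),
    # ... 97 more single-IP pools
}

def nat_ip_for(src: str):
    """Every flow from a given private block translates to the same public IP,
    so each customer always appears behind one public address."""
    src_ip = ipaddress.ip_address(src)
    for block, public_ip in POOL_MAP.items():
        if src_ip in block:
            return public_ip
    raise LookupError(f"no NAT pool configured for {src}")

print(nat_ip_for("10.0.1.37"))   # 198.51.100.1
print(nat_ip_for("10.0.1.200"))  # 198.51.100.1 -- same block, same NAT IP

===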

Creating a single NAT pool with several NAT IPs causes problems with session stickiness: different requests from the same customer are NATted to different NAT IPs, which breaks some websites and messaging services.
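
To illustrate the stickiness problem, here is another conceptual Python sketch (not ADX code): per-flow selection from a multi-IP pool can put consecutive connections from the same inside host behind different public IPs, while a hypothetical per-source-IP hash would keep each customer on one NAT IP. The pool addresses and function names are invented for the example, and we have not confirmed whether the ADX offers per-source selection within a pool.

===

# Conceptual model only -- not ADX code. Pool addresses are examples.
import hashlib
import itertools

POOL = ["198.51.100.1", "198.51.100.2", "198.51.100.3"]
_next_ip = itertools.cycle(POOL)

def pick_per_flow() -> str:
    """Per-flow selection (round-robin style): consecutive connections from
    the same inside host can be translated to different public IPs."""
    return next(_next_ip)

def pick_per_source(src_ip: str) -> str:
    """Hypothetical per-source selection: hashing the inside source IP keeps
    a customer on one pool member, i.e. sticky to a single public address."""
    digest = hashlib.sha256(src_ip.encode()).digest()
    return POOL[digest[0] % len(POOL)]

customer = "10.0.1.37"
print([pick_per_flow() for _ in range(3)])            # 3 connections -> 3 different NAT IPs
print({pick_per_source(customer) for _ in range(3)})  # always the same NAT IP

===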

We have opened a case with Brocade TAC about this problem, but perhaps somebody has already dealt with this and can recommend how to increase the performance of the ADX.

Valeri Streltsov

TiERA Broadband
