Campus Network Infrastructure-Validation Test: MLX Core & ICX Access Resiliency and VoIP Call Quality


Synopsis: Test cases validating the high availability and resiliency of the Brocade MLX Router with MCT and VRRP-E at the core, and Brocade ICX Switches with stacking at the access layer. Tests include VoIP call quality measurements for a typical campus configuration using the International Telecommunication Union (ITU) G.107 standard.

 

Preface

Overview

Campus networks connect a diverse range of devices to data center applications and carry voice communications using voice over IP (VoIP), as well as Internet traffic, to desktops, laptops, tablets, and handheld smartphones. The campus has to handle traffic types ranging from traditional back-end client/server applications and print and file servers to high-bandwidth, real-time video over wired and wireless LAN (WLAN) connections. As traffic flows and device types proliferate in the campus network, users expect high availability and the same rapid response times they get from client/server applications when they use social media, streaming video, and unified communications with peer-to-peer video conferencing.

 

The Strategic Solution Lab (SSL) at Brocade has defined a Campus LAN base reference architecture that meets the requirements of campus networks. The architecture is modular and based on building blocks that provide reliable, scalable, and tested configurations. This validation test applies to the Core and Access blocks when using Brocade's MLX router as the core router and Brocade's ICX series switches at the access layer.

 

Purpose of This Document

The validation test measures the convergence time of the MLX with MCT and VRRP-E, and of the ICX with stacking, when link and node failures occur. It also measures voice quality for VoIP traffic originating at the access layer between devices connected to the ICX stack and a separate SX800 switch.

 

Audience

This content is of interest to network architects and designers responsible for high-performance campus networks.

 

Objectives

This test validates the high availability and resiliency of the MLX with MCT and VRRP-E at the core, and the ICX with stacking at the access layer. It also provides VoIP call quality measurements for a typical campus configuration using the International Telecommunication Union (ITU) G.107 standard.

 

Test Conclusions

The following summarizes the test cases and results obtained in this validation test.

 

Stack's active (master) switch power failure

  • Result: No packet loss for L2 bridged or L3 routed traffic
  • Comment: Power failure simulated by removing the power cable

Recovery from the power failure of the old active (master) switch

  • Result: No packet loss for L2 bridged or L3 routed traffic

Stack's member switch power failure

  • Result: 1.53 sec (L2 bridged) and 1.23 sec (L3 routed) of packet loss
  • Comment: Power failure simulated by removing the power cable

Recovery from the power failure of the member switch

  • Result: 0.59 sec (L2 bridged) and 0.78 sec (L3 routed) of packet loss

Stack's active (master) switch switchover by CLI command

  • Result: No packet loss for L2 bridged or L3 routed traffic
  • Comment: Active switch switchover by CLI has no impact on the network

Stacking cable removal

  • Result: 350 msec of packet loss for both L2 bridged and L3 routed traffic
  • Comment: Stacking cable physically removed

Stacking cable re-insertion

  • Result: No packet loss for L2 bridged or L3 routed traffic

MCT cluster node power failure

  • Result: 50 msec (L2 bridged) and 51 msec (L3 routed) of packet loss
  • Comment: Power failure simulated by removing the power cable

Recovery from the power failure of the MCT cluster node

  • Result: 2.62 sec (L2 bridged) and 2.44 sec (L3 routed) of packet loss

MCT LAG member (CCEP) failure

  • Result: 120 msec (L2 bridged) and 51 msec (L3 routed) of packet loss
  • Comment: LAG member (CCEP) failed by removing a fiber cable

Recovery from the MCT LAG member (CCEP) failure

  • Result: No packet loss for L2 bridged or L3 routed traffic
  • Comment: LAG member (CCEP) recovered by re-inserting the fiber cable

MCT LAG member (CCEP) failure by removal of a linecard

  • Result: 180 msec (L2 bridged) and 290 msec (L3 routed) of packet loss
  • Comment: Linecard 2 removed from the SX800_Saturn

Recovery from the MCT LAG member (CCEP) failure by insertion of the linecard

  • Result: 4 sec of packet loss for both L2 bridged and L3 routed traffic
  • Comment: Linecard 2 re-inserted in the SX800_Saturn

VoIP call quality

  • Result: Minimum 92, maximum 93, and average 93 for both R-factor CQ and R-factor LQ
  • Comment: Very high call quality

 


About Brocade

Brocade® (NASDAQ: BRCD) networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.

 

Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings. (www.brocade.com)

 

Key Contributors

The content in this guide was provided by the following key contributors.

Test Architect: Daniel de la Rosa, Strategic Solutions Lab

Test Engineer: Chris Sung Ung Yoon, Strategic Solutions Lab

 

Document History

Date          Version    Description

2012-11-09    1.0        Initial Version

 

Test Plan

Scope

The test plan included four test cases, three of which addressed the resiliency of the MLX with MCT and VRRP-E at the core, and of the ICX switch stack at the access layer, to various link and switch outages. Recovery time, or convergence time, was measured with simulated traffic flows, as sketched in the note after this list. A final test was conducted to measure VoIP call quality according to International Telecommunication Union (ITU) standard G.107.

  1. Measure ICX stack recovery times for switch/link failures.
  2. Measure MLX MCT/VRRP-E node failure recovery time.
  3. Measure MLX MCT LAG recovery time for cluster client edge port (CCEP) failure.
  4. Measure VoIP voice call quality for the core/access topology.
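
Note on convergence measurement: recovery times of this kind are typically derived from the frame loss counted by the traffic generator rather than from timestamps; a minimal sketch of the arithmetic, assuming a constant offered load:

  t_conv ≈ N_lost / R_tx

where N_lost is the number of frames lost during the event and R_tx is the offered frame rate. At the roughly 190,000 frames/sec per port seen in this test, for example, a 1.53 sec outage corresponds to about 290,000 lost frames.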

 

Three traffic flows were synthetically generated using load generators.

  • Flow 1: Layer 3 routed bi-directional traffic simulating client/server application traffic.
  • Flow 2: Layer 2 bridged bi-directional traffic simulating client/server application traffic.
  • Flow 3: Concurrent VoIP calls flowing between the SX800 and the ICX switch stack. The flow simulated 2,000 concurrent callers using Avalanche simulators connected to the SX800 and the ICX stack at the access layer (see the load estimate after this list).
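
For scale, a rough bandwidth estimate for Flow 3 can be made by assuming G.711 with 20 ms packetization (an assumption for illustration; the codec used is not recorded here). Each RTP stream then consumes about 87 kbit/s on the wire (64 kbit/s of payload plus RTP/UDP/IP and Ethernet overhead), so:

  R_VoIP ≈ 2,000 calls × 87 kbit/s ≈ 174 Mbit/s per direction

which is well within a single 10 GbE uplink.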

 

Test Configuration

The diagram below shows the test configuration.

 

MLX-ICX_TestConfiguration.jpg

   Test Configuration

 

 

DUT Description

Identifier           Vendor     Model                        Notes

MLXe4_Pluto-134671   Brocade    MLXe 4-slot chassis router

MLX8_Venus_75118     Brocade    MLX 8-slot chassis router

ICX6610_Mars         Brocade    ICX switch stack             ICX Units 1, 2, and 3: ICX 6610

SX800_Saturn         Brocade    FastIron SX 800 switch

 

Brocade MLX/MLXe Core Router

Brocade MLX Series routers are designed to enable cloud-optimized networks by providing industry-leading 100 Gigabit Ethernet (GbE), 10 GbE, and 1 GbE wire-speed density; rich IPv4, IPv6, Multi-VRF, MPLS, and Carrier Ethernet capabilities; and advanced Layer 2 switching.

 

The Brocade MLX Series includes existing Brocade MLX Routers with up to 7.68 Tbps of routing capacity and the Brocade MLXe Routers with up to 15.36 Tbps of routing capacity. All chassis types provide an astounding 4.8 billion packets per second (PPS) routing performance and feature data center-efficient rear exhaust. Both models are available in 4-, 8-, 16-, and 32-slot chassis, and deliver up to 256 10 GbE, 1536 1 GbE, 64 OC-192, or 256 OC-48 ports in a single system.

 

Brocade ICX Series Switches

The Brocade ICX 6610 Switch redefines the economics of enterprise networking by providing unprecedented levels of performance, availability, and flexibility in a stackable form factor—delivering the capabilities of a chassis with the flexibility and cost-effectiveness of a stackable switch.

 

Today's enterprise campus networks are expected to deliver services thought impossible just a few years ago. High-Definition (HD) video conferencing, real-time collaboration, Unified Communications (UC), and Virtual Desktop Infrastructure (VDI) are only a few of the applications that organizations are deploying to enhance employee productivity, improve customer service, and create a competitive advantage. The Brocade ICX 6610 helps organizations meet this challenge by delivering wire-speed, non-blocking performance across all ports to support latency-sensitive applications such as real-time voice and video streaming and VDI. Brocade ICX 6610 switches can be stacked using four full-duplex 40 Gbps stacking ports that provide an unprecedented 320 Gbps of backplane stacking bandwidth with full redundancy, eliminating inter-switch bottlenecks. Additionally, each switch can provide up to eight 10 Gigabit Ethernet (GbE) ports for high-speed connectivity to the aggregation or core layers.

 

Brocade FastIron SX Series Switches

The Brocade FastIron SX Series of switches provides an industry-leading price/performance campus aggregation and core solution that offers a scalable, secure, low-latency, and fault-tolerant IP services infrastructure for 1 and 10 Gigabit Ethernet (GbE) enterprise deployments. Organizations can leverage a high-performance non-blocking architecture and an end-to-end high-availability design with redundant management modules, fans, load-sharing switch fabrics, and power supplies.

 

The FastIron SX Series has an extensive feature set, making it well suited for real-time collaborative applications, IP telephony, IP video, e-learning, Wireless LANs (WLANs), and raising the organization's productivity. The FastIron SX Series delivers wire-speed performance and ultra-low latency, which are ideal for converged network applications such as VoIP and video conferencing. These platforms present the industry's most scalable and resilient PoE design, with a robust feature set to secure and simplify the deployment of an edge-to-core converged network. In addition, the FastIron SX Series supports high-density 10 Gigabit Ethernet (GbE) for enterprise backbone deployments.

 

DUT Specifications

 

MLXe4_Pluto-134671 (release NI 5.2.00c)

  • Chassis: MLXe 4-slot chassis, AC power
  • NI-X-4-HSF: Brocade MLX 4-slot system high-speed switch fabric module

MLX8_Venus_75118 (release NI 5.2.00c)

  • Chassis: MLX 8-slot chassis, AC power
  • NI-X-16-8-HSF: Brocade MLX 8/16-slot system high-speed switch fabric module
  • NI-MLX-10Gx8-M: Brocade MLX Series 8-port 10 GbE (M) module with IPv4/IPv6/MPLS hardware support; supports up to 512,000 IPv4 routes; requires SFP+ optics and high-speed switch fabric modules

MLXe4_Pluto-134671 and MLX8_Venus_75118 (release NI 5.2.00c)

  • NI-MLX-1Gx20-GC: Brocade MLX Series 20-port 10/100/1000 copper module with IPv4/IPv6/MPLS hardware support
  • NI-MLX-MR: Brocade MLX system management module with 1 GB SDRAM, dual PCMCIA slots, EIA/TIA-232, and 10/100/1000 Ethernet ports for out-of-band management

ICX6610_Mars (release FI 07.3.00a)

  • ICX 6610-48: Brocade ICX 6610-48 switch with 48 RJ-45 ports
  • ICX 6610-48P: Brocade ICX 6610-48P switch with 48 PoE+ ports
  • ICX 6610-24P: Brocade ICX 6610-24P switch with 24 PoE+ ports

SX800_Saturn (release FI 07.3.00b)

  • FI-SX800-AC: FastIron SX 800 bundle with 8-slot chassis, fan tray, two switch fabrics, and one AC power supply
  • SX-FIZMR-PREM: FastIron SX 800/SX 1600 management module with no ports; the loaded software image supports advanced Layer 2 and full Layer 3 IPv4 services in systems configured with all IPv4 or third-generation line modules
  • SX-FI42XG: FastIron SX 2-port XFP 10 GbE module
  • SX-FI424C: FastIron SX 24-port 10/100/1000 Mbps Ethernet module

 

Test Equipment

 

Traffic Flows 1 and 2 were generated using a Spirent TestCenter (STC) model SPT-9000A with software release 3.90. Traffic Flow 3 was generated with a Spirent Avalanche application load tester (model 3100 chassis with 3.90 firmware) using two 10 GbE cards attached to the SX800 and to the ICX stack to simulate VoIP calling between handsets.

 


DUT Configuration

 

MLX Multi-chassis Trunking Configuration

 

The following shows the MCT configuration for MLXe4_Pluto-134671 and MLX8_Venus_75118:
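
As a reading aid for the configurations below (annotation only, not additional configuration), the MCT building blocks used here are:

lag <name> dynamic id <n>                LACP LAG toward the ICL or toward a client
cluster <name> <id>                      MCT cluster instance
  rbridge-id <n>                         this node's unique ID within the cluster
  session-vlan <vlan>                    VLAN used for cluster communication over the ICL
  member-vlan <range>                    VLANs extended across the cluster
  icl <name> ethernet <port>             inter-chassis link (ICL)
  peer <ip> rbridge-id <n> icl <name>    the cluster peer reached over the ICL
  client <name>                          MCT client (here, the ICX stack and the SX800)
    rbridge-id <n>
    client-interface ethernet <port>     the cluster client edge port (CCEP)
    deploy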

 

Pluto MCT cluster configuration:

lag "ICL:smileytongue:luto-Venus" dynamic id 2

ports ethernet 4/4 to 4/5

primary-port 4/4

deploy

!

lag "Pluto-Mars" dynamic id 1

ports ethernet 4/2 to 4/3

primary-port 4/2

deploy

!

lag "Pluto-Saturn" dynamic id 3

ports ethernet 4/1

primary-port 4/1

deploy

!

cluster Pluto_Venus 1

rbridge-id 1

session-vlan 4090

member-vlan 101 to 201

icl Pluto_Venus ethernet 4/4

peer 100.1.1.2 rbridge-id 2 icl Pluto_Jupiter

deploy

client Saturn

  rbridge-id 100

client-interface ethernet 4/1

  deploy

client Mars

  rbridge-id 200

client-interface ethernet 4/2

  deploy

Venus MCT cluster configuration:

lag "ICL:Pluto-Venus" dynamic id 2

ports ethernet 4/4 to 4/5

primary-port 4/4

deploy

!

lag "Venus-Mars" dynamic id 1

ports ethernet 4/1 ethernet 4/3

primary-port 4/1

deploy

!

lag "Venus-Saturn" dynamic id 3

ports ethernet 4/2

primary-port 4/2

deploy

!

cluster Pluto_Venus 1

rbridge-id 2

session-vlan 4090

member-vlan 101 to 201

icl Pluto_Jupiter ethernet 4/4

peer 100.1.1.1 rbridge-id 1 icl Pluto_Venus

deploy

client Saturn

  rbridge-id 100

client-interface ethernet 4/2

  deploy

client Mar

  rbridge-id 200

client-interface ethernet 4/1

deploy

telnet@MLXe4_Pluto_134671#sh cluster

Cluster Pluto_Jupiter 1

=======================

Rbridge Id: 1, Session Vlan: 4090

Cluster State: Deploy

Client Isolation Mode: Loose

Configured Member Vlan Range: 101 to 201

Active Member Vlan Range: 101 151 160 201

ICL Info:

---------

Name            Port  Trunk

Pluto_Jupiter   4/4 2   

Peer Info:

----------

Peer IP: 100.1.1.2, Peer Rbridge Id: 2, ICL: Pluto_Jupiter

KeepAlive Interval: 30 , Hold Time: 90, Fast Failover

Active Vlan Range:  101 151 160 201

Peer State: CCP Up (Up Time:   0 days: 7 hr: 9 min:15 sec)

Client Info:

------------

Name            Rbridge-id Config     Port Trunk FSM-State               

Mars            200        Deployed   4/2 1   Up                      

Saturn          100        Deployed   4/1 3   Up

telnet@MLX8_Venus_75118#sh cluster

Cluster Pluto_Jupiter 1

=======================

Rbridge Id: 2, Session Vlan: 4090

Cluster State: Deploy

Client Isolation Mode: Loose

Configured Member Vlan Range: 101 to 201

Active Member Vlan Range: 101 151 160 201

ICL Info:

---------

Name            Port  Trunk

Pluto_Jupiter   4/4 2   

Peer Info:

----------

Peer IP: 100.1.1.1, Peer Rbridge Id: 1, ICL: Pluto_Jupiter

KeepAlive Interval: 30 , Hold Time: 90, Fast Failover

Active Vlan Range:  101 151 160 201

Peer State: CCP Up (Up Time:   0 days: 7 hr:10 min:59 sec)

Client Info:

------------

Name            Rbridge-id Config     Port Trunk FSM-State               

Mar             200        Deployed   4/1 1   Up                      

Saturn          100        Deployed   4/2 3   Up  

 

MLX VRRP-E Configuration

The configuration uses Brocade VRRP-E on the MCT cluster for Layer 3 routing and Layer 3 link- and node-level redundancy. The following shows the VRRP-E configuration for MLXe4_Pluto-134671 (VRRP-E master) and MLX8_Venus_75118 (VRRP-E backup).

 

Pluto VRRP-E configuration:

router vrrp-extended

!

!

interface ve 101

ip address 101.1.1.1/24

ip vrrp-extended vrid 1

  backup priority 200

  ip-address 101.1.1.254

  advertise backup

short-path-forwarding

  activate

!

interface ve 201

ip address 201.1.1.1/24

ip vrrp-extended vrid 2

  backup priority 200

  ip-address 201.1.1.254

  advertise backup

short-path-forwarding

  activate

Venus VRRP-E configuration:

router vrrp-extended

!

!

interface ve 101

ip address 101.1.1.2/24

ip vrrp-extended vrid 1

  backup priority 150

  ip-address 101.1.1.254

  advertise backup

short-path-forwarding

  activate

!

interface ve 201

ip address 201.1.1.2/24

ip vrrp-extended vrid 2

  backup priority 150

  ip-address 201.1.1.254

  advertise backup

short-path-forwarding

activate

telnet@MLXe4_Pluto_134671#sh ip vrrp-extended br

Total number of VRRP-Extended routers defined: 2

Flags Codes - P:Preempt 2:V2 3:V3

Short-Path-Fwd Codes - ER: Enabled with revertible option, RT: reverted, NR: not reverted

Inte- VRID Current Flags  State  Master IP       Backup IP       Virtual IP      Short-

rface Priority Address         Address         Address         Path-Fwd

------------------------------------------------------------------------------------------

v101  1    200 P2     Master Local           101.1.1.2       101.1.1.254     Enabled

v201  2    200 P2     Master Local           201.1.1.2       201.1.1.254     Enabled

telnet@MLX8_Venus_75118#sh ip vrrp-extended br

Total number of VRRP-Extended routers defined: 2

Flags Codes - P:Preempt 2:V2 3:V3

Short-Path-Fwd Codes - ER: Enabled with revertible option, RT: reverted, NR: not reverted

Inte- VRID Current Flags  State  Master IP       Backup IP       Virtual IP      Short-

rface Priority Address         Address         Address         Path-Fwd

------------------------------------------------------------------------------------------

v101  1    150 P2     Backup 101.1.1.1       Local           101.1.1.254     Enabled

v201  2    150 P2     Backup 201.1.1.1       Local           201.1.1.254     Enabled

 

 


ICX Switch Stack Configuration

The access layer includes an ICX switch stack of three ICX 6610 switches connected with 40 Gbps stacking ports. Logically, all three ICX switches appear as a single switch providing link and node redundancy. ICX unit #3 is the active (master) switch, and hitless-failover is enabled so that a master switchover does not itself interrupt forwarding.

 

stack unit 1

  module 1 icx6610-48-port-management-module

  module 2 icx6610-qsfp-10-port-160g-module

  module 3 icx6610-8-port-10g-dual-mode-module

  priority 128

  stack-trunk 1/2/1 to 1/2/2

  stack-trunk 1/2/6 to 1/2/7

  stack-port 1/2/1 1/2/6

stack unit 2

  module 1 icx6610-48p-poe-port-management-module

  module 2 icx6610-qsfp-10-port-160g-module

  module 3 icx6610-8-port-10g-dual-mode-module

  priority 100

  stack-trunk 2/2/1 to 2/2/2

  stack-trunk 2/2/6 to 2/2/7

  stack-port 2/2/1 2/2/6

stack unit 3

  module 1 icx6610-24p-poe-port-management-module

  module 2 icx6610-qsfp-10-port-160g-module

  module 3 icx6610-8-port-10g-dual-mode-module

  priority 128

  stack-trunk 3/2/1 to 3/2/2

  stack-trunk 3/2/6 to 3/2/7

  stack-port 3/2/1 3/2/6

stack enable

stack mac 0000.0000.0011

!

!

hitless-failover enable

Stack status in the steady state:

 

MLX-ICX_ICXStackConfiguration.jpg

 

 

 

 


 

Spirent TestCenter Synthetic Load Configurations

 

Create traffic Flows 1 and 2 on the STC.

 

MLX-ICX_SpirentFlow-L3Routed.jpg

   Traffic Flow 1: Layer 3 Routed

 

 

 

MLX-ICX_SpirentFlow-L2Bridged.jpg

   Traffic Flow 2: Layer 2 Bridged

 

 

The following output verifies that the Layer 2 bridged and Layer 3 routed bi-directional traffic flows between STC ports 5/1 and 2/5 are working.

 

 

telnet@ICX6610_Mars#sh interface ethernet 1/3/1 to 1/3/2 ethernet 2/3/1 to 2/3/2

10GigabitEthernet1/3/1 is up, line protocol is up

  Hardware is 10GigabitEthernet, address is 0000.0000.0011 (bia 748e.f834.97dd)

  Interface type is 10Gig SFP+

  Configured speed 10Gbit, actual 10Gbit, configured duplex fdx, actual fdx

  Member of 3 L2 VLANs, port is tagged, port state is FORWARDING

  BPDU guard is Disabled, ROOT protect is Disabled

  Link Error Dampening is Disabled

  STP configured to ON, priority is level0, mac-learning is enabled

  Flow Control is enabled

  Mirror disabled, Monitor disabled

  Member of active trunk ports 1/3/1,1/3/2,2/3/1,2/3/2, primary port

  Member of configured trunk ports 1/3/1,1/3/2,2/3/1,2/3/2, primary port

  No port name

  MTU 10240 bytes, encapsulation ethernet

  300 second input rate: 194222584 bits/sec, 189672 packets/sec, 2.24% utilization

  300 second output rate: 195345176 bits/sec, 189902 packets/sec, 2.25% utilization

  101499620955 packets input, 12998435407724 bytes, 0 no buffer

  Received 1136598 broadcasts, 1104686 multicasts, 101497379671 unicasts

  0 input errors, 0 CRC, 0 frame, 0 ignored

  0 runts, 0 giants

  101791066397 packets output, 13095939434109 bytes, 0 underruns

  Transmitted 23 broadcasts, 26723 multicasts, 101791039651 unicasts

  0 output errors, 0 collisions

  Relay Agent Information option: Disabled                       

Egress queues:

Queue counters Queued packets    Dropped Packets

    0           403903836                   0

    1            50045015                   0

    2                   0                   0

    3                   0                   0

    4                   0                   0

    5              211189                   0

    6               26755                   0

    7                   0                   0

10GigabitEthernet1/3/2 is up, line protocol is up

  Hardware is 10GigabitEthernet, address is 0000.0000.0011 (bia 748e.f834.97de)

  Interface type is 10Gig SFP+

  Configured speed 10Gbit, actual 10Gbit, configured duplex fdx, actual fdx

  Member of 3 L2 VLANs, port is tagged, port state is FORWARDING

  BPDU guard is Disabled, ROOT protect is Disabled

  Link Error Dampening is Disabled

  STP configured to ON, priority is level0, mac-learning is enabled

  Flow Control is enabled

  Mirror disabled, Monitor disabled

  Member of active trunk ports 1/3/1,1/3/2,2/3/1,2/3/2, secondary port, primary port is 1/3/1

  Member of configured trunk ports 1/3/1,1/3/2,2/3/1,2/3/2, secondary port, primary port is 1/3/1

  No port name

  MTU 10240 bytes, encapsulation ethernet

  300 second input rate: 194212592 bits/sec, 189660 packets/sec, 2.23% utilization

  300 second output rate: 194382024 bits/sec, 189826 packets/sec, 2.24% utilization

  101498391448 packets input, 12998437095210 bytes, 0 no buffer

  Received 0 broadcasts, 26724 multicasts, 101498364724 unicasts

  0 input errors, 0 CRC, 0 frame, 0 ignored

  0 runts, 0 giants

  101740546595 packets output, 13030069535435 bytes, 0 underruns

  Transmitted 31 broadcasts, 26723 multicasts, 101740519841 unicasts

  0 output errors, 0 collisions

  Relay Agent Information option: Disabled

Egress queues:

Queue counters Queued packets    Dropped Packets

    0           401090154                   0

    1                   0                   0

    2                   0                   0

    3                   0                   0

    4                   0                   0

    5                   1                   0

    6               26755                   0

    7                   0                   0                    

10GigabitEthernet2/3/1 is up, line protocol is up

  Hardware is 10GigabitEthernet, address is 0000.0000.0011 (bia 748e.f834.5891)

  Interface type is 10Gig SFP+

  Configured speed 10Gbit, actual 10Gbit, configured duplex fdx, actual fdx

  Member of 3 L2 VLANs, port is tagged, port state is FORWARDING

  BPDU guard is Disabled, ROOT protect is Disabled

  Link Error Dampening is Disabled

  STP configured to ON, priority is level0, mac-learning is enabled

  Flow Control is enabled

  Mirror disabled, Monitor disabled

  Member of active trunk ports 1/3/1,1/3/2,2/3/1,2/3/2, secondary port, primary port is 1/3/1

  Member of configured trunk ports 1/3/1,1/3/2,2/3/1,2/3/2, secondary port, primary port is 1/3/1

  No port name

  MTU 10240 bytes, encapsulation ethernet

  300 second input rate: 194804584 bits/sec, 190239 packets/sec, 2.24% utilization

  300 second output rate: 194957560 bits/sec, 190388 packets/sec, 2.24% utilization

  101688794461 packets input, 13023143082580 bytes, 0 no buffer

  Received 595079 broadcasts, 45209 multicasts, 101688154173 unicasts

  0 input errors, 0 CRC, 0 frame, 0 ignored

  0 runts, 0 giants

  101611558370 packets output, 13012549609128 bytes, 0 underruns

  Transmitted 29 broadcasts, 26703 multicasts, 101611531638 unicasts

  0 output errors, 0 collisions                                  

  Relay Agent Information option: Disabled

Egress queues:

Queue counters Queued packets    Dropped Packets

    0           142781961                   0

    1               44953                   0

    2                   0                   0

    3                   0                   0

    4                   0                   0

    5                   0                   0

    6               26703                   0

    7                   0                   0

10GigabitEthernet2/3/2 is up, line protocol is up

  Hardware is 10GigabitEthernet, address is 0000.0000.0011 (bia 748e.f834.5892)

  Interface type is 10Gig SFP+

  Configured speed 10Gbit, actual 10Gbit, configured duplex fdx, actual fdx

  Member of 3 L2 VLANs, port is tagged, port state is FORWARDING

  BPDU guard is Disabled, ROOT protect is Disabled

  Link Error Dampening is Disabled

  STP configured to ON, priority is level0, mac-learning is enabled

  Flow Control is enabled

  Mirror disabled, Monitor disabled

  Member of active trunk ports 1/3/1,1/3/2,2/3/1,2/3/2, secondary port, primary port is 1/3/1

  Member of configured trunk ports 1/3/1,1/3/2,2/3/1,2/3/2, secondary port, primary port is 1/3/1

  No port name

  MTU 10240 bytes, encapsulation ethernet

  300 second input rate: 194805608 bits/sec, 190239 packets/sec, 2.24% utilization

  300 second output rate: 194976840 bits/sec, 190407 packets/sec, 2.24% utilization

  101689464419 packets input, 13023356466796 bytes, 0 no buffer

  Received 0 broadcasts, 26715 multicasts, 101689437704 unicasts

  0 input errors, 0 CRC, 0 frame, 0 ignored

  0 runts, 0 giants

  101611353864 packets output, 13012460438509 bytes, 0 underruns

  Transmitted 21 broadcasts, 26705 multicasts, 101611327138 unicasts

  0 output errors, 0 collisions

  Relay Agent Information option: Disabled

Egress queues:

Queue counters Queued packets    Dropped Packets

    0           142622767                   0

    1                   0                   0

    2                   0                   0

    3                   0                   0

    4                   0                   0

    5                   0                   0

    6               26705                   0                    

    7                   0                   0
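
A quick sanity check on these counters: the displayed utilization appears to account for the 20 bytes per frame of Ethernet preamble and inter-frame gap that the bits/sec counter excludes. For port 1/3/1:

  (194,222,584 bit/s + 189,672 pps × 20 B × 8) / 10^10 ≈ 2.25%

which lines up with the reported 2.24-2.25% utilization figures.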

 

 

Test Cases

Test Case #1: Measure ICX Stack Recovery Time for Switch/Link Failures

 

DUT

  • ICX6610_Mars ICX switch stack

 

Purpose

To measure the time required to recover from:

  • Master switch failure and recovery
  • Member switch failure and recovery
  • Stacking cable failure and recovery

 

Test Procedure

Note that the active (master) switch, ICX unit #3, is not in the traffic forwarding path; the standby switch, ICX unit #1, and the member switch, ICX unit #2, forward the traffic flows between STC ports 5/1 and 2/5. Hence, half of the traffic sent by STC 5/1 is forwarded to ICX unit #2 via stacking port 1/2/6 of ICX unit #1, so that the traffic flows are load-balanced across eth 1/3/1, 1/3/2, 2/3/1, and 2/3/2 of the stack.

 

 

Step 1: Fail Master Switch

For the stack's active (master) switch (ICX unit #3), simulate a failure by removing the power cable.

 

Expected Result

No packet loss for the Layer 2 bridged and Layer 3 routed traffic flows, since these do not pass through the master switch.

 

Actual Result

No packet loss

 

Step 2: Restore Master Switch

Restore power to the master switch (ICX unit #3).

 

Expected Result

No packet loss for layer 2 bridged and layer 3 routed traffic flows since these are not flowing through the master switch.

 

Actual Result

No packet loss

 

Step 3: Fail Member Switch

 

For a member switch (ICX unit #2), simulate a failure by removing the power cable.

 

Expected Result

Traffic halts until stack reconfigures around failed member switch.

 

Actual Result

Layer 2 bridged traffic stopped for 1.53 sec. Layer 3 routed traffic stopped for 1.23 sec.

 

Step 4: Restore Member Switch

Restore power to the member switch (ICX unit #2).

 

Expected Result

Traffic halts until stack reconfigures to include new member switch.

 

Actual Result

Layer 2 bridged traffic stopped for 0.59 sec. Layer 3 routed traffic stopped for 0.78 sec.

 

Step 5: Change Master Switch

Using the CLI, change the active (master) switch from Unit #1 (which took over after the failure in Step 1) back to Unit #3.
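
A master switchover of this kind is driven from the stack CLI; a minimal sketch, assuming the FastIron stacking commands of this release (verify the exact syntax against the FastIron configuration guide):

telnet@ICX6610_Mars#show stack
telnet@ICX6610_Mars#stack switch-over
telnet@ICX6610_Mars#show stack

The first show stack confirms the current active/standby roles, stack switch-over hands the active role to the standby unit, and the final show stack verifies that Unit #3 is active again. Because hitless-failover enable is part of the stack configuration, the switchover is expected to be non-disruptive.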

 

Expected Result

Traffic does not halt.

 

Actual Result

Traffic did not halt for Layer 2 or Layer 3 flows.

 

Step 6: Stacking Cable Failure

Remove the stacking cable on eth 1/2/6 between Unit #1 and Unit #2.

 

Expected Result

Traffic halts while stack reconfigures to route traffic around stacking link failure.

 

Actual Result

Traffic halted for 350 msec. Traffic from Unit #1 to Unit #2 moved to the stacking link between Unit #2 and Unit #3.

 

Step 7: Stacking Cable Replacement

Replace the stacking cable on eth 1/2/6 between Unit #1 and Unit #2.

 

 

Expected Result

No halt in traffic flow.

 

Actual Result

No halt in traffic flow.

 

 

Test Case #2: Measure MLX MCT/VRRP-E Node Failure Recovery Time

DUT

  • MLXe4_Pluto-134671 and MLX8_Venus_75118 in MCT/VRRP-E cluster

Purpose

To measure traffic convergence time for MCT cluster node failure and recovery.

 

 

Test Procedure

 

Step 1: Verify Traffic Flow in MCT Cluster

Verify that the Layer 2 bridged and Layer 3 routed traffic flows between STC ports 5/1 and 2/5 are flowing through each node in the MLX MCT cluster.

 

 

telnet@MLXe4_Pluto_134671#sh statistics | in PORT|Util

PORT 4/1 Counters:

InUtilization              4.49%      OutUtilization              4.51%

PORT 4/2 Counters:

InUtilization              2.27%      OutUtilization              2.24%

PORT 4/3 Counters:

InUtilization              2.25%      OutUtilization              2.24%

telnet@MLX8_Venus_75118#sh statistics | in PORT|Util

PORT 4/1 Counters:

InUtilization 2.24%      OutUtilization              2.24%

PORT 4/2 Counters:

InUtilization 4.48%      OutUtilization              4.50%

PORT 4/3 Counters:

InUtilization              2.25%      OutUtilization              2.25%
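
These figures are consistent with the topology: on each MLX, the single-link LAG toward SX800_Saturn (port 4/1 on Pluto, port 4/2 on Venus) carries both test flows at about 4.5% utilization, while the two members of the LAG toward the Mars stack each carry roughly half of that (2.27% + 2.25% ≈ 4.5%), since the MCT LAG hashes the flows across its member links.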

 
 
Step 2: Fail MLX Switch

Pull the power cable on MLXe4_Pluto-134671.

 

Expected Result

Traffic flows halt while cluster detects node failure and reconfigures.

 

Actual Result

The Layer 2 traffic flow halted for 50 msec and the Layer 3 traffic flow halted for 51 msec.

 

 

Step 3: Restore Failed MLX Switch

Restore power to MLXe4_Pluto-134671.

 

Expected Result

Traffic flows halt while cluster reconfigures with new node.

 

Actual Result

The Layer 2 traffic flow halted for 2.62 sec and the Layer 3 traffic flow halted for 2.44 sec.

 

Test Case #3: Measure MLX MCT LAG Recovery Time for Cluster Client Edge Port Failure

DUT

  • MLXe4_Pluto-134671 and MLX8_Venus_75118 in MCT/VRRP-E cluster
  • SX800_Saturn switch

 

Purpose

To measure traffic convergence time when an MCT LAG member link on a cluster client edge port (CCEP) fails and is restored.

 

 

Test Procedure

 

Step 1: Verify Traffic Flow in MCT Cluster

Verify that the Layer 2 bridged and Layer 3 routed traffic flows between STC ports 5/1 and 2/5 are flowing through each node in the MLX MCT cluster.

 

telnet@MLXe4_Pluto_134671#sh statistics | in PORT|Util

PORT 4/1 Counters:

InUtilization              4.49%      OutUtilization              4.51%

PORT 4/2 Counters:

InUtilization              2.27%      OutUtilization              2.24%

PORT 4/3 Counters:

InUtilization              2.25%      OutUtilization              2.24%

telnet@MLX8_Venus_75118#sh statistics | in PORT|Util

PORT 4/1 Counters:

InUtilization 2.24%      OutUtilization              2.24%

PORT 4/2 Counters:

InUtilization 4.48%      OutUtilization              4.50%

PORT 4/3 Counters:

InUtilization              2.25%      OutUtilization              2.25%

 
 
Step 2: Fail MCT CCEP LAG Member Link

Simulate MCT CCEP failure by removing the eth 4/1 cable.

 

Expected Result

Traffic halts while LAG reconfigures traffic flows.

 

Actual Result

Layer 2 traffic halted for 120 msec and Layer 3 traffic halted for 51 msec.

 

Step 3: Restore MCT CCEP LAG Member Link

Reconnect the eth 4/1 cable.

 

Expected Result

No halt in traffic flow.

 

Actual Result

No halt in traffic flow.

 

Step 4: Fail MCT CCEP

Remove line card 2 in the SX800_Saturn, failing one path in the MCT LAG.

 

Expected Result

Traffic halts while LAG reconfigures around link failure.

 

Actual Result

Layer 2 traffic flow halted for 180 msec and Layer 3 traffic halted for 290 msec.

 

Step 5: Restore CCEP

Reinsert line card 2 in the SX800_Saturn.

 

Expected Result

Traffic halts while LAG reconfigures with new link.

 

Actual Result

Layer 2 traffic flow halted for 4 sec and Layer 3 traffic halted for 4 sec.

 

 

Test Case #4: Measure VoIP Voice Call Quality for Core/Access Topology

DUT

  • MLXe4_Pluto-134671 and MLX8_Venus_75118 in MCT/VRRP-E cluster
  • SX800_Saturn switch
  • ICX6610_Mars ICX switch stack

Purpose

Measure VoIP call quality using the ITU G.107 standard for a simulated configuration of 2,000 concurrent callers.

 

 

Test Procedure

Step 1: Verify Traffic Flow in MCT Cluster

 

Verify that the Layer 2 bridged and Layer 3 routed traffic flows between STC ports 5/1 and 2/5 are flowing through each node in the MLX MCT cluster.

 

 

telnet@MLXe4_Pluto_134671#sh statistics | in PORT|Util

PORT 4/1 Counters:

InUtilization              4.49%      OutUtilization              4.51%

PORT 4/2 Counters:

InUtilization              2.27%      OutUtilization              2.24%

PORT 4/3 Counters:

InUtilization              2.25%      OutUtilization              2.24%

telnet@MLX8_Venus_75118#sh statistics | in PORT|Util

PORT 4/1 Counters:

InUtilization 2.24%      OutUtilization              2.24%

PORT 4/2 Counters:

InUtilization 4.48%      OutUtilization              4.50%

PORT 4/3 Counters:

InUtilization 2.25%      OutUtilization              2.25%

 
 
Step 2: Configure Avalanche Call Simulation

Create a 2,000 concurrent user call simulation using the Avalanche SIP traffic simulation. Configure Avalanche eth8 as the caller (client) and Avalanche eth9 as the callee (server).

 

Expected Result

Measurements of call quality should be acceptable.

 

Actual Result

1. Minimum 92, maximum 93, and average 93 for both R-factor CQ and R-factor LQ, indicating that VoIP call quality from the users' perspective is at the "very satisfied" level.

2. R-factor CQ (Conversation Quality) and R-factor LQ (Listening Quality) results for the 2,000 concurrent user call simulation using Avalanche SIP are shown below.

 

 

MLX-ICX_RFactorCQandLQMeasurements.jpg   

   Avalanche RFactor CQ and LQ Measurements

 

 

3. R-factor CQ and LQ are VoIP metrics that measure conversational quality and listening quality, respectively, across IP networks according to ITU (International Telecommunication Union) standard G.107. The RFactor:CQ graph provides minimum, maximum, and average values. Values below 50 are generally unacceptable, and typical telephone connections do not exceed 93, giving a typical range of 50 to 93. The table below shows a typical representation of call quality levels.

 

 

MLX-ICX_ITU-RFactorandMOSCallQualityMetrics.jpg

    ITU R Factor and MOS Call Quality Metrics
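
For reference, the G.107 E-model computes the R-factor as a sum of impairment terms and maps R to an estimated Mean Opinion Score (MOS); in simplified form:

  R = R0 - Is - Id - Ie,eff + A

  MOS = 1 + 0.035·R + 7×10^-6·R·(R - 60)·(100 - R),  for 0 < R < 100

where R0 is the basic signal-to-noise ratio, Is the simultaneous impairment factor, Id the delay impairment factor, Ie,eff the effective equipment impairment (codec and packet loss), and A the advantage factor. The R values of 92-93 measured in this test map to a MOS of roughly 4.4.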

 

 

4. Client statistics for the 2,000 concurrent user call simulation using Avalanche SIP:
The Client Statistics SIPNG tab displays a summary of SIPNG activity, including SIPNG session/transaction statuses, active sessions/transactions, and RTP packet counts/bandwidth. These statistics show how the network and/or device under test behaves when responding to the SIPNG sessions generated by the client.

 

 

MLX-ICX_ITU-AvalancheSIPNGMeasurements.jpg

   Avalanche SIPNG Measurements

 

 

 

Notes for SIPNG Measurements:


SIPNG sessions/second graph

The SIPNG session statuses are as follows:

  • Attempted: The number of SIPNG call initiations sent to the server per second. This value should equal the sum of the successful, unsuccessful, and aborted values below.
  • Successful: The number of successful SIPNG sessions per second. A session is considered successful if the call was accepted and terminated normally.
  • Unsuccessful: The number of unsuccessful SIPNG sessions per second. A session is unsuccessful as a result of a timer expiration, or if any transaction in the session is unsuccessful.
  • Aborted: The number of SIPNG sessions that were aborted per second. A session is aborted as a result of network issues, such as a connection timeout, data timeout, connection reset, or other TCP/IP layer errors.

Active SIPNG sessions graph

The number of active SIPNG sessions. A SIPNG session is considered active as soon as a call is accepted, and closed after the call is terminated.

SIPNG transactions/second graph

Three types of transactions, REGISTER, INVITE, and BYE, are included in the SIPNG transaction statuses as follows:

  • Attempted: The number of SIPNG transactions sent to the server per second.
  • Successful: The number of successful SIPNG transactions per second. A transaction is considered successful if it finishes the call flow normally.
  • Unsuccessful: The number of unsuccessful SIPNG transactions per second. A transaction is considered unsuccessful if it cannot finish the call flow normally.

Active SIPNG transactions graph

The number of active SIPNG transactions. A SIPNG transaction (REGISTER, INVITE, or BYE) is considered active as soon as it is sent to the server, and closed after the server responds.

SIPNG RTP packets/second graph

The incoming and outgoing number of SIPNG RTP packets per second. RTP data streams are used to carry the voice data.

SIPNG RTP bandwidth Kbytes/sec graph

The incoming and outgoing SIPNG RTP bandwidth traffic (in Kbytes/second). RTP data streams are used to carry the voice data.
