Design & Build

Data Center Infrastructure-Validation Test: MLXe With 100 GbE Line Card, Performance and QoS Test

by Brook.Reams_1 on 08-24-2012 10:50 AM - edited on 04-08-2014 04:32 PM by Community Manager

Synopsis: Performance validation tests of the Brocade MLXe with 100 GbE module using varying frame sizes to show how QoS affects traffic on 100 GbE ports under congestion conditions.

 


 

Preface

Overview

The Brocade Data Center Infrastructure Base Reference Architecture includes several Core Router building blocks. Brocade MLX and MLXe Routers are used in these blocks. This validation test includes the MLXe with 100 GbE line card and 10 GbE line cards validating performance and QoS at 100 GbE link rate.

The Brocade MLXe Router delivers up to 15.36 Tbps of routing capacity. All chassis types provide an astounding 4.8 billion pps routing performance and feature data center-efficient rear exhaust. The MLXe is available in 4-, 8-, 16-, and 32-slot chassis and delivers up to 256 10 GbE, 1536 1 GbE, 64 OC-192, or 256 OC-48 ports in a single system, while the 32-slot MLXe chassis can provide 32 wire-speed 100 GbE ports using the two-port 100 GbE module.

 

Starting with NetIron release 5.2.00, the MLXe offers support for 100 GbE via two modules: a one-port module (BR-MLX-100Gx1-X) and a two-port module (BR-MLX-100Gx2-X). The two-port module is the industry's first two-port 100 Gigabit Ethernet (GbE) module, providing wire-speed connectivity over both ports when installed in the Brocade MLXe. The 100 GbE modules support advanced MPLS and IPv4/IPv6 capabilities with a Forwarding Information Base (FIB) that can hold up to 1 million IPv4 and 240,000 IPv6 entries.

 

Software-Defined Networking (SDN) is a powerful new network paradigm designed for the world's most demanding networking environments. The Brocade MLX Series enables SDN by supporting the OpenFlow protocol, which allows communication between an OpenFlow controller and an OpenFlow-enabled router. Using this approach, organizations can control their networks programmatically, transforming the network into a platform for innovation through new network applications and services.

 

In the world of super-computing, 100 GbE link rates are in demand for managing and processing large data sets commonly shared by multiple universities and research institutes. The MLX with 100 GbE line cards was demonstrated in a long-haul network between the University of Victoria and Seattle, WA at the Super Computing 2011 Conference.

 

See the Related Documents section for more information.

 

Purpose of This Document

The validation test demonstrates the ability of the MLXe with the 100 GbE line card to forward 100% line-rate traffic at frame sizes of 128 bytes and larger without frame loss. It also verifies that QoS correctly prioritizes traffic classes under congestion while the port continues to forward at 100 GbE line rate.
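For context on what "100% line rate at 128 bytes" demands, the per-port frame rate follows from the frame size plus the fixed Ethernet wire overhead. A quick sketch of the arithmetic (the frame size here is taken to be the full Layer 2 frame including FCS, which is an assumption about how the test counts bytes):

```python
# Frames per second at 100 Gbps line rate for a given Ethernet frame size.
# Each frame carries 20 extra bytes on the wire: 8-byte preamble/SFD plus
# the 12-byte minimum inter-frame gap.
LINE_RATE_BPS = 100e9
WIRE_OVERHEAD_BYTES = 20

def frames_per_second(frame_bytes: int) -> float:
    bits_on_wire = (frame_bytes + WIRE_OVERHEAD_BYTES) * 8
    return LINE_RATE_BPS / bits_on_wire

for size in (128, 1518, 9216):
    print(f"{size:5d}-byte frames -> {frames_per_second(size) / 1e6:.2f} Mpps")
# 128-byte frames correspond to roughly 84.5 Mpps per 100 GbE port.
```

Smaller frames stress the forwarding engine far harder than large ones, which is why 128 bytes is the interesting lower bound in this test.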

 

Audience

This content is of interest to network architects and designers responsible for high-performance data center networks and service provider networks, such as Internet Service Providers (ISPs), transit networks, Content Delivery Networks (CDNs), hosting providers, and Internet Exchange Points (IXPs), who need to meet skyrocketing traffic requirements and reduce the cost per bit.

 

Objectives

This test validates the performance of the MLXe with 100 GbE module for varying frame sizes and demonstrates the behavior of QoS on 100 GbE ports when traffic congestion occurs in the MLXe.

 

Test Conclusions

  1. The MLXe with 100 GbE modules forwards frames at line rate (100 Gbps) for all frame sizes from 128 to 9216 bytes.
  2. The MLXe with 100 GbE modules forwards traffic at line rate (100 Gbps) with mixed frame sizes (less than 128 bytes up to 9216 bytes).
  3. When traffic congestion occurs on a 100 GbE port, frames are discarded correctly according to CoS/DSCP QoS markings while maintaining line-rate forwarding (100 Gbps).

 

Related Documents

 

References

This video shows the MLXe 100G network demonstration between the University of Victoria and the Caltech booth at the Super Computing 2011 Conference in Seattle (November 13-17, 2011).

 

About Brocade

Brocade® (NASDAQ: BRCD) networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.

 

Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings. (www.brocade.com)

 

Key Contributors

The content in this guide was provided by the following key contributors.

Test Architect: Chris Sung Ung Yoon, Strategic Solutions Lab

Test Engineer:

 

Document History

Date                  Version        Description

2012-08-24         1.0                Initial Version

2013-01-04         1.1                Edited test cases and results for clarity

 

Test Plan

Scope

Performance tests at varying packet sizes and QoS behavior under congestion conditions for an MLXe with the 100 GbE module. The testing included the following test cases.

  1. 100 GbE throughput for frame sizes from 128 to 9216 bytes.
  2. 100 GbE QoS behavior with congestion conditions on the MLXe router.

 

Test Configuration

The testing configuration consisted of one MLXe-8 running NetIron 5.3a with 2 x 100 GbE modules and two 8 x 10 GbE modules. Test traffic was generated using a Spirent Test Center (STC) simulator with a mix of 50% IPv4 and 50% IPv6 packets.

 

Different test topologies were used for each test; refer to each test for a diagram of the test configuration. The first configuration tested traffic forwarding between 100 GbE modules, the second tested forwarding between ten 10 GbE ports and a 100 GbE module, and the third tested QoS behavior when the MLXe experiences congestion on a 100 GbE port. In these traffic forwarding tests, Layer 3 VE interfaces are used, with an inbound ACL on all the VE interfaces on the 100 GbE port for mixed IPv4 and IPv6 traffic.

 

DUT Description

 

Identifier              Vendor        Model

MLXe                    Brocade       MLXe-8 eight slot chassis router

BR-MLX-100Gx2-X         Brocade       Two port x 100 GbE module

BR-MLX-10Gx8-X          Brocade       Eight port x 10 GbE module

BR-MLX-10Gx8-M          Brocade       Eight port x 10 GbE module

 

Brocade MLXe Core Router

Brocade MLX Series routers are designed to enable cloud-optimized networks by providing industry-leading 100 Gigabit Ethernet (GbE), 10 GbE, and 1 GbE wire-speed density; rich IPv4, IPv6, Multi-VRF, MPLS, and Carrier Ethernet capabilities; and advanced Layer 2 switching.

 

The Brocade MLX Series includes existing Brocade MLX Routers with up to 7.68 Tbps of routing capacity and the Brocade MLXe Routers with up to 15.36 Tbps of routing capacity. All chassis types provide an astounding 4.8 billion packets per second (PPS) routing performance and feature data center-efficient rear exhaust. Both models are available in 4-, 8-, 16-, and 32-slot chassis, and deliver up to 256 10 GbE, 1536 1 GbE, 64 OC-192, or 256 OC-48 ports in a single system.

 

A two port 100 GbE module provides wire-speed performance over both ports in the Brocade MLXe, and wire-speed connectivity over a single port in Brocade MLX Routers using the cost-saving Ports on Demand (PoD) feature. The MLX and MLXe with 100 GbE line cards enable the industry's first multi-terabit trunks.

 

DUT Specifications

 

Identifier              Release          Notes

BR-MLXE-8-AC            NI 5.3.00        Eight slot chassis, AC power

NI-MLX-MR               NI 5.3.00        Brocade MLX system management module, 1 GB SDRAM, dual PCMCIA slots, EIA/TIA-232, and 10/100/1000 Ethernet ports for out-of-band management

NI-X-16-8-HSF           NI 5.3.00        Brocade MLX 8/16-slot system high-speed switch fabric module

BR-MLX-100Gx2-X         NI 5.3.00        Brocade MLX Series 2-port 100 GbE module with IPv4/IPv6/MPLS hardware support; requires high-speed switch fabric modules and CFP optics

NI-MLX-10Gx8-M          NI 5.3.00        Brocade MLX Series 8-port 10 GbE (M) module with IPv4/IPv6/MPLS hardware support. Supports up to 512,000 IPv4 routes. Requires SFP+ optics and high-speed switch fabric modules.

BR-MLX-10Gx8-X          NI 5.3.00        Brocade MLX Series 8-port 10 GbE (X) module with IPv4/IPv6/MPLS hardware support. Supports up to 1 million IPv4 routes. Requires SFP+ optics and high-speed switch fabric modules.

 

Test Equipment

Traffic was generated using a Spirent Test Center model SPT9000A with software release 3.90.

 

Test Cases

Test Case #1: MLXe 100 GbE to 100 GbE Performance

 

DUT

See the test configuration diagram below.

 

DataCenter-Infrastructure_ValidationTest-MLXe100GEto100GE.JPG

   Test #1 Configuration: 100 GbE to 100 GbE Forwarding

 

Purpose

To validate line-rate performance across packet sizes.

 

Test Procedure

 

Step 1: Create VE Interfaces on MLXe 100 GbE Port

Create five VE interfaces on each of eth 5/1 and eth 5/2 (100 GbE ports) to forward IPv4 and IPv6 traffic on every interface. See the CLI commands below.

--------

vlan 96 name "100G to 100G traffic"

tagged ethe 5/1

router-interface ve 96

!

vlan 97 name "100G to 100G traffic"

tagged ethe 5/1

router-interface ve 97

!

vlan 98 name "100G to 100G traffic"                              

tagged ethe 5/1

router-interface ve 98

!

vlan 99 name "100G to 100G traffic"

tagged ethe 5/1

router-interface ve 99

!

vlan 100 name "100G to 100G traffic"

tagged ethe 5/1

router-interface ve 100

!

vlan 101 name "100G to 100G traffic"

tagged ethe 5/2

router-interface ve 101

!

vlan 102 name "100G to 100G traffic"

tagged ethe 5/2

router-interface ve 102

!

vlan 103 name "100G to 100G traffic"

tagged ethe 5/2

router-interface ve 103

!                                                                

vlan 104 name "100G to 100G traffic"

tagged ethe 5/2

router-interface ve 104

!

vlan 105 name "100G to 100G traffic"

tagged ethe 5/2

router-interface ve 105

interface ve 96                                                  

ip address 96.85.1.1/24                                         

ip access-group 100G_Demo_IPv4 in                               

ipv6 address 96:85::1:1/112                                     

ipv6 traffic-filter 100G_Demo_IPv6 in                           

!                                                                

interface ve 97                                                  

ip address 97.85.1.1/24                                         

ip access-group 100G_Demo_IPv4 in                               

ipv6 address 97:85::1:1/112                                     

ipv6 traffic-filter 100G_Demo_IPv6 in                           

!                                                                

interface ve 98                                                  

ip address 98.85.1.1/24                                         

ip access-group 100G_Demo_IPv4 in                               

ipv6 address 98:85::1:1/112                                     

ipv6 traffic-filter 100G_Demo_IPv6 in                           

!                                                                

interface ve 99                                                  

ip address 99.85.1.1/24                                         

ip access-group 100G_Demo_IPv4 in                               

ipv6 address 99:85::1:1/112                                     

ipv6 traffic-filter 100G_Demo_IPv6 in                           

!                                                                

interface ve 100                                                 

ip address 100.85.1.1/24                                        

ip access-group 100G_Demo_IPv4 in                               

ipv6 address 100:85::1:1/112                                    

ipv6 traffic-filter 100G_Demo_IPv6 in                           

!                                                                

interface ve 101                                                 

ip address 101.85.1.1/24                                        

ip access-group 100G_Demo_IPv4 in                               

ipv6 address 101:85::1:1/112                                    

ipv6 traffic-filter 100G_Demo_IPv6 in                           

!                                                                

interface ve 102                                                 

ip address 102.85.1.1/24                                        

ip access-group 100G_Demo_IPv4 in                               

ipv6 address 102:85::1:1/112                                    

ipv6 traffic-filter 100G_Demo_IPv6 in                           

!                                                                

interface ve 103                                                 

ip address 103.85.1.1/24                                        

ip access-group 100G_Demo_IPv4 in                               

ipv6 address 103:85::1:1/112                                    

ipv6 traffic-filter 100G_Demo_IPv6 in                           

!                                                                

interface ve 104                                                 

ip address 104.85.1.1/24                                        

ip access-group 100G_Demo_IPv4 in                               

ipv6 address 104:85::1:1/112                                    

ipv6 traffic-filter 100G_Demo_IPv6 in                           

!                                                                

interface ve 105                                                 

ip address 105.85.1.1/24                                        

ip access-group 100G_Demo_IPv4 in                               

ipv6 address 105:85::1:1/112                                    

ipv6 traffic-filter 100G_Demo_IPv6 in

!

ip access-list extended 100G_Demo_IPv4                           

permit ip host 96.85.1.3 host 101.85.1.3                        

permit ip host 97.85.1.3 host 102.85.1.3                        

permit ip host 98.85.1.3 host 103.85.1.3                        

permit ip host 99.85.1.3 host 104.85.1.3                        

permit ip host 100.85.1.3 host 105.85.1.3                       

permit ip host 101.85.1.3 host 96.85.1.3                        

permit ip host 102.85.1.3 host 97.85.1.3                        

permit ip host 103.85.1.3 host 98.85.1.3                        

permit ip host 104.85.1.3 host 99.85.1.3                        

permit ip host 105.85.1.3 host 100.85.1.3                       

!                                       

ipv6 access-list 100G_Demo_IPv6

permit ipv6 host 96:85::1:3 host 101:85::1:3

permit ipv6 host 97:85::1:3 host 102:85::1:3

permit ipv6 host 98:85::1:3 host 103:85::1:3

permit ipv6 host 99:85::1:3 host 104:85::1:3

permit ipv6 host 100:85::1:3 host 105:85::1:3

permit ipv6 host 101:85::1:3 host 96:85::1:3

permit ipv6 host 102:85::1:3 host 97:85::1:3

permit ipv6 host 103:85::1:3 host 98:85::1:3

permit ipv6 host 104:85::1:3 host 99:85::1:3

permit ipv6 host 105:85::1:3 host 100:85::1:3

--------
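The ten VLAN stanzas and ten matching VE stanzas above follow a strict pattern, so they can be generated rather than typed. A sketch reproducing the addressing scheme shown above (illustrative only, not part of the original test procedure):

```python
# Emit the Test #1 VLAN and VE interface config: VLANs/VEs 96-100 on
# eth 5/1 and 101-105 on eth 5/2, addressed N.85.1.1/24 and N:85::1:1/112.
def vlan_block(vlan: int, port: str) -> str:
    return (f'vlan {vlan} name "100G to 100G traffic"\n'
            f'tagged ethe {port}\n'
            f'router-interface ve {vlan}\n!')

def ve_block(vlan: int) -> str:
    return (f'interface ve {vlan}\n'
            f'ip address {vlan}.85.1.1/24\n'
            f'ip access-group 100G_Demo_IPv4 in\n'
            f'ipv6 address {vlan}:85::1:1/112\n'
            f'ipv6 traffic-filter 100G_Demo_IPv6 in\n!')

vlans = range(96, 106)
config = "\n".join([vlan_block(v, "5/1" if v <= 100 else "5/2") for v in vlans] +
                   [ve_block(v) for v in vlans])
print(config)
```

Generating repetitive config this way also makes it easy to keep the VLAN IDs, VE numbers, and address octets in lockstep, which is the error-prone part of hand-editing.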

 

Step 2: Configure Spirent Test Center

 

The STC interfaces, 9/1 and 11/1, are configured to generate 100 Gbps of traffic bi-directionally. This traffic consists of five 10 Gbps streams of IPv4 and IPv6 traffic on each interface, so that each MLXe VE interface sees both IPv4 and IPv6 traffic streams.

 

Step 3: Vary Frame Sizes and Measure Throughput of 100 GbE Port
  1. Vary the STC frame sizes from 128 bytes to 9216 bytes with mixed IPv4 and IPv6 traffic and measure the performance of the 100 GbE port.
  2. Run performance tests for mixed frame sizes using the following configuration.

DataCenter-Infrastructure_ValidationTest-MLXeTest1SprientTestFrames.JPG

   Spirent i-MIX traffic pattern

 

Expected Result

  1. For all frame sizes from 128 to 9216 bytes, 100% line rate at 100 Gbps.
  2. For mixed frame sizes, 100% line rate at 100 Gbps.

Actual Result

  • This test achieved 100% line rate as shown in Figure 4.

DataCenter-Infrastructure_ValidationTest-MLXeTest1-128to9216Byte.JPG

   STC Rx traffic rate graph for 128 bytes frame size and onward up to 9216 bytes

 

  • With mixed frame sizes, 100% line rate was achieved, as shown below in the Spirent display.

DataCenter-Infrastructure_ValidationTest-MLXeTest1-128to9216Byte.JPG

   STC Rx traffic rate graph for mixed frame sizes

 

  • For both performance tests, the utilization of both high-speed switch fabric modules (HSFM) in the MLXe was between 8.1% and 8.7%, as shown by the sfm-utilization command below.

--------

telnet@MLXe8-2_134672#sh sfm-utilization all

SFM#2

----------+-----------+---------+-----------+---------

last 1 second  utilization = 8.1%

last 5 seconds utilization = 8.1%

last 1 minute  utilization = 8.1%

last 5 minutes utilization = 8.1%

SFM#3

----------+-----------+---------+-----------+---------

last 1 second  utilization = 8.7%

last 5 seconds utilization = 8.7%

last 1 minute  utilization = 8.7%

last 5 minutes utilization = 8.7%

--------
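When automating this check, the sfm-utilization output can be scraped with a small parser. A sketch, assuming the line layout shown in the output above:

```python
import re

# Abbreviated sample of 'show sfm-utilization all' output, as shown above.
SAMPLE = """\
SFM#2
last 1 second  utilization = 8.1%
last 5 seconds utilization = 8.1%
SFM#3
last 1 second  utilization = 8.7%
"""

def parse_sfm_utilization(text: str) -> dict:
    """Return {sfm_id: [utilization percentages, in order of appearance]}."""
    result, current = {}, None
    for line in text.splitlines():
        header = re.match(r"SFM#(\d+)", line)
        if header:
            current = int(header.group(1))
            result[current] = []
            continue
        value = re.search(r"utilization\s*=\s*([\d.]+)%", line)
        if value and current is not None:
            result[current].append(float(value.group(1)))
    return result

print(parse_sfm_utilization(SAMPLE))
```

A parser like this makes it straightforward to assert that fabric utilization stays within an expected band across a long test run.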

 

Test Case #2: MLXe 10 x 10 GbE to 100 GbE Performance

 

DUT

See the test configuration diagram below.

 

DataCenter-Infrastructure_ValidationTest-MLXe10x10GEto100GE.JPG

   Test #2 Configuration: 10 x 10 GbE to 100 GbE Forwarding

 

Purpose

To validate the performance of forwarding from multiple 10 GbE ports to a single 100 GbE port in the MLXe for varying packet sizes.

 

Test Procedure

 

Step 1: Create VE Interfaces on MLXe 100 GbE Port

Create ten VE interfaces on eth 5/1 (100 GbE port) and one VE interface on each of eth 3/1-8 and 4/1-2 (10 GbE ports) to forward IPv4 and IPv6 traffic. See the CLI commands below.

 

--------

vlan 201 name "100G to 10x10G traffic"

tagged ethe 5/1

router-interface ve 201

!

vlan 202 name "100G to 10x10G traffic"

tagged ethe 5/1

router-interface ve 202

!

vlan 203 name "100G to 10x10G traffic"

tagged ethe 5/1

router-interface ve 203

!

vlan 204 name "100G to 10x10G traffic"

tagged ethe 5/1

router-interface ve 204                                         

!

vlan 205 name "100G to 10x10G traffic"

tagged ethe 5/1

router-interface ve 205

!

vlan 206 name "100G to 10x10G traffic"

tagged ethe 5/1

router-interface ve 206

!

vlan 207 name "100G to 10x10G traffic"

tagged ethe 5/1

router-interface ve 207

!

vlan 208 name "100G to 10x10G traffic"

tagged ethe 5/1

router-interface ve 208

!

vlan 209 name "100G to 10x10G traffic"

tagged ethe 5/1

router-interface ve 209

!

vlan 210 name "100G to 10x10G traffic"

tagged ethe 5/1                                                 

router-interface ve 210

!

vlan 301 name "100G to 10x10G traffic"

tagged ethe 3/1

router-interface ve 301

!

vlan 302 name "100G to 10x10G traffic"

tagged ethe 3/2

router-interface ve 302

!

vlan 303 name "100G to 10x10G traffic"

tagged ethe 3/3

router-interface ve 303

!

vlan 304 name "100G to 10x10G traffic"

tagged ethe 3/4

router-interface ve 304

!

vlan 305 name "100G to 10x10G traffic"

tagged ethe 3/5

router-interface ve 305

!

vlan 306  name "100G to 10x10G traffic"                                           

tagged ethe 3/6

router-interface ve 306

!

vlan 307 name "100G to 10x10G traffic"

tagged ethe 3/7

router-interface ve 307

!

vlan 308 name "100G to 10x10G traffic"

tagged ethe 3/8

router-interface ve 308

!

vlan 309 name "100G to 10x10G traffic"

tagged ethe 4/1

router-interface ve 309

!

vlan 310 name "100G to 10x10G traffic"

tagged ethe 4/2

router-interface ve 310

interface ve 201                                                 

ip address 192.168.201.1/24                                     

ip access-group 100G_Demo_IPv4_2 in                             

ipv6 address 2001::1/64                                         

ipv6 traffic-filter 100G_Demo_IPv6_2 in                         

!                                                                

interface ve 202                                                 

ip address 192.168.202.1/24                                     

ip access-group 100G_Demo_IPv4_2 in                             

ipv6 address 2002::1/64                                         

ipv6 traffic-filter 100G_Demo_IPv6_2 in                         

!                                                                

interface ve 203

ip address 192.168.203.1/24

ip access-group 100G_Demo_IPv4_2 in

ipv6 address 2003::1/64

ipv6 traffic-filter 100G_Demo_IPv6_2 in

!

interface ve 204

ip address 192.168.204.1/24

ip access-group 100G_Demo_IPv4_2 in

ipv6 address 2004::1/64

ipv6 traffic-filter 100G_Demo_IPv6_2 in

!

interface ve 205

ip address 192.168.205.1/24

ip access-group 100G_Demo_IPv4_2 in

ipv6 address 2005::1/64

ipv6 traffic-filter 100G_Demo_IPv6_2 in

!

interface ve 206

ip address 192.168.206.1/24

ip access-group 100G_Demo_IPv4_2 in

ipv6 address 2006::1/64

ipv6 traffic-filter 100G_Demo_IPv6_2 in                         

!

interface ve 207

ip address 192.168.207.1/24

ip access-group 100G_Demo_IPv4_2 in

ipv6 address 2007::1/64

ipv6 traffic-filter 100G_Demo_IPv6_2 in

!

interface ve 208

ip address 192.168.208.1/24

ip access-group 100G_Demo_IPv4_2 in

ipv6 address 2008::1/64

ipv6 traffic-filter 100G_Demo_IPv6_2 in

!

interface ve 209

ip address 192.168.209.1/24

ip access-group 100G_Demo_IPv4_2 in

ipv6 address 2009::1/64

ipv6 traffic-filter 100G_Demo_IPv6_2 in

!

interface ve 210

ip address 192.168.210.1/24

ip access-group 100G_Demo_IPv4_2 in

ipv6 address 2010::1/64                                         

ipv6 traffic-filter 100G_Demo_IPv6_2 in

!

interface ve 301

ip address 192.168.31.1/24

ipv6 address 3001::1/64

!

interface ve 302

ip address 192.168.32.1/24

ipv6 address 3002::1/64

!

interface ve 303

ip address 192.168.33.1/24

ipv6 address 3003::1/64

!

interface ve 304

ip address 192.168.34.1/24

ipv6 address 3004::1/64

!

interface ve 305

ip address 192.168.35.1/24

ipv6 address 3005::1/64

!

interface ve 306                                                 

ip address 192.168.36.1/24

ipv6 address 3006::1/64

!

interface ve 307

ip address 192.168.37.1/24

ipv6 address 3007::1/64

!

interface ve 308

ip address 192.168.38.1/24

ipv6 address 3008::1/64

!

interface ve 309

ip address 192.168.39.1/24

ipv6 address 3009::1/64

!

interface ve 310

ip address 192.168.40.1/24

ipv6 address 3010::1/64

!

ip access-list extended 100G_Demo_IPv4_2                         

permit ip 192.168.201.0 0.0.0.255 any                           

permit ip 192.168.202.0 0.0.0.255 any                           

permit ip 192.168.203.0 0.0.0.255 any                           

permit ip 192.168.204.0 0.0.0.255 any                           

permit ip 192.168.205.0 0.0.0.255 any                           

permit ip 192.168.206.0 0.0.0.255 any                           

permit ip 192.168.207.0 0.0.0.255 any                           

permit ip 192.168.208.0 0.0.0.255 any                           

permit ip 192.168.209.0 0.0.0.255 any                           

permit ip 192.168.210.0 0.0.0.255 any    

!

ipv6 access-list 100G_Demo_IPv6_2

permit ipv6 2001::/64 any

permit ipv6 2002::/64 any

permit ipv6 2003::/64 any

permit ipv6 2004::/64 any                                       

permit ipv6 2005::/64 any                                       

permit ipv6 2006::/64 any                                       

permit ipv6 2007::/64 any                                       

permit ipv6 2008::/64 any                                       

permit ipv6 2009::/64 any                                       

permit ipv6 2010::/64 any                                       

!          

--------
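As in Test #1, the per-port stanzas follow a strict pattern and can be generated. A sketch reproducing the 10 GbE side of the configuration above (illustrative only):

```python
# Emit the 10 GbE side of the Test #2 config: one VLAN/VE pair per port,
# VLANs 301-308 on eth 3/1-3/8 and VLANs 309-310 on eth 4/1-4/2,
# addressed 192.168.31.1/24 ... 192.168.40.1/24 and 3001::1/64 ... 3010::1/64.
ports = [f"3/{i}" for i in range(1, 9)] + [f"4/{i}" for i in range(1, 3)]

def stanza(vlan: int, port: str, index: int) -> str:
    return (f'vlan {vlan} name "100G to 10x10G traffic"\n'
            f'tagged ethe {port}\n'
            f'router-interface ve {vlan}\n'
            f'!\n'
            f'interface ve {vlan}\n'
            f'ip address 192.168.{30 + index}.1/24\n'
            f'ipv6 address 30{index:02d}::1/64\n'
            f'!')

config = "\n".join(stanza(301 + i, port, i + 1) for i, port in enumerate(ports))
print(config)
```

Keeping the VLAN number, port, and both address families derived from a single index avoids the off-by-one mistakes that creep into hand-typed configs of this size.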

 

Step 2: Configure Spirent Test Center

 

Configure the STC 9/1 interface for 100 Gbps of traffic and the STC 5/1-8, 8/1-2 interfaces for 10 Gbps of traffic bi-directionally. Ten 5 Gbps streams are created for IPv4 and ten for IPv6 traffic, so the VE interfaces on the 100 GbE port receive a total of twenty 5 Gbps streams: ten carrying IPv4 and ten carrying IPv6 traffic.

 

Step 3: Vary Frame Sizes and Measure Throughput of 100 GbE Port
  1. Vary the STC frame sizes from 128 bytes to 9216 bytes with mixed IPv4 and IPv6 traffic and measure the performance of the 100 GbE port.

Expected Result

  1. For all frame sizes from 128 to 9216 bytes, 100% line rate at 100 Gbps.

Actual Result

  • This test achieved 100% line rate as shown below in the Spirent display.

DataCenter-Infrastructure_ValidationTest-MLXeTest1-128to9216Byte.JPG

   STC Rx traffic rate graph for 128 bytes frame size and onward up to 9216 bytes

 

  • For this performance test, the utilization of both high-speed switch fabric modules (HSFM) in the MLXe was between 8.1% and 8.7%, as shown by the sfm-utilization command below.

--------

telnet@MLXe8-2_134672#sh sfm-utilization all

SFM#2

----------+-----------+---------+-----------+---------

last 1 second  utilization = 8.1%

last 5 seconds utilization = 8.1%

last 1 minute  utilization = 8.1%

last 5 minutes utilization = 8.1%

SFM#3

----------+-----------+---------+-----------+---------

last 1 second  utilization = 8.7%

last 5 seconds utilization = 8.7%

last 1 minute  utilization = 8.7%

last 5 minutes utilization = 8.7%

--------

 

Test Case #3: MLXe Congestion Condition with QoS

 

DUT

See the test configuration diagram below.

 

DataCenter-Infrastructure_ValidationTest-MLXe100GEwithQoSJPG.JPG

   Test #3 Configuration: MLXe Congestion Condition with QoS

 

Purpose

To validate forwarding performance under congestion conditions when using QoS.

 

Test Procedure

In the third topology, depicted in the Test #3 configuration diagram above, one VE interface each is created on eth 5/1, eth 5/2, and eth 3/1-8, 4/1-2 to test QoS. STC 11/1 generates 80 Gbps of traffic and STC 5/1-8, 8/1-2 generate 10 x 8 Gbps of traffic uni-directionally. To generate this load, two 40 Gbps streams with CoS 5 (set with DSCP 5) are sent from STC 11/1, and twenty streams totaling 80 Gbps with CoS 0 (set with DSCP 0) are sent from STC 5/1-8, 8/1-2, all transmitted to the 11 VE interfaces. As a result, the outbound interface eth 5/1 is congested.

 

Step 1: Create VE Interfaces on MLXe 100 GbE Ports

Create a VE interface on eth 5/1 and eth 5/2 (100 GbE ports) and eth 3/1-8 and 4/1-2 (10 GbE ports). The following commands are used.

 

--------

vlan 400 name "For 100G QoS test"

tagged ethe 5/1

router-interface ve 400

!                                                               

vlan 401 name "For 100G QoS test"                               

tagged ethe 5/2                                                

router-interface ve 401

!     

vlan 301 name "100G to 10x10G traffic"

tagged ethe 3/1

router-interface ve 301

!

vlan 302 name "100G to 10x10G traffic"

tagged ethe 3/2

router-interface ve 302

!

vlan 303 name "100G to 10x10G traffic"

tagged ethe 3/3

router-interface ve 303

!

vlan 304 name "100G to 10x10G traffic"

tagged ethe 3/4

router-interface ve 304

!

vlan 305 name "100G to 10x10G traffic"

tagged ethe 3/5

router-interface ve 305

!

vlan 306  name "100G to 10x10G traffic"                                          

tagged ethe 3/6

router-interface ve 306

!

vlan 307 name "100G to 10x10G traffic"

tagged ethe 3/7

router-interface ve 307

!

vlan 308 name "100G to 10x10G traffic"

tagged ethe 3/8

router-interface ve 308

!

vlan 309 name "100G to 10x10G traffic"

tagged ethe 4/1

router-interface ve 309

!

vlan 310 name "100G to 10x10G traffic"

tagged ethe 4/2

router-interface ve 310

!

interface ve 400

ip address 193.0.0.1/24

ipv6 address 400::1/64

!                                                               

interface ve 401

ip address 193.1.1.1/24

ipv6 address 401::1/64

--------

 

Step 2: Configure Spirent Test Center

 

  1. STC interface 11/1 is configured to generate 80 Gbps of traffic consisting of two 40 Gbps flows, each with a class of service (CoS) of 5 set via a Differentiated Services Code Point (DSCP) of 5.
  2. STC interfaces 5/1-8, 8/1-2 are configured to generate 10 x 8 Gbps of unidirectional traffic in twenty streams with a CoS of 0 via a DSCP setting of 0.
  3. All STC streams are transmitted to the 11 VE interfaces on the MLXe to create congestion on the outbound interface eth 5/1, which is a 100 GbE port.

 

Expected Result

When congestion occurs on the 100 GbE port, traffic will be dropped according to the CoS/DSCP priority markings. The port will sustain forwarding at the 100 Gbps link rate.
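This expected split can be illustrated with a minimal strict-priority admission model. This is a sketch of the scheduling outcome only, not the traffic manager's actual credit mechanism:

```python
# Strict-priority admission on a 100 Gbps egress port: higher CoS classes
# are granted bandwidth first, and lower classes get whatever remains.
PORT_GBPS = 100

def admit(offered: dict) -> dict:
    """offered and result map CoS value -> Gbps; highest CoS is served first."""
    remaining = PORT_GBPS
    forwarded = {}
    for cos in sorted(offered, reverse=True):
        forwarded[cos] = min(offered[cos], remaining)
        remaining -= forwarded[cos]
    return forwarded

# Offered load in this test: 80 Gbps at CoS 5 plus 80 Gbps at CoS 0.
result = admit({5: 80, 0: 80})
print(result)  # CoS 5 keeps its full 80 Gbps; CoS 0 is cut to 20 Gbps
print({cos: 80 - gbps for cos, gbps in result.items()})  # 60 Gbps of CoS 0 dropped
```

With 160 Gbps offered to a 100 Gbps port, the model forwards all 80 Gbps of CoS 5 and only 20 Gbps of CoS 0, which is exactly the split the test verifies.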

 

Actual Result

When 80 Gbps of traffic (40 Gbps of IPv4 and 40 Gbps of IPv6) with CoS set to 5 arrives on the ingress TM of line card 5, which holds eth 5/1 and 5/2 (100 GbE ports), it gets credit from the egress TM of line card 5. Therefore, this 80 Gbps of CoS 5 traffic is forwarded to eth 5/1 without any frame drops.

 

DataCenter-Infrastructure_ValidationTest-MLXeTest3-80GECOS5Forwading-NoDrop.JPG

   COS 5 Traffic Forwards without Frame Drops

 

At the same time, another 80 Gbps of traffic set to CoS 0 arrives at the ingress TM of line cards 3 and 4, destined for line card 5, eth 5/1. However, it doesn’t get credit from the egress TM of line card 5 for all 80 Gbps, as that would exceed the forwarding capacity of the 100 GbE port. Instead, the TM provides credit for 20 Gbps of traffic, which is forwarded to eth 5/1. The remaining 60 Gbps of inbound traffic is dropped. The following command on the MLXe shows the incremental packet discard count at the ingress TM of line cards 3 and 4. This confirms that QoS on the 100 GbE port is operating correctly.

 

--------

telnet@MLXe8-2_134672#sh tm statistics | in Port|Counters|Discard

--------- Ports 3/1 - 3/4 ---------

Ingress Counters:

   TotalQue Discard Pkt Count:               71888745

   Oldest Discard Pkt Count:                 0

Egress Counters:

   Discard Pkt Count:                        0

--------- Ports 3/5 - 3/8 ---------

Ingress Counters:

   TotalQue Discard Pkt Count:               71887612

   Oldest Discard Pkt Count:                 0

Egress Counters:

   Discard Pkt Count:                        0

--------- Ports 4/1 - 4/4 ---------

Ingress Counters:

   TotalQue Discard Pkt Count:               27483000

   Oldest Discard Pkt Count:                 0

Egress Counters:

   Discard Pkt Count:                        0

--------- Ports 4/5 - 4/8 ---------

Ingress Counters:

   TotalQue Discard Pkt Count:               0

   Oldest Discard Pkt Count:                 0

Egress Counters:

   Discard Pkt Count:                        0                  

--------- Ports 5/1 ---------

Ingress Counters:

   TotalQue Discard Pkt Count:               0

   Oldest Discard Pkt Count:                 0

Egress Counters:

   Discard Pkt Count:                        0

--------- Ports 5/2 ---------

Ingress Counters:

   TotalQue Discard Pkt Count:               0

   Oldest Discard Pkt Count:                 0

Egress Counters:

   Discard Pkt Count:                        0

telnet@MLXe8-2_134672#sh tm statistics | in Port|Counters|Discard

--------- Ports 3/1 - 3/4 ---------

Ingress Counters:

   TotalQue Discard Pkt Count:               171778986

   Oldest Discard Pkt Count:                 0

Egress Counters:

   Discard Pkt Count:                        0

--------- Ports 3/5 - 3/8 ---------

Ingress Counters:

   TotalQue Discard Pkt Count:               171777824

   Oldest Discard Pkt Count:                 0

Egress Counters:

   Discard Pkt Count:                        0

--------- Ports 4/1 - 4/4 ---------

Ingress Counters:

   TotalQue Discard Pkt Count:               64276509

   Oldest Discard Pkt Count:                 0

Egress Counters:

   Discard Pkt Count:                        0

--------- Ports 4/5 - 4/8 ---------

Ingress Counters:

   TotalQue Discard Pkt Count:               0

   Oldest Discard Pkt Count:                 0

Egress Counters:

   Discard Pkt Count:                        0                  

--------- Ports 5/1 ---------

Ingress Counters:

   TotalQue Discard Pkt Count:               0

   Oldest Discard Pkt Count:                 0

Egress Counters:

   Discard Pkt Count:                        0

--------- Ports 5/2 ---------

Ingress Counters:

   TotalQue Discard Pkt Count:               0

   Oldest Discard Pkt Count:                 0

Egress Counters:

   Discard Pkt Count:                        0

--------
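The two snapshots can be diffed to confirm that discards increment only at the ingress TMs of line cards 3 and 4. A sketch using the ingress TotalQue counters shown above:

```python
# Ingress 'TotalQue Discard Pkt Count' values from the two snapshots of
# 'show tm statistics' above, keyed by port group.
first  = {"3/1-3/4": 71888745,  "3/5-3/8": 71887612,
          "4/1-4/4": 27483000,  "4/5-4/8": 0, "5/1": 0, "5/2": 0}
second = {"3/1-3/4": 171778986, "3/5-3/8": 171777824,
          "4/1-4/4": 64276509,  "4/5-4/8": 0, "5/1": 0, "5/2": 0}

deltas = {group: second[group] - first[group] for group in first}
for group, delta in deltas.items():
    print(f"{group:8s} ingress discards +{delta}")
# Discards grow only on the 10 GbE ingress line cards (slots 3 and 4);
# the 100 GbE ports 5/1 and 5/2 show no discards, as expected under egress QoS.
```

The non-zero deltas on slots 3 and 4, alongside zero deltas on 5/1 and 5/2, are the evidence that excess CoS 0 traffic is being discarded at ingress rather than congesting the 100 GbE egress port.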
