jessie.floyd.ctr

Poor transfer performance with tagged VLANs on Red Hat 5.8

This is our scenario:

We have three Red Hat systems, all using Brocade 1020 adapters: a production, a test, and a development server. Each has a single 1020 adapter configured as an active/standby bonded interface, and we are running firmware 3.1.0.0. We have configured two VLANs and assigned them to both ports (a customer network and a management network). As an example I'll use VLAN IDs 10 and 500: VLAN 10 is the customer network and VLAN 500 is management. Once the bonded interface is configured, we create two VLAN virtual interfaces to represent both VLANs -- bond0.10 and bond0.500.
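
For reference, the bonding and VLAN pieces look roughly like this on our systems (the addresses and option values shown here are placeholders, not our exact files):

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- active/standby bond
    DEVICE=bond0
    ONBOOT=yes
    BOOTPROTO=none
    BONDING_OPTS="mode=active-backup miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-bond0.500 -- management VLAN sub-interface
    # (IPADDR/NETMASK below are placeholders)
    DEVICE=bond0.500
    VLAN=yes
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.50.11
    NETMASK=255.255.255.0

The two physical ports are the usual MASTER=bond0 / SLAVE=yes slave definitions.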

Production data is moved nightly from the primary server to each of the 'down-stream' clients (dev and test) over VLAN 500. Our DBA has had only intermittent success moving the 500 GB of production data within the window afforded by the user community. During our troubleshooting we've discovered that the interfaces are only able to push roughly 3.5 Gbps. Our benchmark tool: iperf-2.0.5-1.el5.

For a point of reference, our hardware is an HP 580 G7 with 512 GB of memory running four Intel E7540 processors (6 cores each), for a total CPU count of 24.

OS kernel: 64-bit, 2.6.18-308.16.1.el5.

To reduce variables, we created a simple cross-over connection between the servers and tested a conventional (untagged, unbonded) interface, achieving 7.8 Gbps using iperf with 8 threads. Next we connected to our Cisco 5020 switch and maintained the same result. Satisfied, we then connected through a FEX attached to the same 5020 and the speed was still consistent. At that point we started on the OS configuration and re-created the bonded interface with VLANs; the performance immediately dropped to 3.4 Gbps. We rolled back the VLAN configuration and performance improved to 7.8 Gbps again. At this point we were fairly sure it was the VLAN-tagged component of the interface, but for good measure we removed bonding and configured a standard interface with VLANs, and confirmed the speed was again reduced to 3.4 Gbps.
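
For what it's worth, the iperf runs were essentially this, with the 8 parallel streams mentioned above (the address is a placeholder):

    # receiving side
    iperf -s

    # sending side: 8 parallel streams for 60 seconds
    iperf -c 192.168.50.12 -P 8 -t 60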

Questions:

Is there a Linux-side kernel setting in the VLAN portion of the network stack that we can change to restore performance to the expected 7.8 Gbps?

Is there a known limitation with VLAN-tagged interfaces on RHEL when using the 1020 adapter?
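
To illustrate the kind of setting we're asking about: we've wondered whether VLAN tag offload/acceleration on the bna ports is a factor, e.g. something along these lines (the rxvlan/txvlan sub-options may not exist in older ethtool builds, so this is only a sketch):

    # show current offload settings on one of the physical ports
    ethtool -k eth2

    # if the driver/ethtool support it, toggle VLAN offload and re-test with iperf
    ethtool -K eth2 rxvlan off txvlan off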
