Ethernet Switches & Routers

Contributor
Posts: 40
Registered: 01-28-2013

How do I configure bonding on my FCX to do Round Robin to a Linux host?

I have two Linux servers that I need to pass a lot of data back and forth between (GlusterFS members).

Using six gigabit Ethernet connections from each server to my FCX, I set up each group of six ports as its own 802.3ad link aggregate, like this...

 

interface ethernet 1/1/1
 link-aggregate configure key 10000
 link-aggregate active
!
interface ethernet 1/1/2
 link-aggregate configure key 10000
 link-aggregate active
!
interface ethernet 1/1/3
 link-aggregate configure key 10000
 link-aggregate active
!
interface ethernet 1/1/4
 link-aggregate configure key 10000
 link-aggregate active
!
interface ethernet 1/1/5
 link-aggregate configure key 10000
 link-aggregate active
!
interface ethernet 1/1/6
 link-aggregate configure key 10000
 link-aggregate active
!
interface ethernet 1/1/13
 link-aggregate configure key 10010
 link-aggregate active
!
interface ethernet 1/1/14
 link-aggregate configure key 10010
 link-aggregate active
!
interface ethernet 1/1/15
 link-aggregate configure key 10010
 link-aggregate active
!
interface ethernet 1/1/16
 link-aggregate configure key 10010
 link-aggregate active
!
interface ethernet 1/1/17
 link-aggregate configure key 10010
 link-aggregate active
!
interface ethernet 1/1/18
 link-aggregate configure key 10010
 link-aggregate active

 

My transfer tests don't show any difference from when each server only had one gigabit Ethernet connection to the switch. I know this is because the link aggregation only sees one source and one destination, so it hashes all of the traffic down a single member port of the aggregate. I want round robin so that each frame goes out a different port.

Is this possible on my FCX648S-HPOE running version FCXS07202d?
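For reference, the bonding on each Linux server is set up along these lines (a sketch, not my exact config; the interface names and address are examples):

# /etc/modprobe.d/bonding.conf -- load the driver in round-robin (mode 0)
options bonding mode=0 miimon=100

# bring the bond up, then enslave the six NICs cabled to the FCX
modprobe bonding
ifconfig bond0 192.168.10.1 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1 eth2 eth3 eth4 eth5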

Contributor
Posts: 40
Registered: 01-28-2013

Re: How do I configure bonding on my FCX to do Round Robin to a Linux host?

OK, I deleted the link-aggregate configuration on ethernet ports 1/1/1 to 1/1/6 and ethernet 1/1/13 to 1/1/18, then I created two trunks using these commands:

(conf)# trunk server ethernet 1/1/1 to 1/1/6
(conf)# trunk server ethernet 1/1/13 to 1/1/18
(conf)# trunk deploy
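After the deploy, I sanity-checked the trunk groups and cleared the port counters before testing (commands from memory, so double-check them against your FastIron release):

(conf)# exit
# show trunk
# clear statistics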

 

Now looking through the statistics (after I cleared them), I see that when I transfer data from system1, connected to ports 1/1/13 to 1/1/18, to system2, connected to ports 1/1/1 to 1/1/6, the data comes in evenly across the member ports of the incoming trunk, but the traffic leaves the switch on only one port of the 1/1/1 to 1/1/6 trunk. Both Linux servers are set to use bond mode 0 (round robin). I even reversed my test and sent data from system2 to system1, with the same result: the data comes in evenly on all ports of the incoming trunk but leaves on only one port of the outgoing trunk.
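For what it's worth, both servers do report round robin on the Linux side; the active mode shows up in /proc (output trimmed here, and the driver version line will vary):

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.x
Bonding Mode: load balancing (round-robin)
Slave Interface: eth0
Slave Interface: eth1
...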

 

I can't figure out why the traffic is not using more than one port going out of the switch.

Can anyone point me in the right direction?

Contributor
Posts: 40
Registered: 01-28-2013

Re: How do I configure bonding on my FCX to do Round Robin to a Linux host?

Well, it still wasn't working the way I wanted. With the trunks, my traffic came into the switch evenly distributed across all ports from system1, but went out only one of the ports to system2.
 
I read here ( http://louwrentius.com/achieving-450-mbs-network-file-transfers-using-linux-bonding.html ) that each switch port connected to system1 should be in its own VLAN, and each port connected to system2 should be in the matching VLAN from system1's set (I know it sounds confusing; just look at the link).
 
So we have system1
eth0 -> vlan1
eth1 -> vlan2
eth2 -> vlan3
eth3 -> vlan4
 
and system2
eth0 -> vlan1
eth1 -> vlan2
eth2 -> vlan3
eth3 -> vlan4
 
The theory is that with round robin configured on the Linux servers, each network frame goes out a different NIC and arrives at the remote server on a different NIC, so all NICs are utilized even for a session between just two hosts. Now we are getting about 3.696 Gbps of throughput.
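On the switch side, with the earlier trunk config removed, this works out to one port-based VLAN per NIC pair, along these lines (the VLAN IDs are arbitrary examples; the ports follow my earlier cabling, one system1 port and the matching system2 port per VLAN):

vlan 101 by port
 untagged ethernet 1/1/1 ethernet 1/1/13
!
vlan 102 by port
 untagged ethernet 1/1/2 ethernet 1/1/14
!
vlan 103 by port
 untagged ethernet 1/1/3 ethernet 1/1/15
!
vlan 104 by port
 untagged ethernet 1/1/4 ethernet 1/1/16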
Is this common practice for storage network configuration?
