Ethernet Fabric (VDX, CNA)

N/A
Posts: 1
Registered: ‎07-31-2012

Issue Creating a LAG between VDX6720 to ESXi 5 with Emulex dual 10Gb

I have followed the recommended design for connecting three IBM x3550 M4 servers running ESXi 5 Update 1 to two Brocade VDX 6720 switches.

The Brocade port-channels, one for each server, are configured with "channel-group 2 mode on type standard"; e.g. ports 1/0/1 and 2/0/1 are in port-channel 2.
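For reference, a static port-channel of that kind would look roughly like the sketch below on the VDX CLI. This is a non-authoritative sketch: the interface numbers and channel-group number are taken from the post above, everything else is assumed and should be adjusted to the actual rbridge/port layout.

```
! Sketch of the static port-channel described above (VDX NOS CLI);
! interface and channel-group numbers are from the post, the rest is assumed.
interface TenGigabitEthernet 1/0/1
 channel-group 2 mode on type standard
 no shutdown
!
interface TenGigabitEthernet 2/0/1
 channel-group 2 mode on type standard
 no shutdown
```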

On ESXi, both 10 Gb Emulex NICs are attached to a vSwitch and teamed.

When both ports are enabled on the switches, we see significant performance problems.

When the ports on rbridge 2 are disabled, performance improves dramatically.

Is there a guide on the recommended method for creating a LAG between a VMware ESXi host and a pair of VDX 6720 switches?

Geoff

Occasional Contributor
Posts: 5
Registered: ‎11-23-2010

Re: Issue Creating a LAG between VDX6720 to ESXi 5 with Emulex dual 10Gb

What load-balancing policy have you set on the server side?

Helpful link:

http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1001938

You need static NIC teaming with the load-balancing mode set to IP hash on the server side.
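On ESXi 5.x, that vSwitch policy can also be set from the ESXi shell. A hedged sketch follows; the vSwitch name `vSwitch1` is an assumption, so substitute the vSwitch actually carrying the teamed uplinks:

```
# Hedged sketch: set IP-hash load balancing (as required for a static LAG)
# on a standard vSwitch from the ESXi 5.x shell (vSwitch name assumed).
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch1 --load-balancing=iphash

# Check the resulting policy and active uplinks:
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1
```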

Retired-Super Contributor
Posts: 260
Registered: ‎05-12-2010

Re: Issue Creating a LAG between VDX6720 to ESXi 5 with Emulex dual 10Gb

Hi Geoff,

I passed your note to a couple of folks who are very knowledgeable about ESXi and VDX configuration. Perhaps you can provide some additional details about your configuration and what you mean by "significant performance issues", as that will ensure a common base of understanding.

  1. How is load balancing set for the NICs that are teamed on the ESXi servers?
  2. What other settings have you configured on the VDXs?
  3. Can you expand on the performance issues? It sounds like when you shut down rbridge-2, all the traffic goes down the other path (rbridge-1). If you shut down rbridge-1 instead, does it behave as it should?
  4. What are you using to measure performance of the links in the vLAG?

The overall comment I got back about configuring NIC Teaming with VDX using VCS vLAG was:

"At the end of the day, you take the HBA with the (2) NIC ports, team them up on the ESXi server and plug them into a single VDX with LAG or dual VDX using VCS vLAG, and it comes up."

Nandini also points out some configuration requirements for ESXi NIC teaming: ESXi only supports static LAG (NIC teaming), not dynamic, so that's another detail you can clarify about your configuration.


Finally, could you verify what happens when you do the following?

1. Create a NIC team on Server A, following Nandini's comment about ESXi requiring static LAG.

2. Plug one link of the NIC team into VDX-1 and the other into VDX-2, creating a VCS vLAG.

3. Does the vLAG come up on both VDXs?

4. Repeat this procedure for Server B.

5. Does the vLAG come up on both VDXs?

6. Repeat this procedure for Server C.

7. Does the vLAG come up on both VDXs?
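At each step, the vLAG state can be checked on both switches with something like the sketch below. This is a non-authoritative sketch; port-channel 2 is the channel-group number from the original post, and exact output varies by NOS version.

```
! Sketch: verify vLAG/port-channel state on each VDX (NOS CLI);
! port-channel 2 is the channel-group number from the original post.
show port-channel 2
show port-channel detail
! Confirm both rbridges are members of the VCS fabric:
show vcs
```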

I hope the above is helpful.

Best,

Brook
