Ethernet Fabric (VDX, CNA)

Mel
Contributor
Posts: 63
Registered: ‎10-16-2010

FCoE Question

Dell has come out with a new blade switch for its M1000e chassis, the Dell PowerConnect M8428-K, which is really an OEM'd Brocade switch. The M8428-K is a 10G CEE blade switch that has 16 internal downlink (server-facing) CEE ports, 8 external CEE ports for uplink to a top-of-rack FCF, like the B-8000 or the Nexus 5K, and 4 native 1/2/4/8 Gbps FC ports.

With this switch, one can use the CEE ports for FCoE pass-through. I am assuming that it does FIP snooping; I believe that is a requirement for being an FCoE forwarder. However, one can also leverage the FC ports and configure the switch as an FCF, where the de-encapsulation of the FC packet from the Ethernet frame and forwarding to the FC SAN can be done on the switch itself, without having to worry about an FCF at the top of rack. I imagine Brocade came up with this for those environments that don't have an FCF at the top of rack - otherwise, I can't imagine why anyone would want to leverage the blade as an FCF. Not much convergence there.

All this having been said, can a hybrid approach be taken, in which the 8428-K's FCF capabilities are used for some of the FC SAN traffic simultaneously with the top-of-rack FCF? Does that make sense? Can this even be done? What is the value in using two FCFs?

Or is it a zero-sum game, where it's either the blade acting as the FCF or the top of rack?

See attached diagram.

Thank you

Retired-Super Contributor
Posts: 260
Registered: ‎05-12-2010

Re: FCoE Question

Hi PWBF,

I've gotten some input on your question from our product marketing/product management team.

---------

One of the main advantages of the Dell M8428-K is precisely that it doesn't require an external top-of-rack switch to "split" the FC and Ethernet traffic to the existing SAN and LAN networks, thus reducing the amount of hardware to purchase and manage, and bringing significant savings in CapEx and OpEx. Since the same switch already provides native connectivity to FC fabrics, why would you want to connect to an external top-of-rack switch to do the FCF? As a matter of fact, the M8428-K does not support acting as an "FCoE pass-through" as you have described it. It has been designed to split the FC and Ethernet traffic within the blade server chassis to provide native connectivity to the existing SAN and LAN.

This approach also drastically reduces the number of cables and SFPs that have to be deployed per rack, which turn out to be the most common points of failure, so it also ends up improving overall system availability. Finally, it also enables more bandwidth per server.

If you compare a Dell solution with an alternative solution requiring external top-of-rack FCF switches, you will find the Dell solution comprises less hardware and fewer cables and SFPs while delivering more servers and more bandwidth per server. See the graphic below for details.

[Attachment: UCSvsDELL.JPG]

A little off-topic, but the same is true for the Brocade Converged 10GbE Switch Module for IBM BladeCenter (aka Brocade 8470). They were both designed with the same philosophy in mind.

--------

I hope this helps out.

Mel
Contributor
Posts: 63
Registered: ‎10-16-2010

Re: FCoE Question

Brooke, thanks.

I actually want to close this thread. I posted the same question twice, and I shouldn't have. Can you join us at this link?

http://community.brocade.com/thread/5310?tstart=0

I'll post my response and questions there. Thank you very much!

Retired-Super Contributor
Posts: 260
Registered: ‎05-12-2010

Re: FCoE Question

Thread Closed and Moved to New Thread

Mel
Contributor
Posts: 63
Registered: ‎10-16-2010

Re: FCoE Question

Brook, by the way, very nice slide there...

One question...how did you get to the 456Gbps number on the UCS?

Thanks

Retired-Super Contributor
Posts: 260
Registered: ‎05-12-2010

Re: FCoE Question

Hi Mel,

Thanks, but of course, I'm not the artist :-).

I pinged Product Management on your question and got the following response.

-----------

On each top-of-rack 6140XP there are 18 x 10GbE LAN connections and 6 x 8Gb SAN connections; across the two 6140XPs, that totals 456 Gbps.

(18 x 10 + 6 x 8) x 2 = 456 Gbps

So even if each UCS chassis has an external bandwidth of 80 Gbps, and 7 x 80 = 560 Gbps, that traffic has to go through the top-of-rack switches, which become oversubscribed since the maximum uplink bandwidth is 456 Gbps.

------------
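The quoted arithmetic can be checked with a quick sketch (the port counts and per-port speeds are exactly as stated in the reply above; nothing else is assumed):

```python
# Check of the quoted uplink math for the two top-of-rack 6140XPs.
lan_ports = 18          # 10 GbE LAN connections per ToR
san_ports = 6           # 8 Gb FC SAN connections per ToR

per_tor_gbps = lan_ports * 10 + san_ports * 8   # 180 + 48 = 228 Gbps
total_gbps = per_tor_gbps * 2                   # two ToRs

print(total_gbps)  # 456
```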

Hope that helps...

Mel
Contributor
Posts: 63
Registered: ‎10-16-2010

Re: FCoE Question

Brook, don't hate me, but that slide needs to be cleaned up. The BW numbers don't match the uplink BW and the number of cables.

I think the Dell M8428-K with the M1000e is a great proposition, no doubt. But these numbers on the Cisco side are wrong.

If there are 56 cables from the UCS chassis to the ToRs, that means the uplink BW from the chassis to the ToRs is 560 Gbps, not 456 Gbps. And the cable count in your calcs takes that into account - it includes 56 cables.

Also, the ToRs do not have 36 cables to the LAN infrastructure. Each ToR can have no more than 2 uplink modules. If you want max LAN and max SAN uplink BW from each ToR, you would use one 6-port 10GE module and one 6-port FC module, so that's a total of 12 Ethernet cables from the ToRs to the LAN and 12 FC cables from the ToRs to the SAN.

So, that means you have a total uplink BW from chassis to ToR of 560 Gbps, a total ToR uplink BW to the LAN of 120 Gbps, and 96 Gbps to the SAN.

And the total number of cables is 56 from UCS to ToR + 12 from ToRs to LAN + 12 from ToRs to SAN = 80 cables, not 104.
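My tally above can be sketched like this (all counts are from this post; the per-port speeds of 10G for Ethernet and 8G for FC are assumed from the module types named above):

```python
# Corrected tally, per the reasoning in this post.
chassis_cables = 56                 # UCS chassis -> ToRs, 10G each
chassis_bw = chassis_cables * 10    # 560 Gbps, not 456

lan_cables, san_cables = 12, 12     # one 6-port module per type, x 2 ToRs
lan_bw = lan_cables * 10            # 120 Gbps from ToRs to the LAN
san_bw = san_cables * 8             # 96 Gbps from ToRs to the SAN

total_cables = chassis_cables + lan_cables + san_cables
print(chassis_bw, lan_bw, san_bw, total_cables)  # 560 120 96 80
```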

Mel
Contributor
Posts: 63
Registered: ‎10-16-2010

Re: FCoE Question

I made some corrections to my initial post, so please read the updated post on the board and not the copy in your email.

Mel
Contributor
Posts: 63
Registered: ‎10-16-2010

Re: FCoE Question

OK, wait!!! LOLOL.. I figured the logic out...

Besides the uplink modules, your diagram deploys - as uplinks - the remaining 12 fixed 10G ports that are left on each ToR after the 56 cables are connected from the chassis.

Got it! lol... I was wondering where you guys got 18 x 10GbE uplinks on each ToR.
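So the reconciliation works out like this (a sketch; the figure of 40 fixed 10G ports per 6140XP is my assumption about that model, the rest follows the thread's numbers):

```python
# Where the 18 x 10GbE uplinks per ToR come from, per the logic above.
fixed_ports = 40                 # fixed 10G ports on each 6140XP (assumed)
downlinks_per_tor = 56 // 2      # 28 chassis-facing cables land on each ToR

spare_fixed = fixed_ports - downlinks_per_tor   # 12 fixed ports left over
module_ports = 6                 # plus the 6-port 10GE expansion module

uplinks_per_tor = spare_fixed + module_ports
print(uplinks_per_tor)  # 18
```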
