09-17-2010 08:30 AM
For a number of years we have been told that trunking is the best solution for both the performance and the resiliency of our ISLs. We currently utilize Brocade DCX switches as part of our remote replication solution. We have two sites that are geographically separate (approx. 30 miles), and we use DWDM (both Verizon and AT&T as providers) over 2Gb circuits. Our ISLs (4 per switch pair) are set up as 2 trunk groups of 2 ISLs each. As we move toward 4Gb circuits we have had problems with one of our providers, who uses Cisco ONS 10DME cards that prevent us from trunking. After going round and round we are now hearing that trunking isn't really required for our Fibre Channel PPRC traffic, and that separate, non-trunked ISLs using DLS will provide us with the same performance and resiliency. I am a bit perplexed, as the Brocade documentation still leads me to believe that trunking provides a number of benefits. Anyone care to weigh in on this? Thanks, Steve
09-22-2010 03:52 AM
Trunking is not the only solution; you can get the same performance with normal ISLs. Just make sure portcfgqos and portcfgcreditrecovery are disabled on the respective DWDM-connected ports.
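For reference, disabling those two features from the FOS CLI would look roughly like this. This is a sketch only: 2/15 is a placeholder for your actual DWDM-facing E_Port, and exact syntax can vary by FOS release, so check your Command Reference first.

```shell
# Sketch only -- 2/15 is a placeholder for the DWDM-facing E_Port.
# Run on the switch CLI as admin; disruptive, so do it during a change window.

portdisable 2/15                     # take the ISL offline before changing link parameters
portcfgqos --disable 2/15            # turn off QoS negotiation on this E_Port
portcfgcreditrecovery --disable 2/15 # turn off buffer-credit recovery primitives
portenable 2/15                      # bring the ISL back up

portcfgshow 2/15                     # verify both features now show as disabled
```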
09-22-2010 04:20 AM
I think the big differences are in the routing table and in when load will be shifted.
With trunking, the links will distribute the load within a trunk as thresholds are reached.
DLS will redistribute or optimize routes only when:
1) a switch boots up
2) E_Port goes offline or online
3) EX Port goes offline
4) a device goes offline.
Keep in mind the word "routes": a route can be overloaded, and it will only be rebalanced if one of the four events above happens in the affected fabric.
I prefer trunking because, if a port of a trunk fails, this has no effect on the routing table as long as masterless trunking is active (4Gbit switches or newer).
If you have FICON you will have additional limitations on the routing policies, settings, and FOS levels, which have to be checked to find the proper configuration.
Best regards Andreas
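To compare the two behaviors described above on your own switches, the current trunk and routing state can be inspected from the CLI. A minimal sketch; these are read-only commands, and the exact output format varies by FOS release:

```shell
# Read-only status commands -- safe to run on a production switch.
trunkshow      # lists each trunk group, its master port, and member ports
dlsshow        # reports whether Dynamic Load Sharing is on or off
aptpolicy      # shows the active routing policy (e.g. port-based vs. exchange-based)
islshow        # lists all ISLs with their speed, bandwidth, and trunking state
```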
09-22-2010 04:22 AM
GK, what about the issue of recovery/resiliency in a non-trunked config? I heard from Brocade that a non-trunked solution has a longer wait to redrive I/O if one of the ISLs/circuits drops or has errors than a trunked solution does.
09-22-2010 04:30 AM
Andreas, thanks for the info. We are native Fibre Channel (replication traffic, PPRC), so the switches are totally Fibre Channel, no FICON. It has been this way for a number of years. We are being told now, as we move to 4Gb circuits with our DWDM providers (and are having issues with one of them), that we can go non-trunked and everything will be the same. After all the years of hearing Brocade exclaim the virtues of trunking (performance and resiliency), I am surprised to hear them say "go non-trunked with DLS and everything will be the same." We will test as best we can to see what changes are noted when we try non-trunked... Regards, Steve
09-22-2010 04:46 AM
You wrote: "having issues with one of them."
Does this mean that you have one trunk, and you are trying to create a second trunk for a resilient layout without success?
Some cards support trunking only within the same DWDM card, not across different cards or different DWDM boxes. In the case of transparent cards, you cannot create a trunk if the link distances are not nearly equal.
09-22-2010 04:58 AM
Andreas, our current config consists of two DCX switch pairs (with DWDM circuits in between). Each switch pair has 2 trunk groups of 2 ISLs per trunk group. We are using 2Gb OTR cards in our Nortel equipment for the DWDM circuit. As we move toward 4Gb, one provider has 4Gb OTR cards; the other is trying a TDM solution (Cisco ONS 10DME cards) which does not allow us the ability to trunk. (Note: our network group uses 2 providers, with cost being more of an issue than technical compatibility.) So we have been going round and round trying to make things work. Now our Brocade rep is saying non-trunked ISLs are just as good a solution. To me it seems like a departure from everything we have been told in the past, as well as being different from the current Brocade documentation. Regards, Steve
12-07-2010 08:36 AM
I would like to hear the outcome of this, particularly the Cisco DWDM side of things and the trunking part.
Our company has dark fiber that connects our two sites via dual paths. We have Silkworm 48000s at the primary datacenter and 4900s at our secondary. FabricA has 2 ISL trunks consisting of 2 4G connections each, using just the dark fiber. Our FabricB has the same setup, but it goes over our Cisco ONS DWDM gear. The problem is that I noticed the ISLs are not trunking like on the other fabric.
I'm pretty sure they used to when we first set up the fabric. Now, in order for the connections to work over the Cisco gear, ISL R_RDY mode must be enabled. Looking over some Brocade documentation, and after several calls to IBM, it seems that in firmware 6.3.1b, when ISL R_RDY mode is enabled, ISL trunking is disabled.
Was the disabling of ISL trunking when using R_RDY mode introduced in a firmware release, or is this something we just missed and it never worked?
When I thought it was working, we were on 5.3.0c. Our next upgrade was to 6.1.0a and I can't tell you if trunking was still working or not in that release. Our last upgrade was to 6.3.1b and was a few months ago. We realized ISL trunking was not working rather recently.
Can anyone shed some light on whether ISL trunking when using R_RDY mode was disabled in a particular firmware release, and if so, around which firmware revision?
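One way to confirm the relationship on your own switches is to check the per-port ISL mode and the trunk state side by side. A rough sketch; 1/0 is just a placeholder for one of the FabricB ISL ports, and the commented lines are disruptive, so treat this as a sketch to verify against your FOS Command Reference:

```shell
# Sketch only -- 1/0 is a placeholder for an ISL port on FabricB.
portcfgshow 1/0        # look for the ISL R_RDY mode setting in the per-port output
trunkshow              # ports running in R_RDY mode should not appear in any trunk group

# Toggling R_RDY mode on a port is disruptive -- disable the port first:
#   portdisable 1/0
#   portcfgislmode 1/0, 0   # 0 = disable R_RDY mode (normal ISL), 1 = enable
#   portenable 1/0
```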