06-27-2011 02:18 PM
I have two Brocade 5300 switches connected via 9 x 2Gbps ISLs that are extended through an active DWDM solution to be about 20km apart. This fabric is used exclusively for sending primary backup copies to an offsite location. The fabric serves both Open Systems (Fibre Channel) as well as Mainframe (FICON, not using CUP). None of the ISLs participates in a trunk.
When I look at the nightly throughput on the ISL ports, 8 of the 9 ISLs show identical throughput patterns, which tells me that they are sharing the load equally among them. However, the 9th ISL has an I/O pattern all its own. Lately, it has been significantly higher than the other 8 ports, though this is not always the case.
So the question is, how does FabricOS determine which ISL to use for traffic? In this case, all of the paths are the same cost and all are an equal number of hops from source to destination. Is it possible that I am seeing Fibre Channel traffic on 8 of the 9 ISLs and FICON traffic on the 9th?
06-27-2011 10:57 PM
It depends on which routing policy is in use (see `aptpolicy`):
- Port-based routing: the output ISL is chosen based on the Source ID
-> one initiator always takes the same ISL (unless the fabric changes and DLS is enabled)
- Exchange-based routing: the output ISL is chosen based on the Source ID, Destination ID, and Originator Exchange ID (OX_ID)
-> all initiators use all ISLs
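The difference between the two policies can be sketched in a few lines of Python. This is an illustrative model only, not Brocade's actual hash: the point is that port-based routing keys on the Source ID alone, while exchange-based routing also folds in the Destination ID and OX_ID, so every new exchange can land on a different ISL.

```python
# Illustrative sketch of aptpolicy routing modes (NOT FabricOS's real hash).

ISLS = list(range(8))  # eight equal-cost ISLs

def port_based_route(sid: int) -> int:
    """Port-based routing: choice depends only on the Source ID,
    so a given initiator is pinned to one ISL."""
    return ISLS[sid % len(ISLS)]

def exchange_based_route(sid: int, did: int, oxid: int) -> int:
    """Exchange-based routing (DPS): SID, DID and OX_ID all feed
    the choice, so exchanges spread across all ISLs."""
    return ISLS[(sid ^ did ^ oxid) % len(ISLS)]

# One initiator (SID 0x010200) talking to one target (DID 0x020300):
sid, did = 0x010200, 0x020300
print(port_based_route(sid))  # the same ISL for every frame
print({exchange_based_route(sid, did, oxid) for oxid in range(64)})
# with exchange-based routing, successive exchanges use all eight ISLs
```

Under port-based routing, that single initiator would saturate one ISL while the others idle; under exchange-based routing its exchanges fan out across the whole group.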
See attached drawings
Hope this helps
06-28-2011 05:54 AM
Your images were helpful, but they still leave the original question open. We are using exchange-based routing with the AP Shared Link Policy (see the output of 'aptpolicy' below). From what I understand of exchange-based routing, all 9 ISLs should be balanced, which, as you have seen, is not the case. So what scenario could possibly lead to one ISL having its own I/O pattern even though the fabric is using exchange-based routing?
Thanks again for your response!
06-28-2011 06:25 AM
Your information prompted me to do some further investigation, which has led me to the answer to my question.
Using the command 'topologyshow' on the local switch, I see that port 32 has a different list of "In Ports" than the rest of the ISLs (I am truncating the output below for brevity):
So 8 of the 9 ISLs have the same In Ports list as Out Port 0, but Out Port 32 has only two In Ports, ports 35 and 43, which happen to belong to the same server. So all of the traffic is being spread across 8 of the ISLs except traffic from one server, which is on its own ISL. This explains what I was seeing.
The obvious follow-up questions are: why is it this way, and what can I do to make all of the ISLs participate in shared routing?
06-28-2011 06:28 AM
Are all ISLs using the same physical path?
A hardware issue on one ISL may explain why traffic is not well balanced ...
Please provide switchshow, trunkshow, islshow and porterrshow output; it may help.
06-28-2011 07:02 AM
Finally, the link below has answered my follow-up questions. It seems that there is a limit of 8 ISLs or trunks in a single DPS group. Thanks again for your help!
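The 8-path limit described above can be sketched as follows. The grouping logic here is an assumption for illustration (not FabricOS code): with a DPS group capped at 8 equal-cost paths, a 9th ISL cannot join the group and ends up carrying only whatever in-ports are assigned to its own leftover group, which matches the lone ISL with its own I/O pattern seen in topologyshow.

```python
# Hedged sketch of the DPS group limit (assumed grouping, not FOS code).

DPS_GROUP_LIMIT = 8  # maximum equal-cost ISLs/trunks per DPS group

def group_isls(isls):
    """Split a list of equal-cost ISL out-ports into DPS groups of at most 8."""
    return [isls[i:i + DPS_GROUP_LIMIT]
            for i in range(0, len(isls), DPS_GROUP_LIMIT)]

# Nine equal-cost ISL out-ports (port numbers are hypothetical):
isls = [0, 2, 4, 6, 8, 10, 12, 30, 32]
print(group_isls(isls))  # -> [[0, 2, 4, 6, 8, 10, 12, 30], [32]]
# Most traffic is balanced across the first group of eight; anything
# routed via the second group rides the ninth ISL alone.
```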
06-28-2011 09:30 AM
Yes, trunking is on our roadmap (we even have the licenses), but our DWDM solution is so old that we need to disable Brocade's proprietary trunking on the ISLs or else the unit mangles the packets. We have a project beginning to replace the active DWDM with a passive DWDM that would allow us to use colored SFPs in the Brocade at 8Gbps instead of 2Gbps. This would allow me to reduce the number of ISLs and still increase aggregate throughput.
Thanks again for your help!
06-28-2011 10:52 PM
One piece of advice: be very careful with 8Gbps ISLs ... We tried to migrate some 4Gbps links (LW, only around 300m) that were working absolutely fine to 8Gbps and got lots of CRC errors in production, even though the links had tested clean with spinfab beforehand. It seems 8Gbps is considerably more sensitive to hardware/link errors.
I'd be interested in feedback on your migration from 2 to 8Gbps once it is done.
07-08-2011 12:36 AM
I need some clarity on ISL data flow. I created an ISL between two M48 directors, initially using only 2 ports (4G) from one blade to form one trunk. Some days later I added 8 ports (4G) from a second blade to create a second trunk. Both switches now show a 32G trunk and an 8G trunk. Now I want to use a library and storage for a new server through the ISL. My question is: what is the data path flow? Will the data be shared across both trunks, or will the 32G trunk be used first and the 8G trunk after that? islshow output:
1: 13->125 10:00:00:05:1e:36:72:98 102 SW1 sp: 4.000G bw: 32.000G TRUNK
2:255->144 10:00:00:05:1e:36:72:98 102 SW1 sp: 4.000G bw: 8.000G TRUNK
1:125-> 13 10:00:00:05:1e:36:4e:ae 101 SW01 sp: 4.000G bw: 32.000G TRUNK
2:144->255 10:00:00:05:1e:36:4e:ae 101 SW01 sp: 4.000G bw: 8.000G TRUNK