11-05-2010 12:58 PM
I'm assessing a solution for implementing some 5000 series FC switches in a data center.
Where do hardware resources exist?
That sounds pretty general, so let me clarify. Are there processors or other shared resources located somewhere on the switch such that I should isolate certain traffic to particular ports?
For instance, suppose each FC interface module contains four ports, each pair of those ports plugs into its own isolated backplane card, and those cards in turn connect to the "motherboard." In that case, isolating infrastructure traffic to, say, port 0 and port 9 would be the best case. But if all eight four-port Fibre Channel modules connect directly into a single backplane and then straight into the "motherboard," then it makes no difference whether infrastructure uses ports 0 and 1 instead of 0 and 9, because it all takes the same path.
11-05-2010 11:41 PM
I'm not sure I understand your question correctly.
All Brocade 4 Gbit and 8 Gbit switch products have so-called port groups of 8 ports each. These port groups can be used to create a trunk group (2 or more ports).
It is not possible to create a trunk with ports from different port groups.
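As a quick sanity check of that rule, here is a small sketch (my own illustration, not a Brocade tool) assuming port groups are fixed blocks of 8 consecutive ports (0-7, 8-15, ...):

```python
def port_group(port: int, group_size: int = 8) -> int:
    """Index of the fixed 8-port group this port belongs to (assumed layout)."""
    return port // group_size

def can_trunk(ports: list[int]) -> bool:
    """A trunk needs 2+ ports, all from the same port group."""
    return len(ports) >= 2 and len({port_group(p) for p in ports}) == 1

# Ports 0 and 1 share group 0, so they could trunk;
# ports 0 and 9 span groups 0 and 1, so they could not.
print(can_trunk([0, 1]), can_trunk([0, 9]))
```

So if you do plan to trunk, keeping cables within one 8-port block matters; for plain connectivity it does not.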
The TEC specification shows that this box has an aggregate switch I/O bandwidth of 256 Gbit/s, which to my understanding means no oversubscription.
This is normal with Brocade SAN Switches and directors.
4 Gbit/s × 32 ports × 2 (full duplex) = 256 Gbit/s
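The arithmetic above can be reproduced directly (a sketch; the 4 Gbit/s per-port rate and full-duplex doubling come from the post itself):

```python
link_speed_gbit = 4   # Gbit/s per port
ports = 32
duplex_factor = 2     # full duplex: both directions counted

aggregate_gbit = link_speed_gbit * ports * duplex_factor
print(aggregate_gbit)  # 256
```

Since the aggregate matches line rate on every port simultaneously, no port placement can create congestion on this box.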
You have an 800-nanosecond port-to-port latency, so it doesn't matter where you connect a device.
I have attached the Hardware reference guide.
I hope this helps.
11-08-2010 07:13 AM
Basically, I was concerned with minimizing latency and following best practice. I'm new to FC switches and appreciate your input.
Thanks very much!