11-12-2014 04:56 AM
I want to simplify this question as much as possible, for my own sanity. I have two UCS domains in my datacenter. One is on a pair of first-generation UCS 6140XP Fabric Interconnects and is working as expected. The second domain is running on a pair of second-generation UCS 6296 Fabric Interconnects and is having a problem with storage array connectivity. Both connect through a pair of Brocade 5300 switches.
If I zone the first-generation UCS domain to a new storage array, it immediately logs into the array, no muss, no fuss. If I zone the second-generation UCS domain to a new storage array, the blades will not connect until they have been rebooted (I've tried rescanning, mapping LUNs, etc., to no avail). The basic VSAN configurations between the two UCS domains are effectively the same. The only difference I can see is that the switches are receiving an FDMI Host Name from the second-generation FIs. I can't imagine how that would cause any problem at all, but it is the only difference I have found so far.
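For reference, the zoning procedure I'm following on the Brocade 5300s is the standard FOS 6.x CLI workflow sketched below. All WWPNs, zone names, and config names here are placeholders for illustration, not my actual fabric objects.

```shell
# Zone a UCS blade vHBA (initiator) to a new array target port.
# WWPNs and names below are hypothetical examples.
zonecreate "z_blade1_array1", "20:00:00:25:b5:aa:bb:01; 50:06:01:60:3e:a0:12:34"

# Add the new zone to the active zoneset and activate it
cfgadd "prod_cfg", "z_blade1_array1"
cfgenable "prod_cfg"
cfgsave

# Check the name server to confirm the initiator is logged into the fabric
nsshow
```

On the host side, the rescans I mentioned were the usual ESXi variety (e.g. `esxcli storage core adapter rescan --all`); only a full blade reboot makes the new array appear on the second-generation FIs.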
Now, to take one step farther back: I have a second datacenter (completely divorced from a Fibre Channel perspective, as it is 1600 miles away), and it has only second-generation FIs in it (but the same 5300 SAN switches). Neither of those will allow blades to see new storage arrays, just like the second-generation FIs in my primary datacenter. So there is something about second-generation FIs connecting to new storage arrays that I am missing here, but what?
With all that said, I can zone in any other machine type, Dell servers, IBM servers, anything at all, and it connects right into the new storage array without issue. Something is up with these second-generation FIs....
The Brocade FOS is old: 6.4.1b, I believe.
Cisco UCS: 2.2.3e (though it has always exhibited this behavior across the 2.0, 2.1, and 2.2 code streams; it is only becoming more of a problem now that we have over 100 blades in the second-generation environments).
Thanks in advance,