12-03-2010 08:15 AM
Greetings to all,
We need to add 2 new DCX4S switches to our running production fabric.
Production Fabric consists of:
DTSAN008, subordinate, 48000 @ FOS 6.3.0a, Domain ID of 8
DTSAN011, subordinate, 4100 @ FOS 6.3.0a, Domain ID of 11
DTSAN014, principal, 4100 @ FOS 6.3.0a, Domain ID of 14
Running config on fabric for about 200 servers.
6.3.x required for Xiotech SAN compliance
New Switch Additions:
DTSAN4S1, @ FOS 6.4.1, Domain ID 1
DTSAN4S2, @ FOS 6.4.1, Domain ID 2
No config on new switches.
6.4.1 required for 64 port blades
We have NO FC or IP routing, No FCIP. Single flat fabric in production, and just want to add to it.
After the 4S switches are added, over time we will migrate ALL servers off of the 48000 and 4100s onto the 4S switches. We will end up with only the 2 DCX4S switches running all FC traffic to our Xiotech SAN.
My main concern is to add them with no segmentation and no loss of I/O.
DTSAN4S1 needs to be principal when the fabric migration is finished.
Requesting tips/advice on this activity: anything to configure, anything to watch out for, etc.
12-26-2010 07:02 PM
Very easy. Just add them to the fabric. They will merge normally as long as the fabric operating parameters are the same, no zoning conflicts are present, and the other usual conditions are met. Once you decommission the older switches, the DCXs will become principal through a Fibre Channel build fabric (BF) sequence and principal switch election, which is non-disruptive.
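As a quick sanity check of those merge conditions before cabling any ISLs, something like the following should work. These are standard FOS CLI commands, but exact output varies by release, so treat this as a sketch:

```
configshow | grep fabric.ops   # fabric-wide params (R_A_TOV, E_D_TOV, PID format) must match on every switch
cfgshow                        # on the new DCX-4S units this should show no defined/effective zoning
fabricshow                     # confirm the existing Domain IDs (8, 11, 14) so 1 and 2 do not collide
```

Run the first two on both an existing switch and each new DCX-4S and compare line by line.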
12-28-2010 08:31 AM
Yup, just add them to your fabric. I would suggest a 4-port ISL trunk between them and the 48K and a 2-port ISL trunk between the DCX and the 4100s. Is this a redundant fabric, as in, do you have an "a" fabric and a "b" fabric, or is this just one zoneset? If it is one config (as in you have no fabric redundancy), then you will also want to ISL the two DCXs together. You will want to use ports from the same ASIC, and ports that are adjacent to one another, or you will end up with single ISLs that are not trunked.
This assumes that you have the correct licenses for ISL trunking.
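Once the ISLs are up, you can verify that the trunks actually formed. Again these are standard FOS CLI commands, offered as a sketch since the output format differs between releases:

```
licenseshow   # the ISL Trunking license must be present on both ends of each trunk
islshow       # every ISL appears here; trunked links are consolidated under a master
trunkshow     # lists trunk groups and the member ports in each
```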
We have seen no issues putting a 6.4.x switch on a 6.3 existing fabric.
Having said that, if you are not in a hurry to get these going, I would suggest waiting for a 6.4.1x minor revision of that code to correct any outstanding issues. Remember that the 64-port 8 Gb blades are bleeding edge, and the code might not be 100% ready for them.
If you don't want to wait for the old switches to expire, you can force a principal switch election with the command:
fabricprincipal --enable -priority 0x01 -force
This command forces a fabric rebuild and gives the switch you run it on the highest priority when re-electing a principal switch.
If you don't do this, then from my experience the switch with the lowest WWN becomes the principal switch, which is usually your oldest switch. I think this is legacy behavior, however.
Do this AFTER you add the new switch(es) to your fabric.
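To confirm which switch actually won the election afterwards, the following should work (the --show option of fabricprincipal may vary slightly by FOS release):

```
fabricprincipal --show   # shows the principal selection mode/priority on this switch
fabricshow               # the current principal switch is flagged with a ">" in the listing
```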
Before you add the switches to your fabric, you will want to ensure that there is no zoning configuration on the new ones. I usually do a combo of cfgclear and cfgsave (BEFORE YOU PLUG IN THE NEW SWITCH!) to ensure there is no config on the new switch; otherwise the fabric will segment when you add it.
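Sketching that pre-merge cleanup as a CLI session on each new, still-uncabled DCX-4S (cfgsave will prompt for confirmation):

```
cfgclear   # wipe the local zoning database
cfgsave    # commit the now-empty zoning config to flash
cfgshow    # verify: no defined and no effective configuration remains
```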
When migrating to the DCX, it is my opinion that you will want to migrate the storage ports first. I am not sure how you are enabling redundancy, so you will want to do this piece with care! It will cause all your hosts to do a path failover, which is inevitable when you are moving to new switches.
You will have to go through this pain twice: once to move the storage port(s), and once to move the HBAs on the hosts. Ensure that your multipath drivers are functioning correctly, or you will have to either do it cold or risk unexpected loss of I/O.
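As one example of that multipath check, on a Linux host running device-mapper multipath it would look like this. Your hosts and multipath drivers may well differ, so treat it purely as an illustration:

```
multipath -ll    # every SAN LUN should show at least two paths in "active ready" state
                 # before you pull either the storage port or the HBA side of a path
```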