12-05-2013 10:39 AM
I have two physical fabrics that used to contain McDATA EOS-based switches. Those switches have been gone for a while and I want to get the fabrics back to native mode. I don't think it is too difficult, but I'm looking for a best-practices checklist/procedure.
I have two identical fabrics each fabric has:
A 5300 with 48 active ports ISL'd to
a 5300 with 80 active ports ISL'd via LR optics to
a 5300 with 80 active ports ISL'd to both
a 48000 with 112 ports and
a 5100 with 40 ports.
The second 5300 (the one with the LR optics noted) is the seed switch.
We had been running 6.4.2b and I recently brought everything up to 6.4.3e. Since then the fabrics have been telling me that I should select a new seed switch. That message is new and likely relates to the jump in FOS, though I suspect it is also related to my still running interop mode 2 despite having no EOS switches left. Either way, this feels like a good time to get the interop mode concern resolved; if nothing else, it is keeping me from considering a move to FOS 7.
I use DCFM, and from what I have seen with a few searches it looks like I will need to push and activate the zoneset again as I go through this process, but I am looking for a 'tried and true' checklist to help make sure that things go smoothly.
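Before starting, it can help to confirm from the CLI what each switch is actually running (a hedged sketch using FOS 6.x-era commands; exact output varies by release):

```shell
version        # confirm the FOS level (6.4.3e here)
interopmode    # with no arguments, reports the current interop mode
switchshow     # note the domain ID, switch role, and state before changing anything
```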
Any assistance would be appreciated.
12-05-2013 11:58 AM
Excellent topic start.
I cannot give you a tried and true checklist as I never had to use any interop mode.
If I remember correctly interop mode changes the DID (at least).
I know a lot of older OSes don't like DID or PID changes on their target ports (i.e. storage, whether tape or disk).
HP-UX 11i v3 has something called agile addressing to circumvent this, as does AIX with dynamic tracking.
HP-UX 11 pre-v3 needs help to get this fixed.
If you don't have OSes that are picky about their targets' DID/PID, you should be fine on that front.
But it never hurts to check anyway.
Just my 2 cents
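To illustrate the PID concern: a Fibre Channel N_Port ID is 24 bits, with the switch domain ID in the top byte, so a domain ID change after leaving interop mode changes every PID behind that switch. A minimal sketch with purely illustrative numbers (domain 97 before, domain 1 after; the area and AL_PA values are made up, not taken from the poster's fabric):

```shell
# Compose a 24-bit FC PID as domain | area | port (AL_PA).
# All values here are illustrative examples only.
domain=97 ; area=4 ; alpa=0
printf 'Old PID: %02x%02x%02x\n' "$domain" "$area" "$alpa"   # -> Old PID: 610400

domain=1   # hypothetical new domain ID after the mode change
printf 'New PID: %02x%02x%02x\n' "$domain" "$area" "$alpa"   # -> New PID: 010400
```

An OS that binds devices by PID rather than WWN sees this as a brand-new target path, which is exactly why the older HP-UX and AIX releases need attention.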
12-09-2013 09:32 AM
This weekend I forged ahead with what I knew and I thought I would file this anecdote as an update:
I 'prepared' for the event by saving and exporting copies of both zone databases using DCFM.
I started at one physical end of the fabric, disabled each switch in turn, and modified the interop parameter. The modification process automatically re-enables the switch; in retrospect this may not always be a good thing.
Once all switches had been updated I used DCFM to push and activate the zoneset.
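For anyone following along, the per-switch sequence amounts to roughly the following (a sketch from the FOS 6.x CLI; depending on the platform the mode change may prompt for confirmation or trigger a reboot on its own):

```shell
switchdisable          # take the switch offline first
interopmode 0          # return to Brocade native mode
switchenable           # in my case the mode change re-enabled the switch by itself
```

Once every switch was back in mode 0, the zoneset was re-pushed and activated from DCFM.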
Here are the several conditions and side-effects I experienced:
1) A few of my storage units ended up with no hosts logged into one port. For those ports I forced a fresh login by changing the port speed from Auto to 8Gb, then back again. That 'fixed' them.
2) I also ended up with a significant number of ESX hosts logged into storage ports that they were not zoned for. That was puzzling, since these were all connections that were not zoned and, for the most part, had never been zoned at any point in history.
None of them ended up accessing LUNs that they shouldn't have, but in verifying the changes there were a whole lot of anomalies that had to be tracked down and resolved.
My 'guess' on this is that perhaps the switches were, during the period after the mode change and before the zoneset was re-activated, operating in open zone mode. Unfortunately the ESX hosts rather aggressively look for resources and so, if they suddenly found a storage path they just logged in and tried to register themselves. In some cases the connections showed up as registered but not logged in at the storage units, and even worse in some cases they showed up as still logged in. As if a connection that was established before the zoneset was pushed persisted even without a zone. I don't know this for certain but it is my best and only explanation for what we saw.
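The speed toggle in item 1 can also be done from the CLI (a sketch; the port number is a placeholder):

```shell
portcfgspeed 12 8      # hypothetical port 12: force 8 Gb/s to bounce the link and login
portcfgspeed 12 0      # back to auto-negotiate
```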
The amount of time consumed by tracking these things down took me outside my specified change window, so I will be rescheduling the second fabric for another date. I can design a way of prohibiting, or at least minimizing the opportunity for this sort of stuff to happen next time, but it is going to make the procedure significantly more complex.
12-09-2013 09:49 AM - edited 12-09-2013 09:50 AM
Even more kudos for doing a follow-up, something I rarely see happening.
If you want to prevent hosts from seeing targets when no zoning is effective, look at the defzone command.
With that command you can enable or disable access when no zone configuration is in effect (strictly speaking, with defzone set to no-access a special zone is in effect that prohibits everything else from accessing resources). defzone --show displays the current setting.
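A sketch of what that looks like (FOS 6.x syntax; a defzone change has to be committed before it takes effect):

```shell
defzone --noaccess     # default to no access when no zone configuration is effective
cfgsave                # commit the change to the defined configuration
defzone --show         # verify the current default-zone setting
```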
12-09-2013 02:46 PM
Attached is an older document which covers the migration. When you change the interop mode, all zoning tables are erased. You will need to manually recreate them after the switch reboots in mode 0; several techniques for regaining the zoning are in the attachment.
Keep in mind your FC PIDs will change as well, so watch out for old HP-UX and AIX platforms.
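Since the mode change wipes the zoning tables, a CLI backup alongside the DCFM export is cheap insurance (a sketch; configupload prompts interactively for the host, user, and file name):

```shell
cfgshow                # review the defined and effective zone configurations
configupload           # upload the switch configuration (zoning included) to an FTP/SCP host
```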