
Occasional Contributor
Posts: 12
Registered: ‎01-16-2008

Migration to DCX-4s Directors

Good morning all.

I manage a relatively small SAN environment made up of a few hundred TB of storage, a hundred-plus hosts, an LTO4 library, and a couple of VTLs, all tied together by two redundant fabrics of 17 switches each.

At the core of each of my fabrics is a Brocade Silkworm 12K. Although they are only 2 Gb switches, they have been VERY good to me for years and often make me look like some sort of hero. However, Brocade has decided it is time for them to rest and has put them on the End of Support list.

Directly attached to the 12Ks are my storage subsystems and the ISLs of the 16 edge switches (per fabric). Attached to the edge switches are my hosts. (The exception is the switches hosting my tape environment; since everything involved there is capable of 4 Gb speeds, I decided to keep that traffic all intra-switch... man, the speeds we see... but the tape environment is of no importance to this post.)

Each edge switch has four ISLs attached to the core switch of its respective fabric. Trunking is in place, and since the 12Ks are 2 Gb that means there are two 4 Gb trunks per edge switch (again, to their respective cores). For reference, the trunk pairs on the edge switches are ports 0 and 1, and ports 30 and 31.

We have ordered and will soon take delivery of two DCX-4s backbones, each with two 48-port blades fully populated with 8 Gb SFPs.

Is anybody aware of a handy-dandy road map already in place for the least disruptive migration when replacing director/core switches?

Facts:

1. Each host is currently dual-attached in our environment; that is to say, it has one connection to each fabric for both redundancy and load balancing. Multipathing drivers are in place where applicable.

2. There are two 4 Gb ISL trunks from each edge switch to the director/core in its respective fabric.

3. Storage subsystems are also attached to each fabric for redundancy and load balancing.

Here is what I have envisioned:

1.  Configure the switches, implementing all of our local security policies, etc.

2.  Configure the new Fabric Manager, and figure it out.

3.  Attach ISLs from each DCX-4s to the respective 12K that it will replace.

4.  Pre-stage my fibre cables to be used as ISLs from each existing edge switch to its respective DCX-4s.  (Note: all edge switches are 4 Gb switches, and there are currently two 4 Gb trunks per switch to the 2 Gb 12Ks.  My intention is to reduce the number of ports used by implementing only one 8 Gb trunk per edge switch to its respective DCX-4s.  Same throughput, fewer ports... makes sense in my head.  I plan to sanity-check each new link with the commands sketched after step 5.)

5.  Pre-stage my fibre cables to attach my storage subsystems to the new DCX-4s backbones.
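For steps 3 and 4, my plan is to check each link as it comes up with the usual FOS commands (from memory, so the exact output will vary by Fabric OS version):

    fabricshow     - confirm the DCX-4s and the 12K have merged into one fabric
    islshow        - list the ISLs and their negotiated speeds
    trunkshow      - confirm the trunk groups formed as expected
    porterrshow    - watch for CRC/encoding errors on freshly cabled links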

***This is where my plan becomes a little more abstract.***

There are a couple of details I could use help with.  Would it be better to disable switch ports prior to removing the existing ISLs, or should I just disconnect the fibre?  If the ports should be disabled, should I do it on the existing 12K or on the edge switch?  I figure if I do it on the edge switch, I can re-enable the ports once they have been attached to their new DCX-4s.

Do you think it is safe to take an edge switch with its two 4 Gb trunks, remove one (either by disabling the ports or by pulling the cables), connect the ports of the removed trunk to the DCX-4s (re-enabling the ports if necessary) to create the 8 Gb trunk, and finally disable or remove the remaining 4 Gb trunk, all without stopping I/O?  (The ISLs between the existing 12Ks and the new DCX-4s backbones will remain in place until all storage subsystems have been migrated to the DCX-4s and all edge switches have their 8 Gb trunks to their new core/director.)
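To make that concrete, here is the rough per-edge-switch sequence I have in mind, run on the edge switch, using the standard FOS port commands (our port numbers; commands from memory, so please correct me if I have this wrong):

    portdisable 30
    portdisable 31      <- drop one of the two 4 Gb trunks to the 12K
    (recable ports 30 and 31 to the DCX-4s)
    portenable 30
    portenable 31
    trunkshow           <- confirm the new 8 Gb trunk to the DCX-4s has formed
    portperfshow        <- confirm I/O is flowing over the new path
    portdisable 0
    portdisable 1       <- retire the remaining 4 Gb trunk to the 12K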

I know this is a lot of info to cram into a posting (and a lot of rambling).  Hopefully I've asked the questions in a way people can help with.  I know I'm not the first person to ever replace their core switches... is there a smarter way to do it?

Thank you in advance for any help or info you can provide.  Sorry for the rambling.

Contributor
Posts: 53
Registered: ‎06-24-2009

Re: Migration to DCX-4s Directors

Hi PhilGuy,

I hope you use WWN zoning and have no HP-UX servers, or your life will be more difficult.

I would disable the ISL port before removing the cable. It doesn't matter on which switch, but, as you say, doing it on the edge means you can recable and then simply re-enable the port.

For the ISLs, a couple of things to think about. If you have just one 8 Gb trunk, yes, you save ports, but you lower the availability of the solution: the whole trunk sits on one ASIC, so an ASIC failure takes out all your ISLs at once. It may be preferable to use ports 0 and 31 (or 0 and 63). More importantly, you are basically upgrading your SAN from 2 Gb to 4 Gb. Normally this should mean more throughput overall, so a single 8 Gb trunk may well no longer be sufficient. Further, it is far from the general practice of an ISL oversubscription ratio of about 7:1. Perhaps two 8 Gb trunks would be required.
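Rough numbers on the oversubscription point, assuming a fully populated 32-port edge switch (adjust for your actual host counts):

    28 host ports x 4 Gb = 112 Gb of potential edge bandwidth
    one 8 Gb trunk:  112 / 8  = 14:1 oversubscription
    two 8 Gb trunks: 112 / 16 =  7:1 oversubscription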

Alastair

Occasional Contributor
Posts: 12
Registered: ‎01-16-2008

Re: Migration to DCX-4s Directors

Alastair,

Thank you for your reply.  Yes, zoning is all by WWN.

You know, unfortunately for us, we do have some legacy HP-UX systems, and from what I understand any change in topology will affect their SAN-attached devices (volumes/LUNs).  Have you had experience with this?  Is there a clean way to do it?  They would potentially be affected by several moves (i.e. the migration of the storage subsystems to the new switches... oh geez, maybe even each port of the subsystem, since I can't move them all at one time... and the migration of the edge switch they are attached to over to the new cores).

In regards to your advice on ISLs: is it possible for me to create a trunk using ports 0 and 31?  I always thought trunking was restricted to the port groupings (per ASIC) that Brocade was nice enough to color-code.

We are in fact upgrading from 2 Gb to 4 Gb (since that is how fast the edge switches are); however, I don't expect an increase in overall throughput.  Even in our 2 Gb environment, devices operating as high as 50%-75% of their max throughput are few and far between (and that is at peak, not constant).  (Our highest I/O activity is during our backup processes, and as I mentioned, that traffic is contained within two edge switches and does not travel through either core.)  In other words, our 2 Gb core does not limit our throughput; our applications and databases just don't (currently) use it.  As I find performance bottlenecks caused by single 8 Gb trunks, I will either add ports to the same trunk or create a new trunk.
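(For what it's worth, the way I intend to catch an ISL bottleneck is simply to watch the trunk ports on the edge switches, e.g.:

    portperfshow    <- per-port throughput; watch the trunk member ports
    porterrshow     <- rule out link errors masquerading as congestion

If a trunk sits near its aggregate line rate for any length of time, I'll add members or a second trunk, as you suggest.)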

Thank you again for your insight.

Phil

Contributor
Posts: 53
Registered: ‎06-24-2009

Re: Migration to DCX-4s Directors

Phil,

Concerning HP-UX: when you move an array port from the 12K to the DCX, all LUNs that are masked to that port will get new device paths, and so all the VGs concerned will have to be rebuilt. I think you may be able to do a vgexport, change the names to the new ones, and do a vgimport, but I'm no HP-UX guru. I would use the "inq" program from EMC, or similar, to note the device name changes. You will have to repeat this as each array port is moved.
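Something along these lines, from memory, so please test it on a non-critical VG first (vg01, the mapfile path and the minor number are just examples):

    inq                                      # note the current device names / LUN serials
    vgexport -p -s -m /tmp/vg01.map vg01     # preview: writes a mapfile with the VGID, VG stays intact
    vgchange -a n vg01                       # deactivate the VG before the port move
    vgexport vg01                            # remove the VG definition from the system
    ... move the array port, then rescan ...
    ioscan -fnC disk
    insf -e
    mkdir /dev/vg01
    mknod /dev/vg01/group c 64 0x010000      # minor number must be unique among your VGs
    vgimport -s -m /tmp/vg01.map vg01        # -s scans the disks for the VGID in the mapfile
    vgchange -a y vg01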

Changing ports at the server end has no effect on the device names.

No, you cannot trunk ports 0 and 31. As you say, trunked ports have to be on the same ASIC.

Re the ISLs, if your throughput never goes higher than about 1.4 Gb/s, then why trunk the ISLs? There is no need in your case.

Personally, I would go anyway for two 8 Gb trunks using ports 0&1 and 30&31. Why? Because you never know what will be required next year, and it's a pain rearranging things when your port count is low and you no longer have ports 1 and/or 30 free to do so. You lose no ports over what you have today, and you are ready for the big boom just around the corner. Just my 2c worth. :-)
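If you do go that way, it is worth checking that trunking is licensed and actually enabled on the ports you pick before you cable them (syntax from memory):

    licenseshow            (is the ISL Trunking license present?)
    portcfgshow 0          (check that Trunk Port is ON)
    portcfgtrunkport 0, 1  (enable trunking on the port if it is off)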

In any case, the next step will be to replace the 4100s (or whatever they are) with 5300s (or something like them). Would you use just a single 8 Gb port for your ISLs then? Of course not!

Alastair
