Fibre Channel (SAN)

Occasional Contributor
Posts: 12
Registered: ‎06-09-2010

Collapsing a few switches

I have a few chassis switches in a dual-fabric configuration and have added an 8 Gb 300E to each fabric, for three switches total per fabric. The EMC CLARiiON is cabled to the chassis switch and I want to move it over to the 300E; each array controller is zoned OK and sees all hosts. After moving the cables from the 4024 chassis (FOS 5.0.5) to the 300E, I get no connectivity from the hosts to storage. The CLARiiON ports are logged into the fabric as F-Ports and are visible in the name server, so I would presume the hosts should see the storage OK.

Checked all zoning, all seemed OK. Hosts are using PowerPath 4.3.

Graphics are added.

Further questions or thoughts would be appreciated, thanks.

Super Contributor
Posts: 425
Registered: ‎03-03-2010

Re: Collapsing a few switches

Are the hosts AIX? Then you have to do an rmdev and cfgmgr. By the way, this is a disruptive activity: you should have the apps down, the DB down, and the file systems unmounted on all hosts before performing it. Even with multipathing software in place, the HBAs sometimes behave differently. Now you have to do something from the host side; otherwise the only way is to remove the mapping from the storage, unzone the devices, check the host connectivity (i.e. F-Port), then rezone and map it back to the storage.
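For reference, a minimal sketch of that AIX refresh; the hdisk name below is a placeholder and would need to match your actual stale devices:

```shell
# AIX (sketch): remove the stale disk definition, then rediscover devices.
# hdisk2 is a placeholder -- check "lsdev -Cc disk" for your real device names.
rmdev -dl hdisk2      # -d deletes the device definition, -l names the device
cfgmgr -v             # rescan and rebuild the device tree (verbose)
```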

Occasional Contributor
Posts: 12
Registered: ‎06-09-2010

Re: Collapsing a few switches

Definitely a disruptive task indeed. No UNIX systems in the mix. All hosts are down; this is a service outage opportunity. fcping reports OK from target to host. 

Super Contributor
Posts: 425
Registered: ‎03-03-2010

Re: Collapsing a few switches

fcping will work because the zoning is intact.

Run: cfgactvshow | grep <zonename>

It will show you the active configuration.

You can also try the portdisable/portenable commands; otherwise you have to do what I mentioned in my post above.
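For anyone following along, the FOS commands above look roughly like this; the zone name and port number are placeholders, not from the original poster's config:

```shell
# Verify the zone is in the active (effective) configuration
cfgactvshow | grep -i clariion_spa    # "clariion_spa" is an example zone name

# Confirm the array port state on the 300E
switchshow                            # the CLARiiON ports should show Online / F-Port

# Bounce a port to force the device to log in to the fabric again
portdisable 4                         # port 4 is an example
portenable 4
```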

Occasional Contributor
Posts: 12
Registered: ‎06-09-2010

Re: Collapsing a few switches

I think this is a multipathing issue. PowerPath versions are at 4.3. FOS on the 300E is 6.3.0b, while FOS on the 4024 is 5.0.5. Hop count, possibly? There are no more than two hops.

Super Contributor
Posts: 425
Registered: ‎03-03-2010

Re: Collapsing a few switches

Always try to localize the devices, and you may have to upgrade the PowerPath version, and possibly the HBA driver/firmware versions as well.

Occasional Contributor
Posts: 6
Registered: ‎10-02-2008

Re: Collapsing a few switches

The problem is not that the storage is not visible to the host, but that the host configuration has not been updated to reflect the topology change on the switch. Because the storage was moved from one switch to another, the domain ID and port ID for the storage have changed. This information is normally visible to the HBA, so you will basically have to refresh the device configuration on the host to get rid of the old device configuration and pick up the new one.

This can be done dynamically or disruptively. For example, in Windows you'd go to Disk Management, right-click and select Rescan, then go to your multipathing software and do what needs to be done there.

For Solaris, you'd need to do a "cfgadm -c unconfigure <controller-ID>; cfgadm -c configure <controller-ID>; devfsadm -v".

Alternatively, a reconfiguration reboot of the host would also update the OS and most multipathing software.
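Since the hosts here are Windows, the Disk Management rescan above can also be scripted; a sketch, run from an elevated prompt:

```shell
:: Windows (sketch): rescan the storage bus from the command line,
:: equivalent to Disk Management -> Action -> Rescan Disks
echo rescan | diskpart
```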

Occasional Contributor
Posts: 6
Registered: ‎10-02-2008

Re: Collapsing a few switches

...one more thing: I was not able to view the diagram you provided, but as suggested by the previous respondent, the topology of your SAN could be optimized. I presume you moved the storage to the 300E switch for the increased bandwidth available. However, unless the EMC storage is 8 Gb/s, you will not get any additional benefit from moving the array to the 8 Gb/s 300E unless you also co-locate the hosts, i.e. move the primary hosts that use this storage to the 300E switch.

As a long-term project, you might also want to consider implementing a core-edge fabric topology (if that's not what's currently in use) to ensure you can easily add and remove devices in the future.

Contributor
Posts: 53
Registered: ‎06-24-2009

Re: Collapsing a few switches

I don't think FOS 5.0 and 6.3 are compatible. You can check the 6.3 release notes, but I think you need to upgrade the old switches to at least FOS 5.3.
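It's worth verifying that the 300E actually merged into the fabric rather than segmenting; a quick check from either switch (exact output wording varies by FOS version):

```shell
version        # confirm the FOS level on the local switch
fabricshow     # every domain in the fabric should be listed here
switchshow     # a segmented ISL is flagged on its port line (e.g. "segmented")
```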

Having said that, I agree with the others. Have you tried rebooting at least one of the servers involved? What does PowerPath say?
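On the PowerPath side, these powermt commands would show whether the paths came back after the move; a sketch, assuming the PowerPath CLI is in the path:

```shell
powermt display dev=all   # per-LUN path states; failed paths show as dead
powermt restore           # re-test dead paths and revive them if I/O succeeds
powermt config            # on some platforms: pick up newly discovered paths
```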

Super Contributor
Posts: 425
Registered: ‎03-03-2010

Re: Collapsing a few switches

Do you use port-based zoning? If so, that would explain it: moving the array to the 300E changed its domain,port address, so the old zone entries no longer match.
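To illustrate the difference: a domain,port zone pins its members to specific switch ports, so it silently breaks when a cable moves to another switch, while a WWN zone follows the device. The zone names, config name, and WWNs below are made up for the example:

```shell
# Domain,port zoning -- member "2,4" means domain 2, port 4.
# Moving the array to the 300E gives it a new domain,port, so this zone stops matching.
zonecreate "host1_spa", "2,4; 2,12"

# WWN zoning -- members are port WWNs, valid wherever the device plugs in.
zonecreate "host1_spa_wwn", "10:00:00:00:c9:2b:aa:01; 50:06:01:60:41:e0:aa:02"

# Add the new zone to the configuration and activate it
cfgadd "prod_cfg", "host1_spa_wwn"
cfgenable "prod_cfg"
```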
