06-29-2010 12:06 PM
I have a few chassis switches in a dual-fabric configuration and have added an 8 Gb 300E to each fabric, for three switches total per fabric. The EMC CLARiiON is cabled to the chassis switch and I want to move it over to the 300E; each array controller is zoned correctly and sees all hosts. After moving the cables from the 4024 chassis (FOS 5.0.5) to the 300E, I get no connectivity from the hosts to the storage. The CLARiiON ports are logged into the fabric as F_Ports and are visible in the name server, so I would presume the hosts should be able to see them.
Checked all zoning; everything seemed OK. Hosts are using PowerPath 4.3.
Diagrams are attached.
Further questions or thoughts would be appreciated, thanks.
06-30-2010 06:46 AM
Are the hosts AIX? Then you have to run rmdev and cfgmgr. By the way, this is a disruptive activity: you should have the applications down, the database shut down, and the file systems unmounted on all hosts before performing it. Even with multipathing software in place, HBAs sometimes behave differently. At this point you have to do something from the host side; otherwise the only way is to remove the mapping from the storage, unzone the devices, check the host connectivity (i.e. the F_Port), then rezone them and map them back on the storage.
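A minimal sketch of the AIX rediscovery sequence described above, assuming the CLARiiON LUNs appear as hdisk4 in volume group datavg behind adapter fcs0 (all three names are hypothetical placeholders for your environment):

```shell
# Quiesce first: stop apps, shut down the DB, unmount filesystems,
# then vary off the volume group that uses the affected disks
umount /data
varyoffvg datavg

# Remove the stale device definition (-d also deletes it from the ODM)
rmdev -dl hdisk4

# Rescan the fibre adapter and rebuild the device tree
cfgmgr -l fcs0

# Confirm the disk came back with its new fabric address
lsdev -Cc disk
```

Repeat the rmdev for each hdisk on the moved array, then vary the volume group back on and remount.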
06-30-2010 08:33 AM
fcping will work because the zoning is intact.
Run >cfgactvshow | grep <zonename>
It will show you the active configuration.
You can also try the portdisable/portenable commands; otherwise you will have to do what I mentioned in my post above.
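For reference, the switch-side checks mentioned above can be run from the 300E's CLI roughly as follows (port number 4 and the zone name are hypothetical examples):

```shell
# Confirm the array port has logged in as an F_Port
switchshow

# Confirm its WWPN is registered in the name server
nsshow

# Verify the zone is part of the active configuration
cfgactvshow | grep "clariion_spa_zone"

# Bounce the port to force the device to log in to the fabric again
portdisable 4
portenable 4
```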
06-30-2010 11:12 AM
The problem is not that the storage is not visible on the fabric, but that the host configuration has not been updated to reflect the topological change on the switch. Because the storage was moved from one switch to another, the domain ID and port ID (PID) for the storage have changed. This information is normally visible to the HBA, so you will basically have to refresh the device configuration on the host to get rid of the old device entries and pick up the new ones.
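To illustrate why the move matters: the 24-bit FC address (PID) that hosts cache is composed of a domain byte, an area byte, and a port byte, so moving a cable to a different switch changes it. A hypothetical before/after:

```
Old PID: 0x021F00  ->  domain 0x02 (4024 chassis), area/port 0x1F00
New PID: 0x050400  ->  domain 0x05 (300E),         area/port 0x0400
```

The WWPN of the array port stays the same, but any host that cached the old PID will keep trying the stale address until it rediscovers the device.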
This can be done dynamically or disruptively. For example, in Windows you'd go to Disk Management, right-click and select Rescan Disks, then go to your multipathing software and do what needs to be done there.
For Solaris, you'd need to run "cfgadm -c unconfigure <controller-ID>; cfgadm -c configure <controller-ID>; devfsadm -v".
Alternatively, a reconfiguration reboot of the host would also update the OS and most multipathing software.
06-30-2010 11:24 AM
...one more thing: I was not able to view the diagram you provided, but as the previous respondent suggested, the topology of your SAN could be optimized. I presume you moved the storage to the 300E for the increased bandwidth available. However, unless the EMC storage itself runs at 8 Gb/s, you will not get any additional benefit from moving the array to the 8 Gb/s 300E unless you also co-locate the primary hosts that use this storage on the 300E switch.
As a longer-term project, you might also want to consider implementing a core-edge fabric topology (if that's not what is currently in use) so you can easily add and remove devices from the fabric in the future.
07-01-2010 12:58 AM
I don't think FOS 5.0 and 6.3 are compatible. You can check the 6.3 release notes, but I think you need to upgrade the old switches to at least 5.3.
Having said that, I agree with the others. Have you tried rebooting at least one of the servers involved? What does PowerPath say?