Fibre Channel (SAN)

New Contributor
Posts: 2
Registered: ‎09-03-2010

FC switches migration (3800 -> 4100)

After migrating our SAN environment I have problems with the tapes and the storage in the new environment.

The migration was from 3800 FC switches (FOS v3.2.0a, domain IDs 1 and 2; these two switches formed a fabric) to new 4100 switches (FOS v6.1.0h, domain IDs 3 and 4, also in a fabric).

At the beginning I saw all paths as expected (Solaris 8/9; Veritas multipathing on one host and OS-level MPxIO on the second).

But after some time half of the paths were lost. When I restrict zoning so devices stay within their own domain ID I have no problems, but when I create zoning that uses the ISL between the FC switches the path problems start.
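For what it's worth, when paths fail only once a zone spans the ISL, a usual first step is to check the ISL and fabric health on both 4100s with the standard FOS CLI commands (a sketch; which counters matter depends on your setup):

```
switchshow     # confirm the ISL ports are online as E_Ports
islshow        # ISL state, speed and bandwidth between domains 3 and 4
fabricshow     # both domains visible, principal switch as expected
porterrshow    # watch for climbing crc/enc_out/disc_c3 counters on the ISL ports
```

Steadily increasing error counters on the E_Ports would point at the link (SFPs/cables) rather than zoning.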

I tested many possible solutions. The last test was:

1. Trace which path has the problem (Solaris 8 HBA1; Hitachi HBA1)
2. Shut down the host
3. Create zoning so the disks are visible through that one path only
4. Boot to single-user mode
5. Check whether the path is available

And it was not available.
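In single-user mode, a couple of standard Solaris commands can confirm whether the LUNs are actually reachable on that path (a sketch; controller numbers and device paths will differ on your host):

```
luxadm probe                 # force a fresh probe of FC-attached devices
cfgadm -al -o show_FCP_dev   # list FCP targets/LUNs seen behind each fc-fabric controller
```

If the target WWPN does not even appear under the controller here, the problem is below the multipathing layer.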

In the logs I have entries like:

Aug 20 10:14:46 HOSTNAME qlc: qlc(2): ql_status_error: Port Down Retry=29h, d_id=40d00h, lun=0h, count=8

Aug 20 10:14:46 HOSTNAME qlc: qlc(2): ql_send_logo: Received LOGO from = 40d00h

Aug 20 10:14:46 HOSTNAME qlc: qlc(2): ql_unsol_callback: sending unsol logout for 40d00h to transport

Aug 20 10:14:46 HOSTNAME qlc: qlc(2): ql_login_port: d_id=40d00h, loop_id=3h, wwpn=50060e80141a3710h
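As a side note, the d_id in these qlc messages is the 24-bit Fibre Channel address identifier of the remote port, which splits into domain/area/AL_PA bytes. A small illustration (plain Python, not part of any tool):

```python
def decode_fc_did(did: int):
    """Split a 24-bit FC address identifier into (domain, area, al_pa)."""
    domain = (did >> 16) & 0xFF   # top byte: switch domain ID
    area = (did >> 8) & 0xFF      # middle byte: area/port on that switch
    al_pa = did & 0xFF            # low byte: loop address (0 for fabric N_Ports)
    return domain, area, al_pa

# d_id=40d00h from the log above:
print(decode_fc_did(0x40D00))  # -> (4, 13, 0)
```

So the LOGO is coming from a port on domain 4 (area 0x0d), i.e. the storage port reached through the second switch of the new fabric.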

I found an article recommending contacting the storage vendor, so I opened a case with them, but they say the zoning is correct and the storage is correct.

Has anyone had a similar case in the past?

Could it be a FOS problem? I have another environment migrated from 3800 -> 4100 on FOS 5.1.0a and everything works well there.

Best Regards,

Kris                   

Super Contributor
Posts: 425
Registered: ‎03-03-2010

Re: FC switches migration (3800 -> 4100)

It is not clear to me why your paths go offline.

Have you checked the Brocade compatibility matrix? You may have to upgrade the HBA driver. If the Hitachi people say the zoning is correct, then it should be. You may also have to upgrade the multipathing software on the server; ask HDS for the latest multipathing software.

<<<< At the beginning I saw all paths as expected (Solaris 8/9; Veritas multipathing on one host and OS-level MPxIO on the second).

But after some time half of the paths were lost. When I restrict zoning so devices stay within their own domain ID I have no problems, but when I create zoning that uses the ISL between the FC switches the path problems start. >>>>

What do you mean by separating the zoning? Do you do port-based zoning or NS-based?

Solaris HBA / Hitachi HBA? Hitachi does not have any HBA.

It is also not clear which problem you have: is it with the disk drives, or with the tape drives too, as you mentioned? Have you done tape zoning with one HBA and storage zoning with another HBA?

New Contributor
Posts: 2
Registered: ‎09-03-2010

Re: FC switches migration (3800 -> 4100)

Hi

"Have you checked the Brocade compatibility matrix? You may have to upgrade the HBA driver. If the Hitachi people say the zoning is correct, then it should be. You may also have to upgrade the multipathing software on the server; ask HDS for the latest multipathing software."

The drivers have not been updated. As I mentioned, we have two similar environments: in the one running FOS version 5 there is no problem, but in the one running version 6 there is, so maybe that is the issue. It is the same with the multipathing software: some hosts use Veritas and some use OS multipathing, and there is no issue at the other site.

"What do you mean by separating the zoning? Do you do port-based zoning or NS-based?"

It means that I changed this zoning:

zone: host_solaris9_global
        50:06:0e:80:14:1a:32:00 -> Hitachi port (domain 3)
        20:00:00:e0:8b:8a:58:ce -> Solaris 9 HBA1 (connected to domain 3)
        20:00:00:e0:8b:8a:72:73 -> Solaris 9 HBA2 (connected to domain 4)
        50:06:0e:80:14:1a:38:10 -> Hitachi port (domain 4)

to:

host_solaris9_zone1
        50:06:0e:80:14:1a:32:00 -> Hitachi port (domain 3)
        20:00:00:e0:8b:8a:58:ce -> Solaris 9 HBA1 (connected to domain 3)

host_solaris9_zone2
        20:00:00:e0:8b:8a:72:73 -> Solaris 9 HBA2 (connected to domain 4)
        50:06:0e:80:14:1a:38:10 -> Hitachi port (domain 4)

So it is NS (WWN) zoning, with devices kept local to domain ID 3 and domain ID 4 and no ISL used. For the tapes, however, the zoning is mixed (WWNs plus domain,port members):

zone: host_solaris8_global
        50:06:0e:80:14:1a:30:00 -> Hitachi port (domain 3)
        20:00:00:e0:8b:8a:50:ce -> Solaris 8 HBA1 (connected to domain 3)
        20:00:00:e0:8b:8a:70:73 -> Solaris 8 HBA2 (connected to domain 4)
        50:06:0e:80:14:1a:38:10 -> Hitachi port (domain 4)
        3,4 -> tape (domain 3, port 4)
        4,4 -> tape (domain 4, port 4)

These are only two of the hosts; the other hosts have the same problems, also with tapes. So the issue is with all devices connected through the fabric (domain ID 3 and domain ID 4).
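One way to test reachability across the ISL independently of the hosts is fcping from the FOS CLI, if your FOS build includes it, between a zoned HBA and storage port pair (WWNs below are the ones from the zones above):

```
fcping 20:00:00:e0:8b:8a:58:ce 50:06:0e:80:14:1a:38:10
```

If fcping succeeds while the host still loses the path, that would point away from fabric routing and toward the HBA driver or the device login handling.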

I do not know what more I can check; on the old switches it just worked. My idea is to move all the FC-connected devices back to the old environment and see whether the situation clears up there. If it does, then the issue is with the new FC switches' software, hardware, or drivers.

So the problem is with paths to any device in the fabric, and it is random: I never know which path will become unavailable next.

Regards,

Kris

Super Contributor
Posts: 425
Registered: ‎03-03-2010

Re: FC switches migration (3800 -> 4100)

From your data it seems the server and storage are localized and the NS zoning is fine. Check the HBA driver version, or you may have to do a FOS upgrade.

Compatibility should be checked for both the multipathing software and the firmware/driver of the HBAs. I do believe the problem will be resolved.

As for reverting: that is always an option, rolling back the changes, but in this case the real cause should be found. What is HDS asking you to change now? Did they find any clue?

Please do let me know.
