Fibre Channel (SAN)

Occasional Contributor
Posts: 7
Registered: ‎04-11-2011

Physical Fabric Layout Design

I have a couple of fabrics that I have just cleaned up logically (soft zoning and standardized naming throughout), and now I want to improve the physical layout.

Each fabric is made up of the following:

Switches:

Brocade 4100 x 2

Brocade 4024 x 1 (HP blade chassis modules)

Storage:

EVA 8000 x 2

MSA G3 x 1

Various Tape Libraries

At the moment targets and initiators are mixed on the 4100s, initiators only are on the 4024, and there are some ISLs but not a full mesh.

My thought is to start by creating a full mesh between the switches and then splitting the host ports and the target ports across switches, so that I still have connectivity to all storage targets even if an ISL fails. However, all of the diagrams I have seen show targets consolidated on a core and initiators on an edge. Should I consolidate like ports or disperse them?

Thanks,

George

Occasional Contributor
Posts: 8
Registered: ‎08-18-2011

Re: Physical Fabric Layout Design

For your environment, which is a pretty small one, you need not bother about "core" and "edge" in either sense. (There is certainly no room for a proper "core fabric" there.)

First, you need to consider your limitations:

- you (most likely) do not have a port trunking license (no real need anyway)

- your BladeSystem switches have (at most, if fully licensed) 8 external ports; you need 4 of them for ISLs (exactly enough for dual DPS), so nothing is really free there

- you have three CONDOR ASIC based switches:

     - they are pretty much equal as far as performance goes, with a slight edge to the 4024 since (I presume) fewer of its ports are in use

All this considered, here's a straightforward way to wire the 4024 (it has the most constraints, so it should be your starting point):

16* blade

2* 4100_1 (DPS - see http://community.brocade.com/message/15287)

2* 4100_2  (DPS)

2* spare (for ISL repair if need be); can otherwise be used for non-critical storage/tapes dedicated to the blades, etc.

2* free -> good candidates for tapes/storage dedicated to the blades

4100_1:

2*4024 (DPS)

2*4100_2 (DPS)

1* spare (for ISLs)

4100_2:

2*4024 (DPS)

2*4100_1 (DPS)

1* spare (for ISLs)

For the ISL mesh, err, triangle, FSPF (Fabric Shortest Path First) will fit well.
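
Once the triangle is cabled, you can sanity-check how FSPF sees it from any of the switches with the standard CLI; a rough sketch (the output will of course differ in your fabric):

     fabricshow      # all three switches should be listed, each with its own domain ID
     islshow         # run on a 4100: it should show ISLs to the other 4100 and to the 4024
     topologyshow    # the FSPF view: every destination domain should be one hop away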

Now, about port allocation:

storage:

EVA A goes 4100_1_F0 and F1

EVA B goes 4100_2_F0 and F1

If you have more ports, double that.

Tapes - if possible, arrange them in groups of approximately the same (load) size, and keep each group together with its initiators, always on the same switch (per fabric).

The same goes for hosts: doing round robin makes little sense in this environment (unless the hosts have 4 ports' worth of HBAs); just make sure each host is connected to the same "side" that its preferred EVA/MSA/tape/whatever controller is connected to.

Basically, get as good/fast ISLs as possible within your constraints (presuming no trunking) and try to load them as little as possible. Local, intra-switch traffic is basically wire speed.
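
And if you want to keep an eye on how loaded the ISLs actually get, the built-in counters are enough; a sketch (run on each 4100):

     portperfshow    # live throughput per port - watch the ISL ports
     porterrshow     # error counters - rising crc/enc-out on an ISL port usually means a bad cable or SFP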

Aside from that: if you are going for new pizza boxes, always prefer two single-port HBAs to one dual-port HBA.

Message was edited by minosi: mixed it up with the BC :) - of course, 8 external ports max per 4024 - the question remains how many are enabled...
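
You can check how many are actually enabled on yours, something like this (a sketch):

     licenseshow     # look for a Ports on Demand license
     switchshow      # lists every port and its state - count the enabled external ones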

Occasional Contributor
Posts: 7
Registered: ‎04-11-2011

Re: Physical Fabric Layout Design

I do have trunking licenses, so I will enable it and use two links per switch pair in a fully redundant triangle as you suggest. Do I have to enable FSPF? Does that get enabled on the trunks only, or switch-wide?

As for my Port allocation:

My EVAs have 4 ports each per fabric, and hosts access them from all over the fabric. Which of the options below is preferable?

A) Keep all host ports for each EVA on one switch in the fabric?

     EVA1 C1P1 -> 4100_1

     EVA1 C2P1 -> 4100_1

     EVA1 C1P3 -> 4100_1

     EVA1 C2P3 -> 4100_1

     EVA2 C1P1 -> 4100_2

     EVA2 C2P1 -> 4100_2

     EVA2 C1P3 -> 4100_2

     EVA2 C2P3 -> 4100_2

B) Split EVA host ports between switches?

     EVA1 C1P1 -> 4100_1

     EVA1 C2P1 -> 4100_1

     EVA1 C1P3 -> 4100_2

     EVA1 C2P3 -> 4100_2

     EVA2 C1P1 -> 4100_1

     EVA2 C2P1 -> 4100_1

     EVA2 C1P3 -> 4100_2

     EVA2 C2P3 -> 4100_2

Occasional Contributor
Posts: 7
Registered: ‎04-11-2011

Re: Physical Fabric Layout Design

I also have Virtual Connect gateways going to some other chassis. In case you don't have experience with them, these are gateways that use NPIV and WWN masking. Behind those gateways are also some VMs running on VMware, which may eventually use NPIV as well.

Should I cable them to the same switch or separate switches in the fabric? 

Occasional Contributor
Posts: 8
Registered: ‎08-18-2011

Re: Physical Fabric Layout Design

First, I would STRONGLY suggest you take a look at IBM's Introduction to SANs redbook (SG24-5470; a bit of an oldie, but a goodie).

Second, look up (at least) "Managing Trunking Connections" in your FOS Administrator's Guide.

If you have trunking and available ports, then you should go for four-port ISLs in a triangle. If you _really_ need to connect some fast dedicated gear to the BladeSystem, then three-port ISLs.

Brocade's trunks are a godsend for performance; cherish them.
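
To your FSPF question: FSPF is always on in the fabric; there is nothing to enable, neither per trunk nor per switch. Trunking should kick in more or less by itself once the license is there; roughly, you would check and enable it like this (a sketch only - check the exact syntax for your FOS level, ports 0/1 are placeholders for your actual ISL ports, and trunked ports must sit in the same port group and run at the same speed):

     licenseshow               # confirm the Trunking license is installed
     portcfgtrunkport 0, 1     # enable trunking on port 0 (repeat for each ISL port)
     portcfgtrunkport 1, 1
     trunkshow                 # once cabled, the ISL ports should appear as one trunk group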

As for the EVA ports, you will never saturate 4(!) 4G ports with a single host ... unless you are CNN, of course, and in that case you would already know it. heh

Generally there are two approaches:

1) Best bang for the buck => go for as few hops as possible, even sacrificing a bit of bandwidth; this is essential when port/HBA/switch resources are limited.

     - advantage: great performance can be achieved for little money, if properly configured

     - disadvantage: inflexible (constant changes needed, which cost time and downtime)

2) Get fast-enough ISLs so that you can pool resources; you sacrifice (a little) latency and bandwidth, but gain great flexibility and all the benefits of a pool of resources.

     - advantage: flexible

     - disadvantage: requires proper investment

Given that you have trunking, I presume port counts are not a big issue. Thus you should go for flexibility => "connect everything everywhere", i.e. option B).

This also requires manually balancing dual-port hosts between switches, or going to four-port hosts.

If servers have 2 dual-port HBAs, always try to get 4 paths to a given volume - 2 per controller, 2 per fabric. No more. Fewer if MPIO causes trouble. You really do not want multipathing wars (32 paths, etc.) on your shoulders.

But really, spend a week reading and many things will come out of the darkness.

Occasional Contributor
Posts: 8
Registered: ‎08-18-2011

Re: Physical Fabric Layout Design

Re: another chassis

Well, AFAIR those "Virtual Connect gateways" are actually FC switches ... so you do indeed have more than 3 switches per fabric.

For now, I would connect the other chassis the same way as the first - if no trunking, then DPS; just remember it is not a good idea to go beyond 2 ports per DPS group (as opposed to trunks).

Basically, choose the most powerful and flexible switches (so far that looks like the 4100s) as the "centres" of a star-like topology.

If you want further detail, you should really consult a specialist ... or experiment a bit.

Occasional Contributor
Posts: 7
Registered: ‎04-11-2011

Re: Physical Fabric Layout Design

OK, I'm going to go with my option B) and your approach 2), then.

So given a 1-port host and a 4-port EVA per fabric, I should only allow 2 paths per fabric, for a total of four? The best way to do that would be with zones that restrict the host to just 2 of the 4 target ports. In the past I have zoned all of those ports, but I can easily remove the ports that are furthest away (most hops).

Occasional Contributor
Posts: 8
Registered: ‎08-18-2011

Re: Physical Fabric Layout Design

Best zoning (outside really big environments) is point-to-point. Single initiator, single target.

You might want to group some targets, but on a small SAN I do not really see the benefit - you will not really save any work, and you lose a bit of visibility.

On very big SANs it is commonly done for (smallish) performance and mostly scalability/maintainability/bureaucracy/you-name-it reasons.
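
In CLI terms a single-initiator/single-target zone is just a few lines; a sketch (the aliases, WWPNs and config name are made-up examples, and cfgadd assumes the config already exists - cfgcreate otherwise):

     alicreate "host1_hba0", "10:00:00:00:c9:aa:bb:cc"
     alicreate "EVA1_C1P1", "50:00:1f:e1:00:11:22:33"
     zonecreate "host1_hba0__EVA1_C1P1", "host1_hba0; EVA1_C1P1"
     cfgadd "fabricA_cfg", "host1_hba0__EVA1_C1P1"
     cfgsave
     cfgenable "fabricA_cfg"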

Occasional Contributor
Posts: 7
Registered: ‎04-11-2011

Re: Physical Fabric Layout Design

No, the VC modules I am using act more like a pass-through module. They log in using up to 4 uplink ports, which in turn show up as F-Ports on the switches. They use NPIV to allow traffic to reach the blades behind them.
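
On the switch side the extra logins are easy to see; roughly (port 4 is just an example):

     switchshow      # the VC uplink shows as an F-Port carrying multiple NPIV logins
     portshow 4      # lists every WWN logged in through that uplink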

I'm going to stick with the mesh I've had running since we started this thread.

Thanks for all of the help; I gave you a best answer on one of your previous replies.

Occasional Contributor
Posts: 7
Registered: ‎04-11-2011

Re: Physical Fabric Layout Design

I've got one path out of 8 that isn't showing up in Windows, but it does show up in ESX.

Could that be the multipathing wars you mentioned?

Any thoughts on how to troubleshoot?

I have enabled IOD, changed the routing policy to port-based, and disabled DLS. Could that be my headache?
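
For reference, this is roughly what I changed (from memory, so the syntax may be slightly off):

     switchdisable
     aptpolicy 1        # 1 = port-based routing (default is 3, exchange-based)
     dlsreset           # disable Dynamic Load Sharing
     iodset             # enforce in-order delivery
     switchenable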

Thanks,

George
