Fibre Channel (SAN)

se
New Contributor
Posts: 4
Registered: ‎09-07-2013

Design guidelines two c7000

First of all, I’m a FC novice but want to learn

We have one HP C7000 blade system with 2 x Brocade 4Gb SAN switches, currently utilizing 12 ports for an MSA2000 and an EVA6400. We want to add an additional C7000 with 2 x Brocade 8Gb SAN switches and a 3PAR 7400 to the mix.

I guess we need two external switches. What about the HP StorageWorks 8/24 SAN Switch?

What would be the best design, topology-wise?

I have read and tried (perhaps failed) to understand some of the techniques and concepts in the HP SAN reference guide. The Access Gateway mode of the blade switches sounds to me like the way to go, connecting all the storage to the external StorageWorks 8/24 SAN switches. If this is correct, I can't wrap my mind around a procedure to migrate without downtime.

Avoiding downtime is paramount!

I hope this is not too confusing and thank you all in advance.

/Søren

Valued Contributor
Posts: 931
Registered: ‎12-30-2009

Re: Design guidelines two c7000

We have one HP C7000 blade system with 2 x Brocade 4Gb SAN switches, currently utilizing 12 ports for an MSA2000 and an EVA6400. We want to add an additional C7000 with 2 x Brocade 8Gb SAN switches and a 3PAR 7400 to the mix.

I guess we need two external switches. What about the HP StorageWorks 8/24 SAN Switch?

For scalability in the future I would indeed use external switches.

Which one depends on your needs; the 8/24 (B300) is a great entry switch, but it doesn't have redundant PSUs.

If that's important to your organization, you need to consider other models.

Also, the B300 is an 8G platform, while its Gen5 (16G) successors, such as the B6505, are already on the market. Here you have to choose your SFPs with care, as a 16G SFP can only do 16/10/8 Gb (AFAIK), which leaves out 4G devices.

Have your HP rep in on this with the functional requirements and the technical details of your setup.

What would be the best design topology wise?

If your storage is moving to the external switches, a redundant cascaded fabric would be enough.

It's explained in http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00403562/c00403562.pdf on page 26

A crude drawing is attached; each connection could be more than one ISL, depending on your needs and config.
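Once the cascade is cabled, the ISLs can be checked from the Brocade FOS CLI. A sketch (exact output varies with your FOS version):

```
switchshow    (lists all ports; the ISLs appear as E_Ports)
islshow       (one line per ISL: neighbour switch, speed, bandwidth)
```

If an expected ISL is missing here, the fabrics did not merge and you'll want to check the merge-related settings before going further.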

I have read and tried (perhaps failed) to understand some of the techniques and concepts in the HP SAN reference guide. The Access Gateway mode of the blade switches sounds to me like the way to go and then connect all the storage to the external StorageWorks 8/24 SAN Switches. If this is correct, I can’t wrap my mind around a procedure to migrate without downtime.

An Access Gateway removes the management burden of the bladed switches, but you have to set up and maintain the AG map as needed.

Again, AG or not depends on your organizational needs.

Migrating your existing enclosure to the sketched setup is done one fabric at a time, and requires all attached hosts to have a properly configured multipath solution, and zoning based on WWPN.
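WWPN zoning on a Brocade switch looks roughly like this from the FOS CLI. A sketch only: the alias names, config name and WWPNs (the xx parts) are placeholders for your own values, and it assumes the config "prod_cfg_a" already exists (use cfgcreate the first time):

```
alicreate "blade01_hba1", "10:00:00:05:1e:xx:xx:xx"
alicreate "eva_ctrlA_fp1", "50:00:1f:e1:xx:xx:xx:xx"
zonecreate "z_blade01_eva", "blade01_hba1; eva_ctrlA_fp1"
cfgadd "prod_cfg_a", "z_blade01_eva"
cfgsave
cfgenable "prod_cfg_a"
```

Because the zones reference WWPNs rather than switch ports, the zoning keeps working when a host or storage port is recabled to a different port during the migration.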

In general, what I would do:

1. Connect one B300 to your existing enclosure (zoning information is transferred to the switch)

2. Move the storage connections to that switch

3. If Access Gateway is required, configure it at this point

4. Check the paths on your hosts

5. If OK, move on to the next switch and repeat the steps

Now you've reached a point where all existing storage has moved to the external switches and you can hook up your other gear.
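For step 3, enabling AG mode on an embedded Brocade switch is roughly the following. A sketch only - read the AG Admin Guide first, because enabling AG mode clears the switch's own zoning config and reboots it:

```
switchdisable
ag --modeenable     (switch reboots into Access Gateway mode)
ag --modeshow       (confirm AG mode is active)
ag --mapshow        (view the default F_Port-to-N_Port map)
```

For step 4, on a Linux host "multipath -ll" shows whether all paths have come back; other operating systems have their own multipath status tools.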

If you went for AG, your new enclosure switches need to be set up similarly.

I left out the details about the settings needed for a successful merge, or how to configure your AG.

Have your HP rep help you with this (you'll probably have to buy it as a service, e.g. "install and configure" or similar).

If you're willing to learn, take a look at the Access Gateway Admin Guide and the FOS Admin Guide.

se
New Contributor
Posts: 4
Registered: ‎09-07-2013

Re: Design guidelines two c7000

Thank you a lot for your answers, they were very helpful :-)

In the end, I will probably buy installation from HP if I'm not comfortable enough, but I want to understand!

You speak of "organizational needs" regarding AG or not. Can you elaborate?


I have skimmed through the AG Admin Guide and it's more complicated than I imagined. From the SAN reference guide I got the impression that AG was an easy way to minimize maintenance of the blade switches, and maybe it still is :-)


The default mapping of ports in AG I can more easily understand, but I am concerned about performance.

If I utilize 4 uplinks (N_Ports) on each blade switch with 16 blade servers, the ratio per port is 1:4 (four servers per uplink). I could improve this by utilizing more ports (12 more across all, which would fill the 8/24 switches 100% - I've attached a drawing).


As I understand it, the mapping to N_ports is fixed and no load balancing occurs?


Maybe port-based mapping is NOT the way to go?


If the blade switches are operating in "switch mode", is load balancing of less concern?


Thanks


FC design.jpg

Valued Contributor
Posts: 931
Registered: ‎12-30-2009

Re: Design guidelines two c7000

You speak of "organizational needs" regarding AG or not. Can you elaborate?


Basically, this boils down to the functional requirements of your fabric/SAN design.

"Is our admin skill set sufficient for day-to-day operations?", for instance.

Or "the SAN should be an nSPOF (no single point of failure) configuration".

Or "C7000 enclosures should be identical in HW/SW and setup, and the fabric/SAN should be scalable to fit 300 enclosures", etc.


The last one would, for instance, hit a limitation on how many switches can participate in a fabric (although I'm not sure about the current maximum number), thereby forcing you to set those embedded switches to AG mode.

I have skimmed through the AG Admin guide and It’s more complicated that I imagined. From the SAN reference guide I got the impression that AG was an easy way to minimize maintenance from the blade switches, and maybe it still is :-)


The most-heard reasons for AG mode are:

-Reduction of firmware maintenance

-Hitting technical limits in the Fabric

-Compatibility with other vendors

-License reduction

The default mapping of ports in AG I can more easily understand, but I am concerned about performance.

If I utilize 4 uplinks (N_Ports) on each blade switch with 16 blade servers, the ratio per port is 1:4 (four servers per uplink). I could improve this by utilizing more ports (12 more across all, which would fill the 8/24 switches 100% - I've attached a drawing).

Do you have performance metrics to (dis)prove that you'll get performance issues with fewer uplinks?

My current shop uses 3 or 4 uplinks, but they are rarely fully utilized. A previous shop I worked at used only one uplink per switch, which worked for them, although for redundancy I would have used two uplinks.

Bottom line: get performance data for your environment if you don't have it already. SAN Health can collect performance metrics over a period of time to get you started quickly.
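To put a number on the ratio question, oversubscription is just aggregate potential host bandwidth divided by aggregate uplink bandwidth. A small sketch - the 16-server/4-uplink figures are from this thread, the 8 Gb speeds are assumptions, and real utilization from measurements matters far more than this theoretical ceiling:

```python
def oversubscription(servers: int, uplinks: int,
                     server_gbps: float, uplink_gbps: float) -> float:
    """Potential host bandwidth divided by available uplink bandwidth."""
    return (servers * server_gbps) / (uplinks * uplink_gbps)

# 16 blades at 8 Gb behind 4 uplinks at 8 Gb -> 4.0, i.e. 4:1
print(oversubscription(16, 4, 8.0, 8.0))

# 4 more uplinks per switch halves the ratio to 2:1
print(oversubscription(16, 8, 8.0, 8.0))
```

Whether 4:1 is a problem depends entirely on what the servers actually push, which is exactly what the performance data will tell you.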


As I understand it, the mapping to N_ports is fixed and no load balancing occurs? 


Incorrect, and it depends on your view of load balancing.

The AG map can be changed to fit your needs, but by default it maps two internal ports to one external port (AFAICR).

In the default setup, load is balanced manually by mapping F_Ports to N_Ports. However, if you mean load balancing as in an ISL trunk, then the answer is "it depends". Brocade has a feature called F_Port trunking, but I cannot remember if this was for HBAs/CNAs only or if switches in AG mode can use it as well. That said, if it does exist for switches, it will probably cost you a license to enable the feature.
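Changing the map is done per port from the FOS CLI on the AG-mode switch. A sketch - the port numbers are examples, and the exact syntax should be checked against the AG Admin Guide for your FOS version:

```
ag --mapshow               (show the current F_Port-to-N_Port map)
ag --mapdel "0" "5"        (remove F_Port 5 from N_Port 0)
ag --mapadd "17" "5"       (map F_Port 5 to N_Port 17)
```

This is the "manual load balancing" mentioned above: you decide which internal server ports ride on which uplink.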

Maybe port-based mapping is NOT the way to go?


With switches in AG mode, port-based zoning still works, but it is definitely not the way to go.

It actually has nothing to do with AG mode, but more with "do I expect more than one WWPN per port?".

So if you have ESX or Compellent storage with NPIV enabled, then you'd better use WWPN zoning.

In today's setups you'll mostly find WWPN zoning.


If the blade switches are operating in "switch mode", is load balancing of less concern?


If you refer to load balancing across the ISLs, it would depend on things like:

- how many ISLs are in place

- how the routing policy is set

- whether there is a trunking license

But yes, usually "it just works".
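Both of those can be inspected from the FOS CLI. A sketch (AFAIK exchange-based routing is the default on current FOS versions):

```
aptpolicy      (shows the routing policy; 1 = port-based, 3 = exchange-based)
trunkshow      (shows which ISLs have formed trunk groups, if licensed)
```

With exchange-based routing and/or trunking in place, traffic spreads over the ISLs without manual intervention, which is why it usually "just works".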


