02-25-2011 12:37 PM
I have what I think is a very basic question, but not being an expert on SAN fabric design, I have not yet been able to give my business a proper answer.
I am working on a data-center project to deliver a "highly available," "mission-critical" infrastructure to a number of manufacturing facilities. These sites are inherently small (on average fewer than 100 blade servers per site) with no interconnection between locations. The original design of the SAN fabric has proven not to be scalable to the extended scope of the project, and my group is charged with recommending a new design. I am not a SAN expert by trade, but I have been given the task of putting this together. The previous design was essentially a series of HP Blade Enclosures with Brocade switch modules, each chained to the next and finally connecting to the disk array. This was a bad design from the start, and it has finally been addressed now that added capacity exceeded the maximum hop count.
There are practical and budgetary limitations to this design, so I worked with the SAN experts at HP, and we all agreed that the best method for this scenario would be to deploy two Brocade 5100 switches in each data center (one devoted to each of the two SAN fabrics). Each blade enclosure would connect to the 5100 via two trunked ISL links. This would at least create a dedicated core switch for each fabric, protected by redundant power supplies. While this may not be the ideal method, it was universally agreed that, given the constraints, it was the best fit.
The problem is that, as part of the initial deployment, each data center purchased two Brocade 300 switches. These switches were chained off of the blade infrastructure and provided connectivity to non-blade servers and our tape drives.
The business feels that these would be perfectly appropriate switches to serve as the core for the mission-critical environment. They feel that since there are two fabrics and the systems all run MPIO, there would be no impact if a switch went down. I am not comfortable with this design, but I can't identify any empirical evidence to justify the significant cost. The 300 switches would be able to handle the expected growth from a port-density standpoint (and from what I see, over-subscription is not an issue). So the business sees this as simply paying $70,000 per power supply for something that is already fault tolerant.
I have been searching for some sort of definitive counter argument to outline why the Brocade 300 switches are not appropriate for a mission critical core, but I have come up empty.
Can anyone out there summarize in a couple points why this is a bad idea?
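One way to weigh the dual-fabric argument is a back-of-envelope availability model. A minimal Python sketch follows; the per-switch availability figures are illustrative placeholders I've assumed, not vendor data:

```python
# Back-of-envelope availability model for two independent fabrics with MPIO.
# The per-switch availability figures are assumed placeholders, not vendor data.

def parallel(*paths):
    """Availability when any one redundant path keeps I/O flowing."""
    unavail = 1.0
    for p in paths:
        unavail *= (1.0 - p)
    return 1.0 - unavail

A_300  = 0.999    # assumed: single-PSU Brocade 300
A_5100 = 0.9995   # assumed: dual-PSU Brocade 5100, slightly better

dual_fabric_300  = parallel(A_300, A_300)    # 1 - 0.001**2  = 0.999999
dual_fabric_5100 = parallel(A_5100, A_5100)  # 1 - 0.0005**2 = 0.99999975

print(f"two 300 fabrics:  {dual_fabric_300:.8f}")
print(f"two 5100 fabrics: {dual_fabric_5100:.8f}")
```

With any plausible inputs, the dual-fabric topology pushes both options into the same availability neighborhood, which is essentially the business's argument; the difference shows up in the third or fourth "nine," not the first.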
02-28-2011 01:10 AM
Are you searching for arguments against the Brocade 300 solution? I would agree with the business argument.
Ask yourself: what does the 5100 offer beyond the 300 (other than ports)?
And of the pros you find, how many are you actually going to use?
In my opinion: if they are already there, and if they have enough ports, use them. Although the switch itself has no redundant parts (apart from the SFPs), the setup you've designed is a redundant fabric design.
If you wish, you can eliminate power-feed issues by inserting an STS/ATS (static/automatic transfer switch), which can fail over between feeds.
Also, there are other parts of your design that can fail and are not redundant: you've got just one disk array. It has internal redundancy, but it's still just one array, and if it fails, you're done. If this is mission critical, I would first look at how to remove the disk array as a single point of failure (SPOF).
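The SPOF point can be made concrete with a quick availability sketch. The numbers below are assumptions for illustration only, not measured figures:

```python
# Sketch: a single disk array in series behind redundant fabrics caps the
# whole system's availability. All figures below are illustrative assumptions.

def parallel(*paths):
    """Availability when any one redundant path keeps I/O flowing."""
    unavail = 1.0
    for p in paths:
        unavail *= (1.0 - p)
    return 1.0 - unavail

A_SWITCH = 0.999    # assumed per-switch availability
A_ARRAY  = 0.9995   # assumed availability of the lone disk array

fabric_pair = parallel(A_SWITCH, A_SWITCH)  # MPIO across two fabrics
system      = fabric_pair * A_ARRAY         # series with the single array

# Fabric-pair downtime probability is ~1e-6; the array's is ~5e-4.
# The single array, not the choice of switch, dominates the outcome.
print(f"fabric pair: {fabric_pair:.6f}  system: {system:.6f}")
```

Under these assumptions the fabric pair contributes roughly a thousandth of the downtime risk that the lone array does, so switch spend buys far less availability than removing the array as a SPOF.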
03-04-2011 11:39 AM
One or two HP switch modules per enclosure? I think that's something to watch out for.
It's kind of hard to tell with this little info. How many HP enclosures are we talking about, and how many storage ports?
03-08-2011 01:40 PM
I am using Brocade 300's as mission-critical cores and have no issues. In selecting the 300's over the 5100's, I found a few differences:
Obviously ports, but you can always add another 300 to each fabric if needed, and the feature sets are basically the same.
The 5100's have higher aggregate bandwidth, again due to the extra ports, and two power supplies, which is nice for redundancy's sake versus the single power supply in the 300. But power should not be a deal breaker, since you will be fully redundant across the two fabrics.
I have to agree with the others and say that the 300's will work fine, and you already have them, which is even better.
Hope this helps!