Does size matter in the data center? With Cisco’s launch of the 9718, I am reminded of what it’s like to shop at Costco. You are surrounded by enormous versions of products that you don’t necessarily want or need. Who needs a jar of mayonnaise when you can get a gallon vat of mayonnaise? Why buy the king-sized bag of Cheetos when you can buy the jumbo seven-pound sack? Or why buy six or even twelve rolls of toilet paper when you can buy thirty?
The problem comes when you get home: where are you going to put this stuff? Will you ever use it? Buying bigger seemed right at first, but then come the inevitable regrets that you didn’t buy smarter.
Costco… I mean Cisco added the MDS 9718 to its director product family to address a requirement we simply don’t hear from storage customers. They aren’t asking us to build a larger chassis or massively increase port density. They are asking for technology that enables operational stability, predictable performance, and simple scalability. Cisco didn’t build that… instead, it launched a monolithic chassis (based on a Nexus 7718 IP switch) that delivers high port density at the expense of massive size and unbelievable energy consumption. Fun fact: a fully populated 9718 draws more energy than 11 average US homes.
Ironically, Cisco positioned this as “The Beast.” When I think of that name, the image that comes to mind is a large, out-of-control animal that overcomes obstacles through brute force. In the case of the 9718, it’s a huge, overly complex box that solves a consolidation challenge through massive amounts of hardware (brute-force engineering). As usual, Cisco is overengineering the solution by adding more hardware and focusing on speeds and feeds.
This flies in the face of customers who want simpler, more efficient storage networks. The MDS 9718 delivers 768 ports of 16 Gbps Fibre Channel or 10G FCoE at a high cost: fully populated, it weighs more than 800 pounds, consumes more than 10,000 watts, and won’t fit in most OEM racks. This high-density chassis creates a single large failure domain with more components, more power draw, and more heat, all factors that lead to lower reliability.
On the other hand, Brocade can deliver 1024 16 Gbps ports with two DCX 8510 chassis connected via UltraScale Inter-Chassis Links at less than a third of the power draw, about half the weight, and about the same height. So what is Brocade’s secret to efficient and elegant solutions? We have the best hardware engineering team in the industry. Nothing illustrates this better than a quick comparison of Brocade and Cisco 48-port blades.
Cisco’s blade is bigger, has more components and heat sinks, and is more than twice the weight. All of this adds up to more power drawn and more heat generated; again, these factors lead to higher energy costs and lower reliability.
Cisco introduced RESTful API support as a means of solving manageability challenges. The intention is to enable programmability for configuration, monitoring, troubleshooting, automation, and so on. Brocade enabled RESTful API support 18 months ago through Brocade Network Advisor (BNA). In addition to configuration, BNA is integrated with Fabric Vision to simplify and automate monitoring, diagnostics, and management. Fabric Vision is built into every switch and enables the deployment of 20 years of best practices in a single click, reduces common network problems, and reduces overall management costs.
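To make the programmability idea concrete, here is a minimal sketch of what driving a SAN switch through a RESTful API looks like in practice. The host name, endpoint path, token scheme, and field names below are illustrative assumptions for this post, not documented Brocade or Cisco API calls.

```python
# Hypothetical sketch: build a REST request to read one port's health status.
# Endpoint path, host, and auth scheme are assumptions, not a vendor's real API.
from urllib.request import Request

def build_port_status_request(host: str, token: str, port_id: int) -> Request:
    """Construct a GET request for a single port's status counters."""
    url = f"https://{host}/rest/ports/{port_id}/status"  # hypothetical path
    return Request(url, headers={
        "Authorization": f"Bearer {token}",  # token auth is an assumption
        "Accept": "application/json",
    })

req = build_port_status_request("switch.example.com", "secret-token", 7)
print(req.get_method(), req.full_url)
```

The point isn’t this particular call; it’s that once the switch exposes a REST interface, monitoring and configuration become scriptable with ordinary HTTP tooling instead of CLI screen-scraping.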
Cisco also claimed that this is the industry’s first 32 Gbps-ready SAN director… apparently forgetting that it made the same claim for the MDS 9710, launched in 2013, and the MDS 9706 in 2014. Bottom line: Cisco has not launched 32 Gbps blades, and being “future ready” does not equate to product availability. Brocade has been first to market with every generation of Fibre Channel. The last time Cisco said it was committed to the future of Fibre Channel, customers had to wait more than three years before Cisco launched a 16 Gbps product.
In another perplexing move, Cisco also launched a 24-port 40G FCoE blade. Keep in mind, this is the same FCoE that Gartner declared obsolete in the 2015 Hype Cycle for Storage Technologies. The good news is I get to use the meme that keeps on giving…
The blade is intended for ISL connections between directors, 40G FCoE connections from Cisco UCS blade servers, and 40G FCoE connections between hosts and targets. However, in the ISL use case, the 40G FCoE blades consume slots, ultimately sacrificing Fibre Channel ports that could be used for hosts and storage. Brocade launched dedicated 64 Gbps UltraScale Inter-Chassis Links back in 2011 to enable massive bandwidth between chassis without sacrificing front-facing ports. On the end-to-end FCoE connectivity use case, Cisco misses the mark entirely, as host connectivity is moving to 25G and FCoE targets are almost nonexistent.
So to summarize: Cisco has launched a Costco-sized monolithic chassis that has more components, consumes more energy, and weighs nearly half a ton. Bigger clearly isn’t always better. Stay tuned as I will go through more detailed comparisons in future blogs.