As we get closer to having a standard for 100Gbps Ethernet, still a year out, both industry interest and vendor posturing are increasing. One vendor recently announced the availability of 100Gbps Ethernet, a seeming PR coup for sure, but in practice an impractical implementation due to simple economics. At $700K per port for a 100-meter MMF connection, who's buying? Brocade-enabled aggregation of as many as 32 10-gigabit-per-second (Gbps) links, acting together as a single 320Gbps pipe at a list cost of $4K per port, makes far more economic sense today, and will for a long while to come.
The 100Gbps standard is being developed as a follow-on to 10Gbps Ethernet. For a long time, its primary proponents were the telcos and others in the service-provider world, who need much higher backbone speeds for aggregating user traffic. The emerging deployment of 10Gbps Ethernet NICs on multi-core, high-performance servers is beginning to drive demand for higher-speed interconnects in the data center as well. With users and servers moving from 1Gbps to 10Gbps, 100Gbps Ethernet seems a natural network interconnect replacement, and it will be.
The vendor-stated need for 100Gbps Ethernet is not only ahead of the standards; introducing proprietary, pre-standard implementations built from prototype parts is also an expensive proposition. At $700K per port, you can see how this becomes problematic in almost every imaginable real-world scenario. A "me-first" PR win, yes, but it's also a credibility-diluting parlor trick in customers' eyes. The IEEE 802.3 process for defining new Ethernet speed standards takes a minimum of four years, so the technology has historically followed real-world demand. This was true as Ethernet went from 10 megabits per second (Mbps) to 100Mbps to 1Gbps to 10Gbps, and it is proving true for 100Gbps Ethernet as well. The solution has been, and will continue to be, aggregating multiple links into a logical big pipe, an interim approach until a cost-effective implementation of higher-speed Ethernet is available.
Link aggregation provides a big pipe between devices by distributing traffic across multiple links; aggregating ten 10Gbps Ethernet links, for example, yields a 100Gbps interconnect. That's not exactly a true 100Gbps solution, though. 100Gbps of traffic behaves differently arriving in a single pipe than it does arriving in ten 10Gbps pipes; a single pipe is, by definition, more efficient and simpler to manage. We at Brocade looked at existing aggregation techniques and refined them to achieve much better utilization of the links and a closer approximation of a true 100Gbps link.
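To make the distribution idea concrete, here is a minimal sketch of the generic hash-based link selection commonly used in aggregation schemes: the switch hashes a flow's header fields so every packet in a flow lands on the same member link, preserving per-flow packet order. This is an illustrative example only, not Brocade's patent-pending algorithm; the function name and field choices are assumptions.

```python
import zlib

def select_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                num_links: int) -> int:
    """Pick a member link for a flow by hashing its header fields.

    Because the hash is deterministic, all packets of one flow map to
    the same link, so packets within a flow are never reordered. Flows
    spread across links only statistically, which is why ten 10Gbps
    links do not behave like one true 100Gbps pipe.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % num_links

# Every packet of this flow uses the same link out of ten:
link = select_link("10.0.0.1", "10.0.0.2", 49152, 80, 10)
```

One consequence visible in the sketch: a single elephant flow can never exceed one member link's capacity, which is exactly the efficiency gap that smarter aggregation tries to close.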
Brocade’s enhanced aggregation technology, called Carrier Trunk, is what enables the change in behavior from 10 separate pipes to something approaching a single pipe. This technology, and the algorithms behind it, is where the intelligence lies, and where Brocade has industry-leading advantages. Passing traffic over multiple hops via aggregated links is complicated. Brocade’s embedded, patent-pending algorithm has been tested and validated for efficiency and balance by many tier-one providers as well as the large cloud services providers.
ONLY the Brocade NetIron XMR and MLX allow users to aggregate up to 32 10Gbps links today. So, while we and others continue to work to develop the standard and drive down the cost of truly useful 100Gbps technology, Brocade, unlike others, has a current and usable solution. This is in keeping with two of our core principles: lowering costs for our customers, and providing the most advanced, and stable, technology choices to meet their needs.