on 08-24-2013 04:32 AM - last edited on 10-28-2013 01:27 PM by bcm1
The movie Top Gun with Tom Cruise made this line shorthand for high performance with no compromise.
As proof of the need for speed in Ethernet networks, the IEEE announced this week that it has started a new group, the “IEEE 802.3 Industry Connections Higher Speed Ethernet Consensus Group,” to begin work on physical-layer standards that will support 400 GbE and 1 TbE, with a target of 2015, as reported in EE Times.
There are many reasons for this, notably the growth in smart phones, online video access, data analytics (aka Big Data), sensor networks, real-time network telemetry and analytics for public clouds, high-performance computing, and science research activities such as genomics, proteomics, climate modeling, and particle physics. This has led to network bandwidth doubling every 18 months – is that an echo of Moore’s Law I hear?
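To see why that 18-month doubling pushes the standards bodies toward 400 GbE and beyond, here is a quick back-of-the-envelope projection (a sketch of my own, not from the original post):

```python
# Illustrative sketch: project bandwidth demand that doubles every
# 18 months (1.5 years), the growth rate cited above.

def projected_bandwidth(initial_gbps, years, doubling_period_years=1.5):
    """Return projected demand after `years`, doubling every 18 months."""
    return initial_gbps * 2 ** (years / doubling_period_years)

# A link carrying 100 Gbps today sees demand reach 400 Gbps in just
# 3 years (two doubling periods) -- roughly the 400 GbE target window.
print(projected_bandwidth(100, 3))  # 400.0
```

Two doublings in three years is exactly the jump from today’s 100 GbE to the proposed 400 GbE, which helps explain the 2015 target.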
One of Brocade’s customers, CERN, operates the Large Hadron Collider (LHC), and recently announced strong evidence of having found the last particle that completes the Standard Model of particle physics, the Higgs boson. The media affectionately calls this the “God Particle,” but it’s the particle responsible for what we experience as mass, and this is a very significant discovery.
CERN’s experiments collect gigabytes of experimental data PER SECOND, storing 15 PB per year. But that’s just the beginning, as subsequent analysis of the raw experimental data generates many large data sets that are moved among the researchers who need access to them.
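A quick sanity check on those numbers (my own arithmetic, not CERN’s figures) shows what 15 PB per year means as a sustained rate:

```python
# Back-of-the-envelope: average sustained rate implied by 15 PB/year.

PB = 10 ** 15                      # petabyte, in decimal bytes
SECONDS_PER_YEAR = 365 * 24 * 3600

avg_bytes_per_sec = 15 * PB / SECONDS_PER_YEAR
print(f"{avg_bytes_per_sec / 10**6:.0f} MB/s average")  # 476 MB/s average
```

That is nearly half a gigabyte every second, around the clock, before any of the analysis traffic that moves copies of those data sets between research sites.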
The Brocade MLXe Router with 100 GbE line cards is installed at CERN and directly supports the data collection from the experiments. It’s the Top Gun of networking at the top particle physics experiment ever constructed.
Brocade MLXe with 100 GbE Line Cards
If you feel a need for speed and want to see how well the MLXe performs with the 100 GbE line card, drop by the Strategic Solutions Lab forum. We just published a new Validation Test document that confirms our performance:
on 08-23-2013 08:05 AM - last edited on 10-28-2013 01:28 PM by bcm1
Photograph by Don Vu
It is time for VMworld. Once again, IT pains bring 20,000 professionals back to San Francisco looking for answers. But with so many sessions and exhibitors and limited time, how does one find the answer? Well, try these steps.
First, ask - how can you increase efficiency in the current data center to drive additional value from the investment made? At the Brocade booth, you can start by taking matters into your own hands by building your own efficient data center network in 5 minutes.
Second, look. Central to today’s data centers are critical applications using storage connected mostly by SAN fabrics. Users of vCenter Operations Management Suite (vCOPS) should look at Brocade’s Operations Management demo to see how SAN Analytics for vCOPS can help improve operational efficiency in your data center.
Third, chat. You can chat with a Brocade expert in our Validated Solutions area about a range of proven solutions for private clouds that are easy to deploy and manage. Find out how Brocade provides you with best-of-breed choices for validated solutions.
Fourth, move. Move past the big roadblock at the gateway to the next gen data center - the lack of agility. The automation, multi-pathing, and resiliency provided by Brocade’s Virtual Cluster Switching technology make Brocade’s IP networking one of the most capable underlay networks for workload mobility. If you are interested in Hybrid Clouds, you might look at the Brocade Vyatta vRouter and virtual ADC.
Fifth, unite – the critical step to the next gen data center. The concept of the next gen Software-Defined Data Center (SDDC) uses networks to unite compute and storage resources. One of the three SDDC pillars is Software Defined Networking (SDN), which incorporates Network Virtualization. When virtual networks become numerous and complex, and are combined with policies, multi-tenancy, and QoS, the automation, scalability, and resilience of the physical network become critical. At VMworld, you should come to the network virtualization demo area to see these capabilities in action. See how Brocade technology unites compute and storage resources to support data center workloads running across physical and virtual environments.
So the picture is pretty clear. Whether you are looking for higher efficiency today, visibility into your data center fabrics, or the ability to unite your physical and virtual resources for tomorrow – Booth 1513 is the place to go for the answer.
A data center divided cannot stand. So come and see how Brocade can help you unite the physical and virtual to realize the benefits of the next gen data center.
on 08-20-2013 04:48 AM - last edited on 10-28-2013 01:29 PM by bcm1
Lately I’ve found myself explaining a minor paradox fairly frequently, so I thought I’d capture it here for easy bookmarking.
Brocade VCS fabrics were designed with a distributed control plane as well as a logically centralized management construct. The former means all nodes are aware of each other and share information about their health and state, which enables a relatively high degree of autonomous operation of the fabric as a whole. The latter eliminates per-device management, which is clearly advantageous in terms of streamlining both deployment and troubleshooting. The VCS control plane facilitates rapid and consistent policy distribution across the fabric.
Now for the paradox: VCS fabrics are designed to be masterless. This helps ensure resilience in the event of a node failure. Yet the simplicity of centralized management depends on there being a single point from which policy is defined and distributed. Some approaches achieve this via a separate controller and management network, which may present resilience concerns. In VCS Logical Chassis mode, a “principal switch” is assigned by the administrator, and the designation can be reassigned to a different node at any time.
However, this is not Darwinian, Roman Triumvirate-style primus inter pares; rather, the election process more closely resembles a Witenagemot, with a generally understood succession plan being ratified and implemented at need by peer nodes. Here’s how it works: the administrator decides the principal switch should have certain characteristics, for example, hardware HA, large scale, etc. (In fabrics containing VDX 8770s, those devices would be preferred as principal switch candidates. Fabrics with leaf-spine topologies would generally designate a spine switch.) The administrator then assigns the principal switch as well as priority of backup principals based on specific policy parameters. In the event of a principal switch failure, management automatically fails over to the designated successor switch to avoid disruption.
However, the “next of kin” succession can be altered, either by quickly moving down the line of succession if multiple nodes are affected, or by direct intervention by the administrator, for example because upgrades or policy changes affect the preferred type of switch. This flexibility to alter the fabric management scheme at need, generally but not strictly within the confines of a clear, predefined succession process, ensures that fabrics can be tuned and optimized organically without the massive disruption of a 1066-type event.
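The succession mechanism described above can be sketched in a few lines of code. This is a hypothetical illustration of priority-based selection, with made-up switch names; it is not Brocade’s actual implementation:

```python
# Hypothetical sketch of principal-switch succession: the administrator
# assigns priorities, and the healthy node with the best (lowest-numbered)
# priority becomes principal. On failure, selection simply re-runs over
# the remaining healthy nodes, walking down the line of succession.

def elect_principal(priorities, healthy):
    """Return the healthy switch with the lowest priority number.

    priorities: dict mapping switch name -> admin-assigned priority
                (lower number = preferred principal)
    healthy:    set of switch names currently reachable in the fabric
    """
    candidates = [n for n in priorities if n in healthy]
    if not candidates:
        return None
    return min(candidates, key=lambda n: priorities[n])

# The administrator prefers the (hypothetical) VDX 8770 spine switches.
priorities = {"spine-8770-a": 1, "spine-8770-b": 2, "leaf-6740-a": 3}

# Normal operation: the top-priority spine is principal.
print(elect_principal(priorities, {"spine-8770-a", "spine-8770-b",
                                   "leaf-6740-a"}))   # spine-8770-a

# Principal fails: management falls to the designated successor.
print(elect_principal(priorities, {"spine-8770-b",
                                   "leaf-6740-a"}))   # spine-8770-b
```

Reassigning the succession, as the post notes an administrator can do at any time, amounts to editing the priority map: the peer nodes then ratify the new order the next time selection runs.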
There are a number of other interesting aspects of the VCS Logical Chassis construct I haven't touched on here. Please take a look at the Logical Chassis whitepaper, which also goes over the details of zero-touch discovery, simplified firmware updates and other useful features.