Fiber to the home is made possible with non-traditional methods. Local, friendly customer service helps to differentiate from the competition.
The Federal Government is going to the cloud... and your company wants to enable that. A market survey of Federal IT folks spells out the requirements.
In my last blog I wrote about some of the challenges and requirements for Data Privacy in Mobile Provider networks. In this blog I will discuss in more detail the associated use cases for encryption services in the Mobile EPC.
A recent survey shows that security is top of mind for most CIOs these days. But what does that mean for mobile providers, particularly in terms of network security?
The 3rd Generation Partnership Project (3GPP) was established with the charter of defining interoperable mobility standards and is currently on the 4th Generation – Long Term Evolution (LTE). 4G/LTE provides IP data networking over cellular and IP Multimedia Subsystem (IMS) services, such as Voice over LTE (VoLTE).
Software Defined Networking (SDN) has been an industry buzzword for some time. The promise of separating control plane and data plane, and implementing an intelligent controller has created communities of open source developers (e.g. OpenDaylight), standards development in IETF (e.g. I2RS and SFC) and spawned an entirely new approach to networking appliances with NFV to tackle topics such as network programmability and network service chains. Across the landscape, many of the use-cases in these organizations focus on mobile networks. My colleague, and Brocade CTO of Mobile Networks, Kevin Shatzkamer, thinks that SDN could be the single biggest opportunity for operational improvement in mobile networks since the transition from circuit switching to packet switching.
Just about every day there is a press release where “big data,” “real-time” and “analytics” - along with machine learning - are being advertised as breakthrough technologies that will enable business and operational transformation for mobile service providers. While it is true that mobile data analytics create new opportunities, with the notable exception of the nascent introduction of machine learning to the “analytics toolkit,” the technology is far from revolutionary.
Brocade is announcing two innovative interface modules for the MLXe routing platform that provide in-line IPsec and MACsec encryption services at line rate! These new encryption modules are the industry’s first to provide L2 and L3 encryption services in a router chassis with no performance impact.
A short blog on the Layer123 SDN & OpenFlow World Congress and our joint demo with Indiana University/GlobalNOC. We presented an innovative SDN solution that uses OpenFlow for dynamic traffic management in a "Science-DMZ" use case.
I recently returned from IETF 87 and would like to provide a brief synopsis of the event. First, who would have expected the weather to be that hot in Berlin? It hit 96F on Sunday, the first day of the event!
Overall, it was a very well attended event with a very relevant agenda. My first point is that the SDN trend continues to gain interest and acceptance. If you’ve been following IETF activity over the last 12 – 18 months, you know that this trend started off slowly but has gained more traction at each IETF event. The future meeting schedule can be found here, and if you’d like a quick review of the IETF 86 event, you can go here.
The NVO3 WG and SDN RG meetings continue to draw the most relevance to the SDN problem space. However, the L3VPN WG and to a lesser extent, the I2RS & PCE WGs, also have correlated activities worth following if SDN is of interest to you.
So, here are the brief updates, organized by WG/RG.
The SDN Research Group kicked off the Monday morning sessions. This RG is chaired by our Brocade SP CTO, Dave Meyer. Dave started the session with a thought-provoking presentation that set the bar early. The session included some other quite interesting presentations, such as an SDN-enabled IXP, a look at how NFV fits with other IETF SDN activities, and how I2RS and SDN are related (or not).
As it was last time, the NVO3 WG session was *very* well attended. With the advent of cloud computing and private/public/hybrid data-center clouds, this WG is where most of the activity is taking place in terms of defining the data-center virtualization problem space, its requirements and, eventually, the solution space.
The NVO3 architecture team provided an update on their activities. Brocade Principal Engineer, Jon Hudson, is part of this architecture team. This was quite an interactive session. The team recommended promoting the current “as-is” VXLAN and NVGRE Internet-Drafts to Informational RFC status. This is primarily to document these technologies, since they are already implemented and deployed. In addition, other IETF activities refer back to these drafts, so as Informational RFCs they can be formally referenced.
The NVO3 reference architecture being developed will identify the key system components and describe how they fit together. This architecture phase will drive the requirements definition work, which will then feed into a gap analysis. Key terms are being defined, such as the NVE (Network Virtualization Edge) and the NVA (Network Virtualization Authority). Some critical ‘on-the-wire’ protocols are being fleshed out, such as the NVE-to-NVA protocol and the inter-NVA protocol. Finally, consensus was reached on the need for both a push and a pull model of control-plane distribution.
The L3VPN sessions are always worth attending, and more recently so since this WG has become closely correlated to the NVO3 activities. Besides the usual discussions of MPLS-related Internet-Drafts and technologies, this session included a very active discussion on how the activities in this WG remain aligned (or not) with the NVO3 activities. This discussion was very much needed, since the last few L3VPN WG sessions included some Internet-Drafts that overlap in some sense with the NVO3 charter. Most of this discussion was around which problem areas should remain in the L3VPN WG and which should perhaps move to the NVO3 WG. As an example, there was a good amount of consensus that technologies “inside” the DC should belong in the NVO3 WG, while technologies around “inter-DC” solutions should perhaps remain in this WG. Well, this makes a large assumption that the inter-DC solutions and technologies are based on L3VPN solutions. L3VPN solutions could clearly be one answer to the inter-DC problem, but they are not the only answer. That’s where it gets fuzzy.
I think one way to clarify this is that if the applicability statement in an ID includes intra-DC problems, then it should be part of the NVO3 WG. If protocol extensions to L3VPN solutions are needed for a particular solution, then clearly that work must be done in the L3VPN WG. But if an ID touches on inter-DC problems, then perhaps it needs to be presented in both WGs; at least until it gets further fleshed out. Clear as mud yet?
Another WG that continues to generate a fair amount of activity and interest is PCE. This is also an area of IETF work that is somewhat related to the SDN solution space. This WG is focused on how to enhance traffic-engineering decisions in MPLS networks.
I had two key takeaways from this session. One is that this WG has quickly moved from working on solutions that only “recommend” traffic-engineered LSPs to the network to now also including solutions that actually “instantiate” those LSPs in the network. In other words, the solutions being discussed here include both the centralized TE control-plane and the actual distributed data-plane. The other important takeaway is that there is more activity around providing PCE redundancy; that is, how to provide multiple PCE databases for the network and how to keep them synchronized. This is a hard problem to solve and in some sense can help flesh out the entire “logically centralized” notion in SDN.
The session ended with an interactive discussion that I thought was quite interesting; well actually, almost amusing. The topic was about whether Auto-BW mechanisms should be pushed back into the network nodes so they can each dynamically adjust their LSP BW. Recall that the primary goal of PCE is to logically centralize the traffic-engineering decisions. It’s all about having a central holistic view of network traffic loads in order to fully optimize the entire network. So, this discussion was about whether it makes sense to then allow each network node to adjust its LSP bandwidth, using auto-BW mechanisms, in a distributed way. Doesn’t this sound counter to the goal of PCE? So, is it a centralized or is it distributed? I hope you see why I thought this entire discussion was somewhat amusing.
I2RS was also very well represented and generated lots of interesting dialogue. I believe this was only the second time this WG met, so the activity here is very early in its definition. The architecture team started off discussing the high-level architecture and a policy framework. All this discussion is centered on the Routing Information Bases (RIBs) of a network node; for example, are multiple RIBs needed? What mechanisms are needed to inject state into the RIB(s)? What mechanisms are needed to extract state from the RIB(s)?
The I2RS activity focuses on the southbound problem space; in other words, the interfaces and protocols needed from the controller or client down to the network node. It does not focus on the northbound interfaces or applications that live on top of the controller.
There was also a question about whether I2RS protocol extensions are being developed in private, outside the knowledge of this WG. There was a request to encourage people to share those discussions and potential experiments with the general WG to spur discussion.
An interesting I2RS service chaining use case was discussed that is being co-authored by Brocade Principal Architect, Ramki Krishnan.
I dropped into the FORCES session to see how this WG is progressing. This WG has been around for many years but it never really gained much attention in terms of implementation or deployment. Now that SDN is here to stay, my sense is that this WG is trying to re-emerge to become relevant to that conversation.
To that end they have a new charter and some of the terminology being used in this WG is more aligned with the SDN problem space. For example, there are drafts that discuss Virtual Control Element (VCE) nodes and Virtual Forwarding Element (VFE) nodes.
The OPSA WG continued the discussion around a draft presented by Brocade Principal Architect, Ramki Krishnan, on mechanisms for optimal LAG/ECMP link utilization. This capability is important not only in service provider networks but also in research and education networks. Researchers often originate very large IP flows, and the ability to properly load-balance these large flows across LAG bundles is becoming increasingly important. I also presented this draft, on behalf of Ramki, at the recent ESnet Site Coordinators Committee (ESCC) Meeting at Lawrence Berkeley National Laboratory, where it was very well received.
Regarding the various Routing WGs, here is a short recap.
An interesting presentation in the GROW WG was on a use case of using route reflectors for traffic steering. This idea is not new but it does provide an additional data point on how service providers desire enhanced capabilities to influence traffic patterns in their networks.
Similar to the last IETF, there was more discussion in the IDR WG about the ability to distribute link-state and TE information northbound, using extensions to BGP. This type of capability would allow higher layer applications to make more intelligent traffic-engineering decisions. Kind of sounds SDN-like, doesn’t it?
So, that wraps up this short update on the IETF 87 SP related activities. Please let me know if you have any comments or questions.
As every service provider already knows, performance and reliability are always high priorities for enterprise IT. Add cloud computing to the mix, and their importance multiplies. In this week’s blog, I continue discussing the WaveLength Market Analytics study on enterprise infrastructure service needs. Specifically, I address the market opportunity for server load balancing services in terms of both size and demand, as well as service packaging and pricing.
Like the market for IPv6 translation services, the market for server load balancing (SLB)-as-a-service is very large. When asked about their top three IT management priorities, more than half of enterprise IT survey respondents chose improving user experience, service quality, and reliability, as the graph above shows. High performance and reliability requirements drive demand for server load balancing solutions, so if half of America’s 18,000 large organizations outsource just one application’s server load balancing, it’s a good-sized market.
Of course, server load balancing is not a new concept. Enterprises have been using the technology for more than a decade and as the table below shows, they use a mix of server load balancing solutions. Most organizations use an internally managed solution and about half already use a service provider for some type of server load balancing. Forty-five percent of both the medium and large enterprises outsource using dedicated load balancers from a service provider. Although less mature, even 38% of medium and 20% of large enterprises say they outsource to a service provider using shared load balancers. As public and hybrid cloud acceptance grows, this shared load balancing service is expected to grow along with them.
Where are enterprises on their willingness to buy a new variant, server load balancing-as-a-service? What value-added features would they be willing to buy and what would they be willing to spend on them? Among the three new infrastructure services, IPv6 translation, storage area network extension, and server load balancing, server load balancing has the highest percentage of respondents who are willing to pay for services. At 86%, large-enterprise survey respondents are more likely to be willing to pay for server load-balancing services than for the other two infrastructure services discussed in my previous blogs, IPv6 translation and SAN extension.
As the table above shows, server load balancing services offer a significant opportunity for differentiated services or for selling additional value-added services. Nearly three-quarters of large enterprises are willing to pay for all SLB capabilities included in the study. A service provider can offer global load balancing between two data centers as a stand-alone service, and about 77% of large enterprises said they’d be willing to pay for it. A service provider can also earn extra revenue by packaging device redundancy for highly available SLB-as-a-service. This value-add can entice 45% of medium and 78% of large enterprises to part with some dollars.
So how much extra will enterprises likely pay for SLB value-added features? We asked how much of a premium over the base SLB-as-a-service fees they would pay for three specific value-adds. On average, large enterprises are willing to pay an additional 27% for IPv6 migration support and for SSL offload, and an additional 31% for device redundancy. The more budget-constrained medium-sized enterprise segment reports they are willing to pay approximately an additional 25% for each of the three value-adds.
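To make the arithmetic concrete, here is a minimal sketch of what those premiums mean in revenue terms. The $1,000/month base fee is purely hypothetical; only the premium percentages come from the survey.

```python
def premium_price(base_fee, premium_pct):
    """Monthly fee after applying a value-add premium given in percent."""
    return base_fee * (1 + premium_pct / 100)

# Hypothetical $1,000/month base SLB-as-a-service fee for a large enterprise:
ssl_or_ipv6 = premium_price(1000, 27)   # about $1,270/month
redundancy = premium_price(1000, 31)    # about $1,310/month
```

Stacking several a la carte value-adds is where the ARPU lift comes from.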
The enterprise IT imperative to improve user experience, service quality and reliability, along with the growth of cloud apps, creates a large and growing service provider opportunity. With its many value-added features, SLB-as-a-service is potentially lucrative: it is easily differentiated, and average revenue per user (ARPU) increases with each a la carte value-added feature. Service providers should invest in their SLB-as-a-service portfolio today to take those new SLB revenue streams to the bank.
Multi-Chassis Trunking (MCT) is a key Brocade technology that helps network operators build scalable and resilient networks, and we are continuing to add more enhancements to MCT that provide advanced redundancy. MCT-aware routers appear as a single logical router that is part of a single link aggregation trunk interface to connected devices. While standard LAG provides link- and module-level protection, MCT adds node-level protection, and provides sub-200 ms link failover times. It works with existing devices that connect to MCT-aware routers and does not require any changes to the existing infrastructure. Pete Moyer wrote about the multicast over MCT features we added in NetIron software release 05.4.00 in his blog earlier this year, and we recently released version 05.5.00 with support for Layer 3 dynamic routing over MCT which is what I want to write about today. Together these two enhancements give network operators the ability to deploy MCT Layer 3 active-active or active-passive redundancy at the network edge or border for IP unicast and multicast.
Layer 3 Routing Over MCT Highlights
Here’s how it works in a nutshell for a quick example with OSPF. The CCEP and CEP in the diagram are on different Layer 3 networks and the routers run OSPF as the IGP. The MCT CCEPs can be configured as active in the OSPF so that they establish an OSPF adjacency with the connected device. Or they can be configured as passive in the IGP so that the interface is advertised via OSPF, and a static route is configured on the connected device.
Layer 3 traffic that is sent to the network connected to the CEP is load-balanced over the two MCT routers by the connected device, assuming an active-active configuration. If one of the CCEPs or MCT routers fails, Layer 3 traffic will still be forwarded over the MCT Inter-Chassis Link (ICL) to the remaining MCT router. I left out some details for simplicity to illustrate the functionality; obviously you’d want redundant connections via CEPs on each MCT router in order to provide full redundancy.
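The active-active behavior described above can be sketched conceptually. The snippet below is not Brocade's actual hashing algorithm; it simply illustrates flow-based load balancing across two MCT next hops and the convergence onto the survivor when one fails.

```python
import zlib

def pick_next_hop(flow_id, live_next_hops):
    """Flow-based load balancing: hash a flow identifier onto the list of
    live next hops, roughly the way a connected device spreads flows
    over a LAG toward the two MCT routers."""
    if not live_next_hops:
        raise RuntimeError("no next hop available")
    index = zlib.crc32(flow_id.encode()) % len(live_next_hops)
    return live_next_hops[index]

# Active-active: both MCT routers are candidate next hops.
live = ["mct-router-1", "mct-router-2"]
flow = "10.0.0.5:443->10.1.0.9:55012"
normal_path = pick_next_hop(flow, live)

# A CCEP or router failure leaves one candidate; every flow
# converges onto the surviving router.
survivors = ["mct-router-2"]
assert pick_next_hop(flow, survivors) == "mct-router-2"
```

The same flow ID always hashes to the same router, which is what keeps packets of one flow in order while still spreading load across the pair.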
For more information on Layer 3 routing with MCT, refer to the “MCT L3 Protocols” section in the MCT chapter in the Multi-Service Switching Configuration Guide. If you need more information on MCT then we have two technical white papers that you can read. “Multi-Chassis Trunking for Resilient and High-Performance Network Architectures” provides an overview of the technology, and “Implementing Multi-Chassis Trunking on Brocade NetIron Platforms” has design and deployment details.
sFlow is a very interesting technology that often gets overlooked in terms of network management, operations and performance. That’s a shame, as it can be a very powerful tool in the network operator’s tool-kit. In this brief blog, I hope to shed some light on the appealing advantages of sFlow. To start with - if you are a network operator and you are not gathering network statistics from sFlow, I hope you will carefully read this blog!
While the title of this blog says enhancements to sFlow, I’d like to focus a good portion of this piece on sFlow itself and explain why it’s not just useful, but why it should be considered a necessary component of any overall network architecture. I’ll also point out some differences between sFlow, NetFlow and IPFIX (since I frequently get asked about these when I talk about sFlow with customers).
sFlow was originally developed by InMon and has been published in Informational RFC 3176. In a nutshell, sFlow is the leading, multi-vendor, standard for monitoring high-speed switched and routed networks. Additional information can be found at sFlow.org.
sFlow relies on sampling, which enables it to scale to the highest-speed interfaces, such as 100 GbE. It provides very powerful statistics, and this data can be aggregated into very edifying graphs. Here is a pretty cool animation describing sFlow in operation. sFlow provides enhanced network visibility and traffic analysis; can contribute relevant data to an overall network security solution; and can be used for SLA verification, accounting and billing purposes. sFlow has been implemented in network switches and routers for many years and is now often implemented in end hosts.
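Because sFlow samples rather than captures, the collector scales the samples back up to estimate totals. The sketch below shows that scaling, along with the widely cited rule of thumb for sampling error published by sFlow.org (treat the exact constant as an approximation, not a spec guarantee):

```python
import math

def estimated_frames(samples, sampling_rate):
    """Scale a sampled frame count up to an estimate of the total
    (e.g. 1-in-N sampling means each sample represents N frames)."""
    return samples * sampling_rate

def percent_error(samples):
    """Commonly cited sFlow accuracy bound at ~95% confidence:
    error <= 196 * sqrt(1 / samples), in percent."""
    return 196 * math.sqrt(1.0 / samples)

# 10,000 samples collected at a 1-in-8192 sampling rate:
total = estimated_frames(10_000, 8192)   # about 82 million frames
error = percent_error(10_000)            # under 2% worst-case error
```

The key point: accuracy depends on the number of samples collected, not on the link speed, which is exactly why sampling scales to 100 GbE.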
Here are some publicly available sFlow generated graphs from AMS-IX (the sFlow samples are taken from Brocade NetIron routers).
Here is another simple output from sFlow, showing the top talkers in a specific IP subnet.
A Short Comparison of sFlow, NetFlow and IPFIX
While sFlow was explicitly invented as an open, standards-based protocol for network monitoring, NetFlow was originally developed to accelerate IP routing functionality in Cisco routers (it remains proprietary to Cisco). The technology was subsequently modified to support network monitoring functions instead of providing accelerated IP routing; however, it can exhibit performance problems on high-speed interfaces. Furthermore, sFlow can provide visibility and network statistics from L2 – L7 of the network stack, while NetFlow is predominantly used for L3 – L4 (there is now limited L2 support in NetFlow but there is still no MPLS support).
Another key difference between the two protocols is that sFlow is a packet-sampling technology, while NetFlow attempts to capture entire flows. Attempting to capture an entire flow often leads to performance problems on high-speed interfaces (10 GbE and beyond).
IPFIX is an IETF standards-based protocol for extracting IP flow information from routers. It was derived from NetFlow (specifically, NetFlow Version 9) and is standardized in RFCs 5101, 5102, and 5103. As its name implies, IPFIX remains specific to L3 of the network stack. It is not as widely implemented in networking gear as sFlow.
sFlow and OpenFlow?
There is some recent activity around integrating sFlow with OpenFlow to provide some unique “performance aware” SDN applications. For example, take a look at this diagram:
[Diagram referenced from here.]
In this example, sFlow is used to provide the real-time network performance characteristics to the SDN application running on top of an OpenFlow controller, and OpenFlow is used to re-program the forwarding paths to more efficiently utilize the available infrastructure. Pretty slick, huh? This example uses sFlow-RT, a real-time analytics engine, in place of a normal sFlow collector.
NetIron sFlow Implementation Enhancements
Brocade devices have implemented sFlow in hardware for many years. This hardware-based implementation provides key performance advantages. The sampling rate is configurable, and sFlow provides packet header information for ingress and egress interfaces. sFlow can provide visibility in both the default VRF and non-default VRFs. NetIron devices support sFlow v5, which supersedes the version outlined in RFC 3176.
In addition to the standard rate-based sampling capability, NetIron devices are capable of using an IPv4 or IPv6 ACL to select which traffic is to be sampled and sent to the sFlow collector. This capability provides more of a flow-based sampling option, rather than just sampling packets based on a specified rate. In addition to sampling L2 and L3 information, sFlow can be configured to sample VPN endpoint interfaces to provide MPLS visibility. Neither NetFlow nor IPFIX can provide this type of visibility.
One of the new enhancements to the NetIron sFlow implementation is the ability to provide Null0 interface sampling. Service providers often use the Null0 interface to drop packets during Denial of Service (DoS) attacks. sFlow can now be configured to sample those dropped packets to provide visibility into the DoS attack. This feature is in NetIron Software Release 5.5.
The other new enhancement that I’d like to mention is the ability to now capture the MPLS tunnel name/ID when sampling on ingress interfaces. This feature is coming very soon and will provide additional visibility into MPLS-based networks.
In summary, I hope you gained some additional insight into the advantages of leveraging the network visibility that sFlow provides. One last thing I’d like to correlate to sFlow is Network Analytics. These are complementary technologies which can co-exist together in the same network, while performing different functions. Brocade continues to innovate in both of these areas and I welcome any questions or comments you may have on sFlow or Network Analytics.
As data center networks scale to support thousands of servers running a variety of different services, a new network architecture using the Border Gateway Protocol (BGP) as a data center routing protocol is gaining popularity among cloud service providers. BGP has traditionally been thought of as usable only for large-scale Internet routing, but it can also serve as an IGP between data center network layers. The concept is pretty simple and has a number of advantages over using an IGP such as OSPF or IS-IS.
Large-Scale BGP is Simpler than Large-Scale IGP
While BGP in itself may take some heavy learning to fully grok, BGP as a data center IGP uses basic BGP functionality without the complexity of full-scale Internet routing and traffic engineering. BGP is especially suited for building really big hierarchical autonomous networks, such as the Internet. So, introducing hierarchy with EBGP and private ASNs into data center aggregation and access layers down to the top of rack behaves just like you would expect. We’re not talking about carrying full Internet routes down to the top of rack here, just IGP-scale routes, so even lightweight BGP implementations that run on 1RU top of rack routers will just work fine in this application.
The hierarchy and aggregation abilities of an IGP are certainly quite extensive, but each different OSPF area type, for example, introduces different behaviors between routers, areas and how different LSA types are propagated. There’s a lot of complexity to consider when designing large-scale IGP hierarchy, and a lot of information that is flooded and computed when the topology changes. The other advantages of BGP are the traffic engineering and troubleshooting abilities. With BGP you know exactly what prefix is sent and received to each peer, what path attributes are sent and received, and you even have the ability to modify path attributes. Using AS paths you can tell precisely where the prefix originated and how it propagated, which can be invaluable in troubleshooting routing problems.
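The troubleshooting value of AS paths is easy to illustrate. The sketch below (the ASNs are made up for illustration) reads a received AS_PATH to recover the originating block and to spot a repeated ASN, which would indicate a loop:

```python
def analyze_as_path(as_path):
    """Given a received AS_PATH as a list of ASNs (nearest peer first,
    origin last), return the originating ASN and whether any ASN
    repeats, which would indicate a path loop."""
    origin = as_path[-1]
    has_loop = len(set(as_path)) != len(as_path)
    return origin, has_loop

# A prefix heard at the data center core via an aggregation block
# (private ASN 65011) that was originated by a top-of-rack block (65001):
origin, loop = analyze_as_path([65011, 65001])  # -> (65001, False)
```

With private ASNs assigned per building block, the AS path alone tells you which rack originated a prefix and which aggregation layer it crossed.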
How it Works
What you basically do is divide the network into modular building blocks made up of top of rack access routers, aggregation routers, and data center core routers. Each component uses its own private ASN, with EBGP peering between blocks to distribute routing information. The top of rack component doesn’t necessarily need to be a single rack; it could certainly be a set of racks and a BGP router.
Petr Lapukhov of Microsoft gave a great overview of the concept at a NANOG conference recently in a presentation called “Building Scalable Data Centers: BGP is the Better IGP”, which goes into a lot more background on their design goals and implementation details. If you’d like to experiment with the network design as Petr describes, the commands for the BGP features on slide 23 for the Brocade NetIron software are:
AS_PATH multipath relax: multipath multi-as (router bgp)
Allow AS in: no enforce-first-as (router bgp or neighbor)
Fast EBGP fallover: fast-external-fallover (router bgp)
Remove private AS: remove-private-as (router bgp or neighbor)
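Put together, a minimal `router bgp` stanza on a top-of-rack device might look like the sketch below. The commands are the ones listed above; the ASNs, the peer address, and the exact stanza layout are illustrative assumptions, not taken from a configuration guide, so check syntax against your NetIron release.

```
router bgp
 local-as 65001
 neighbor 10.1.1.1 remote-as 65000
 multipath multi-as
 fast-external-fallover
 no enforce-first-as
 neighbor 10.1.1.1 remove-private-as
```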
Taking it a Step Further
An alternative that takes the design even further, from the top of rack down into the virtual server layer for high-density multitenant applications, is to also use the Brocade Vyatta vRouter. In this design, EBGP would be run at each layer, from the data center core down to a virtual server that routes for a set of servers in the rack. This addition gives customers a lot of flexibility in controlling their own routing, for example, if they wanted to announce their own IP address blocks to their hosting provider as part of their public cloud. Customers could also use some of the other vRouter VPN and firewall features to control access into their private cloud.
In addition to using BGP to manage routing information, you can also build an OpenFlow overlay to add application-level PBR to the network. Using the Brocade hybrid port feature, which enables routers to forward using both OpenFlow rules and Layer 3 routing on the same port, introducing SDN into this network as an overlay is easy. In fact, this is exactly what Internet2 is doing in production on their AL2S (Advanced Layer 2 Services) network to enable dynamically provisioned Layer 2 circuits.
So is BGP better as a data center IGP? I think the design lends itself especially well to building modular data center networks with independent and autonomous modular components that can be built all the way down to the virtual server level. Perhaps you even have different organizations running their own pieces of the network, or servers that you’d rather not invite into your OSPF or IS-IS IGP.
For more information on Brocade’s high density 10 GbE, 40 GbE and 100 GbE routing solutions, please visit the Brocade MLX Series product page.
Today, Brocade announced its strategy to bridge the physical and virtual worlds of networking to enable customers to build an “On-Demand Data Center”. For service providers, an On-Demand Data Center means getting closer to becoming the greatly sought-after cloud provider by increasing business agility, reducing complexity and scaling virtualization. In this blog I will focus on the announcement of the new 40 GbE interface module we have added to the Brocade MLX Series to enhance the physical aspects of the data center core that are required as the foundation for the On-Demand Data Center.
In the core of the service provider data center, network operators need to be able to respond in real time to dynamic business needs by delivering applications and services on demand. At the same time, they must contain costs through more efficient resource utilization and simpler infrastructure design. Traditional network topologies and solutions are not designed to support increasingly virtualized environments. With the Brocade MLX 4-port 40 GbE module, in conjunction with Brocade VCS Fabric technology, you can scale the data center fabric and extend it across the Layer 3 boundary between data centers. High 40 GbE density with advanced Layer 3 capabilities helps consolidate the devices and links needed in the data center core. Large Link Aggregation Group (LAG) capabilities provide capacity on demand and reduce management overhead. By consolidating devices and simplifying the network, customers can reduce capital and operational expenditures in terms of power, space, and management savings, minimizing TCO. In addition to massive scalability from the 40 GbE density, the rich feature set of the Brocade MLX 4-port 40 GbE module eliminates the need for additional edge routers by enabling Layer 3 data center interconnect with full-featured support for Access Control Lists (ACLs), routing, and forwarding in the data center core.
Prior to 2012, optical equipment dominated the 40 GbE market. 40 GbE is now taking off on Ethernet routers and switches, principally in data centers, because it helps to bridge the bandwidth and economics gap between 10 GbE and 100 GbE for customers. The market for 40 GbE in high-end routing applications is expected to ramp up quickly, with CAGR from 2013 to 2016 expected to be 125% and a total market size in 2016 of $239M (Source: Dell’Oro, 2012). As with 10 GbE, the business driver will be the growth of bandwidth-intensive applications.
The image shows a primary deployment model for the 40 GbE module in the core of the data center. The high-density, wire-speed performance enables 40 GbE connection with the aggregation layer – in this case the Brocade VDX 8770 supporting the VCS Fabric. With support for advanced MCT as well, this new module enables data center cores to scale in a highly resilient and efficient manner.
The MLXe also serves as an ideal border router to interconnect the data center to the WAN – or other data centers. Here 40 GbE or 100 GbE is typically used. The new 40 GbE module is often used, especially where underlying WAN optical infrastructure does not yet support 100G.
There has been lots of recent discussion about Google and AT&T planning to provide the city of Austin, TX with 1-gigabit-per-second Internet serv.... While the competitive and innovative spirit should make Austin feel like one of the luckiest towns in the world, I would like to tell you about a metro service provider in Clarksville, Tennessee that already provides its residential and commercial customers with 1 Gigabit Ethernet services to the premise.
CDE Lightband is the leading municipal utility provider of electricity, digital television, Internet and voice to all of the 100 square miles within the boundaries of Clarksville, TN. They offer their services to more than 64,000 customers while maintaining 892 miles of power lines and 960 miles of fiber optic cable. Most distinctively, CDE Lightband provides a true Active Ethernet network to their customers. This means that each and every one of their residential and commercial customers has their own active Ethernet, Fiber-to-the-Premise port. The value of an Active Ethernet network is that the bandwidth on the connection is not shared, which makes it an effective way of ensuring a 1-Gbps connection to each subscriber. It is certainly a feather in the cap for a service provider of any size.
Brocade is proud to support CDE Lightband’s Active Ethernet project. By using the Brocade NetIron CES series switches, CDE Lightband can sell Gigabit Internet service and deliver the throughput to back it up. In the future, CDE Lightband plans to use the 10G ports on the Brocade CES to extend the switches deeper into the network as they expand their internal infrastructure.
Like all service providers, CDE Lightband’s top priority is to provide world-class performance and reliability. With the Brocade CES series switches, CDE Lightband is able to offer their customers unique and powerful Ethernet services (as exemplified in their Active Ethernet project) and deliver them on pace with their customers’ business and personal requirements. Brocade is very honored to be the backbone of CDE Lightband’s network!
To learn more about the Brocade and CDE Lightband partnership, please watch this video.
I recently returned from IETF 86 and would like to update the folks in this community with a brief synopsis of the event. Overall, it was a very well attended, interactive and relevant event! But I think that’s pretty much the norm these days, particularly with all the interest in SDN related technologies and use cases. I will post a separate blog in our SDN community on the SDN related IETF activities, so please go there for that update. In this blog, I will focus on IETF activities related to service providers.
I’ll start off with the discussion around IPv6 in MPLS networks. While we all know that there has been some interest and IETF standards work in the area of MPLS/IPv6, it has yet to garner much real deployment interest. Techniques for providing IPv6 over IPv4-based MPLS networks, such as 6PE and 6VPE, have solved some of the issues with IPv6 and MPLS. However, it appears the IETF community is now getting behind full IPv6 support in MPLS. This would include native IPv6 LDP and RSVP-TE support. Some folks believe that although full IPv6 MPLS networks may not be needed for another 3-5 years, the IETF community should get on board now and start officially driving this. The MPLS WG will start formally tracking progress in this area, as it’s deemed important work.
The use of entropy labels to improve load balancing in MPLS networks was briefly discussed; this appears to be a done deal in terms of standards (RFC 6790), with broad community support and consensus.
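For readers curious how an entropy label works conceptually, here is a hedged Python sketch of the RFC 6790 idea (the hash function and field choices are my own illustration, not anything mandated by the RFC): the ingress router hashes the flow once and carries the result in the label stack, so transit routers can load-balance on labels alone without inspecting the payload.

```python
import hashlib

def entropy_label(src_ip, dst_ip, proto, src_port, dst_port):
    """Derive a 20-bit entropy label from a flow's 5-tuple (sketch only).

    Transit LSRs can then hash on the label stack for ECMP/LAG
    load balancing instead of doing deep packet inspection.
    """
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    label = int(hashlib.sha256(key).hexdigest(), 16) & 0xFFFFF  # 20-bit label field
    if label < 16:          # values 0-15 are reserved labels; avoid them
        label += 16
    return label

el = entropy_label("192.0.2.1", "198.51.100.7", 6, 49152, 80)
assert 16 <= el <= 0xFFFFF
```

The key property is determinism: the same flow always yields the same label, so all of its packets follow one path.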
TRILL over Pseudo-Wires was discussed in the PWE3 WG. This is cool stuff and appears to have some degree of consensus. This basically would allow a TRILL domain in one data center to have layer-2 connectivity to another TRILL domain in another data center.
A similar topic of VXLAN over L2 VPNs was discussed in the L2VPN WG. This would provide a layer-2 MPLS connection between VXLAN or NVGRE logical overlay networks. This is also a pretty cool use case and this appears to be a needed solution if VXLAN/NVGRE solutions become more widely deployed in data centers. A somewhat related topic was discussed on how Ethernet VPNs (E-VPNs) could be leveraged to provide a data center overlay solution. In this context, E-VPNs are based on MPLS technologies. While this solution revolves around Network Virtualization Overlays, it was discussed in the L2VPN WG due to it leveraging MPLS technologies. This Internet Draft was also discussed in the NVO3 WG.
Interesting work on MPLS forwarding compliance and performance requirements was discussed across the MPLS-related WGs. This work intends to document the MPLS forwarding paradigm from the perspective of the MPLS implementer, MPLS developer and MPLS network operator. Very useful work!
In the L3VPN WG, there were quite a few IDs that overlap with the NVO3 WG and data center overlay technologies. The general support for MPLS-based solutions for data center overlay architectures appears to be gathering momentum. From a high level, this does make sense as MPLS VPN technologies provide a logical network overlay in the wide area of service provider networks. As data center overlay architectures evolve, why not leverage this work and experience? I will discuss more on this topic in my SDN community blog.
To wrap up the MPLS activities, there were a number of other MPLS-related developments and enhancements that I won’t go into detail about here. Areas such as P2MP LSPs, special-purpose MPLS label allocations, OAM, and additional functionality for advertising MPLS labels into the IGP (like an enhanced “forwarding adjacency”) were all discussed and are progressing at various stages through the IETF standards process.
Another WG that generated a fair amount of activity and interest is PCE. This is also an area of IETF work that is somewhat related to the SDN solution space. This WG is focused on how to enhance traffic-engineering decisions in MPLS networks. PCE functionality would “recommend” traffic-engineered LSPs for the network but would not be responsible for the actual instantiation of those LSPs into the network. That would be done by another function and is deemed outside the scope of the PCE WG.
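Conceptually, a PCE’s “recommendation” boils down to a constrained shortest-path computation. The following toy Python sketch (topology, metrics and bandwidth figures are invented for illustration, and real PCEs consider far more constraints) prunes links that lack the requested bandwidth and then runs Dijkstra over what remains:

```python
import heapq

def pce_compute_path(links, src, dst, min_bw):
    """Toy PCE-style computation: prune links with insufficient
    bandwidth, then run Dijkstra on the remaining topology.
    `links` maps node -> list of (neighbor, te_metric, avail_bw)."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, metric, bw in links.get(node, []):
            if bw < min_bw:          # constraint: skip links lacking bandwidth
                continue
            nd = d + metric
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    if dst not in dist:
        return None                  # no feasible traffic-engineered path
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path))

topo = {
    "A": [("B", 10, 40), ("C", 5, 10)],
    "B": [("D", 10, 40)],
    "C": [("D", 5, 10)],
}
# With a 20 Gb/s demand the cheaper A-C-D path is pruned; A-B-D is recommended.
assert pce_compute_path(topo, "A", "D", 20) == ["A", "B", "D"]
```

Note that, as the WG scoping says, this function only recommends a path; instantiating the LSP in the network is someone else’s job.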
The WG agreed to make the PCE MIB “read-only”. This makes sense since the MIB is not a good place to implement PCE functionality. They also discussed P2MP LSPs, Service aware LSPs and even the support of wavelength switched optical networks. They also agreed that “Stateful” PCE was indeed in scope and in the charter.
Overall, nothing really ground breaking to report on in the area of routing activity at this IETF. One topic worth a mention is the North-bound distribution of link-state and TE routing information using BGP. This area is somewhat related to the SDN solution space, as it could provide upper layer applications (such as ALTO or PCE) the knowledge of link-state topology state from the network. This would allow those applications to make more intelligent traffic-engineering decisions.
Another area of routing that is interesting to mention is having the ability to make routing decisions based upon additional link-state metric information, such as latency, jitter and loss. This seems like a very logical evolution of IP routing.
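As a rough illustration of what such a decision might look like, here is a tiny Python sketch that blends latency, jitter and loss into a single link cost (the weights and figures are entirely made up; a real deployment would tune them per traffic class, since loss matters far more for, say, VoIP than for bulk transfer):

```python
def link_cost(latency_ms, jitter_ms, loss_pct,
              w_latency=1.0, w_jitter=0.5, w_loss=50.0):
    """Blend latency, jitter and loss into one link metric.
    Weights are illustrative only, not from any standard."""
    return w_latency * latency_ms + w_jitter * jitter_ms + w_loss * loss_pct

# A short but lossy link can end up costing more than a longer clean one:
clean = link_cost(latency_ms=30, jitter_ms=2, loss_pct=0.0)   # 31.0
lossy = link_cost(latency_ms=10, jitter_ms=1, loss_pct=0.5)   # 35.5
assert lossy > clean
```

Feeding such a composite cost into an ordinary shortest-path computation is what would let the routing system prefer the cleaner link even though it is “longer” in the traditional sense.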
And to wrap up the routing activity: as expected, the security of inter-domain routing continues to generate lots of interest. It was interesting that immediately after the IETF, a paper was published by Boston University on the security implications of the Resource Public Key Infrastructure (RPKI) and discussed on the SIDR mailing list. This paper seems to have re-ignited some of the controversy around secure routing.
I2RS was also very well represented and generated lots of interesting dialogue and debate.
This WG is fairly new. The primary goal of this WG is to provide a real-time interface into the IP routing system. This interface will not only provide a configuration and management capability into the routing system, but will also allow the retrieval of useful information from the routing system. Quite a bit of the discussion centered around what type of state information needs to be injected into the routing system, what type of state information should be extracted from the routing system, and, interestingly enough, what specifically the “routing system” is. The routing system is generally understood to be the Routing Information Base (RIB) in IP routers, but there was a good amount of debate on exactly what constitutes a RIB, what information it holds and what the interface to this RIB might look and behave like. It appears this WG may have taken a step back to re-group and get more focused before moving on to solutions too rapidly.
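To make the “what does an interface to the RIB look like” question concrete, here is a deliberately minimal, hypothetical Python sketch of an I2RS-style read/write interface (class and method names are my own invention; the actual I2RS work covers far more, including notifications, ephemeral state and multiple RIBs):

```python
class Rib:
    """Hypothetical, minimal I2RS-style RIB interface: external
    applications can inject routes, look them up, and withdraw them."""

    def __init__(self):
        self._routes = {}   # prefix -> (next_hop, injecting_client)

    def inject(self, prefix, next_hop, client):
        """Write path: an application injects state into the routing system."""
        self._routes[prefix] = (next_hop, client)

    def withdraw(self, prefix, client):
        """Only the client that injected a route may withdraw it."""
        entry = self._routes.get(prefix)
        if entry and entry[1] == client:
            del self._routes[prefix]

    def lookup(self, prefix):
        """Read path: retrieve state from the routing system."""
        entry = self._routes.get(prefix)
        return entry[0] if entry else None

rib = Rib()
rib.inject("203.0.113.0/24", "10.0.0.2", client="traffic-steering-app")
assert rib.lookup("203.0.113.0/24") == "10.0.0.2"
rib.withdraw("203.0.113.0/24", client="traffic-steering-app")
assert rib.lookup("203.0.113.0/24") is None
```

Even this toy surfaces the questions the WG debated: who owns injected state, how conflicts between clients are resolved, and what exactly the retrievable state comprises.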
There were five use case drafts that were presented and discussed. So, while this WG may have taken a step back to more clearly understand and define the problem space, they are also continuing to move forward with relevant use case definitions and then onward to solutions.
So, that wraps up this short update on the IETF 86 SP related activities. I should mention before closing that the I2RS WG intends to hold an interim meeting after the ONS event in April, so if you are attending the ONS event you may want to attend the I2RS interim meeting as well.
I’d like to follow Greg’s great blog from last week with a related topic. Like his blog, this blog will be focused on router hardware (unlike my previous blogs which were NetIron software related). The topic at hand is a brief discussion of the differences and the pros/cons of FPGA and ASIC technology. I’ll also briefly touch on the advantages of each of these technologies as they apply to high-end IP routers.
FPGAs (Field Programmable Gate Arrays) are specialized chips that are programmed to perform very specific functions in hardware. An FPGA is basically a piece of programmable logic. The first FPGA was invented in 1985, so this technology has been around for quite some time. Rather than executing a function in software, the same function can be implemented in an FPGA and executed in hardware. One can think of an FPGA as “soft-hardware”, since it can be reprogrammed after manufacturing. How many of you remember the bygone days of software-based IP routers? If you do, then you should also remember how poorly the Internet performed at that time! Performance was poor in software-based routers because a centralized CPU executed all functions, both the control/management plane functions and the data plane functions of the router. Today, all modern routers execute the data plane functions in hardware; and more frequently, some vendors are moving certain control plane functions into the router hardware as well. The Bidirectional Forwarding Detection (BFD) protocol is one example of this, where portions of the BFD keep-alive mechanisms are implemented in the line card of the router.
While FPGAs contain vast amounts of programming logic and millions of gates, one thing to note is that there is some programming logic in an FPGA that is not used for the “customer facing” or "mission specific" application or function. In other words, not all the logic in an FPGA is designed to be directly used by the application the FPGA is providing for the customer. There are additional gates needed to connect all the internal logic that is needed to make it programmable; so an FPGA is not fully optimized in terms of “customer facing” logic.
Now, what I find interesting is that some people will still claim that FPGAs cannot scale to the speeds that are required in today’s Internet. However, Brocade has proven this claim to be quite false and has been shipping line-rate, high-performance routers using FPGAs for over 10 years. As shown in the line card diagram in Greg’s blog, an FPGA in this context is really a programmable network processor.
One great advantage of an FPGA is its flexibility. By flexibility, I’m referring to the ability to rapidly implement or reprogram the logic in an FPGA for a specific feature or capability that a SP customer requires. When a networking vendor has a new feature that it wants to implement, the vendor may have the choice of deciding whether to put the feature in software or hardware. This is not always the case; for example, OSPF needs to run in the control plane of the router and cannot be implemented in hardware. The question of whether to implement something in software or hardware basically comes down to a decision of flexibility versus scalability (and cost is always part of that decision process, as one would expect). Implementing something in software usually results in a rapid implementation timeframe, but often to the detriment of performance. As usual, there is always a trade-off to be made. However, if the vendor supports programmable network processors, they can implement the feature in hardware with no detriment to performance. While it takes more time to get the feature into an FPGA rather than implementing it in software, the time-to-market timeframe is still considerably less than doing a similar feature in an ASIC. The real advantage of this becomes evident with deployed systems in a production network. When a customer requires a feature that needs to be implemented in the forwarding plane of a router, once this feature is developed by the vendor the deployed systems in the field can be upgraded to use the new feature. This requires only a software upgrade of the system; no new hardware or line cards would be required. The router’s software image contains code for the FPGAs, as well as the code for the control and management plane of the router.
Back to the performance question: Industry has shown that high-end FPGAs are growing in density while handling higher-speed applications and more complex designs. Furthermore, if you look at the evolution of FPGAs over the years, they follow Moore's Law just like CPUs have been doing in terms of the amount of logic that you can implement into them. Recent history has shown that FPGA development in terms of density is on an exponential growth curve.
FPGAs can also be used for developing a “snapshot” version of a final ASIC design. In this way, FPGAs can be re-programmed as needed until the final specification is done. The ASIC can then be manufactured based on the FPGA design.
ASICs have very high density in terms of logic gates on the chip, and the resulting scalability for a given power budget can give ASICs a competitive edge over an FPGA. One thing to note is that an ASIC is designed to be fully optimized in terms of gates and logic. All the internal structures are used for customer-facing or mission-specific applications or functions. So, while an ASIC may consume more power per unit die size than an FPGA, this power is amortized over a higher-density solution and hence provides better power efficiency.
Comparing and Contrasting FPGAs and ASICs
So, FPGAs and ASICs are both specialized chips that perform complex calculations and functions at high levels of performance. FPGAs, however, can be re-programmed after fabrication, allowing the line card's feature set to be upgraded in the field after deployment. Being able to upgrade the data plane of a deployed router extends the useful lifespan of the system; which correlates to extended investment protection. Since an ASIC is not re-programmable, an ASIC-based line card cannot be upgraded in the field. This is a huge differentiator between the two technologies.
One excellent real-world example of this is when Brocade introduced support for 64 ports in a single LAG. This is industry leading scale (64 10GbE ports in a single LAG!) and since this functionality is implemented in the forwarding plane of the line card, it required reprogramming the Brocade network processor. While this type of capability is in the hardware of the router, it was implemented with a system software upgrade and no hardware needed to be replaced.
There are network scenarios or use cases where it makes more sense to have an FPGA-based product and there are use cases when it makes more sense to have an ASIC-based product. For example, a SP may determine that a high-density solution is more important than a solution that provides quicker feature velocity and, thus, may choose an ASIC-based product. ASIC-based line cards are often denser in terms of numbers of ports, and the cores of SP networks typically do not require high feature velocity. Most of the feature velocity in today’s SP networks is at the edge of the network (i.e., at the PE router) or in the data center, where innovation is currently happening at a rapid pace. The general flexibility of an FPGA results in time-to-market advantages for feature implementation and soft-hardware bug fixes.
For smaller applications and/or lower production volumes, FPGAs may be more cost effective than ASICs. The non-recurring engineering (NRE) cost of an ASIC can run into the millions of dollars. Conversely, in high-volume applications the front-end R&D costs of an ASIC are offset by a lower cost to manufacture and produce. For example, in high-end IP core routers, ASIC-based line cards are more economical due to the lower manufacturing cost, combined with the higher port density that ASICs can provide.
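A simple back-of-the-envelope calculation shows how that break-even works (all dollar figures below are invented purely for illustration, not real ASIC or FPGA costs): the ASIC wins once its NRE has been paid back by the lower per-unit cost.

```python
def breakeven_volume(asic_nre, asic_unit, fpga_unit):
    """Unit volume above which the ASIC's up-front NRE is repaid by
    its lower per-unit cost. All inputs are illustrative figures."""
    if fpga_unit <= asic_unit:
        return None      # FPGA is never pricier per unit -> no break-even
    return asic_nre / (fpga_unit - asic_unit)

# e.g. a hypothetical $3M NRE, $400/unit ASIC vs a $1,000/unit FPGA:
n = breakeven_volume(asic_nre=3_000_000, asic_unit=400, fpga_unit=1000)
assert n == 5000   # below ~5,000 units the FPGA is cheaper overall
```

Below that volume the FPGA’s lack of NRE wins; above it, the ASIC’s manufacturing economics take over, which matches the high-volume core router example above.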
As costs related to ASIC development are increasing, some recent trends may suggest that FPGAs could be a better alternative even for high volume applications that traditionally used ASICs. It is unclear whether this trend is indeed sustaining or a somewhat temporary aberration.
To summarize the primary differences between FPGA and ASIC based line cards; at the highest level it basically comes down to a scalability versus a flexibility question (again, with cost a large contributing factor). ASICs are advantageous when it comes to high port density applications. FPGAs are advantageous when it comes to feature velocity with a shortened time-to-market requirement. In high end core routers, high density ASIC-based line cards can provide higher density at a lower cost than FPGA-based line cards. So, it’s based upon the use case and network application to determine which type of technology would be favored over the other.
As usual, any questions or comments are welcome!
It’s hard to believe that Ethernet is turning 40 this year, isn’t it? Since its conception by Bob Metcalfe and the team of engineers at XEROX PARC in the 1970s, Ethernet technology has continued to evolve to meet the increasing bandwidth, media diversity, cost, and reliability demands of today’s networks. The next Ethernet evolution has officially started, and I'm excited to follow the latest developments on this new technology that will enable networks to support even higher capacities.
“Here is more rough stuff on the ALTO ALOHA network.” Memo sent by Bob Metcalfe on May 22, 1973.
I wrote about 400 GbE in my blog recently as the next likely Ethernet speed, and now it’s official. Last week at the March 2013 IEEE 802 Plenary Session, 400 GbE became an official IEEE 802.3 Study Group that will start work on developing the new standard. Though 100 GbE is only a few years old, it’s important that we start working on the next speed now, so that we have the technology shipping when there is demand from network operators to deploy higher speed Ethernet.
The 400 Gb/s Ethernet Study Group is starting with strong industry consensus this time, which will enable the standard to be developed faster than before. The 400 GbE Call-For-Interest presentation was given last week to measure the interest in starting a 400 GbE Study Group in the IEEE. Based on the hard work of the IEEE 802.3 Ethernet Bandwidth Assessment (BWA) Ad Hoc and the IEEE 802.3 Higher Speed Ethernet (HSE) Consensus Ad Hoc, there was clear consensus on the direction the industry should take on the next Ethernet speed. The straw polls and official vote on the motion to authorize the Study Group formation were all in favor with a few abstains, which showed a high degree of consensus from the individuals and companies represented. This was not the case with the last Ethernet speed evolution, whose study group was simply called the Higher Speed Study Group (HSSG) when it was formed. First, the HSSG had to analyze the market and come up with feasible higher-speed solutions before even deciding on the speed. This made the standardization process much longer as the HSSG debated 40 GbE and 100 GbE, and eventually standardized both speeds for different applications. Since we are already starting the 400 Gb/s Ethernet Study Group with a clear speed objective in mind, the standardization process should be much faster. This means the Study Group could have the 400 GbE standard finished in 2016 with the first interfaces available on the market soon after.
Stay tuned for more updates as we follow the road to 400 GbE! If you happen to be in the Bay Area next week, check out the Ethernet 40th Anniversary Celebration at the Ethernet Technology Summit on Wednesday evening at 6 pm, April 3, 2013.