Service Providers

SDN Flow-Based MPLS Traffic Engineering

by pmoyer on ‎08-26-2016 09:11 AM (78 Views)

This article discusses an innovative solution from Brocade that provides true per-flow classification of traffic into MPLS RSVP-TE tunnels. It leverages the benefits of SDN to deliver a combined SDN + MPLS service offering.

Read more...

OpenFlow: What’s it good for?

by pmoyer on ‎05-16-2016 09:59 AM (1,429 Views)

OpenFlow continues to be a point of discussion when the topic of SDN comes up, even now, 6+ years after it initially came to market. Recently, I have been hearing some uncertainty within the industry about whether OpenFlow will ever be widely deployed, and some folks have been openly questioning what OpenFlow is really good for. This led me to write this blog in hopes of clarifying some of that perceived confusion.

Read more...

“SDN-Based” Network Analytics is here… Now.

by pmoyer ‎10-23-2015 02:10 PM - edited ‎10-23-2015 02:25 PM (3,162 Views)

This blog delves into a unique SDN-based approach for inline data collection as part of a network analytics solution. Network analytics is all the buzz these days. Every network operator needs to understand the traffic on their network so they can ensure that they are providing high-value services and/or determine a way to monetize their network. The Data Center and Enterprise network operators we talk to want vendors to provide the proper solution; however, the solutions available today either require a network architecture change or provide a very inefficient and tedious data collection capability. The solution discussed in this blog solves the problem in an innovative way.

Read more...

A New IP network delivers greater business agility and scalability by enabling customers to configure and buy services via a website.

Read more...

Fiber to the home is made possible with non-traditional methods. Local, friendly customer service helps differentiate the provider from the competition.

Read more...

The Federal Government is going to the cloud... and your company wants to enable that. A market survey of Federal IT folks spells out the requirements.

Read more...

Use Cases for Encryption in the Evolved Packet Core (EPC)

by pmoyer ‎03-20-2015 08:42 AM - edited ‎03-20-2015 08:52 AM (3,760 Views)

In my last blog I wrote about some of the challenges and requirements for Data Privacy in Mobile Provider networks. In this blog I will discuss in more detail the associated use cases for encryption services in the Mobile EPC.

Read more...

Challenges, Requirements of Encryption in the Evolved Packet Core in the New IP

by pmoyer on ‎02-26-2015 12:01 AM - last edited on ‎03-17-2015 09:16 AM by LisaR (2,537 Views)

A recent survey shows that security is top of mind for most CIOs these days. But what does that mean for mobile providers, particularly in terms of network security?

The 3rd Generation Partnership Project (3GPP) was established with the charter of defining interoperable mobility standards and is currently on the 4th Generation – Long Term Evolution (LTE). 4G/LTE provides IP data networking over cellular and IP Multimedia Subsystem (IMS) services, such as Voice over LTE (VoLTE).

Read more...

MWC Preview - SDN for Mobile Networks

by Tom Nadeau on ‎02-25-2015 12:01 AM (1,856 Views)

Software Defined Networking (SDN) has been an industry buzzword for some time. The promise of separating the control plane and data plane and implementing an intelligent controller has created communities of open source developers (e.g. OpenDaylight), driven standards development in the IETF (e.g. I2RS and SFC), and spawned an entirely new approach to networking appliances with NFV to tackle topics such as network programmability and network service chains. Across the landscape, many of the use cases in these organizations focus on mobile networks. My colleague, and Brocade CTO of Mobile Networks, Kevin Shatzkamer, thinks that SDN could be the single biggest opportunity for operational improvement in mobile networks since the transition from circuit switching to packet switching.

Read more...

Just about every day there is a press release where “big data,” “real-time” and “analytics” - along with machine learning - are advertised as breakthrough technologies that will enable business and operational transformation for mobile service providers. While it is true that mobile data analytics create new opportunities, with the notable exception of the nascent introduction of machine learning to the “analytics toolkit,” the technology is far from revolutionary.

Read more...

Yesterday was Data Privacy day. What does that mean for Service Providers?

by pmoyer ‎01-29-2015 12:23 PM - edited ‎01-29-2015 12:48 PM (659 Views)

Brocade is announcing two innovative interface modules for the MLXe routing platform that provide in-line IPsec and MACsec encryption services at line rate! These new encryption modules are the industry’s first to provide L2 and L3 encryption services in a router chassis with no performance impact.

Read more...

I HEART SP’s and RSP’s!

by Ed O'Connell on ‎12-07-2014 02:14 PM (1,367 Views)

A POP can pop up anywhere

Read more...

Short blog on the Layer123 SDN & OpenFlow World Congress and our joint demo with Indiana University/GlobalNOC. We presented an innovative SDN solution that uses OpenFlow for dynamic traffic management in a "Science-DMZ" use case.

Read more...

The Internet IPv4 forwarding table continues to grow unabated. Is anyone surprised?

 

Read more...

This blog provides two valuable insights regarding encryption for providing data privacy of transported data.

Read more...

Is MPLS Dead?

by pmoyer ‎12-17-2013 11:39 AM - edited ‎12-19-2013 10:24 AM (3,836 Views)

This blog provides a glimpse into the recent Isocore MPLS conference and how the SDN evolution has impacted the event.

Read more...

NetIron Software R5.6 OpenFlow Features

by pmoyer ‎11-03-2013 01:49 PM - edited ‎11-14-2013 01:05 PM (3,948 Views)

Update on NetIron Software Release 5.6 OpenFlow features for the SP and REN marketplaces.

Read more...


IETF 87 Recap of SP Related Activities

by pmoyer on ‎08-22-2013 09:40 AM (1,318 Views)

 

I recently returned from IETF 87 and would like to provide a brief synopsis of the event. First, who would have expected the weather to be that hot in Berlin? It hit 96F on Sunday, the first day of the event!

 

Overall, it was a very well attended event with a very relevant agenda. My first point to make is that the SDN trend continues to gain interest and acceptance. If you’ve been following IETF activity over the last 12 – 18 months or so, you know that this trend started somewhat slowly but is gaining more traction at each IETF meeting. The future meeting schedule can be found here, and if you’d like a quick review of the IETF 86 event, you can go here.

 

The NVO3 WG and SDN RG meetings remain the most relevant to the SDN problem space. However, the L3VPN WG and, to a lesser extent, the I2RS & PCE WGs also have correlated activities worth following if SDN is of interest to you.

 

So, here are the brief updates, organized by WG/RG.

 

SDN RG

 

The SDN Research Group kicked off the Monday morning sessions. This RG is chaired by our Brocade SP CTO, Dave Meyer. Dave started the session with a thought-provoking presentation that set the bar early. The session included some other quite interesting presentations, such as an SDN-enabled IXP, a presentation on how NFV fits with other IETF SDN activities, and how I2RS and SDN are related (or not).

 

NVO3 WG

 

This was a *very* well attended session, as it was last time. With the advent of cloud computing and private/public/hybrid data-center clouds, this is where most of the activity is taking place in terms of defining the data-center virtualization problem space, its requirements and, eventually, the solution space.

 

The NVO3 architecture team provided an update on their activities. Brocade Principal Engineer, Jon Hudson, is part of this architecture team. This was quite an interactive session. They recommended promoting the current “as-is” VXLAN and NVGRE Internet-Drafts to Informational RFC status. This is primarily to document these technologies, since they are already implemented and deployed. In addition, other IETF activities refer back to these drafts, so publishing them as Informational RFCs gives that work something stable to reference.

 

The NVO3 reference architecture that is being developed will identify the key system components and describe how they fit together. This architecture phase will then drive the requirements definition work, and that will then feed into a gap analysis. Some key terms are being defined, such as NVE (Network Virtualization Edge) and NVA (Network Virtualization Authority). Some critical ‘on-the-wire’ protocols are being fleshed out, such as the NVE-to-NVA protocol and the inter-NVA protocol. Finally, there was consensus on the need for both a push and a pull model of control-plane distribution.
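
To make the push/pull distinction concrete, here is a minimal Python sketch of a toy NVA that can either push address mappings to registered NVEs or answer on-demand pull lookups. The class and method names are purely illustrative and are not taken from any NVO3 draft.

# Toy illustration of the two control-plane models discussed above: an NVA
# (Network Virtualization Authority) that either pushes address mappings to
# NVEs (Network Virtualization Edges) or answers on-demand pull requests.
# Names and data structures are invented for illustration only.

class ToyNVA:
    def __init__(self):
        self.mappings = {}       # tenant address -> underlay (NVE) address
        self.subscribers = []    # NVEs registered for push updates

    def register_nve(self, nve):
        self.subscribers.append(nve)

    def learn(self, tenant_addr, nve_addr):
        """Push model: a newly learned mapping is sent to every registered NVE."""
        self.mappings[tenant_addr] = nve_addr
        for nve in self.subscribers:
            nve.install(tenant_addr, nve_addr)

    def lookup(self, tenant_addr):
        """Pull model: an NVE asks only when it needs to forward a packet."""
        return self.mappings.get(tenant_addr)

class ToyNVE:
    def __init__(self, name):
        self.name = name
        self.cache = {}

    def install(self, tenant_addr, nve_addr):
        self.cache[tenant_addr] = nve_addr

nva = ToyNVA()
nve1 = ToyNVE("nve1")
nva.register_nve(nve1)
nva.learn("10.1.1.5", "192.0.2.10")   # pushed to nve1 immediately
print(nve1.cache)                     # {'10.1.1.5': '192.0.2.10'}
print(nva.lookup("10.1.1.5"))         # on-demand (pull) lookup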

 

L3VPN WG

 

The L3VPN sessions are always worth attending, and even more so recently, since this WG has become closely correlated with the NVO3 activities. Besides the usual discussions of MPLS-related internet drafts and technologies, this session included a very active discussion on how the activities in this WG remain aligned (or not) with the NVO3 activities. This discussion was very much needed, since the last few L3VPN WG sessions included some internet drafts that overlap in some sense with the NVO3 charter. Most of this discussion was around which problem areas should remain in the L3VPN WG and which problem areas should perhaps move to the NVO3 WG. As an example, there was a good amount of consensus that technologies “inside” the DC should belong in the NVO3 WG, while technologies around “inter-DC” solutions should perhaps remain in this WG. Well, this makes a large assumption that the inter-DC solutions and technologies are based on L3VPN solutions. L3VPN solutions could clearly be one answer to the inter-DC problem, but it’s not the only answer. That’s where it gets fuzzy.

 

I think one way to clarify this is that if the applicability statement in an ID includes intra-DC problems, then it should be part of the NVO3 WG. If protocol extensions to L3VPN solutions are needed for a particular solution, then clearly that work must be done in the L3VPN WG. But if an ID touches on inter-DC problems, then perhaps it needs to be presented in both WGs, at least until it gets further fleshed out. Clear as mud yet?

 

PCE WG

 

Another WG that continues to generate a fair amount of activity and interest is PCE. This is also an area of IETF work that is somewhat related to the SDN solution space. This WG is focused on how to enhance traffic-engineering decisions in MPLS networks.

 

I had two key takeaways from this session. One is that this WG has quickly moved from working on solutions that only “recommend” traffic-engineered LSPs to the network to now also including solutions that actually “instantiate” those LSPs in the network. In other words, the solutions being discussed here include both the centralized TE control-plane and the actual distributed data-plane. The other important takeaway is that there is more activity around providing PCE redundancy; that is, how to provide multiple PCE databases for the network and how to keep them synchronized. This is a hard problem to solve and in some sense can help flesh out the entire “logically centralized” notion in SDN.

 

The session ended with an interactive discussion that I thought was quite interesting; well actually, almost amusing. The topic was whether Auto-BW mechanisms should be pushed back into the network nodes so they can each dynamically adjust their LSP bandwidth. Recall that the primary goal of PCE is to logically centralize the traffic-engineering decisions. It’s all about having a central, holistic view of network traffic loads in order to fully optimize the entire network. So, this discussion was about whether it makes sense to then allow each network node to adjust its LSP bandwidth, using auto-BW mechanisms, in a distributed way. Doesn’t this sound counter to the goal of PCE? So, is it centralized or is it distributed? I hope you see why I thought this entire discussion was somewhat amusing.
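
To illustrate that tension, here is a small, purely illustrative Python sketch contrasting the two models: a centralized computation that sizes all the LSPs sharing a link from one global view, versus each head-end padding its own LSP from local measurements alone. The numbers and the proportional-scaling policy are invented for the example.

# Illustrative-only contrast between the two models discussed above.
# Centralized: size every LSP that shares a link using one global view.
# Distributed auto-BW: each head-end scales its own LSP from local
# measurements, with no knowledge of what the other LSPs are doing.

link_capacity = 10_000          # Mb/s on a shared link
measured_demand = {"lsp-a": 6_000, "lsp-b": 5_000, "lsp-c": 3_000}

def centralized_allocation(demands, capacity):
    """Global view: scale everything down proportionally if oversubscribed."""
    total = sum(demands.values())
    scale = min(1.0, capacity / total)
    return {lsp: round(d * scale) for lsp, d in demands.items()}

def distributed_auto_bw(demands, headroom=1.1):
    """Local view: each head-end pads its own measurement and reserves that."""
    return {lsp: round(d * headroom) for lsp, d in demands.items()}

print(centralized_allocation(measured_demand, link_capacity))
# fits within 10 Gb/s because one entity sees all three LSPs
print(distributed_auto_bw(measured_demand))
# sums to roughly 15.4 Gb/s of reservations -- each node optimized locally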

 

I2RS WG

 

I2RS was also very well represented and generated lots of interesting dialogue. I believe this was only the second time this WG met, so the activity here is very early in its definition. The architecture team started off discussing the high-level architecture and a policy framework. All this discussion is centered on the Routing Information Bases (RIBs) of a network node; for example, are multiple RIBs needed?  What mechanisms are needed to inject state into the RIB(s)? What mechanisms are needed to extract state from the RIB(s)?

 

The I2RS activity focuses on the southbound problem space; in other words, the interfaces and protocols needed from the controller or client down to the network node. It does not focus on the northbound interfaces or applications that live on top of the controller.
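
As a thought experiment on those RIB questions, here is a minimal Python sketch of a toy RIB that an external client can inject state into and extract state from. The API names are hypothetical and are not taken from any I2RS draft.

# Minimal sketch of the southbound interaction discussed above: an external
# client injecting state into, and extracting state from, a node's RIB.
# The method names are hypothetical, not from the I2RS drafts.

class ToyRIB:
    def __init__(self, name="default"):
        self.name = name
        self.routes = {}                       # prefix -> (next_hop, origin)

    def inject(self, prefix, next_hop, origin="i2rs-client"):
        """Client-installed state, tagged so it can be told apart from
        routes learned by the node's own routing protocols."""
        self.routes[prefix] = (next_hop, origin)

    def extract(self, origin=None):
        """Return the RIB, optionally filtered by who installed the route."""
        if origin is None:
            return dict(self.routes)
        return {p: v for p, v in self.routes.items() if v[1] == origin}

rib = ToyRIB()
rib.inject("203.0.113.0/24", "192.0.2.1")              # pushed in by a client
rib.routes["198.51.100.0/24"] = ("192.0.2.2", "ospf")  # learned by the node itself
print(rib.extract(origin="i2rs-client"))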

 

There was also a question about whether I2RS protocol extensions are being developed in private, outside the knowledge of this WG. There was a request to encourage people to share those discussions and potential experiments with the general WG to spur discussion.

 

An interesting I2RS service chaining use case was discussed that is being co-authored by Brocade Principal Architect, Ramki Krishnan.

 

FORCES WG

 

I dropped into the FORCES session to see how this WG is progressing. This WG has been around for many years but it never really gained much attention in terms of implementation or deployment. Now that SDN is here to stay, my sense is that this WG is trying to re-emerge to become relevant to that conversation.

 

To that end they have a new charter and some of the terminology being used in this WG is more aligned with the SDN problem space. For example, there are drafts that discuss Virtual Control Element (VCE) nodes and Virtual Forwarding Element (VFE) nodes.

 

OPSA WG

 

The OPSA WG continued the discussion around a draft presented by Brocade Principal Architect, Ramki Krishnan, on mechanisms for optimal LAG/ECMP link utilization. This capability is important not only in service provider networks but also in research and education networks. Researchers often generate very large IP flows, and the ability to properly load-balance these large flows across LAG bundles is becoming increasingly important. I also presented this draft, on behalf of Ramki, at the recent ESnet Site Coordinators Committee (ESCC) Meeting at the Lawrence Berkeley National Labs and it was very well received by this group.
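
To see why these very large flows are a problem for ordinary hash-based load balancing, here is a small illustrative Python sketch: every packet of a flow hashes to the same LAG member, so a single elephant flow stays pinned to one link no matter how many links are in the bundle. The flows and rates are made up for the example.

# Illustration of why a few very large flows defeat ordinary per-flow hashing
# on a LAG: all packets of a flow hash to the same member link, so one
# elephant flow can saturate one link while the others sit mostly idle.
import hashlib

def lag_member(flow, num_links):
    """Pick a LAG member from the flow 5-tuple, as a hash-based LAG would."""
    key = "|".join(str(f) for f in flow).encode()
    return int(hashlib.md5(key).hexdigest(), 16) % num_links

flows = {
    ("10.0.0.1", "10.0.1.1", 6, 5001, 443): 9_000,   # elephant flow, Mb/s
    ("10.0.0.2", "10.0.1.2", 6, 5002, 443): 200,
    ("10.0.0.3", "10.0.1.3", 6, 5003, 443): 150,
}

num_links = 4
load = [0] * num_links
for flow, rate in flows.items():
    load[lag_member(flow, num_links)] += rate

print(load)   # the 9 Gb/s flow lands entirely on a single member link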

 

Routing WGs

 

Regarding the various Routing WGs, here is a short recap.

 

An interesting presentation in the GROW WG was on a use case of using route reflectors for traffic steering. This idea is not new but it does provide an additional data point on how service providers desire enhanced capabilities to influence traffic patterns in their networks.

 

Similar to the last IETF, there was more discussion in the IDR WG about the ability to distribute link-state and TE information northbound, using extensions to BGP.  This type of capability would allow higher layer applications to make more intelligent traffic-engineering decisions. Kind of sounds SDN-like, doesn’t it?

 

So, that wraps up this short update on the IETF 87 SP related activities. Please let me know if you have any comments or questions.

Read more...

Enabling Service Providers to Tap New Revenue Streams: Expect Strong Demand for Server Load Balancing-as-a-Service

by Doug.Dunbar on ‎08-08-2013 07:08 AM - last edited on ‎10-28-2013 10:50 PM by bcm1 (1,308 Views)

As every service provider already knows, performance and reliability are always high priorities for enterprise IT. Add cloud computing to the mix, and their importance multiplies. In this week’s blog, I continue discussing the WaveLength Market Analytics study on enterprise infrastructure service needs. Specifically, I address the market opportunity for server load balancing services in terms of both size and demand, as well as service packaging and pricing.

1.png

Like the market for IPv6 translation services, the market for server load balancing (SLB)-as-a-service is very large. When asked about their top three IT management priorities, more than half of enterprise IT survey respondents chose improving user experience, service quality, and reliability as the top IT management priority, as the graph above shows. High performance and reliability requirements drive demand for server load balancing solutions, so if half of America’s 18,000 large organizations outsource just one application’s server load balancing, it’s a good-sized market.

 

Of course, server load balancing is not a new concept. Enterprises have been using the technology for more than a decade and, as the table below shows, they use a mix of server load balancing solutions. Most organizations use an internally managed solution, and about half already use a service provider for some type of server load balancing. Forty-five percent of both the medium and large enterprises outsource using dedicated load balancers from a service provider. Although less mature, even 38% of medium and 20% of large enterprises say they outsource to a service provider using shared load balancers. As public and hybrid cloud acceptance grows, this shared load balancing service is expected to grow along with them.

2.png

Where are enterprises on their willingness to buy a new variant, server load balancing-as-a-service? What value-added features would they be willing to buy, and what would they be willing to spend on them? Among the three new infrastructure services (IPv6 translation, storage area network extension, and server load balancing), server load balancing has the highest percentage of respondents who are willing to pay for services. At 86%, large-enterprise respondents are more likely to be willing to pay for server load-balancing services than for the other two infrastructure services discussed in my previous blogs, IPv6 translation and SAN extension.

3.png

As the table above shows, server load balancing services offer a significant opportunity for differentiated services or to sell additional value-added services. Nearly three-quarters of large enterprises are willing to pay for all SLB capabilities included in the study. A service provider can create a global load balancing service between two data centers as a stand-alone offering, and about 77% of large enterprises said they’d be willing to pay for it. A service provider can also earn extra revenue by packaging device redundancy for highly available SLB-as-a-service. This value-add can entice 45% of medium and 78% of large enterprises to part with some dollars.

 

So how much extra will enterprises likely pay for SLB value-added features? We asked how much extra, in the form of a premium over the base service fees for SLB-as-a-service, they would pay for three specific value-adds. On average, large enterprises are willing to pay an additional 27% for IPv6 migration support and for SSL offload, and an additional 31% for device redundancy. The more budget-constrained medium-sized enterprise segment reports they are willing to pay approximately an additional 25% for each of the three value-adds.
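
To put those percentages in concrete terms, here is a quick worked example in Python. The $1,000 base monthly fee is an assumption made purely for illustration; the study reported the premiums as percentages, not absolute prices.

# Worked example of the premiums reported above. The base monthly fee is an
# assumed figure for illustration only.
base_fee = 1000.0   # assumed monthly SLB-as-a-service base fee, USD

large_enterprise_premiums = {
    "IPv6 migration support": 0.27,
    "SSL offload": 0.27,
    "device redundancy": 0.31,
}

for feature, pct in large_enterprise_premiums.items():
    print(f"{feature}: +${base_fee * pct:.0f}/month")

fully_loaded = base_fee * (1 + sum(large_enterprise_premiums.values()))
print(f"All three value-adds: ${fully_loaded:.0f}/month")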

 

The enterprise IT imperative to improve user experience, service quality and reliability, along with increasing cloud apps, creates a large and growing service provider opportunity. With its many value-added features, SLB-as-a-service is potentially lucrative; it is easily differentiated, and average revenue per user (ARPU) increases with each a la carte value-added feature. Service providers should invest in their SLB-as-a-service portfolio today to take those new SLB revenue streams to the bank.


Layer 3 Routing Over MCT

by Greg.Hankins on ‎07-29-2013 09:00 AM - last edited on ‎10-28-2013 10:50 PM by bcm1 (3,520 Views)

Multi-Chassis Trunking (MCT) is a key Brocade technology that helps network operators build scalable and resilient networks, and we are continuing to add more enhancements to MCT that provide advanced redundancy. MCT-aware routers appear as a single logical router that is part of a single link aggregation trunk interface to connected devices. While standard LAG provides link- and module-level protection, MCT adds node-level protection and provides sub-200 ms link failover times. It works with existing devices that connect to MCT-aware routers and does not require any changes to the existing infrastructure. Pete Moyer wrote about the multicast over MCT features we added in NetIron software release 05.4.00 in his blog earlier this year, and we recently released version 05.5.00 with support for Layer 3 dynamic routing over MCT, which is what I want to write about today. Together these two enhancements give network operators the ability to deploy MCT Layer 3 active-active or active-passive redundancy at the network edge or border for IP unicast and multicast.

 

Layer 3 Routing Over MCT Highlights

  • Enables Layer 3 active-active or active-passive redundancy.
  • Supports RIP, OSPF, IS-IS, and BGP for IPv4 and IPv6 routing protocols (static routes are already supported in previous releases).
  • Supports active or passive interfaces for OSPF and IS-IS on MCT Cluster Client Edge Ports (CCEPs).
  • Provides the flexibility to run different Layer 3 protocols on MCT CCEPs and MCT Cluster Edge Port (CEP), for example OSPF on CCEPs and BGP on the CEP.
  • Provides rapid failover if one of the CCEPs or MCT routers fails; Layer 3 traffic will still be forwarded via the remaining MCT router.

 

Here’s how it works in a nutshell, with a quick OSPF example. The CCEP and CEP in the diagram are on different Layer 3 networks and the routers run OSPF as the IGP. The MCT CCEPs can be configured as active in OSPF so that they establish an OSPF adjacency with the connected device. Or they can be configured as passive in the IGP so that the interface is advertised via OSPF and a static route is configured on the connected device.

4.png

Layer 3 traffic that is sent to the network connected to the CEP is load-balanced over the two MCT routers by the connected device, assuming an active-active configuration. If one of the CCEPs or MCT routers fails, Layer 3 traffic will still be forwarded over the MCT Inter-Chassis Link (ICL) to the remaining MCT router.  I left out some details for simplicity to illustrate the functionality; obviously you’d want redundant connections via CEPs on each MCT router in order to provide full redundancy.

 

For more information on Layer 3 routing with MCT, refer to the “MCT L3 Protocols” section in the MCT chapter in the Multi-Service Switching Configuration Guide.  If you need more information on MCT then we have two technical white papers that you can read.  “Multi-Chassis Trunking for Resilient and High-Performance Network Architectures” provides an overview of the technology, and “Implementing Multi-Chassis Trunking on Brocade NetIron Platforms” has design and deployment details.


NetIron sFlow Enhancements

by pmoyer on ‎06-18-2013 06:10 AM - last edited on ‎10-28-2013 10:51 PM by bcm1 (1,410 Views)

sFlow is a very interesting technology that often gets overlooked in terms of network management, operations and performance. That’s a shame, as it can be a very powerful tool in the network operator’s toolkit. In this brief blog, I hope to shed some light on the appealing advantages of sFlow. To start with: if you are a network operator and you are not gathering network statistics from sFlow, I hope you will carefully read this blog!

 

While the title of this blog says enhancements to sFlow, I’d like to focus a good portion of this piece on sFlow itself and explain why it’s not just useful, but why it should be considered a necessary component of any overall network architecture. I’ll also point out some differences between sFlow, NetFlow and IPFIX (since I frequently get asked about these when I talk about sFlow with customers).

 

sFlow Overview

 

sFlow was originally developed by InMon and has been published as Informational RFC 3176. In a nutshell, sFlow is the leading multi-vendor standard for monitoring high-speed switched and routed networks. Additional information can be found at sFlow.org.

 

sFlow relies on sampling, which enables it to scale to the highest-speed interfaces, such as 100 GbE. It provides very powerful statistics, and this data can be aggregated into very informative graphs. Here is a pretty cool animation describing sFlow in operation. sFlow provides enhanced network visibility and traffic analysis, can contribute relevant data to an overall network security solution, and can be used for SLA verification, accounting and billing purposes. sFlow has been implemented in network switches and routers for many years and is now often implemented in end hosts.
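
To show how sampling keeps pace with high-speed interfaces, here is a small Python sketch of the basic scaling step a collector performs: multiply what was sampled by the sampling rate to estimate the actual traffic. The sampling rate and counts below are illustrative only.

# How packet sampling scales: the collector multiplies what it sampled by the
# configured sampling rate to estimate the real traffic volume.
sampling_rate = 8192            # 1 out of every 8192 packets is sampled
interval_seconds = 60

sampled_packets = 4500          # samples received during the interval
sampled_bytes = 3_600_000       # total bytes across those sampled packets

est_packets = sampled_packets * sampling_rate
est_bytes = sampled_bytes * sampling_rate
est_bps = est_bytes * 8 / interval_seconds

print(f"Estimated packets: {est_packets:,}")
print(f"Estimated rate: {est_bps / 1e9:.2f} Gb/s")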

 

Here are some publicly available sFlow generated graphs from AMS-IX (the sFlow samples are taken from Brocade NetIron routers).

 

Here is another simple output from sFlow, showing the top talkers in a specific IP subnet.

 

5.jpg

 

Short Comparison of sFlow, Netflow and IPFIX

 

While sFlow was explicitly invented as an open, standards-based protocol for network monitoring, Netflow was originally developed to accelerate IP routing functionality in Cisco routers (it remains proprietary to Cisco). The technology was subsequently modified to support network monitoring functions instead of providing accelerated IP routing; however, it can exhibit performance problems on high-speed interfaces. Furthermore, sFlow can provide visibility and network statistics from L2 – L7 of the network stack, while Netflow is predominantly used for L3 – L4 (there is now limited L2 support in Netflow, but there is still no MPLS support).

 

Another key difference between the two protocols is that sFlow is a packet sampling technology, while Netflow attempts to capture entire flows. Attempting to capture an entire flow often leads to performance problems on high-speed interfaces, meaning interfaces of 10 GbE and beyond.

 

IPFIX is an IETF standards-based protocol for extracting IP flow information from routers. It was derived from Netflow (specifically, Version 9 of Netflow). IPFIX is standardized in RFCs 5101, 5102, and 5103. As its name correctly implies, IPFIX remains specific to L3 of the network stack. It is not as widely implemented in networking gear as sFlow is.

 

sFlow and OpenFlow?

 

There is some recent activity around integrating sFlow with OpenFlow to provide some unique “performance aware” SDN applications. For example, take a look at this diagram:

 

6.jpg

 

[Diagram referenced from here.]

 

In this example, sFlow is used to provide the real-time network performance characteristics to the SDN application running on top of an OpenFlow controller, and OpenFlow is used to re-program the forwarding paths to more efficiently utilize the available infrastructure. Pretty slick, huh? This example uses sFlow-RT, a real-time analytics engine, in place of a normal sFlow collector.
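
Here is a rough Python skeleton of that feedback loop, just to make the flow of information explicit. The two helper functions are hypothetical stand-ins: sFlow-RT exposes its analytics over a REST API and an OpenFlow controller exposes a flow-programming interface, but the exact endpoints and calls depend on the deployment, so nothing below should be read as a real API.

# Skeleton of the measure-and-reprogram loop described above. get_top_flows()
# and reroute_flow() are hypothetical placeholders for the sFlow-RT query and
# the OpenFlow controller call, respectively.
import time

THRESHOLD_BPS = 5e9     # treat anything over 5 Gb/s as a large flow

def get_top_flows():
    """Placeholder: query the analytics engine for the current top flows."""
    return [{"match": {"src": "10.0.0.1", "dst": "10.0.1.1"}, "bps": 7e9}]

def reroute_flow(match, new_path):
    """Placeholder: ask the controller to program a higher-priority rule
    for this flow along new_path."""
    print(f"Programming {match} onto {new_path}")

def control_loop():
    while True:
        for flow in get_top_flows():
            if flow["bps"] > THRESHOLD_BPS:
                reroute_flow(flow["match"], new_path=["sw1", "sw3", "sw5"])
        time.sleep(10)

# control_loop()   # uncomment to run the loop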

 

NetIron sFlow Implementation Enhancements

 

Brocade devices have been implementing sFlow in hardware for many years. This hardware-based implementation provides key advantages in terms of performance. The sampling rate is configurable, and sFlow provides packet header information for ingress and egress interfaces. sFlow can provide visibility in the default VRF and non-default VRFs. NetIron devices support sFlow v5, which replaces the version outlined in RFC 3176.

 

In addition to the standard rate-based sampling capability, NetIron devices are capable of using an IPv4 or IPv6 ACL to select which traffic is to be sampled and sent to the sFlow collector. This capability provides more of a flow-based sampling option, rather than just sampling packets based on a specified rate. In addition to sampling L2 and L3 information, sFlow can be configured to sample VPN endpoint interfaces to provide MPLS visibility. Neither Netflow nor IPFIX can provide this type of visibility.

 

One of the new enhancements to the NetIron sFlow implementation is the ability to provide Null0 interface sampling. Service providers often use the Null0 interface to drop packets during Denial of Service (DoS) attacks. sFlow can now be configured to sample those dropped packets to provide visibility into the DoS attack. This feature is in NetIron Software Release 5.5.

 

The other new enhancement that I’d like to mention is the ability to now capture the MPLS tunnel name/ID when sampling on ingress interfaces. This feature is coming very soon and will provide additional visibility into MPLS-based networks.

 

In summary, I hope you gained some additional insight into the advantages of leveraging the network visibility that sFlow provides. One last thing I’d like to correlate with sFlow is Network Analytics. These are complementary technologies that can coexist in the same network while performing different functions. Brocade continues to innovate in both of these areas and I welcome any questions or comments you may have on sFlow or Network Analytics.


BGP as a Data Center IGP

by Greg.Hankins on ‎06-10-2013 11:10 AM - last edited on ‎10-28-2013 10:51 PM by bcm1 (4,718 Views)

As data center networks scale to support thousands of servers running a variety of different services, a new network architecture using the Border Gateway Protocol (BGP) as a data center routing protocol is gaining popularity among cloud service providers.  BGP has traditionally been thought of as only usable as the protocol for large-scale Internet routing, but it can also be used as an IGP between data center network layers.  The concept is pretty simple and has a number of advantages over using an IGP such as OSPF or IS-IS.

 

Large-Scale BGP is Simpler than Large-Scale IGP

 

While BGP in itself may take some heavy learning to fully grok, BGP as a data center IGP uses basic BGP functionality without the complexity of full-scale Internet routing and traffic engineering.  BGP is especially suited for building really big hierarchical autonomous networks, such as the Internet.  So, introducing hierarchy with EBGP and private ASNs into data center aggregation and access layers down to the top of rack behaves just like you would expect. We’re not talking about carrying full Internet routes down to the top of rack here, just IGP-scale routes, so even lightweight BGP implementations that run on 1RU top of rack routers will just work fine in this application. 

 

The hierarchy and aggregation abilities of an IGP are certainly quite extensive, but each OSPF area type, for example, introduces different behaviors between routers and areas and changes how different LSA types are propagated. There’s a lot of complexity to consider when designing large-scale IGP hierarchy, and a lot of information that is flooded and computed when the topology changes. The other advantages of BGP are its traffic engineering and troubleshooting abilities. With BGP you know exactly what prefix is sent and received to each peer, what path attributes are sent and received, and you even have the ability to modify path attributes. Using AS paths you can tell precisely where a prefix originated and how it propagated, which can be invaluable in troubleshooting routing problems.

 

How it Works

 

What you basically do is divide the network into modular building blocks made up of top of rack access routers, aggregation routers, and data center core routers.  Each component uses its own private ASN, with EBGP peering between blocks to distribute routing information.  The top of rack component doesn’t necessarily need to be a single rack; it could certainly be a set of racks and a BGP router.

7.png
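
To make the building-block idea concrete, here is a small illustrative Python sketch that assigns each aggregation block and top-of-rack block its own private ASN and lists the EBGP sessions between adjacent layers. The ASN numbering scheme is an arbitrary example, not a recommendation.

# Illustrative sketch of the modular design described above: every
# aggregation block and top-of-rack (ToR) block gets its own private ASN
# (64512-65534), and EBGP sessions are built between adjacent layers.
CORE_ASN = 64512
AGG_BASE = 64600
TOR_BASE = 65000

def build_plan(num_pods, racks_per_pod):
    sessions = []
    for pod in range(num_pods):
        agg_asn = AGG_BASE + pod
        sessions.append(("core", CORE_ASN, f"agg-{pod}", agg_asn))
        for rack in range(racks_per_pod):
            tor_asn = TOR_BASE + pod * racks_per_pod + rack
            sessions.append((f"agg-{pod}", agg_asn, f"tor-{pod}-{rack}", tor_asn))
    return sessions

for a_name, a_asn, b_name, b_asn in build_plan(num_pods=2, racks_per_pod=3):
    print(f"EBGP: {a_name} (AS{a_asn}) <-> {b_name} (AS{b_asn})")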

Petr Lapukhov of Microsoft gave a great overview of the concept at a NANOG conference recently in a presentation called “Building Scalable Data Centers: BGP is the Better IGP”, which goes into a lot more background on their design goals and implementation details.  If you’d like to experiment with the network design as Petr describes, the commands for the BGP features on slide 23 for the Brocade NetIron software are:

  • AS_PATH multipath relax: multipath multi-as (router bgp)
  • Allow AS in: no enforce-first-as (router bgp or neighbor)
  • Fast EBGP fallover: fast-external-fallover (router bgp)
  • Remove private AS: remove-private-as (router bgp or neighbor)

 

Taking it a Step Further

 

An alternative that takes the design even further from top of rack down into the virtual server layer for high-density multitenant applications is to also use the Brocade Vyatta vRouter.  In this design, EBGP would be run from the data center core at each layer to a virtual server that routes for a set of servers in the rack.  This addition gives customers a lot of flexibility in controlling their own routing, for example, if they wanted to announce their own IP address blocks to their hosting provider as part of their public cloud. Customers could also use some of the other vRouter VPN and firewall features to control access into their private cloud.

 

In addition to using BGP to manage routing information, you can also build an OpenFlow overlay to add application-level PBR to the network. Using the Brocade hybrid port feature, which enables routers to forward using both OpenFlow rules and Layer 3 routing on the same port, introducing SDN into this network as an overlay is easy. In fact, this is exactly what Internet2 is doing in production on their AL2S (Advanced Layer 2 Services) network to enable dynamically provisioned Layer 2 circuits.

 

So is BGP better as a data center IGP? I think the design lends itself especially well to building modular data center networks with independent and autonomous components that can be built all the way down to the virtual server level. Perhaps you even have different organizations running their own pieces of the network, or servers that you’d rather not invite into your OSPF or IS-IS IGP.

 

For more information on Brocade’s high density 10 GbE, 40 GbE and 100 GbE routing solutions, please visit the Brocade MLX Series product page.

The Physical: 40 GbE at the Core of the On-Demand Data Center

by mschiff on ‎04-30-2013 09:30 AM - last edited on ‎10-28-2013 10:52 PM by bcm1 (1,223 Views)

Today, Brocade announced its strategy to bridge the physical and virtual worlds of networking to enable customers to build an “On-Demand Data Center”. For service providers, an On-Demand Data Center means getting closer to becoming the greatly sought-after cloud provider by increasing business agility, reducing complexity and scaling virtualization. In this blog I will focus on the announcement of the new 40 GbE interface module we have added to the Brocade MLX Series to enhance the physical aspects of the data center core that are required as the foundation for the On-Demand Data Center.

 

In the core of the service provider data center, network operators need to be able to respond in real time to dynamic business needs by delivering applications and services on demand. At the same time, they must contain costs through more efficient resource utilization and simpler infrastructure design. Traditional network topologies and solutions are not designed to support increasingly virtualized environments.

With the Brocade MLX 4-port 40 GbE module, in conjunction with Brocade VCS Fabric technology, you can scale the data center fabric and extend across the Layer 3 boundary between data centers. High 40 GbE density with advanced Layer 3 capabilities helps consolidate the devices and links needed in the data center core. Large Link Aggregation Group (LAG) capabilities provide capacity on demand and reduce management overhead. By consolidating devices and simplifying the network, customers can reduce capital and operational expenditures in terms of power, space, and management savings, minimizing TCO. In addition to massive scalability from the 40 GbE density, the rich feature set of the Brocade MLX 4-port 40 GbE module eliminates the need for additional edge routers by enabling Layer 3 data center interconnect with full-featured support for Access Control Lists (ACLs), routing, and forwarding in the data center core.

 

Prior to 2012, optical equipment dominated the 40 GbE market. 40 GbE is now taking off on Ethernet routers and switches, principally in data centers because it helps to bridge the bandwidth and economics gap between 10 GbE and 100 GbE for customers. The market for 40 GbE in high-end routing applications is expected to ramp up quickly, with CAGR from 2013 to 2016 expected to be 125% with a total market size in 2016 of $239M (Source: Dell’Oro, 2012). Similar to 10 GbE, business drivers will be the growth of bandwidth-intensive applications:

  • Virtualization
  • High-performance computing
  • Business continuity
  • Video on demand and video surveillance
  • iSCSI, FCoE, and NAS storage
  • Social networking
  • VoIP

 

8.png

 

The image shows a primary deployment model for the 40 GbE module in the core of the data center. The high-density, wire-speed performance enables 40 GbE connections to the aggregation layer – in this case the Brocade VDX 8770 supporting the VCS Fabric. Also supporting advanced MCT, this new module enables data center cores to scale in a highly resilient and efficient manner.

 

The MLXe also serves as an ideal border router to interconnect the data center to the WAN – or to other data centers. Here, 40 GbE or 100 GbE is typically used; the new 40 GbE module is a common choice, especially where the underlying WAN optical infrastructure does not yet support 100G.

There has been a lot of recent discussion about Google and AT&T planning to provide the city of Austin, TX with 1-gigabit-per-second Internet service. While the competitive and innovative spirit should make Austin feel like one of the luckiest towns in the world, I would like to tell you about a metro service provider in Clarksville, Tennessee that already provides its residential and commercial customers with 1 Gigabit Ethernet services to the premise.

 

CDE Lightband is the leading municipal utility provider of electricity, digital television, Internet and voice to all of the 100 square miles located within the boundaries of Clarksville, TN. They offer their services to more than 64,000 customers and maintain 892 miles of power lines and 960 miles of fiber optic cable. Most distinctively, CDE Lightband provides a true Active Ethernet network to its customers. This means that each and every one of their residential and commercial customers has their own active Ethernet, Fiber-to-the-Premise port. The value of an Active Ethernet network is that the bandwidth on the connection is not shared, which makes it an effective way of ensuring a 1-Gbps connection to each subscriber. It is certainly a feather in the cap for a service provider of any size.

 

Brocade is proud to support CDE Lightband’s Active Ethernet project. By using the Brocade NetIron CES series switches, CDE Lightband can sell Gigabit Internet service and deliver the bandwidth to back it up. In the future, CDE Lightband plans to use the 10G ports on the Brocade CES so the switches can grow with the network as they expand their internal infrastructure.

 

Like all service providers, CDE Lightband’s top priority is to provide world-class performance and reliability. With the Brocade CES series switches, CDE Lightband is able to offer its customers unique and powerful Ethernet services (as exemplified by its Active Ethernet project) and deliver them on pace with its customers’ business and personal requirements. Brocade is very honored to be the backbone of CDE Lightband’s network!

 

To learn more about the Brocade and CDE Lightband partnership, please watch this video.

IETF 86 Recap of SP Related Activities

by pmoyer on ‎04-04-2013 03:19 PM (724 Views)

I recently returned from IETF 86 and would like to update the folks in this community with a brief synopsis of the event. Overall, it was a very well attended, interactive and relevant event!

Read more...
