A Perspective on Router Architecture Challenges – Part 1: Memory

by Greg.Hankins on 03-09-2012 02:37 PM

In this two-part blog I’ll be expanding on my conference presentations that I’ve given at NANOG53 and APRICOT 2012 to introduce some of the router architecture challenges that we’re facing from the perspective of a router vendor.  This first part will give you an idea of some of the issues we’re solving with lookup and buffer memory architectures, and the second part will go over some of the things we’re doing with ASIC technology.

As we design next-generation routers and line cards, we constantly have to make design choices and tradeoffs that are primarily driven by a couple of factors: lookup capacity and forwarding complexity vs. port density.  As the old saying goes: solutions are good, fast, or cheap – pick any two.  We can build very complex routers that have low port density, or very simple routers that have high port density.  The challenge is finding the right balance between the design factors that meets our customers’ needs and most importantly, meets everyone’s cost constraints.


On the lookup capacity and forwarding complexity axis we have many requirements:
•    Complex multiprotocol forwarding for L2, IPv4, IPv6, MPLS, unicast, multicast, etc.  There are a lot of protocols in use today, and they all need to be supported in hardware.
•    Growing IPv4/IPv6 Internet and VPN routing tables require more hardware table space.  The IPv4 address depletion, IPv4 deaggregation uncertainty, and growing IPv6 adoption make future table growth and sizing requirements hard to predict.  We constantly ask ourselves: how much hardware table space do we actually need to design in order to give our customers 7 to 10 years of usable line cards (at a cost that makes sense)?
On the port density axis we have one requirement that sounds simple, but is increasingly challenging to deliver:
•    More!  High-density 100 GbE (and 10 GbE) line cards require really fast packet processing.  A 100 GbE port receiving line-rate 64 byte packets has to process a new packet about once every 6.72 ns – yes that is nanoseconds – on every port.  And we’d really like to put more than a couple 100 GbE ports on a card to give customers the density they need.
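The 6.72 ns figure above comes straight from the Ethernet framing arithmetic: every minimum-size frame on the wire also carries 20 bytes of overhead (8-byte preamble plus 12-byte inter-frame gap). A quick back-of-the-envelope check:

```python
# Packet budget at 100 GbE line rate with minimum-size frames.
LINE_RATE_BPS = 100e9      # 100 GbE
FRAME_BYTES = 64           # minimum Ethernet frame
OVERHEAD_BYTES = 8 + 12    # preamble + inter-frame gap

bits_per_frame = (FRAME_BYTES + OVERHEAD_BYTES) * 8   # 672 bits on the wire
pps = LINE_RATE_BPS / bits_per_frame                  # ~148.8 Mpps
ns_per_packet = 1e9 / pps                             # ~6.72 ns

print(f"{pps / 1e6:.1f} Mpps, one packet every {ns_per_packet:.2f} ns")
```

That works out to roughly 148.8 Mpps per port, and the budget shrinks proportionally with every additional port you put on the same packet processor.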

So as you can see, it’s a careful balance between system design and the available component technology vs. lookup capacity and forwarding complexity, and port density.  All this of course is driven by cost.  When we try to do too much and throw in the kitchen sink, that throws off the whole balance and we have to make some careful design choices.

Before we talk about memory in more detail, let’s look at a typical router forwarding architecture.  Most routers on the market today use a similar basic architecture that looks something like the diagram below.  The system RAM used on the management and line cards is really not much of a challenge, as it’s just commodity DRAM and we have plenty of choices on the market today.  We’re going to focus on the challenging technology instead: the packet lookup and buffer memory systems.  Packet lookup memory holds everything the router needs to look up in order to forward a packet.  This memory system needs to deliver on the order of 150 Mpps of lookups per 100 GbE port.  Buffer memory is where the router buffers packets while forwarding them through the router.  Typically there is buffer memory on the ingress and egress line cards, and a 100 GbE port needs at least 200 Gbps of transmit+receive throughput from the buffer memory system per port.

The real challenge we have in designing high-speed memory systems is that packet rates have greatly exceeded memory random read rates (for all of you hardware enthusiasts, that’s the row cycle time tRC).  We really need 1 ns random read rates for lookup memory, and 1 ns random read and write rates for buffer memory – yesterday!  Dynamic memory technology characteristics also impose significant constraints on lookup and buffering architectures.  Yes, RAM does stand for Random Access Memory, but that doesn’t mean you can access the memory over and over again at the same address in consecutive clock cycles; there are inherent non-random access properties and read/write restrictions that we have to deal with.  These apply to both on-chip and off-chip RAM solutions, as this is simply how RAM works.  The chart below gives you a ballpark idea of the random read rates of the memory technology that is available on the market today.
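To make the gap concrete, here is a rough sketch of how many memory banks you would need to interleave to hide the row cycle time.  The tRC and lookups-per-packet numbers are illustrative assumptions, not figures for any specific part:

```python
import math

# Illustrative numbers: a commodity DRAM row cycle time of ~50 ns
# against the ~6.72 ns per-packet budget at 100 GbE line rate.
T_RC_NS = 50.0           # assumed DRAM random-access cycle time
PACKET_NS = 6.72         # per-packet budget at 100 GbE, 64-byte frames
LOOKUPS_PER_PACKET = 4   # assumed: e.g. L2 + IP + ACL + QoS

# If consecutive accesses can be steered to different banks, interleaving
# N independent banks hides tRC behind parallelism.
banks_needed = math.ceil(T_RC_NS * LOOKUPS_PER_PACKET / PACKET_NS)
print(f"~{banks_needed} parallel banks to sustain line rate")  # ~30 banks
```

Even with generous assumptions, a single DRAM bank is an order of magnitude too slow, which is why banking and parallelism show up in every high-speed lookup design.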


Our memory requirements for high-performance routers are pretty simple: fast and big (and cheap – pick any two, right?).  We need fast memory because the router has to look up everything needed for packet forwarding in hardware to forward at line-rate with all features enabled.  Remember, for 100 GbE that’s 150 Mpps or one packet every 6.72 ns, and we would like to put multiple 100 GbE ports on a network processor to get higher 100 GbE port density.  Now, add the fact that multiple lookups are needed per packet.  Often L2, IP, MPLS, security, and QoS forwarding information must all be checked for every packet, and these lookups all hit different tables.  All these requirements add up.
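A toy model of that per-packet lookup sequence makes the multiplication effect obvious.  The table names, ordering, and fields here are purely illustrative, not any vendor’s actual pipeline:

```python
# Each stage below is a separate lookup into a separate hardware table,
# so a single packet consumes several memory accesses, not one.
def forward(packet, tables):
    l2_entry = tables["mac"].get(packet["dst_mac"])          # L2 lookup
    route = tables["fib"].get(packet["dst_ip"])              # IP FIB lookup
    acl_action = tables["acl"].get(
        (packet["src_ip"], packet["dst_ip"]))                # security ACL
    qos_class = tables["qos"].get(packet["dscp"],
                                  "best-effort")             # QoS policy
    if acl_action == "deny":
        return None                                          # packet dropped
    return {"egress_port": route, "qos": qos_class}
```

Four tables per packet means the memory system must sustain four times the raw packet rate in lookups, before you even account for multi-access data structures like tries.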

We also need big memory, for both the lookup and buffer memory systems.  Multiprotocol forwarding requires a lot of table storage for each protocol, for example:
•    MAC address table (unicast and multicast, VPN)
•    IPv4 FIB (unicast and multicast, VPN)
•    IPv6 FIB (unicast and multicast, VPN)
•    VLAN tags, MPLS labels
•    ACLs (L2, IPv4, IPv6, ingress and egress)
•    QoS policies (PHB, rewrite, rate limiting/shaping)
Buffering at 100 GbE rates requires the buffer memory system to support multiple 100 Gbps of sustained throughput.  1 GB of buffer is only 80 ms at 100 GbE rates, which means we have to design much larger buffering memories than we did for 10 GbE.
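The buffer sizing arithmetic behind the 80 ms figure is just the delay-bandwidth product, and it is worth writing down because it scales linearly with line rate:

```python
def buffer_bytes(rate_bps, ms):
    """Bytes of buffer needed to hold `ms` milliseconds of traffic at rate_bps."""
    return rate_bps / 8 * ms / 1000

def buffer_ms(rate_bps, size_bytes):
    """Milliseconds of traffic a buffer of `size_bytes` holds at rate_bps."""
    return size_bytes / (rate_bps / 8) * 1000

print(f"1 GB at 100 GbE = {buffer_ms(100e9, 1e9):.0f} ms")          # 80 ms
print(f"100 ms at 100 GbE = {buffer_bytes(100e9, 100) / 1e9:.2f} GB")  # 1.25 GB
```

The same 1 GB that bought 800 ms of buffering at 10 GbE buys only 80 ms at 100 GbE, and every additional port on the card multiplies the requirement again.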

Unfortunately the kind of big and fast memory that we need doesn’t exist, so we have to build our own custom memory systems using the component technology that is actually available to us.  Fortunately we do have a variety of memory choices such as TCAM, SRAM and various flavors of DRAM that we can use as building blocks.  I’ll add a table at the end for your reference, if you want to read all the details.  Each memory technology has benefits and tradeoffs, and faster memory has less capacity and costs a whole lot more.

The solutions we’re considering are all highly proprietary, confidential and specialized.  I can give you a general idea of some of the things we could do though.  Next-generation lookup memory could use a divide and conquer parallel architecture that provides a deterministic search using a combination of SRAM/DRAM.  Using a large number of banks, combined with proprietary lookup algorithms, allows parallel searching in reasonable time.  Future memory systems could also integrate lookup memory into packet processing ASICs – we’ll talk more about this concept in part two.  Combining embedded memories for higher performance and reasonable density will get us much closer to the single digit ns access times that we need.  The other thing we have to consider in designing memory systems is that the component technology must be available for 5+ years at a minimum.  Many specialized memory technologies, high-performance graphics memory used on video cards for example, have a window of production that is just too small for us to use in a router.
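To give a flavor of the divide-and-conquer idea without revealing anything proprietary, here is a highly simplified sketch: partition the table across N memory banks by a hash of the key, so N independent lookups can be in flight at once.  The bank count and hash-based partitioning are illustrative assumptions; real designs use proprietary algorithms and data structures:

```python
NUM_BANKS = 8  # illustrative; real designs use many more banks

def bank_for(key):
    """Steer each key to one bank; different keys spread across banks."""
    return hash(key) % NUM_BANKS

class BankedTable:
    def __init__(self):
        self.banks = [dict() for _ in range(NUM_BANKS)]

    def insert(self, key, value):
        self.banks[bank_for(key)][key] = value

    def lookup(self, key):
        # In hardware, lookups that land in different banks proceed
        # concurrently, hiding the per-bank access latency.
        return self.banks[bank_for(key)].get(key)
```

With enough banks and a partitioning scheme that keeps concurrent accesses on different banks, the aggregate lookup rate approaches the per-bank rate times the bank count, which is what makes deterministic line-rate search feasible with slower commodity memory.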

For buffer memory, the DRAM read and write times are the limiting factor in guaranteeing buffering performance.  We do expect commodity off-chip DDR4 DRAM that will give us higher memory throughput to be available soon, but ultimately we may need to design a custom buffer memory chip.  Embedded on-chip memory limits buffer sizes but offers higher performance, and it may be economically and technically feasible to design a custom buffer memory chip that uses proprietary buffer memory management techniques.

Using custom memory solutions means that it's likely we'll see many more specialized ASICs in future high-performance router architectures, and I'm glad that we have a talented ASIC team that can build complicated memory systems.

To summarize part one: we took a look at the lookup and buffering memory design challenges we’re facing to build high-density 100 GbE line cards, and some of the possible solutions.  In the second part of this blog we’ll look at some cool ASIC technology solutions that we expect on the market in the future, and how we can put it all together to make a better, faster and cheaper router.

For more information on Brocade’s high density 100 GbE solutions, please visit the Brocade MLX Series product page.

Appendix: Lookup and Buffering Memory Technology Overview


