Fibre Channel (SAN)

Oversubscription

by TechHelp24 on 11-05-2009 09:41 PM

Notice: The information in this Contribution is provided “AS IS,” without warranty of any kind.

The author TechHelp24 reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use.

The author TechHelp24 shall have no liability or responsibility to any person or entity with respect to any loss, cost, liability, or damages arising from the information contained in this contribution.

What is over-subscription?

Subscription in a director is the ratio of potential port bandwidth to available backplane slot bandwidth. Over-subscription takes place when port traffic exceeds slot bandwidth.

Depending on the director architecture, not every port can deliver full director-class performance. Fibre Channel director vendors generally offer both fully-subscribed and over-subscribed 4 Gbit/sec director port modules. However, there are distinctly different methods for managing that over-subscription. Not all director ports are created equal.
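To make the definition concrete, here is a minimal Python sketch of the subscription ratio; the example figures are taken from the table below and are only illustrative.

    def subscription_ratio(port_count, port_speed_gbps, slot_bandwidth_gbps):
        """Ratio of potential port bandwidth to available backplane slot bandwidth.
        A result above 1.0 means the slot is over-subscribed."""
        return (port_count * port_speed_gbps) / slot_bandwidth_gbps

    # Example figures (see the table below): a 48-port 4 Gbit/sec blade behind a
    # 64 Gbit/sec slot is 3:1 over-subscribed, while a 16-port 4 Gbit/sec blade
    # behind the same slot is fully subscribed.
    print(subscription_ratio(48, 4, 64))   # 3.0
    print(subscription_ratio(16, 4, 64))   # 1.0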

Function                     | Brocade DCX                | Brocade 48000                                     | Brocade Mi10000               | Brocade M6140               | Cisco MDS 9513
-----------------------------+----------------------------+---------------------------------------------------+-------------------------------+-----------------------------+--------------------------------
Local Switching              | Yes, 128 - 384 Gbit/sec    | Yes, 128 - 192 Gbit/sec                           | No                            | No                          | No
Local bandwidth per blade    | 32 - 48 x 8 Gbit/sec ports | 32 - 48 x 4 Gbit/sec ports; 16 x 8 Gbit/sec ports | 0                             | 0                           | 0
Backplane bandwidth per slot | 256 Gbit/sec               | 64 Gbit/sec (16 x 4 Gbit/sec)                     | 64 Gbit/sec (16 x 4 Gbit/sec) | 8 Gbit/sec (2 x 4 Gbit/sec) | 51.2 Gbit/sec (12 x 4 Gbit/sec)

How does Brocade compare?

The Brocade DCX has 256 Gbit/sec of slot bandwidth between each blade and the core switching blades. This 256 Gbit/sec path allows thirty-two (32) 8 Gbit/sec ports on each Brocade DCX blade to communicate simultaneously across the backplane with no contention. It also allows forty-eight (48) 4 Gbit/sec ports to do so (in fact, that configuration is under-subscribed at 192 Gbit/sec of traffic). A mix of 32 x 4 Gbit/sec ports and 16 x 8 Gbit/sec ports on a DCX blade can likewise be supported with no congestion.
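As a quick sanity check of that arithmetic, the sketch below (port mixes taken from the paragraph above) confirms that each configuration fits within the 256 Gbit/sec slot bandwidth:

    SLOT_BW = 256  # Gbit/sec per DCX slot, as stated above

    port_mixes = {
        "32 x 8 Gbit/sec": 32 * 8,                             # 256 -> fully subscribed
        "48 x 4 Gbit/sec": 48 * 4,                             # 192 -> under-subscribed
        "32 x 4 Gbit/sec + 16 x 8 Gbit/sec": 32 * 4 + 16 * 8,  # 256 -> fully subscribed
    }

    for mix, demand in port_mixes.items():
        status = "no contention" if demand <= SLOT_BW else "over-subscribed"
        print(f"{mix}: {demand} of {SLOT_BW} Gbit/sec -> {status}")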

The Brocade 48000 and Mi10000 directors have 64 Gbit/sec of slot bandwidth between each blade and the control processors. This 64 Gbit/sec path allows sixteen 4 Gbit/sec ports on each Brocade director blade to simultaneously communicate across the backplane with no contention. The 48000 can support 8 Gbit/sec blades -- the 16-port blade can switch 8 Gbit/sec locally, pass 8 Gbit/sec traffic to another 8 Gbit/sec blade, or form trunks built out of 8 Gbit/sec ports.

Additionally, local switching in the Brocade 48000 allows full speed 4 Gbit/sec switching within port groups on any director blade:

  • The 16-port 4 Gbit/sec blade is always fully subscribed whether crossing the backplane or switching between local ports.
  • The 32-port 4 Gbit/sec blade is (at worst) 16:8 over-subscribed when crossing the backplane, and 1:1 subscribed when switching in a 16-port local group.
  • The 48-port 4 Gbit/sec blade is (at worst) 24:8 over-subscribed when crossing the backplane, and 1:1 subscribed when switching in a 24-port local group.
  • The 16-port 8 Gbit/sec blade is (at worst) 48:8 over-subscribed when crossing the backplane, and 1:1 subscribed when switching in a 16-port local group.
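The worst-case ratios listed above follow directly from the port-group layout described in the next paragraph. A minimal sketch, assuming each port group has 8 back-end 4 Gbit/sec links (32 Gbit/sec) to the core; only the 4 Gbit/sec blades are shown:

    def group_oversubscription(ports_per_group, port_speed, backend_bw):
        """Worst case when every port in a group sends across the backplane."""
        return (ports_per_group * port_speed) / backend_bw

    # Assumed layout: two port groups per blade, 8 x 4 Gbit/sec back-end links each.
    for blade, ports_per_group in [("32-port 4 Gbit/sec blade", 16),
                                   ("48-port 4 Gbit/sec blade", 24)]:
        factor = group_oversubscription(ports_per_group, 4, 32)
        print(f"{blade}: {ports_per_group}:8 across the backplane "
              f"({factor:.1f}:1), 1:1 within the local group")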

There is a significant difference between 16:8 over-subscription and 2:1 over-subscription. In the 48000 architecture, two ASICs drive a 32-port director blade, with 16 ports in each port group. Each ASIC can use up to 32 Gbit/sec of bandwidth to communicate with other blades through the CPs. Since this 32 Gbit/sec path is a trunk, a 16:8 port group will not be over-subscribed until more than 8 of its front-end ports simultaneously need the back-end links -- that is, until the total traffic from the group's ports exceeds 32 Gbit/sec. The same holds true for the DCX, except the links are 64 Gbit/sec.

In contrast, a 2:1 over-subscribed blade can become congested as soon as two active front-end ports happen to share the same back-end link. Brocade blades are not 2:1 over-subscribed. For 16:8 over-subscription, imagine 8 cars spread across 16 lanes -- they can always merge into 8 lanes at full speed. With 2:1 over-subscription there are 8 separate pairs of lanes rather than one highway, and it is very likely that two cars will end up in lanes that merge into one. The result is a slowdown, otherwise known as SAN congestion.
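The highway analogy can be made concrete with a small simulation. This is only an illustrative sketch (random placement of busy ports is an assumption, not a model of real fabric traffic): an 8-link trunk always absorbs 8 busy ports, while fixed 2:1 port pairs frequently put two busy ports on the same link.

    import random

    TRIALS = 100_000
    PORTS, LINKS, BUSY = 16, 8, 8   # 16 front-end ports, 8 back-end links, 8 ports busy

    congested_trunk = congested_pairs = 0
    for _ in range(TRIALS):
        busy = random.sample(range(PORTS), BUSY)
        # Trunked back end (16:8): any 8 busy ports can merge into the 8-link
        # trunk, so this never congests.
        if len(busy) > LINKS:
            congested_trunk += 1
        # Fixed 2:1 pairs: ports 2k and 2k+1 share one link; two busy ports on
        # the same pair must contend for it.
        pairs = [p // 2 for p in busy]
        if len(pairs) != len(set(pairs)):
            congested_pairs += 1

    print(f"16:8 trunk:     congested in {congested_trunk / TRIALS:.1%} of trials")
    print(f"2:1 port pairs: congested in {congested_pairs / TRIALS:.1%} of trials")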

The Brocade 48000 and DCX have the additional advantage of local switching. The 32 and 48-port blades divide ports into two local switching groups with 64 and 96 Gbit/sec of bandwidth respectively. Since locally switched traffic never crosses the backplane, that valuable bandwidth is left free for communication with ports on other blades. On the 48-port blade, if 8 ports in a local group are communicating over the backplane to other blades at 4 Gbit/sec, the remaining 16 ports can still communicate with each other at full-speed 4 Gbit/sec.

As a result, every Brocade 48000 port has the opportunity to run at 4 Gbit/sec thanks to 64 Gbit/sec of backplane bandwidth, up to 192 Gbit/sec of local switching and internal frame-based trunking. All 384 ports in the Brocade DCX have the opportunity to run at 8 Gbit/sec thanks to local switching, 256 Gbit/sec of backplane bandwidth and internal frame-based trunking. Competitive products do not have this flexibility. Local switching also provides a speed advantage -- locally switched port latency is 700 nanoseconds (or 0.7 microseconds), while blade-to-blade switching is 2.1 microseconds.

For FICON customers, it is important to note that a 256-port 4 Gbit/sec DCX is 1:2 under-subscribed -- this configuration uses just 128 Gbit/sec of each slot's bandwidth while 256 Gbit/sec is available. Unlike the 48000, the DCX's core switching architecture is not on the Control Processor blades. Since each Core Blade delivers 128 Gbit/sec to each slot, if one Core Blade is pulled there is still enough bandwidth for all 256 ports to run at 4 Gbit/sec.
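A minimal sketch of that arithmetic, assuming a fully populated 256-port configuration built from eight 32-port blades:

    PORTS_PER_BLADE = 32       # 8 blades x 32 ports = 256 ports (assumed layout)
    PORT_SPEED = 4             # Gbit/sec
    PER_CORE_BLADE_BW = 128    # Gbit/sec delivered to each slot by one core blade

    demand_per_slot = PORTS_PER_BLADE * PORT_SPEED       # 128 Gbit/sec
    print(demand_per_slot <= PER_CORE_BLADE_BW)          # True: one core blade suffices
    print(demand_per_slot / (2 * PER_CORE_BLADE_BW))     # 0.5 -> 1:2 under-subscribed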

The Cisco MDS 9513 has only 51.2 Gbit/sec of slot bandwidth. Cisco cannot take advantage of local switching or internal trunking to help manage over-subscription. As a result, its 24 and 48-port linecards are always 6:3.75 and 12:3.75 over-subscribed. All traffic, whether to a neighboring port or a neighboring linecard, must use a portion of this 51.2 Gbit/sec. Only the 12-port linecard is fully subscribed at 4 Gbit/sec, meaning the MDS 9513 is at best a 132-port 4 Gbit/sec director.
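A short sketch of the per-slot arithmetic above (figures from the text; the factor is simply demanded bandwidth divided by slot bandwidth, and the 11 payload slots are implied by the 132-port figure):

    SLOT_BW = 51.2    # Gbit/sec per MDS 9513 slot
    PORT_SPEED = 4    # Gbit/sec

    for ports in (12, 24, 48):
        demand = ports * PORT_SPEED
        print(f"{ports}-port linecard: {demand} Gbit/sec demanded, "
              f"{demand / SLOT_BW:.2f}x the slot bandwidth")

    # Only the 12-port card is fully subscribed, so with 11 payload slots the
    # chassis tops out at 11 x 12 = 132 ports running at full 4 Gbit/sec.
    print(11 * 12)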

In addition to restricted slot bandwidth, each Cisco linecard has limited bandwidth per port group. Regardless of port density, every linecard has four port groups, each with 12.8 Gbit/sec of bandwidth. As a result,

  • on the 12-port linecard, every three (3) ports share 12.8 Gbit/sec of bandwidth (3:3.75 subscribed)
  • on the 24-port linecard, every six (6) ports share 12.8 Gbit/sec of bandwidth (6:3.75 over-subscribed)
  • on the 48-port linecard, every twelve (12) ports share 12.8 Gbit/sec of bandwidth (12:3.75 over-subscribed)

On the 24 and 48-port MDS linecards, users must choose between "dedicated mode" and "shared mode" to allocate bandwidth within each 12.8 Gbit/sec port group. If "shared mode" is chosen, neighboring ports must fight over bandwidth. If "dedicated mode" is chosen and ports are locked at 4 Gbit/sec, only the remainder of the 12.8 Gbit/sec in the local group can be assigned to the neighboring ports. (In other words, if 12 ports are dedicated to 4 Gbit/sec on the 48-port linecard (3 per port group), only 0.8 Gbit/sec is available for the remaining 9 ports in each port group.)
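A minimal sketch of the dedicated-mode arithmetic described above (group size and bandwidth as stated for the 48-port linecard; this illustrates the allocation only, not any switch configuration syntax):

    GROUP_BW = 12.8      # Gbit/sec available per port group
    GROUP_PORTS = 12     # ports per group on the 48-port linecard
    DEDICATED = 3        # ports per group locked at 4 Gbit/sec (12 per linecard)

    leftover = GROUP_BW - DEDICATED * 4
    print(f"{leftover:.1f} Gbit/sec left for the remaining "
          f"{GROUP_PORTS - DEDICATED} shared-mode ports")   # 0.8 Gbit/sec for 9 ports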

Under no circumstances can more than 12 ports per linecard be dedicated to 4 Gbit/sec, due to the lack of local switching and the 51.2 Gbit/sec path to the crossbar. Even on the 24 and 48-port linecards, only 12 ports can run simultaneously at 4 Gbit/sec.

If any first-generation MDS linecard is installed in a 9513, no more than 252 ports can be addressed (even if more ports are physically present). Examples of first-generation linecards are the 32-port 2 Gbit/sec over-subscribed Storage Services Module (SSM) and the MPS 14/2 linecard used for FCIP and iSCSI, with 2 FCIP ports and 14 x 2 Gbit/sec FC ports. There is not yet a second-generation 4 Gbit/sec version of the SSM.

Comments
on 11-09-2009 10:43 AM

Nice article.

A strange thing that I noticed is the depletion of buffer credits on a 48K switch due to a slow-draining device connected to another port. So if there is a storage port with a high fan-out ratio, what is its effect on the switch?