10-10-2013 10:20 AM
Hi there,
I'm hoping someone can help my understanding of how BB Credits are assigned/allocated to each of the Virtual Channels on an ISL connection. I'm particularly interested in how this works on EISLs.
I have a 30km link and have allocated BB Credits based on this link distance. I only have a single array connection at each end of my fabric, so I'm thinking that the VC used by the array-to-array link is only getting a fraction of the total BB Credits, because it will only ever be assigned one of the Virtual Channels (the VC allocation being based on the destination address?).
Hope someone can help my understanding.
10-10-2013 11:14 AM
Normally BB Credits are divided across the VCs; newer gear/firmware will show BB Credit zero counters per VC.
However, on long-distance links the VCs collapse so that only one VC is used for all traffic, and thereby only that VC will claim BB Credits.
This enables a longer link to be better utilized.
Everything from this point on is AFAIK.
In the pre-8Gb era, which VC is used is elected by the destination PID, based on two bits:
port 0 uses vc 2
port 1 uses vc 3
port 2 uses vc 4
port 3 uses vc 5
port 4 uses vc 2
etc.
So if your storage array is connected on port 0/4/8/16, its traffic across an ISL always uses VC2.
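A quick sketch of that election as I understand it (my assumption: the data VC is simply 2 plus the low two bits of the destination port; actual ASIC behaviour may differ):

# Sketch of the pre-8Gb VC election described above.
def elect_vc(dest_port):
    # Map a destination port to one of the medium-priority VCs 2-5.
    return 2 + (dest_port & 0b11)

for port in range(6):
    print("port %d -> VC %d" % (port, elect_vc(port)))
# port 0 -> VC 2, port 1 -> VC 3, ..., port 4 -> VC 2 again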
If that array is receiving a lot of traffic, you may end up starving your VC2 of credits.
How this is done on the 8Gb and later platforms I cannot recall or find.
I think it's a similar method, but as there are six VCs for high-priority traffic, two bits for the destination port election are not enough.
Anyway, if you don't use QoS you're stuck with VCs 2-5 anyway, and the election algorithm still applies to the 8 and 16Gb platforms as well.
10-14-2013 04:55 AM
Hi dion.v.d.c, thanks for your response.
The 'algorithm' you describe for allocating ports to virtual channels agrees with what I have read elsewhere, and it was the cause of my concern: it surely means that an equal share of the available BB Credits is assigned to each medium-priority channel. In an environment with a very small number of connections (one, in my case), that in turn means a significant number of BB Credits are never used, yet are not available to the channel with active traffic. The result is a significantly under-utilized link.
However, if, as you state, VCs on long-distance links collapse to a single channel that gets all the BB Credits, then I don't have an issue. Do you know where I might find long-distance links and VCs documented in a Brocade article? I can't see any mention of it in the SAN Distance Extension Reference Guide.
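To put rough numbers on my concern (purely illustrative, assuming an even split of my 106 credits across the four medium-priority VCs, which your reply suggests doesn't actually happen on long-distance links):

# If credits were split evenly across VC2-VC5, my single busy VC
# would only ever see a quarter of the pool.
total_credits = 106   # buffers allocated to this link
data_vcs = 4          # medium-priority VCs 2..5
print(total_credits // data_vcs)   # 26 credits for the one active VC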
10-14-2013 05:13 AM
With a 30km link the proper parameters should have been set already.
Additionally, you need a license, because by default a Brocade switch will only allocate buffers for links up to 10km.
An Extended Fabrics license allows you to allocate credits to go beyond 10km.
Your ports should be in LD or LS mode (set by portcfglongdistance).
Depending on the use of VC_RDYs or R_RDYs for flow control, you must set the VC Translation Link Initialization accordingly (i.e. 1 or 0).
If you use QoS and/or credit recovery, vc_link_init must be 1 (which also sets the fill word to use ARBs instead of IDLEs).
Depending on your platform and FOS level you may also need to set the fill word (portcfgfillword), as in the example below.
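For example (the port number is a placeholder, and the exact syntax can vary by FOS release, so check the command reference for your release first):

portcfglongdistance 0 LS 0 30   (LS mode, vc_link_init 0, 30km desired distance)
portcfgfillword 0, 0            (mode 0 = IDLE in link init, IDLE as fill word)
portshow 0                      (verify the distance settings took effect)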
As for your question about where long-distance links and VCs are documented: this is typical Extended Fabrics license behaviour.
10-14-2013 06:06 AM
Yep, I already have my links configured with an Extended Distance License.
Went for LS after an initial LD to confirm the 'length' of the link. LS now allows me to tune the BB Credits.
Using VC_RDY with VC Translation Link Init set to 0.
Had to use IDLE fillword as DWDM doesn't support ARBs, so QoS and Credit Recovery are disabled.
It was just the allocation of BB Credits to Virtual Channels that was bothering me as I wasn't sure I had enough buffers assigned to each channel.
10-14-2013 06:34 AM
Then you should only see traffic on VC2 and all credits should be used for VC2.
Are you sure now that you have enough buffers?
If you don't have enough buffers and you push a lot of data across the line, you should see the tx zero counter incrementing rapidly.
10-14-2013 07:38 AM
Is there a way to 'see' traffic on the VCs?
I have been basing things on the output from portstatsshow (below), which shows the only tx zero counts being against VC2. Given that my arrays are plugged into port 0, I was only expecting traffic to be on VC2.
The stats were cleared a number of weeks ago, so I guess a TX Credit Zero counter of 60,000+ over that time frame suggests that I have enough Buffers (106) assigned for a 2Gb/s link:
stat_wtx 1044696810 4-byte words transmitted
stat_wrx 576487365 4-byte words received
stat_ftx 42853842 Frames transmitted
stat_frx 80239098 Frames received
stat_c2_frx 0 Class 2 frames received
stat_c3_frx 79988473 Class 3 frames received
stat_lc_rx 141678 Link control frames received
stat_mc_rx 0 Multicast frames received
stat_mc_to 0 Multicast timeouts
stat_mc_tx 0 Multicast frames transmitted
tim_rdy_pri 13993 Time R_RDY high priority
tim_txcrd_z 61179 Time TX Credit Zero (2.5Us ticks)
tim_txcrd_z_vc 0- 3: 0 0 61179 0
tim_txcrd_z_vc 4- 7: 0 0 0 0
tim_txcrd_z_vc 8-11: 0 0 0 0
tim_txcrd_z_vc 12-15: 0 0 0 0
er_enc_in 0 Encoding errors inside of frames
er_crc 0 Frames with CRC errors
er_trunc 0 Frames shorter than minimum
er_toolong 0 Frames longer than maximum
er_bad_eof 0 Frames with bad end-of-frame
er_enc_out 0 Encoding error outside of frames
er_bad_os 0 Invalid ordered set
er_rx_c3_timeout 0 Class 3 receive frames discarded due to timeout
er_tx_c3_timeout 0 Class 3 transmit frames discarded due to timeout
er_c3_dest_unreach 0 Class 3 frames discarded due to destination unreachable
er_other_discard 0 Other discards
er_type1_miss 0 frames with FTB type 1 miss
er_type2_miss 0 frames with FTB type 2 miss
er_type6_miss 0 frames with FTB type 6 miss
er_zone_miss 0 frames with hard zoning miss
er_lun_zone_miss 0 frames with LUN zoning miss
er_crc_good_eof 0 Crc error with good eof
er_inv_arb 0 Invalid ARB
open 0 loop_open
transfer 0 loop_transfer
opened 0 FL_Port opened
starve_stop 0 tenancies stopped due to starvation
fl_tenancy 0 number of times FL has the tenancy
nl_tenancy 0 number of times NL has the tenancy
zero_tenancy 0 zero tenancy
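Doing the maths on that zero counter (assuming each tick really is 2.5 microseconds, as the counter label says):

# Convert the TX-credit-zero tick counter to wall-clock time at zero credit.
ticks = 61179                         # tim_txcrd_z from the output above
seconds_at_zero = ticks * 2.5 / 1e6   # one tick = 2.5 microseconds
print("%.3f s spent at zero TX credit" % seconds_at_zero)   # ~0.153 s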
10-14-2013 08:29 AM
Actually, I don't know of a single command that shows traffic per VC.
With 106 buffers to span 30km @ 2Gb/s, your average payload would need to be around 600 bytes for all of them to be used, which is small.
That said, as the credit zero counter is still increasing, you could calculate the average frame size to see what frame sizes your environment is coping with.
Don't assume the credit zero counter is just 60k unless you are capturing those statistics over time (BNA/Cacti?).
Being a 32-bit counter, and with the stats not recently cleared, it could have rolled over.
Try the portstats64show command, or just reset all counters and look again in a couple of hours/days, depending on the traffic.
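Roughly where my 600-byte figure comes from, by the way (back-of-the-envelope, assuming ~5 us/km one way in fibre and ~200 bytes/us of payload throughput at 2Gb/s):

# What average frame size would make 106 credits exactly cover the
# round trip of a 30 km link at 2 Gb/s?
distance_km = 30
rtt_us = 2 * distance_km * 5        # ~300 us round trip
credits = 106
frame_time_us = rtt_us / credits    # ~2.8 us per frame to keep the pipe full
print("~%.0f bytes per frame" % (frame_time_us * 200))   # ~566 bytes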
10-15-2013 02:02 AM
I have tried to avoid getting into too much technical detail about our actual setup, but there was a reason for 'over-allocating' the number of Buffers: in the event of a physical DWDM link failure, the DWDM switches to a different route which is longer, so I wanted to allow enough BB Credits to give adequate performance in that scenario. As I said previously, my main concern was that I would be way off in my allocation if long-distance links used separate VCs, each of which only got a fraction of the BB Credits.
Regarding the stats, I have been watching the counters on a regular basis since the links were finally set up correctly. I pasted the 32-bit counters here because the 64-bit version of the command doesn't show the stats for each VC.
The TX Credit Zero counter has slowly crept up to this value since the counters were reset. portstats64show -long shows the same value and also allows me to easily calculate the average frame size :-)
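For completeness, the sum I'm doing (figures taken from the portstatsshow output above; your 32-bit rollover caveat noted):

# Average transmitted frame size from the counters above.
# stat_wtx is in 4-byte words; stat_ftx is frames.
stat_wtx = 1044696810
stat_ftx = 42853842
print("~%.0f bytes per frame" % (stat_wtx * 4 / stat_ftx))   # assuming no wrap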
10-15-2013 04:47 AM
OK fair enough.
Indeed you need to calculate and allocate buffers for the longest link if your DWDM setup provides path failover.
I'm a bit worried about the tx_zero counter increasing (although slowly).
The reason for my concern is this:
you've allocated buffers for the longest link, but are currently using the shortest link (as I understand it), and occasionally you still run out of buffer credits.
Now imagine a failover on the DWDM to the longest path: would you still have enough credits in that event?
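A quick way to sanity-check it (the 50km failover distance below is just a placeholder, as I don't know your alternate path length; same assumptions as my earlier arithmetic):

# How many credits keep a 2 Gb/s link full over a given distance,
# for a given average frame size? Substitute your real failover distance.
def credits_needed(distance_km, avg_frame_bytes):
    rtt_us = 2 * distance_km * 5             # ~5 us/km one way in fibre
    frame_time_us = avg_frame_bytes / 200.0  # ~200 bytes/us at 2 Gb/s
    return rtt_us / frame_time_us

print(round(credits_needed(50, 600)))    # ~167 credits at 600-byte frames
print(round(credits_needed(50, 2048)))   # ~49 credits at near-full frames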