02-10-2011 12:18 AM
My SAN is getting about 5 billion BB credit zero-time (txcrd_z) errors each day (I reset the counters every night). These are mainly logged against the ISLs.
My question is: do I use the calculation for BB credits based on physical cable length, or on round-trip time?
Physical cable length is about 50 meters.
Round-trip time is 500 microseconds.
The BB credits on the ISLs are currently set to 26.
Round-trip calculation:
Optimal # BB_Credit = (Round-trip receiving time + Receiving_port processing time) / Frame transmission time
Distance calculation:
BB_Credit = cable length / physical frame length
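As a rough sketch of both calculations in Python (the 2148-byte full frame, the 4.25 Gbaud line rate for 4 Gbit/s FC, the ~2e8 m/s speed of light in fiber, and the function names are my own assumptions, not anything from the switch documentation):

```python
import math

def frame_tx_time(frame_bytes=2148, line_rate_baud=4.25e9):
    # Serialization time of one full-sized FC frame; 8b/10b encoding
    # means 10 line bits per byte.
    return frame_bytes * 10 / line_rate_baud

def credits_from_rtt(rtt_s, proc_s=0.0, frame_bytes=2148, line_rate_baud=4.25e9):
    # Optimal # BB_Credit = (round-trip time + processing time) / frame tx time
    return math.ceil((rtt_s + proc_s) / frame_tx_time(frame_bytes, line_rate_baud))

def credits_from_distance(cable_m, frame_bytes=2148, line_rate_baud=4.25e9):
    # Credits needed to cover the fiber round trip alone,
    # assuming light travels roughly 2e8 m/s in glass.
    rtt_s = 2 * cable_m / 2e8
    return max(1, math.ceil(rtt_s / frame_tx_time(frame_bytes, line_rate_baud)))

print(credits_from_distance(50))   # 50 m of fiber needs only 1 credit at 4 Gbit/s
print(credits_from_rtt(500e-6))    # but a measured 500 us RTT would demand 99
```

If those numbers are right, 26 credits is far more than 50 m of fiber needs, yet far less than a genuine 500 µs round trip would require, which suggests the measured delay rather than the credit count is the thing to chase.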
I can't seem to get a straight answer on this one.
02-10-2011 03:02 AM
Optimal # BB_Credit = (Round-trip receiving time + Receiving_port processing time) / Frame transmission time
is the right formula.
BB credits are used to put frames on the wire without having received a VC_RDY back yet.
But your link is only 50 meters, and even at 8 Gbit/s you won't need 26 credits unless your frames (i.e. payloads) are exceptionally small.
For 50 meters, long-distance mode L0 is enough and gives you 20 credits by default.
Also, your round-trip time is very high for such a short distance.
Considering the reserved credits (and losing them) and your delay, are you sure the (cable) distance is only 50 meters?
02-10-2011 04:02 AM
The cable length is about 50 meters, but this occurs on all ISLs in our fabric regardless of the distance;
some switches are right next to each other.
I agree the round-trip time is very high for such a short distance.
The fiber has been tested and comes back OK.
Does anyone have ideas on how to reduce the round-trip time?
02-10-2011 05:11 AM
I agree with the previous post; you may want to check the following:
- sfpShow power levels on both sides of the ISL
- Is this a direct link, or does it run through patch panels? If the line is patched, try different ports/lines, etc.
- Any splicing on the link?
- You said the line was measured; that was probably just an attenuation test, not a load test, right?
- Make sure that all of the cabling used on that link actually supports 50 m at whatever link speed you're running. I've seen folks running 8 Gbit/s ISLs over 150 m cable runs on OM1 (62.5 µm) cabling.
Seeing txcrd_z is normal, but what you're reporting appears excessive: it comes down to around 60k per second. That counter increments when you cannot transmit for 2.5 µs because BB_Credit is zero, so at 400k per second you would (in theory) not be sending anything at all. The counter increasing doesn't necessarily indicate a problem by itself, so check whether you're also seeing lots of tim_rdy_pri and, more importantly, disc c3 (frame drops).
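The arithmetic behind that estimate is easy to check (the 2.5 µs quantum and the 5-billion-per-day figure come from this thread; everything else is plain arithmetic):

```python
DAY_SECONDS = 24 * 3600
daily_txcrd_z = 5e9                      # reported counter growth per day
per_second = daily_txcrd_z / DAY_SECONDS
ceiling = 1 / 2.5e-6                     # one tick per 2.5 us -> 400k/s maximum
stalled_fraction = per_second / ceiling

print(round(per_second))                 # 57870, i.e. roughly 60k per second
print(round(ceiling))                    # 400000
print(round(stalled_fraction * 100, 1))  # 14.5 -> credit-starved ~14.5% of the time
```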
That said, how many buffer credits you require depends a lot on average frame size. A buffer credit represents one full-sized frame (~2 kB); the average may be well below that, e.g. around 1 kB, in which case you may require twice as many BB credits.
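That scaling can be sketched as follows (the function name and the 1 kB average are illustrative assumptions, not measured values):

```python
import math

def credits_for_avg_frame(base_credits, full_frame_bytes=2112, avg_frame_bytes=1024):
    # Smaller frames mean more frames in flight for the same amount of data,
    # so the credit count scales with full-frame / average-frame size.
    return math.ceil(base_credits * full_frame_bytes / avg_frame_bytes)

print(credits_for_avg_frame(26))   # 26 credits sized for ~2 kB frames -> 54 for a 1 kB average
```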
02-10-2011 06:01 AM
I never calculated the ISL oversubscription,
but the switches are connected with 13 ISLs,
so about 15 to 1 (unless the ISL oversubscription calculation is more complex).
I'm more of an array guy than a fibre guy.
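The rule-of-thumb oversubscription ratio is just edge fan-in bandwidth over total ISL bandwidth; a sketch with hypothetical numbers (the 48 active 4G edge ports below are made up for illustration, since the thread doesn't give port counts):

```python
def isl_oversubscription(edge_ports, edge_gbps, isl_count, isl_gbps):
    # Worst-case fan-in: total edge-facing bandwidth over total ISL bandwidth.
    return (edge_ports * edge_gbps) / (isl_count * isl_gbps)

# e.g. 48 active 4G edge ports funneled over 13 x 4G ISLs:
print(round(isl_oversubscription(48, 4, 13, 4), 2))   # 3.69
```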
02-10-2011 06:30 AM
1 MB/s is not much, depending on ISL speeds and loads.
15:1 depends on a few things.
How many switches are we talking about anyway?
What kind of topology?
Any trunking on those ISLs?
In other words, describe your environment.
02-10-2011 06:35 AM
It's a core-edge design:
a DCX at the core, with 4800s and 5300s at the edge.
The issue is with all our ISLs.
The ISLs in question are 4 Gbit/s, with no trunking;
there is no trunking anywhere in our SAN.
11 switches in total.