07-05-2013 05:57 AM
Does anyone have any useful information comparing UDLD and 10GE Link Fault Signaling (802.3ae LFS) in terms of overhead and performance? Is one more lightweight than the other, and does one offer faster detection times?
07-10-2013 03:21 PM
LFS works at the port MAC level in hardware, whereas UDLD runs as application software on the CPU. With LFS, when the hardware detects a problem on the local link, it signals a local fault to the link partner, and the link partner brings down its side of the link. Because the whole operation happens in hardware, it happens quickly. With UDLD, application code running on the CPU periodically sends protocol packets from each port to the link partner; the receiving end echoes these packets back to the source, confirming that end-to-end link connectivity is good. This takes longer because it is driven by software, so software scheduling comes into the picture.
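To make the software-driven detection concrete, here is a minimal illustrative sketch of UDLD's probe/echo logic. This is not Cisco's implementation; the class name, the 15-second interval, and the three-missed-probes threshold are assumptions based on commonly cited defaults.

```python
# Illustrative sketch of UDLD-style probe/echo detection.
# Not a real implementation; interval and threshold are assumed defaults.

class UdldPort:
    def __init__(self, interval=15, max_missed=3):
        self.interval = interval      # seconds between probe packets (assumed)
        self.max_missed = max_missed  # consecutive unanswered probes tolerated
        self.missed = 0
        self.state = "bidirectional"

    def probe(self, echo_received):
        """Send one probe; echo_received indicates whether the neighbor echoed it."""
        if echo_received:
            self.missed = 0
        else:
            self.missed += 1
            if self.missed >= self.max_missed:
                # A real switch would err-disable the port at this point.
                self.state = "unidirectional"
        return self.state

port = UdldPort()
for _ in range(3):
    state = port.probe(echo_received=False)
print(state)  # prints "unidirectional"
```

Note the implied detection time: with these assumed defaults, the software needs roughly interval x max_missed (about 45 seconds here) of missed echoes before acting, while LFS reacts at hardware speed.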
The purpose of UDLD is to protect STP/RSTP: on a unidirectional link, BPDUs are lost in one direction, so spanning tree may put both ends into the FORWARDING state, creating a loop that can bring down the network. LFS (for 10G) and RFN (Remote Fault Notification, for 1G) are hardware mechanisms standardized in IEEE 802.3 that signal a failure on the TX line (cable) before it causes a network outage.
Basically, LFS is good for finding issues at the link level and is quite fast, but it does not guarantee the end-to-end packet path. UDLD does verify the end-to-end packet path, so it is useful at the application level, but it is relatively slow and puts more load on the CPU.
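For reference, a minimal sketch of enabling UDLD on a Cisco IOS switch (the timer value and interface name are examples; check your platform's documentation for supported ranges and defaults):

```
! Enable UDLD globally on all fiber interfaces
udld enable
! Optionally shorten the probe interval (default is 15 seconds)
udld message time 7

interface TenGigabitEthernet1/0/1
 ! Aggressive mode err-disables the port when echoes stop arriving
 udld port aggressive
```

LFS, by contrast, needs no configuration: it is part of the 10GE PCS/MAC and is active whenever the link is up.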
Hope this helps,