06-28-2013 05:09 AM
Need your help understanding an issue we are facing in our environment.
In our environment we have redundant fabrics, and both are connected to the other site using DWDM technology:
Fabric1 - DCX --> DWDM (Nortel 5200) --> DCX (distance is 100 km)
We have 10x2 Gbps ISLs between the two sites for fabric 1.
Fabric2 - DCX --> DWDM (Nortel 5200) --> DCX (distance is 100 km)
We have 8x2 Gbps ISLs between the two sites for fabric 2.
We have HP24K/P9500 storage at both sites with the same configuration, and a geo cluster configured across the sites with Continuous Access synchronous replication.
Sometimes we face a performance issue at one of the sites due to inter-site ISL performance degradation in one of the fabrics: the throughput of the links gets capped at around 30 MBps. The issue is resolved after we bounce the ISL links of that particular fabric. Could this be a buffer-credit issue? The problem occurs only once a month, or once in 3 months, so it is not clear what exactly the issue is. Please let us know if any of you have seen the same kind of issue and how you resolved it.
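Since buffer credits are one suspect, it may help to sanity-check whether the links even have enough buffer-to-buffer (BB) credits for 100 km. The sketch below uses a common rule of thumb (illustrative only, not the exact vendor formula, which also depends on average frame size and the long-distance port mode configured): to keep a link busy you need roughly one credit per full-size frame "in flight" over the round trip.

```python
import math

def bb_credits_needed(distance_km: float, speed_gbps: float,
                      frame_bytes: int = 2148) -> int:
    """Rough estimate of BB credits needed to keep a long link busy.

    Assumes ~5 us/km propagation in fibre and full-size FC frames;
    real switches (e.g. FOS LE/LD/LS port modes) compute this for you.
    """
    light_us_per_km = 5.0
    frame_tx_us = frame_bytes * 8 / (speed_gbps * 1000)  # serialization time
    round_trip_us = 2 * distance_km * light_us_per_km
    return math.ceil(round_trip_us / frame_tx_us)

print(bb_credits_needed(100, 2))  # credits for a 100 km, 2 Gbps link
```

If the ports are configured with far fewer credits than this estimate, the link throughput will be capped by credit starvation regardless of the DWDM bandwidth, so checking the configured long-distance mode and per-port credit allocation is a reasonable first step.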
Also, in one of the fabrics we have 10 inter-site DWDM links; on 8 of these links the data flow is uniform, but on the remaining two it is not. I just wanted to know whether there is a limit that makes DLS distribute data across only 8 links per set.
06-29-2013 02:22 AM
The maximum number of ISL trunk groups or single ISLs supported in each DPS group is 8.
I suspect you are using ISLs without trunking, so there will be two DPS groups for those 10 ISLs.
For your environment, most likely trunking could not form, or the upper layer does not recommend trunking over DWDM tunnels. Without trunking, you will get oversubscription on those single ISLs.
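To illustrate why 10 un-trunked ISLs split into groups of 8 and 2 can show uneven per-link flow, here is a small sketch (not the actual FOS routing code): exchange-based DPS hashes each exchange (SID, DID, OXID) onto one equal-cost path within a group. The 50/50 split of exchanges between the two groups below is an assumption for illustration; the real distribution depends on the routing policy and device placement.

```python
import random
from collections import Counter

def route(sid: int, did: int, oxid: int, paths: list) -> str:
    """Pick one path in a DPS group by hashing the exchange identifiers."""
    return paths[hash((sid, did, oxid)) % len(paths)]

group_a = [f"isl{i}" for i in range(8)]   # first DPS group: 8 ISLs
group_b = ["isl8", "isl9"]                # leftover group: only 2 ISLs

random.seed(1)
load = Counter()
for _ in range(10_000):                   # 10k simulated exchanges
    sid, did, oxid = (random.randrange(0xFFFF) for _ in range(3))
    # assumption: exchanges split roughly evenly between the two groups
    group = group_a if random.random() < 0.5 else group_b
    load[route(sid, did, oxid, group)] += 1

# Within each group the load is uniform, but each ISL in the 2-way
# group ends up carrying roughly 4x the exchanges of one in the 8-way
# group -- which would look like "non-uniform" flow on those two links.
```

Under this assumption the hashing is perfectly fair *within* a group; the imbalance comes purely from the 8-path group-size limit leaving two links in their own small group.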
Besides upgrading those 10x2 Gb ISLs to 4 Gb or 8 Gb ISLs, I am not sure whether anyone has another solution to improve I/O on those 10 single ISLs.
06-29-2013 08:25 PM
I agree about the DPS: the DPS group limit is 8 ISL links.
The performance issue is not due to bandwidth, as it has been observed only 3 times in the last 5 months and is not a regular occurrence.
The issue is something strange: the data flow from one site to the other is limited to 30-50 MBps on all 10 links instead of the usual 100-160 MBps. The workaround is to disable and enable the upstream/downstream link on the affected fabric's switch port; the fabric route recalculation then happens and the data flow returns to normal. I am unable to find out why the bandwidth is limited on all the links; if it were a buffer-credit issue, it should happen all the time.
Please let me know if you have faced similar issues and how you resolved them.