09-01-2016 07:59 AM
Actually, I am using an ICX-6610 switch, so I have eight queues per port. From the documentation I understand that qosp7 is the highest-priority queue (queue id 7), followed by qosp6 down to qosp0. So I tried to steer the different flows into different queues, one flow to qosp7 and the other to qosp1, but the problem is that both flows are given the same bandwidth allocation; the packet counts for the two queues are also the same! My expectation was to find 75% of the bandwidth allocated to qosp7 and only 3% to qosp1.
By the way, the queues are configured to serve the actions of specific openflow flows.
I was wondering if anyone can advise on what is going on in such a case?
09-05-2016 12:53 AM
It might be difficult to observe the difference if the port is not congested.
Can you share the output of "show qos-profiles all"?
On the other hand, it is recommended not to use qosp7 or qosp6, as these carry control traffic.
09-07-2016 08:55 AM - edited 09-07-2016 09:23 AM
Thanks for your help.
This is the setting:
ICX6610-24 Router(config)#show qos-profiles all
bandwidth scheduling mechanism: weighted priority
Profile qosp7 : Priority7(Highest) bandwidth requested 75% calculated 75%
Profile qosp6 : Priority6 bandwidth requested 7% calculated 7%
Profile qosp5 : Priority5 bandwidth requested 3% calculated 3%
Profile qosp4 : Priority4 bandwidth requested 3% calculated 3%
Profile qosp3 : Priority3 bandwidth requested 3% calculated 3%
Profile qosp2 : Priority2 bandwidth requested 3% calculated 3%
Profile qosp1 : Priority1 bandwidth requested 3% calculated 3%
Profile qosp0 : Priority0(Lowest) bandwidth requested 3% calculated 3%
Actually I used Q4 and Q1, but I see the same throughput for the two flows going through those queues, even though I sent 2 Gbps per flow over a 1 Gbps link, so the port is definitely congested. What happens is not what I expected.
Please take a look and if you can advise, I will be so grateful.
09-07-2016 01:52 PM - edited 09-07-2016 02:28 PM
In the output you shared, Q4 and Q1 will both be allocated the same share of the interface bandwidth.
When queueing is set to weighted, the queue number does not confer any priority; only the weight is relevant.
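Since only the weights matter, they are the knob to turn. As a sketch (FastIron `qos profile` syntax; verify the exact form against your software release), the per-queue weights shown in the `show qos-profiles all` output above would be set in one command:

```
device(config)# qos profile qosp7 75 qosp6 7 qosp5 3 qosp4 3 qosp3 3 qosp2 3 qosp1 3 qosp0 3
```

Requested percentages may be rounded by the hardware, which is why the output reports both "requested" and "calculated" values.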
What is it that you were expecting to happen?
09-07-2016 09:39 PM - edited 09-07-2016 09:44 PM
Yes, you are right. I thought the queue number might imply some priority even for equal weights.
However, I set different shares and the allocation varied according to the weights.
But I ran into some issues:
1) the allocation goes as planned only with UDP flows!
2) with TCP flows, the share is not controllable:
2.1) sometimes the flow assigned to the lower-priority queue takes more bandwidth!
2.2) when sending, say, two flows, not all of the interface bandwidth is used. For example, if the port speed is set to 100-full, flow 1 takes 42.5 Mbps while the other one takes 28 Mbps.
Please let me know what is going on.
Many thanks and best regards,
09-10-2016 04:05 PM
For TCP flows things can get a bit more complicated since the end stations will back off their transmissions when they see dropped packets.
When I have tested previously I have tended to use UDP as it is more straightforward, and from a routing/forwarding perspective the router shouldn't be looking at whether a packet is UDP or TCP for simple queueing.
Can you describe your setup? What are you using to test the TCP flows?
09-13-2016 09:16 PM - edited 09-13-2016 09:17 PM
For the two flows, flow 1 was mapped to Q1 and flow 2 to Q4. Q1 and Q4 belong to egress port 3 and are given weights of 8% and 12%. No flows other than flow 1 and flow 2 arrive at port 3, so as I understand it, Q1 and Q4 should share the whole capacity (100 Mbps) between them under the weighted round robin strategy. The flow rules are installed as OpenFlow rules and the traffic is generated (tested) using iperf. When sending the flows, flow 1 was given 28 Mbps, while the other one got 42.5 Mbps.
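The expected split can be checked with a quick calculation. With work-conserving weighted round robin, the weights of empty queues drop out, so only Q1's and Q4's weights (8 and 12, from this thread) matter; the link speed is the 100 Mbps port setting:

```shell
# Expected WRR split for the only two active queues on a 100 Mbps port.
w_q1=8        # weight configured for Q1
w_q4=12       # weight configured for Q4
link_mbps=100

total=$((w_q1 + w_q4))
q1_share=$((link_mbps * w_q1 / total))
q4_share=$((link_mbps * w_q4 / total))
echo "Q1 expected: ${q1_share} Mbps, Q4 expected: ${q4_share} Mbps"
```

This predicts 40 Mbps for Q1 and 60 Mbps for Q4, reasonably close to the 37.7/56.5 Mbps measured later in the thread once TCP overheads and goodput-vs-line-rate differences are accounted for.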
The queue settings are attached.
09-21-2016 05:13 AM - edited 09-21-2016 05:13 AM
Sorry it has taken me a while to get back to you.
I did the same sort of test as you and for TCP flows it works as expected for me.
With your profiles configured I end up with 56Mbps for Q4 traffic and 38Mbps for Q1 traffic.
My setup is as follows:
two iperf sessions running on single machine connected into port 7 of ICX6610
iperf -s configured on a machine connected to port 8 of the ICX6610
port 8 is configured for 100M full duplex
I am using an ACL to classify and move the traffic into the appropriate queue
permit tcp any eq 50774 any internal-priority-marking 4
permit tcp any eq 50776 any internal-priority-marking 4
permit tcp any eq 50778 any internal-priority-marking 4
permit tcp any eq 50780 any internal-priority-marking 4
permit tcp any eq 50782 any internal-priority-marking 4
permit tcp any eq 50784 any internal-priority-marking 4
permit tcp any eq 50786 any internal-priority-marking 4
permit tcp any eq 50788 any internal-priority-marking 4
permit ip any any internal-priority-marking 1
I classified on a few source ports as I wasn't sure how to set the client's source port in iperf.
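For anyone reproducing this, the sessions could be launched roughly as follows. This is a hypothetical recreation: the server address is a placeholder, and classic iperf2 clients pick an ephemeral source port themselves, which is why the ACL above matches a small range of likely source ports instead of a single one:

```shell
# On the receiver behind port 8 of the ICX6610:
iperf -s

# On the sender behind port 7: two parallel TCP sessions to the same server.
# Whichever session happens to get a source port matched by the ACL lands in
# Q4; the other falls through to the "permit ip any any" rule and lands in Q1.
iperf -c 192.0.2.10 -t 60 -i 5 &
iperf -c 192.0.2.10 -t 60 -i 5 &
```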
Once the two iperf sessions are running I get
[ 4] 25.0-30.0 sec 22.4 MBytes 37.7 Mbits/sec
[ 5] 25.0-30.0 sec 33.7 MBytes 56.5 Mbits/sec
[SUM] 25.0-30.0 sec 56.1 MBytes 94.2 Mbits/sec
[ 4] 30.0-35.0 sec 22.4 MBytes 37.7 Mbits/sec
[ 5] 30.0-35.0 sec 33.7 MBytes 56.5 Mbits/sec
[SUM] 30.0-35.0 sec 56.1 MBytes 94.2 Mbits/sec
[ 4] 35.0-40.0 sec 22.4 MBytes 37.7 Mbits/sec
[ 5] 35.0-40.0 sec 33.7 MBytes 56.5 Mbits/sec
[SUM] 35.0-40.0 sec 56.1 MBytes 94.2 Mbits/sec
[ 4] 40.0-45.0 sec 22.4 MBytes 37.7 Mbits/sec
[ 5] 40.0-45.0 sec 33.7 MBytes 56.5 Mbits/sec
[SUM] 40.0-45.0 sec 56.1 MBytes 94.2 Mbits/sec
ICX version I am running is 8.0.20