There are two methods to limit the amount of traffic originating from an interface: policing and shaping. When an interface is policed outbound, traffic exceeding the configured threshold is dropped (or remarked to a lower class of service). Shaping, on the other hand, buffers excess (burst) traffic to transmit during non-burst periods. Shaping has the potential to make more efficient use of bandwidth at the cost of additional overhead on the router.
All this is just dandy, but doesn't mean much until you see its effects on real traffic. Consider the following lab topology:
We'll be using Iperf on the client (192.168.10.2) to generate TCP traffic to the server (192.168.20.2). In the middle is R1, a Cisco 3725. Its F0/1 interface will be configured for policing or shaping outbound to the server.
Iperf
Iperf is able to test the bandwidth available across a link by generating TCP or UDP streams and benchmarking the throughput of each. To illustrate the effects of policing and shaping, we'll configure Iperf to generate four TCP streams, which we can monitor individually. To get a feel for how Iperf works, let's do a dry run before applying any QoS policies. Below is the output from Iperf on the client end after running unrestricted across a 100 Mbit link:

```
Client$ iperf -c 192.168.20.2 -t 30 -P 4
------------------------------------------------------------
Client connecting to 192.168.20.2, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1916] local 192.168.10.2 port 1908 connected with 192.168.20.2 port 5001
[1900] local 192.168.10.2 port 1909 connected with 192.168.20.2 port 5001
[1884] local 192.168.10.2 port 1910 connected with 192.168.20.2 port 5001
[1868] local 192.168.10.2 port 1911 connected with 192.168.20.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[1900]  0.0-30.0 sec  84.6 MBytes  23.6 Mbits/sec
[1884]  0.0-30.0 sec  84.6 MBytes  23.6 Mbits/sec
[1868]  0.0-30.0 sec  84.6 MBytes  23.6 Mbits/sec
[1916]  0.0-30.0 sec  84.6 MBytes  23.6 Mbits/sec
[SUM]   0.0-30.0 sec   338 MBytes  94.5 Mbits/sec
```

Iperf is run with several options:
- `-c` - Runs in client mode, connecting to the server at the given IP address
- `-t` - The duration of the test, in seconds
- `-P` - The number of parallel connections to establish
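Two companion invocations are worth knowing, though neither was captured in this lab (both flags are standard in classic Iperf): the server side simply listens with `-s`, and `-u` together with `-b` switches the test to UDP at a fixed offered rate, which is handy because UDP won't back off the way TCP does:

```
Server$ iperf -s

Client$ iperf -c 192.168.20.2 -u -b 1M -t 30
```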
Policing
Our first test will measure the throughput from the client to the server when R1 has been configured to police traffic to 1 Mbit. To do this, we'll create the appropriate QoS policy and apply it outbound on F0/1:

```
policy-map Police
 class class-default
  police cir 1000000
!
interface FastEthernet0/1
 service-policy output Police
```

We can then inspect our applied policy with `show policy-map interface`. F0/1 is being policed to 1 Mbit with a burst (Bc) of 31,250 bytes:

```
R1# show policy-map interface FastEthernet0/1
 Service-policy output: Police

    Class-map: class-default (match-any)
      2070 packets, 2998927 bytes
      5 minute offered rate 83000 bps, drop rate 0 bps
      Match: any
      police:
          cir 1000000 bps, bc 31250 bytes
        conformed 1394 packets, 1992832 bytes; actions:
          transmit
        exceeded 673 packets, 1005594 bytes; actions:
          drop
        conformed 57000 bps, exceed 30000 bps
```
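A quick aside on that burst value: we never specified Bc, so IOS picked one for us. On this platform the policer appears to default Bc to 250 ms worth of traffic at the CIR, which works out to 1,000,000 bps × 0.25 s ÷ 8 = 31,250 bytes, exactly what the output above reports. A different tolerance can be supplied explicitly if desired; the value below is illustrative only, not part of this lab:

```
policy-map Police
 class class-default
  police cir 1000000 bc 62500
```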
Repeating the same Iperf test now yields very different results:

```
Client$ iperf -c 192.168.20.2 -t 30 -P 4
------------------------------------------------------------
Client connecting to 192.168.20.2, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1916] local 192.168.10.2 port 1922 connected with 192.168.20.2 port 5001
[1900] local 192.168.10.2 port 1923 connected with 192.168.20.2 port 5001
[1884] local 192.168.10.2 port 1924 connected with 192.168.20.2 port 5001
[1868] local 192.168.10.2 port 1925 connected with 192.168.20.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[1884]  0.0-30.2 sec   520 KBytes   141 Kbits/sec
[1916]  0.0-30.6 sec  1.13 MBytes   311 Kbits/sec
[1900]  0.0-30.5 sec   536 KBytes   144 Kbits/sec
[1868]  0.0-30.5 sec   920 KBytes   247 Kbits/sec
[SUM]   0.0-30.6 sec  3.06 MBytes   841 Kbits/sec
```

Notice that although we've allowed for up to 1 Mbps of traffic, Iperf has achieved only 841 Kbps. Also notice that, unlike in our prior test, the flows do not receive equal shares of the available bandwidth. This is because policing (as configured) does not recognize individual flows; it merely drops packets whenever the aggregate rate threatens to exceed the configured threshold.
Using Wireshark's IO graphing feature on a capture obtained at the server, we can observe the apparently random nature of the flows. The black line measures the aggregate throughput, and the colored lines each represent an individual TCP flow.
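As mentioned at the outset, a policer doesn't have to drop exceeding traffic; it can instead remark it to a lower class of service and let a downstream congestion point deal with it. A minimal sketch of that variation is below; the policy name and DSCP value are arbitrary choices for illustration:

```
policy-map Police-Remark
 class class-default
  police cir 1000000
   conform-action transmit
   exceed-action set-dscp-transmit af11
```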
Shaping
In contrast to policing, we'll see that shaping handles traffic in a very organized, predictable manner. First we'll need to configure a QoS policy on R1 to shape traffic to 1 Mbit. When applying the Shape policy outbound on F0/1, be sure to remove the Police policy first with `no service-policy output Police`:

```
policy-map Shape
 class class-default
  shape average 1000000
!
interface FastEthernet0/1
 service-policy output Shape
```
Immediately after starting our Iperf test a third time, we can see that shaping is taking place:

```
R1# show policy-map interface FastEthernet0/1
 Service-policy output: Shape

    Class-map: class-default (match-any)
      783 packets, 1050468 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
      Traffic Shaping
           Target/Average   Byte   Sustain   Excess    Interval  Increment
             Rate           Limit  bits/int  bits/int  (ms)      (bytes)
          1000000/1000000   6250   25000     25000     25        3125

        Adapt  Queue     Packets   Bytes     Packets   Bytes     Shaping
        Active Depth                         Delayed   Delayed   Active
        -      69        554       715690    491       708722    yes
```
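Decoding a couple of those columns: a shaper transmits Bc bits every interval Tc, where Tc = Bc ÷ CIR. Here that's 25,000 bits ÷ 1,000,000 bps = 25 ms, matching the Interval column, and 25,000 bits ÷ 8 = 3,125 bytes, matching the Increment column. If a shorter interval is desired, Bc can be given explicitly after the rate; the value below is illustrative only, not used in this lab (it would yield an 8 ms interval):

```
policy-map Shape
 class class-default
  shape average 1000000 8000
```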
This last test concludes with very consistent results:

```
Client$ iperf -c 192.168.20.2 -t 30 -P 4
------------------------------------------------------------
Client connecting to 192.168.20.2, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1916] local 192.168.10.2 port 1931 connected with 192.168.20.2 port 5001
[1900] local 192.168.10.2 port 1932 connected with 192.168.20.2 port 5001
[1884] local 192.168.10.2 port 1933 connected with 192.168.20.2 port 5001
[1868] local 192.168.10.2 port 1934 connected with 192.168.20.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[1916]  0.0-30.4 sec   896 KBytes   242 Kbits/sec
[1868]  0.0-30.5 sec   896 KBytes   241 Kbits/sec
[1884]  0.0-30.5 sec   896 KBytes   241 Kbits/sec
[1900]  0.0-30.5 sec   896 KBytes   241 Kbits/sec
[SUM]   0.0-30.5 sec  3.50 MBytes   962 Kbits/sec
```

With shaping applied, Iperf is able to squeeze 962 Kbps out of the 1 Mbps link, a 14% gain over policing. Keep in mind, however, that the gain measured here is incidental and highly subject to change under more realistic conditions. Also notice that each stream receives a fair share of the bandwidth. This even distribution is best illustrated graphically through an IO graph of a second capture: