Congestion Control for High Bandwidth-Delay Product Networks
This presentation discusses the challenges TCP faces in high bandwidth-delay product networks, such as oscillations and instability. It explores remedies like adjusting source aggressiveness according to the feedback delay and decoupling efficiency control from fairness control, and introduces XCP, which outperforms TCP as bandwidth and/or delay increase.
Presentation Transcript
Congestion Control for High Bandwidth-Delay Product Networks Dina Katabi, Mark Handley, Charlie Rohrs Presented by Yufei Chen
Future Internet
- High bandwidth: large numbers of high-bandwidth links
- High latency: satellite and wireless links
- Bad news for TCP: oscillatory, prone to instability
TCP's problems
- TCP + any Active Queue Management: instability over high-capacity or large-delay links
- Additive-increase policy: acquires spare bandwidth at one packet per RTT
- Short flows' transfer delay does not improve with increased link capacity
- Fairness: throughput is inversely proportional to RTT, yet users with different RTTs compete for the same bottleneck capacity
Rethink
- Packet loss (or its absence) is a poor signal of congestion; congestion is not binary, and the signal should reflect its degree
- The aggressiveness of the sources should be adjusted according to the delay in the feedback loop
- The dynamics of the aggregate traffic should be independent of the number of flows
- Decouple efficiency control (control of utilization or congestion) from fairness control
XCP (eXplicit Control Protocol)
- Outperforms TCP in conventional environments
- Efficient, fair, and stable as bandwidth and/or delay increases
- Routers inform senders about the degree of congestion
- Controls utilization according to the spare bandwidth and the feedback delay
- Controls fairness by reclaiming bandwidth from flows with higher rates
- Puts control state in the packets
- High utilization, small queues, almost no drops
- Helps distinguish error losses from congestion losses; amenable to deployment
Congestion header
- Filled in by the sender: H_cwnd (the sender's current congestion window), H_rtt (the sender's current RTT estimate)
- Initialized by the sender to its demands, then modified by routers along the path: H_feedback
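For concreteness, a minimal sketch of the congestion header as a Python dataclass; the three field names come from the slides, while the types, units, and comments are assumptions.

```python
from dataclasses import dataclass

@dataclass
class CongestionHeader:
    """Per-packet XCP congestion header (field names from the slides; units assumed)."""
    h_cwnd: float      # sender's current congestion window, in bytes
    h_rtt: float       # sender's current RTT estimate, in seconds (0 for a flow's first packet)
    h_feedback: float  # desired window change, in bytes; routers along the path may modify it
```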
XCP Sender
- Maintains cwnd and rtt
- Sets the H_cwnd and H_rtt fields of each packet to the current cwnd and rtt; for the first packet of a flow, H_rtt is set to 0
- Initializes H_feedback based on the desired window increase: (r * rtt - cwnd) / (# packets in window), where r is the desired rate
- Updates the sender's cwnd when an acknowledgment arrives: cwnd = max(cwnd + H_feedback, s), where s is the packet size
- Response to losses: same as TCP
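A minimal Python sketch of the sender-side steps just listed, reusing the CongestionHeader sketch above; class and method names are illustrative, not from any reference implementation.

```python
class XcpSender:
    """Sketch of an XCP sender: fills the congestion header and applies returned feedback."""

    def __init__(self, packet_size, desired_rate):
        self.s = packet_size        # packet size, in bytes
        self.cwnd = packet_size     # congestion window, in bytes
        self.rtt = 0.0              # RTT estimate, in seconds (0 until the first sample)
        self.r = desired_rate       # desired rate, in bytes per second

    def fill_header(self, packets_in_window):
        # H_feedback requests the desired window increase (r * rtt - cwnd),
        # spread evenly over the packets currently in the window.
        desired_increase = self.r * self.rtt - self.cwnd
        return CongestionHeader(
            h_cwnd=self.cwnd,
            h_rtt=self.rtt,
            h_feedback=desired_increase / max(packets_in_window, 1),
        )

    def on_ack(self, header):
        # Apply whatever feedback the routers left in the header; never let
        # cwnd drop below one packet. Losses are handled exactly as in TCP.
        self.cwnd = max(self.cwnd + header.h_feedback, self.s)
```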
XCP Receiver
- Similar to a TCP receiver, except that when acknowledging a packet it copies the congestion header from the data packet into the acknowledgment
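The receiver's only XCP-specific step is echoing the header; a tiny illustrative sketch (the ACK representation here is an assumption):

```python
from types import SimpleNamespace

class XcpReceiver:
    """Sketch: behaves like a TCP receiver, but echoes the congestion header in each ACK."""

    def on_data_packet(self, packet):
        # Build an ordinary acknowledgment, then copy the congestion header
        # from the data packet into it, unchanged.
        return SimpleNamespace(ack=True, congestion_header=packet.congestion_header)
```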
XCP Router
- Computes the feedback to achieve optimal efficiency and min-max fairness
- Two components: an Efficiency Controller (EC) and a Fairness Controller (FC)
- Estimates are made over the average RTT of the flows, computed from the congestion headers
- A single control decision every average RTT (the control interval)
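A rough, runnable skeleton of the router's bookkeeping: per-packet estimates gathered from congestion headers, and one control decision per average RTT. The plain per-packet average used here is an assumption; the paper's exact estimator may weight packets differently.

```python
class XcpRouterSkeleton:
    """Sketch of the per-interval timing structure; the EC and FC fill in the decision."""

    def __init__(self):
        self.rtt_sum = 0.0       # running sums over the current control interval
        self.packet_count = 0
        self.input_bytes = 0

    def on_packet(self, header, packet_size):
        # Per-packet bookkeeping taken from the congestion header.
        self.rtt_sum += header.h_rtt
        self.packet_count += 1
        self.input_bytes += packet_size

    def on_control_interval(self, interval_length):
        # One control decision per average RTT (the control interval). The EC and FC
        # on the following slides use these estimates; then the counters are reset.
        avg_rtt = self.rtt_sum / max(self.packet_count, 1)
        input_rate = self.input_bytes / interval_length    # bytes per second
        self.rtt_sum, self.packet_count, self.input_bytes = 0.0, 0, 0
        return avg_rtt, input_rate
```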
Efficiency Controller (EC)
- Maximize link utilization, minimize drop rate and persistent queues
- Aggregate feedback: Φ = α · d · S − β · Q
- Φ: aggregate feedback, d: average RTT, S: spare bandwidth, Q: persistent queue size
- α, β: constants, 0.4 and 0.226 respectively, based on the stability analysis
Efficiency Controller (EC)
- Maximize link utilization, minimize drop rate and persistent queues: Φ = α · d · S − β · Q
- Feedback (Φ) is proportional to the spare bandwidth (S) and the queue size (Q)
- The aggregate feedback is allocated to the H_feedback fields of individual packets
- It doesn't matter which packets get the feedback, or how much each individual flow's congestion window changes, as long as the total traffic changes by Φ over this control interval
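A minimal sketch of the aggregate-feedback computation, using the α and β constants from the slides; the variable names are illustrative, and taking the spare bandwidth as capacity minus input rate is an assumption of this sketch.

```python
ALPHA = 0.4    # gain on spare bandwidth (from the stability analysis)
BETA = 0.226   # gain on persistent queue (from the stability analysis)

def aggregate_feedback(avg_rtt, link_capacity, input_rate, persistent_queue):
    """Phi = alpha * d * S - beta * Q, computed once per control interval.

    S (spare bandwidth) is taken here as link capacity minus input traffic rate,
    so it goes negative when the link is overloaded; Q is the persistent queue size.
    """
    spare_bandwidth = link_capacity - input_rate
    return ALPHA * avg_rtt * spare_bandwidth - BETA * persistent_queue
```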
Fairness Controller (FC)
- Apportion the feedback to individual packets for fairness, following the Additive-Increase Multiplicative-Decrease (AIMD) principle
- If Φ > 0, allocate it so that the throughput increase is the same for all flows
- If Φ < 0, allocate it so that a flow's throughput decrease is proportional to its current throughput
- Bandwidth shuffling (needed when Φ = 0): simultaneous bandwidth allocation and deallocation that keeps the total traffic rate consistent, so each flow's throughput gradually converges to its fair share
- Shuffled traffic: h = max(0, γ · y − |Φ|), where y is the input traffic in an average RTT and γ = 0.1
- Every average RTT, at least 10% of the traffic is redistributed according to AIMD (see the sketch below)
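A one-function sketch of the shuffled-traffic rule h = max(0, γ · y − |Φ|) with γ = 0.1; names are illustrative.

```python
GAMMA = 0.1   # fraction of traffic reshuffled each control interval

def shuffled_traffic(input_traffic, aggregate_fb):
    """h = max(0, gamma * y - |Phi|): bandwidth simultaneously allocated and
    deallocated so AIMD keeps moving flows toward their fair share even when
    the aggregate feedback Phi is near zero."""
    return max(0.0, GAMMA * input_traffic - abs(aggregate_fb))
```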
Fairness Controller (FC)
- Apportion the feedback to individual packets for fairness
- Per-packet feedback: H_feedback_i = p_i − n_i
- Basically, positive feedback minus negative feedback for packet i
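A hedged sketch of how the per-packet split could look under the AIMD reasoning above: positive feedback proportional to rtt² · s / cwnd (equal throughput increase for all flows), negative feedback proportional to rtt · s (decrease proportional to throughput). The normalization factors xi_p and xi_n are assumed to be precomputed each control interval so the per-packet amounts sum to the intended totals; see the paper for their exact form.

```python
def per_packet_feedback(h_cwnd, h_rtt, packet_size, xi_p, xi_n):
    # Additive increase: the same throughput gain for every flow, so the
    # positive share of a packet scales with rtt^2 * s / cwnd.
    p_i = xi_p * (h_rtt ** 2) * packet_size / max(h_cwnd, 1e-9)
    # Multiplicative decrease: throughput loss proportional to current
    # throughput, so the negative share scales with rtt * s.
    n_i = xi_n * h_rtt * packet_size
    return p_i - n_i   # H_feedback_i = p_i - n_i
```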
XCP Router
[Diagram: Sender, Router, Router, Receiver; the feedback of +10 requested by the sender passes the first router unchanged and is cut to +5 by the more congested router, so the feedback returned to the sender reflects the bottleneck.]
- Multiplicative-Increase Multiplicative-Decrease (MIMD) for the EC: quickly acquires positive spare bandwidth over high-capacity links
- Additive-Increase Multiplicative-Decrease (AIMD) for the FC: converges to fairness
- Simple to implement
Simulation
- Packet-level simulator: ns-2, extended with an XCP module
- Compare XCP with TCP Reno over different queuing disciplines: Random Early Discard (RED), Random Early Marking (REM), Adaptive Virtual Queue (AVQ), Core Stateless Fair Queuing (CSFQ)
- Different dropping policies: Drop-Tail, Random Early Discard (RED)
- [Figure: topology used in most simulations]
Performance: Capacity
- Round-trip propagation delay: 80 ms
- 50 long-lived FTP flows share a bottleneck, plus 50 flows on reversed paths to simulate a two-way traffic environment
- XCP is near optimal, independent of link capacity
- [Plot: utilization rate vs. bandwidth (Mbps); credit: Behnam Shafagaty, http://slideplayer.com/slide/4703406/]
Performance: Feedback delay
- Bottleneck capacity: 150 Mbps
- 50 long-lived FTP flows share a bottleneck, plus 50 flows on reversed paths to simulate a two-way traffic environment
- XCP is near optimal, independent of feedback delay
- [Plot: utilization rate vs. round-trip delay (sec); credit: Behnam Shafagaty, http://slideplayer.com/slide/4703406/]
Performance: Number of flows
- Number of FTP flows: 0 - 1000
- Bottleneck capacity: 80 Mbps
- Round-trip propagation delay: 80 ms
- XCP: good utilization, reasonable queue size, almost no packet loss
Performance: Short web-like traffic
- Arrival rate: 0 - 1000 flows per second
- Bottleneck capacity: 150 Mbps
- Round-trip propagation delay: 80 ms
- 50 long-lived FTP flows share a bottleneck, plus 50 flows on reversed paths to simulate a two-way traffic environment
- XCP: good utilization, reasonable queue size, almost no packet loss
Performance: Fairness
- Flows with equal RTTs vs. flows with different RTTs
- Bottleneck capacity: 30 Mbps
- Round-trip propagation delay: 40 ms
- 30 long-lived FTP flows share a bottleneck
- XCP gives a fair bandwidth allocation and has no bias against long-RTT flows
- [Plot: per-flow throughput (Mbps) across flows]
Security
- Requires additional mechanisms, same as TCP
- Unlike TCP, XCP facilitates the job of policing agents
- Faster isolation of misbehaving sources: a policing agent uses the explicit feedback to test a source
- Easier isolation of suspicious flows: the router sends the flow a test feedback requiring a cwnd decrease
Deployment and Summary
- Gradual deployment: a cloud-based approach; in a multi-protocol network, XCP is TCP-friendly
- Good utilization and fairness, low queuing delay, few packet drops
- Viable and practical; increases efficiency in high bandwidth-delay product environments
Discussion questions
- Compare XCP with other high-speed TCP extensions such as Scalable TCP, HighSpeed TCP, FAST TCP, and BIC-TCP.
- Analyze and solve the deployment issues of XCP at a wide scale, as it requires changes to the OS of end hosts and routers.
Reference: Y. Zhang and T. Henderson. An Implementation and Experimental Study of the eXplicit Control Protocol (XCP). In Proceedings of IEEE INFOCOM 2005, the 24th Annual Joint Conference of the IEEE Computer and Communications Societies.