Monday, September 7, 2009

Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks; Chiu & Jain

This paper gave a rigorous analysis of several metrics related to congestion avoidance mechanisms, including efficiency, fairness, distributedness, and convergence. Network efficiency describes how closely the network is operating to the "knee" of the load curve, the point beyond which throughput increases only slightly while response time grows rapidly. Fairness is an even sharing of bottleneck resources, distributedness means each host can make its behavior decisions using only the decentralized information available to it, and convergence refers more generally to reaching some equilibrium, with small oscillations around it being acceptable.
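For reference, the fairness measure the authors use is the index F(x) = (x1 + ... + xn)^2 / (n * (x1^2 + ... + xn^2)), which equals 1 when all n users receive equal allocations and falls toward 1/n as the allocation becomes completely lopsided.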

The authors then examine how these metrics behave under the various combinations of additive (linear) and multiplicative increase and decrease policies. Ultimately they find that decreases should be multiplicative and increases should be additive. Interestingly, a host receives only binary feedback from the network (a single bit indicating congestion), and this extra bit of information became pervasively used despite the fact that it feels like a hack to get around the E2E principle.
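To make the resulting control rule concrete, here is a minimal sketch of an AIMD update in Python; the parameter values (an increment of 1 and a decrease factor of 0.5, roughly what TCP uses) and the function name are just illustrative assumptions, not the paper's notation:

    def aimd_update(window, congested, a=1.0, b=0.5):
        # Additive increase: when the binary feedback says "no congestion",
        # grow the allocation by a fixed increment a.
        if not congested:
            return window + a
        # Multiplicative decrease: when congestion is signaled,
        # shrink the allocation by a constant factor b.
        return max(1.0, window * b)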

All the different aspects the authors examine lead them to the same "additive increase, multiplicative decrease" conclusion, from quantifying responsiveness (time to converge) to looking at smoothness (size of oscillations). They show some nice graphs that demonstrate the convergence of fairness and efficiency, and how one metric might be neglected even as the other converges. The mathematical proofs even allow the authors to predict the maximum overshoot of their proposed mechanism. This paper really hammered home the idea that "additive increase, multiplicative decrease" is the right approach for TCP's congestion avoidance.
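As a rough illustration of that convergence behavior, here is a tiny two-user simulation in Python; the synchronous binary feedback, the starting allocations, and the capacity value are my own simplifications rather than the paper's exact model, but the total utilization settles into small oscillations near the capacity while the fairness index climbs toward 1:

    def fairness(xs):
        # Fairness index from the paper: 1.0 means perfectly equal shares.
        return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))

    def simulate(capacity=100.0, steps=60, a=1.0, b=0.5):
        x = [5.0, 60.0]  # two users starting far from a fair split
        for t in range(steps):
            congested = sum(x) > capacity   # binary feedback from the bottleneck
            if congested:
                x = [xi * b for xi in x]    # multiplicative decrease
            else:
                x = [xi + a for xi in x]    # additive increase
            print(t, round(sum(x), 1), round(fairness(x), 3))

    simulate()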
