Beyond Jain's Fairness Index: Setting the Bar For The Deployment of Congestion Control Algorithms

https://www.cs.cmu.edu/~rware/assets/pdf/ware-hotnets19.pdf

  • Key idea: when is it acceptable to deploy a new congestion control algorithm (CCA) on the Internet?

  • Deployment threshold: inter-CCA interaction

    • What happens when a new CCA is deployed on a network with flows using some legacy CCA? Is the new CCA's impact on the status quo acceptable?

  • Past effort: "fairness", "friendliness"

    • Fair: an allocation that maximizes every user's utility function given limited link capacity

      • End-host CCAs treat users as flows and aim to maximize per-flow utility by ensuring every flow sharing the same bottleneck link gets equal bandwidth

      • Measured by the throughput ratio between competing CCAs or by computing Jain's Fairness Index (JFI)

        • [1: perfectly meeting the expected fair allocation; approaching the minimum (1/n for n flows): maximally unfair]

        • Does not account for the demand of each flow: the amount of resources a flow uses when operating in the absence of contention

        • Thus look at max-min fairness, which accounts for demand

      • Problem:

        • Ideal-Driven Goalposting: impractically high requirements (Cubic is not even fair to itself!)

        • Throughput-Centricity: ignores other important figures of merit for good performance, such as latency, flow completion time, or loss rate.

        • Assumption of balance: a fairness metric cannot tell whether the outcome is biased for or against the status quo.

          • E.g., Jain's Fairness Index gives an equal score whether the new flow "takes" the larger share or "leaves" it to existing traffic

          • What matters for the status quo: whether or not the new algorithm damages the performance of existing traffic

    • Mimicry: new algorithms replicate properties of TCP-Reno (e.g., driving throughput as a function of the loss rate) in order to be 'friendly' to legacy, Reno-derived TCPs

      • Binds new algorithms to repeating the often undesirable idiosyncrasies of Reno

  • Now: advocate for new approach --> limiting harm caused by the new algorithm on the status quo

    • More practical, future-proof, and able to handle a wider range of quality metrics
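The JFI mentioned above can be made concrete with a small sketch. This is an illustrative Python example with made-up throughput numbers, not code from the paper; it shows both the index itself and why it is not demand-aware:

```python
def jfi(throughputs):
    """Jain's Fairness Index: (sum x_i)^2 / (n * sum x_i^2).
    1.0 = perfectly equal shares; 1/n = one flow takes everything."""
    n = len(throughputs)
    total = sum(throughputs)
    return total ** 2 / (n * sum(x ** 2 for x in throughputs))

# Equal shares score a perfect 1.0 ...
print(jfi([10, 10, 10]))  # 1.0

# ... even when demands differ: a flow that only needs 2 Mbps and a bulk
# transfer that could use 20 Mbps both getting 10 Mbps still scores 1.0,
# which is exactly the demand-blindness the notes above describe.
print(jfi([30, 0, 0]))  # 1/n = 1/3, the minimum for three flows
```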

Limitation of fairness

  • Assumption of Balance: fairness values the performance outcomes of the new CCA and the existing, deployed CCA equally, even though only the impact on existing traffic should matter

  • Beta (β): the legacy algorithm most users currently use

  • Alpha (α): the brand-new algorithm

  • It should be perfectly acceptable to deploy alpha if alpha is the one receiving unfair treatment - no users other than user A, who chose to use alpha, would be impacted negatively
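The assumption-of-balance point can be seen directly in the numbers. A minimal Python sketch (hypothetical 10 Mbps link, made-up splits) showing that JFI gives the identical score whether alpha squeezes beta or yields to it:

```python
def jfi(throughputs):
    """Jain's Fairness Index over per-flow throughputs."""
    total = sum(throughputs)
    return total ** 2 / (len(throughputs) * sum(x ** 2 for x in throughputs))

# One legacy beta flow and one new alpha flow share a 10 Mbps link.
alpha_takes = jfi([7, 3])   # alpha grabs 7 Mbps, beta squeezed to 3
alpha_leaves = jfi([3, 7])  # alpha takes only 3 Mbps, beta keeps 7

# Identical score: the metric cannot distinguish harming the status quo
# from deferring to it.
print(alpha_takes == alpha_leaves)
```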

Limitation of Mimicry:

  • Two mimicry-based approaches:

    • TCP-Friendly Rate Control (TFRC)

      • Models TCP Reno's average sending rate over time as a function of the link loss rate p (and the RTT), and caps new CCAs at that rate

    • RTT-biased Allocation

      • grants more throughput to connections with low RTTs than to those with higher RTTs

  • Binds new algorithms to replicate the often undesirable idiosyncrasies of the deployed CCA

    • TFRC limits new CCAs to Reno's modeled throughput, even on paths where Reno itself performs poorly

    • RTT bias: an algorithm without Reno's RTT bias (e.g., equal rates regardless of RTT) might be better, but mimicry forbids it
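The TFRC cap can be illustrated with the Reno throughput equation from RFC 5348. A hedged Python sketch (packet size, RTT, and loss rates are made-up inputs; t_RTO is approximated as 4*RTT, a common simplification):

```python
from math import sqrt

def tfrc_rate(s, rtt, p, b=1):
    """Approximate TCP Reno average throughput in bytes/s per the TFRC
    equation (RFC 5348): s = packet size (bytes), rtt = round-trip time (s),
    p = loss event rate, b = packets acked per ACK; t_RTO ~= 4 * rtt."""
    t_rto = 4 * rtt
    return s / (rtt * sqrt(2 * b * p / 3)
                + t_rto * 3 * sqrt(3 * b * p / 8) * p * (1 + 32 * p ** 2))

# The modeled rate collapses as loss rises; TFRC caps any new CCA at this
# collapsed rate even if the path could carry far more, which is the
# "binds new algorithms to Reno's idiosyncrasies" problem above.
print(tfrc_rate(1460, 0.1, 0.01))  # rate at 1% loss
print(tfrc_rate(1460, 0.1, 0.10))  # much lower rate at 10% loss
```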

Harm

  • Harm ranges from 0 (harmless) to 1 (maximally harmful)

  • x = demand: a flow's performance in the absence of competition

  • y = the flow's performance after the introduction of a competitor connection

  • For a more-is-better metric such as throughput, harm = (x - y) / x; for a less-is-better metric such as latency, harm = (y - x) / y

  • Harm is

    • Multi-metric

    • Demand-aware

    • Status-quo biased

    • Practical

    • Future-proof

  • If the amount of harm caused by flows using a new algorithm α on flows using an algorithm β is within a bound derived from how much harm β flows cause other β flows, we can consider α deployable alongside β.

  • Formulation:

    • Worst-case bounded harm might have some problems: it falls to the lowest common denominator (too loose)

    • Equivalent bounded harm: focus on a pair of workloads w and w* in a legacy network where all services use beta. We want to switch the service with workload w* to alpha. Under equivalent bounded harm, alpha is acceptable iff (alpha, w*) does no more harm to (beta, w) than (beta, w*) would

      • Too strict to serve as a threshold

    • Symmetric bounded harm

      • 'do unto other flows as you would have other flows do to you'
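The harm definition and the equivalent-bound test above can be sketched in a few lines. This is an illustrative Python example with hypothetical throughput measurements, not the paper's tooling:

```python
def harm(x, y):
    """Harm to a more-is-better metric (e.g., throughput in Mbps):
    x = demand (performance with no competitor), y = performance with the
    competitor present. 0 = harmless, 1 = maximally harmful."""
    return (x - y) / x

# Hypothetical measurements on a shared bottleneck (Mbps):
beta_alone = 10.0      # a beta flow's demand
beta_vs_beta = 5.0     # beta's throughput against another beta flow
beta_vs_alpha = 4.0    # beta's throughput against the new alpha flow

# Equivalent bounded harm: alpha is acceptable iff it harms beta no more
# than a second beta flow would in the same scenario.
acceptable = harm(beta_alone, beta_vs_alpha) <= harm(beta_alone, beta_vs_beta)
print(acceptable)  # False: alpha squeezes beta harder than beta itself would
```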
