A couple of days ago, I wrote an article about congestion control on the internet. In the comments, I promised a more technical exploration of the subject, and this is it.
Congestion is a problem on the internet. It will always be a problem on the internet, just as it is on our roadways. Yes, bandwidth keeps increasing. So do the number of users, the number of network applications per user, and the required bandwidth per application. Supply will never stay ahead of demand for long, if at all. Sooner or later, it will become logistically or economically infeasible for service providers to add more bandwidth throughout their networks. Sooner or later, enough people will be sending enough traffic over the network’s weakest links that it won’t matter if other parts of the network are overbuilt. The “more bandwidth” crowd will lose to those who recognize the need for congestion control and do something about it.
When thinking about how to handle congestion, it’s possible to look at things from many different perspectives. Here are some examples:
- Different kinds/definitions of fairness.
- The core of the network vs. the edge.
- Distinguishing flows vs. hosts vs. “economic entities” (i.e. users).
To keep this from turning into a major dissertation, I’m going to say little about these.
- On fairness, I think Briscoe has really said all that needs to be said about flow-rate fairness vs. cost fairness. Note that “flow-rate fairness” does not necessarily refer to flows in the sense in which they’re often discussed in the congestion-control literature; it refers to the rate at which things (packets) flow, whether they’re associated with connections or hosts or anything else. It’s an unfortunate re-use of terminology, but that’s life. I think cost fairness is the proper goal/standard for congestion control. If you disagree, keep it to yourself. I might discuss things in terms of cost fairness, but for the most part the concerns I’ll address translate directly to a flow-rate-fairness world, and I don’t want to get bogged down trying to make the flow-rate zealots happy. (The first sketch after this list makes the distinction concrete.)
- As for the core vs. the edge, I bring it up because they’re very different environments calling for very different kinds of congestion management. While routers at the edge can reasonably hope to make flow/host/user distinctions (modulo the mess that NAT introduces), routers at the core have no such hope. They must also process packets at a much higher rate, so the most they can do is make hopefully-useful guesses about which packets to drop when they’re overloaded. The dumbest approach is simply to drop the last packets to arrive (tail drop). The various flavors of RED (Random Early Detection) are slightly better, and I believe Stochastic Fair Blue (which I’ve written about before) is even better, but they’re all still basically guesses. (The second sketch after this list shows the kind of guess RED makes.) While congestion control in the core is a fascinating subject, it’s not what I’ll be writing about here.
- On flows vs. hosts vs. users, most of the people weighing in on these issues have tended to focus on users, and I’ll follow suit. NAT makes a hopeless muddle of the whole thing anyway, so the only thing you can really be sure of is one entity paying the bill for one physical line. There’s a lot of merit to the argument that if there are multiple users huddled behind NAT, then that’s their problem anyway.
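To make the fairness distinction concrete, here’s a toy calculation with entirely made-up numbers. Cost fairness proper (per Briscoe) is about the congestion volume each entity causes over time, but even the degenerate per-user split below shows why counting flows is the wrong basis:

```python
# A toy sketch (hypothetical numbers) contrasting flow-rate fairness with
# the kind of per-entity allocation cost fairness tends to produce.

CAPACITY = 100.0  # bottleneck capacity in Mbps (assumed for illustration)

# Two "economic entities" sharing one bottleneck: user A opens 10 flows,
# user B opens 1. These counts are illustrative, not from any measurement.
flows_per_user = {"A": 10, "B": 1}

# Flow-rate fairness: every flow gets an equal share, so whoever opens
# more flows gets more of the link.
total_flows = sum(flows_per_user.values())
flow_rate_share = {
    user: CAPACITY * n / total_flows for user, n in flows_per_user.items()
}

# Per-user fairness: each paying entity gets an equal share regardless of
# how many flows it opens. (Cost fairness proper charges by congestion
# caused; with equal demand it degenerates to this even split.)
per_user_share = {user: CAPACITY / len(flows_per_user) for user in flows_per_user}

print("flow-rate fair:", flow_rate_share)  # {'A': ~90.9, 'B': ~9.1}
print("per-user fair: ", per_user_share)   # {'A': 50.0, 'B': 50.0}
```

With ten flows to one, flow-rate fairness hands user A over 90% of the link; a per-user split doesn’t care how many connections A opens.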
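And to give a flavor of the guesses core routers make, here’s a stripped-down sketch of RED’s drop decision, following the classic Floyd/Jacobson formulation. The thresholds and weight are arbitrary placeholders, and real implementations add refinements (idle-time handling, spacing drops out via a packet count) that are omitted here:

```python
import random

# A stripped-down sketch of RED's drop decision. Thresholds and weight
# are placeholder values, not tuned recommendations.

MIN_TH = 5      # average queue length below which nothing is dropped
MAX_TH = 15     # average queue length above which everything is dropped
MAX_P = 0.1     # drop probability as the average approaches MAX_TH
WEIGHT = 0.002  # EWMA weight for the average queue length

avg_queue = 0.0  # exponentially weighted moving average of queue length

def should_drop(current_queue_len: int) -> bool:
    """Update the average queue length and decide whether to drop the
    arriving packet. Dropping *early*, before the queue is actually full,
    is what signals senders to back off before congestion gets severe."""
    global avg_queue
    avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len

    if avg_queue < MIN_TH:
        return False   # no congestion: never drop
    if avg_queue >= MAX_TH:
        return True    # severe congestion: always drop
    # In between, drop with probability rising linearly from 0 to MAX_P.
    p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```

The point isn’t the particular formula; it’s that the router is guessing from aggregate queue behavior, with no idea which flows, hosts, or users are actually responsible.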
OK, enough stage-setting. On with the show.