I had been meaning to write about network congestion control for a while anyway, but it seems like now I have an extra reason. George Ou just posted an article called Fixing the unfairness of TCP congestion control, lauding (and somewhat misrepresenting) Bob Briscoe’s excellent paper on Flow Rate Fairness: Dismantling a Religion. Because Ou is an ardent opponent of so-called “network neutrality,” the response has been predictable: the net-neuts have all gone nuts attacking Ou while all but ignoring Briscoe. One example, sadly, is Wes Felter.

This is fine, but for the same cost as detecting unfair TCP, ISPs could probably just implement fair queueing.

I think you’re a great guy, Wes, but did you actually read Briscoe’s paper? You cite Nagle’s suggestion of queuing at ingress points as though it somehow contradicts Briscoe, but that is in fact pretty much what Briscoe ends up recommending. The only difference is in how we define “fair,” and Briscoe makes a pretty good case for cost fairness instead of flow-rate fairness. He points out, for example, how current techniques merely create an “arms race” between service providers and application developers, creating new problems as the race continues but never really solving the old ones. He even takes aim at some of the tricks that providers like Comcast have been using.
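To make the distinction concrete, here’s a toy model of a shared bottleneck. The link capacity, user names, and flow counts are all illustrative assumptions of mine, not numbers from Briscoe’s paper; the point is just that per-flow allocation rewards whoever opens the most connections, while per-user allocation does not.

```python
# Hypothetical 100 Mb/s bottleneck shared by two users.
LINK_MBPS = 100.0

def flow_rate_fair(flows_per_user):
    """Each *flow* gets an equal share -- roughly what TCP converges to."""
    total_flows = sum(flows_per_user.values())
    return {user: LINK_MBPS * n / total_flows
            for user, n in flows_per_user.items()}

def per_user_fair(flows_per_user):
    """Each *user* gets an equal share, however many flows they open."""
    share = LINK_MBPS / len(flows_per_user)
    return {user: share for user in flows_per_user}

users = {"alice": 1, "bob": 10}   # bob opens 10 parallel flows
print(flow_rate_fair(users))      # alice ~9.1 Mb/s, bob ~90.9 Mb/s
print(per_user_fair(users))       # alice 50.0 Mb/s, bob 50.0 Mb/s
```

Under flow-rate fairness, bob’s ten flows crowd alice down to about a tenth of her “fair” share; under the per-user policy, opening extra flows buys him nothing. Cost fairness as Briscoe describes it is more subtle than a flat per-user split (it accounts for the congestion each sender actually causes), but even this crude version shows why the unit of fairness matters.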

While everyone prevaricates, novel p2p applications have started to thoroughly exploit this architectural vacuum with no guilt or shame, by just running more flows for longer. Application developers assume, and they have been led to assume, that fairness is dealt with by TCP at the transport layer. In response some ISPs are deploying kludges like volume caps or throttling specific applications using deep packet inspection. Innocent experimental probing has turned into an arms race. The p2p community’s early concern for the good of the Internet is being set aside, aided and abetted by commercial concerns, in pursuit of a more pressing battle against the ISPs that are fighting back. Bystanders sharing the same capacity are suffering heavy collateral damage.

That sounds like a pretty serious indictment of forging RST packets and so on. What more could you want? The simple fact is that congestion control is necessary, it will always be necessary, and current methods aren’t doing a very good job. This issue is almost orthogonal to network neutrality: it’s about responding to actual behavior, not about preemptive special treatment based on source or destination before any behavior has manifested. So don’t let opposition to Ou, or to his views on network neutrality, color your evaluation.

Gaming-resistance is just as desirable a property in congestion control as it is in other areas we’ve both studied, and right now real users are seeing degraded performance because a few developers are gaming the system. I remember talking to Bram Cohen about this issue while BitTorrent was being developed, for example. He was well aware that congestion control would sometimes slow his transfer rates, and he very deliberately designed BitTorrent to circumvent it. That benefits BitTorrent users, and I just used BitTorrent to download an ISO yesterday so I’m not unaware of its value, but really, what makes BitTorrent users so special? Why should those who create congestion not feel its effects first and most? How, exactly, is it “fair” to anyone else that they don’t?
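The mechanism being circumvented is easy to see in the standard back-of-the-envelope model of steady-state TCP throughput (Mathis et al.), rate ≈ MSS / (RTT · √(2p/3)). The model is per connection, so at the same loss rate a transfer split across N connections gets roughly N times the rate. The MSS, RTT, and loss values below are illustrative assumptions, not measurements:

```python
from math import sqrt

def tcp_rate_bps(mss_bytes=1460, rtt_s=0.1, loss=0.01):
    """Mathis et al. steady-state TCP throughput estimate, in bits/sec.

    rate ~= MSS / (RTT * sqrt(2p/3)), where p is the packet loss rate.
    This is a per-connection estimate: TCP backs off each flow
    independently, which is exactly what opening many flows exploits.
    """
    return (mss_bytes * 8) / (rtt_s * sqrt(2 * loss / 3))

one = tcp_rate_bps()
print(f"1 flow:   {one / 1e6:.2f} Mb/s")
print(f"10 flows: {10 * one / 1e6:.2f} Mb/s  (same path, 10x the share)")
```

Nothing in the transport layer distinguishes ten flows belonging to one transfer from ten independent users, which is why per-flow “fairness” is so easy to game and why Briscoe wants fairness accounted at a different layer.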