In my previous post about this book, I took issue with some of George Varghese’s conclusions but generally spoke favorably of the book. Now, having read chapters 9-14, I’m not convinced the book as a whole is that good after all. While chapters 1-8 contained a lot of fairly objective and generally applicable ideas, the author’s biases really start to show in the middle section. For example, 90% of the techniques described in chapters 11 (Prefix-Match Lookups) and 12 (Packet Classification) are so domain-specific that they’re no longer useful as case studies. Contrary to the cover blurb’s claim that the material will benefit “anyone involved with network implementations,” these chapters are very specific to high-end IP routers of the sort made by a mere half-dozen companies. Anyone working on Fibre Channel or InfiniBand networks, or even on IP gear that’s COTS-based rather than filled with custom hardware, will probably find little of value. The few academics and Cisco/Juniper architects who do care about this material probably know Varghese personally anyway, so there’s little point in putting it in book form.

Chapter 13 (Switching) was a lot more interesting, particularly from the perspective of someone who will shortly be working on a switching network even more exotic than those Varghese mentions. Chapter 14 (Scheduling Packets) is where things get really bad. An entire subsection is devoted to Random Early Detection, but what little is said about alternatives is either misleading or just plain wrong. You’d think RED-PD, CSFQ, FRED or Blue (all cited in Sally Floyd’s excellent list) would deserve at least a passing mention from Varghese. Stochastic fair queuing in general gets a mere two paragraphs, with one citation to a 1991 paper; Stochastic Fair Blue is the best-known member of this family, and that paper’s from 2001. Then Varghese has this to say:

Stochastic fair queuing has two disadvantages. First, different backbone routers can hash flows into different groups because routers need to be able to change their hash function if the hash distributes unevenly. Second, SFQ does not allow some flows to be treated differently (either locally within one router or globally across routers) from other flows, a crucial feature for QoS. Thus, SFQ only provides some sort of scalable and uniform bandwidth fairness.

Wrong. Inexcusably wrong, because Varghese clearly knows better but isn’t telling the reader. The fact that routers might hash differently is not a weakness. In fact, some might consider it a strength, for the same reason that multiple hash functions are considered a strength in SFB. An ill-behaved flow might “slip by” one router or hash function only to be caught by the next, so its packets are still more likely to be dropped than those of well-behaved flows, which is exactly what active queue management is supposed to achieve. The part about QoS is just FUD. Varghese does get around to mentioning weighted RED, and it doesn’t exactly take an algorithmic genius to see how similar modifications could be made to SFB or practically any other queue-management algorithm.
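To make the multiple-hash point concrete, here’s a rough Python sketch of SFB-style accounting. It’s not Varghese’s code and not a faithful reproduction of the SFB paper; the level count, bin count, threshold, and probability increments are all made-up illustration values. Each flow hashes into one bin per level, and a packet is dropped according to the least-penalized bin its flow maps to, so a misbehaving flow has to get lucky at every level (or at every router, if you picture the levels as independently hashing routers in series) to escape, while a well-behaved flow that collides with it in one bin will usually find a clean bin elsewhere.

    import random

    LEVELS = 2       # independent hash functions (hash levels, or routers in series)
    BINS = 8         # bins per level; real deployments would use far more
    THRESHOLD = 50   # queued packets per bin before its drop probability starts rising

    # Per-bin drop probabilities and occupancy counts, adjusted Blue-style.
    p_drop = [[0.0] * BINS for _ in range(LEVELS)]
    occupancy = [[0] * BINS for _ in range(LEVELS)]

    def bins_for(flow_id):
        # One independent hash per level; salted Python hashes stand in for real ones.
        return [hash((level, flow_id)) % BINS for level in range(LEVELS)]

    def on_enqueue(flow_id):
        """Return True if this packet should be dropped."""
        idx = bins_for(flow_id)
        for level, b in enumerate(idx):
            occupancy[level][b] += 1          # a real queue would also decrement on dequeue
            if occupancy[level][b] > THRESHOLD:
                p_drop[level][b] = min(1.0, p_drop[level][b] + 0.02)
            else:
                p_drop[level][b] = max(0.0, p_drop[level][b] - 0.001)
        # Penalize by the *minimum* across the flow's bins: only a flow that has
        # overloaded every bin it maps to (i.e. a misbehaving one) gets dropped
        # aggressively, regardless of which well-behaved flows it collides with.
        return random.random() < min(p_drop[level][b] for level, b in enumerate(idx))

And, as with weighted RED, nothing stops you from folding per-class weights into the threshold or the drop probability, which is why the QoS objection rings hollow.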

Ignoring over a decade’s worth of research and raising bogus objections, just so he can claim SFQ “only” provides “some sort” of fairness, is simply shabby. So, by the way, is a Wikipedia page, written by a research collaborator and business partner, that looks more like a corporate “about the founders” page than something that belongs in an encyclopedia. I understand that authors tend to write more about the stuff they’ve worked on than about competing ideas (I’m not immune either), but that’s no excuse for misrepresenting the state of the art.