Latest Linux Follies

There’s a lot of fun on the Linux Kernel Mailing List this week. There’s way too much BS there to sort through on a regular basis, so unless there’s something particularly interesting going on I usually follow it by reading the excellent Kernel Traffic digests. In this week’s edition there’s a lovely little flame war about coding style. Apparently anyone who uses longer variable names than Alexander Viro prefers, or who tries to write platform-independent code using typedefs or macros (because Linux is basically deficient when it comes to providing those for you), is a “bugger” or perhaps a “wanker” who deserves a place in a coding “Hall of Shame”. It’s particularly rich to see Mr. Virus^H^Ho complaining about people writing “the 1001st broken implementation of memcmp” when he’s the guy who chose to create the VFS Layer From Mars instead of adopting known and proven models for how a VFS layer should work. That, in a nutshell, is what’s wrong with too many Linux kernel developers: each one thinks he’s some sort of Nietzschean übermensch, free by virtue of his creative genius to break the very same rules he demands mere mortals adhere to.

Later on, we get to see Jeff Garzik treating non-standard variable names as comparable to security vulnerabilities, and Linus exhibiting the same “nobody ever thought about this until I came along” hubris regarding philosophy and evolutionary theory that he has always shown with respect to kernel hacking. It’s a good read, all around.

LotR Movie Review

I went to see Fellowship of the Ring today, and was as amazed as other reviewers have been. Here’s my own review explaining why.

Word Challenge

What is the longest meaningful sequence of two-letter words that you can include in a sentence? If you can manage seven or more (I’ve already done six), send me email and let me know.

Freenet FIQ

I sent Ian Clarke a copy of my Freenet Frequently Ignored Questions two weeks ago for review/comment, and haven’t heard back. I guess he’d rather duck the issues, or perhaps there’s another fowl besides a duck involved. ;-) In any case I think I’ve waited long enough.

I’m back!

Sorry about the almost-week-long hiatus. For three days I was in Michigan visiting my mother and had neither spare time nor good net access. For two more days I was driving between Massachusetts and Michigan, and had even less of either. The only thing that kept me sane was listening to Dune: House Harkonnen on tape. Books on tape are a great way to deal with long trips. I had never tried them before, and now I cannot recommend them highly enough.

First Post!

Now that Google has extended its Usenet archive back in time, I thought it would be fun to find some of my early posts from the dim past. The very earliest post I could find, included below in case Google truncates their archive again, dates from June 13, 1989.

The most amusing thing is that, at a time when I used to post on a multitude of topics, the topic for this particular article was basically whether disks should be on servers or clients, with a discussion of throughput vs. latency thrown in for good measure. Now, 12.5 years later and after many detours, I’m still very much involved in exactly those same debates. Plus ça change, plus c’est la même chose.

I also can’t resist pointing out that I was right, even though I failed to address my interlocutor’s ridiculously-low SCSI throughput numbers (this post is so old that it predates my own involvement with SCSI) and misidentified the causes of latency in remote data access. Diskless workstations died a well-deserved death years ago, and the trend – much to my employer’s chagrin – is still toward having lots of storage at the edge of the network. Server-centric NAS still sucks, at a basic technological level. In another 12.5 years we’ll all be using some form of location-transparent storage with intelligent caching/replication, and we’ll wonder how anyone could ever have been so dumb as to do things the old way.

From: darcy@tci.UUCP (Jeff d'Arcy)
Subject: Re: Academic workstations
Message-ID: <258@tci.UUCP>
Date: 13 Jun 89 16:14:03 GMT
In article <32705@bu-cs.BU.EDU> bzs@bu-cs.BU.EDU (Barry Shein) writes:
>>This should give you better performance because local 
>>disks should be faster than networks, but it also adds to the cost and 
>>administration effort.
>>                                                Rick Daley
>>                                                rpd@Apple.COM
>Bad guess, go measure it, because servers almost always have faster
>disks, controllers and bigger disk buffers; remote disks are usually
>faster than local disks (assuming a reasonable network loading which
>doesn't have to be zero.)

Certainly, servers (at least those configured by sane administrators)
are likely to have faster, bigger disks etc. than would be feasible
for individual workstations.  However, latency is likely to suffer due
to the overhead associated with protocol encapsulation etc. especially
in heterogeneous environments.  If the application is doing something
simple such as a huge block transfer the performance hit won't be that
bad, but more complex operations involving random disk accesses will

>An ethernet can deliver data at almost 1MB per second, go look at the
>specs on your standard 27msec SCSI cheapo, 20KB/sec is not unusual for
>maximum disk transfer rate, about 1/40th the speed of an ethernet.

At risk of repeating myself, this observation only applies to the simple
case where transfer speed is the limiting factor.  Unfortunately, latency
is frequently more important and is the first thing shot to h*ll in a
network environment.

>[very good points about parallelism and network administration
> deleted to save network bandwidth]

>However, I will agree that blaming it on the diskless workstations is
>a wonderful alibi, the yokels believe you and rarely ask you to
>actually do your job and find out what's really causing the problem.

>It's the diskless workstations, it's the diskless workstations (we
>know those diskless workstation users will never buy the local disks
>you recommend so it's a safe bet to blame it on them.)
>[attempted disclaimers deleted]

>I am not saying there aren't cases where a diskful workstation is far
>better, I'm just saying most people don't know what they're talking
>about or have motives other than understanding the technology.

"...don't know what they're talking about..."?  Disagreement is not
a sure sign of one party's ignorance, Barry.  I'm not saying that
local disks are the one and only way to go, but they are superior for
a wide range of applications.  I make my living in this field and I
am probably not alone in being offended by your implication that those
who disagree with you on this point are either foolish, lazy or

Found in the Wild

In light of the last two articles, it might be interesting to note that I spent four hours yesterday helping with “due diligence” (a.k.a. “butthole inspection”) evaluating some technology from another company. This sort of thing has become a semi-regular part of my job, but this is the first time I’ve been asked to do an actual code inspection in that context. I was very impressed with the code I saw, which met pretty much every criterion I’ve just described for good code. I’d even say it was better code than most of what I write, though the spec was a little scant. If you’re reading this, M, kudos to you!

Specs vs. Comments

original article

Now then, on to the “proper” use of comments.

1. Write out what you are planning to do in English…
2. Make a copy and label it “documentation.”
3. Go back to the original, fill in all of the logic…label it “source file.”

I’m reluctant to discourage people from writing both specs and comments, but I think that advice needs to be taken with a grain of salt. One of the most common causes of error in programming is having two copies of the same thing that get out of sync over time. This “thing” can be specs vs. comments in your example, variables in a program, or cached disk blocks in a system. Maybe I’m especially sensitive to this sort of problem because this last example is pretty much my specialty (or maybe it’s the other way around) but alarm bells go off for me whenever someone suggests duplicating something without specifying how the copies will be kept in sync.

IMO comments should be complementary to what’s in the spec, not a copy. Explanations of how the program is written, especially those that would apply regardless of module structure or implementation language, belong in the spec. Explanations of how the code is written, that might be hard to understand except when the explanation and the code are close together, belong in comments. Putting information of either type in the place designated for the other is IMO a mistake.

Writing Good Code

original article

Functions are named consistently, variables use Hungarian Notation or some other standard.

I’d add that functions should be not only named but also grouped consistently. Matched pairs of functions (initialize/terminate, hold/release, get/put) should be close together so that when one is changed it’s easy to go and change the other in complementary ways.
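As a small sketch of that pairing convention (the cache structure and the names here are hypothetical, not from any particular code base):

```c
#include <stdlib.h>

/* Matched pair: cache_init and cache_term live side by side, so when
 * one changes (say, a new field that needs allocation) it's hard to
 * miss making the complementary change in the other. */
struct cache {
    unsigned char *buf;
    size_t         len;
};

static int cache_init(struct cache *c, size_t len)
{
    c->buf = malloc(len);
    if (c->buf == NULL)
        return -1;
    c->len = len;
    return 0;
}

static void cache_term(struct cache *c)
{
    free(c->buf);
    c->buf = NULL;
    c->len = 0;
}
```

Keeping the pair adjacent means the malloc in one and the free in the other get read, and reviewed, together.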

As for Hungarian notation, I disagree. I’ve worked on lots of code that used it, and lots that didn’t, and I haven’t noticed any difference in readability. There’s even a possibility when using HN that the name will not match the actual type after someone changes the code, and that can be worse than having to find the declaration. Type information belongs in declarations; names should contain information that can’t be expressed as a type – usually usage.

In the context of that last statement, I should also point out that typedefs and enums should always be used to maximize the amount of information in a declaration. Some of my code uses virtual block numbers, physical block numbers, and buffer numbers, for example. These all equate to integers of the same size, but they’re visibly different types with different expectations. Opaque pointers (“typedef struct foo * bar”) are also very useful to define usage.
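A minimal sketch of what that might look like in C (the type names and the toy vol_map are invented for illustration, not taken from any real code):

```c
/* Hypothetical storage-layer types: all three are the same-size
 * integer underneath, but a distinct typedef for each makes every
 * declaration say how the value is used. */
typedef unsigned int vblk_t;  /* virtual block number  */
typedef unsigned int pblk_t;  /* physical block number */
typedef unsigned int bufno_t; /* buffer number         */

/* Opaque pointer: callers hold a vol_t but can't poke at the struct's
 * members, which would normally live only in the implementation. */
typedef struct volume *vol_t;

/* Toy definition, just so the example is complete. */
struct volume {
    pblk_t base;  /* where this volume starts on the physical device */
};

/* The prototype now documents itself: map a virtual block number to a
 * physical block number on the given volume. */
pblk_t vol_map(vol_t vol, vblk_t vbn)
{
    return vol->base + vbn;
}
```

Even though vblk_t and pblk_t compile to the same integer, a reader (and some static checkers) can now see that passing one where the other is expected is a mistake.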

Lots of comments, clearly written and explanatory…The best comment I heard was from a friend about a former coworker’s code: “It’s English with some C++ thrown in between the comments.”

Yuk. Verbose comments are a waste of time. Yes, if you have to write a paragraph to describe why a tricky piece of code works the way it does or how a complex data structure is linked together, by all means do so. However, “English with some C++ thrown in” is likely to lead to information overload. IMO, comments are like warning flags; they alert the reader to something that they might miss otherwise. I’d say 80-90% of code is usually pretty self-explanatory as long as you have the proper context from design docs and module/function header comments, and further verbosity is undesirable.

As Joel points out, code embodies the experience of the people who wrote and debugged it. The most useful comments are those that share that experience in the form of pointing out things like why seemingly-extraneous code is really necessary. My personal favorite type of comment is one that anticipates my objections and says “we tried xxx, but it didn’t work because…”; that kind of comment can save hours or days of frustration.
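A sketch of that kind of warning-flag comment, with an entirely made-up scenario:

```c
#include <string.h>

/* The code below is self-explanatory except for one surprising line,
 * which gets the one comment. (The record format is hypothetical.) */
static void reset_record(char *rec, size_t len)
{
    /* Don't "optimize" this into a single memset(rec, 0, len):
     * we tried that, but the trailing byte must stay 0xFF so older
     * readers still recognize the record as reset. */
    memset(rec, 0, len - 1);
    rec[len - 1] = (char)0xFF;
}
```

The comment earns its keep precisely because it anticipates the cleanup someone would otherwise make.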

Documentation is good. Write it.

Absolutely. I’m sometimes dismayed by the apparent drop-off in my productivity as measured by lines of code compared to the days when code was all I wrote, but I know deep down that writing the spec first is worth it, which brings me to the single most important point that I think you forgot to mention:

Fix things sooner, not later

In a lot of ways, this is like the Zeroth Law of Thermodynamics; it really underlies all of the other suggestions. Anticipating a problem or future enhancement in the design phase is cheaper than having to go back and rewrite thousands of lines. Anticipating and avoiding type confusion, or code rewritten into an inferior form due to misunderstanding, is also better than suffering the effects after the fact. Finding problems with thorough asserts or unit tests is better than finding them in the field. Finding problems in the field with internal integrity checks that warn you when a data structure first gets corrupted, rather than when the corrupted data structure gets used, is good. Always, in everything you do as a programmer, plan ahead and try to think of ways you can save yourself grief later by acting now. These “defensive” skills are the ones that set the programming elite apart from green recruits, and it’s unfortunate that most people don’t start developing them until they’ve lost a few months of their lives under the gun, desperately searching through post-mortem dumps to figure out bugs that should have been caught in unit tests.
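To sketch that last idea, here’s a hypothetical doubly linked list that checks its own invariants at mutation time, so a bad link trips an assert at the insertion site instead of causing a crash during some later traversal:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical list node; the point is the integrity check, not the
 * list itself. */
struct node {
    struct node *next;
    struct node *prev;
};

static void list_check(const struct node *head)
{
    for (const struct node *n = head; n != NULL; n = n->next) {
        /* every forward link must be matched by a back link */
        if (n->next != NULL)
            assert(n->next->prev == n);
    }
}

static void list_insert_after(struct node *pos, struct node *n,
                              struct node *head)
{
    n->next = pos->next;
    n->prev = pos;
    if (pos->next != NULL)
        pos->next->prev = n;
    pos->next = n;
    list_check(head);  /* catch corruption when it happens */
}
```

In a release build the checks can be compiled out, but during development they move the failure from “mysterious crash later” to “assert right at the buggy line”.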

LotR Personality Test

Just for kicks, I whipped up a Lord of the Rings Personality Test. The infrastructure’s pretty generic, so expect mo’ betta for the future, but for now you can find out what LotR character you are using special BSQ (BullS*** Question) and FDL (Four-Dimensional Logic) technology.