Dead Wood

One of the earliest photos I posted on this site, when I was still getting the hang of digital photography in 2002, was of a fungus on a tree in our front yard. Here’s what it looked like…

old fungus photo

…and here’s what I had to say about it at the time.

There also seem to be some hints that it might indicate a rot problem, but most sites say nothing about that and the tree is clearly in excellent health so I’m not inclined to worry.

Maybe I should have been more inclined to worry, because here’s what the tree it was growing on looks like now.

broken tree

Some time on Monday, during the most torrential downpours I can remember since I was growing up in New Zealand, about half the tree ended up either in our neighbor’s yard or across the hedge, having snapped about six feet above the branch-stub where the fungus had grown. It wasn’t all rotten in there, but now that it’s all down on the ground it’s easy to see that quite a bit of it was. The neighbors say it fell around 4pm; Cindy and I didn’t notice until a little past 7pm after Amy was in bed. We immediately trimmed some branches to minimize damage to our neighbors’ small Japanese maple, which I think will survive minus a branch or two. Cindy and her parents did some more cleanup on Tuesday, but the thickest part of the trunk is still lying across the hedge until the tree-removal people take care of it.

The funny thing is that, when I got home around 5pm in the pouring rain, I remember thinking, “Somebody’s going to drop a branch; I hope it’s not one of ours.” Well, it was, but we were a bit lucky. The tree that fell could just as easily have fallen toward the house, or onto the fence, or it could have wiped out our phone lines. Then again, we have some other trees with sizeable dead limbs or branches, including the one close to the road that I had actually been worrying about. Oh, the joys of home ownership.

Foresight Linux First Thoughts

I hadn’t had any trouble with either of my laptops recently, so I decided to mess around with my desktop instead by installing Foresight Linux on it (to replace Xubuntu). I had first heard about it because I’d heard good things about its package manager Conary, and then because it was being shipped with the Shuttle KPC. Surely, if it was being shipped with an inexpensive mass-market machine like that, it should be pretty easy to use, right? Wrong. Oh, the install worked great. The initial boot was complicated by the way I already had GRUB set up, but I got through it. It brought me up to a reasonably functional desktop environment, at full resolution (which is only 1280×1024) and with full networking (for a common-as-dirt Ethernet chip), which is actually pretty decent.

Then I started to play with stuff, and that’s where I began to see how user-unfriendly Foresight can be. The Emerald Theme Manager (part of Beryl) seemed to be installed, so I figured I’d try some eye candy. No go. Various forum posts seemed to indicate that I needed to install something called fusion-icon to get access to that stuff (along with many complaints about the partial nature of the Beryl/Emerald/Compiz/whatever distribution in Foresight) so I started my first venture into Foresight package management. It seemed like they had wrapped Conary in something called PackageKit, so I gave that a try. Well, PackageKit is bleeping useless. As Seopher explains better than I can be bothered to, it is of little to no use in finding the package you need, so if you don’t know exactly what it’s named you’re out of luck. PackageKit is slow, the interfaces are all lousy, and it has plenty of other shortcomings as well. My unfavorite is that some commands must be run as root, and some must not be, with no real rhyme or reason to it; some of the non-root commands are far more destructive than the root ones. I managed to persuade it to install fusion-icon and the correct video driver, then resolved to use Conary directly from then on. Voila! I had all sorts of silly eye candy, which I’ll probably never use or think about again, but it was an interesting exercise.

My next great disappointment came when I tried to import some settings from my old Xubuntu setup, and xrdb complained about cpp being absent. How can the C preprocessor be missing? Well, it turns out that the system ships with no compiler, which is just inexcusable for a Linux system. Unhappy about having to install the compiler in the first place, I was doubly unhappy to find out that Conary isn’t all it’s cracked up to be either. It happily told me that gcc was already installed, but there was still no compiler actually there. After some digging around, I discovered that I had to install gcc:runtime to get an actual compiler. That’s insane, but it gets worse. Having lost faith in the ability of the Foresight developers to do something reasonable that would result in a working system, I decided to try compiling a “hello world” program. It failed due to missing header files. After even more digging (“conary rq --path” seems to be the most useful incantation they provide) I installed gcc:devel so that I could compile and run a simple program. That’s two more steps and a lot more digging than should have been necessary. Overall, this business of packages and components and “troves” and package names that sometimes look semi-reasonable and other times look like they’ve had a bunch of line noise tacked on the end all seems rather arbitrary, inconsistent, and obfuscatory. Maybe “gcc-objc;4.1.2-11-0.1;x86;/conary.rpath.com@rpl:devel//2-qa/1204703034.211:4.1.2-11-0.1,1#x86|5#use:~!bootstrap:~!cross:~!gcc.core” has some use or value to rPath developers, but it’s just a silly-looking mess to anyone else. (Yes, I got that from pkcon rather than conary, but who the heck cares? How you layer your software should be your problem, not the user’s.)

It’s a good thing I don’t care much about sound on this machine, because that doesn’t work either and I’m sure I could have another whole rant about that if I did care. In the end, all I can say is that the single word that comes to mind for Foresight Linux is immature. Despite all the claims of being oriented toward ease of use, it’s just not ready for general use even among Linux experts.

Creating Jobs

I hear a lot about how rich people create jobs. BS. If anybody creates jobs it’s innovators, not rich people, and in any case why does creating jobs matter more than doing them? You can’t create a job without having someone to fill it. Just because capital is necessary doesn’t mean it should be privileged over labor which is also necessary. Even if that weren’t the case, the oft-prescribed policy of giving tax breaks to rich people so they can create more jobs still wouldn’t work. Some money is spent in non-job-creating ways, some is spent on foreign labor (which is not intrinsically bad but is no justification for domestic tax cuts), and you can only create so many jobs before there’s nobody left to do them. To the extent that we want to reward or stimulate job production and other benefits to the real economy, we should offer tax cuts tied specifically to hiring. Cutting revenue by cutting taxes for those who least need it and who have no direct interest in creating jobs (instead mostly preferring to increase their fortunes by speculating in international stock/currency/futures markets) is just bad policy.

Catching Strays

If I were ever to teach an actual class, it would probably be about how to write networking code that actually works. A lot of people teach about protocols and algorithms, and that stuff’s definitely important, but I see a lot less out there about the nitty-gritty details of how to implement those things in a debuggable and maintainable kind of way. There seems to be a huge gap between the things CS professors tell us we should be doing and the actual facilities that are present on most systems, and it’s increasingly apparent that a lot of programmers are falling into that gap. Even if you already have a complete and clear description of the protocols you’ll be implementing, actually turning that into working (and perhaps even efficient) code is a hard problem all by itself. Here’s another example of a “defensive programming technique” that you can use to save time on mere debugging so you can spend more on the fun stuff.

One of the most common problems network programmers face is the prospect of a message arriving later than it should. Often the incoming message is a reply to one you had sent earlier, but by the time it arrives the object it referred to has already been deleted or has changed state. This is part of the reason why I don’t like timeouts very much, and it’s one of many reasons why it’s a bad idea to send out pointers and expect them to be valid when they’re sent back. Textbooks are full of acknowledgment and sliding-window protocols to handle this for the case of messages sent through a single channel. In some cases you can take advantage of that by closing an old channel and reopening a new one, but that’s not a general solution. Closing and opening channels often is very disruptive and inefficient, you might be dealing with orders of magnitude too many objects to make such an approach feasible, you might have other reasons for not using a protocol that imposes ordering as well as providing exactly-once semantics, etc. For these or other reasons, you might need to deal with this issue yourself.

The way to avoid “stale message” problems is to observe that every message has a context. There’s some reason you’re getting it, something the sender knows – or thinks he knows – about the state or information you hold. If your state changes, it might invalidate the sender’s assumptions and that can be reflected by creating a new context to replace the old. The easiest way to do this is to represent the relevant context in the form of an object, and to establish a rule that every message you handle must identify an object to which it pertains. These objects must have three basic attributes:

  • A unique ID, so you can look it up.
  • A generation number, so you can reject messages from a previous “epoch” of the object’s existence (including another free/allocate cycle). Note that the generation number needs to be large enough to avoid wraparound, just like all that textbook stuff for the single-channel case.
  • A state, so you can reject messages that don’t make sense any more (or perhaps never did).
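A minimal sketch of such a context object and its table, in Python. The names here (`Context`, `ContextTable`, the state strings) are purely illustrative, not anything prescribed above; the point is just that every free/allocate cycle bumps the generation, so an external ID captured in an earlier epoch can no longer match:

```python
class Context:
    def __init__(self, obj_id):
        self.obj_id = obj_id        # unique ID: this object's index in the table
        self.generation = 0         # bumped on every free/allocate cycle
        self.state = "FREE"         # used to reject messages that no longer make sense

class ContextTable:
    def __init__(self, size):
        self.slots = [Context(i) for i in range(size)]

    def allocate(self, initial_state):
        # Reusing a slot starts a new "epoch"; messages carrying the old
        # generation number should be rejected by the dispatch code.
        for ctx in self.slots:
            if ctx.state == "FREE":
                ctx.generation += 1
                ctx.state = initial_state
                return ctx
        raise RuntimeError("no free context slots")

    def free(self, ctx):
        ctx.state = "FREE"
```

Python integers never wrap, but in C you’d size the generation field (per the note above) so it can’t wrap around within the lifetime of any in-flight message.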

The objects used to establish a message’s context might not be objects as they would otherwise exist in your program. They could also represent requests, transactions, asynchronous event handlers, or groups of any of these things – in fact, whatever contexts you use when you think about your program. In many of these cases, the object also provides a handy place to maintain lock state, timestamps, or other information associated with handling the message. It’s OK to create a “global” object to establish context for messages that have a genuinely global effect. The ID can be an index into a table, and in fact that’s where the generation number becomes most useful. Similarly, it can be very useful if the ID that you expose externally is actually an ID+generation internally, so the beginning of your message handler can look like this:

  1. Extract the external ID from the message header.
  2. Separate the external ID into the real ID and the generation number.
  3. Use the real ID to look up the appropriate object.
  4. Check the message’s generation number against the object’s, and reject it (loudly) if they don’t match.
  5. Check the message type against the object’s state (more about this in a moment) and reject it likewise if they’re incompatible.
  6. Process the message, secure in the knowledge that it’s now safe to do so.
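As a concrete (and purely illustrative) sketch of those six steps, assume the external ID packs the generation number into the bits above a table index; the shift width, names, and message types below are all made up for the example:

```python
GEN_SHIFT = 16                      # low 16 bits: table index; the rest: generation
INDEX_MASK = (1 << GEN_SHIFT) - 1

class Context:
    def __init__(self, generation, state):
        self.generation = generation
        self.state = state

def handle_message(table, external_id, msg_type):
    # 1. and 2. Extract the external ID and split it into real ID + generation.
    index = external_id & INDEX_MASK
    generation = external_id >> GEN_SHIFT
    # 3. Use the real ID to look up the object; a bad index is itself a stray.
    if index >= len(table):
        raise LookupError("stray message: no object at index %d" % index)
    ctx = table[index]
    # 4. Reject (loudly) if the generations don't match: the object has been
    # freed and reallocated since the sender learned this ID.
    if generation != ctx.generation:
        raise LookupError("stray message: generation %d, expected %d"
                          % (generation, ctx.generation))
    # 5. Reject message types that make no sense in the current state.
    if msg_type == "RequestComplete" and ctx.state != "RequestInFlight":
        raise LookupError("stray message: %s while %s" % (msg_type, ctx.state))
    # 6. Process, secure in the knowledge that it's now safe to do so.
    ctx.state = "RequestComplete"
    return "processed"
```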

The part about checking state is also important. Basically, the idea here is that certain messages are only valid in certain states, and it’s the same idea as in Microsoft’s Singularity project. If a request object should only get a RequestComplete message while it’s in a RequestInFlight state and not when it’s in a RequestOnQueue state, that can be enforced even before real processing begins by assigning the appropriate states. Watch out for synchronization issues around state changes, though. If you’re using the kind of staged execution model that I’ve recommended elsewhere, these checks should happen before releasing the lock that allows another thread to take over the message-dispatch role. Also, keep it simple. States and messages that are uniquely associated with one another are pretty safe, but if you have too many rules more complex than “if type is X then state must (not) be Y” then you’re probably asking for trouble. More complex state models can protect you from more things, but the one thing we’re really concerned with here is stray (i.e. delayed or duplicate) messages, and chasing lots of false alarms from a too-complex state model is a cure worse than the disease.
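One lightweight way to keep the rules down to the simple “if type is X then state must be Y” form is a flat table consulted before any real processing. The message and state names here are, again, invented for illustration:

```python
# Message types with no entry here are unconstrained; everything else must
# arrive while the object is in one of the listed states.
VALID_STATES = {
    "RequestComplete": {"RequestInFlight"},
    "RequestCancel":   {"RequestOnQueue", "RequestInFlight"},
}

def type_allowed(msg_type, state):
    allowed = VALID_STATES.get(msg_type)
    return allowed is None or state in allowed
```

A table like this stays auditable at a glance, which is exactly the property a too-clever state model loses.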

Adopting this methodology won’t protect you from all stray-message problems. For example, if a message to you is duplicated and the duplicates arrive back to back, then they might get processed in the same generation and state. If that’s a possibility for you, you’ll need to work out another mechanism to deal with it and then subject that mechanism to some more rigorous formal verification. What these simple tricks represent is more of a coding style that inherently avoids some of the most common problems from timeouts and retransmissions and race conditions, complementary to any protocol-specific analysis you might do.

On the Internet, Nobody Knows You’re a Bag of Rocks

Can anybody possibly take this stuff seriously, and pay good money for it? I found the site when J-Walk linked to the “Brilliant Pebbles” which are basically little plastic baggies full of rocks (cost of production approx. $0.10 apiece?) that you put in various places to enhance your stereo’s sound. Riiiight. I can almost see some value in the vibration-damping platforms, though that is rather a lot of technobabble for some springs and stuff. I can see “Codename Turquoise” fooling some people who don’t know what a laser actually is – hint for any such people: it doesn’t just shine light all over the place. If the “Clever Little Clock” has really been “extensively modified using a number of highly specialized techniques” I have to wonder what awesome results they might get using something other than a dime-store trinket as the starting point, though. Ditto for the equally-unspecified “proprietary materials-processing techniques” used to turn a cheap electrical wall plate into the Tru-Tone Duplex Cover that sells for a thousand times more. Most magical of all, though, must be the Teleportation Tweak.

    The Teleportation Tweak has a profound effect on the sound – clearer, more information, greater frequency extension and realism – and is performed during a phone call to Machina Dynamica that can be made via landline phone or cell phone from any room in the house. The tweak itself takes about 20 seconds. The Teleportation Tweak will sound to the listener like a series of mechanical pulses. Further benefits will accrue by performing the Teleportation Tweak for ANY or ALL additional phones in the house. 30 day money back guarantee for this product.

That’s right, for $60 they’ll call you on the phone and make some “mechanical pulse” noises at your stereo, which doesn’t even need to be turned on. And if you’re not satisfied with the results, they say the results get better with each phone they call you on. Wow. Do they have a recording of some industrial equipment, or do they just bang on some pots? This is clearly satire of audiophile mania. What’s not clear is whether the satire is intentional.

Safe Places

A few years ago, Katrina taught us about the risks of building a city in a river delta. This year, we’re learning (again) about the dangers of building cities on a flood plain. In other cases we’ve learned to avoid earthquake faults. When you consider hurricanes and forest fires and water-supply issues, a whole bunch more places get crossed off the “ideal site for a city” list. So, where is it safest to build a city? In the mountains, perhaps? Maybe I can find a chart of homeowners’ insurance rates or something, to see where the insurers think the natural-disaster risk is lowest. Hey, look

My Name

Somebody at work – hi, Elizabeth – asked whether I prefer to spell my name Darcy or d’Arcy. I started to reply, then the reply got so long that I decided to make a blog post out of it. The short version is that I myself use Darcy. The long form follows.

I used to insist on the fully-French version d’Arcy. Besides the fact that it tended to give email programs fits, and occasionally cause my records to be filed under A instead of D, spelling it out all the time was a pain. Usually what I ended up with was a hybrid form like D’Arcy, which I dislike even more than the English form Darcy. By trying for my first choice I kept getting my third, so I decided to settle for second; my signature still looks like d’Arcy but in every other context I use Darcy. As it turns out, Darcy might be the most historically accurate form. The name in its various forms is actually quite a distinguished one from France through England to Ireland, starting before the Norman invasion.

    Its descent is derived from David D’Arcy, of an eminent family in France which deduces its origin from Charlemagne, who took his surname from Castle D’Arcie, his chief seat, which lay within thirty miles of Paris.

    His son, Christopher, having, with a band of his vassals, joined the crusades, died in Palestine, leaving Thomas his heir, whose son, Sir Richard D’Arcy, accompanied William the Conqueror to England.

    From him descended, Sir John D’Arcy, who was high in repute with Edward II. by whom he was appointed justice of Ireland in 1323. He married the Lady Jane Bourke, daughter of Richard, Earl of Ulster, from which marriage are derived all the D’Arcies of Ireland.

Additional fun facts about Darcy (the name) and Darcys (the people who have it):

  • Despite the fact that Joan of Arc is known in French as Jeanne d’Arc, the names are not actually related (as I had thought).
  • Similarly, in addition to the French/English/Irish Darcys, one of the “Thirteen Tribes” of Galway, there are some unrelated Darcys whose Gaelic name means “from the dark one”.
  • There are many Darcy crests, all seeming to have the three roses but with different colors and sometimes different additional devices. I’d have to guess that the one at the first link is the “senior” form, with the others derived from it, but otherwise I’m not sure how they relate.
  • Thomas Darcy of Temple Hurst defied Henry VIII.
  • Sir D’Arcy, Duke of Leeds – a relation? No idea.
  • A Guy Fawkes co-conspirator?

Getting back to the original question, though, it appears that both the Norman/English and Gaelic names are most often rendered as plain old Darcy, with no apostrophe or unusual capitalization. If I’m connected to the former, it’s probably not to d’Arcy, which was only in use for a very short time between d’Arcie and Darcy (with the linguistically abhorrent D’Arcy also more defensible historically). If I’m connected to the latter, perhaps an argument could be made for O’Dorchaidhe, but more likely Darcy still prevails. Either way, the most familiar form also seems most correct as a reference to likely ancestors and relatives.

Changing Times

A while back, I suggested two Firefox addons: Google Browser Sync to synchronize bookmarks and S3 Organizer (a.k.a. S3Fox) to manage files stored in Amazon’s S3. I’ve stopped using both. In both cases it was due to apparent loss of interest by their maintainers. Neither works on Firefox 3, and S3 Organizer kept losing its mind even on my laptop’s version of Firefox 2. The last batch of pictures was uploaded using s3://, which is less featureful than S3 Organizer but seemed to do the job well enough. As of Monday night and Tuesday morning, I’ve also switched to using Foxmarks to synchronize bookmarks between home (Firefox 3) and work (Firefox 1.5).

I’m somewhat amazed by the continuing lack of a standalone S3 browser that handles things like permissions and MIME types and streaming uploads without being an utter piece of crap. Having done some hacking with both S3 and wxWidgets in the past, such a project is moving up on my some-day list.

UPDATE 2008-06-14: I guess Google finally got around to admitting what has been true in fact for quite a while – Google Browser Sync To Be Discontinued. Next time, guys, don’t wait until months after you’ve actually stopped work before telling people that you intend to. In other words, don’t be evil.

UPDATE 2008-07-10: No doubt responding to widespread criticism, Google has open-sourced GBS. Too late, as far as I’m concerned, but I’m sure it will be useful to someone else.

Don’t Use KWord

Last night, KWord (the KDE word-processing program) did something that I consider unforgivable: it threw away my data. I had been working on a spec for work, and KWord started acting all confused, for example not updating the table of contents properly, so I decided to exit and restart to see if things got better. Of course I had been saving frequently, as we’re all – especially we old-timers – conditioned to do to avoid data loss, and the only updates since my last save had been very minor, so I just quit without saving. When I reopened the document, imagine my surprise to find that KWord hadn’t saved anything all evening. The document was just the same as when I had copied it from work. There had never been even a hiccup during any of my saves, just two hours of tedious editing – poof! As someone who actually designs filesystems and such for a living, I’d say this comes in at #2 on the list of cardinal sins for anyone writing code to handle other people’s data.

  1. Destroying data that you had no business touching in the first place.
  2. Throwing away data silently – what KWord did.
  3. Losing data but at least flagging the error.
  4. Failing to store data but telling someone while they can still retry.

It’s moments like these that I wish for certification of software engineers. Anybody who does anything as irresponsible as the KWord idiots did should lose their certification and re-earn it before they’re allowed to publish anything for public consumption again. That’s tongue firmly in cheek, of course; I really do believe in open source and letting everyone participate, but I’m sure people know how I feel.

Of course, I shouldn’t have been using KWord in the first place. Here’s a list of other deficiencies.

  • If there’s a way to include intra-document references, it’s very well hidden. I use this feature a lot, so this really bugs me.
  • Ditto for section breaks or some other ways to reset the page numbering so the first real page (not the title/contents page) is #1. This is only a minor annoyance, but still.
  • Footnote numbering is per-document by default. I find this much less useful than per-page, and there was no apparent way to make it work that way.
  • Footnotes get the same style as the main document. I prefer to have footnotes in a smaller typeface, and I can do that with a new style, but as soon as I apply that style to the footnote the superscript number goes away and if I add it back manually then I lose the benefits of auto-numbering.

I was a little frustrated with OpenOffice’s shortcomings, AbiWord wouldn’t even run on my workstation without hanging, and my initial testing seemed to show that KWord was quite usable. It handled a table of contents reasonably (one of my complaints with the others), the style handling was quite good, etc. Oh well, live and learn. Don’t even get me started on TeX/LyX and other tools which claim to free you from worrying about visual presentation, but only because they render any non-trivial structure in Quasimodo-like unstyle that hurts the eyes to look at. Presentation does matter, folks: it makes documents readable, and you guys royally screwed up the defaults so that your promises go unmet. I guess I’ll just have to go back to OpenOffice and put up with its – relatively minor by comparison – flaws.

Great Minds…

On the issue of taxing investment income, it looks like Joseph Stiglitz is thinking along some of the same lines as I was a while ago. Here’s my version:

    We already tax different kinds of investment income (e.g. short vs. long term) differently, so let’s create many more classes of investment income according to the rule that every level of separation from productive reality increases the tax rate.

Here’s his:

    Why should those who make their income by gambling in Wall Street’s casinos be taxed at a lower rate than those who earn their money in other ways?
    Capital gains should be taxed at least at as high a rate as ordinary income. (Such returns will, in any case, get a substantial benefit because the tax is not imposed until the gain is realised.)