Here are some pictures of Amy in our leafy back yard, wearing her Halloween pants that Cindy made.
OK, last precisely because it’s not least, here’s the latest on Amy. No pictures this time; sorry. The latest milestone is that Amy has become far more adept at using her hands. A while back she discovered her (and other people’s) thumbs for sucking on. Now she seems to have figured them out for grasping as well. Sometimes this is reflected in her clinging, velcro-like, to the shirt of anyone who holds her. Another welcome development is that now, a pretty good percentage of the time, she’s able to pick up and reinsert her pacifier herself when it falls out instead of needing someone else to do it. Sometimes she even gets it the right way up. :) She doesn’t seem to mind if it’s upside-down, actually, though the slurping sounds that ensue can be a bit distracting for the rest of us. It might not sound like much to those who haven’t had kids, but it can be quite an impressive display of coordination for her age.
That’s really it for now. More later; back to work.
One of the common problems in designing any network-based software (which is pretty much all software that matters nowadays) is how to deal with the possibility of sending a request and never getting a response. The most common approach is to set a timer when the request is sent, and abort if the timer fires. This approach is so common that it’s actually a bit amazing that support for it is not usually built into the networking interfaces themselves. Sending the request and setting the timer are typically two completely separate actions, served by completely separate OS subsystems, exacting overhead (e.g. extra syscalls, otherwise-unnecessary timeout threads) and requiring programmers to examine error codes to tell the difference between a timeout and a more local kind of error. Ick. Any number of “frameworks” have been created – I’ve been involved in a couple myself – to provide a better programming model, but they’re still layered on top of the same inherently flawed model instead of replacing it.
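The split the paragraph above describes shows up clearly even in a minimal sketch. Here's what the conventional pattern looks like in Python (the function name and parameters are illustrative, not from any particular system): the timer is armed in a step separate from the send, and the caller has to inspect the exception type to tell a timeout apart from a local failure.

```python
import socket

def send_request(host, port, payload, timeout_s=5.0):
    """The typical per-request timeout pattern: arm a timer
    alongside the request, then sort out errors afterward."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s) as s:
            s.settimeout(timeout_s)   # separate action from the send itself
            s.sendall(payload)
            return s.recv(4096)       # raises socket.timeout if nothing arrives
    except socket.timeout:
        return None                   # the remote side never answered
    # any other OSError (connection refused, unreachable, ...)
    # propagates as a distinct "local" kind of failure
```

Even in this tidy form, every call site carries its own timeout value and its own error-sorting logic, which is exactly the duplication the per-request model forces on you.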
I would contend, though, that per-request timeouts are inherently suboptimal. Relying on them makes the overhead mentioned above a potentially serious issue, and in addition you always seem to end up with a plethora of different timeout values with subtle dependencies that provide breeding grounds for bugs. Too many times I’ve seen software that depends on timeout X always occurring before timeout Y, but then Y needs to be shortened (or X lengthened) to deal with some completely unrelated problem, so the hidden assumption is violated and chaos ensues. Most often, what I find is that there should be one timeout, implemented in dedicated “heartbeat” code, to determine whether the remote node you’re talking to is really still there. Everyone else should wait indefinitely (retransmitting etc. if necessary) until they find out from the heartbeat module that their cause is hopeless, instead of everyone trying to figure that out for themselves. Besides making code that uses this model far easier to verify as correct, it’s more efficient. “Everyone for themselves” is a lot like what used to happen on the internet with congestion, where everyone whose packet was dropped due to congestion would retransmit stupidly, causing the original congestion to persist longer than was necessary and actually hampering efforts to improve the situation. It was stupid then, and it’s stupid now. There’s a difference between letting systems or modules act independently to avoid single points of failure and setting them up to exhibit “mob” behavior that’s injurious to all.
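The heartbeat model described above can be sketched in a few lines. This is a minimal illustration, not an implementation of any particular system; the class name, the `dead_after` interval, and the retry pacing are all assumptions made up for the example. The key property is that liveness is decided in exactly one place, and request code retries indefinitely until that one place says the peer is gone.

```python
import threading
import time

class HeartbeatMonitor:
    """The single place where liveness is decided.  A peer is
    considered alive until no heartbeat has been seen from it
    for `dead_after` seconds."""

    def __init__(self, dead_after=10.0):
        self.dead_after = dead_after
        self._last_seen = {}
        self._lock = threading.Lock()

    def beat(self, peer):
        """Record a heartbeat (or any other traffic) from `peer`."""
        with self._lock:
            self._last_seen[peer] = time.monotonic()

    def is_alive(self, peer):
        with self._lock:
            last = self._last_seen.get(peer)
        return last is not None and (time.monotonic() - last) < self.dead_after

def call_with_retries(monitor, peer, send_once, poll_interval=0.5):
    """Retry indefinitely; only the heartbeat module can declare the
    cause hopeless.  `send_once` returns a response or None."""
    while monitor.is_alive(peer):
        response = send_once()
        if response is not None:
            return response
        time.sleep(poll_interval)   # retransmit pacing, not a timeout
    raise ConnectionError(f"peer {peer} declared dead by heartbeat")
```

Note that `poll_interval` here is just pacing between retransmissions; there's no per-request deadline anywhere, so there's no web of timeout values whose relative ordering can silently break.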
As I’m sure just about everybody who visits here (which is to say just about nobody in the grander scheme of things) has noticed, my activity here has been steadily decreasing for quite a while. I think I’ve stayed with it a lot longer than most people, but it’s still less active around here than it used to be. It’s funny to think that I actually switched web hosts a while back because I was worried about hitting the bandwidth limits at my old host and the new one had a more cost-effective plan for where I expected to be. The reasons for the relative lack of activity are several.
- Amy obviously takes up a lot of my time and – more importantly – energy. It’s not that I’m complaining. She’s a source of endless joy and only occasional frustration, and I’m delighted to have her in my life. Nonetheless, that does mean less of me here online than would be the case otherwise.
- More of my technical focus is at work than used to be the case while I was at EMC, and I don’t feel entirely comfortable posting anything that might reveal what platforms or techniques etc. we’re using to work our magic. I’m also working in areas that are probably of less general interest to readers than the distributed-filesystem kinds of stuff I was doing previously. It’s more nuts and bolts kinds of stuff, so there’s often not all that much to say about it.
- Politics has obviously become more of a focus for me than it used to be, occupying more of that precious time and energy, but I find that posting political stuff here is not very rewarding and I know that most of my readers are less interested in that than in my other topic areas.
It’s this last point I think deserves more explanation. When I think about something political I’d like to write, often sparked by something else I read in my morning “rounds” through the links you’ll see in the sidebar, I’m always faced with a choice: post it here, or on a forum (or both)? More often than not, I’d rather post it on a forum. I do like to think that my writing has some small effect on people’s thinking, and posting to a forum gives what I write a broader audience than if I post it here. That’s also more likely to provoke discussion, allowing new information to be revealed and ideas to be refined. That’s why I have over 3000 posts at Whistle Stopper in the last year and a bit, and only a few hundred here. The decision does go the other way sometimes, particularly with my more rambly philosophical ideas, but most of the time I’d rather post there than here.
What I wonder is this: is what has been true for me true (or likely to become true) in general? Are forums destined to eclipse weblogs as a venue for political conversation on the web? They’re certainly better designed for it; all of the trackback/pingback/whatever hacks in the world will still be less effective than software that was designed from day one to advance dialogue over monologue. There are a lot of problems with most forums, technically and “editorially” and socially, but overall I still think they’re better suited for sharing ideas that are inherently social in nature.
Sorry about the “fat ass” comment spam, folks. One of the relative weaknesses of WordPress vs. other packages seems to be poor support for banning certain IP addresses such as the one this loser is coming from. Once I figure out how to do that more effectively, some of that garbage should disappear.
If you’ve ever tried to deal with a lousy web-hosting company, you’ll probably find this pretty hilarious (in a dark way). If you haven’t, you probably won’t. As a former customer of both JTLnet and Feature Price, though, I laughed enough to make my coworkers nervous.
Apparently copies of Grand Theft Auto: San Andreas have already been pirated, even though it hasn’t been released yet (via Slashdot).
Are increased civil rights a cause or an effect of material prosperity…or both (i.e. a feedback loop)?
I’ve updated the bookmarks etc. over on the right, so – at least for a little while – what you see is pretty much the same group of sites that I actually visit in my morning surf. I left out a few, mostly commercial or infrequently updated, but otherwise that’s pretty much it.