My Front Yard

First, let’s start with something small.

I’m not absolutely sure, but if I had to guess I’d say those are Amanita virosa – a.k.a. Destroying Angel – mushrooms. As the name sort of implies, they’re pretty deadly. Amanita toxin is interesting stuff. Apparently you’ll get really sick for a while, then feel better for a few days, then die of liver failure. What’s going on is that the amatoxin blocks the production of messenger RNA in your cells, causing them to die. The liver is hit hardest because it’s the organ most directly exposed to the toxin, and it’s also involved in cleaning up all the dead cells from elsewhere. I’ve also heard that the toxin is very durable, able to be stored for a long time, heated, frozen, irradiated, etc. without losing potency. Somebody said it was featured as the lethal agent in at least one mystery novel, and it’s easy to see why.

On to slightly less morbid topics. Remember the tree that fell in our yard (and our neighbors’) a while ago? Earlier this week they took it down and stacked the three main pieces in our front yard. I’m sure our other neighbors, already amused by the generally pathetic state of our yard, got a particular chuckle out of that. Here’s what it looked like.


That’s the top of the bottom section in the picture on the right, and there’s a lot missing even there. Most of the rest was very wet and soft, not like oak should be. Thank you, ants. (BTW, these pictures were taken with my new Casio Exilim EX-1080Z camera, which is inherently a better camera than the old Sanyo Xacti X60 in a lot of ways, but it seems that the automatic settings don’t do all that good a job of utilizing the hardware’s capabilities.) Yesterday evening, the Big Truck showed up to take the logs away.

That big crack shows how thin the outer shell of solid wood was; it appeared the moment the claw closed on the log. It was a lot of fun watching them put the logs on the truck, but now it’s all over but the stump-grinding.

Heckuva Job, Johnnie

So, IndyMac Bancorp failed, and Office of Thrift Supervision director John Reich blames it on Senator Charles Schumer’s publication of documents expressing concerns about his office’s (lack of) regulatory oversight. Yeah, that would be Bush appointee John Reich, previously at FDIC since the inauspicious date of January 2001 after serving as chief of staff for Republican Senator (and Bush crony) Connie Mack.

A lot of wingnuts are trying to blame Schumer specifically for publishing his concerns instead of letting things be handled privately . . . as though they would have been in this administration. Give me a break. Does anyone seriously believe that’s how things would have played out? No, OTS would have just sat on the report, and then found some other way to blame the inevitable failure on Democrats. It might have taken a little longer – more time for the principals to buy new careers in government by splitting their ill-gotten gains with John “Keating Five” McCain’s campaign fund before taking off for a vacation in Bermuda – but the result for the taxpayers would have been the same.

To facilitate all this, yet another political appointee in the Bush administration is doing exactly what he was brought in to do. No, that’s not to regulate anything. Don’t be silly. He’s there to assist in whatever political hatchet job the president needs done, just like any of the thousand other “sleepers” Cheney has placed throughout the federal government. If you don’t recognize the pattern, you haven’t been paying attention for the last eight years. What Reich characterizes as “interference in the regulatory process” was merely Schumer trying to ensure that there actually be a regulatory process, using the only method that has ever caused a Bush-appointee-led agency to do its job. At least he tried to avert disaster, instead of ensuring that one would happen and then using it for political gain.

InkTank is Back!

Barry Smith, the guy who wrote/drew/whatever Angst Technology, apparently started doing strips again a while ago – this time with more of a home-life theme. Biology jokes (warning: gross), Wii games, . . . comic genius! It’ll be hard to match Pikacthulhu and “board chow” but, if you like geeky comic humor, you have to check it out.

BTW, today’s XKCD would have been me all over, back when I was coming in super-early. I used to tell people to treat me as though I were on South Georgia Island, which is in the least-inhabited GMT-2 (Oscar) time zone.

Amy at the Carnival

As usual, the Lexington Lions held a carnival in the town’s athletic fields around July 4. We went on Sunday afternoon. Amy was considerably more adventurous than last year, going on the cars and even on one of those rides where sets of seats spin at the ends of long arms that are themselves spinning around the ride’s center (though we did take the opportunity to get off early when another little girl freaked out and stopped the ride). Just a couple of pictures today, but I think they turned out pretty well.

Cindy and Amy on the merry-go-round. The horses were a bit of a new thing but, despite the look of intense concentration, she seemed to enjoy it enough to ask for a second ride. On the second one she even felt brave enough to wave and smile at me each time she went by, but I was too busy waving and smiling back to get a picture.
Amy on the kiddie cars. Again, notice the look of intense concentration.

Sun Announces Boring New Products

Oh yay, Sun has some new JBODs and a new Thumper. I can get a J4500 and a Thumper X4540, providing 48TB and 3GB/s in 8U, for how much? Looks like about $50K, or a bit over $1,000 per terabyte. Or maybe I could get a pair of Jackrabbit-M units, providing the same 48TB and 3GB/s in the same 8U, for about $33K – under $700 per terabyte. But I’d be missing so much if I did that, like . . . like . . . like what, Sun? Maybe there really is some value add, but enough to justify a 50% price premium over stuff that’s already out there? Somehow I don’t think so.

Using Amazon S3 With Boto

I’ve written before about using Amazon’s S3 data-storage service, and even developed a minimal website backup tool for it. One of my complaints, though, has been the absolute cruddiness of the various libraries one might use to interface with it. I’m pleased to say that Boto seems to have fixed that. Boto is a Python library that provides a quite-reasonable interface for S3. For example, here’s how I get a list of my buckets.

$ python
Python 2.4.1 (#1, Oct 13 2006, 16:51:58) 
[GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import boto
>>> conn = boto.connect_s3('my_public_key','my_private_key')
>>> blist = conn.get_all_buckets()
>>> blist
[<bucket : s3dav_2>, <bucket : wombatypus>]
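
As an aside, I believe (check your Boto version’s docs) that Boto will also pick up credentials from the environment if you don’t pass them explicitly, which keeps keys out of your scripts. Something like this.

$ export AWS_ACCESS_KEY_ID=my_public_key
$ export AWS_SECRET_ACCESS_KEY=my_private_key
$ python
>>> import boto
>>> conn = boto.connect_s3()   # credentials come from the environment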

The interface for listing a bucket and finding keys that match a name or pattern is not ideal, but it’s still quite usable.

>>> w = blist[1]
>>> klist = w.list()
>>> for s3file in klist:
...   if s3file.name == 'flume1.jpg':
...     break
... 
>>> s3file
<Key: wombatypus,flume1.jpg>
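
If you already know the exact name, there seems to be a more direct route – hedged, since I’ve only skimmed the docs and get_key may depend on your Boto version.

>>> s3file = w.get_key('flume1.jpg')   # returns None if there's no such key
>>> s3file
<Key: wombatypus,flume1.jpg>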

Lastly, given the key object, I can fetch the contents into a local file with one line.

>>> s3file.get_contents_to_filename('myflume5.jpg')

Very nice. Since I’m often harshly critical of other people’s code, I like taking the opportunity to praise some as well, so . . . good job, Boto folks! The interfaces for dealing with ACLs look reasonable, though I haven’t actually played with them. Specifying MIME types looks a bit ickier (apparently you have to construct the appropriate HTTP headers yourself and pass them to send_file or set_contents_from_file* as such, instead of having an actual type argument to those functions – see the sketch at the end of this post) but survivable. With a little bit of my own extra library glop on top of Boto, it looks like I might be able to write a decent set of tools to manipulate my S3 files. With that plus FusePython, I might be able to prototype my very own S3-based filesystem. Actually, I should blog about that soon, since it’s based on a very different approach from the other S3-filesystem attempts I’ve seen out there (none of which have turned out very well, IMO).
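
Here’s the sort of thing I mean for the MIME-type case – an untested sketch, where flume2.jpg is just a made-up example name and the headers argument may vary by Boto version.

>>> k = w.new_key('flume2.jpg')
>>> k.set_contents_from_filename('flume2.jpg',
...     headers={'Content-Type': 'image/jpeg'})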

Amy at Stone Zoo

On Saturday, we all went to Stone Zoo. I remember going there once shortly after it had reopened, seeing a very depressed-looking polar bear and little else, and getting kind of depressed myself. I’m glad to say that it’s a lot better now – in fact, it’s very nice. One thing they’ve done is try to focus on a few specific geographic areas, which I think is a good idea for a zoo their size. They have a nice black-bear exhibit – two juveniles who put on a good play-fighting show while we were there. Their “Lord of the Wings” show is not to be missed, with open-air flyovers and demonstrations by various kinds of hawks, vultures, crows, owls, and (a new one to me) a red-legged seriema. Amy was very restless through the show, perhaps a bit freaked out, but I thoroughly enjoyed it. So, without further ado, here are my own pictures.

One otter.
Two otters. In case you think all my captions are going to be this terse, I’ll point out here that across from the otter enclosure was an impeyan pheasant, which I’d never seen before. Yes, its feathers really are that iridescent.
An interesting purple, globular flower, with Amy’s hand in the foreground and the otters resting in the background.
Just a dragonfly I noticed on the way to the bird show.
A sleeping jaguar. It had been active earlier, but by this time it had pooped out. If we hadn’t been in a hurry to get to the bird show, I would have posed Amy on the near side of the glass – you can hardly tell it’s there, but it is (of course). Oh well.
Amy climbing on a wolf statue in an artificial cave (which had a real swallow nest in it).
Amy riding the miniature railroad. She did seem to enjoy seeing some of the animals, but I think this was still the highlight of her day.

Employer-Mandated Tools

Here’s another Slashdot Special. This time the context is that somebody asked about the validity of employers mandating the use of certain tools, such as editors and IDEs. What follows is what I posted in response, but I’d like to add one more piece here. The original question referred to frameworks, which everyone seemed to interpret as meaning an IDE but which could also mean a code framework such as Qt or Rails.

When it comes to management requiring use of a certain code framework, I think management is on firmer ground. Code reuse is good. Less duplicated code means less maintenance and less creeping incompatibility. Using a common framework, whether home-grown or external, maximizes the leverage gained from expertise and minimizes the productivity loss as work moves from one developer or group to another. This is an area where architects and such should apply a lot of their thought and effort. Exceptions should of course be considered, and approved in the rare instances where they’re justified, but developers should not be allowed to implement essentially identical functionality using three different paradigms, four different languages, and a dozen different frameworks or codebases. That was one of the mistakes Revivio made; I was never given enough authority to correct it, and it was a major drag on productivity. I’m lucky that I was at SiCortex early enough to establish at least one de facto standard – use of Python asyncore/asynchat for inter-daemon communication – which has served us well and allowed several pieces of code to be transferred readily between developers without significant issue.
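
Just to illustrate the pattern, here’s a minimal sketch – the names are all mine, not actual SiCortex code, and a real daemon would dispatch on messages instead of echoing them.

import asyncore
import asynchat
import socket

class LineHandler(asynchat.async_chat):
    # Handles one connection, one newline-terminated message at a time.
    def __init__(self, sock):
        asynchat.async_chat.__init__(self, sock)
        self.set_terminator('\n')
        self.ibuffer = []
    def collect_incoming_data(self, data):
        self.ibuffer.append(data)
    def found_terminator(self):
        msg = ''.join(self.ibuffer)
        self.ibuffer = []
        self.push(msg + '\n')   # a real daemon would dispatch on msg here

class DaemonListener(asyncore.dispatcher):
    # Accepts connections and wraps each one in a LineHandler.
    def __init__(self, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.set_reuse_addr()
        self.bind(('', port))
        self.listen(5)
    def handle_accept(self):
        sock, addr = self.accept()
        LineHandler(sock)

if __name__ == '__main__':
    DaemonListener(12345)   # arbitrary example port
    asyncore.loop()

Now, away from frameworks and back to tools.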

I think it’s asinine for an employer to mandate the use of certain tools. What they should worry about is whether the data produced as work product is uniform, or at least compatible, regardless of how it’s generated. If code follows the coding rules, who cares what editor was used? If a spec is readable and editable by everyone who needs to read or edit it, again, who cares? The problem is that achieving a certain level of file-format compatibility sometimes effectively forces use of a certain tool. Let’s look at some examples.

  • At a previous job, use of emacs was mandated because somebody had developed a set of emacs bindings to insert standard file and function headers etc. Emacs is not my favorite editor but I’m quite comfortable with it, so I didn’t find this requirement particularly burdensome, but I still think it was wrong. If somebody had developed an equivalent set of vi macros, or inserted the same headers by hand, the effect would have been the same and how they achieved that effect should not have been the company’s business.
  • Fast forward to another job, where emacs was again mandated in one group, this time because that group used tools that were very finicky about tabs and indentation. This being Slashdot, I know everyone will say to fix the tools and someone else will say to use “indent” with a standard profile, but for the sake of argument let’s say that those approaches were infeasible or insufficient. In that case, you’d have a situation where using different tools could yield work product that looked identical in an editor but was not in fact functionally identical, and the most practical way to ensure uniformity of output would be to mandate use of a certain tool. I can’t reach a firm conclusion on this one.
  • What if it was important that IDE workspaces (defining files, build options, etc.) be shareable? This depends heavily on the workflow on that project, but if the workspace is part of the work product and only one tool supports it, then it might not be totally unreasonable to require use of that tool.
  • What about specs? Everyone can generate PDF, which is great for reading and printing, but what if you want many people to edit the document or attach comments during review? Our doc group uses tools that preclude both of these, and I think that sucks. Many others use TeX/LyX, which I also find deficient in many ways. The standard workstation builds all include OpenOffice, so if it were up to me that would be the standard (we barely allow Windows on laptops, let alone workstations, so MS Office would not be an option).

What many of these examples have in common is that the functional requirements applied to employees’ work product often do end up practically forcing use of a particular tool, but that doesn’t mean one should lose sight of the real goal – document compatibility. Requiring everyone to use MS Office, for example, might actually sacrifice document compatibility when half the company switches to a new version of Office and the documents they create are mangled or unreadable for those still using the older version. Been there, done that, more than once. Ditto for people using Mac versions of either Windows or *N*X software, when those versions are subtly different. OmniGraffle sure makes pretty diagrams, but if most people in the company use Linux and they need to be able to modify those diagrams, then you’d damn well better save them in a format they can use; if you don’t, you’re out of line. The goal should be that all code is in a format that everyone who needs to edit code can edit, all specs or manuals are in a format that everyone who needs to edit specs or manuals can edit, all diagrams are in a format that everyone who needs to edit diagrams can edit, and so on. What tool you use shouldn’t matter except to the degree that your choice of tool affects whether others can pick up your data and run with it.

Strange Parenting Moments

On the way home from visiting the Stone Zoo on Saturday – pictures tonight, I hope – Cindy and I had an interesting conversation with Amy. It was generally about some kids not having a mommy and daddy, some having two mommies (we know one such), some only having one parent, parents having to go away or getting sick, and so on. I don’t remember the exact details, but it was that kind of serious stuff. Have I mentioned how much I enjoy being able to have actual meaningful conversations with Amy, or watching her actually strike up a conversation with other kids her own age? I should have, because it’s one of the best things about watching her grow up. Anyway, the conversation was moving right along until Amy brought it to a screeching halt with this.

Maybe we could all be just bricks.

Huh? (That’s another conversational tic she’s picked up from me, BTW.) How does one respond to that? She obviously got the “surrealist” gene from my side of the family. I vaguely remember having a conversation about bricks being an example of something that’s not alive; maybe she was making some sort of connection from not-alive to not-having-parents, but we were both completely nonplussed at the time. All we could do was laugh, so of course Amy laughed too and everything was OK.

SSD Power Consumption

Apparently alarmed about the ongoing decline in their page views and associated ad revenue, the folks at Tom’s Hardware posted an inflammatory article about The SSD Power Consumption Hoax several days ago. It got picked up just about everywhere in the technosphere; one of the latecomers was good old Robin Harris, who immediately used its faulty conclusions to repeat his and his clients’ related message in Notebooks SSDs are dead.

The scoop: the gap between notebook SSD promise and performance has been growing steadily. Now a review in Tom’s Hardware puts the final nail in the coffin. The title says it all:

The SSD Power Consumption Hoax : Flash SSDs Don’t Improve Your Notebook Battery Runtime – they Reduce It

By as much as an hour. A winner with the stupid high-end notebook demographic. The Paris Hilton market.

And people accuse me of offensively characterizing people who reach different conclusions than I have. It is to laugh. In reply, and attempting to remain more civil than Robin had been, I said this.

You misread the article, Robin. If you look at the power-consumption results on page 14, you’ll see that the Hitachi drive drew more power *at idle* than the Sandisk SSD did *under load* – and that doesn’t even count the difference in cooling needs. The MemoRight SSD also used less power at idle than the Hitachi did, and idle is where most notebook drives are most of the time. Those results are starkly at odds with the traffic-generating headline, and until the inconsistency is resolved I wouldn’t jump to any conclusions. What problems do exist with SSD power consumption are also more easily solved than you let on. Some functions can be moved back to the host, others to dedicated silicon which can do them very efficiently. It’s not like hard drives don’t have processors in them drawing power too, y’know. When somebody does a head-to-head comparison where the drives are idle 90% of the time and only reading 90% of the remainder, and properly accounts for the fact that whole-system I/O performance might not scale perfectly with drive performance, then it’ll be worth paying attention to.

Of course, Robin tried to poison the well by preemptively dismissing any methodological criticism as “denial and obfuscation” but I’d like to expand on that last point a bit. At the low end of the scale, a slightly improved I/O rate might prevent a processor from entering its power-saving sleep modes. At the high end of the scale, a slightly improved I/O rate could push a multi-threaded benchmark past the point where context switches or queuing problems degrade performance. In these cases and many others, the result can be more instructions executed and more power drawn on the host side per I/O, yielding a worse-than-deserved result for a faster device on benchmarks such as Tom’s Hardware used. Since the power consumed by CPUs and chipsets and other host-side components can be up to two orders of magnitude more than the devices under test, it doesn’t take long at all before these effects make such test results meaningless or misleading.
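
To make that concrete, here’s a back-of-the-envelope sketch. Every number in it is an assumption I’ve made up for illustration, not a measurement; the point is the shape of the effect, not the exact values.

# All numbers here are made-up illustrative assumptions, not measurements.
host_awake_w = 25.0    # host (CPU/chipset/display) power outside sleep states
host_sleep_w = 5.0     # host power in a deep sleep state
drive_saving_w = 0.5   # plausible idle-power advantage for an SSD

# Suppose the faster drive keeps the host out of its sleep states just
# 5% more of the time, because I/O completions keep arriving sooner.
extra_awake_frac = 0.05
host_penalty_w = extra_awake_frac * (host_awake_w - host_sleep_w)

print "host-side penalty: %.2f W" % host_penalty_w   # 1.00 W
print "drive-side saving: %.2f W" % drive_saving_w   # 0.50 W
# The host-side effect alone can erase the drive's advantage twice over,
# which is why naive whole-system runtime numbers can mislead.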

I’m sure Robin knows a thing or two about storage benchmarketing – which is not to say that he has engaged in it himself, but that he must be aware of it. Workloads matter, and any semi-competent benchmarker can mis-tune or mis-apply a benchmark so that it shows something other than the useful truth. Starting from an assumption that Tom’s Hardware ran the right benchmark, and demanding that anyone else explain its flaws, is demanding that people reason backwards. Instead we should reason forwards, starting with what we know about I/O loads on the class of systems we’re studying, and going from there to benchmarks, results, and conclusions in that order. That’s where the “idle 90% and reading 90% of the rest” comes from. It’s true that on back-room systems, including the ones I work on, host caches absorb most of the reads and writes predominate at the device level. Patterson/Seltzer et al. showed that decades ago, it has remained true, and I’ve pointed it out to people many times. However, what’s true in the data center is not true on the desktop, and even less so for notebooks.

In that context, Tom’s Hardware ran exactly the wrong benchmark and got exactly the wrong results. They screwed up, but that didn’t stop those whose self-interest predisposed them toward the same conclusion from jumping all over the story. For shame. Yet again, wannabes and marketroids have given hard-working and competent implementors an undeserved black eye, and that will have a real effect on the pace of innovation. Thanks a lot, you cretins, and don’t you dare complain about my tone unless you’re willing to condemn the “Paris Hilton” remark first.