Market Growth

In a comment on StorageMojo, Wes Felter had this to say:

“Storage arrays are only about $30 billion a year. Who could build a business on that?”

If the OSS startups are lucky, they can turn that into a $3B market. :-)



There are some interesting aspects of working with a 972-node system on a regular basis. (I hardly ever think of the SC5832 as a 5832-processor system; for the stuff I do it’s more often the number of nodes rather than the number of processors that matters.) For one thing, all of that algorithmic-complexity stuff from college has come back with a vengeance. When n=972, the difference between O(n) and O(n²) can be the difference between four seconds and one hour. Even subtler differences can matter. For example, when I was working on high-availability clusters, eight nodes was about as big as things got. At n=8, the difference between O(log(n)) and O(sqrt(n)) is negligible, and with rounding they’re identical. At n=972, sqrt(n) is approximately three times log(n). When the difference appears in boot-time coordination or data-structure sizing, you really tend to notice.
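Here’s a quick sanity check of those numbers (pure Python; the four-second figure and n=972 come from the text above, everything else is just illustration):

```python
import math

def costs(n):
    # Rough cost indicators at a given node count; units are arbitrary.
    return {
        "n": n,
        "log2(n)": round(math.log2(n), 1),
        "sqrt(n)": round(math.sqrt(n), 1),
    }

for n in (8, 972):
    print(costs(n))
# At n=8, sqrt(8) ~ 2.8 and log2(8) = 3 -- round them and they're identical.
# At n=972, sqrt(972) ~ 31.2 vs log2(972) ~ 9.9 -- roughly a 3x gap.

# And if an O(n) pass over the nodes takes 4 seconds, an O(n^2) pass
# takes n times longer:
seconds_linear = 4
print(f"{seconds_linear * 972 / 3600:.2f} hours")  # about 1.08 hours
```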

Besides the obvious difference in timing, algorithmic differences can often determine whether something works at all. The performance vs. load curve for many, if not most, programs tends to peak at some point and then drop (not level off) after that due to various forms of thrashing. Worse, many programs’ behavior will degrade very quickly once they get behind by a certain amount, as code that was only intended to handle transient load spikes gets exercised under a high continuous load. We’ve found several commonly used utilities and libraries that either crash or slow down so much that they might as well have crashed in our environment. Even well-written code usually needs at least a little bit of tweaking to handle the numbers of connections and messages that we can throw at it almost accidentally.

Another fun difference at this scale is that every single boot of the system exercises your code a thousand times, and might stress some parts (e.g. communications) far more. Bugs that happen only one time in a thousand will therefore happen pretty much every time, on some node in the system, which means you can reproduce them in under ten minutes; a single machine booting every five minutes would take (on average) nearly four days, and in normal usage might take years. This has its downside, especially if one failing node holds up the boot for the rest, but it’s kind of nice in terms of smoking out all those deadlocks and race conditions and stuff inherited from programmers who didn’t know how to avoid them.
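The arithmetic behind that is easy to verify; a back-of-the-envelope sketch (the one-in-a-thousand rate and five-minute boot time are the illustrative numbers from the text, not measurements):

```python
# Probability that a bug which hits one boot in a thousand shows up
# somewhere during a single boot of a 972-node system.
p_bug = 1 / 1000   # per-node, per-boot failure rate (illustrative)
nodes = 972

p_seen = 1 - (1 - p_bug) ** nodes
print(f"chance per full-system boot: {p_seen:.0%}")   # roughly 62%

# At ~62% per boot, a handful of ten-minute boots all but guarantees a repro.
# Compare a single machine booting every five minutes, which needs about
# 1000 boots on average to hit the bug once:
hours = 1000 * 5 / 60
print(f"single-machine repro time: {hours / 24:.1f} days")  # ~3.5 days
```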

Fun fact: my boss has estimated that every time we boot an SC5832 we’re probably booting more Linux-MIPS machines than will be booted that day in the rest of the world. Frightening thought, huh?

Idiotic OS Review

Having used and developed for “real BSD” way back in ’90 when I was at Encore, I’ve always kept an eye on the BSD world, and I’ve been particularly interested in trying PC-BSD at some point. It was thus with some interest that I clicked through to an ExtremeTech review. Besides just being generally light on information, the review was based on installing PC-BSD within the Parallels virtualized environment, which I think is just stupid. As I said in the comment thread,

I’m sorry, but using Parallels to review a desktop OS is just silly. One of the most important things a desktop OS needs to do is detect and set up your hardware correctly. Does it install the right drivers for your video card, letting you use full resolution and features? How painful is network setup (especially wireless)? Does USB work for thumb drives and cameras and whatever else? When you run under any kind of virtualization, you don’t get any of those answers because you’re using a whole different – and much simpler – set of drivers. The review thus becomes practically useless for most readers.

…to which the author replied,

Parallels makes it easy to keep many different operating systems on one computer. There is simply no way to test for all possible system configurations or to account for individual components that folks might have. So we try to give a good overview of the OS and let folks get an idea of whether or not it’s worth downloading.

…to which I reply here: stop being lazy. Your personal convenience is not the only factor that should guide review procedures, and nobody’s asking you to review all possible system configurations. Most OS reviewers do native installs on reasonably common hardware and report the results, and that is an important service to their readers. While it doesn’t necessarily predict what any particular user’s experience will be, particularly with off-beat hardware, it does a far better job than a virtual install of letting them know what overall level of device recognition/setup maturity they should expect. If you don’t have a machine handy on which to do a native install for a review you just shouldn’t be doing OS reviews, and if the only information you intended to convey was how an OS works in the specific Parallels environment then get a damn blog.

Team Work

Having every person work in splendid isolation is not always the best way to get things done, as it leads to people spending hours on things that another team member could do in minutes. In a well-functioning team, work is fluid – it flows from one team member to another according to each person’s expertise and availability, always being done by the person best able to do it. Unfortunately, this never lasts. Everyone’s specialized, and inevitably every team gets divided into those who produce “team work” (i.e. work that is not part of anyone’s own schedule or performance metrics but contributes to overall progress) and those who consume it. Sooner or later, someone becomes overloaded. Schedules slip, someone complains about being blocked, one person is accused of not being responsive enough while another is accused of being too selfish, team spirit is lost, yadda yadda yadda. If you’ve worked in computing, particularly at a startup, you’ve seen all of this. I suspect it’s pretty much true throughout the business world, but it seems to be a particularly common and serious problem in the sorts of places I’ve worked.

Trying to make everything explicit and formal, with every ten-minute task accounted for in schedules and status reports, doesn’t work. It just adds overhead, and moves the finger-pointing into the meetings where schedules are discussed or reviewed. In some cases you can deal with it by making some kind of team work an explicit responsibility, with its own time allocation and performance metrics even if every single task isn’t tracked. Quality assurance and release engineering tasks started getting this treatment so long ago that they have evolved into their own separate disciplines; tool development and performance analysis are well along the same track. Failing that, just allocating some percentage of each individual’s time to “unspecified” team work, adjusting schedules and performance evaluations accordingly, also works and can be combined with similar measures such as Google’s “20% time” for personal projects.

Several jobs ago, we used to practice “management by beer” to deal with this. If you wanted someone to do something for you, you offered them a beer (or other beverage). The IOU was hardly ever called in, of course, because sooner or later the beer went full circle and everything tended to cancel out, but it was still useful. Without any kind of formal tracking, the mere fact of acknowledging people’s efforts on each other’s behalf seemed to make everyone feel better so the finger-pointing never got started. Try it. I prefer gummi bears/penguins/whales to beer, though.

Unintentional Truth

Anybody who cares will know who I got this from.

I will always get more leads from any trade show than you.

12) Leads suck. Leads suck resources, money, time, energy. Leads give the false impression that somebody wants to buy your product.

Yeah, the same guy said both of those things in the same article, and they’re both true. Consequences are left as an exercise for the reader.

Which Way?

Which way is this dancer spinning? My natural inclination is to say counter-clockwise, but I can actually get her to reverse direction pretty easily. The trick is to look away – not away completely, but just enough so that the image is in the corner of your eye. Then, while she’s still spinning in your peripheral vision, think about the leg going in front of the body where it used to be behind, or vice versa, and get into that “rhythm” for several revolutions before looking back. The first few times I found that I’d see her going clockwise just for a moment before she’d snap back to counter-clockwise, then I actually got her stuck going clockwise for a while, but then I could pretty much change direction at will in just a few seconds. I’d love to know what an fMRI scan would show while I’m doing that.

(Hat tip: Freakonomics blog.)


duck-shaped zucchini

Book Review: Enemies, by Bill Gertz

Enemies is like a character in a Greek tragedy. It’s well researched and well written. It describes in significant but accessible detail the devastating damage done to US intelligence efforts by the well known cases of Aldrich Ames and Robert Hanssen, the lesser known cases of Katrina Leung and Chi Mak, and several others. These are important events, which more people should know about, but they probably won’t because of Gertz’s tragic flaw – he just can’t stick to reporting the facts without injecting large doses of right-wing bias. To his credit, Gertz does cover events that reflect poorly on both Democratic and Republican administrations, but there are some significant differences in how each gets presented. Here are some examples.

  • Bad decisions made between 1992 and 2000 are consistently identified as belonging to the Clinton administration, and Gertz hardly ever passes up a chance to mention how the Clintons received campaign contributions from China. By contrast, for bad decisions made after 2000 the identity of the sitting president is only mentioned once, and that president’s once-cozy relationship with KGB man Vladimir Putin is never mentioned at all.
  • Many mistakes are attributed to unnamed “leftist factions” within the CIA, or to “the most liberal court in the country” (the Ninth Circuit) without a moment’s consideration of the actual legal or policy issues involved. By contrast, the political leanings of David Szady – who seems to have played a role in almost every major screwup Gertz documents – are never mentioned but are readily apparent from the many times his attempts to blame liberals for everything are approvingly cited.
  • Clinton is roundly blamed for the decrease in the number of CIA agents involved in human intelligence, even though Gertz well documents how our efforts during that period between the fall of the Soviet Union and the rise of al Qaeda were so marked by foreign subversion of our intelligence services that they actually harmed US interests. Why would more of that have been good? Simultaneously, the damaging departures from the intelligence community since 2000 – from the well known Clarke and Scheuer to the less known but more influential Kappes and Sulick – aren’t mentioned.
  • Gertz never even mentions the single biggest intelligence failure of our generation, in which our entire intelligence system was subverted and used by men like Chalabi and Ghorbanifar to advance their own interests. Why place moles in the existing intelligence agencies when you can get people like Feith and Ledeen to set up a new one that will effectively displace them?

If Gertz wants to take credit for sounding an alarm, he should also take blame for sounding it in an echo chamber where the majority of the population will never hear it. He had an opportunity to tell a compelling story that might have gotten the attention not only of regular citizens but of people with their hands on the levers of power, but he chose not to do that. Despite the wealth of information he provides, his overwhelming bias distorts the whole picture of US counterintelligence beyond usefulness. Maybe his next book should be about how putting partisanship ahead of patriotism has harmed our security.

Does Beer Make You Smarter?

Apparently some researchers in New Zealand think so.

Top boffins at the University of Auckland, New Zealand, by studying the mental performance of specially-created transgenic rats well supplied with drink, have found that moderate daily alcohol intake conferred “heightened cognition”.

More likely, the alcohol-consuming rats just thought they were smarter. The other rats thought they were acting like idiots.