Chess Programs

In the original context this was a reply to a post citing an interview with Feng-Hsiung Hsu, the designer of Deep Blue. The thrust of the article was that Fritz isn’t All That and he (Hsu) could do better.

Hardware speed isn’t everything. Why else would programs running on the exact same hardware show such great variation in ability? Fritz might not be able to evaluate as many positions per second as Deep Blue, but it evaluates them better. Kramnik and Kasparov are fairly evenly matched. Fritz seems fairly well matched with Kramnik, and Deep Blue with Kasparov. It doesn’t exactly take an advanced degree in math or logic to figure out the transitive relationships and conclude that Fritz and Deep Blue are a lot closer in strength than the raw hardware numbers would indicate.
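To put some arithmetic behind the transitive argument: under the standard Elo model, a chain of near-even pairings implies the endpoints are near-even too. The ratings in this sketch are hypothetical, chosen purely for illustration; only the expected-score formula is the real one.

```python
# Standard Elo expected score: E = 1 / (1 + 10^((Rb - Ra) / 400)).
def expected_score(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Hypothetical ratings, chosen only to illustrate the transitive argument;
# none of these are official numbers.
ratings = {"Kasparov": 2840, "Kramnik": 2810, "Fritz": 2800, "Deep Blue": 2830}

# Kramnik vs. Kasparov, Fritz vs. Kramnik, and Deep Blue vs. Kasparov are
# all near-even pairings, so Fritz vs. Deep Blue must be near-even too,
# regardless of how many positions per second either one evaluates.
fritz_vs_deep_blue = expected_score(ratings["Fritz"], ratings["Deep Blue"])
# fritz_vs_deep_blue comes out around 0.46 -- close to an even match.
```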

Hsu says he could write a program today that would kick the stuffing out of Deep Fritz

So why doesn’t he? Talk is cheap. Despite all of its raw hardware speed, Deep Blue would not have beaten Kasparov had it not been for Joel Benjamin spoon-feeding it tips on how to beat one specific player. Kramnik, Anand, or any of a half-dozen other top grandmasters would have kicked its ass because it was not tuned to their styles. Fritz, by contrast, is not so reliant on tuning and would probably do better in a tournament setting against multiple top-level opponents. When Hsu can write a program that’s even IM level, without having a GM hold his hand, his claim will have some credibility.

Freeloaders II

Aaron Swartz and I have been continuing the discussion from my Freeloaders article via the comment mechanism (which, BTW, is something I’m at this point very glad I implemented). Since this response is kind of long, and it’s a topic I’m sure interests many of my readers, and I’d otherwise be short on content today, I decided to put it on the front page.

I couldn’t get a response from the site itself, but I did manage to find the Google cache of the “Warchalking Legality FAQ” Aaron cited. The sophistry therein is so extreme that it actually had me chuckling. Let’s look at a couple of examples:

In my dictionary, theft is defined as taking something from someone else without their permission with the purpose of preventing them from using it.

Two can play that game. Aaron is obviously using Merriam-Webster, so let’s look up “steal” there:

intransitive senses
1 : to take the property of another wrongfully and especially as an habitual or regular practice

transitive senses

1 a : to take or appropriate without right or leave and with intent to keep or make use of wrongfully [stole a car] b : to take away by force or unjust means [they've stolen our liberty] c : to take surreptitiously or without permission [steal a kiss] d : to appropriate to oneself or beyond one’s proper share : make oneself the focus of [steal the show]

Interesting. All of the above except transitive 1b fit the act of using someone else’s wireless network without the owner’s knowledge or permission very well, and the entry even uses stealing a car as an example of a primary sense (without reference to whether the owner notices). Don’t like Merriam-Webster? OK, let’s try Cambridge:

to take (something) without the permission or knowledge of the owner and keep it or use it

Bingo! Note, again, the explicit inclusion of the case where the owner is unaware of the taking, and the absence of any distinction between flat and metered cost structures. Legal loophole-finding aside, in a strictly linguistic sense what Aaron advocates is clearly stealing. Relying on dictionaries instead of common sense can be dangerous.

The FCC, the government organization which regulates the airwaves, has specially reserved the “stations” used by wireless networks for public use. This means that anyone is allowed to broadcast what they like on them and all devices listening to those “stations” must be prepared to receive unexpected interference.

I don’t think that’s the sense in which “must accept interference” was meant. The FCC only cares about interference at the analog RF-engineering level; what they’re saying is that radio equipment in the unlicensed bands assumes responsibility for discriminating between the signal it wants and others at that level. Once you cross that analog/digital line, though, it’s no longer just FCC regs that apply. Now you’re into the realm of laws that govern use of other people’s computing devices (a base station is both a radio device and a computing device) and those laws have something very different to say about your “right” to use other people’s computing equipment.

Before I depart, I’d like to address a couple of the things Aaron actually managed to get right. Here’s one:

most people I know invite folks to use their wireless network. It’s a kind and neighborly thing to do.

I find the first claim dubious, although if anyone lives in a specialized enough environment for it to be true it might be Aaron. The second claim, though, is something I wholeheartedly agree with. It would be great if more people left their networks open deliberately, and indicated that fact via warchalking, but it must be voluntary.

The entertainment industries’ assertions to the contrary, sharing with a stranger is a generous, not criminal, thing to do. It’s a sad commentary that we’re so suspicious of this hospitality.

Again, I agree. However, I also have to point out that comparisons between “leeching” (warchalking is the advertisement, not the use itself) and file sharing are suspect. The brokenness of copyright law has no weight here, because we’re talking about using other people’s resources – not their ideas or artistic expressions. The difference in standing between the RIAA/MPAA in the one case and the person who directly pays for bandwidth in the other should also be pretty obvious to anyone who has spent as much time around lawyers as Aaron has. Talking about generosity and neighborliness is all very mom-and-apple-pie, but it doesn’t really apply in this case. Giving someone a pie is neighborly; having it stolen off your windowsill as it cools is something else entirely.

Columbus Day Pictures

I’ve put up some pictures from this weekend’s hiking trip. The original plan was to backpack into the Great Gulf Wilderness and do some light day-hiking on Saturday, ascend the Six Husbands trail between Washington and Jefferson on Sunday, and then come out yesterday, but that plan was quickly scrapped due to poor weather. Instead, we stayed at our original campsite, climbed Carter Dome on Saturday, and then hiked through Castle Ravine to Emerald Bluff on Sunday. The weather was consistently dreary but not seriously wet, and we had a really good group of people, so the whole thing turned out quite well.

The wind on Sunday night, between about midnight and three or four in the morning, was amazing. Not only was it strong, but it was weirdly localized. Usually wind just sort of comes from one vague direction, but with this wind you could very clearly hear it come in from one direction, pass overhead, and then depart in the opposite direction. You could point to it at each moment, as though it was an invisible freight train passing overhead. On Monday morning we heard that there had been gusts up to 85mph at the top of Mt. Washington and, given the way the clouds were zipping along, we were surprised the number was that low.

Stupid, Uneducated, and Venal

On the way up to New Hampshire on Friday, there was an interview on NPR with Keith Bradsher about his book High and Mighty, which is about SUVs and what’s wrong with them. I was impressed by the author’s ability to respond to just about every question with clear explanations and statistics, and by his ability to remain calm even when pressed by the inevitable SUV-owning callers. Today I went to Amazon to put the book on my wish list, and while I was there I checked out the reader reviews. What I found was saddening. Way too many of the reviews were based not on how informative or well written the book was, but almost purely on the reviewers’ own opinions about SUVs; owners gave the book one star, eco-types gave it five. Way too many of the SUV-owning reviewers spent their time casting aspersions on the author’s motives rather than discussing the book, or “refuting” the automakers’ (not Bradsher’s) profile of owners with their own sample of exactly one, or generally doing anything but reviewing the book. Ironically, their behavior only tends to reinforce the impression many people already have of SUV owners as insecure, self-deluding jerks.

On a similar note, the pattern established in my informal survey of vehicles parked at White Mountain National Forest trailheads still holds. For example, at the Appalachia trailhead I counted only two SUVs, and a couple more pickups, among thirty or forty vehicles. According to hundreds of observations I’ve made, the ratio of SUVs to other vehicles drops sharply as you approach anything resembling the real outdoors. Despite the outdoorsy image promoted by marketing types, the only outdoors most SUV owners ever seem to see is the mall or restaurant parking lot, where the percentage of SUVs is consistently highest.

Storage Predictions for 2003

My big bet is that storage is going to be the interesting area in high-tech next year…and I don’t just say that because I happen to work in that area. CPUs, video cards, and memory will all get faster in not-very-interesting ways. Wireless networks will grow in not-very-interesting ways (mostly; see below). But there will be heaps of storage-related news:

  • Portable removable storage devices will be a growth market. Wireless versions, probably based on some flavor of wireless 1394, will be particularly handy.
  • Someone will start shipping some form of removable storage (probably optical) that offers 50GB or more on something the size of a CD or smaller. Initial versions will be write-once and expensive; lower costs and rewritable versions won’t hit until 2004.
  • Products and services to synchronize and distribute data will grow steadily as people want to share that data between more and more devices.
  • People will continue to ignore distributed filesystems and their cousins as alternatives to the above-mentioned synchronization nightmare.
  • iSCSI will continue to be hyped until, around mid-year, people realize that it doesn’t give them anything they didn’t already have. That, plus a continuing soft IT economy, will create a wave of rolled-back claims and changed strategies from all the router-company refugees behind the hype.
  • The BFDA (Big Fine Disk Array) vendors will continue to pay more attention to lawsuits among themselves than to designing and implementing actual products that meet customers’ needs.
  • More and more storage-related functionality will be packaged as separate appliances (see above for the reasons). People will eventually realize that all this “virtualization” hype is just a bunch of garbage anyway, but will continue to support the appliance approach for other kinds of functionality.

Pontius Pilate

Den Beste actually made a good point in his article about last night’s “Authorization for the Use of Military Force Against Iraq” – and he even managed to make his point honestly for once. I’m referring specifically to this point:

Yet another leftist rhetorical device has been to actually demand that Congress pass something that is labeled “Declaration of War” or includes the phrase “declare war”…Irrespective of the wording, this bill is a “declaration of war” and legally fulfills that constitutional requirement

That’s absolutely correct. We have declared war on Iraq. It’s worth noting that there’s a distinction between war as a diplomatic state and war as an operational state. Nations have been technically (diplomatically) at war for years – even decades – without a shot being fired. Conversely, many wars have occurred without declaration. While I believe “real war” against Iraq would be premature and counterproductive, I have absolutely no qualms about being at war against Iraq in a diplomatic sense. It’s impractical to wait until the moment before the bullets start to fly for such a declaration, and doing it now provides a clear signal – for both Iraq and our intransigent non-allies – of our resolve.

The question remains, though: why didn’t Congress make it clearer that they have issued a declaration of war? Why was it phrased as a “resolution” instead? The answer comes down to one word: deniability. Many people who voted for the resolution made quite a point about how they were not voting for war…when in fact that’s exactly what they were doing. What they want people to think is that they didn’t declare war, but rather “washed their hands” and delegated the actual decision to the executive branch. Unfortunately, the Constitution doesn’t actually allow them to do that, and that’s not really what they did, but doing things this way allows them to at least make the claim that they weren’t involved in declaring war and not have the lie be so blatant that even the average voter will notice. We do have an election coming up, after all, and poll after poll has shown that the majority of the people are against war.


Freeloaders

OK, I know this is going to alienate some of my regular readers, but I just have to get it off my chest. The other day I heard the Marketplace piece with Aaron Swartz talking about “warchalking” and using other people’s wireless networks without permission. When asked whether he thought this might be wrong, his response – and I’m too lazy to get an exact quote – was something along these lines:

Most people pay a flat rate for their connection, regardless of how many packets they send. It doesn’t cost them a penny for me to use their network. They wouldn’t even notice I was there unless they were looking.

I just have to ask: how do you know you’re having no effect? How do you know whether or not they are, in fact, trying to download something or play an online game at that particular moment and really are affected by your use of their connection? How do you know whether your usage is counting against their bandwidth cap? What about legal liability or damage to reputation that might result from your traffic that gets tracked back to them? You could be affecting them in all sorts of ways, flat monthly connection fee or no flat monthly connection fee. If you don’t know that your presence is truly innocuous, how can you say there’s nothing wrong with using their connection? Is it OK to take someone else’s car for a joy-ride so long as you refill the gas tank when you’re done?

Information wants to be free. Bandwidth doesn’t. Arguments that apply to sharing the former do not necessarily apply to sharing the latter, and “permission is implied” is BS too. If someone has not explicitly granted you permission to use a resource they pay for, it’s not OK for you to use it.

No Romans Allowed

Not surprisingly, David McCusker took umbrage at my comments yesterday about Trusting Systems. He seems to like umbrage, but I guess that’s OK because it’s one of my vices as well. In the course of his response, he had this to say:

I really hate the ethos of expertise when it advocates black boxes of complexity whose territory is off limits to the great unwashed masses of ordinary developers. Besides being quite offensively elitist, this design style also creates bad specs.

Not necessarily; I’ve also seen it lead to some extremely good specs. The problem with most specs is that they’re not complete; they do not completely specify the behavior of the component. In the most egregious cases there are whole classes or functions that are not described in the spec. Much more often, though, even when every function and its arguments are defined, some information is missing – resource (especially stack) consumption, locking requirements, error conditions, etc. Those should all be considered part of the interface, not the implementation. In the memory-manager example David cites, the problem is really that he needed information that wasn’t in the spec; the solution is to make it part of the spec. David even touches on this when he refers to “undefined behavior for some sets of inputs” and “overall summaries are not quite true” but doesn’t seem to appreciate its significance.
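As an illustration of what a more complete spec might look like, here’s a sketch in Python using a hypothetical pool-allocator interface (my example, not David’s memory manager). The point is that error behavior, locking requirements, and resource bounds are stated as part of the contract, rather than left for users to discover by reading the code:

```python
import threading


class PoolAllocator:
    """Fixed-size block allocator (hypothetical example component).

    Spec items that are too often omitted, stated here as part of
    the interface rather than the implementation:
      * Errors: allocate() raises MemoryError when the pool is
        exhausted; it never blocks and never returns None.
      * Locking: all methods are safe to call from multiple threads;
        callers need not hold any lock of their own.
      * Resources: allocate() and free() run in O(1) time and use a
        small, bounded amount of stack.
    """

    def __init__(self, block_size: int, num_blocks: int) -> None:
        self._lock = threading.Lock()
        self._free = [bytearray(block_size) for _ in range(num_blocks)]

    def allocate(self) -> bytearray:
        with self._lock:
            if not self._free:
                raise MemoryError("pool exhausted")
            return self._free.pop()

    def free(self, block: bytearray) -> None:
        with self._lock:
            self._free.append(block)
```

A user of this component never needs to look inside it: everything required to use it safely, including what happens when things go wrong, is promised up front.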

The goal is that the user of a component must know what to expect, and must be assured that expectations once developed will not be violated. A black-box approach with a good spec is actually the best way to achieve that goal. Lacking a good spec, though, users are forced to figure out for themselves what the component actually does. Whether they do so by examining the code or by poking at it experimentally, the result is likely to be the same: hidden dependencies on behavior that might change. At first it might seem clever to discover some obscure aspect of how a component or system works and take advantage of it, but that cleverness starts to look like stupidity when things break because of the dependency. What usually happens is that stuff that was never intended to be part of the interface becomes so because people depend on it, and ossification sets in. Look at all the backward-compatibility cruft in the web, or Windows, or the x86, for examples – and don’t think open source is a panacea because Linux and Apache are afflicted too. The brittleness comes from the proliferation of hidden dependencies, not from a design approach that – when properly used – prevents such proliferation.
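A minimal sketch of the hidden-dependency trap, using a hypothetical Counter class: the first caller uses only the specified interface, while the second reaches into an internal detail that happens to work today but was never promised:

```python
class Counter:
    """Spec: count(x) records an event; total() returns how many
    events have been recorded. Nothing else is promised."""

    def __init__(self):
        self._events = []          # internal detail, not in the spec

    def count(self, x):
        self._events.append(x)

    def total(self):
        return len(self._events)


c = Counter()
c.count("a")
c.count("b")

# Fine: depends only on the documented contract.
assert c.total() == 2

# Hidden dependency: reaches into _events, which the spec never
# mentions. It works today, but the author is free to replace the
# list with a plain integer counter, at which point this line -- and
# everything built on it -- breaks.
last = c._events[-1]
```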

I realize that the sorts of specs I’m talking about can get complex, and it might be hard at this point to believe I’m at least as much of a minimalist as David (whose design style as expressed on his website often seems hopelessly baroque to me). Occam is my hero, but Occam’s Razor says entities should not be multiplied beyond necessity and some things are just inherently complex. If a component spec is getting too complicated, it’s probably a sign that the interfaces between it and its neighbors were drawn at the wrong points. Dumbing down the spec – or the component itself – won’t help. What you need to do in such cases is take a second look from a higher level and decide where the lines really should be. Unnecessary dependencies should be eliminated ruthlessly. Components that remain obdurately interdependent should be closer to one another in the overall design than things that are more separable. “Clusters” of interdependent components should be isolated in a higher-level component seen by the rest of the system through a simpler interface. The goal is not to limit the amount of knowledge necessary to understand the whole system, because that’s unrealistic, but to limit the amount of knowledge necessary to achieve positive results within each part.
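The “cluster of interdependent components behind a simpler interface” idea is essentially what the design-pattern books call a facade. A toy sketch, with hypothetical compiler-frontend components that share state internally but present one simple interface outward:

```python
# Two hypothetical components that are obdurately interdependent:
# the parser populates a symbol table that the checker also reads.
class SymbolTable:
    def __init__(self):
        self.names = {}


class Parser:
    def __init__(self, symbols):
        self.symbols = symbols

    def parse(self, src):
        for name in src.split():
            self.symbols.names.setdefault(name, "unknown")
        return src.split()


class TypeChecker:
    def __init__(self, symbols):
        self.symbols = symbols

    def check(self, tree):
        return all(name in self.symbols.names for name in tree)


# The facade: the rest of the system sees one simple entry point and
# needs no knowledge of the shared symbol table inside the cluster.
class Frontend:
    def __init__(self):
        self._symbols = SymbolTable()
        self._parser = Parser(self._symbols)
        self._checker = TypeChecker(self._symbols)

    def compile_check(self, src) -> bool:
        return self._checker.check(self._parser.parse(src))
```

The parser and checker stay close to each other in the design, as they must; the rest of the system only ever learns the one-method interface of Frontend.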

The comment about elitism takes us in a whole different direction. Many systems are necessarily complex. “Hands off” is too often grounded in elitism or territoriality, to be sure, but there are other reasons as well. Intellectual-property protection is an obvious one. Another is avoidance of those hidden dependencies; if you’re looking at my code to decide how to write yours, do I trust you to inform me of all the dependencies you’re building in? In my experience it’s often more painful for everyone in the long run to allow such “dependency creep” instead of making one person the “gatekeeper” for all explanations and changes regarding a critical component. Reviewing code is good, insuring against the “developer hit by a truck” scenario is good, but poking through code just for the hell of it is questionable (especially in a commercial environment) and doing so with the specific intent of discovering internal behaviors that can then become the target of hidden dependencies is unquestionably bad. Elitism has nothing to do with it. Someone could admit that they’re an inferior programmer and nonetheless insist on acting as gatekeeper, and be well justified in doing so. If anything’s elitist it’s the attitude that anyone should be allowed to go wherever he chooses and change or depend on anything he sees without consulting the primary developer(s). In open source, where it can be done so that the only thing wasted is one’s own time, that’s probably OK, but as soon as it starts to impact other people – and in the commercial environment that’s pretty much all the time – it’s much less OK.

The Roman army conquered Europe and large chunks of other continents not because they had superior warrior-champions but because they had a system that allowed mediocre but well trained soldiers to work together and defeat the greatest warrior-champions on the other side. Consider, in that context, the following:

If it’s not simple enough so a single person can keep in mind one design at the same time as many others in a system, then it’s wrong. And when I say a “single person” I mean any handy bright developer.

In my opinion that’s too limiting. There are systems that are, of necessity, too complex for “any handy bright developer” to understand at anything more than a PowerPoint level. The Romans taught us that a well organized group can have capabilities well in excess of its members acting separately. Should they have stayed home? Should we? Or should we have faith in a system that at least sometimes does allow us to go beyond the “throttle” of individual capability?

Still the Champ

I’ve been trying AllTheWeb as an alternative to Google. However, when I needed to find out what the “RJ” in RJ-11 or RJ-45 stood for (turns out to be “Registered Jack”) the same search terms got me the desired result on Google but not AllTheWeb. Maybe switching was premature.

Where’s the Money?

Den Beste is demonstrating his penchant for unsavory argument again. This time the subject is the European Union and how it will be paid for.

like any government the European Commission and the European Parliament are eventually going to want to start passing measures which call for expenditures of cash by the bucket. (It is the nature of government to spend money.) Where’s it going to come from?

You can get as creative as you want about the details, but ultimately it can only come from one of three places. Either taxes get increased, existing spending programs get cut, or you borrow it.

The remainder of Den Beste’s argument is based on an assumption that these are the only options…but are they? What about increasing efficiency, or decreasing duplication between EU functions and member-nation functions? Do those not free up funds which can then be used instead of resorting to the measures Den Beste enumerates? There’s also a third possibility, which should be familiar to any anti-liberal from the Reagan years:

Make the economy grow, increasing tax revenues without increasing rates.

I’m no fan of “trickle-down” economics, which were really the rich people “trickling” on the middle class and poor when you come right down to it. However, the idea that the EU might economically benefit its member nations at least deserves some consideration alongside the three alternatives Den Beste mentions. I’m sure he knew that, but it would have interfered with his rant so of course it got left out.