Thoughts About Libertarians

Simplification is a standard teaching technique that is beginning to have unintended consequences for our political and economic discourse. In a first-year physics course, students do basic trajectory and conservation-of-momentum calculations without considering a hundred factors (friction/drag, lunar gravity, relativistic effects…) at first, to avoid obscuring the basic lessons behind a swarm of details regarding real bodies in motion. Similarly, first-year economics students are taught about simple supply/demand curves and capital flows before diving into the complexity of real-life economic interactions. Unfortunately, the pedagogical purpose of these simplifications is rarely driven home. The students, many of whom are only taking that one economics class to satisfy a distribution requirement and will never take another, mistake the simplified model for something applicable to real people in real markets. They fall in love with its abstract elegance, uncluttered by all of the “grunge” that adheres to everything in real life, and set about preaching the gospel to friends and strangers online. The effect is pollution of online discourse with half-baked theories and half-informed opinions coming from people who have never actually held a job, bought a house, or generally experienced economic life except as someone else’s dependent. The flaws in their thinking are of course answerable on an instance-by-instance basis, but by sheer weight of numbers and obstinacy they can often place an intolerable strain on adult discussions of economic (or political) issues.

One of the tipoffs that the Libertarian Youth Brigade members are all taking their ideas from the same few books or websites instead of learning to think for themselves is the adoption of identical terminology and lines of attack (not reasoning). For example, they use “freedom” or “liberty” in a very restrictive way, mostly economic and predicated on the idea of government as the only possible agent of coercion. Another favorite is “common sense” meaning “sense” that is only common on Faux News. The all-time winner in the terminological abuse category, though, has to be the use of “objective” to mean “from my perspective”. There’s a whole “philosophy” (if a word that means “love of wisdom” can be applied) called objectivism that is in fact extremely subjective. Just because I believe government can occasionally do some good, I’ve been accused of being against freedom, defying common sense, and lacking objectivity more times than I can count.

The same sort of uniformity applies not only to terminology but also to logic. Besides the obvious strawmen, bifurcation, and personal attacks that everyone uses, here are some tricks that seem particularly popular among the fibberati:

  • Post Hoc: government did X and bad thing Y happened, therefore Y is government’s fault
  • Disproof by Fallacy: government did one thing wrong, therefore everything government does will be wrong.
  • Appeal to Authority: Friedman said, Hayek said….
  • Argument of the Beard, Slippery Slope, Camel’s Nose, and relatives: usually expressed as a failed reductio ad absurdum. One particularly noxious liberal-hater on America’s Debate recently attempted, in all seriousness, to claim that if you couldn’t justify a $50/hour minimum wage you couldn’t justify any minimum wage…which leads us to…
  • Shifting the Burden, Moving the Goalposts: we should all start with the assumptions that Adam Smith’s “invisible hand” is a panacea and the authors of the Holy Constitution were omniscient perfect beings. Every deviation from that dogma must be explained and justified down to the finest detail, complete with unassailable citations from accredited libertarian sources. Even if you accept the unfair burden and meet the impossible standard, the best response you can expect is for your opponent to exit the thread silently.

It’s because of these tendencies that I’ve come to loathe libertarians, even though there are many areas where I have always agreed with their conclusions. If liberals are the left wing and conservatives the right, then I propose that libertarians be considered the drumsticks – the part of the bird furthest from the part capable of thinking.

Name That Context

A certain high-profile blogger who has always been and shall forever remain clueless recently wrote this:

Ignoring the entire question of ethics, the broad and long-term consequences of such an attack would be catastrophic for us. It would drastically change our relations with the rest of the world, for the worse. It is, for instance, one of the few things I can think of which would cause even our closest friends to turn against us. America would become an international pariah, a nation afflicted with the moral equivalent of leprosy.

What was he talking about? As it turns out, it was the nuclear saturation-bombing of North Korea. That’s what it takes before he considers the issue of world opinion with anything but scorn. In the case of the US invasion/occupation of Iraq for what turned out to be less-than-adequate reasons, he actively taunted those who treat world opinion as a valid concern, and yet now he uses that same concern as part of his argument.

Just Too Tough

Well, I did it again. I broke another exercise machine. This time it looks like one of the main bolts/bearings on my Diamondback HRT1100ES has snapped, which might be the end of it. I think the machine’s just a bad sport, and didn’t want to let me set another personal best.

PlatSync

For the time being I’ve finished work on my backup/synchronization tool, which I’m calling PlatSync. It’s targeted at Windows for now because that’s what I’ve been developing on; I’ll be testing on Linux shortly, and I don’t expect much trouble since it’s portable Python code. Both a description/help file and the complete source/program are available for people’s amusement.

Thoughts about Hashing

As part of my backup/synchronization project, I put some thought into what hash algorithm to use, and in fact whether to rely entirely on hashing at all. Yes, I have read Val Henson’s Analysis of Compare-by-hash and disagree somewhat with her conclusions. I think it’s entirely reasonable to compare the probability of a hash collision against the probability of errors arising from other causes. It doesn’t matter whether media degradation is silent or deterministic; I’m more likely to lose two or even three copies of my data, in separate locations, to media failure than to a hash collision. As long as that’s the case, expending extra effort reducing a risk that’s already over a dozen orders of magnitude lower is basically a waste of time. Henson’s hand-wavy “keep more state” alternative doesn’t just make code a tiny bit more complex; it can make the code much more complex, introduce new logistical problems such as user registration or history pruning, or degrade performance to unacceptable levels scanning through change lists. These can all basically defeat the purpose of a program that’s intended to be low-overhead in terms of both setup complexity and run time.
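To put numbers behind that claim, here’s a back-of-the-envelope sketch using the standard birthday bound (the million-file workload is an invented assumption, not a measurement):

    def birthday_collision_probability(n_items, hash_bits):
        """Approximate chance that any two of n_items random hashes
        collide: the standard birthday bound n^2 / 2^(bits+1)."""
        return n_items ** 2 / 2.0 ** (hash_bits + 1)

    # Hypothetical workload: a million files hashed to 128-bit digests.
    print(birthday_collision_probability(10**6, 128))  # ~1.5e-27

Even at a million files, a 128-bit hash gives a collision probability around 10^-27, which any realistic rate of simultaneous multi-copy media failure dwarfs.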

This brings us to my choice of hash algorithms. I used MD5. Yes, I heard that gasp of astonishment and disapproval from all the SHA-1 purists, but let me explain. MD5 is much quicker to compute, and I don’t care that it’s (theoretically) less secure. I need a hash that’s collision-resistant but not necessarily one that’s cryptographically secure, and I don’t subscribe to the belief that the two are equivalent. To see why, let’s look at two criteria usually given for a secure and/or collision-resistant hash:

  • Reversibility refers to the ease or difficulty with which the input (or an input, since the pigeonhole principle tells us there must be many) corresponding to a given output can be found.
  • Collision resistance refers to the ease or difficulty with which two inputs yielding the same output can be calculated.

Astute readers will note that both definitions sort of assume that someone’s trying to reverse the algorithm or produce a collision. That’s a non-issue in my case; all I’m interested in is the accidental occurrence of duplicates. I could embed the entire contents of a file up to fifteen bytes within a sixteen-byte hash, making an easily recognizable subset of output values trivially reversible, and it wouldn’t matter. It might even be useful. As for collision resistance, let’s take the Gödel approach. Assume that I’m hashing data produced by some other program. There’s some very small subset of programs that will produce hash collisions with significantly greater than random probability (which is very small). There’s also some very small subset of programs that actually do something useful. The intersection of these two very tiny subsets is so small (relative to the set of all possible programs) that the probability of finding one rivals the probability of a random hash collision in its smallness. Sure, someone might be able to deliberately devise a program that can generate collisions, but that program will almost certainly not be useful for anything else. When the issue is resistance to accidental error and not deliberate attack, it’s important to look at collision resistance within the space of actually useful data producers, not programs that serve no other purpose. In that context I’d say that MD5’s collision resistance is just fine, and the difference in calculation speed does matter. If I were designing a protocol for the express purpose of running over a network, where the possibility of deliberate attack must always be considered, I definitely would use a more secure kind of hash despite the cost. For something that will mostly run within a single machine, though, that would be silly.
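If you’re curious about the speed difference, here’s a quick benchmark sketch using Python’s standard hashlib module (the 16 MB payload and round count are arbitrary assumptions, and how much MD5 wins by on your machine depends on the library build):

    import hashlib
    import time

    def time_hash(algo, data, rounds=20):
        """Seconds to hash `data` `rounds` times with the named algorithm."""
        start = time.perf_counter()
        for _ in range(rounds):
            hashlib.new(algo, data).hexdigest()
        return time.perf_counter() - start

    data = b"\0" * (16 * 1024 * 1024)  # 16 MB of dummy input
    for algo in ("md5", "sha1"):
        print(algo, round(time_hash(algo, data), 3), "seconds")

Either way, it’s the relative numbers rather than the absolute ones that matter for a tool like this.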

Takes One to Know One

Well, my weekend plans are set. The Colossal Colon is in Boston this week!

Yet Another Project

In my copious spare time (a little joke; I worked 75 hours last week) I’ve been turning over some ideas in my head for a system that combines aspects of backup/restore, file synchronization, and basic version control. In some ways it’s similar to Unison (which might be a good starting point for the codebase) with a few significant enhancements:

  • Instead of both directories being treated as peers, one is treated as a “repository” with a complete history of its previous states and the other would be treated as a working directory.
  • In addition to file synchronization (basically a union or merge of two directories) there would be functions to force the repository to become like the working directory or vice versa, or to make one repository like another.
  • Various other things people expect in a backup system, such as checking/clearing archive flags, compression, etc.

The key to this is a content-addressable file store (using hashes that need to be collision resistant but not cryptographically strong). On top of that, a repository or snapshot thereof becomes mostly a map from file names to their content hashes and can thus be quite space-efficient.
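For concreteness, here’s a toy sketch of that idea (all names here are hypothetical; this illustrates the concept rather than any actual code from the project):

    import hashlib
    from pathlib import Path

    class ContentStore:
        """Toy content-addressable store: file bodies live under their
        content hash; a snapshot is just a name-to-hash mapping."""

        def __init__(self, root):
            self.root = Path(root)
            self.root.mkdir(parents=True, exist_ok=True)

        def put(self, path):
            """Store one file's contents; return its content hash."""
            data = Path(path).read_bytes()  # a real tool would stream large files
            digest = hashlib.md5(data).hexdigest()
            target = self.root / digest
            if not target.exists():  # identical content is stored only once
                target.write_bytes(data)
            return digest

        def snapshot(self, directory):
            """Map each file's relative name to its content hash."""
            directory = Path(directory)
            return {str(p.relative_to(directory)): self.put(p)
                    for p in sorted(directory.rglob("*")) if p.is_file()}

Two snapshots then compare as plain dictionaries, and files whose contents haven’t changed cost nothing beyond the single stored copy.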

There’s a very distinct possibility that I’ve just reinvented something that already exists. If so, then I’d appreciate a pointer to whatever it is, either as a starting point for my own efforts or as a complete solution that I can use right away. I know that many of my readers have ideas related to this, and would appreciate their suggestions as well.

Beak Job

It’s not every day you see a platypus in a comic strip, so I try to make a point of highlighting when it happens. Here’s one.

Conan the…Governor?

Today is a great day for the state of Minnesota. After years of ridicule over their choice of governor, Minnesotans have seen their election of Jesse Ventura completely eclipsed by the election of a buttocks-groping, Hitler-admiring, barely articulate political tyro in Kookiefornia. It’s a great relief to them, I’m sure, and I offer my congratulations.