Thought Experiments Require Thought

I enjoyed Freakonomics (the book) quite a lot, but Freakonomics guest blogger Daniel Hamermesh does a pretty good job of blurring the line between economic inquiry and outright crackpottery. I first noticed his predilection for the latter when he tried to tar everyone who doesn’t agree with his own extreme views (which he tries to reify as “standard analysis”) as an incentive hater (i.e. communist). Then he made the absurd assumption that people would respond to a bottled-water ban on college campuses by drinking soda instead – an idea which many people pointed out was absurd in many ways. Now he doesn’t want to turn off his computer, so he’s making up more stuff.

The article says that Americans who leave computers on overnight are wasting $2.8 billion on energy costs per year.

It ignores the cost of turning computers off — and having to turn them on again the next morning. Let’s say that process takes five minutes per day, and one does it 250 days per year. That’s 1,250 minutes, or more than 20 hours per person per year.

Assume the average computer user’s wage is $21 per hour, …

Notice the pattern here? Anything that might cause him to change his behavior – e.g. with regard to jaywalking, refreshment choices, or computer use – is treated as an attack on economic rationality, which we should all apparently recognize must be the sole cause behind his current behavior. Furthermore, any such attack can be met by making a bunch of convenient assumptions, presenting them as fact, and then tossing out a bunch of numbers like chaff to distract from the more fundamental cognitive errors. (Actually that’s Chicago School in a nutshell, isn’t it?) For example:

  • Assume that the computer must be turned on every day, to maximize the time spent booting.
  • Assume that the user is absolutely helpless during that time, e.g. unable to turn on the computer before taking a shower or pouring coffee, to maximize the cost associated with that time.
  • Ignore costs associated with leaving the computer on, such as shortened component lifetimes or more time for the worm of the week to infect your system.

The real number, Professor Hamermesh, is more like five seconds per day, maybe fifteen if you’re smart enough to require that users log in. Using your own numbers for the rest, let’s call that $7.29 per person per year, or $365M for 50M computers for a year. That’s not 2.5 times the alleged power savings from turning computers off, is it? Try 13% instead. Even better, here’s some real economic analysis for you. At one third of $21/hour, and an energy cost of $0.10 per kilowatt-hour, fifteen seconds of your time turning the computer on and logging in corresponds to leaving it on for eight hours at about 36W. Outside of the very deepest sleep modes, how many computers (all components included) draw less than 36W overnight? Very few indeed. In other words, this has nothing to do with any “environmental savings über alles” strawman that Hamermesh made up. Even in the narrowest possible sense of direct power cost to the consumer, ignoring everything else from the cost of generation and transmission equipment to effects on environment and geopolitics, turning computers off is the economically rational choice. Where does that leave our beloved paragon of rational behavior?
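Those figures are easy to check with a few lines of Python. The inputs are the article’s own numbers; everything else follows from plain arithmetic, and the break-even overnight draw works out to roughly 36 W.

```python
wage = 21.0        # dollars per hour, from the article
energy = 0.10      # dollars per kWh
computers = 50e6   # machines allegedly left on overnight
claimed = 2.8e9    # claimed annual waste, in dollars

# Five seconds per day to power on, 250 working days per year:
per_person = wage / 3600 * 5 * 250
total = per_person * computers
print(f"${per_person:.2f}/person/yr, ${total / 1e6:.0f}M total")
print(f"{total / claimed:.0%} of the claimed savings")

# Break-even overnight draw: value fifteen seconds of time at one
# third of the wage, spread over eight hours at $0.10/kWh.
time_cost = (wage / 3) / 3600 * 15            # dollars per day
breakeven_watts = time_cost / (8 * energy) * 1000
print(f"break-even draw: {breakeven_watts:.0f} W")
```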

Nowhere. Or the University of Texas economics department. Comparisons between the two are left as an exercise for the reader.

Police vs. Civilians

The issue recently came up of police officers using “civilian” in reference to non-police. I’m among those who find the way they use it slightly offensive, because I believe it reinforces an “us vs. them” attitude that is antithetical to the proper performance of their jobs. It doesn’t help that the term is so often uttered with a sneer or an eye-roll to show exactly how the speaker feels about those who “aren’t good enough” to wear the uniform. I think police calling others “civilians” is rude, but is it incorrect at an abstract use-of-words level? Put another way, are police themselves civilians? It turns out that the answer to that is unclear. There seem to be three kinds of definitions:

  • Almost half of the definitions exclude only active military, which would mean that police are civilians.
  • Almost half of the definitions exclude not only the military but also police, and sometimes firefighters as well, which would mean that police are not civilians.
  • The small remainder of definitions define “civilian” as an expert in civil (as opposed to criminal) law. This is the most etymologically sound interpretation, but probably the least relevant either to police usage or to my objections. If one were to accept this definition, though, it would certainly shed an interesting light on police who consider themselves not to be civilians.

I guess this is the old prescriptivist vs. descriptivist debate. Should a dictionary tell us what words mean, or reflect how we use those words? In this particular case, because of where I think a police/civilian distinction leads us (call this the consequentialist position), I think dictionaries should not cave to common usage and the first definition above should remain the primary one. The militarization of police is a real problem. Police are supposed to be of the people, not an army at war against the people. I understand how, when police see every day what people can be like, they can feel that their contempt is justified. Such feelings are an occupational hazard, like a doctor coming to see people as bags of malfunctioning parts, but they’re the sort of feelings that anyone truly committed to their profession would learn to resist. Any police officer who calls someone “civilian” shouldn’t take offense when “bully” comes back. That’s in the dictionary too.

The Value of Profiling

Joel Spolsky responded to a developer’s complaint about slow compiles by going out and buying a solid-state disk. It didn’t help the compile times. I guess that just reinforces the lesson that you should profile your system or application before making changes. Joel’s a smart guy. If he can get bitten by this mistake, so can anyone else.
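The same lesson applies at the code level. In Python, for instance, a few lines with the standard cProfile module will show where the time actually goes before you spend money or effort on a fix (the workload functions below are just stand-ins for whatever you suspect is slow):

```python
import cProfile
import io
import pstats

def slow_part():
    # Stand-in for the code you suspect; substitute the real workload.
    total = 0
    for i in range(200_000):
        total += i * i
    return total

def workload():
    return sum(slow_part() for _ in range(5))

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Print the ten most expensive functions by cumulative time.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
print(out.getvalue())
```

If the hot spot turns out to be CPU-bound, a faster disk was never going to help, which is exactly what Joel discovered the expensive way.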

Lies about Christopher Dodd and Executive Compensation

There’s a lie going around that Dodd was the one who added an amendment to the stimulus bill that protects executive compensation. This just burns me up, because almost the exact opposite is true, and it’s all there in the public record for anyone to check. For example, consider this lead paragraph from CNN.

Senate Banking committee Chairman Christopher Dodd told CNN Wednesday that he was responsible for language added to the federal stimulus bill to make sure that already-existing contracts for bonuses at companies receiving federal bailout money were honored.

They don’t provide much in the way of actual quotes or evidence, but here’s the quote that matters.

“The administration had expressed reservations,” Dodd said. “They asked for modifications. The alternative was losing the amendment entirely.”

“I agreed reluctantly,” Dodd said. “I was changing the amendment because others were insistent.”

How does Dodd’s “agreed reluctantly” turn into “pushed for” in CNN’s headline? What was “the amendment” that Dodd was afraid of losing entirely? What CNN won’t tell you is that “the amendment” would have imposed much stricter limits on executive compensation. FactCheck provides more context. Here is what Dodd actually proposed.

The language is contained on page 736, and it said the Treasury Department’s regulations governing recipients of funds under the Troubled Assets Relief Program (TARP) “shall” contain:

H.R. 1, Senate version: … a prohibition on such TARP recipient paying or accruing any bonus, retention award, or incentive compensation during the period that the obligation is outstanding to at least the 25 most highly compensated employees, or such higher number as the Secretary may determine is in the public interest …

This language was authored by Dodd, who offered it as an amendment to the Senate bill on Feb. 4.

Here’s what happened later.

Dodd’s strict ban was rewritten. Most important, the final bill said the prohibition on bonus payments (page 404) …

H.R. 1, Final version: … shall not be construed to prohibit any bonus payment required to be paid pursuant to a written employment contract executed on or before February 11, 2009, as such valid employment contracts are determined by the Secretary or the designee of the Secretary.

In simple language, Dodd’s ban would have applied to AIG and any institution that had yet to repay TARP funds, regardless of whether existing employment contracts called for the bonuses. The bill that emerged from the House-Senate conference committee, and was signed into law by President Obama, only applies to bonus agreements made after Feb. 11.

How did Dodd “push for” something that happened without him, over his objections, contradicting what the public record shows he actually proposed? If you don’t like FactCheck as a source, go find any other that has actual quotes. The information is there: look it up. The claim that Dodd enabled the AIG debacle is flat-out false. It’s simply a lie, promulgated both by political opponents looking for a scapegoat and supposed allies looking for cover. Believing or repeating it should be beneath anyone who claims to be a responsible citizen.

Warming up to Linus

Linus (one of those people who no longer needs a last name) often says things that seem hasty and/or extreme. I’ve criticized him for it in the past, when he said things about specs or debuggers or typedefs that I considered wrong. Sometimes, though, a bit of hasty and extreme commentary is just what’s called for.

if you write your data _first_, you’re never going to see corruption
at all.

This is why I absolutely _detest_ the idiotic ext3 writeback behavior. It
literally does everything the wrong way around – writing data later than
the metadata that points to it. Whoever came up with that solution was a
moron. No ifs, buts, or maybes about it.

I share his amazement that anybody would think writing metadata before data was a good idea. Contrary to one Very Senior Linux Developer’s assertion, this has not been a problem with FFS/UFS since soft updates were invented (in fact it’s kind of why they were invented). It’s something most filesystem developers have known about, and been careful about, for over a decade. Unless they’re ext* developers, I guess. I find myself similarly amazed at another mis-statement by the same VSLD.

these days, nearly all Linux boxes are single user machines

No. Wait, let me think about that some more. Still no. It might be the case that the majority of Linux systems only have one user logged in, but that’s not relevant for the great number of Linux installations on servers or embedded appliances. The relevant number is not how many users are logged in but how many are requesting service, and that’s far more than enough to make “nearly all” untrue. What’s worse is that this untrue statement was used to brush off a security concern, as though multi-user systems aren’t even worth worrying about.

Keep in mind that this is not some late arrival working on Linux out of the goodness of their heart. This is someone who has been involved with Linux since very early days, who for years has been paid to work on and represent Linux full time. If I had shown such an extreme lack of judgment in my design or coding, then compounded that by making such egregiously false and reckless statements about both my own employer’s and others’ products, I would expect an even stronger reaction than Linus’s.

Saving the Economy

This idea just hit me on the drive in this morning. Instead of buying up all the “troubled” assets of financial institutions, why doesn’t the government buy all the good assets? The financial institutions that took outrageous risks can sink in a pool of their own toxic waste, while responsible borrowers are protected from the consequences of others’ greed and the government uses the proceeds from those good assets to make the loans that the big-finance guys keep refusing to make. Some time after the economy has stabilized, we could work on transitioning those assets back to a responsible private financial sector (not the parasitic one we have now) where they belong.

Maybe it’s not such a great idea, but there doesn’t seem to be much to recommend the ideas coming from the people currently in the Washington phase of the great Washington/Wall Street revolving door. Given how seriously and consistently wrong these people (and their Randian polyps) have turned out to be, they can be considered negative predictors of any idea’s value.

Random Science

Why are peppers hot?

“Capsaicin demonstrates the incredible elegance of evolution,” says Tewksbury. The specialized chemical deters microbes—humans harness this ability when they use chilies to preserve food—but capsaicin doesn’t deter birds from eating chili fruits and spreading seeds. “Once in a while, the complex, often conflicting demands that natural selection places on complex traits results in a truly elegant solution. This is one of those times.”

Woolly Lamboth

Bruno, who weighed in at a massive 21lb when he was delivered on Monday, is one spring lamb that won’t be going to the slaughter.

He is believed to be the biggest lamb ever born anywhere in the world, and farmer Mark Meredith plans to keep him as a pet to see how big he gets.

Mr Meredith, 44, who farms at Chaddesley Corbett, Worcestershire, said it took him 20 minutes to deliver Bruno the ‘woolly lamboth’, compared to a couple of minutes for the average 7lb lamb.


Somehow I don’t think the people who put up This Is Why You’re Fat expected people (like me) to treat it as a place to go for ideas. The Meta Pizza looks amusing, but the Scotch Egg on a Stick looks more like something I’d actually try.

UPDATE: new favorite is the True Love Roast.

The True Love Roast has a bird for each of the 12 days of Christmas.

On the outside a turkey, inside are the breasts:

Goose filled with orange and walnut stuffing. Chicken with hazelnut and ginger. Pheasant with juniper stuffing.
Aylesbury duck with sage and onion. Barbary duck with Persian fruit stuffing.
Poussin and guinea fowl layered with parsley, lemon and thyme. Partridge and pigeon squab set in juniper stuffing.
Mallard duck layered with cranberry and lemon and whole boned quail filled with cranberry and orange relish.

Feeds 125.

Logan’s Rant

It was kind of fun getting involved in the fsync debate, so now I’ll get involved in another ongoing controversy: software transactional memory. If you haven’t already heard of STM, Wikipedia has a good intro – albeit one obviously written by an STM proponent. Such proponents often try to position STM as the solution to all of our concurrent-programming problems. However, as often happens when the advocacy for something becomes too strident, there seems to be a growing backlash from people pointing out the supposed panacea’s limits and problems.

One example of this is Patrick Logan’s rant about STM, and the ensuing discussion at Lambda the Ultimate. In the discussion, there seems to be a general consensus that locking and shared state don’t scale very well, but then there’s a split between those who propose STM as a solution and those who propose a more Erlang-like “shared nothing” approach based on messaging. Every once in a while, but too rarely in my opinion, somebody seems to notice that putting operations under a lock and putting them inside a transaction are actually very similar. Transactions offer some desirable properties such as isolation and composability, but the two approaches have much in common and require similar diligence from the programmer. Races, deadlock/livelock, starvation, and so on are still entirely possible with STM if the programmer is careless – and we all know they are.

On LtU, Tim Sweeney weighs in on the pro-STM side.

You could implement this using message passing, but you’d be writing and debugging your own ad-hoc transactional protocol for each of the tens of thousands of updates and state transitions present in the program.

I understand his concern. Data-consistency problems are often unrecognized as such by programmers who lack the proper training or experience. People given messaging often end up implementing their own kind of distributed state sharing, whether they realize they’re doing it or not, and almost invariably implement it poorly so that they have stale or inconsistent data everywhere. Why not give them STM-based state sharing that avoids at least some of these problems? Because each transaction having its own view creates the same problems as each process having its own view, and those problems only become more likely as people take advantage of all the composability that is supposedly such a great feature of STM.

Also, and perhaps more importantly to me than to some, we have to consider the distributed case. Distributed STM (DSTM) is essentially the same problem as software distributed shared memory (SDSM). I’ve actually worked on SDSM systems, so I’m pretty familiar with its problems and limits. Over the years I’ve watched one group after another make grandiose claims that they’ve solved the problems and overcome the limits, only to have the actual results disappoint. That’s why I view claims such as ScaleMP’s with some skepticism, and claims backed by even less proof than theirs with something like derision. Show me. Tell me the details, and show me the results – on real applications, not some rigged comparison where an optimized STM implementation is compared to an unoptimized lock-based one to achieve the desired result. Until then, I’ll continue to believe that performance of DSTM is likely to be so bad that programmers will avoid it in most (if not all) cases despite its supposed advantages, so at most DSTM will be relegated to niche use.

Experience trumps theory, of course, so it’s interesting that the most empirical observation I saw in these discussions was from Mental on Patrick’s site.

Having written several implementations of both STM and various message-passing-based concurrency models for Ruby lately, I’m a lot less sunny on STM than I used to be even a few weeks ago.

I was having fun implementing STM until I realized that I was able to implement the Actor model correctly in a few hours, versus several weeks for getting the fiddly aspects of STM down.

The biggest immediate problem for STM is starvation — a large transaction can just keep retrying, and I’m not sure there is a way to address that without breaking composability. …and composability is the whole raison d’etre of STM in the first place.

That’s a pretty good example of STM failing to solve a problem – in this case process starvation – that its proponents commonly portray as characteristic of lock-based systems. How, indeed, does one guarantee fairness and forward progress, or even avoid livelock, in an optimistic-concurrency model where you just retry everything when contention is detected? I’m sure somebody has come up with solutions. I’m equally sure those solutions are rarely implemented in actual STM code, so real programmers will encounter these real and hard-to-debug problems when they try to follow the STM advocates’ advice.
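Mental’s starvation point is easy to demonstrate with a toy model of the optimistic retry loop (my own sketch, not drawn from any real STM implementation): as long as short transactions keep committing in the middle of a long one, the long one keeps discarding its work and starting over.

```python
# Toy optimistic-concurrency model: a shared cell guarded by a version
# number. A transaction records the version, computes, and commits only
# if the version is unchanged; on conflict it discards its work and
# retries from scratch.

class Cell:
    def __init__(self, value=0):
        self.value = value
        self.version = 0

    def commit(self, seen_version, new_value):
        if self.version != seen_version:
            return False              # conflict: someone got there first
        self.value = new_value
        self.version += 1
        return True

cell = Cell()
budget = [5]                          # five short transactions to run

def short_txns(cell):
    """Short transactions that commit while the long one is working."""
    if budget[0] > 0:
        budget[0] -= 1
        cell.commit(cell.version, cell.value + 1)

def long_transaction(cell):
    """A big transaction: each time a short one commits first, all of
    its computation is thrown away and it starts over."""
    retries = 0
    while True:
        seen = cell.version
        new_value = cell.value + 100  # imagine expensive work here
        short_txns(cell)              # interference mid-transaction
        if cell.commit(seen, new_value):
            return retries
        retries += 1

retries = long_transaction(cell)
print(retries)  # one wasted retry per competing short commit
```

Real STM systems bolt on contention managers to mitigate exactly this, which is where the fairness machinery – and, as Mental notes, the threat to composability – comes in.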

Probably the most damning point about the limitations of STM was made by Sriram Srinivasan on LtU.

Second, there are many areas that don’t come under the purview of STM (file activity, screen updates, real db transactions with different tx semantics). They don’t work under an arbitrary retry model, unlike in a locking case where you know you own the critical section.

I love the idea of STM, particularly MVCC STM, and have spent a lot of enjoyable time musing about how I would implement it or use it in programs. Nonetheless, after reading all this, I can’t help but conclude that STM might be too limited to be of much interest to me in the things I do. It might be great for computation within a single process (Tim Sweeney’s example is a good one) but it seems to fall apart in the distributed case or where side effects like I/O are important. Maybe there are some constrained distributed cases in which the benefits would outweigh the costs. After all, many people do useful work within the messaging constraints imposed by MPI. A lot of big simulations are done by partitioning objects or physical regions between processes, having each process calculate one time-step’s worth of change for its objects/regions, then all getting together to share the results before launching the next time step. Maybe some restricted form of DSTM would allow all of this to happen more efficiently using transactions instead of barriers, and displace MPI in some cases. For the more general case, though, I remain rather skeptical.

The Great Fsync Debate

One of the hottest topics in the Linux world lately has been the issue of atomically updating a file on a filesystem that uses delayed allocation, and whether fsync() is an acceptable solution. This is an issue now because, even though many filesystems have used delayed allocation for a while, ext4 is the first to make it into common enough use to spark the debate. One of the best discussions I’ve seen so far comes from Alexander Larsson (thanks to Wes Felter for the link). It also refers to a proposal from Ted Ts’o regarding the issue, which is worth reading.

One of the things that might not be obvious about Ted’s proposal is that it’s constructed to maintain a separation between files and the directory entries that (might) point to them. The desirability of such separation is a bit of a religious issue which I’m not going to get into; the point here is that, while Ted doesn’t explicitly mention it, this explains many things about his proposal that might otherwise seem strange or unnecessary. It’s actually a good proposal as far as the file/directory separation issue goes, but I think it runs smack into another issue: like the fsync() approach, it tries to fix an ordering issue by forcing synchronous updates. In the same LWN discussion Ted even cites Anton Ertl’s explanation of what’s wrong with synchronous metadata updates, but I would say that synchronous data updates – such as the fsync-like behavior implied by the comment attached to flinkat() in Ted’s proposal – are bad for almost exactly the same reasons. The problem here is that the common open/write/rename idiom represents a clearly intended ordering of both file and directory operations, and that ordering can be preserved for the file operations (the writes) but the directory operation is allowed to “jump the queue” because it’s not a file operation. (Note, BTW, that the open is both a file and a directory operation, with clear ordering semantics wrt the writes. So much for that mythical separation between file and directory operations.) My suggestion is that if you have an ordering problem then you should provide a way to preserve ordering. Forcing certain operations to be done synchronously is not necessary and hurts performance/scalability, which is exactly why people are avoiding or complaining about fsync() in the first place.
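For concreteness, here is the idiom in question as applications usually write it (sketched in Python; the filename is just an example). The fsync is the step developers resent: it forces a synchronous flush purely to guarantee that the data write is ordered before the rename.

```python
import os
import tempfile

def atomic_replace(path, data):
    """Replace `path` with `data` via the write-temp/fsync/rename idiom."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)  # temp file, same filesystem
    try:
        os.write(fd, data)
        os.fsync(fd)       # the resented step: flush the data to disk
    finally:               # before the directory operation runs
        os.close(fd)
    os.rename(tmp, path)   # atomic replacement on POSIX filesystems

atomic_replace("example.conf", b"setting = 1\n")
with open("example.conf", "rb") as f:
    print(f.read())
```

Readers always see either the old contents or the new, never a mix, which is the whole point of the idiom; the question in the debate is whether getting that guarantee should require a full synchronous flush.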

Unfortunately, the issue of ordering vs. synchrony highlights a pretty fundamental problem that pervades POSIX: the assumption that synchronous operations are the norm, and asynchronous operations are handled in a second-class kind of way, if at all. If not for that, then all metadata calls including rename() could be done asynchronously. Once you’re doing operations asynchronously, it’s a small step to add predicates that must be satisfied before they execute. A solution for some hypothetical system not hobbled by some of the sillier Linux/UNIX/POSIX dogma might therefore look like this:

token1 = fbarrier(fd);
Inserts a marker into fd‘s I/O stream, and returns a token corresponding to that marker. The token does not become valid until the marker leaves the I/O stream.
token2 = rename_async(old_name,new_name,token1);
Queues a rename operation from old_name to new_name, to execute when token1 becomes valid, and returns token2 representing the status of the rename itself (e.g. queued, completed, failed). Note that token1 could represent any kind of pending event, not just a token from fbarrier.
status = query_token(token2);
Find out whether the rename actually completed (optional). There could also be wait_for_token(), epoll() support for generic async tokens, etc. providing a fully generic infrastructure for asynchronous programming.

Someone’s bound to point out that such an approach does not lock the parent directory (as Ted’s proposal does). That means it’s still vulnerable to certain kinds of races involving the parent; separate solutions for that problem should be obvious and are left as an exercise for the reader. The particular problem I’m trying to focus on is of preserving a commonly expected kind of ordering between writes and renames, without forcing any of the operations to be synchronous.