Golden Era, My Ass

This is (a rewrite of) the message whose loss provoked the previous rant. I still can’t post it, because apparently Slashdot has gone into its “static-content only” mode for a while. The original parent is here.

Web-browsing used to bring up a plethora of intelligent, well-written, interesting pages back in the day

I’m sorry, but can we give the “Golden Era” meme a rest after several thousand years of constant use? I was here when the web was invented; I remember what it was like. I don’t think the quality of the ideas, or the writing, or the visual presentation, has changed a whole heck of a lot either way since then. There’s a lot of crap now, but there was a lot of crap then too. Maybe it used to be geekier crap, more to fellow geeks’ liking, but it was still crap.

only us “old timers” bother with things like netiquette

Here you do touch on the one thing that seems to be different: the prevalence of trolls. Trolls are, by and large, a lazy lot. Even the smallest barrier to entry – even free registration – is often enough that they’ll seek easier targets, so in the early days of the web trolls weren’t a problem. Now, of course…well, you know.

I don’t see it as a “newbie” vs. “oldbie” thing, though. Oldbies might know netiquette, but that doesn’t mean they follow it. In fact, the net tends to train trolls. Think about the stage each young troll goes through when they first learn about these things called logical fallacies. Do they use this knowledge to clean up their act? No, they use it to club other people over the head. Over time, trolls get better at what they do, and the most annoying trolls are usually the ones who’ve had the most years of practice.

For more on “us old timers” and newbie-bashing, you might find this article from last February interesting.

Broken Web Forums (and Browsers), part I

I wonder what the long-term effects are of having comments eaten by web-based discussion forums. I’ve complained about it here in the past, I’ve had it happen this week on Joel on Software, and it just happened to me on Slashdot. I almost never recreate the entire post that was eaten. Sometimes a shorter post might take its place, more often I just move on to some less aggravating activity. My pithy wit and wisdom…stop laughing, there in the back! My, errr, valuable input is lost to the community that I had hoped to enlighten. How many times a day does this happen, on every forum site? How much more likely is it that a long post (which took a long time to write, during which time anything could happen) will be lost, compared to a one-line response devoid of serious thought? Over time, how much does this bias net behavior in favor of the one-line responses devoid of serious thought?

Streams Revisited

Sorry for the lack of updates, folks. I’m in the middle of a major heap of coding, creating some basic network-server infrastructure that I need for my project. The problem is a fairly common one:

  • Several modules are layered on top of one another in a multithreaded system, with requests being issued “downward” and results coming the other way.
  • The module issuing/forwarding a request does not know at the time whether the request will complete immediately or asynchronously, but must be prepared for either.
  • If a request is completed immediately, processing should devolve into a series of simple function calls, with each “borrowing” locks and other context from its caller, as is the case with non-asynchronous code (see the sketch below).
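
To make those requirements concrete, here’s a minimal sketch of the shape such an interface can take. Everything here – names, types, the lot – is hypothetical, invented for illustration rather than lifted from my actual code:

    enum req_status { REQ_COMPLETE, REQ_PENDING };

    struct request;

    /* Completion callback, invoked when a PENDING request finishes. */
    typedef void (*req_done_fn)(struct request *req);

    struct request {
        struct request *next;    /* universal queue linkage */
        req_done_fn     done;    /* upper module's completion callback */
        int             result;  /* filled in at completion time */
    };

    /* Generic inter-module interface: a dispatch table of entry points. */
    struct module_ops {
        /* Returns REQ_COMPLETE if the request finished within this call
         * (the caller keeps its locks and just reads req->result), or
         * REQ_PENDING if req->done will be invoked later, possibly from
         * a different thread. */
        enum req_status (*submit)(struct request *req);
    };

    /* Callers handle both outcomes without knowing in advance which
     * one they will get. */
    static void issue(const struct module_ops *lower, struct request *req)
    {
        if (lower->submit(req) == REQ_COMPLETE)
            req->done(req);  /* immediate path: a plain function call */
        /* otherwise req->done fires later, from the lower module's context */
    }

The point of the REQ_COMPLETE return is that the immediate case devolves into a plain function call, exactly as the third requirement demands.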

As I’m sure anyone familiar with this kind of programming would know, reconciling these requirements is quite a challenge. What I’ve ended up with is a combination of universal structures for requests and queues, and a generic inter-module interface based on dispatch tables and queues. Sound familiar? It should, because several well-known systems have been based on a similar formula. The real secret ingredient here is the way it deals with reentrant callbacks.

In a lot of similar systems reentrancy is not handled well. Many programmers resort to releasing and reacquiring locks to avoid reentrancy issues altogether; not only does that hurt performance, but it also introduces the possibility that state will change while the lock is released. Then they find that the callback needs some extra piece of context on the “return trip”, so they add a private queue/table to associate that context with the reply – at the cost of more locking, more trips through the memory allocator, etc. Then there’s always that head-scratchingly weird bit of code to deal with the inevitable case where the information just mysteriously went missing. I’ve seen many people head down that particular road to insanity before.
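
Sketched in the same hypothetical style, that road looks something like this – a side table mapping requests to context, with its own lock and its own allocations (again, all names invented for illustration):

    #include <pthread.h>
    #include <stdlib.h>

    /* Context stashed in a side table keyed by request, then fished
     * back out when the reply comes up. */
    struct ctx_entry {
        struct ctx_entry *next;
        void             *request;   /* key */
        void             *context;   /* value */
    };

    static struct ctx_entry *ctx_table;   /* one more shared structure... */
    static pthread_mutex_t   ctx_lock =   /* ...guarded by one more lock  */
        PTHREAD_MUTEX_INITIALIZER;

    static void save_context(void *request, void *context)
    {
        struct ctx_entry *e = malloc(sizeof(*e));   /* one more allocation */

        if (!e)
            abort();              /* error handling waved away, as usual */
        e->request = request;
        e->context = context;
        pthread_mutex_lock(&ctx_lock);
        e->next = ctx_table;
        ctx_table = e;
        pthread_mutex_unlock(&ctx_lock);
    }

    static void *take_context(void *request)
    {
        struct ctx_entry **pp, *e;
        void *context = NULL;

        pthread_mutex_lock(&ctx_lock);
        for (pp = &ctx_table; (e = *pp) != NULL; pp = &e->next) {
            if (e->request == request) {
                *pp = e->next;           /* unlink, hand back, free */
                context = e->context;
                free(e);
                break;
            }
        }
        pthread_mutex_unlock(&ctx_lock);
        return context;   /* NULL: the "mysteriously went missing" case */
    }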

Now contrast this with a system where the per-module context is closely bound with an extensible request structure, so that no lookup is necessary and there’s no possibility that the context won’t be found. In addition, the callback now has access to one crucial bit of information maintained by the system: whether it’s being called in the original call context or asynchronously from a different one. Obviously, the callback still needs to retake locks etc. in the latter case, but now it has readily available and accurate information that allows it to optimize the original-context case. None of this is earth-shattering stuff, but it adds up. Avoid a spurious lock/unlock here, a redundant lookup there, a few extra possible error cases…pretty soon it starts to look like an efficient and maintainable system where before everything seemed like it was barely held together with chicken wire and duct tape.
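
Continuing the same hypothetical sketch, a module’s completion callback under this scheme might look like the following. The per-module context is embedded directly in the request, and the same-context flag (maintained by the infrastructure, in this illustration) makes the fast path explicit:

    #include <pthread.h>
    #include <stdbool.h>

    struct request {                 /* generic part, as sketched earlier */
        struct request *next;
        void          (*done)(struct request *);
        int             result;
    };

    struct my_request {
        struct request base;      /* must come first, so the cast below is safe */
        bool           same_ctx;  /* set by the infrastructure: true when the
                                     callback runs in the original call context */
        void          *my_state;  /* per-module context, embedded right here:
                                     no lookup needed, no way to lose it */
    };

    static pthread_mutex_t module_lock = PTHREAD_MUTEX_INITIALIZER;

    static void my_done(struct request *req)
    {
        struct my_request *mr = (struct my_request *)req;

        if (!mr->same_ctx)                 /* async: take our lock normally */
            pthread_mutex_lock(&module_lock);

        /* ... process mr->my_state and req->result here, under a lock that
         * was either "borrowed" from the caller or taken just above ... */

        if (!mr->same_ctx)
            pthread_mutex_unlock(&module_lock);
    }

No table, no extra lock, no extra allocation, and no way for the context to go missing; the only cost on the fast path is a flag test on either side of the critical section.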

Pop-Under Popper

My current browser has a little problem with pop-under windows: they send the entire application to the back of the window list, which is really annoying. I set up a little Proxomitron filter to deal with this. It’s kind of cute, because what it actually does is change the “blur” method call that sends the window to the back into a “close” call instead, making the pop-under go away entirely, in a much more straightforward way than is possible with pop-up windows. Check it out here.

CSS Layout Tricks

I played around with CSS layout today, and came up with a pretty decent simple way to get both left and right bars (e.g. for navigation) with a “fluid” main-content section in the middle. I even figured out how to make it nest. Yeah, I know the real HTML/CSS gurus out there probably think this is pretty basic stuff, but it actually took a while before I could get it so stuff wasn’t overlapping or getting pushed out of alignment. Check it out here.

Car-Pooling Survey

Just for fun, as I was sitting in the passenger seat for the morning commute today, I started counting heads in the vehicles around us. Out of 250 non-commercial vehicles in my informal sample, only 26 – barely more than one in ten – had more than one person (including small children) in them. Isn’t that sad?

How to Write Unmaintainable Code

‘Nuff said.

USB 2.0 vs. FireWire

Tom Halfhill’s column in Maximum PC is usually one of the more informed pieces of technical journalism I get to read in any given month. This month is an exception. His topic is the ever-popular USB 2.0 vs. FireWire (IEEE-1394) debate, and he comes down firmly on the USB 2.0 side on the basis of compatibility. What’s disappointing is the points that he leaves out. Here’s an excerpt:

[USB 2.0] has special features to support slow and fast devices without sacrificing performance…
…It reclocks the slower-data traffic to avoid bogging down faster devices…

Well, how about not having slower devices in the first place, which is the case with FireWire? The very next statement is even more egregious:

…and it guarantees the delivery of time-sensitive audio/video data – a feature called isochronicity.

True enough, but misleading. As written, in the context of a comparison between two technologies, this could easily be interpreted as an advantage of USB 2.0 vis-à-vis FireWire…except that FireWire has had isochronous transfers for years, while practically no USB 2.0 products that use the feature are out yet to prove that it works.

Nowhere does Halfhill mention that FireWire provides symmetric connections between devices and/or computers acting as peers, while USB requires a computer as a “master” polling “slave” peripherals. This shortcoming of USB is supposed to be addressed in USB On-The-Go, but that’s even newer and less supported than USB 2.0. In addition, it’s a nasty choose-the-master kludge rather than a truly symmetric protocol. Absent is any mention that USB 2.0’s supposed 480Mbps-to-400Mbps speed advantage over FireWire is bogus because of additional protocol overhead, or that 800Mbps 1394b products are practically upon us, with speed upgrades to 3.2Gbps likely to follow rapidly. Strangely missing is any comment that USB 2.0 is an Intel baby, designed from the ground up to benefit the chip monopolist, while FireWire/1394 is an open IEEE standard supported by many vendors.

Halfhill’s prediction of success for USB 2.0 is not unreasonable. USB 2.0 does have advantages as part of the USB “brand”. What’s disheartening is that Halfhill doesn’t even mention the technical issues that consistently favor 1394. I, for one, expect better technical perspective from him; I can get the marketing viewpoint from Intel’s PR department.

Copyright Implications of CD-ROM Emulators

From this month’s Maximum PC:

The question, from Andy Young:

Removing the CD-check [in a program] by downloading and installing a “fixed executable” was reported as illegal since it modified the program. However, what about using optical drive emulators to solve the same problem?
…is it legal for me to install a program that takes a portion of the hard drive and makes it appear as though it’s a CD-ROM drive? This would not be tampering with the game program itself.

The answer, from Maximum PC’s Logan Decker:

If you were to go by the letter of the Digital Millennium Copyright Act, the answer would probably be a resounding no. That’s because the DMCA says: “No person shall manufacture, import, offer to the public, provide, or otherwise traffic in any technology…that is primarily designed or produced for the purpose of circumventing protection afforded by a technological measure that effectively protects a right of a copyright owner…” Presumably, the downloading and use of a CD-ROM emulator for the purposes of circumventing copyright protections is verboten.

I contend that Mr. Decker’s answer is incorrect on two counts:

  • The quoted passage contains no injunction against the use of such technology – only against its distribution. Look again at the list starting with “manufacture” and ending with “otherwise traffic” to see what I mean. In other words, the guy who sells you the CD-ROM emulator might be liable but you the consumer would not be.
  • The word “primarily” in “primarily designed or produced…” is also significant. Are CD-ROM emulators made or used primarily to circumvent copyright protection? No; they’re primarily intended to improve performance. This is the DMCA’s version of the “substantial non-infringing use” argument that won the famous Betamax case for consumers and, contrary to popular belief, it still exists even in the post-DMCA era.

School Vouchers

The problem I have with voucher programs is that they just result in different students winning what remains a zero-sum game. If you have one good school, six OK schools, and three crappy schools before vouchers, what will you have after vouchers? If you’re lucky, you’ll still have one good school overwhelmed with applications, six OK schools, and three crappy schools that still end up as the dumping ground for 30% of the students. More likely, by weakening the incentive for people to support their local schools, you’ll find that you have no good schools at all, and more crappy ones than before. Increasing student mobility doesn’t solve anything. Yeah, sure, maybe student X gets to escape his crappy local school and go to a better one, but only by displacing student Y. Sooner or later some other poor kid is going to be sitting in that same seat in that same crappy school.

But wait, you say, it’s not a closed system! If we make the vouchers large enough, that will create an incentive for entrepreneurs to create more and better private schools, right? Uh huh, let’s think that one through. Either students will flock to these new schools, or they won’t. If they don’t, we haven’t changed anything at all. If they do, then because the total number of students is constant that means the public schools will become relatively depopulated and some will close. If we allow the private schools to be selective in any way, the remaining public schools will become even worse dumping grounds for underperforming or otherwise undesirable students than they already are. If we allow the private schools to charge tuition over and above the voucher amount, the migration will start at the top and work downward; maybe more middle-class kids will go to private schools than did before but the poor kids will be just as screwed as ever and the most noticeable difference would be that the government would be subsidizing rich kids attending rich-kid schools.

The only way to avoid these problems is to require that (at least some) private schools accept vouchers without selection and charge no extra tuition – charter schools, in other words. Now we have a bunch of students going to private schools. This student body will have the same statistical makeup as before, and be funded by the government at the same levels everywhere. Can somebody explain what makes this better than leaving them in the public schools they were in before and creating more equitable conditions there? Is there some magic that’s supposed to occur just because the schools are now run by corporate entities whose goal is profit rather than public service? Do vouchers provide any benefit in this scenario compared to more equitable state or federal funding instead of property taxes? No, of course they don’t. Somebody’s still paying for that rich-kid subsidy.

Vouchers are a classic example of an appeal to illogic and selective perception. Voucher proponents love to paint a picture of poor disadvantaged children who can go to better schools because of vouchers. That’s a good thing, right? And if vouchers lead to a good outcome then more vouchers will lead to more good outcomes, right? WRONG. What works for a sample often doesn’t work when applied to the entire population. It’s good that we have soldiers to defend our country, but what kind of country would it be if it consisted only of soldiers? Some people buy into the voucher myth because they don’t understand statistics, some buy into it because they realize it doesn’t work for everyone but are willing to gamble that they’ll be among the lucky few, but anyone who really thinks through the implications cannot help but realize that vouchers are a dead end or stopgap at best. The real problems would still be waiting for solutions even if we had vouchers.