Aspire One Wireless

Important note (added October 17). Today Acer pushed out an update to the built-in wireless drivers, and it seems to work much better than the old ones. I’ve been able to get strong, stable signals using it both at work and at home (I’m using that connection to type this). Therefore, the instructions below are unlikely to be necessary if you have up-to-date software.

The big negative for my new Aspire One turned out to be the wireless. Using the packaged drivers, I managed to get a connection at work once, but it was soon dropped and I never repeated that success. I was able to get a stable connection to our neighbor’s wireless network, but never connected at home. Why do the open-source wireless drivers suck so much? I’ve had this problem with the Broadcom driver for my Dell at home, forcing me to use ndiswrapper. This one’s Atheros/MadWifi, but just as bad if not worse. Part of the problem seems to be that the open-source drivers will not set the transmit power and sensitivity to their proper values even if you try to set them manually. Yeah yeah, FCC blahblah make all the excuses you want, but the vendor drivers seem to get away with using higher power and besides, that can’t be the whole problem because I can put the machine right next to my access point at home and it still fails. Using the vendor driver with the exact same settings works just dandy; my Aspire One is demonstrating that fact right this moment.

OK, so how do you get the vendor driver working on your Aspire One? Here are the steps I took; a consolidated sketch of the commands follows the list.

  1. Make sure gcc is installed on your system.
  2. Download the kernel source from Acer. Do not try to get the sources via the package manager; that’s an old version and not the one actually shipped on the system.
  3. Go into /usr/src and unpack the source. Create a symbolic link with “ln -s linux-2.6.23.9 linux” so that module builds and such will work.
  4. Go into the linux directory. Copy /boot/config_080627 (or whatever the latest version is) to .config here. Acer needs to get their source control together a bit here; there are several config files in the unpacked source as well, using many different naming conventions. I used config_080609v2 and it worked OK for me, but good practice would be to use the one from /boot. Whichever one you use, run “make oldconfig” to finish the kernel-configuration process.
  5. At this point I actually built the kernel because I figure I’ll want to strip out a bunch of stuff some day (there are a surprising number of drivers configured in for devices that are not physically present on the system). For what we’re doing right now, the only value of this is to validate that the build tools are all working. I actually did find some minor breakage, which is that asm-offsets.h is present in include/asm-i386 but not include/asm (which is supposed to be a symlink to include/asm-$ARCH but Acer screwed that up). A simple symlink for that one file fixed the problem.
  6. Get ndiswrapper from SourceForge and unpack it. Go into the directory where it was unpacked and run “make install”; if you did the previous steps correctly this should just work.
  7. Go to the unofficial Atheros driver download site and grab the Windows XP 32-bit driver for this chip. I tried the 7.x-series driver and it didn’t work, complaining about being a WDM driver instead of NDIS. Working backward, I fetched the only 6.x-series driver (xp32-6.0.3.85.zip) and that seems to load without complaint.
  8. Unpack the zip file and run “ndiswrapper -i net5416.inf” to install the driver.
  9. Run “madwifi-unload” to unload all of the MadWifi junk and “modprobe ndiswrapper” to load the driver that works.
  10. Configure your wireless network as you should have been able to do in the first place.
  11. Enjoy your wireless connectivity.
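For reference, here is roughly what steps 3 through 9 look like as commands. Treat it as a hedged sketch rather than a script to paste blindly: the version numbers and config file name are the ones from my machine, and the /path/to pieces are placeholders for wherever you unpacked things.

    cd /usr/src
    # ...unpack the Acer kernel source tarball here...
    ln -s linux-2.6.23.9 linux
    cd linux
    cp /boot/config_080627 .config     # or whatever config file /boot actually has
    make oldconfig
    # (optional) if a full kernel build complains about asm-offsets.h,
    # something like this fixed it for me:
    # ln -s ../asm-i386/asm-offsets.h include/asm/asm-offsets.h

    # ndiswrapper, built from the unpacked SourceForge tarball:
    cd /path/to/ndiswrapper-*/ && make install

    # Install the Windows NDIS driver and switch over from MadWifi:
    cd /path/to/xp32-6.0.3.85/
    ndiswrapper -i net5416.inf
    madwifi-unload
    modprobe ndiswrapper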

Obviously, your mileage may vary. Certain aspects of this formula are almost certain to change, as Acer ships new systems with slightly different hardware or an updated kernel config. Nonetheless, the basics worked for me and should approximate what you’ll need to do. Feel free to comment here or contact me via email if you use these instructions, whether you succeed or fail, so I can update appropriately.

Eee vs. Aspire One

My Eee 701 never quite got over being doused with Coke (don’t ask) so I decided to get a replacement. This is a rapidly evolving market, and my experience with the Eee was that the screen was just a little bit too small, so I decided I’d take the opportunity to upgrade a bit. It came down to the Eee 901 vs. the Aspire One A110. They have the same processor and screen. Here are some points of comparison:

  • Same processor, same screen.
  • Eee has more memory. Point for Eee.
  • Eee has more flash built in, but the Aspire has two SD slots, one of which transparently increases the capacity of the primary drive (neat trick, BTW), and flash cards are cheap. It’s a wash.
  • Eee has 802.11n instead of 802.11b/g. Don’t care, because I don’t have any other 802.11n gear.
  • Eee has a 1.3MP built-in camera, instead of 0.3MP for Aspire. Big don’t care, since even my cellphone can do better than either and I’d really rather not have a camera at all since some of the places I’m likely to visit won’t allow them.
  • Eee runs Xandros, Aspire runs Linpus. Half point for Aspire even before I’d seen Linpus.
  • Eee has a bigger battery. Point for Eee.
  • Aspire is slightly lighter. Point for Aspire.
  • Aspire is considerably cheaper. Point and a half for Aspire.

Before I made another purchase in this area, I did an experiment. I used Xnest to set up a 1024×600 desktop within my real desktop, then ran my email and browser in that for half a day. It was a little cramped, but much more tolerable than 800×480 had been. So I got the Aspire One A110 from Newegg, and it arrived the next day (yesterday). The screen was even nicer than I had expected, glossy and bright; Linpus really was nicer than Xandros; and things seemed to work pretty well out of the box – using the wired connection at work, because I didn’t want to mess with wireless yet. I also discovered my first negative: the case is very glossy too, and shows fingerprints like you wouldn’t believe. Oh well. It’s definitely nicer than the machine it replaces, but there is one big negative, which will be the subject of my next post.
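For what it’s worth, the Xnest experiment amounted to something like this; the display number and the programs are just what I happened to use:

    Xnest :1 -geometry 1024x600 &
    DISPLAY=:1 thunderbird &
    DISPLAY=:1 firefox &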

Software Packaging Pet Peeve

Right now, in another window, a package install is grinding away on one of our systems. It’s grinding away far longer than it needs to, because the package actually consists of multiple components, with each and every one doing the same set of configuration checks instead of doing those checks once at the top level and propagating the result. Worse, I’m sure that half of those checks are useless for any particular component, but the idiot package maintainer just borrowed a does-everything configure script from something else they worked on once and never bothered to prune it. Sloppy, sloppy, sloppy. Thank you, Mr. Package Maintainer, for wasting so much of everyone’s time.
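For the record, autoconf-generated configure scripts already have a mechanism for exactly this. A top-level wrapper along these lines (the component names are made up) would run the common checks once and let every sub-configure reuse the results via a shared cache file:

    cache=$(pwd)/config.cache
    for comp in libfoo libbar daemon tools; do
        (cd "$comp" && ./configure --cache-file="$cache")
    done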

Pardus Linux

This morning, in between doing bits of real work, I ditched the greatly-misnamed Foresight Linux. I considered three alternatives:

OpenSUSE
What I use at work. Large user base. Bland. GNOME-oriented, which I consider a minus, especially since the GNOME/avahi entanglement started screwing up my laptop with Ubuntu Hardy.
PCLinuxOS
A little noob-ish, but pretty well regarded, especially with regard to hardware detection and usability.
Pardus
The unknown of the group. KDE-oriented, which I favor. Innovative in some ways, such as their own package manager and configuration utilities, but reviewers seem to think they’ve pulled it off pretty well.

After all the negative things I’ve said about prior Linux installs, here’s what I have to say to the Pardus developers: well done. The install went smoothly. All of my hardware was detected, and it got the right resolution for my monitor. Kaptan, the post-install configuration tool, very nicely walked me through getting my mouse and network set up. I’m not left-handed but I configure my trackball that way, and this is the first install where I haven’t had to remember that myself, so it’s a nice touch. I see that Flash is already installed and working, which is a nice contrast with some of the “we’ll leave stuff non-functional in the name of Free Software Purity” attitude of many distros. I’ve already set up my email accounts in Thunderbird and my essential Firefox plugins (AdBlock Plus, FoxyProxy, Foxmarks).

My only quibble is that the software suite is a bit incomplete out of the box. There was no rsync, though there seemed to be things in my .bashrc that refer to it. I prefer a lighter alternative to konsole, which is built in, but I couldn’t even find xterm and there only seemed to be one rxvt relative readily available. Good enough. I happened to notice while I was in the package manager that gcc and so on were not installed, which I still consider wrong, but at least this time (unlike with Foreskin) the workaround was obvious and consistent with the generally recommended package-management methods.
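Filling in the gaps was at least straightforward with PiSi, the Pardus package manager; something like the following, run as root, assuming the package names and subcommands are what I’d expect:

    pisi install rsync
    pisi install gcc make     # the build tools I consider mandatory
    pisi search rxvt          # hunting for a lighter terminal emulator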

Overall, though, working with Pardus has been a very pleasant surprise so far. A very auspicious start indeed.

Google Chrome

Yawn. Wake me when it’s available on an operating system I use.

Foresight Linux First Thoughts

I hadn’t had any trouble with either of my laptops recently, so I decided to mess around with my desktop instead by installing Foresight Linux on it (to replace Xubuntu). I had first heard about it because I’d heard good things about its package manager Conary, and then because it was being shipped with the Shuttle KPC. Surely, if it was being shipped with an inexpensive mass-market machine like that, it should be pretty easy to use, right? Wrong. Oh, the install worked great. The initial boot was complicated by the way I already had GRUB set up, but I got through it. It brought me up to a reasonably functional desktop environment, at full resolution (which is only 1280×1024) and with full networking (for a common-as-dirt Ethernet chip), which is actually pretty decent.

Then I started to play with stuff, and that’s where I began to see how user-unfriendly Foresight can be. The Emerald Theme Manager (part of Beryl) seemed to be installed, so I figured I’d try some eye candy. No go. Various forum posts seemed to indicate that I needed to install something called fusion-icon to get access to that stuff (along with many complaints about the partial nature of the Beryl/Emerald/Compiz/whatever distribution in Foresight) so I started my first venture into Foresight package management. It seemed like they had wrapped Conary in something called PackageKit, so I gave that a try. Well, PackageKit is bleeping useless. As Seopher explains better than I can be bothered to, it is of little to no use in finding the package you need, so if you don’t know exactly what it’s named you’re out of luck. PackageKit is slow, the interfaces are all lousy, and it has plenty of other shortcomings as well. My unfavorite is that some commands must be run as root, and some must not be, with no real rhyme or reason to it; some of the non-root commands are far more destructive than the root ones. I managed to persuade it to install fusion-icon and the correct video driver, then resolved to use Conary directly from then on. Voila! I had all sorts of silly eye candy, which I’ll probably never use or think about again, but it was an interesting exercise.
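For anyone following along at home, the package-management part boiled down to something like this; the PackageKit attempt first, then plain Conary, which is what I stuck with afterwards (run as root, and the trove names may differ on other Foresight versions):

    pkcon install fusion-icon      # via PackageKit, if you must
    conary update fusion-icon     # what I ended up doing directly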

My next great disappointment came when I tried to import some settings from my old Xubuntu setup, and xrdb complained about cpp being absent. How can the C preprocessor be missing? Well, it turns out that the system ships with no compiler, which is just inexcusable for a Linux system. Unhappy about having to install the compiler in the first place, I was doubly unhappy to find out that Conary isn’t all it’s cracked up to be either. It happily told me that gcc was already installed, but there was still no compiler actually there. After some digging around, I discovered that I had to install gcc:runtime to get an actual compiler. That’s insane, but it gets worse. Having lost faith in the ability of the Foresight developers to do something reasonable that would result in a working system, I decided to try compiling a “hello world” program. It failed due to missing header files. After even more digging (“conary rq --path” seems to be the most useful incantation they provide), I installed gcc:devel so that I could compile and run a simple program. That’s two more steps and a lot more digging than should have been necessary. Overall, this business of packages and components and “troves” and package names that sometimes look semi-reasonable and other times look like they’ve had a bunch of line noise tacked on the end all seems rather arbitrary, inconsistent, and obfuscatory. Maybe “gcc-objc;4.1.2-11-0.1;x86;/conary.rpath.com@rpl:devel//2-qa/1204703034.211:4.1.2-11-0.1,1#x86|5#use:~!bootstrap:~!cross:~!gcc.core” has some use or value to rPath developers, but it’s just a silly-looking mess to anyone else. (Yes, I got that from pkcon rather than conary, but who the heck cares? How you layer your software should be your problem, not the user’s.)
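To save the next person some digging, here is roughly the sequence that finally produced a working compiler (run as root; “conary rq --path”, as mentioned above, is how I tracked down which troves provide cpp and the headers):

    conary update gcc:runtime     # the actual compiler binaries
    conary update gcc:devel       # headers, so “hello world” will build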

It’s a good thing I don’t care much about sound on this machine, because that doesn’t work either and I’m sure I could have another whole rant about that if I did care. In the end, all I can say is that the single word that comes to mind for Foresight Linux is immature. Despite all the claims of being oriented toward ease of use, it’s just not ready for general use even among Linux experts.

Ubuntu Hardy Heron on Dell E1505

I finally got around to installing this. Here are the steps involved.

  1. Basic install, reboot, BEEP. As usual, one of my first acts after installing is to blacklist the annoying pcspkr driver.
  2. It used my full 1680×1050 resolution right off the bat, but with the non-accelerated driver. Didn’t take me long to find EnvyNG, install it, and run it. Voila! Considering how bad X configuration has always been in the past, this was a pleasant surprise.
  3. Wireless seemed to be working pretty much out of the box (modulo my network-specific configuration), which was another pleasant surprise.
  4. Function keys worked out of the box too, as did front-panel keys. Sweet.
  5. Suspend/resume seemed to work, albeit slowly (about a minute for the resume). Then I found that my wireless went away. I spent a lot of time figuring out that the b43 driver is a piece of crap. It never survives suspend/resume on its own, so I started writing a script to do it. I eventually had something that would work most of the time, with lots of sleeps and retries and such to work around the various race conditions in b43, only to find that I’d usually lose my connection within a minute anyway. Even when I had a connection, packet-loss rates were awful and performance was abysmal. Much, but not all, of the problem seems to be that this driver refuses to set the transmit power to anything near its full value, so it’s like being at the very edge of my network’s range. I eventually resorted to the tried-and-true combination of ndiswrapper and the Broadcom bcmwl5 NDIS driver (see the sketch after this list), and things are a lot better now.
  6. Firefox 3 is still clearly not done. There’s no excuse for not being able to import a Firefox 2 bookmark file. Apparently somebody did write most of the code, because it gets invoked when you do a version upgrade, but they didn’t hook it up so that it would work for a fresh OS install or any of a hundred utilities that worked with the old format. That’s just a developer being lazy. I downgraded to version 2 until version 3 grows up a bit more.
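Here is a rough sketch of the pcspkr blacklisting from step 1 and the b43-to-ndiswrapper swap from step 5. The ndiswrapper package name is the one I recall from Hardy and the driver path is a placeholder, so treat this as a sketch rather than gospel:

    # Step 1: silence the PC speaker.
    echo "blacklist pcspkr" | sudo tee -a /etc/modprobe.d/blacklist

    # Step 5: retire b43 in favor of ndiswrapper plus the Broadcom Windows driver.
    echo "blacklist b43" | sudo tee -a /etc/modprobe.d/blacklist
    sudo apt-get install ndiswrapper-utils-1.9
    sudo ndiswrapper -i /path/to/bcmwl5.inf
    sudo modprobe ndiswrapper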

At this point everything seems to be working, so I’d have to say the install was a success. It’s nice to be up to date again.

Linux Pointer Types

Looks like Linus is off in space again.

“const” has *never* been about the thing not being modified. Forget all
that claptrap. C does not have such a notion.

“const” is a pointer type issue, and is meant to make certain mis-uses
more visible at compile time. It has *no* other meaning, and anybody who
thinks it has is just setting himself up for problems.

. . .

In other words, “kfree()” can be const.

– Anything that *can* take a const pointer should always do so.

Why? Because we want the types to be as tight as possible, and normal
code should need as few casts as possible.

Not so much, actually. It doesn’t make the types tighter; it just makes them different. “Tighter” has even less meaning in C than const does. At least const means something to the compiler – that it may issue certain warnings or perform certain optimizations based on an assumption of immutability and any violations of that assumption are henceforth Somebody Else’s Problem. As Linus’s example of storing a const alias for a distinctly non-const piece of kmalloc’ed memory shows, const applies only to a particular high-level-language way of referring to some memory and not to the machine-level memory itself. It marks the name or path, not the thing.

The problem with kfree is that its declaration is contrary to the very distinction both Linus and I have pointed out. Only the function parameter is considered const – not the caller’s variable, which might be either const or non-const, with no warning either way. Having kfree take a const pointer therefore does nothing to detect Linus’s “misuses” at compile time. If people really want to get serious about distinguishing this type of pointer from that type of pointer on the basis of something other than data type, then they should use something like Cqual’s type qualifiers, which are much more powerful than abusing const. The way kfree is declared does save a cast if the caller’s variable happens to be const, at the expense of forcing one within kfree itself, but I think that’s weak. Reducing casts is a good thing, but automatic promotion to const is also a kind of cast, so in a sense the savings here aren’t real. Memory aliases are something that clueful programmers try to avoid, inconsistent aliases doubly so, and I’ve seen many bugs that could have been avoided if differences in const-ness were always flagged instead of automatically and silently creating a const alias for a non-const pointer. A rule of reducing visible casts (while ignoring the invisible kind) carries much less weight with me than a rule that obviously-paired functions such as kmalloc and kfree should operate on the same type.
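To make that concrete, here is a small standalone sketch, using stand-in names rather than the real kernel code, of the declaration pattern under discussion and why the “saved” cast just moves inside the free function:

    #include <stdlib.h>

    /* Stand-in for kfree declared the way Linus wants: const parameter. */
    static void my_kfree(const void *objp)
    {
        /* The underlying allocator still needs a non-const pointer, so
         * the cast the caller "saved" simply reappears here. */
        free((void *)objp);
    }

    int main(void)
    {
        char *buf = malloc(32);    /* plainly non-const memory */
        if (buf == NULL)
            return 1;
        buf[0] = 'x';              /* which we modify freely */

        /* The non-const pointer is silently promoted to const at the
         * call site; no warning is issued, so no "misuse" is caught. */
        my_kfree(buf);
        return 0;
    }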

Stable APIs

Over on Greg Laden’s somewhat hyperactive blog, I got into a little bit of a debate about whether or not stable APIs (Application Programming Interfaces) are a good thing. That led to a second post, in which Greg L cites Linux guru Greg Kroah-Hartman on the subject. Since Greg L had explicitly asked my opinion about what Greg K-H has to say, I’ll respond here and link back to there. First, let’s deal with a little bit of Greg K-H’s introduction.

Please realize that this article describes the _in kernel_ interfaces, not the kernel to userspace interfaces. The kernel to userspace interface is the one that application programs use, the syscall interface. That interface is _very_ stable over time, and will not break. I have old programs that were built on a pre 0.9something kernel that still work just fine on the latest 2.6 kernel release. That interface is the one that users and application programmers can count on being stable.

What Greg K-H is talking about is therefore actually an SPI (System Programming Interface). Normally it would be OK to use the more familiar term API anyway, but in the interests of clarity I’m going to adopt strict usage here. What I want to point out here is that most of the supporting-legacy-code bloat which Greg L cites as a major liability for Windows is not in the SPI that Greg K-H is talking about. All that stuff about code to support old applications or whatever is in the API, and there’s plenty of the same sort of cruft in Linux’s API for the same reasons.

Moving right along, what Greg K-H seems to be arguing is not that a stable SPI is worthless in and of itself but that the need for one is largely subsumed by having code be part of the main Linux kernel. He’s actually right in a particular context, but I don’t think it’s really the proper context for three reasons.

1. There are things out there besides PCs
If the code in question is a driver for a consumer-grade Ethernet card that dozens of kernel hackers are likely to use in their own machines, then most of Greg’s reasons for getting code into the kernel make sense. Other developers probably will be able to add features or fix bugs, or make SPI-related changes, to your code, and then test those changes. On the other hand, if you’re working on code for a cutting-edge high-end system that uses one of the less common CPU architectures, with an unusual and notoriously tricky memory-ordering model, with a lot of other features that bear little relation to what you might find in your desktop PC, it’s a bit different. Other developers might well be able to make useful changes for you. They also might screw them up in very hard-to-debug ways, particularly if they bring PC assumptions with them to something that’s not a PC. They certainly won’t be able to test their changes. In the end, getting your code into the mainline kernel won’t reduce your maintenance burden a whole lot.

2. Everyone has to play along
It’s one thing to say that you’ll play along and get your code into the kernel, but many projects nowadays involve drivers and patch sets from many sources. Getting your own patch A and/or patch B from vendor X into the kernel might not help you all that much if vendors Y and Z decline – for whatever reason, good or bad – to play along by doing likewise for patches C through G. Sometimes they want to do it, but their patches are rejected by the Kernel Gods for what might be legitimate technical reasons but are just as often petty political ones. Either way, you’re still going to be stuck with complex multi-way merges cascading outward any time any of those components moves forward, either because the vendor did something or because someone else changed part of the SPI. In other words, you don’t really gain much until that last vendor joins the club. By contrast, every attempt to preserve SPI compatibility brings its own immediate gain, even if other parts of the SPI change.

3. It doesn’t scale
“One kernel to rule them all and in the darkness bind them” solves some problems, but bundling every bit of device- and architecture-specific code with the kernel has its costs too. How many millions of hours have been wasted by developers configuring, grepping through, and diffing against PA-RISC and s390 code even though they’ll never use anything but an x86 on their project, just because they do those commands at the whole-tree level and never got around to automating the process of pruning out irrelevant directories every time they sync their tree? Even worse, how many times has someone who thought of making a change balked at the idea of applying it to every file it affects? How many times have they forged ahead anyway, but then gotten bogged down dealing with bugs in a few particularly intransigent variants? How much time has been wasted on the LKML flame wars that result? Preserving every architecture and device in one tree can have the same chilling effect on development as preserving every past SPI version.

If you’re living in the same desktop-and-small-server universe that Windows lives in, and you don’t have to deal with development partners who can’t or won’t get their own patches in the kernel, then getting your own code into the kernel might seem to obviate the need for a stable SPI and bring other advantages besides . . . this year. Down the road, or if those constraints don’t apply to your project, that might not be the case. SPI instability is a bad thing in and of itself, even if the pain doesn’t seem too great or there are other reasons to endure it. As I said to Greg L, it’s not something that makes Linux better than Windows. It’s an artifact of where Linux is on the adoption curve, quite reasonably more concerned about attracting new users than about alienating old ones. As Linux climbs that adoption curve, perhaps to a point of becoming a dominant operating system, I think that calculus will change. Breaking SPI compatibility is sometimes justified, but it’s almost never anything to be proud of.

Sloppy Programming

Here’s a quick quiz. Let’s say that you have a Linux kernel module, built against the same version of the kernel that you’re running but with one slightly different configuration option, and you try to load it. It will:

  A. Work correctly.
  B. Fail to load, because it is recognized as being incompatible.
  C. Load but fail to function, producing some error messages that explain the problem.
  D. Crash your system in an utterly non-obvious way.

The answer, of course, is D. It shouldn’t be, but it is, at least if what you’re loading is a network driver and the configuration option is netfilter. The reason is simply bad software engineering, similar to another example I wrote about seven years ago. Both of the examples I mention there have been fixed, by the way; I guess that senior developer I mentioned managed to put his new knowledge about pointers and caches into context enough to do the right thing (or allow someone else to) at some point. Without getting too far into the technical details, because that’s not the point I want to make, the problem in this case is that turning on netfilter in your kernel adds fields to the sk_buff structure – one universally needed by any networking code – and adds them in the middle. Thus, inline functions in network drivers try to modify some common fields and actually end up trashing the structure. I thought everyone knew that, at the very least, new or optional fields should be added at the end of a structure. Better still would be to put these special-purpose fields in a separate structure, allocated along with the sk_buff itself or hidden in an skb_reserve area (as is done in some similar cases). Structure versions would be nice too, so at least a mismatch could fail gracefully instead of messily.
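A simplified sketch, which is not the real sk_buff layout, of why config-dependent fields in the middle of a shared structure bite separately built modules:

    /* Not the real sk_buff; just the shape of the problem. */
    struct buf {
        unsigned int   len;
    #ifdef CONFIG_NETFILTER
        unsigned long  nfmark;     /* present only when netfilter is enabled */
    #endif
        unsigned char *data;       /* its offset shifts when nfmark appears */
    };

    /* Inline helpers like this are compiled into each driver with whatever
     * offsets that particular build saw.  A module built without
     * CONFIG_NETFILTER, loaded into a kernel built with it, touches "data"
     * at the wrong offset: silent corruption rather than a clean load
     * failure.  Optional fields at the end, or in a separately allocated
     * extension, would keep the common offsets stable. */
    static inline void buf_advance(struct buf *b, unsigned int n)
    {
        b->data += n;
        b->len  -= n;
    }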

Of course, some Linux hackers will say such things are just aesthetic, because you have the source and should always build against the exact same kernel you’re using anyway. Bull. For one thing, this doesn’t affect only developers but users as well. I might have the source, but Joe User might not. If I therefore have to provide Joe with binaries, I now need to provide exactly the right binaries for every possible configuration Joe might be using, and that’s a logistical nightmare. If anyone wants people like Joe to use Linux, they should consider what makes life easier for people like Joe, and this kind of silliness definitely doesn’t. Incompatibilities involving an integral part of an API that’s used by hundreds of packages maintained by thousands of different people should simply be handled better, and not introduced spuriously just because someone wants a few new fields for their own use and Linux-kernel politics preclude adding them the right way. If Linux is a private toy for a closed community you can do that. If it’s something you expect to gain broad acceptance you can’t.