Once upon a time, Linux had no virtual filesystem (VFS) abstraction to speak of, and was the laughingstock of the UNIX community because of it. After all, the VFS layer is one of the few points of commonality between most UNIX flavors. Systems as disparate as BSD, Solaris (SVR4), AIX and HP-UX had almost identical VFS layers – a different argument here, a different function there, but each familiar and recognizable to anyone who had experience with any of the others. It seemed like one of the easiest parts of a UNIX variant to get right.
Some people in the Linux community seemed to realize this, and decided that for Linux 2.3/2.4 they would create a new VFS layer. The go-ahead was flashed by The Only Power That Matters (initials: L.T.) and work proceeded apace. Many thousands of lines of filesystem-related code were examined by a dedicated band of hackers. Interfaces were argued over at great length on mailing lists. Much functionality was moved out of individual filesystems and made common code in the VFS layer. Much new code was written. Many bugs were found, and fixed. Now the new VFS layer is done, and will be part of the great 2.4 release which should happen any day now. There are only a couple of problems…
- To this day, critical bugs – race conditions and deadlocks leading to system crashes and hangs, even data corruption – are still being found related to the new VFS layer. Several filesystems that used to work no longer do, and have been discarded because nobody was willing to fix them.
- To this day, no serious analysis of the new code’s performance impact has been done. Informal observations are that the new code may improve performance “a lot” for multi-stream reads, but may cost a much-more-concrete 10-20% for single-stream operations.
- To this day, no spec for the new VFS has been written and published. The standard response to requests for one from Alexander Viro, the project leader, is “stop whining and just read the code”. Other than the code and random posts to linux-fsdevel, there is no published information about the exact semantics of VFS calls, synchronization constraints or concerns, etc.
- To this day, nobody has provided a set of regression tests for the VFS layer. All that exists is “try it on live systems, and see what happens”.
- The new VFS still looks nothing like its counterpart on any other system.
Fundamental change is often necessary for something as broken as the old Linux VFS layer, and fundamental change is inevitably very disruptive. Nonetheless, the Linux VFS crew managed to cause much unnecessary disruption, and made things much worse than they needed to be. Customers will suffer downtime, and lose data, because of this exemplary feat of bad engineering.
The Linux VFS layer was once a laughingstock. Nobody’s laughing any more, but only because it’s unseemly to laugh about a train wreck.
Update 07-Mar-2000: I just realized that even with all the shiny new VFS code, fs.h still uses unions for inodes and superblocks, containing elements from all known filesystem types and making it impossible to add a new filesystem type without rebuilding (and hence rebooting). This stands as a truly monumental example of bad software engineering, when a set of disruptive changes to add a component still fails to satisfy one of the most basic requirements of that component’s category. Apparently one of the most senior Linux developers has complained about the performance hit of dereferencing through a pointer in a common structure to get to FS-specific data…as though there weren’t more significant performance issues to deal with in filesystems. Of course, this performance argument is bogus. More likely, he just discovered that CPU-cache misses can be expensive on some systems and hasn’t yet placed that information into proper context, or – even more likely – he’s just too $#@! lazy to change his code to accommodate the new layout. This exemplifies the dangers of attaching too much weight to someone’s reputation and not enough to the technical merit (or lack thereof) of what they’re saying.