Yak Electrolysis

While I was reviewing a patch yesterday, I found code that used lots of distinct directory names for a series of tests – test1 would use brick1 and brick2, test2 would use brick3 and brick4, etc. I’ve run into this pattern myself, and it can be a bit of a maintenance problem as tests are added or removed. For example, in the test scripts for iwhd, there were multiple occasions when adding a test led to accidental reuse of names, and much non-hilarity ensued (everything about that project was non-hilarious but that’s a story for another time). The simplest pattern to deal with this is something like the following, which I suggested in a review comment:

sequence=0
...
# Test 1
sequence=$((sequence+1))
srcdir=/foo/bar$sequence
sequence=$((sequence+1))
dstdir=/foo/bar$sequence
...
# Test 2
sequence=$((sequence+1))
thedir=/foo/bar$sequence

This works pretty well, but the inline manipulation of $sequence kind of bugged me so I tried to put it in a function. My first try looked something like this.

sequence=0
 
function next_value {
    sequence=$((sequence+1))
    echo $sequence
}
 
thedir=/foo/bar/$(next_value)

Yeah, I hear the laughter. For those who didn’t get the joke yet, this falls prey to bash’s handling of variable scope and subshells. The $(next_value) construct ends up getting executed in a subshell, so changes it makes to variables aren’t reflected in the parent and you end up with the same value every time. I really should have stopped there, satisfying myself with the original inline version. Sure, that version can still hit the scope/subshell issue, but only if you introduce subshells in your own code and not as a side effect of the idiom itself. I realized that getting around the scope/subshell issue would involve something ugly and inefficient, which is why I should have stopped, but I was intrigued. Surely, I thought, there should be a way to do this in an encapsulated and yet robust way. The first idea was to stick the persistent context in a temporary file.

tmpfile=$(my_secure_tmpfile_generator)
echo 0 > "$tmpfile"    # seed the counter so the first read has something to work with

function next_value {
    local prev_value next_value
    prev_value=$(cat "$tmpfile")
    next_value=$((prev_value+1))
    echo "$next_value" > "$tmpfile"    # state lives in the file, so a subshell can't lose it
    echo "$next_value"
}

OK, it’s kind of icky, but it should work. Again, I should have stopped there, but that temporary file bothered me. Surely I could do that without the file I/O, perhaps by spawning a subprocess and talking to that through a pipe. Yes, folks, I had embarked on a quest to find the most insanely complicated way to solve a pretty simple problem. The result is generator.sh and here’s an example of how to use it.

source generator.sh
start_generator int_generator 5 6
...
dir1=/foo/bar$(next_value 5 6)
dir2=/foo/bar$(next_value 5 6)

Doesn’t look too bad, does it? OK, now go ahead and look at how it’s done. I dare you. Here are some of the funnier bits.

# start_generator
ctop=$(mktemp -t -u fifoXXXXXX)    # -u: generate a unique name but don't create the file
mkfifo $ctop || fubar=1            # make the child-to-parent fifo under that name

Yes, really. Not polluting the filesystem with a temporary file was part of the point here, but I ended up dropping not one but two orts instead. (Cool word, and yes, I did use a thesaurus.) To be fair, these are only visible in the filesystem momentarily before they’re opened and then deleted, but still. I tried to find a way to do this with anonymous pipes, but there just didn’t quite seem to be a way to get bash to do that right. Here’s the next fun bit.

# start_generator
$1 < $ptoc >$ctop &      # run the generator, reading one fifo and writing the other
eval "exec $2> $ptoc"    # open caller-chosen fd $2 for writing to the generator
eval "exec $3< $ctop"    # open caller-chosen fd $3 for reading from the generator

The first line invokes the subprocess, with input and output fifos. The two execs are the bash way to create read and write file descriptors for a file. They’re wrapped in evals to satisfy my goal of making things as complicated as possible by allowing the caller to specify both the generator function/program and the file descriptors to use. Eval is very evil, of course, so let’s play Spot The Security Flaw.

start_generator int_generator "do_something_evil;"
# ...causes us to eval...
exec do_something_evil;> $ptoc

I'm not going to fix this, because it's only an "insider" threat. This code already runs with the same privilege as the caller, and can't do anything the caller can't. They could also pass in a totally bogus generator function, and I'm not going to worry about that either because they'd only be shooting themselves. On to the next fun piece.

# next_value
echo more 1>&$1    # ask the generator for another value
read -u $2 x       # read its reply into x

Again, this is kind of standard bash stuff to write and then read from specific file descriptors. Having an example of this is one of the main reasons I didn't just throw away the script. With a little bit of tweaking, the same technique could be used as the basis for a general form of IPC to/from a subprocess, and that might be useful some day.
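
Since generator.sh itself isn’t shown here, it might help to see what a generator compatible with this protocol could look like. This is a minimal sketch of my own – read one request per value, write one reply – not the actual int_generator from the script.

function int_generator {
    local n=0 req
    while read req; do    # block until next_value sends its "more" request
        n=$((n+1))
        echo $n           # reply on stdout, which start_generator wired to a fifo
    done
}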

To reiterate: this is some of the craziest code I've ever written. It's way more complicated than other solutions that better satisfy any likely set of requirements, and the implementation threads its way through some particularly perilous bash minefields. FFS, I might as well have just used mktemp in the first place and skipped all of this. You'd have to be nuts to solve this problem this way, but maybe my documentation of the discoveries I made along the way will help someone solve a similar problem. Or maybe it's just a funny story about bash scripting gone horribly wrong.

Scaling Filesystems vs. Other Things

David Strauss tweeted an interesting comment about using filesystems (actually he said “block devices” but I think he really meant filesystems) for scale and high availability. I thought I was following him (I definitely am now) but in fact I saw the comment when it was retweeted by Jonathan Ellis. The conversation went on a while, but quickly reached a point where it became impossible to fit even a minimally useful response under 140 characters, so I volunteered to extract the conversation into blog form.

Before I start, I’d like to point out that I know both David and Jonathan. They’re both excellent engineers and excellent people. I also don’t know the context in which David originally made his statement. On the other hand, NoSQL/BigData folks pissing all over things they’re too lazy to understand has been a bit of a hot button for me lately (e.g. see Stop the Hate). So I’m perfectly willing to believe that David’s original statement was well intentioned, perhaps a bit hasty or taken out of context, but I also know that others with far less ability and integrity than he has are likely to take such comments even further out of context and use them in their ongoing “filesystems are irrelevant” marketing campaign. So here’s the conversation so far, rearranged to show the diverging threads of discussion and with some extra commentary from me.

DavidStrauss Block devices are the wrong place scale and do HA. It’s always expensive (NetApp), unreliable (SPOF), or administratively complex (Gluster).

Obdurodon Huh? GlusterFS is *less* administratively complex than e.g. Cassandra. *Far* less. Also, block dev != filesystem.

Obdurodon It might not be the right choice for any particular case, but for reasons other than administrative complexity.

What reasons, then? Wrong semantics, wrong performance profile, redundant wrt other layers of the system, etc. I think David and I probably agree that scale and HA should be implemented in the highest layer of any particular system, not duplicated across layers or pushed down into a lower layer to make it Somebody Else’s Problem (the mistake made by every project that has tried to make the HDFS NameNode highly available). However, not all systems have the same layers. If what you need is a filesystem, then the filesystem layer might very well be the right place to deal with these issues (at least as they pertain to data rather than computation). If what you need is a column-oriented database, that might be the right place. This is where I think the original very general statement fails, though it seems likely that David was making it in a context where layering two systems had been suggested.

DavidStrauss GlusterFS is good as it gets but can still get funny under split-brain given the file system approach: http://t.co/nRu1wNqI

I was rather amused by David quoting my own answer (to a question on the Gluster community site) back at me, but also a bit mystified by the apparent change of gears. Wasn’t this about administrative complexity a moment ago? Now it’s about consistency behavior?

Obdurodon I don’t think the new behavior (in my answer) is markedly weirder than alternatives, or related to being a filesystem.

DavidStrauss It’s related to it being a filesystem because the consistency model doesn’t include a natural, guaranteed split-brain resolution.

Obdurodon Those “guarantees” have been routinely violated by most other systems too. I’m not sure why you’d single out just one.

I’ll point out here that Cassandra’s handling of Hinted Handoff has only very recently reached the standard David seems to be advocating, and was pretty “funny” (to use his term) before that. The other Dynamo-derived projects have also done well in this regard, but other “filesystem alternatives” have behavior that’s too pathetic to be funny.

DavidStrauss I’m not singling out Gluster. I think elegant split-brain recovery eludes all distributed POSIX/block device systems.

Perhaps this is true of filesystems in practice, but it’s not inherent in the filesystem model. I think it has more to do with who’s working on filesystems, who’s working on databases, who’s working on distributed systems, and how people in all of those communities relate to one another. It just so happens that the convergence of database and distributed-systems work is a bit further along, but I personally intend to apply a lot of the same distributed-system techniques in a filesystem context and I see no special impediment to doing so.

DavidStrauss #Gluster has also come a long way in admin complexity, but high-latency (geo) replication still requires manual failover.

Obdurodon Yes, IMO geosync in its current form is tres lame. That’s why I still want to do *real* wide-area replication.

DavidStrauss Top-notch geo replication requires embracing split-brain as a normal operating mode and having guaranteed, predictable recovery.

Obdurodon Agreed wrt geo-replication, but that still doesn’t support your first general statement since not all systems need that.

DavidStrauss Agreed on need for geo-replication, but geo-repl. issues are just an amplified version of issues experienced in any cluster.

As I’ve pointed out before, I disagree. Even systems that do need this feature need not – and IMO should not – try to do both local/sync and remote/async replication within a single framework. They’re different beasts, most relevantly with respect to split brain being a normal operating mode. I’ve spent my share of time pointing out to Stonebraker and other NewSQL folks that partitions really do occur even within a single data center, but they’re far from being a normal case there and that does affect how one arranges the code to handle it.

Obdurodon I’m loving this conversation, but Twitter might not be the right forum. I’ll extract into a blog post.

DavidStrauss You mean complex, theoretical distributed systems issues aren’t best handled in 140 characters or less? :-)

I think that about covers it. As I said, I disagree with the original statement in its general form, but might find myself agreeing with it in a specific context. As I see it, aggregating local filesystems to provide a single storage pool with a filesystem interface and aggregating local filesystems to provide a single storage pool with another interface (such as a column-oriented database) aren’t even different enough to say that one is definitely preferable to the other. The same fundamental issues, and many of the same techniques, apply to both. Saying that filesystems are the wrong way to address scale is like saying that a magnetic #3 Phillips screwdriver is the wrong way to turn a screw. Sometimes it is exactly the right tool, and other times the “right” tool isn’t as different from the “wrong” tool as its makers would have you believe.

Extracting Data From Quassel

Earlier this morning I needed to reconstruct a conversation with someone on an IRC channel, and I don’t generally keep text logs of IRC activity. However, I do use Quassel, which maintains a substantial backlog for me, and I also happen to know that the backlog is stored in a SQLite3 database. I poked around a bit and figured out enough of the schema to extract the information I wanted. I’ll probably need it again so I turned my manual hacking into a script, and other people might need to do the same thing so I’m publishing the script. Here it is: convo.py. Enjoy.
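
For the curious, the heart of such a script is one three-way join. Here’s a sketch of the kind of query convo.py runs – the table and column names are my reading of one version of Quassel’s SQLite schema, the database path is a common default, and the channel name is just an example, so all of those might differ on your system.

sqlite3 ~/.config/quassel-irc.org/quassel-storage.sqlite <<'EOF'
-- messages for one channel, oldest first (schema names are assumptions)
SELECT datetime(b.time, 'unixepoch'), s.sender, b.message
  FROM backlog b
  JOIN sender s ON s.senderid = b.senderid
  JOIN buffer f ON f.bufferid = b.bufferid
 WHERE f.buffername = '#gluster'
 ORDER BY b.time;
EOF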

When Coding Standards Hurt Quality

Most of my work is on code that has “initialize all local variables at declaration time” as part of the coding standard. I’ve never been a big fan, but I’m very reluctant to get into coding-standard arguments (probably as the result of having had to enforce them for so long) so I just let it go. The other day, Rusty Russell offered up a better reason to avoid this particular standard. The crux of the matter is that there’s a difference between a value being initialized and it being initialized correctly, and the difference is too subtle to capture in a usable standard. Sometimes there is a reasonable default value, and you want to initialize to that value instead of setting it in ten different places. Other times every value has a distinct important meaning, and code depends on a variable having one of those instead of a bland default. Does NULL mean “unassigned” or “no such entry” or “allocate for me” or something else? The worst part of all this is that required initializers prevent compilers and static-analysis tools from finding real uninitialized-variable errors for you. As far as they’re concerned it was initialized; they don’t know that the initial value, if left alone, will cause other parts of your program to blow up. If you need a real value, what you really want to do is leave the variable uninitialized at declaration time, and let compilers etc. do what they’re good at to find any cases where it’s used without being set to a real value first. If your coding standard precludes this, your coding standard is hurting code quality.
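
Bash has a rough analog of this argument that makes it concrete. With set -u, the shell itself flags any use of an unset variable – but only if you resist the urge to paper over it with a dummy initializer. A minimal sketch of my own, not Rusty’s example:

set -u    # treat any use of an unset variable as an error

function process {
    local outfile    # declared, but deliberately not initialized
    if [ "$1" = "log" ]; then
        outfile=/tmp/out.log
    fi
    # If the branch above didn't run, the next line aborts with
    # "outfile: unbound variable" -- the real bug gets caught here.
    # Writing 'local outfile=""' instead would silence that check
    # and quietly pass an empty path onward.
    echo "writing to $outfile"
}

process log    # fine; "process foo" would trip the check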

Rusty suggests that new languages should be designed with a built-in concept of undefined variables. At the very least, each type should have a value that cannot be set explicitly, and that only the interpreter/compiler can check for. This last part is important, because otherwise people will use it to mean NULL, with all of the previously-mentioned ambiguity that entails. The “uninitialized” value for each type should mean only that – never “ignored” or “doesn’t matter” or anything else. A slightly better approach is to make “uninitialized” only one of many possible variable annotations, as in cqual. Maybe some of that functionality will even be baked into gcc or LLVM (small pieces already are), providing the same functionality in current languages. Until then, the best option is to educate people about why it can sometimes be good to leave variables uninitialized until you have a real value for them.

My Android Experiment

A while back, I bought an Asus EeePad Transformer. A bit later I bought the base unit which has a keyboard and extra battery. I still love it. Being able to switch between landscape and portrait orientation on a whim is really awesome, because so many sites look so much better in one format. The battery life is phenomenal. I can get a full day of near-continuous use with the tablet alone, and the battery in the base is even bigger. It’s even smart about draining the base-unit battery to keep the built-in one as full as possible.

The last couple of days, I had to travel to NYC, so I decided to experiment with using this as my travel computer. I did bring another (small) laptop along just in case, but resolved not to use it – and I didn’t, except as a reserve battery for my MiFi portable wireless gadget. However, I have run into two serious limitations. One is that I can’t use it for presentations. Besides the fact that it physically has only a (mini) HDMI output whereas most projectors still use VGA, I can’t find any decent software for presentations. I’ve looked at several apps and a couple of online services. They’re all awful. Is it too much to ask that a presentation program handle a simple two-level bullet list properly? Apparently. The other problem is that I can’t really use this thing for terminal sessions. The base-unit keyboard actually lacks an escape key; as a vi user, I find that crippling. I could use emacs instead, but the handling of the control key also seems a bit erratic. I tried using the on-screen escape key in ConnectBot, but eventually settled on using Hacker’s Keyboard instead. While I was able to get some work done (Gluster guys: that’s how I did the quorum-optional patch) it was certainly not very pleasant. I’d like to avoid solutions that require rooting the device, but I might have to resort to that.

I still love using the EeePad at home, and in meetings. I just might not be able to use it as a road machine and that makes me sad. Maybe the software situation will improve over the next year or so.

Why Buy RHEL?

Yet again, I’m going to post about something related to my employer. Yet again, I’m going to reiterate that this is not an official Red Hat position. In fact, I more than half expect I’ll get in trouble for saying it, but it just had to be said. You see, there’s a discussion on Slashdot about How Can I Justify Using Red Hat When CentOS Exists? The poster wants the functionality of Red Hat Enterprise Linux, but the CIO doesn’t want to pay for it and demands that they use CentOS instead. A lot of people have tried to explain the various aspects of what a RHEL subscription gets you. I’m not going to expand or correct those comments, partly because that would definitely get me in trouble and partly because I just don’t care. Here’s the reason that apparently carries no weight at all with CIOs and never even occurs to Slashdotters.

Because it’s the fucking right thing to do, you assholes.

Yeah, I used profanity on what has almost always been a family-friendly blog. I did that because it’s so utterly infuriating that such an obvious and important principle has totally escaped notice elsewhere. If you value something, you pay for it. Even the worst free-market zealots claim to believe that. They often use the same rationale to justify eliminating regulations (especially environmental ones) or replacing public aid with private charity. Red Hat folks do more work than anyone to improve the Linux kernel, GNOME, and dozens of other projects. They write the code, do the testing, fix the bugs, write the documentation, and provide all kinds of logistical support. The beneficiaries include not just obvious derivatives like CentOS and Scientific, but commercial competitors as well – from Oracle’s and Amazon’s clones to completely separate distributions like Ubuntu that package the same code and fixes. This work isn’t done by volunteers. It costs a lot of money. The fact that we allow the code to be distributed for free should have nothing to do with the principle that you pay for what you value. When you violate that principle you ensure that there will be less of what you value. The result will be a net loss for everyone, as less innovation occurs and more energy is wasted making sure everyone’s “intellectual property” remains under lock and key. Even the thieves lose.

I’d really like to hear from someone who can offer a better moral justification than “we can so we should” for using CentOS on thousands of machines without paying for even one RHEL subscription, because nothing I’ve heard so far is even close. “Duty to maximize profits” arguments will be deleted, because I’ve already turned that one into swiss cheese enough times in my life. Does anybody seriously believe that freeloading should be on the “good” side of our collective moral map?

My Brother Rocks

In just about every technical community but one, I probably have a higher profile nowadays than my brother Kevin. I say that not out of younger-sibling competitiveness, but almost for the exact opposite reason – to point out that he’s a pretty technical guy too, and largely responsible for my being one. Here are a couple of points of evidence.

  • His interest in computers predates mine. When a friend of the family loaned us what had to be one of the first TRS-80 computers in New Zealand, it was Kevin who really jumped all over the opportunity.
  • He made a lot more effort regarding computers. One of the very first things he did when he came to the US (a year after our mother and I did) was save up and buy an Apple II. Given the price tag and our economic circumstances at the time, that was a pretty major expenditure. He dove right into 6502 programming, still years before I took programming seriously.
  • He was involved in open-source long before I was . . . except it wasn’t called that back then. Kevin was on the NetHack 3 development team, which was a pretty complex global enterprise. If you were to look at the way the developers coordinated, you’d recognize a lot of the patterns in common use today. This was back in 1989, as I was just starting my own programming career.

Since then, I’ve gone on to infamy and misfortune. Kevin is now a DNS guru, which is why I said “every community but one” earlier. As it happens, this knowledge came in handy just recently. I’m trying to consolidate my web “properties” which are currently spread all over the place. I want to use one provider for DNS, one for email, and one for everything else. GlowHost will very soon be handling only the web part, and not even that once I get un-stuck enough to set up my own nginx/PHP/etc. configuration on a cloud server I already use for a bunch of other things. As I was trying to move email from GlowHost to FastMail I ran into a glitch. I transferred DNS and email for one of my less-used domains just fine. When I tried to move atyp.us – yes, this domain right here – the DNS part seemed to be OK but I was having trouble with email. I was able to get email on FastMail, but I could see from the headers that it was still going through GlowHost first. I looked at the NS and MX records from a bunch of different places, and everything seemed fine, but even after several days I was still seeing this screwy behavior. Time to call in the DNS expert to see what I was missing.

Pause: can anyone else guess?

The problem turned out to be that mail transfer agents are dumber than I thought, and my silly insistence on using pl.atyp.us instead of atyp.us was confusing the poor babies. Even though I had the MX records for atyp.us and *.atyp.us in place, they’d still fail to find an MX record for pl.atyp.us specifically. Then, they wouldn’t even go “up the tree” and get the MX for atyp.us as I thought they would (and as the SOA for pl.atyp.us makes pretty clear). Instead – and this is the part where Kevin was able to point me in the right direction – they’d fall back to looking for an A record which was still pointing to GlowHost because that’s still where the website is. Bingo. I added the “pl” MX records, and I can already see email flowing in without going through GlowHost.
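
If you ever need to run the same diagnosis, the whole thing can be done with dig. One detail that helps explain the behavior: a wildcard like *.atyp.us never applies to a name that already has records of its own, and pl.atyp.us already had that A record, so the wildcard MX was never going to match it. Something like the following (my reconstruction, not the exact commands we used) shows the records in question.

dig +short MX pl.atyp.us    # empty before the fix: the wildcard doesn't apply here
dig +short MX atyp.us       # the parent's MX, which MTAs won't walk up to find
dig +short A pl.atyp.us     # the A record MTAs fall back to instead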

So thank you, Older Brother. No, not for the MX thing. For every thing.

Stop The Hate

I’ve noticed a significant increase lately in the number of complaints people are making about the operating systems they use, particularly Linux and most especially the storage stack. No, I’m not thinking of a certain foul-mouthed SSD salesman, who has made such kvetching the centerpiece of his Twitter persona. I’m talking about several people I know in the NoSQL/BigData world, people I’ve come to respect as very smart and generally reasonable, complaining about things like OS caches, virtual memory in general, CPU schedulers, I/O schedulers, and so on. Sometimes the complaints are just developers being developers, which (unfortunately) seems to mean being disrespectful of developers in other specialties. Sometimes the complaints take the form of an unexamined assumption that OS facilities just can’t be trusted, get in the way, and kill performance. The meme seems to be that the way to get better application performance is to get the OS out of the way as much as possible and reinvent half of what it does within your application. That’s wrong. No matter how the complaint is framed, it’s highly likely to reflect more negatively on the complainer than on the thing they’re complaining about.

Look, folks, operating-system developers can’t read minds. They have to build very complex, very general systems. They set defaults that suit the most common use cases, and they provide knobs to tune for something different. Learn how to use those knobs to tune for your exotic workload, or STFU. Does your code perform well in every possible use on every possible configuration, without tuning? Not so much, huh? I’ve probably seen your developers deliver a very loud “RTFM” when users visit mailing lists or IRC channels looking for help with a “wrong” use or config. I’ve probably seen them say far worse, even. How can the same person do that, and then turn around to complain about an OS they haven’t learned properly, and not be a hypocrite? When you do find those tuning knobs, often after having been told about them because you had already condemned the things they control as broken, don’t try and pass it off as your personal victory over the lameness of operating systems and their developers. You just turned a knob, which was put there by someone else in the hopes that you’d be smart enough to use it before you complained. They did the hard work – not you.

I’m not going to say that all complaints about operating systems are invalid, of course. I still think it’s ridiculous that Linux requires swap space even when there’s plenty of memory, and behaves poorly when it can’t get any. I think the “OOM Killer” is one of the dumbest ideas ever, and the implementation is even worse than the idea. I won’t say that operating-system documentation is all that it should be, either. Still, if you haven’t even tried to find out what you can tune through /proc and /sys and fcntl/setsockopt/*advise, or gone looking in the Documentation subdirectory of your friendly neighborhood kernel tree, or accepted an offer of help from a kernel developer who came to you to help make things better, you’re just in no position to complain or criticize. It’s like complaining that your manual-transmission car stalled, when you never even learned to drive it. Not knowing something doesn’t make you a fool, but complaining instead of asking does. Maybe if you actually engaged with your peers instead of pissing on them, they could help you build better applications.
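
To put some substance behind “those knobs,” here’s a tiny sampler of the kind of tuning points I mean – standard Linux locations, though device names and available settings vary from system to system.

cat /proc/sys/vm/swappiness           # how eagerly the VM swaps anonymous pages (0-100)
sysctl vm.dirty_background_ratio      # when background writeback kicks in
cat /sys/block/sda/queue/scheduler    # available and active I/O schedulers for one disk
blockdev --getra /dev/sda             # readahead setting, in 512-byte sectors

The fcntl/setsockopt/*advise family mentioned above covers the per-application side of the same story, from code rather than the shell.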

Thoughts on Fowler’s LMAX Architecture

I have the best readers. One sent me email expressing a hope that I’d write about Martin Fowler’s LMAX Architecture. I’d be glad to. In fact I had already thought of doing so, but the fact that at least one reader has already expressed an interest makes it even more fun. The architecture seems to incorporate three basic ideas.

  • Event sourcing, or the idea of using a sequentially written log as the “system of record” with the written-in-place copy as a cache – an almost direct inversion of roles compared to standard journaling.
  • The “disruptor” data/control structure.
  • Fitting everything in memory.

I don’t really have all that much to say about fitting everything in memory. I’m a storage guy, which almost by definition means I don’t get interested until there’s more data than will fit in memory. Application programmers should IMO strive to use storage only as a system of record, not as an extension of memory or networking (“sending” data from one machine to another through a disk is a pet peeve). If they want to cache storage contents in memory that’s great, and if they can do that intelligently enough to keep their entire working set in memory that’s better still, but if their locality of reference doesn’t allow that then LMAX’s prescription just won’t work for them and that’s that. The main thing that’s interesting about the “fit in memory” part is that it’s a strict prerequisite for the disruptor part. LMAX’s “one writer many readers” rule makes sense because of how cache invalidation and so forth work, but disks don’t work that way so the disruptor’s advantage over queues is lost.

With regard to the disruptor structure, I’ll also keep my comments fairly brief. It seems pretty cool, not that dissimilar to structures I’ve used and seen used elsewhere; some of the interfaces to the SiCortex chip’s built-in interconnect hardware come to mind quickly. I think it’s a mistake to contrast it with Actors or SEDA, though. I see them as complementary, with Actors and SEDA as high-level paradigms and the disruptor as an implementation alternative to the queues they often use internally. The idea of running these other models on top of disruptors doesn’t seem strange at all, and the familiar nature of disruptors doesn’t even make the combination seem all that innovative to me. It’s rather disappointing to see useful ideas dismissed because of a false premise that they’re alternatives to one another instead of being complementary.

The really interesting part for me, as a storage guy, is the event-sourcing part. Again, this has some pretty strong antecedents. This time it recalls Seltzer et al.’s work on log-structured filesystems, which is even based on many of the same observations e.g. about locality of reference and relative costs of random vs. sequential access. That work’s twenty years old, by the way. Because event sourcing is so similar to log-structured file systems, it runs into some of the same problems. Chief among these is the potentially high cost of reads that aren’t absorbed by the cache, and the complexity involved with pruning no-longer-relevant logs. Having to scan logs to find the most recent copy of a block/record/whatever can be extremely expensive, and building indices carries its own expense in terms of both resources and complexity. It’s not a big issue if your system has very high locality of reference, which time-oriented systems such as LMAX or several other types of systems tend to, but it can be a serious problem in the general case. Similarly, the cleanup problem doesn’t matter if you can simply drop records from the tail or at least stage them to somewhere else, but it’s a big issue for files that need to stay online – with online rather than near-line performance characteristics – indefinitely.
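
To make the event-sourcing idea concrete for the non-storage folks, here’s a toy sketch of my own (nothing to do with LMAX’s actual code): the append-only log is the system of record, and “current state” is just a cache you can rebuild by replaying it.

log=events.log

function record {
    echo "$1 $2" >> "$log"    # append an event; never update in place
}

function replay {
    # rebuild current state (a balance) by replaying every event in order
    awk '$1 == "deposit"  { bal += $2 }
         $1 == "withdraw" { bal -= $2 }
         END { print bal+0 }' "$log"
}

record deposit 100
record withdraw 30
replay    # prints 70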

In conclusion, then, the approach Fowler describes seems like a good one if your data characteristics are similar to LMAX’s, but probably not otherwise. Is it an innovative approach? Maybe in some ways. Two out of the three main features seem strongly reminiscent of technology that already existed, and combinations of existing ideas are so common in this business that this particular combination doesn’t seem all that special. On the other hand, there might be more innovation in the low-level details than one would even expect to find in Fowler’s high-level overview. It’s interesting, well worth reading, but I don’t think people who have dealt with high data volumes already will find much inspiration there.

Distributed Databases in 1965

While we were in Ann Arbor last month, we stopped by the absolutely amazing Kaleidoscope used and rare bookstore. (I’d link, but can’t find a website.) I knew from our last visit that they have an excellent collection of old sci-fi magazines, so I decided to see if they had any from the month I was born – April 1965. Sure enough, they had a Galaxy from that month. I was surprised how many of the authors I recognized. Here are the stories mentioned on the cover:

  • “Wasted on the Young” by John Brunner
  • “War Against the Yukks” by Keith Laumer
  • “A Wobble in Wockii Futures” by Gordon R. Dickson
  • “Committee of the Whole” by Frank Herbert

That’s an all-star cast right there. However, the story that really made an impression on me was by someone I had never heard of – “The Decision Makers” by Joseph Green. It’s about an alien-contact specialist sent to decide whether a newly discovered species met the relevant definitions of intelligence – a finding that would interfere with a planned terraforming operation. That’s pretty standard stuff for the SF of the time, but there’s a twist: the aliens, which are called seals, have a sort of collective intelligence which complicates the protagonist’s job. This leads to the passage that might be of interest to my usual technical audience.

Our group memory is an accumulated mass of knowledge which is impressed on the memory areas of young individuals at birth, at least three such young ones for each memory segment. We are a short-lived race, dying of natural causes after eight of your years. As each individual who carries a share of the memory feels death approaching he transfers his part to a newly born child, and thus the knowledge is transferred from generation to generation, forever.

Try to remember that this was written in 1965, long before the networked computer systems of today were even imagined, and that the author wasn’t even writing about computers. He was trying to tell a completely different kind of story; the entire excerpt above could have been omitted without affecting the plot. Nonetheless, he managed to describe a form of what we would now call sharding, with replication and even deliberate re-replication to preserve availability. The result should be instantly recognizable to anyone who has studied modern distributed databases such as Voldemort or Riak or Cassandra. A lot of people think of this stuff as cutting edge, but it’s also an incidental part of a barely-remembered story from 1965. Somehow I find that both humbling and hilarious.