We don’t know what it is yet, but we know it’s coming, because the Sun bloggers have started “spontaneously” posting their “personal” thoughts about issues related to it. Bonwick makes a lot of good points, for example about “roll your own” OSes, but he also misses on a few. Case in point:

A single Opteron core can XOR data at about 6 GB/sec.

…if it’s doing nothing else, and if it has a memory system to support it. Is that really a good tradeoff? Many systems, even those sold as appliances, are already running near their computational capacity and can ill afford to throw any cycles at something that can reasonably be done off-CPU. At the same time, really fast general-purpose memory systems (especially coherent ones) are expensive, perhaps more so than a little dab of silicon and a few signal lines serving a particular limited but performance-critical purpose in a system. CPUs have advanced a lot, but so has the ability to develop and integrate special-purpose hardware into a system. I’ve learned a bit about that lately. ;) This battle between general-purpose and special-purpose hardware has been going on for a long time, and it will continue. Lots of people want to believe they’re part of a transition from one era to another; I guess it makes them feel important, but most of them are just fooling themselves.
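To make the memory-system point concrete, here’s a rough single-core sketch (my own illustration with arbitrary buffer sizes and pass counts, not Bonwick’s benchmark) that XORs one buffer into another and reports the apparent throughput:

```c
/* xorbench.c - rough probe of single-core XOR bandwidth.
 * Build: cc -O2 -o xorbench xorbench.c
 * The buffer size and pass count below are arbitrary illustrative choices. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF_BYTES (64UL * 1024 * 1024)   /* 64 MB per buffer, well past cache */
#define PASSES    16

int main(void)
{
    unsigned long *dst = malloc(BUF_BYTES);
    unsigned long *src = malloc(BUF_BYTES);
    if (!dst || !src) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    memset(dst, 0xaa, BUF_BYTES);
    memset(src, 0x55, BUF_BYTES);

    size_t words = BUF_BYTES / sizeof(unsigned long);
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int p = 0; p < PASSES; p++)
        for (size_t i = 0; i < words; i++)
            dst[i] ^= src[i];            /* the parity-style XOR itself */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* Touch the result so the optimizer can't discard the work. */
    unsigned long check = 0;
    for (size_t i = 0; i < words; i++)
        check ^= dst[i];

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double gb   = (double)BUF_BYTES * PASSES / 1e9;
    printf("XORed %.1f GB in %.3f s = %.2f GB/s (check %lx)\n",
           gb, secs, gb / secs, check);

    free(dst);
    free(src);
    return 0;
}
```

Even this toy loop makes the point: each word costs two reads and a write, so the answer is bounded by what the memory system can stream, and by whether the core has nothing else to do while it runs.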

In the time that Fibre Channel went from 1Gb to 4Gb — a factor of 4 — Ethernet went from 10Mb to 10Gb — a factor of 1000.

That’s not really an accurate picture. Right now the comparison is still 2Gb/s Fibre Channel to 1Gb/s Ethernet in most of the market, and 4Gb/s FC is more “real” in that market than 10GigE. Like everyone who has worked with it for any length of time, I’ve learned to dislike FC for many reasons, but the raw bit rate is not the problem. Just about nothing can pump that much data through a single channel anyway, nor should it need to try when multiple channels are available. As in long-haul networking (e.g. DWDM), the higher bit rates are mainly useful for aggregating smaller flows within the core, and there are other ways to deal with that issue within the relatively small distances of a data center.

There’s often a tradeoff between “faster but fewer” and “more but slower,” with the former relying more on bleeding-edge hardware and the latter relying more on software smarts. If the efficacy of using general-purpose cycles for everything is one of your assumptions, it’s a little odd for “faster but fewer” to be another. Saying that the real distinction is commodity vs. custom doesn’t explain it either, because FC and (10)GigE are both commodities and standards. The real preference here seems to be for what Sun’s doing over what they’re not, and it’s a preference with little bearing on anyone not standing on the Titanic’s observation deck.

And if you think Thumper was disruptive, well… stay tuned.

Oh, I will, but I don’t expect the disruption to come from Sun. I’m a lot more interested in what’s going on with GPUs being adapted for general-purpose use, or Cell, or reconfigurable computing, or other things that offer a “serial repurposing” option in addition to the cast-in-silicon general- vs. special-purpose options mentioned earlier. I’m interested in different memory models that allow more efficient (or more cost-efficient) implementations, system architectures that represent a different resource balance than the same old ho-hum, and so on. I’m sure Bonwick has a reason for being where he is, but I have a reason for being where I am too, and the same could be said for thousands of others at companies much nimbler than Sun. We’ll just have to wait and see whether “disruptive” is more than a marketing term at Sun.