Since my last article on the subject, a couple of other folks have tried to use the EBS failure to pimp their own competing solutions. Joyent went first, with Network Storage in the Cloud: Delicious but Deadly. Its author makes some decent points, e.g. about “read-only” mounts not actually being read-only, until he goes off the rails about here.

This whole experience — and many others like it — left me questioning the value of network storage for cloud computing. Yes, having centralized storage allowed for certain things — one could “magically” migrate a load from one compute node to another, for example — but it seemed to me that these benefits were more than negated by the concentration of load and risk in a single unit (even one that is putatively highly available).

What’s that about “concentration of load and risk in a single unit”? It’s bullshit, to put it simply. Note the conflation of “network storage” in the first sentence with “centralized storage” in the second. As Bryan himself points out in the very next paragraph, the fallback to local storage has forced them to “reinvest in technologies” for replication, migration, and backup between nodes. That’s not reinvesting; that’s reinventing wheels that work just fine in systems beyond those Bryan knows. Real distributed storage doesn’t involve that concentration of load and risk, because it’s more than just a single server with failover. Those of you who follow me on Twitter probably noticed my tweet about people whose vision of “distributed” doesn’t extend beyond that slight modification to an essentially single-server world view. Systems like RBD and Sheepdog, or Dynamo and its derivatives if you go a little further afield, don’t have the problems that naive iSCSI or DRBD implementations do.

Next up is Heroku, with their incident report, which turned into an editorial. They actually make a point I’ve been making for years.

2) BLOCK STORAGE IS NOT A CLOUD-FRIENDLY TECHNOLOGY. EC2, S3, and other AWS services have grown much more stable, reliable, and performant over the four years we’ve been using them. EBS, unfortunately, has not improved much, and in fact has possibly gotten worse. Amazon employs some of the best infrastructure engineers in the world: if they can’t make it work, then probably no one can. Block storage has physical locality that can’t easily be transferred.

OK, that last part isn’t quite right. Block storage has no more or less physical locality than file or database storage; it all depends on the implementation. However, block storage does have another property that makes it cloud-unfriendly: there’s no reasonable way to share it. Yes, cluster filesystems that allow such sharing do exist; I even worked on one a decade ago. There are a whole bunch of reasons why they’ve never worked out as well as anyone hoped, and a few reasons why they’re a particularly ill fit for the cloud.

In the cloud you often want your data to be shared, but the only way to share block storage is to turn it into something else (e.g. files, database rows/columns, graph nodes), at which point you’re sharing that something else instead of the block storage itself. Just about every technology you might use to do this can handle its own sharding/replication/etc., so you might as well cut out the middle man and run them on top of local block storage. That’s the only case where local block storage makes sense, because it explicitly does not need to be shared and is destined for presentation to users in some other form.

Even in the boot-image case, which might seem to involve non-shared storage, there’s actually sharing involved if your volume is a snapshot/clone of a shared template. Would you rather wait for every block in a multi-GB image to be copied to local disk before your instance can start, or start up immediately and copy blocks from the shared template only as needed? In all of these cases, the local block storage is virtualized or converted ASAP instead of being passed straight through to users.

The only reason for the pass-through approach is performance, but in the cloud you should be achieving application-level performance via horizontal scaling rather than hyper-optimization of each instance, so that’s a weak reason to rely on it except in a few very specialized cases, such as virtual appliances which are themselves providing a service to the rest of the cloud.
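The copy-on-read behavior described above for snapshot-backed boot volumes can be sketched in a few lines. This is a toy illustration, not any real EBS, RBD, or qcow2 implementation; all names here are made up for the example.

```python
# Sketch of a volume cloned from a shared, read-only template: blocks are
# copied locally only when first touched, so an instance can boot before
# the whole multi-GB image has been transferred. Illustrative only.

BLOCK_SIZE = 4096

class CloneVolume:
    def __init__(self, template: bytes):
        self.template = template   # shared, read-only backing image
        self.local = {}            # block index -> locally-held bytes

    def read(self, block_no: int) -> bytes:
        if block_no not in self.local:
            # First touch: lazily copy this block from the shared template.
            start = block_no * BLOCK_SIZE
            self.local[block_no] = self.template[start:start + BLOCK_SIZE]
        return self.local[block_no]

    def write(self, block_no: int, data: bytes) -> None:
        # Writes always land in local storage; the template is never modified,
        # so any number of clones can share it safely.
        assert len(data) == BLOCK_SIZE
        self.local[block_no] = data
```

The point of the structure is that only the blocks an instance actually touches ever consume local disk or transfer bandwidth; untouched blocks stay on the shared template indefinitely.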