The "IOPS Myth" Myth

It's nice to see more people becoming aware that IOPS are not the be-all and end-all of storage performance. Unfortunately, as with all consciousness-raising, the newest converts are often the ones who take things too far. Thus, we get extreme claims like IOPS Are A Scam or Storage Myths: IOPS Don't Matter. What a surprise, that somebody who works for Violin would claim that it's all about latency, all the time. Let's get away from the extremists on both sides, and try to find the truth somewhere in between.

As people who have actually worked on storage - and particularly storage performance - for a while know, different workloads present different needs and challenges. The grain of truth in the extremists' claims is that latency really is king for some applications. Those applications are the ones where I/O is both serialized and synchronous. Quick, how many of those do you run? Probably very few, quite possibly even fewer than you think, for a few different reasons.

  • Most modern applications have some internal parallelism. Therefore, their performance is often bound more by IOPS than by pure latency.

  • If applications do asynchronous writes, then the I/O system responsible for ensuring their (eventual) completion can take advantage of parallelism even if the writes were issued sequentially. This is what's likely to happen every time you do a series of writes followed by fsync (see the sketch after this list). That parallelism is available at every level, from the filesystem all the way down to an individual disk sitting in another cabinet.

  • Even applications that do serialized and synchronous writes often do so only some of the time - e.g. for logs/journals but not for main data. These applications are often latency-bound, but that doesn't mean low latency is necessary for every bit they store.
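
To make the second bullet concrete, here's a minimal C sketch contrasting the two patterns. It assumes a POSIX environment, and everything in it (file names, block size, write count) is invented purely for illustration:

```c
/* A minimal sketch, assuming a POSIX environment. The file names
 * ("journal.log", "data.db"), block size, and write count are
 * invented purely for illustration. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char block[4096];
    memset(block, 'x', sizeof(block));

    /* Serialized AND synchronous: O_SYNC forces each write() to reach
     * stable storage before it returns, so the loop's total time is
     * dominated by per-write latency. This is the "latency is king" case. */
    int log_fd = open("journal.log", O_WRONLY | O_CREAT | O_SYNC, 0644);
    if (log_fd < 0) { perror("open journal.log"); return 1; }
    for (int i = 0; i < 8; i++)
        if (write(log_fd, block, sizeof(block)) < 0) { perror("write"); return 1; }
    close(log_fd);

    /* A series of buffered writes followed by one fsync(): the writes
     * land in the page cache immediately, and the kernel and device are
     * free to issue them in parallel and reorder them. Throughput here
     * is governed by IOPS, not by the latency of any single write. */
    int data_fd = open("data.db", O_WRONLY | O_CREAT, 0644);
    if (data_fd < 0) { perror("open data.db"); return 1; }
    for (int i = 0; i < 8; i++)
        if (write(data_fd, block, sizeof(block)) < 0) { perror("write"); return 1; }
    if (fsync(data_fd) < 0) { perror("fsync"); return 1; }  /* single durability point */
    close(data_fd);
    return 0;
}
```

On a device with, say, 5ms per synchronous write, the first loop pays that latency eight times in a row, while the second can retire all eight writes in roughly the time of the slowest one plus the fsync - which is exactly why such workloads end up IOPS-bound rather than latency-bound.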

I've made the point myself, quite recently, that it's important to look at all aspects of storage performance, including predictability and behavior over time. That still doesn't mean you should just pick one set of characteristics as "best" and leave it at that. You're going to be using many kinds of storage. Get used to it. For example, you might need low latency for the 1% of your data that's written serially, high IOPS for the next 9% that's still warm but read/written in parallel, and neither (at the lowest possible cost) for the cold remainder. In that middle tier, low-latency storage would be overkill. What matters is how many IOPS you can get within a single system, to avoid the management and resource provisioning/migration headaches of having several. Thus, a high-IOPS system still has value even if it doesn't also offer low latency. If that weren't true, nobody would even consider using S3 or Swift, let alone Glacier, since those all have terrible latency characteristics.
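
As a purely hypothetical illustration of that 1% / 9% / 90% split, here's a C sketch of a tier-selection rule. The tier names, traits, and comments are mine, invented for illustration; they don't describe any real product or system:

```c
/* A hypothetical tier-selection sketch for the 1% / 9% / 90% split
 * discussed above. The tier names and traits are invented for
 * illustration; they don't describe any real product or system. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { TIER_LOW_LATENCY, TIER_HIGH_IOPS, TIER_CAPACITY } tier_t;

/* Pick a tier from two coarse traits of a dataset: whether its writes
 * are serialized and synchronous, and whether it's still warm. */
static tier_t choose_tier(bool serialized_sync, bool warm)
{
    if (serialized_sync)
        return TIER_LOW_LATENCY; /* e.g. logs/journals: the ~1% */
    if (warm)
        return TIER_HIGH_IOPS;   /* parallel read/write: the ~9% */
    return TIER_CAPACITY;        /* cold remainder: cheapest $/GB */
}

int main(void)
{
    printf("journal   -> tier %d\n", choose_tier(true,  true));
    printf("hot table -> tier %d\n", choose_tier(false, true));
    printf("archive   -> tier %d\n", choose_tier(false, false));
    return 0;
}
```

The logic is trivial on purpose; the point is that all three classes of data answer to a single placement decision, which is far easier to make (and re-make, as data cools) when they all live within one system.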

In short, "latency is king" is the new "scale up" motto, but we mostly live in a "scale out" world. Yes, sure, there are situations where you just need a single super-fast widget, but much more often you need a whole bunch of more conventional widgets providing high aggregate throughput within a single system. Low latency and high IOPS are entirely complementary goals. Just as there have been valid uses for both mainframes and supercomputers since they started to diverge in the 70s, there are valid uses for both types of storage systems. Those designing or selling one should not lightly dismiss the other, lest that lead to a discussion of who's merely picking components and who's solving hard algorithmic problems.
