Very interesting announcement.

In April, Berkeley Lab signed a collaboration agreement with Tensilica®, Inc. to explore such new design concepts for energy-efficient high-performance scientific computer systems. The joint effort focuses on novel processor and system architectures that use large numbers of small processor cores, connected by optimized links and tuned to the requirements of highly parallel applications such as climate modeling.

The numbers are interesting even if they're only projections for a future system.

They conclude that a supercomputer built from about 20 million embedded microprocessors would deliver the needed performance and cost roughly $75 million to construct. This “climate computer” would consume less than 4 megawatts of power and achieve a peak performance of 200 petaflops.

Hmmm. Divide by a thousand to get 20K processors, 4 kW, and 200 TF. Of those, only the 4 kW number seems out of place compared to current Top500-class systems. This reminds me a bit of the UC Berkeley (not quite the same place) BEE project. It’s more ambitious in scale and range of applications, though perhaps a bit less so on the programming side, since it aims to be usable with “standard programming languages and tools” rather than taking BEE’s more direct-to-hardware approach.
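Just to sanity-check the arithmetic, here’s a quick back-of-the-envelope sketch in Python. The inputs are the announced estimates, nothing measured, and the per-core and efficiency figures simply fall out of division:

```python
# Announced estimates for the proposed "climate computer" (projections, not measurements)
PROCESSORS = 20_000_000      # embedded microprocessor cores
COST_USD   = 75_000_000      # estimated construction cost
POWER_W    = 4_000_000       # "less than 4 megawatts"
PEAK_FLOPS = 200e15          # 200 petaflops peak

# Per-core budget implied by the headline numbers
print(f"per core:   {PEAK_FLOPS / PROCESSORS / 1e9:.1f} GFLOPS, "
      f"{POWER_W / PROCESSORS:.2f} W, ${COST_USD / PROCESSORS:.2f}")

# Overall energy efficiency
print(f"efficiency: {PEAK_FLOPS / POWER_W / 1e9:.0f} GFLOPS per watt")

# The one-thousandth-scale slice mentioned above
scale = 1 / 1000
print(f"1/1000 slice: {int(PROCESSORS * scale):,} cores, "
      f"{POWER_W * scale / 1e3:.0f} kW, "
      f"{PEAK_FLOPS * scale / 1e12:.0f} TFLOPS")
```

That works out to about 10 GFLOPS and 0.2 W per core, or roughly 50 GFLOPS per watt overall, which is exactly why the 4 kW (at the thousandth scale) looks so aggressive next to today’s machines.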

You just bought a huge bucket of computes. So, how do you connect the pieces to memory, to storage, to each other? You’ve got a compiler, now how do you write your programs to run reliably and well on it? How do you make it even slightly manageable? When you’re done, can you put gull-wing doors and big blue LEDs on it? ;)