The REAL Weapon of Mass Destruction

I suspect very few people really know what's going on behind the scenes here, but China's got one and the USA's got a couple. Their aim is to paralyze the other side by taking full control of cyberspace with just one click. I'm talking about the supercomputers. An interesting study was released Monday, showing that the new Chinese machine – the Tianhe-1A – is clearly superior to the most powerful supercomputers in the US.

“Unless we invest in this area we are just going to end up with fantastic machines that we cannot use.”

Stephen Jarvis

The Tianhe-1A

While the Chinese officially fire up their new gigantic supercomputer for the first time, scientists from the University of Warwick present new research today at the world’s largest supercomputing conference, comparing China’s new No. 1 supercomputer to the alternative US designs. The findings suggest that the new Chinese mega-machine may be up to seven times faster than its US peers.

The work provides crucial new analysis that will benefit the battle plans of both sides in an escalating war between two competing technologies.

According to the research by Professor Stephen Jarvis, Royal Society Industry Fellow at the University of Warwick’s Department of Computer Science, the GPU-based (GPGPU) design used in China’s 2.5-Petaflop Tianhe-1A is able to run three to seven times as fast as the alternative supercomputing designs employed in the US.

The main reason is that the US designs use relatively simple processing cores brought together in parallel by highly effective and scalable interconnects, as seen in the IBM BlueGene architectures.

“If your application fits, then GPGPU solutions will outgun BlueGene designs on peak performance. The Tianhe-1A has a theoretical peak performance of 4.7 Petaflops, yet our best programming code-based measures can only deliver 2.5 Petaflops of that peak,” Professor Jarvis says.

Professor Jarvis’ modeling found that small GPU-based systems solved problems between three and seven times faster than traditional CPU-based designs.

However, he also found that as the number of linked processing elements increases, the performance of GPU-based systems improves at a much slower rate than that of the BlueGene-style machines.

So it is not certain that peak Petaflops are all that matters.

According to the research, the super-machines are also producing a lot of unused computing power at the moment, burning enough energy to power a small US town.

“Contrast this with the Dawn BlueGene/P at Lawrence Livermore National Laboratory in the US, it’s a small machine at 0.5 Petaflops peak [performance], but it delivers 0.415 Petaflops of that peak. In many ways this is not surprising, as our current programming models are designed around CPUs,” Jarvis points out.
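The gap Jarvis describes between peak and delivered performance boils down to a simple efficiency ratio. A minimal sketch, using only the figures quoted in the article (the helper function name is illustrative, not from the research paper):

```python
def efficiency(delivered_pflops: float, peak_pflops: float) -> float:
    """Fraction of theoretical peak performance actually delivered."""
    return delivered_pflops / peak_pflops

# Figures quoted in the article:
tianhe_1a = efficiency(2.5, 4.7)        # GPGPU design: 2.5 of 4.7 Petaflops
dawn_bluegene = efficiency(0.415, 0.5)  # CPU-based BlueGene/P: 0.415 of 0.5

print(f"Tianhe-1A:       {tianhe_1a:.0%} of peak")      # ~53%
print(f"Dawn BlueGene/P: {dawn_bluegene:.0%} of peak")  # 83%
```

By this measure the small CPU-based machine is markedly more efficient, which is the point Jarvis makes: current programming models are designed around CPUs.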

But there’s more.

The BlueGene

“The BlueGene design is not without its own problems. In our paper we show that BlueGenes can require many more processing elements than a GPU-based system to do the same work. Many of our scientific algorithms — the recipes for doing the calculations — just do not scale to this degree, so unless we invest in this area we are just going to end up with fantastic machines that we cannot use,” Professor Jarvis says.

Another key problem identified by the University of Warwick research is the fact that in the rush to use excitingly powerful GPGPUs, researchers have not yet put sufficient energy into devising the best technologies to actually link them together in parallel at massive scales.

Both the USA and China are racing for the next milestone in 21st-century computing – the Exascale – one quintillion floating-point operations per second (10^18).
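To put that target in perspective, a back-of-the-envelope check using the article's own figure for the Tianhe-1A's sustained performance shows how large the remaining jump is:

```python
PETA = 10**15  # one quadrillion flops
EXA = 10**18   # one quintillion flops (the Exascale target)

tianhe_sustained = 2.5 * PETA  # sustained rate quoted in the article

factor = EXA / tianhe_sustained
print(f"Exascale is a {factor:.0f}x jump over the Tianhe-1A's sustained rate")  # 400x
```

In other words, even the world's fastest machine in 2010 sits two and a half orders of magnitude short of the goal both nations are chasing.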

“It’s not simply an architectural decision either — you could run a small town on the power required to run one of these supercomputers and even if you plump for a design and power the thing up, programming it is currently impossible,” Jarvis notes.

“At Supercomputing in New Orleans we directly compare GPGPU designs with that of the BlueGene. If you are investing billions of dollars or yuan in supercomputing programmes, then it is worth standing back and calculating what designs might realistically get you to Exascale, and once you have that design, mitigating for the known risks — power, resilience and programmability,” he adds.

“Given the crossroads at which supercomputing stands, and the national pride at stake in achieving Exascale, this design battle will continue to be hotly contested. It will also need the best modeling techniques that the community can provide to discern good design from bad,” professor Jarvis concludes.

The research paper, entitled “Performance Analysis of a Hybrid MPI/CUDA Implementation of the NAS-LU Benchmark” by S.J. Pennycook, S.D. Hammond, G.R. Mudalige and S.A. Jarvis at the University of Warwick’s Department of Computer Science, was presented on Monday November 15 at the Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computing Systems in New Orleans.


Filed under International Economic Politics, Philosophy, Technology
