I suspect there are very few people who really know what's going on behind the scenes here, but China's got one and the USA has got a couple. Their aim is to be able to paralyze the other by taking full control over cyberspace with just one click. I'm talking about the supercomputers. An interesting study was released Monday, showing that the new Chinese machine – the Tianhe-1A – is far superior to the most powerful supercomputers in the US.
“Unless we invest in this area we are just going to end up with fantastic machines that we can not use.”
While the Chinese officially fire up their new gigantic supercomputer for the first time, scientists from the University of Warwick present new research at the world's largest supercomputing conference today, comparing China's new No. 1 supercomputer to the alternative US designs. The findings suggest that the new Chinese mega-machine may be up to seven times faster than its US peers.
The work provides crucial new analysis that will benefit the battle plans of both sides, in an escalating war between two competing technologies, ScienceDaily.com reports.
According to the research by Professor Stephen Jarvis, Royal Society Industry Fellow at the University of Warwick's Department of Computer Science, the GPU (GPGPU) design used in China's 2.5 Petaflops Tianhe-1A can run 3 to 7 times faster than the alternative supercomputing designs employed in the US.
The main reason is that the US machines use relatively simple processing cores, brought together in parallel by highly effective and scalable interconnects, as seen in the IBM BlueGene architectures.
"If your application fits, then GPGPU solutions will outgun BlueGene designs on peak performance. The Tianhe-1A has a theoretical peak performance of 4.7 Petaflops, yet our best programming code-based measures can only deliver 2.5 Petaflops of that peak," Professor Jarvis says.
Professor Jarvis’ modeling found that small GPU-based systems solved problems between 3 and 7 times faster than traditional CPU-based designs.
However, he also found that as the number of processing elements linked together increased, the performance of the GPU-based systems improved at a much slower rate than that of the BlueGene-style machines.
So, it's not certain that sheer Petaflops are all that matters.
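The scaling effect Jarvis describes can be sketched with a toy model. The numbers here are purely illustrative, not from the paper: assume the GPU system has a 5x per-node speed advantage, but a parallel efficiency that decays faster with node count than the BlueGene-style machine's.

```python
# Toy strong-scaling sketch (illustrative parameters only, not the paper's model).
# delivered() models throughput as: nodes * per-node speed * efficiency,
# with efficiency decaying as nodes**(-alpha); a larger alpha scales worse.

def delivered(nodes, per_node_flops, scaling_alpha):
    """Delivered throughput under a simple power-law efficiency decay."""
    efficiency = nodes ** (-scaling_alpha)
    return nodes * per_node_flops * efficiency

for nodes in (8, 64, 512, 4096):
    gpu = delivered(nodes, per_node_flops=5.0, scaling_alpha=0.25)  # fast, scales poorly
    bg = delivered(nodes, per_node_flops=1.0, scaling_alpha=0.05)   # slow, scales well
    print(f"{nodes:5d} nodes: GPU/BlueGene ratio = {gpu / bg:.2f}")
```

At small scale the GPU system wins by over 3x; by a few thousand nodes the better-scaling design has caught up and passed it, mirroring the qualitative finding above.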
According to the research, the super-machines are also producing a lot of unused computing power at the moment, burning enough energy to power a small US town.
“Contrast this with the Dawn BlueGene/P at Lawrence Livermore National Laboratory in the US, it’s a small machine at 0.5 Petaflops peak [performance], but it delivers 0.415 Petaflops of that peak. In many ways this is not surprising, as our current programming models are designed around CPUs,” Jarvis points out.
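The two efficiency figures quoted above work out as follows, a simple back-of-the-envelope check using the article's own numbers:

```python
# Delivered-vs-peak efficiency, using the Petaflops figures quoted in the article.
tianhe_peak, tianhe_delivered = 4.7, 2.5   # Tianhe-1A (GPGPU design)
dawn_peak, dawn_delivered = 0.5, 0.415     # Dawn BlueGene/P at Lawrence Livermore

tianhe_eff = tianhe_delivered / tianhe_peak  # ~53% of peak
dawn_eff = dawn_delivered / dawn_peak        # 83% of peak
print(f"Tianhe-1A: {tianhe_eff:.0%} of peak, Dawn BlueGene/P: {dawn_eff:.0%} of peak")
```

In other words, the small BlueGene delivers a far larger fraction of its theoretical peak than the headline-grabbing GPU machine.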
But there’s more.
"The BlueGene design is not without its own problems. In our paper we show that BlueGenes can require many more processing elements than a GPU-based system to do the same work. Many of our scientific algorithms — the recipes for doing the calculations — just do not scale to this degree, so unless we invest in this area we are just going to end up with fantastic machines that we can not use," Professor Jarvis says.
Another key problem identified by the University of Warwick research is the fact that in the rush to use excitingly powerful GPGPUs, researchers have not yet put sufficient energy into devising the best technologies to actually link them together in parallel at massive scales.
Both the USA and China are racing for the next milestone in 21st-century computing – the Exascale – one quintillion floating-point operations per second (10^18).
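To put that target in perspective, a quick calculation against the Tianhe-1A's sustained 2.5 Petaflops (figures from the article):

```python
# How far Exascale is from today's fastest sustained performance.
EXA = 10**18                 # 1 Exaflops = one quintillion flops per second
tianhe_sustained = 2.5e15    # Tianhe-1A's delivered 2.5 Petaflops

print(f"Exascale is {EXA / tianhe_sustained:.0f}x Tianhe-1A's sustained rate")
# → Exascale is 400x Tianhe-1A's sustained rate
```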
“It’s not simply an architectural decision either — you could run a small town on the power required to run one of these supercomputers and even if you plump for a design and power the thing up, programming it is currently impossible,” Jarvis notes.
"At Supercomputing in New Orleans we directly compare GPGPU designs with that of the BlueGene. If you are investing billions of dollars or yuan in supercomputing programmes, then it is worth standing back and calculating what designs might realistically get you to Exascale, and once you have that design, mitigating for the known risks — power, resilience and programmability," he adds.
"Given the crossroads at which supercomputing stands, and the national pride at stake in achieving Exascale, this design battle will continue to be hotly contested. It will also need the best modeling techniques that the community can provide to discern good design from bad," Professor Jarvis concludes.
The research paper, entitled “Performance Analysis of a Hybrid MPI/CUDA Implementation of the NAS-LU Benchmark” by S.J. Pennycook, S.D. Hammond, G.R. Mudalige and S.A. Jarvis at the University of Warwick’s Department of Computer Science, was presented on Monday November 15 at the Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computing Systems in New Orleans.
Related by The Swapper:
- Student Design Software to Combat Modern Cyber Crime
- Hackers Attack Norway’s Peace Prize Institute
- Cyber Wars Enter Center Stage At NATO Summit
- Hey, You HFT Bashers! Are You Ready For This?
- “Artificial Intelligence” To Be Implemented In HFT
- The Ultimate Trading Weapon
- EU Responds To Cyber Threat Alarm
- EU Demand Explanation On US Plan To Monitor Money Transfers
- Europe: Cyber Criminals Attack Critical Water, Oil and Gas Systems
- Hackers Steal CO2-emission Permits Worth $4bn
- Another Carbon Fraud Raid Reveals Firearms, Piles Of Cash
- China Builds World’s Fastest Supercomputer (spectrum.ieee.org)
- China rockets to top of supercomputer list (infoworld.com)
- New research provides effective battle planning for supercomputer war (scienceblog.com)