Three times faster

The increase in supercomputers' processing performance keeps the trend going. In a few years a new supercomputer, Aurora, promises to triple the current performance of Tianhe-2. Credit: Top500.org

Tianhe-2 still holds the crown as the fastest supercomputer, with 55 Petaflops of peak performance. It has now been number one in the Top500 ranking for two years.

This apparent flattening of the performance increase is not unusual, as can be seen in the performance growth graph shown (courtesy of Top500). We have previously had periods of two years with no performance increase.

Now news from the US Department of Energy points to a new supercomputer, Aurora, to be available in 2018, tripling the peak performance of Tianhe-2 to 180 Petaflops.

Clearly, if no new contender comes up before 2018 this will represent a slowdown in performance growth. Five years to triple performance is indeed slower growth than in the past: we were used to seeing growth by a factor of 5-8 over five years. This also casts doubt on the forecast of reaching Exaflop peak performance by 2020.
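A back-of-the-envelope check makes the slowdown concrete. The sketch below uses only the figures quoted above (55 Petaflops in 2013, 180 Petaflops promised for 2018, and the historical 5-8x growth over five years); everything else follows by simple arithmetic:

```python
# Back-of-the-envelope check of the growth rates discussed above.
# Figures from the article: Tianhe-2 at 55 PF (2013), Aurora at 180 PF (2018).

tianhe2_pf, aurora_pf = 55, 180
years = 5

implied_factor = aurora_pf / tianhe2_pf          # ~3.3x over five years
annual_growth = implied_factor ** (1 / years)    # ~1.27x per year

# Historical pace cited in the article: 5-8x over five years,
# i.e. roughly 1.38x-1.52x per year.
hist_low, hist_high = 5 ** (1 / years), 8 ** (1 / years)

# Extrapolating Aurora's pace two more years (2018 -> 2020):
pf_2020 = aurora_pf * annual_growth ** 2         # ~290 PF

print(f"implied annual growth: {annual_growth:.2f}x "
      f"(historical: {hist_low:.2f}x-{hist_high:.2f}x)")
print(f"extrapolated 2020 peak: {pf_2020:.0f} PF vs 1,000 PF for an Exaflop")
```

At Aurora's implied pace, 2020 would bring something near 290 Petaflops, well short of the 1,000 Petaflops an Exaflop machine requires.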
The problem is that the single units making up a supercomputer are no longer increasing in performance the way they used to. To further increase overall performance you need to cluster more units, and that makes the overall architecture more complex and leveraging the overall potential crunching power even tougher.
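One classic way to see why clustering more units gives diminishing returns is Amdahl's law. The sketch below is illustrative only: the serial fractions are assumptions, not measurements from any of these machines, but they show how even a tiny non-parallelisable share of the work caps the usable speedup:

```python
# Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n), where s is the
# fraction of work that cannot be spread across the n units.
# The serial fractions below are illustrative assumptions only.

def speedup(n_units: int, serial_fraction: float) -> float:
    return 1 / (serial_fraction + (1 - serial_fraction) / n_units)

for s in (0.001, 0.0001):
    for n in (1_000, 100_000, 1_000_000):
        print(f"s={s}, n={n:>9,}: speedup {speedup(n, s):,.0f}x")

# With s = 0.1% the speedup saturates near 1,000x no matter how many
# units are clustered: more architecture, little extra usable power.
```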

Still, we are moving on. And what is astonishing is that we still find ways to use the huge computation power provided, and crave more.

In announcing the $200 million investment to develop Aurora, the US Department of Energy pointed out its expectation that the increased processing power will support advances in Material Science (by allowing researchers to design and simulate new ways of aggregating atoms), in Biological Sciences (designing new organisms for bio-fuel production, and understanding and fighting diseases at the molecular level), in Transportation (allowing the design of more efficient engines and more aerodynamic vehicles), and in Renewable Energy (designing better wind turbines).

As we move to the nanoscale, manipulating molecules and atoms, we need to push the envelope in computation to manage the staggering complexity of dealing with billions of interacting particles and to derive the overall behaviour at the macro scale.
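To get a feel for that complexity, here is a rough sketch of what a naive all-pairs simulation of a billion particles would cost per timestep. The per-interaction FLOP count is an illustrative assumption; real force kernels vary widely:

```python
# Rough cost of one timestep of a naive all-pairs particle simulation.
# flops_per_pair is an illustrative assumption; real kernels differ.

n_particles = 1e9                 # "billions of interacting particles"
flops_per_pair = 50               # assumed cost per pairwise force evaluation

pairs = n_particles * (n_particles - 1) / 2      # ~5e17 pairs
flops_per_step = pairs * flops_per_pair          # ~2.5e19 FLOPs

aurora_flops = 180e15             # Aurora's promised 180 Petaflops
seconds_per_step = flops_per_step / aurora_flops

print(f"{seconds_per_step:,.0f} s per timestep at peak")   # ~140 s
# Even at Aurora's full peak, a single naive timestep takes minutes,
# which is why raw crunching power must be paired with smarter algorithms.
```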

Author - Roberto Saracco
