The age of silicon is slowing its fast pace of evolution, clocked for 52 years by Moore's Law: a doubling of transistors per chip every 18 months, with a parallel decrease in the manufacturing cost per transistor.
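The compounding implied by that doubling rate can be made concrete with a quick back-of-envelope calculation (illustrative arithmetic only, not figures quoted in the text):

```python
# Growth implied by one doubling every 18 months, sustained for 52 years.
years = 52
doubling_period_years = 1.5  # 18 months

doublings = years / doubling_period_years
growth_factor = 2 ** doublings

print(f"doublings: {doublings:.1f}")          # ~34.7 doublings
print(f"growth factor: {growth_factor:.2e}")  # ~2.7e10x more transistors per chip
```

Roughly ten orders of magnitude of growth over the period, which is why even a modest slowdown in the doubling cadence has such visible effects.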
As shown in the first graph, several of the parameters used to track the evolution of computing are leveling out:
- Single-thread performance
What still keeps growing is the number of transistors per chip, but the shrinking dimensions no longer translate into better performance: moving from 14nm down to 10nm has not led to significant performance improvements. The number of cores per chip is growing faster than ever, but that only underlines the quest for alternatives to keep improving chip performance, now that gains can no longer be obtained through Moore's Law.
Moore's Law has already failed from the economics perspective, the one that actually fueled it by expanding the market (and hence the volumes). Since 2014 the cost per transistor no longer decreases as density increases; rather, it is growing.
The three pillars for future evolution are Energy Efficiency, Security and Interfacing. The push towards more performant architectures is being pursued by looking at adiabatic computation, new CMOS architectures, and non-von Neumann computation such as quantum computers. The recent Google announcement of a 1,000-qubit quantum computer just around the corner does not seem realistic. The studies on approximate computation that seek inspiration from the brain created mixed feelings among the panelists who discussed the topic, with a broad range of opinions.
By contrast, there was consensus on the need for interdisciplinary cooperation, considering algorithms, architectures and nanotech as the bases for further progress.
On the horizon there seems to be no silver bullet in computation. The expectation is that progress in the different areas outlined above will produce different "computation machines", each fitting a specific field of application. Hence we are moving towards a heterogeneous future, with different kinds of "computers" serving different tasks.
The advent of a new Moore's-like age would require investment beyond the means of any single company, or even of a single country; hence broad collaboration is required to reinvent computation.
Interestingly, it was noted that the increase in computation over the last 60 years has kept up with the growth of data and the computation required to process them. No longer so. Computation today is lagging behind: most data collected today are discarded within 3 hours because we do not have enough memory to store them. This is known as the von Neumann bottleneck, since von Neumann stated 60 years ago that the lack of memory is what limits computation.
Also interesting is the observation that processors, today as yesterday, sit idle 90% of the time waiting for data to be transferred from memory to processor and vice versa.
Hence the next steps are focusing on:
- moving storage from outside the chip (DRAM/disk) to the inside
- changing the connection from copper to light guides, i.e. optical links between processor and storage
- moving memory to the center of computation, with processors arranged around the memory and performing computation on the memory itself (see figure 2)
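Why these steps target the right problem can be seen from a toy timing model of a memory-bound workload. All the numbers below are assumptions chosen for illustration, not measurements from the panel:

```python
# Toy model: a processor streaming data from external memory.
# Assumed figures (hypothetical, for illustration only):
PEAK_OPS_PER_S = 1e12   # 1 Tera-op/s of raw compute
BANDWIDTH_B_PER_S = 100e9  # 100 GB/s memory bandwidth

data_bytes = 8e9  # 8 GB of operands to move across the memory bus
ops = 1e10        # operations to perform on that data (low arithmetic intensity)

compute_time = ops / PEAK_OPS_PER_S           # time the ALUs actually need
transfer_time = data_bytes / BANDWIDTH_B_PER_S  # time spent moving data

busy_fraction = compute_time / (compute_time + transfer_time)
print(f"compute: {compute_time:.2f}s, transfer: {transfer_time:.2f}s")
print(f"processor busy only {busy_fraction:.0%} of the time")
```

Under these assumed numbers the processor is busy only about 11% of the time, consistent with the "idle 90%" observation above. On-chip storage and optical links raise the bandwidth term, while computing in the memory itself removes the transfer term altogether; a faster processor alone would change nothing.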
The Rebooting Computing initiative is fostering collaboration among several companies to create memory-centric computation.