
TTM 2014 - Future of Processing

The limits of present and under-research processing technologies, as discussed in a post on phase-change technologies that might lead to brain-like processing power. Credit: Brian Wang

A good chess program running on a powerful computer can investigate hundreds of thousands of moves before making its choice. That is not much, considering that there are some 10^40 possible positions of the chess pieces on the board and some 10^120 possible games. When the Cuban Chess Grandmaster José Raúl Capablanca was asked how many moves ahead he considered, he replied: "just one, the best".
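
A quick back-of-the-envelope calculation (mine, with round numbers, not in the original post) shows why brute force alone cannot bridge that gap:

```python
# Back-of-the-envelope: even examining a billion positions per second,
# exhaustively exploring ~10^40 chess positions is hopeless.
positions = 10**40
positions_per_second = 10**9                 # a generous rate for 2014-era hardware
seconds_needed = positions / positions_per_second
age_of_universe_s = 4.3e17                   # roughly 13.8 billion years in seconds
print(seconds_needed / age_of_universe_s)    # ~2e13 times the age of the universe
```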

In this little story lies our struggle with computation. We can surely improve our machines' computation capabilities, and we have done that quite consistently for almost 50 years (Gordon Moore stated his prediction in a paper in 1965), but that does not necessarily match an increase in the processing of meaning or in the ability to achieve a desired result.

The way a flight management computer (FMC) analyses flight data for a commercial plane and regulates ailerons, pitch and so on requires hundreds of thousands of lines of code and several MIPS of processing power (the Motorola FMC Model 2907C1 used on the Boeing 737 has about 60 MIPS of processing power and 32 MB of RAM). A fly, to fly, uses about 5,000 neurones at an infinitesimal fraction of the power consumed by the FMC. The raw processing power is completely different, and yet both serve (basically) the same goal.

The real challenge that I see for the future of processing is in its efficiency, in terms of power consumption and of effectiveness with respect to the goal. Computers today are general-purpose machines (I am talking about our PCs and supercomputers alike) that get specialised by the software running on them. In a brain there is no software, but there is no fixed hardware either! Neurones and their connections constantly change, and these changes represent what we remember, what we feel and how we react to external stimuli. As time goes by our brain changes significantly and gets, in a sense, more and more specialised, and better, at managing future events.
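
As a toy illustration of this point, and nothing more, the following sketch uses a simple Hebbian learning rule: there is no separate program or memory, only connection strengths that change every time a stimulus is processed (the rule, the sizes and the numbers are my own illustrative choices):

```python
import numpy as np

# Minimal Hebbian plasticity sketch: the "program" is nothing but the
# connection strengths, and those strengths change with every stimulus.
rng = np.random.default_rng(0)
n_inputs, n_outputs = 8, 4
weights = rng.normal(scale=0.1, size=(n_outputs, n_inputs))  # the "wiring"
learning_rate = 0.05

def present_stimulus(x):
    """Process a stimulus and let the connections adapt (Hebb's rule)."""
    global weights
    y = np.tanh(weights @ x)                   # response of the output neurones
    weights += learning_rate * np.outer(y, x)  # neurones that fire together wire together
    weights /= np.linalg.norm(weights, axis=1, keepdims=True)  # keep weights bounded
    return y

# Repeatedly seeing the same pattern reshapes the network itself:
pattern = rng.normal(size=n_inputs)
for _ in range(100):
    present_stimulus(pattern)
```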

There are a few interesting technologies:

- at layer 0, like graphene and molybdenum disulphide, which may replace silicon in chips, driving better performance, i.e. lower power requirements and faster processing;

- at layer 1, with new architectures for transistors and new forms of computation not based on transistors, like molecular computing and quantum computing;

- at layer 2, with in-chip optical communications to replace copper wires;

- at layer 3, with massive distributed processing that may derive from the Internet of Things and communications clouds.

However, although these are promising and will push Moore's law into the next two decades, none of them will help in matching Capablanca's approach of "just one, the best".

To achieve that we need to move to new computation paradigms, possibly neuromorphic networks, phase-change technologies and memristors. We need to move towards the unification of data, processing and communications. This is not the final recipe: we will still need CMOS-based processing in many areas (you are no match for your computer when it comes to doing calculations, sorting stuff or searching through huge lists).
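
As a purely illustrative sketch of what "unifying data and processing" can mean, consider an idealised memristor crossbar, where the stored conductances are at the same time the data and the circuit that computes with them (the numbers below are arbitrary, and real devices are far messier):

```python
import numpy as np

# Idealised memristor crossbar: each cell's conductance G[i, j] is both the
# stored datum (a matrix weight) and the element that performs computation.
# Applying input voltages to the rows yields, by Ohm's and Kirchhoff's laws,
# column currents I = G^T . V -- a matrix-vector product in a single step.

G = np.array([[1.0, 0.5, 0.2],   # conductances in arbitrary units (the stored matrix)
              [0.3, 0.8, 0.1],
              [0.6, 0.4, 0.9]])
V = np.array([0.2, 1.0, 0.5])    # input voltages applied to the rows

I = G.T @ V                      # currents summed along each column
print(I)                         # the result is read out, not computed step by step
```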

Of course, in our race to find a technical solution for computation that can match our brain we might stumble onto success. And that may open up a can of worms in terms of new ethical questions (can you throw away a sentient computer?) and of unfair competition for the human race.

Plenty of things to consider and discuss at the coming IEEE Technology Time Machine, to be held in San Jose, California, on October 21st and 22nd.

Author - Roberto Saracco
