SyNAPSE has moved up one notch

The newest IBM SyNAPSE chip gets a bit closer to a brain by embedding 5.4 billion transistors, representing an equivalent of 1 million neurones and 256 million synapses.

On August 7th, 2014, IBM introduced its new neurosynaptic chip, a significant step beyond the one launched in 2011. The chip contains 5.4 billion transistors, making it one of the largest chips ever designed, and emulates the workings of over 1 million neurones and 256 million synapses with 4,096 neurosynaptic cores.

Mind you, it is still 100,000 times smaller than a human brain (in terms of "neurones"), but it is nevertheless impressive progress over the 2011 chip. That one had an equivalent of 256 neurones, 262,144 synapses and 1 neurosynaptic core. In 3 years we have seen an improvement of roughly 4,000 times. Now, the interesting thing is that we are looking at an exponential process. Hence, don't fall into the trap of reasoning that, since the improvement over the last 3 years was 4,000-fold, a 100,000-fold improvement will take 75 years (25 times 3). Assuming the same pace of evolution, it will actually happen before the end of this decade!
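The arithmetic behind that claim can be sketched in a few lines (a back-of-the-envelope calculation, using only the 4,000×-in-3-years figure quoted above):

```python
import math

# Observed pace: roughly 4,000x improvement in 3 years (2011 -> 2014 chip)
growth_per_period = 4000
period_years = 3
target = 100_000  # remaining gap to a human brain, in "neurones"

# The linear-thinking trap: 100,000 / 4,000 = 25 periods of 3 years each
linear_years = (target / growth_per_period) * period_years
print(linear_years)  # 75.0

# Exponential reality: solve growth_per_period ** n = target for n periods
n_periods = math.log(target) / math.log(growth_per_period)
exp_years = n_periods * period_years
print(round(exp_years, 1))  # about 4.2 years, i.e. well before 2020
```

Compounding a 4,000-fold gain does not require 25 repetitions to reach 100,000-fold; a little under one and a half periods suffices, which is why the exponential estimate lands before the end of the decade.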

The chip is not based on a von Neumann architecture, a finite state machine: it does not have one place for processing and another for storage. The emulated neurones and synapses process and store information at the same time. If an input signal makes a neurone (a set of transistors) spike, that spike may (or may not) excite other neurones, creating further spikes, just as in our brain.
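That spike-propagation behaviour can be illustrated with a toy leaky integrate-and-fire network (a deliberately simplified sketch: the threshold, leak and weights below are made up for illustration and do not reflect IBM's actual neuron model):

```python
# Toy spiking network: each neurone integrates weighted input spikes,
# fires when its potential crosses a threshold, then resets.

THRESHOLD = 1.0
LEAK = 0.5  # fraction of potential a neurone retains each step

# Synapse weights: weights[i][j] = strength of connection i -> j
# (a simple chain 0 -> 1 -> 2 -> 3, purely illustrative)
weights = {0: {1: 1.2}, 1: {2: 1.2}, 2: {3: 1.2}, 3: {}}

potential = {n: 0.0 for n in weights}
spiking = {0}  # an external input signal makes neurone 0 spike
history = []

for step in range(4):
    # leak: each neurone's stored potential decays a little
    nxt = {n: LEAK * potential[n] for n in weights}
    # each spike excites downstream neurones through its synapses
    for src in spiking:
        for dst, w in weights[src].items():
            nxt[dst] += w
    # neurones whose potential crosses the threshold fire and reset
    spiking = {n for n, v in nxt.items() if v >= THRESHOLD}
    for n in spiking:
        nxt[n] = 0.0
    potential = nxt
    history.append(sorted(spiking))

print(history)  # the spike ripples through the chain: [[1], [2], [3], []]
```

Note that each neurone here both stores state (its membrane potential) and processes input (integrating incoming spikes) in the same place, which is the sense in which memory and computation are merged rather than separated as in a von Neumann machine.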

The researchers have created 4,096 identical blocks of 256 neurones each (do the math: just over 1 million), similar to our brain, where neurones are clustered in circuits of 100-200.

Another very interesting characteristic of the chip is that its power consumption is extremely low, in the mW order. In an experiment they had the new chip analysing video images to detect objects, and it did so in real time, whilst a high-end laptop programmed to recognise objects required 100 times longer (so it could not do it in real time). That "computation" chewed up just 63 mW, 10,000 times less than would be required by a battery of servers to perform the same recognition in real time.

One should note, however, that a fundamental difference from a brain remains: the animal brain is a plastic architecture; its neurones and synapses keep rearranging as a result of previous experience (processing and storage all together). This is not duplicated in the chip, which still works on a fixed interconnection matrix (although the way these interconnections are activated changes over time, depending on previous experience).

Take a look at the clip. It is a glimpse of the future of processing.

Author - Roberto Saracco

© 2010-2020 EIT Digital IVZW. All rights reserved.