Neuromorphic computer roadmap

Close-up of a field programmable analog array (FPAA) board, developed by professor Jennifer Hasler, that includes an integrated circuit with biologically based neurons. Credit: Georgia Tech

The debate has been going on for a long time: in order to create an intelligent computer, where "intelligent" means able to process information and react as a human brain does, can we rely on specific software that "simulates" the brain's processing, or do we need a physical underpinning that works like neurons and neuronal networks?

Scientists and researchers are divided and have been working on both approaches. It is also a philosophical debate: is the brain "a brain" because of its hardware, or is it a brain because of the processing paradigm it implements? If it is the former, then we need to mimic its hardware (e.g. with SyNAPSE chips from IBM); if it is the latter, it is just a matter of pinpointing the processing paradigm and using brute processing capacity to simulate it. In other words, the former camp tries to create a machine that works like a brain (neuromorphic computing), while the latter works to get the same output signals that a brain would generate given a certain set of inputs.

The jury is still out, at least that is my impression. And because of that we still see researchers entrenched in the two camps, coming up with solutions that are, step by step, improving computer processing and closing the gap with what goes on in the brain.

I ran across a news item from Georgia Tech reporting on the work being done there on neuromorphic computing, resulting in a roadmap that proposes using analog computation systems rather than digital ones. In a digital computer you have a machine that can in principle perform whatever task (processing) you can dream of, provided you run the appropriate software. On the other hand, an analog computer is a machine whose hardware has been wired to perform a very specific application, although it can have significant latitude in doing so (brain scientists don't use the word "latitude" but "plasticity" to say that the brain can adapt its wiring and workings to evolve over time). It might be a long stretch, but it reminds me on the one hand of Leibniz, who believed we could all reason, and come to a solution, by using a processing approach that can be defined, and on the other of Kant, who argued we can only reason following processes that have been "pre-cabled" in our brain, and that it is exactly because we all share the same cabling that we can understand each other and come to the same conclusions.

Interestingly, the Georgia Tech researchers start their roadmap by comparing the power used by a brain for processing, 20 W, with that required by a computer with a similar level of processing capability (several hundred thousand watts). Their paper, 29 pages long, outlines the steps that need to be taken to develop a hardware engine that can process information as a brain would.
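The scale of that gap is easy to put in numbers. A minimal sketch, assuming the 20 W figure for the brain cited above and pinning "several hundred thousand watts" at an illustrative 200 kW (the exact computer wattage is my assumption, not a figure from the paper):

```python
# Rough energy-efficiency comparison between the human brain and a
# digital computer of comparable processing capability, using the
# figures mentioned in the article.

BRAIN_POWER_W = 20.0          # ~20 W for the human brain
COMPUTER_POWER_W = 200_000.0  # illustrative value for "several hundred thousand watts"

# How many times more power the digital machine draws for similar processing
efficiency_gap = COMPUTER_POWER_W / BRAIN_POWER_W
print(f"Power gap: ~{efficiency_gap:,.0f}x")  # → Power gap: ~10,000x
```

Even with the computer wattage varied across the "several hundred thousand" range, the gap stays on the order of ten thousand, which is the motivation the roadmap gives for moving to analog hardware.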

The researchers have been working for several years on developing basic building blocks based on field programmable analog arrays (FPAAs), and now they are showing ways to assemble these building blocks so that they scale gracefully in terms of processing and power requirements.

They have also addressed the aspect of communications among the different modules, a crucial issue in developing brain-like structures.

If you are interested in exploring the roadmap and the steps they are proposing, read their paper. It is pretty technical, but you can get the gist even if you are not an expert in the field.

What I find interesting is that we have reached a point where researchers are setting an agenda for creating a computer that will eventually behave like a brain. There is no longer a discussion on whether such a computer can be made; rather, there is a detailed work plan to do it!

Author - Roberto Saracco

© 2010-2018 EIT Digital IVZW. All rights reserved. Legal notice