Brain prosthetic controller getting better via software

Brain-controlled prostheses sample a few hundred neurons to estimate motor commands that involve millions of neurons. So tiny sampling errors can reduce the precision and speed of thought-controlled keypads. A Stanford technique can analyze this sample and quickly make dozens of corrective adjustments to make thought control more precise. Credit: Jonathan Kao, Shenoy Lab

We control most of our movements, and all the voluntary ones, through neurones in our brain, and we are somewhat in control of those neurones through our "thinking". At least, this is what we perceive. You feel thirsty, you want to drink, so after considering the context you decide to pick up the glass and the bottle, pour water into the glass, and then move it to your lips and drink. All of this comes naturally to you, yet it is a very complex interplay of many neurones, mostly organised in specific networks.

Scientists have learned ways to detect which neurones are active in connection with a given micro-step in the overall process of moving from being thirsty to drinking.

In practice, by observing some of these neurones, a computer can infer what those neurones are about to cause (reaching for the glass, filling it, moving it to your lips). The computer can then instruct a robot to perform the actions you may not be able to do yourself because of a severed nerve or a palsy.
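To give a flavour of how such decoding can work, here is a minimal sketch of a linear decoder that maps simulated firing rates from a few hundred neurones onto an intended movement. This is not the Stanford team's actual method; every name, number and the simulated data below are assumptions made purely for illustration.

```python
# Illustrative sketch only: a toy linear decoder, not the Stanford method.
# All dimensions and the simulated data are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 200   # a "few hundred" recorded neurones (assumed)
n_samples = 1000  # time bins of recorded activity (assumed)

# True (unknown) mapping from firing rates to intended 2-D finger velocity.
true_map = rng.normal(size=(n_neurons, 2))

# Simulated spike counts per time bin and the movement they correspond to.
rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
velocity = rates @ true_map + rng.normal(scale=5.0, size=(n_samples, 2))

# "Training": fit a least-squares decoder from firing rates to velocity.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# "Use": given a new window of neural activity, estimate the intended
# movement that a robotic arm or on-screen cursor would then execute.
new_rates = rng.poisson(lam=5.0, size=(1, n_neurons)).astype(float)
predicted_velocity = new_rates @ decoder
print(predicted_velocity)
```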

The problem is that even for the simplest activity, and for each sub-activity, the number of neurones involved is in the millions, while the sensing covers only a few hundred of them. In many cases a neurone, or a cluster of neurones, fires both when you see a cat and when you want to drink. Sorting out what the firing really means is very difficult, and this is why making brain prosthetics is so hard; to the point that, in today's systems, it is your brain that learns what to do so that the computer can understand it correctly. We are not training the computer, we are training the brain.

Now a team of researchers at Stanford has created software that takes a step forward in "understanding" what the firing of hundreds of neurones really means.

In a series of experiments they placed a monkey, trained to hit a key when it lit up, in front of a keypad. The monkey had to detect the light and move its finger to press the key. The computer learned to interpret the signals generated by the monkey's monitored neurones and made adjustments until it reached a good interpretation. Then the monkey was again prompted to type on the lighted key, but this time the key was pressed by the computer based on its analysis of the monkey's neurones. It turned out that the monkey could correctly press 29 keys in 30 seconds on average, while the system could manage 26 keys in 30 seconds, about 90% effectiveness.
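As a quick back-of-the-envelope check of those figures (using only the averages quoted above), the system's typing rate relative to the monkey's own works out to roughly 90%:

```python
# Rough check of the reported throughput, using only the figures quoted above.
monkey_keys_per_30s = 29    # monkey pressing the keys itself (reported average)
decoder_keys_per_30s = 26   # keys pressed by the computer from neural signals

effectiveness = decoder_keys_per_30s / monkey_keys_per_30s
print(f"{effectiveness:.0%}")  # prints 90%
```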

The researchers' goal is to come up with a brain prosthesis that people with ALS can use as an easier way to communicate. Today they interact with a computer through a system that tracks their eye movements, but this takes longer and quickly leads to fatigue. A direct connection with the brain would be more seamless and more effective.

Author - Roberto Saracco
