Grabbing an object with your thoughts

New UH research has demonstrated that an amputee can grasp with a bionic hand, powered only by his thoughts. Credit: University of Houston

Brain-Computer Interfaces (BCIs) are making good progress, although we are still quite far from "reading the mind" (which is perhaps not a bad thing, given the scary implications).

News from the University of Houston reports a combination of mind reading and sophisticated software that allows an amputee to grasp objects with a prosthetic hand.

Our brain controls our movements by creating a "plan" of action, essentially imagining what should be done. This plan is then converted into a sequence of orders to our muscles to execute the action, and it precedes those execution orders by some 50 ms.

What scientists are doing is picking up this "plan" and handing over the execution orders to a computer that controls the prosthetic limb or hand.

So far the "plan" has been detected through probes, electrodes placed surgically in the brain (most of the times on the cortex). This is no good, because having surgery is not making the top of anybodies' pleasure and even worse these implants work for just a limited time, since the brain continuously reshape itself and the actual position where the "plan" is made might shift.

What the team at Houston managed to do was use external electrodes woven into a cap that you can place on your head, and then use signal processing to single out, from the variety of electrical activity, the signals that pinpoint the "plan". This is not easy at all and required quite sophisticated software.
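To give a feel for the first step of such processing, here is a minimal sketch of isolating a low-frequency band from noisy multi-channel EEG. The sampling rate and band edges are illustrative assumptions on my part, not the values used by the Houston team.

```python
# Minimal sketch: zero-phase band-pass filtering of multi-channel EEG.
# Sampling rate and pass band are assumed values for illustration only.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 256.0            # sampling rate in Hz (assumed)
LOW, HIGH = 0.1, 4.0  # pass band in Hz (assumed; movement-related activity is slow)

def bandpass(eeg, fs=FS, low=LOW, high=HIGH, order=4):
    """Apply a zero-phase Butterworth band-pass filter to each channel.

    eeg: array of shape (n_channels, n_samples)
    """
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=1)

# Example: 64 channels, 2 seconds of simulated raw EEG
raw = np.random.randn(64, int(2 * FS))
filtered = bandpass(raw)
```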

Such software can distinguish, as I said, the relevant signals from background noise and from signals related to other thoughts. The signals that need to be intercepted are basically:

  • the intention to pick up something and
  • what one wants to pick up

Clearly, the more possibilities there are (the more objects in the environment), the tougher the identification process. Notice that you may think about picking up a glass, but you might just as well feel thirsty while a bottle of Coke sits on the table. These are quite different ways of shaping the "plan", and they illustrate the complexity of identifying the object.
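To make this concrete, here is a hypothetical sketch of how an intent decoder might work: crude features are extracted from a short window of filtered EEG and fed to a simple linear classifier that decides between "no intent" and an intent directed at one of a few candidate objects. The class labels, window length, and features are my assumptions for illustration; the article does not describe the team's actual algorithm.

```python
# Hypothetical intent decoder: per-channel features from a short EEG
# window, classified with linear discriminant analysis. Labels, window
# length, and features are illustrative assumptions only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

LABELS = ["rest", "grasp_glass", "grasp_bottle"]  # hypothetical classes

def features(window):
    """Crude per-channel features: mean amplitude and overall slope.

    window: array of shape (n_channels, n_samples) of filtered EEG
    """
    mean = window.mean(axis=1)
    slope = window[:, -1] - window[:, 0]
    return np.concatenate([mean, slope])

# Training on labeled example windows (here: random stand-in data)
rng = np.random.default_rng(0)
X = np.array([features(rng.standard_normal((64, 512))) for _ in range(90)])
y = np.repeat(np.arange(3), 30)  # 30 windows per class

clf = LinearDiscriminantAnalysis().fit(X, y)

# At run time: decode the intent from a new window
new_window = rng.standard_normal((64, 512))
print(LABELS[clf.predict([features(new_window)])[0]])
```

The point of the sketch is the structure of the problem: every additional candidate object adds a class the decoder must separate, which is why a cluttered environment makes identification so much harder.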

We are still far from such seamless identification. In the demonstration you can see that the person has to look specifically at one object and really focus his thinking on it.

Even so, the feat accomplished in identifying the "plan" is amazing.

But that's not all. Once the intention to pick something up has been identified, the computer controlling the prosthetic hand needs to execute a number of quite complex actions, and these must take into account what is going to be picked up: its shape, its surface, its weight, and its "meaning". Picking up a solid object is quite different from picking up a glass containing a liquid that might spill depending on how you handle the glass.
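As an illustration of why object properties matter to the controller, here is a hypothetical sketch that maps a simple object description to grip parameters. The property names, thresholds, and force model are my assumptions, not the team's control scheme.

```python
# Hypothetical mapping from object properties to grip parameters.
# Property names, thresholds, and the force model are assumptions.
from dataclasses import dataclass

@dataclass
class GripPlan:
    aperture_mm: float   # how wide to open the hand
    force_n: float       # grip force to apply
    keep_upright: bool   # constrain wrist orientation (e.g. for liquids)

def plan_grip(width_mm: float, weight_kg: float,
              rigid: bool, contains_liquid: bool) -> GripPlan:
    # Open slightly wider than the object, squeeze harder for heavy items,
    # gently for deformable ones, and keep liquids level.
    force = 2.0 + 8.0 * weight_kg      # crude force model (assumed)
    if not rigid:
        force = min(force, 3.0)        # don't crush soft objects
    return GripPlan(aperture_mm=width_mm + 20.0,
                    force_n=force,
                    keep_upright=contains_liquid)

print(plan_grip(width_mm=70, weight_kg=0.3, rigid=False, contains_liquid=True))
# GripPlan(aperture_mm=90.0, force_n=3.0, keep_upright=True)
```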

The reality is that what seems to be an "easy" task, since we can do it without a second thought, is actually very difficult for a computer.

We are just at the very beginning of a long and winding road, but we have definitely started down the trail.

Author - Roberto Saracco
