When we consider a brain-to-computer interface we are not, in fact, looking at something comparable to a computer-to-computer interface. The data flowing across it do not carry a single, well-defined meaning: they are a mixture (a mess…?) of electrical signals hiding one, or several, meanings. The more selective we can be in detecting signals, and at the same time the more comprehensive in harvesting electrical activity (capturing activity from different parts of the brain, from different neural circuits), the better our chances of processing those data and extracting meaning. Notice also that the separation between motor signals (like raising an arm to pick up a banana) and the thought of “wanting a banana” is quite fuzzy: it has been demonstrated that the electrical activity resulting in the motion of the arm sometimes “precedes” the thought of wanting the banana. This may be quite surprising, particularly since we feel, and believe, that we “are” in control of our actions….
Extracting a (the) meaning from data harvested from the brain involves signal processing, and in this area we expect to see significant progress in the coming decade. One of the issues is eliminating the “noise”, that is, the mixture of meanings (some meaningless) that are not relevant. As an example (and just to make the point), when our brain “thinks” about a banana it is also working to keep us standing, it is receiving sounds from our ears and images from our eyes, and it is busy disregarding plenty of data coming from the environment that are not relevant (like the presence of our feet in our shoes). All of this generates electrical activity (and chemical activity as well, which in turn alters the electrical activity).
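To make the idea of “eliminating the noise” concrete, here is a minimal, purely illustrative sketch (not a real BCI pipeline): a slow, meaningful component is buried in broadband noise, and a simple moving-average low-pass filter recovers it. The sampling rate, frequencies, and noise level are all invented for the example.

```python
import math
import random

def moving_average(signal, window):
    """Smooth a signal with a simple moving-average low-pass filter."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

random.seed(0)
fs = 250                               # assumed sampling rate (Hz)
t = [i / fs for i in range(fs * 2)]    # two seconds of samples
# A slow 2 Hz "meaningful" component buried in broadband noise.
clean = [math.sin(2 * math.pi * 2 * x) for x in t]
noisy = [c + random.gauss(0, 0.8) for c in clean]

smoothed = moving_average(noisy, window=25)

# The filtered trace tracks the clean component far better than the raw one.
err_raw = sum((a - b) ** 2 for a, b in zip(noisy, clean)) / len(clean)
err_filt = sum((a - b) ** 2 for a, b in zip(smoothed, clean)) / len(clean)
print(err_filt < err_raw)  # → True
```

Real systems use far more sophisticated filtering and source-separation methods, but the principle is the same: attenuate the activity that is irrelevant to the meaning being sought.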
Notice that the present (amazing) results of paralysed patients being able to control a robot are a mixture of advanced signal processing and brain "computation". What happens is that the brain learns how to create signals that are then correctly interpreted. Those patients received sensor implants (to collect electrical signals at high resolution) and then had to be trained to think in such a way as to generate signals that could be captured and decoded by the computer controlling the robot. We are at a stage where we have found ways to train our brain, not to "read" our brain. In the coming decades we are likely to see significant progress requiring less and less training.
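The decoding side of this loop can be sketched with a toy example. The code below is a hypothetical illustration, not an actual clinical decoder: synthetic “firing rates” from four imagined electrodes are generated for two intents (“left” vs “right”), and a tiny perceptron learns to map them to the intended direction, standing in for the computer half of the brain-decoder co-adaptation described above.

```python
import random

random.seed(1)

def simulate_trial(intent):
    """Hypothetical firing rates from 4 electrodes; 'left' vs 'right'
    intent shifts the mean rates of different channels (synthetic data)."""
    base = [8.0, 8.0, 8.0, 8.0]
    if intent == "right":
        base[0] += 4.0
        base[1] += 2.0
    else:
        base[2] += 4.0
        base[3] += 2.0
    return [r + random.gauss(0, 1.0) for r in base]

# Train a tiny perceptron to map firing-rate vectors to intended direction.
weights = [0.0] * 4
bias = 0.0
for _ in range(200):
    intent = random.choice(["left", "right"])
    x = simulate_trial(intent)
    target = 1 if intent == "right" else -1
    pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else -1
    if pred != target:  # perceptron rule: update only on mistakes
        weights = [w + 0.1 * target * xi for w, xi in zip(weights, x)]
        bias += 0.1 * target

# Evaluate on fresh simulated trials.
correct = 0
for _ in range(100):
    intent = random.choice(["left", "right"])
    x = simulate_trial(intent)
    pred = "right" if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else "left"
    correct += (pred == intent)
print(correct, "of 100 correct")  # well above the 50% chance level
```

In practice the adaptation runs in both directions: the decoder updates its weights, and the patient's brain simultaneously learns to produce signal patterns the decoder separates more easily.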
Besides, in the coming years it may turn out that additional data can be captured by ambient sensors (including wearables). As an example, a video camera capturing a person's face can provide data containing information on that person's emotional state, and therefore on the chemistry at work in their body and brain.
By 2050 the number and pervasiveness of ambient sensors and wearables will be so great that I can easily imagine advanced brain-to-computer communication becoming possible. This will complement significant improvements in sensors, like the atom-scale brain sensors being studied at the University of Wisconsin.
Signal processing will also become sophisticated enough to capture hints of thoughts with very good accuracy. Artificial intelligence and big data are likely to play a significant role in improving that accuracy.