Whereas in the last century the quest for Artificial Intelligence centred on creating algorithms that could embed (and exhibit) intelligent behavior (specific programming languages, like Prolog, were even invented for the purpose), in the first part of this century attention has shifted to the massive use of data and processing power, under the belief that brute force can win the day in spite of our limited understanding of the mechanical underpinnings of intelligence.
In the last two or three years we have seen several technologies, devices and data sources come together, and in combination they are being used to deliver intelligence in a variety of fields.
Smartphones are becoming an interface to intelligence in the cyberspace. We are getting used to voicing our queries to our smartphones, and getting smart answers is now the norm. We are no longer surprised. By packing huge amounts of data and processing capacity, and by being (basically) always connected to the cyberspace, smartphones are both an ideal interface and great delocalized meaning generators.
The cell phone “knows” where we are, likely where we are going, who is with us, what we like and much more. Cell phones are, in other words, ideally placed to provide smart answers to our needs. They will look more and more intelligent, and the evolution in this respect will be continuous but so smooth that we won’t notice it and will keep taking for granted what we use. It will feel like a natural intelligence.
Cell phones also represent a (usually) dense multitude of meaning points, with meaning emerging from local and remote software analyses of local and remote data. These meaning points will weave a tapestry of knowledge and processing that gives rise to ever more complex meanings. To intelligence.
If cell phones are the interface that turns cyberspace intelligence into an everyday, normal experience, the crunching of data will be enhanced by chips mimicking neuronal circuit architectures, like SyNAPSE (IBM), which will probably find their way into the cell phone, and by novel computation architectures like quantum computing (D-Wave), which will remain centralized, providing specific crunching capabilities in the Cloud.
The software is already benefitting from new approaches, such as deep neural networks and deep learning, and more, I bet, will come.
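To give a flavour of what learning from data means in these approaches, here is a minimal sketch (not any specific product's implementation) of a single artificial neuron, a perceptron, that learns the logical AND function from examples instead of being explicitly programmed; deep neural networks stack many layers of units like this one. All names and parameters below are illustrative.

```python
def predict(weights, bias, x):
    """Fire (1) if the weighted sum of the inputs exceeds the threshold."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=10, lr=1):
    """Classic perceptron learning rule: nudge the weights after each error."""
    weights, bias = [0, 0], 0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn AND purely from its four input/output examples.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(AND)
print([predict(weights, bias, x) for x, _ in AND])  # -> [0, 0, 0, 1]
```

The point is not the toy task but the principle: behavior emerges from data and a simple update rule, rather than from hand-written logic, which is exactly the brute-force bet described above.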
The evolution and dissemination of sensors will increase the exposure of software to the world of atoms, letting more and more experience be transformed into knowledge, common sense and meaning.
Most of this knowledge will be shared and will become available through Clouds (and Fogs).
My take is that we are going to see a tremendous acceleration from artificial to natural intelligence, resulting from the interplay of these enablers.