Artificial Intelligence is already embedded in many everyday experiences, to the point that we seldom take notice. It has become “natural”, and it will only become more so in the coming years.
Take Roomba, the vacuum cleaner by iRobot. It is a very smart vacuum cleaner, and it gets smarter with every new release. It embeds technology that lets it build a model of its environment, something we do every day and that is fundamental to being at ease in what we do. Just think of being in a new city, or even a new building, and feeling lost because you do not have the “lay of the land”.
Interestingly, Roomba’s technologies were already available some 15 years ago, but they were not affordable, carrying a price tag on the order of several hundred thousand dollars. Now the whole package sits on department store shelves for $300.
Later versions of Roomba create a 3D map of their environment (and detect when they are moved into a new one, immediately adapting to the new conditions) using VSLAM, Visual Simultaneous Localization and Mapping, a technology that has become affordable only in the last two years. More than that: Roomba learns the habits of the home’s other inhabitants (us) and plans its moves accordingly, sweeping the kitchen floor once we are done with dinner and saving the living room for mornings (but not on weekends) so that it doesn’t bother us.
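The “model of the environment” idea at the heart of SLAM can be sketched very simply. The toy code below (a hypothetical illustration, not Roomba’s actual algorithm) keeps a 2D occupancy grid and marks cells as free or occupied from simulated sensor readings; real VSLAM estimates the robot’s pose and the map jointly from camera features, which is far harder.

```python
# Toy sketch of the mapping half of SLAM: a robot fills in a 2D
# occupancy grid from (hypothetical) sensor readings. "?" = unknown,
# "." = known free space, "#" = detected obstacle.

GRID = 10  # a 10x10-cell toy room


def empty_map():
    """Start with a fully unknown map."""
    return [["?"] * GRID for _ in range(GRID)]


def observe(grid, robot_xy, hit_xy):
    """Mark the robot's own cell as free and the sensed obstacle as occupied."""
    rx, ry = robot_xy
    hx, hy = hit_xy
    grid[ry][rx] = "."   # the robot is standing here, so it must be free
    grid[hy][hx] = "#"   # the sensor reported an obstacle here


room = empty_map()
observe(room, (2, 2), (2, 5))   # wall segment sensed from position (2, 2)
observe(room, (3, 2), (3, 5))   # second reading as the robot moves along
print("\n".join("".join(row) for row in room))
```

Each pass over the floor fills in more of the grid, which is why a robot that is carried into an unfamiliar room must start mapping from scratch.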
Does all this make Roomba intelligent? For sure it makes it smart.
And curiously, as it gets smarter we pay less and less attention to it. Its “artificial” intelligence is becoming natural to us.
OK, but Roomba is just a vacuum cleaner, and we can’t compare a vacuum cleaner with our intelligence. Intelligence is much more than complex mechanical behavior, and even the mapping and understanding of an environment fall short of what we would consider intelligent.
Intelligence is tied to creativity, like painting: there you need mechanical ability, but you also need the idea of what to paint, and of how to paint it so as to create emotion… That surely requires intelligence.
It turns out that computers (and software) are becoming pretty good at painting; so good, in fact, that it is difficult to tell a computer’s painting apart from an artist’s.
A computer can learn a “style” by looking at paintings and can replicate that style in brand new paintings. On the web you can be challenged to distinguish a computer-produced painting from a painter-produced one.
Cartoons today are produced by computers. They render images in a way that conveys meaning, and some of them are just beautiful; there is artistry embedded in them.
It has become almost impossible to distinguish a cartoon drawn by a professional cartoonist from one drawn by a computer. Watching computer-drawn cartoons has become an everyday experience for our kids and grandchildren.
Paintings, however, can be imitated. There are a few characteristics that can be extracted almost mechanically (the brush strokes, the color mix…) and then replicated.
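The idea of “extracting a style almost mechanically” can be illustrated with a deliberately crude stand-in: treat the style as simple per-channel color statistics and impose them on another image. This is a toy sketch with made-up two-pixel “images”; neural style transfer works on much richer feature correlations, but the spirit is the same: measurable regularities, extracted and then replicated.

```python
# Toy "style extraction": characterize a painting by per-channel color
# statistics (mean and spread) and re-map another image's pixels so its
# statistics match. A hypothetical illustration, not a real algorithm.
import statistics


def color_stats(pixels):
    """Per-channel (mean, population stdev) of a list of (r, g, b) tuples."""
    channels = list(zip(*pixels))
    return [(statistics.mean(c), statistics.pstdev(c)) for c in channels]


def transfer(content, style):
    """Shift and scale content pixels so their statistics match the style's."""
    cs, ss = color_stats(content), color_stats(style)
    out = []
    for px in content:
        out.append(tuple(
            (v - c_mean) / (c_sd or 1) * s_sd + s_mean
            for v, (c_mean, c_sd), (s_mean, s_sd) in zip(px, cs, ss)))
    return out


# hypothetical 2-pixel "images": a dark content image, a bright style image
content = [(10, 20, 30), (30, 40, 50)]
style = [(100, 100, 100), (200, 200, 200)]
result = transfer(content, style)   # content relit with the style's palette
```

After the transfer, the result’s color statistics match the style image’s while the content’s internal structure (which pixel is darker than which) is preserved.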
Creating a human face is different. You need to understand what a human face is, what traits let us immediately tell one face from another, what makes a face look old or young…
All of the faces in the image have been created by a computer. The intensity of the gaze and the facial expressions convey a clearly human feeling. The computer had to know what we pick up as telltale signs. The variety of faces shows that this is not mere imitation; it comes with an understanding of human characteristics.
Here again we have everyday examples in the movies. More and more “special effects” are being created by computers, including those involving human characters. This has become so pervasive that the US stuntmen’s association has complained about the loss of jobs.
We have reached the point where it has become impossible to tell a real human from an artificial one in a movie. Crowds in movies are often created by computers, and each person in the crowd is indistinguishable from a real person.
The gait, the facial expressions, the reactions to what is going on, all make sense… to our senses, and we have spent hundreds of thousands of years of evolution finely tuning our ability to read faces and emotions!
The above examples do not involve any interaction. We humans are watching something that has been polished to the point of tricking us into perceiving reality where there is only artifice.
What about AlphaGo and its successful challenge to the Go world champion? That is a different story. Two minds, plotting and challenging one another, were at work. Go is not like chess: the range of possibilities far exceeds any processing power.
One cannot just evaluate possible moves, one has to “see” what makes sense.
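A little arithmetic shows why brute-force evaluation of moves is hopeless in Go. Using the commonly cited rough averages (about 250 legal moves per position and games of about 150 moves, versus roughly 35 and 80 for chess), the size of the game tree can be compared on a log scale:

```python
# Back-of-the-envelope arithmetic: why Go cannot be searched exhaustively.
# Figures are commonly cited rough averages, not exact values.
import math


def tree_size_log10(branching, depth):
    """log10 of branching**depth, i.e. the number of digits of the tree size."""
    return depth * math.log10(branching)


go = tree_size_log10(250, 150)    # roughly a 360-digit number of lines of play
chess = tree_size_log10(35, 80)   # roughly a 124-digit number

print(f"Go game tree: ~10^{go:.0f} positions to explore")
print(f"Chess game tree: ~10^{chess:.0f} positions to explore")
```

For comparison, the number of atoms in the observable universe is usually estimated at around 10^80. No conceivable processing power covers a 10^360-sized tree, which is why AlphaGo had to learn to “see” promising moves rather than enumerate them.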
And AlphaGo, besides winning the challenge, showed a capacity for vision that surprised the world champion and the experts. Moves like W107 in the first game or B37 in the second came literally out of the blue.
AlphaGo, according to qualified observers, “invented” a move that human experts would not have made and that proved to be a smart one. It was not copied from any game AlphaGo had watched: a brand new move, reflecting speculation on its part about what to do.
It showed creativity, strategy, daring and the willingness to challenge the adversary. All of them very much human traits.
It created a “wow”, but only because it was the first time. Give it a few more years, and interaction with robots having those characteristics will be taken for granted and no longer noticed. Normal intelligence.
Intelligence is also the ability to understand. Understanding is much more than listening to words making up a sentence. Understanding requires placing a local context (like the utterance of a sentence) into a global context.
A demonstration of the degree of understanding reached by a computer was given by Watson, the IBM computer that played Jeopardy! against humans, and won (notice that for a computer the difficult part is understanding the question, NOT finding the answer). Winning, however, was not the real point. The real demonstration was of the capability to understand questions phrased in natural language, where the associated ambiguities have to be resolved through an understanding of the context and the application of common sense.
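The role context plays in resolving ambiguity can be shown with a toy sketch: pick the sense of an ambiguous word by measuring overlap with hypothetical cue-word lists. The sense inventory below is invented for illustration; systems like Watson use statistical models trained over huge corpora, but the principle, letting surrounding words select the meaning, is the same.

```python
# Toy context-based disambiguation: choose the sense of "bank" whose
# (hypothetical) cue words overlap most with the surrounding sentence.

SENSES = {  # made-up sense inventory for the word "bank"
    "river": {"water", "shore", "fishing", "mud"},
    "finance": {"money", "loan", "account", "teller"},
}


def disambiguate(context_words):
    """Return the sense whose cue words overlap most with the context."""
    scores = {sense: len(cues & set(context_words))
              for sense, cues in SENSES.items()}
    return max(scores, key=scores.get)


print(disambiguate(["she", "opened", "an", "account", "at", "the", "bank"]))
# "finance": the word "account" overlaps with the money-related cues
```

With no contextual cues at all, the toy scorer has nothing to go on, which mirrors why sentences stripped of context defeat even much more sophisticated systems.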
The latter has long been a stumbling block for computers, and it is often what leads people to tag computers as “stupid” (in a way, the opposite of “intelligent”). It doesn’t take rocket science to know that you cannot push a rope, but that you can pull a box with one. And yet it has always been extremely difficult for a computer to grasp this (unless, of course, somebody programmed that knowledge into it).
Notable among the attempts to overcome it is Marvin Minsky’s work on “teaching common sense to computers…”.