Look at that dog. It is wagging its tail; it looks like it has seen its owner approaching. Looking at the scene, recognising the wagging dog, and deducing why it looks happy is not rocket science. And yet it is a most difficult task for a computer. What is obvious to us is usually very hard (it used to be impossible) for a computer.
Now a program developed by researchers at the University of Washington and the Allen Institute for Artificial Intelligence is making significant steps towards giving a computer the capability to "understand" an image.
This feat is performed through a mixture of hard number crunching and artificial intelligence techniques.
The researchers have explored images in millions of books made available by Google and on billions of web pages, associating the text with the images and then using artificial intelligence techniques to extract meaning from the pictures and to reinforce that meaning with the text accompanying them (in proximity, not necessarily in a caption).
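To give a feel for the idea, here is a minimal toy sketch of associating a term with images through the text found near them. This is not the researchers' actual system; the corpus, image IDs, and scoring by simple co-occurrence counts are all illustrative assumptions.

```python
from collections import Counter

# Hypothetical corpus: each entry pairs an image ID with its nearby text.
documents = [
    ("img_001", "a man walking his dog in the park"),
    ("img_002", "children walking to school on a sunny morning"),
    ("img_003", "a cat sleeping on the sofa"),
    ("img_004", "hikers walking along a mountain trail"),
]

def associate(term, docs):
    """Score how strongly `term` is associated with each image,
    by counting occurrences of the term in the text near it."""
    counts = Counter()
    for image_id, text in docs:
        counts[image_id] += text.lower().split().count(term.lower())
    return counts

scores = associate("walking", documents)
# Images whose surrounding text mentions "walking" score higher,
# so the term becomes linked to those pictures.
ranked = scores.most_common()
```

A real system would use far richer signals (visual features, parsing of the surrounding text, statistical weighting over billions of pages), but the core intuition is the same: text that repeatedly appears near certain images lends those images a meaning.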
You can see an example of the kind of concept association made between the term "walking" and a given picture.
It might seem easy, but it is actually extremely complex, and it amazes me that we have been able to reach this point. It really shows that the line between humans and computers is getting fuzzier, and the question is not if computers will ever get smarter than humans but when it will happen...