AlphaGo, a Google program (a "software robot", as I would call it), has defeated the European Go champion 5-0.
The game of Go has long been considered a holy grail for artificial intelligence. It is far more complex than chess, although at first sight it might look simpler. The number of possible positions on the standard 19x19 Go board exceeds a googol (a 1 followed by 100 zeros, far more than the number of atoms in the observable universe).
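The arithmetic behind that claim is easy to check. A crude upper bound counts every point on the board as empty, black, or white (ignoring legality), and even that bound dwarfs a googol:

```python
# Rough upper bound on Go positions: each of the 361 points on a
# 19x19 board is empty, black, or white (ignoring legality rules).
upper_bound = 3 ** 361
googol = 10 ** 100

print(upper_bound > googol)   # True
print(len(str(upper_bound)))  # 173 decimal digits, vs 101 for a googol
```

The count of *legal* positions is smaller than 3^361, but it is still known to exceed 10^170, comfortably beyond a googol.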
What is very interesting is how AlphaGo became stronger than the current European champion: AlphaGo is equipped with two deep neural networks, each containing several million artificial neurons (compare that with the roughly hundred billion neurons in the brain of Fan Hui, the European champion), and these networks were first trained by observing Go players, digesting some 30 million moves from expert games.
At that point AlphaGo started to play Go against itself, and as it played it learned, actually coming up with new strategies that it evaluated and later deployed in its match against the European champion.
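To give a feel for what "learning by playing against itself" means, here is a deliberately tiny sketch. It is emphatically not AlphaGo's algorithm (which combines deep policy and value networks with Monte Carlo tree search); it is a toy tabular agent that learns the simple game of Nim (one pile, take 1-3 stones, whoever takes the last stone wins) purely through self-play, with no expert examples at all:

```python
import random

random.seed(0)
N = 21                 # starting pile size
V = [0.5] * (N + 1)    # V[s]: estimated chance that the player to move wins
V[0] = 0.0             # no stones left: the player to move has already lost

def moves(s):
    return [m for m in (1, 2, 3) if m <= s]

def best_move(s):
    # Pick the move that leaves the opponent in the worst position.
    return min(moves(s), key=lambda m: V[s - m])

for episode in range(20000):
    s = N
    while s > 0:
        # Mostly play greedily, sometimes explore a random move.
        m = random.choice(moves(s)) if random.random() < 0.1 else best_move(s)
        # Self-play value update: my winning chance mirrors the
        # opponent's chance in the position I hand over.
        V[s] += 0.1 * ((1.0 - V[s - m]) - V[s])
        s -= m

# The agent rediscovers the known strategy: always leave the
# opponent a multiple of 4 stones.
print(best_move(5))   # 1
print(best_move(6))   # 2
```

Nothing in the code encodes the multiple-of-4 rule; the agent discovers it by evaluating the positions its own play produces, which is the same spirit in which AlphaGo refined its strategies beyond its training data.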
The next step will be to challenge the world champion.
The goal of the Google teams working on deep neural networks goes beyond winning Go games. They are trying to apply this learning strategy to other fields, such as improving diagnostics in health care, avionics, and manufacturing.
What is really impressive is AlphaGo's capability to come up with something new, something not observable in the moves of the expert players, and which in some cases proved to be better. If this were the result of a human, we wouldn't hesitate to call it "creativity". Since it is the result of a machine we tend to be more careful, but whatever you want to call it... it remains amazing.