I am no expert in the game of Go; I only have a vague idea of the game and how to play it. I have read that it is one of the most difficult and creative games humankind has invented, with a space of possibilities (about 2.08×10^170 legal positions) that is simply beyond a computer's brute-force processing power. By comparison, a game of chess is easy (if you want to compare the complexity of different games, look at Game Complexity on Wikipedia).
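To get a feel for those magnitudes, here is a rough back-of-the-envelope sketch in Python. The figures are the commonly cited estimates (not mine): roughly 2.08×10^170 legal Go positions, on the order of 10^47 chess positions, and around 10^80 atoms in the observable universe.

```python
import math

# Commonly cited order-of-magnitude estimates (assumptions, not exact counts):
go_positions = 2.08e170      # legal Go positions
chess_positions = 1e47       # rough upper estimate for chess
atoms_in_universe = 1e80     # observable universe, rough estimate

# Go's state space exceeds chess's by more than 120 orders of magnitude,
# and exceeds the number of atoms in the universe by about 90.
print(math.log10(go_positions / chess_positions))    # ≈ 123.3
print(math.log10(go_positions / atoms_in_universe))  # ≈ 90.3
```

Whatever the exact figures, the gap is so vast that no amount of raw computing power could enumerate Go positions the way chess engines once leaned on search alone.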
Newspapers are all looking with interest at what is going on in Seoul, where the Go world champion is being challenged by AlphaGo. No point in discussing that here.
The reason I am writing this post is a sentence voiced by a member of the Google team that programmed AlphaGo, during the press conference following the second game (won by AlphaGo):
"It (AlphaGo) seems to know what is happening"
When I heard it, I confess, I got the creeps. Is AlphaGo a (the first) sentient machine?
Of course there can be many reasons to say it is not... And yet that person, maybe involuntarily, gave the impression he felt AlphaGo knew what it was doing and was aware of it.
Other commentators during the game pointed with surprise at some moves that were unexpected, even considering a few of them "pointless" if not outright mistakes, and then had to reconsider their opinion as they saw the game evolve. AlphaGo had a strategy that was not a copy of someone else's, something it could have learnt by studying one of the 30-million-plus games it observed. It looked like its own invention.
You may still feel this remains a long shot from being sentient, but alas, we have trouble defining what, exactly, we mean by "sentient".
Will autonomous vehicles be "sentient", "aware", "ethical"? Can we hold them responsible for what they end up doing? Some legislators seem to be moving in this direction.
These are new questions we have never been confronted with before. And the possibility of sentient machines will only increase their complexity.