Ever thought about how your computer feels when you win a chess game, or when it beats you? Probably not. At least I never did, because of course my computer is completely oblivious and indifferent to the bits (electrons) flowing through it.
And yet, what is it that makes me feel happy when I beat a computer at chess, while the computer remains completely indifferent to the outcome?
This question has been asked more and more often by techno-philosophers as machines become more and more powerful and, in many ways, react to their environment as we do. None, so far, has passed the Turing test, but many people, me included, feel it is just a matter of time.
I read an interesting interview with Christof Koch, chief scientist at the Allen Institute for Brain Science in Seattle, about whether and when a machine could gain consciousness. Of course, before attempting to tackle the question one has to agree on what consciousness means, and that is not easy. Debates have been raging for millennia with no accepted answer.
The Turing test (so far not passed) is a way to show that a machine has become indistinguishable from a person, which may prove that it has an intelligence equivalent to a person's and a behaviour that cannot be told apart from a person's. Hence such a machine would declare itself happy or sad when confronted with a situation where a person would feel happy or sad. But is the machine actually feeling sad (or happy)? That is a completely different story.
A new theory claims that consciousness can be seen as integrated information. The theory has been developed by Giulio Tononi at the University of Wisconsin.
Integrated Information (represented by the symbol Φ) is defined as the amount of information generated by a complex of elements over and above the information generated by its parts taken separately. According to this definition, consciousness is itself information; in other words, I would call it "emergent" information.
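To make the "whole exceeds its parts" idea concrete, here is a toy sketch in Python. It does not compute Tononi's actual Φ (which involves partitioning a system and its cause-effect structure); it computes a much simpler stand-in, the multi-information, which is zero when the units of a system are independent and positive when the whole carries information beyond what the parts carry on their own. The sample data and function names are my own illustration, not anything from the theory's formal machinery.

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (in bits) of the empirical distribution of samples."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def integration(samples):
    """Multi-information: sum of the parts' entropies minus the whole's entropy.
    Zero if the parts are independent; positive when the whole generates
    information above and beyond its parts (a crude stand-in for Φ)."""
    h_whole = entropy(samples)
    h_parts = sum(entropy([s[i] for s in samples]) for i in range(len(samples[0])))
    return h_parts - h_whole

# Toy system of two binary units, observed four times.
correlated = [(0, 0), (1, 1), (0, 0), (1, 1)]    # units always agree
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]   # units unrelated

print(integration(correlated))   # 1.0 bit: the whole exceeds its parts
print(integration(independent))  # 0.0: no integrated information
```

In the correlated system each unit alone carries 1 bit, yet knowing one tells you the other, so the whole holds 1 bit less than the parts suggest: that surplus is the "integration". The real Φ is far subtler, but the emergent-information intuition is the same.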
My claim would be that this emergent information cannot result from processing existing information alone; it is actually the ensemble of the existing information and its processing. Hence, and this is also a point Christof makes in his interview, simulating the processes of consciousness does not result in consciousness. In other words, unless you have a brain(-like) structure, you won't have consciousness.
In this sense (assuming the theory is right) I should not be concerned about my Mac becoming sad, no matter how much more powerful its chips become. However, this does not exclude that in the (near) future we might have computers powered by neuromorphic chips that will indeed experience consciousness...
A bit scary, though!