To close this brief set of speculations on Artificial Intelligence in 2050, let's consider the implications of a world populated by intelligent (autonomous) entities. In the previous post I stressed that there is no consensus on whether a machine passing the Turing test (behaving in a way that is indistinguishable from a human) would have "feelings". What about "free will"? What about "character", or "mood"? Would these machines be sociable? Would they compete with one another or cooperate?
The point is really tricky. You cannot say: "No problem, we are going to program them so that they will be 'good'".
The essence of creating artificial intelligence, as with humans, is to have an entity with its own open space of decision. Indeed, the most recent approaches to making machines learn and get smarter let them work out the "best" strategy by themselves, and keep improving on it. The problem, of course, is that "better" can mean different things to different people, communities, cultures, societies and... machines!
In this respect it is very interesting to look at studies like the one recently published by Google researchers, who investigated the behaviour of "intelligent agents" when they confront one another. It turns out that depending on the resources available to them (that is, the power of their "brain", the processing power at their disposal) they may lean towards a cooperative or a competitive strategy. Notably, if one agent has more processing power than the others it tends to drift into a competitive attitude, whilst agents with equivalent processing power tend to lean towards a cooperative attitude. It looks like the "law of the strongest" is at play in the machine kingdom, just as it has been throughout the history of human civilisation.
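The dynamic described above can be illustrated with a toy simulation. This is only a minimal sketch, not the actual setup from the Google study: the payoff numbers are invented, and "processing power" is modelled, as a simplifying assumption, as how precisely an agent can exploit its learned value estimates (a higher-capacity agent explores less at random).

```python
import random

# Hypothetical payoff matrix for a gathering-style social dilemma
# (invented numbers, NOT taken from the Google/DeepMind study):
# each agent either "cooperates" (gathers peacefully) or "competes"
# (attacks the other agent). Competing strictly dominates.
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "compete"):   (0, 4),
    ("compete",   "cooperate"): (4, 0),
    ("compete",   "compete"):   (1, 1),
}
ACTIONS = ["cooperate", "compete"]

class Agent:
    """Simple bandit-style learner. `capacity` is a stand-in for
    processing power: more capacity means less random exploration
    (a modelling assumption made for this sketch)."""
    def __init__(self, capacity, lr=0.1):
        self.q = {a: 0.0 for a in ACTIONS}  # running value estimates
        self.capacity = capacity
        self.lr = lr

    def act(self):
        epsilon = 1.0 / (1.0 + self.capacity)  # exploration noise
        if random.random() < epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[a])

    def learn(self, action, reward):
        # Move the estimate for the chosen action towards the reward.
        self.q[action] += self.lr * (reward - self.q[action])

def play(agent_a, agent_b, rounds=5000):
    for _ in range(rounds):
        a, b = agent_a.act(), agent_b.act()
        ra, rb = PAYOFF[(a, b)]
        agent_a.learn(a, ra)
        agent_b.learn(b, rb)

random.seed(0)
strong, weak = Agent(capacity=20), Agent(capacity=1)
play(strong, weak)
print("strong:", strong.q)
print("weak:  ", weak.q)
```

Because competing dominates in this invented payoff matrix, the higher-capacity agent, which exploits its estimates more sharply, settles into the competitive strategy faster; the sketch only illustrates the mechanism, not the study's actual results.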
Over the millennia, with difficulty and several relapses, humankind has built a social ethic and has strived to cooperate rather than compete (the widespread occurrence of wars and fights is there to remind us this is not entirely so...). In business too, particularly in today's world, competition is the name of the game.
This is a big issue. True, one can assume that the open space of decision for AI machines will be bounded, but those boundaries will grow broader and fuzzier over time. Besides, who is going to set those boundaries? Who is programming the programmer?
Again, we are in uncharted territory. This points to the need for broad cooperation among scientists, researchers, sociologists, politicians and regulators. I have a bad feeling about just letting the market evolve and hoping for the best.