Looking ahead to 2050 - Symbiotic Autonomous Systems IV - Technologies

Asimov's three laws of robotics put a boundary on what robots can do and would ensure that robots do no harm to humans. However, there is no way to ensure they will be applied (think of military drones...). Credit: Jantoo Cartoons


As engineers we are “control freaks”. A good portion of our design goal is to achieve control over the machine. With autonomous systems engineers are still designing with control in mind, even though this control is expressed in terms of goals to be pursued and boundaries within which the behavior to achieve those goals is allowed.
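To make this distinction concrete, here is a minimal Python sketch (all names and parameters are illustrative, not from the original) of designing in terms of a goal and a boundary rather than a fixed sequence of commands: the designer sets the target and the maximum allowed step, and the system chooses its own actions within that boundary.

```python
# Toy sketch of goal-plus-boundary design: the designer does not script
# each action; it sets a goal and a limit that every action must respect.

def bounded_controller(state, goal, max_step):
    """Move toward the goal, but never exceed the allowed step size."""
    error = goal - state
    # The designer controls the boundary (max_step), not the action itself.
    step = max(-max_step, min(max_step, error))
    return state + step

state = 0.0
for _ in range(10):
    state = bounded_controller(state, goal=5.0, max_step=1.0)
print(state)  # the system reaches the goal one bounded step at a time
```

The controller is free in how it gets to the goal; the designer's control survives only as the boundary on each step.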

In biological systems the idea of control is quite different. A biological system has to operate within an equilibrium zone: its metabolism is what controls its operation at the chemical and physical level. Break the metabolic equilibrium and you die. The range of behaviours within this equilibrium is then bounded by the characteristics of your body: it doesn’t matter how fast you flap your arms, you will not fly.

In our case, however, we eventually got to fly, not by flapping our arms but by building airplanes. Could an autonomous system that in principle cannot fly eventually find a workaround and … fly?

Even though I am oversimplifying, this point is a crucial one, and it is being discussed by scientists in these years. In other words: as we create more and more flexible autonomous systems, how can we be sure that their autonomy will not eventually lead them to step outside the boundaries we have designed? If they are really autonomous they might be able to gain insight into their limitations and find workarounds, just as we did.

Notice that the issue is not the possibility of developing an autonomous system that may cause harm; we have plenty of examples, from military drones killing people to self-driving cars or autopilots on planes that fail to respond in the right way. It is the possibility of an autonomous system pursuing its goal in a harmful way or, even worse, changing its goals in unexpected and unplanned ways that would result in harm.

Scientists around the world have expressed concerns about the intrinsic danger of artificial intelligence, a danger very much related to the aspect of control in an autonomous system.

The situation gets even more complicated when we look at the interaction among several autonomous systems. To clarify the problem, think about yourself. You are a law-abiding citizen (most of the time…), you are kind to other people, you love animals… and then you swat a mosquito that bit you. The reasoning is: the mosquito bit me, so it has to die! (I usually attempt a pre-emptive strike, trying to kill it before I get bitten.) I am giving this trivial example to show that even we, as autonomous systems, do things that can be harmful to other “systems”. More than that: there are “unexpected” situations where we are not sure of our reaction, precisely because they are unexpected, and those reactions may end up being harmful. Or we might be on edge, under stress, and our reactions can overstep the boundaries of our normal reactions.
This is a fundamental problem in autonomous systems. Once you provide autonomy you (partly) lose control, and in general the more autonomous a system is the less control can be imposed on it.


When we come to symbiotic autonomous systems the issue becomes even more complex because of the “symbiotic” behaviour. Each system is in close relation with the other, and the reaction of one can trigger an amplification in the other that in turn leads to amplified reactions in a potentially dangerous loop. The "bio" part (us) is more unpredictable in its behaviour, and this brings unpredictability into the system as a whole.
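The dangerous loop described above can be illustrated with a deliberately simple Python sketch (the gains and numbers are purely illustrative assumptions): two coupled systems each react in proportion to the other, and when the combined loop gain exceeds 1, a tiny perturbation amplifies instead of dying out.

```python
# Toy sketch of a symbiotic feedback loop: system A reacts to system B
# and vice versa. If the product of the two reaction gains is below 1,
# disturbances fade; above 1, they grow with every exchange.

def simulate(gain_a, gain_b, steps=20, perturbation=0.01):
    a, b = perturbation, 0.0
    for _ in range(steps):
        a, b = gain_a * b, gain_b * a  # each system reacts to the other
    return abs(a)

print(simulate(0.5, 0.9))  # loop gain 0.45 < 1: the perturbation dies out
print(simulate(1.2, 1.1))  # loop gain 1.32 > 1: the perturbation amplifies
```

The unpredictable "bio" half of a symbiotic pair is, in these terms, a gain we can neither fix nor measure precisely, which is why the loop as a whole is hard to guarantee safe.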

Author - Roberto Saracco

© 2010-2018 EIT Digital IVZW. All rights reserved. Legal notice