Looking ahead to 2050 - Symbiotic Autonomous Systems X - A New World: Ethical Challenges

Ethical issues are not clear-cut: there are greys along with whites and blacks, and not everybody agrees on what is right and what is wrong. This will be true in the future as well, with the additional complexity that in some areas we have not come to an answer yet, since we basically lack the capability to formulate the question. Image credit: Fraud Magazine

The speed of technology adoption has increased significantly. Credit: Black Rock

Technology evolution at the current rate will lead, by 2050, to systems that are a billion times more performant than the ones we have today. Even applying a logarithmic scale, in sync with our perception, we will see nine generations of improvement, and this leads us down several unknown and unexplored paths with completely new ethical challenges. Credit: The emerging future

A few ethical challenges are starting to confront the world as a result of technology evolution. These will become crucial in the coming decades. Credit: John J. Reilly Center

The advance of technology is bringing new ethical issues to the fore. This is nothing new, in a way: ethical issues have flanked technology evolution through the centuries. However, technology evolution, and its adoption, is now much faster than it used to be, and ethical challenges pop up more frequently. Since ethics is strongly tied to a society's culture (and habits), and culture has had greater latency than technology in these last decades, we are less prepared than in the past to face new ethical issues.

There are clearly many aspects of ethical issues related to symbiotic autonomous systems and they will be part of the studies planned in the IEEE-FDC Initiative. Here I would like to point out two of them, one related to the augmentation of humans and the other to the “meta systems” resulting from the symbiotic relationship among autonomous systems.

These aspects are also addressed in the context of the EU Future Emerging Technologies (FET) CSA Observe, which discusses human-machine symbiosis.

Augmenting humans is opening up a Pandora's box. We are not aware of the full implications of augmenting humans; at the same time, we have technology that makes it possible and a range of applications (needs) that make it desirable. We are also seeing several undesirable side effects, and we feel there may be many more we are not yet aware of.

Let’s take a positive attitude: suppose human augmentation leads to an increase in human performance with no nasty consequences (such as an augmented human taking advantage of his augmentation to harm others). Even with this unrealistic assumption, we face the issue of managing the gap between the haves and the have-nots. Clearly, this is nothing new. We had, and we have, this gap in many instances: those with a better education have the upper hand over those who are less “literate”, with more opportunities for better, better-paid jobs, raising more educated children with a privileged start in life, thus widening the gap further. The same goes for those with access to knowledge (to the web), to funds, to health care, to food and clean water. Think about it and you will come up with a long list of inequalities in today’s world, and Joan Baez’s song “There but for Fortune” will come to mind.
Human augmentation in the coming decades will provide further steam to already existing inequalities, but I feel that, since these are not really new, we have the cultural “tools” to confront them.

Of course, it is not a given that human augmentation will not be used to harm non-augmented fellows. Again, this is nothing new (unfortunately). The invention of weapons goes back to the first humans; it has just got potentially worse, given the increased effectiveness provided by technology. Killing a man with a club or with a drone achieves the same end result, but the second widens the possibility of reaching a target and de-personalises the action, thus making it more difficult to control and giving rise to novel ethical questions. Yet, as before, since this is nothing new, we have the cultural tools to tackle it (not to solve it, I am afraid, since we have not been able to solve it throughout our history).

Augmenting humans in their sensing capabilities, particularly through invisible technology, is however something brand new, and it may disrupt the very fabric of society as we know it.

We all remember the upheaval generated by Google Glass for its potential violation of privacy. Think about the symbiotic relation of an augmented human with the environment, resulting from an in-depth knowledge of what is going on, including details on the other persons in that environment. We can have a situation in which only one person is augmented (without the other persons being aware of it). The privacy issue is clearly at the forefront, besides the potential unfair advantage for that person. We can also imagine a situation where all people in that environment are augmented and aware of the others. This breaks down the fabric of interpersonal relations as we have known it from birth and, even more importantly, from Darwinian selection. Privacy is more than protecting our own information; it is about making social relations possible. Technology that can bring information about everybody, in real time, as we are interacting, that can dig into our emotions and unveil them, disrupts our social fabric.

We are on the brink of continuous connection to the web to enable services like real-time translation. A microphone and loudspeaker (or cochlear implant) in our ear can connect to the web, sending the voice of the person talking to us in Japanese and bringing back his voice in English. But other services on the web can give us hints on his emotions, can detect whether he is truthful, can augment his talk with information on “why” he is telling us such a thing, can provide advice on how to respond… A personal assistant in symbiosis with us, knowing what our goal is, can even morph our responses to maximise the chance of achieving that goal.
Should we be aware of that? Should we control the personal assistant in real time, or is the symbiosis so strong, and so effective, that we relinquish the decision to it? Who is going to be responsible for the outcome? Suppose that what our personal assistant said brings us what we want but in the process harms the other person (psychologically or even physically): who should take the blame?
This clearly is just an example to make the point. It is also leading me into the discussion of the ethical issues related to a symbiotic autonomous system.

Because of the “autonomous” characteristic, each system in the symbiotic relation makes its (his) choices to the best of its knowledge, to satisfy its needs and goals. This is the case in human relationships. Here, in a way, we live in societies that are the result of symbiotic interactions among autonomous systems (“no man is an island… and therefore never send to know for whom the bell tolls; it tolls for thee”), but we share the same framework (and when we do not, as is the case when different cultures meet/clash, we may run into problems, ethical problems, since deciding what’s right or wrong gets difficult).

In the case of relationships between “augmented” humans and “plain” humans, the symbiotic relationship between a human and his augmenting system may create unprecedented ethical issues. Who is going to be responsible for the actions of the augmented human, since his actions are strongly influenced by his augmentation? Notice that there may be a wide range of situations with fuzzy boundaries. Just for the sake of discussion: what about a person wearing an exoskeleton for his job as a mason, who kills a coworker by choking him with superhuman force because the latter said something that enraged him and he thought about killing him? His exoskeleton decoded the “killing wish” and acted on it, actually killing the other person. Without the exoskeleton that thought would have remained just that, a thought, because that person wanted to kill the other one but would never have harmed him. Would thinking make us guilty? If that were the case, just think how many crimes we would have committed in the privacy of our “brain”…
Or would the responsibility fall upon whoever designed the exoskeleton? What if the designer had actually constrained the exoskeleton not to do any harm, and while wearing it we witness a potential crime that we could stop by throwing a punch at the criminal, but the exoskeleton refuses to do it, so that we are stuck and the crime takes place? Again, these are just naïve examples I am using to make the point.

We are simply not prepared for this. Ethical challenges ahead are many, diverse, and very likely unexpected…

The John J. Reilly Center releases every year a list of ethical challenges resulting from technology evolution. Here is a glimpse of the most recent ones:

  • CRISPR/Cas9 gene editing technology, clearly fraught with issues
  • Rapid whole-genome diagnosis applied to newborns
  • Talking Barbie, privacy violation dangers versus safety and improved care
  • Digital labour rights, interaction with anonymous workers and anonymous bosses
  • Head transplant, the sense of identity
  • Disappearing drones, delivering goods from “nowhere” and then flying away
  • Artificial wombs, taking motherhood to the next step
  • Bone conduction for marketing, providing direct access to the customer’s brain
  • Exoskeletons for the elderly, extending working life and postponing retirement
  • Brain hacking, resulting from wearable EEG
  • Robotic clouds, the rise of autonomous systems interacting with one another
  • NeuV’s Emotion Engine, where your car detects your emotion walking a thin line between safety and privacy
  • Self-healing body, tiny robots swarming through the body’s blood vessels, monitoring physiological processes


Author - Roberto Saracco

© 2010-2018 EIT Digital IVZW. All rights reserved. Legal notice