"Artificial" Intelligence - it is getting "natural" - Part 4

A Google self-driving car stopped by the police. Will it be fined? Credit: Google

The point I have made so far is that, as artificial intelligence becomes more widespread and percolates into our everyday experience, it will tend to fade from our perception, becoming a “natural” intelligence.

However, this will bring to the fore a number of issues we are not used to tackling.

Being intelligent comes with responsibility. Many legal systems around the world stipulate that if you are not in a condition to understand what you are doing, you cannot be held responsible. Understanding involves intelligence, as I noted before, as well as awareness (which is again an aspect of intelligence).

Does the reverse apply? In other words, if a robot becomes aware and intelligent, should it be held responsible for its actions and their consequences?

Or would we assume, in this case, that there is an owner and that the responsibility falls on the owner?
Would a policeman stop the car and fine it for some violation, as happened in California to a Google car? One could say that an autonomous car will never commit any violation if it has been trained (programmed?) correctly; if it does, the blame, and the consequences, should fall on the programmer. As simple as that.

Actually, it is quite a bit more complicated, and the more one thinks about it, the fuzzier the scenario becomes. As an example, the designer of the autonomous system could point out that the violation, such as running a stop sign, was the consequence of a poorly maintained sign that the car's vision system could not identify correctly, so the blame should fall on the municipality.

Studies are already underway to address these new issues, although my feeling is that we have just started to scratch the surface of a whole new world.

Author - Roberto Saracco
