The Morality of an Artificial Intelligence

24 January 2018

In 1896, a crowd reportedly fled the Salon indien du Grand Café on the Boulevard des Capucines in Paris, terrified by a train rushing towards them at full speed. Although the story, linked to the first screening of the Lumière brothers' film The Arrival of a Train at La Ciotat Station, is probably little more than a legend, it captures in a picturesque but effective way how humanity reacted to a 'modern devilry' only a little over a century ago. Nowadays our threshold of sensitivity to technological progress has risen considerably, and the idea that a projection on a screen could be mistaken for reality merely makes us smile. This, however, is a matter of perspective. Technology and progress arouse fear when their nature and governability are not fully understood. Yesterday it was trains, which in their early days had to be preceded by a flag-bearer to ensure the safety of passers-by; today it is robots and artificial intelligence, technologies we seem more able to develop than to understand and manage.

Sophia, the robot woman who surprised the world with her statements about conquering it, supported by 65 facial expressions and autonomous reasoning comparable to that of a three-year-old child, is only the figurehead of a host of 'intelligences' that humanity has created and is now trying, in retrospect, to understand and whose evolution it is trying to explain. The essential difference between artificial intelligence and the other technologies with which we have grown accustomed to sharing our time and our planet lies in the capacity to make autonomous decisions. However sophisticated, the advanced systems available to all of us today respond to our needs by fishing, in fractions of a second, through an immense pool of cases and information, but they return the results to us for final use. AI, on the other hand, is capable of developing autonomous solutions, and will be ever more so, learning incrementally from its own actions at a speed not yet fully understood by humans.

Entrusting a machine not only with pre-established tasks but with autonomous decision-making opens up questions that are anything but new. The dilemma of morality is the pivot around which the doubts and uncertainties raised by the spread of artificial intelligence revolve. Who will be responsible for damage caused to third parties by intelligent machines in our service? We face unprecedented questions: if we hand over to a machine the right to decide, along with the corresponding responsibility for action, are we a step away from chaos, or from admitting the existence of a conscience and a different form of life? Furthermore, is it conceivable that, in a not-so-distant future, we may no longer be able to decide the matter at all?

The loss of control frightens us, perhaps far more justifiably and reasonably than the famous Lumière brothers' train. The neuroscientist and philosopher Sam Harris paints a scenario in which machines, from a certain point onwards, begin to improve themselves without our help, or our permission. Harris argues that artificial intelligence would not spontaneously turn evil, as we see happening in films, but it would certainly possess a drive for self-preservation. The smallest discrepancy between our goals and those of an AI could bring harm to the weaker 'species'. Human beings, as Harris suggests, do not hate other living species, but do not hesitate to limit or destroy them when it is in their interest.

According to the most optimistic current of thought, the problem of AI supremacy will not arise if we are able to instill in machines something that has always been an exclusive prerogative of humans: morality. A robot, however, cannot understand a command such as 'do good' or 'choose the lesser evil', because it draws its reasoning from vast numbers of examples and cases. And making a machine understand something that is not unambiguous even for humans is far harder than expected. Although we may all agree that good and evil are two very distinct categories, the agreement fades as soon as we try to establish with absolute precision what belongs to one rather than the other.

One of the first products of artificial intelligence to put this question squarely in front of us is the driverless car. Materially almost ready for our roads, these technological marvels face an obstacle that for now seems insurmountable: how can we give a car the power to make a choice that saves one life at the expense of another? What are the right instructions to give the car, assuming they exist? The doubt goes back to the famous trolley dilemma formulated by Philippa Ruth Foot in 1967, in which people must choose between letting the trolley continue on its course, killing five people, or pulling a switch lever to divert it, killing one. Numerous experiments have been conducted on this dilemma, and even more ethical and moral variants have arisen from it. In 2011, for example, the psychologist Carlos David Navarrete of Michigan State University devised a variant that confirmed the earlier results. Of 147 participants, as many as 133 (90%) diverted the trolley, killing the single person; 11 did not touch the lever; and 3 pulled the switch but then brought the lever back to its original position. The utilitarian choice, preferring the lesser evil and safeguarding the greatest number of lives, would seem to be the road most often traveled.

The philosophical doctrine of utilitarianism holds that the moral action is the one that generates the greatest happiness for the greatest number of people. By this reasoning, a driverless car should choose to save the greatest number of lives and, in the case of the trolley dilemma, always take the diversion. Beyond utility, however, moral responsibility should also be weighed. Given that the driver creates the risk simply by taking the car out, would it be right to prefer saving him over an unsuspecting passer-by? What if there were two people in the car against a single pedestrian? What if the pedestrians were five, but contributed far less to their community than the car's lone driver?
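
Written out explicitly, the utilitarian criterion is strikingly thin. The following minimal Python sketch is only an illustration, with a hypothetical Outcome class and invented numbers rather than any real vehicle's logic; it reduces the entire decision to a count of lives.

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        maneuver: str    # hypothetical label, e.g. "stay on course" or "swerve"
        lives_lost: int  # estimated fatalities if this maneuver is chosen

    def utilitarian_choice(outcomes):
        # The purely utilitarian rule: minimize the number of lives lost.
        return min(outcomes, key=lambda o: o.lives_lost)

    # Trolley-style example: staying on course kills five, swerving kills one.
    options = [Outcome("stay on course", 5), Outcome("swerve", 1)]
    print(utilitarian_choice(options).maneuver)  # -> swerve

Everything the questions above raise, from responsibility to who is in the car to the social worth of the victims, has no place in such a function; that absence is precisely the objection explored next.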

Warren Quinn, a professor at the University of California, Los Angeles, rejected the utilitarian idea, arguing that from an ethical point of view an action that causes harm directly and deliberately is more reprehensible than an indirect action that causes it incidentally. According to a study posted in October 2015 on the arXiv preprint server, if you ask people unfamiliar with philosophy how a car should behave when it must choose between the death of its passengers and that of pedestrians, most will answer that cars should be programmed never to harm passers-by. The psychologist Jean-François Bonnefon, of the Toulouse School of Economics, found that 75% of the participants in his experiments think the car should always swerve and kill its passenger, even to save a single pedestrian. Yet if driverless cars were programmed to sacrifice the driver's life, what would happen if a pedestrian stepped in front of the car on purpose? Self-driving cars cannot assess the relationships between people, so it is impossible at present to leave the decision to them. Likewise, unanimous human agreement on the scenarios with which to program the cars is extremely unlikely.

Highlighting the different interpretations people give to the concepts of 'right' and 'wrong' is also the purpose of the Moral Machine of MIT (the Massachusetts Institute of Technology) in Cambridge, Massachusetts. It is an interactive test in which users put themselves in the shoes of artificial-intelligence programmers: presented with a series of situations, they must choose the most correct and moral action, receiving feedback on their personal ranking of which individuals and animals they would 'sacrifice'.

The real point is not to transfer the modus operandi of the human brain wholesale to an AI, since humans are fallible and, in situations of uncertainty, often act on instinct or on more or less distorted personal judgments. The point is the transfer of responsibility for choice and action. The ethics commission established by the German Ministry of Transport, composed of authorities from the automotive, ethics, religious and legal fields, has in fact produced the first set of guidelines for driverless cars: the driver must always be able to remain in control of the car, and the car's AI must always favor human life over property or animals. The commission has also made an on-board black box compulsory, to reconstruct responsibility in the event of an accident; that responsibility always rests with the driver, except in cases where automated driving was active or a production defect or failure was involved. This decision denies the AI any decision-making autonomy and at the same time hinders its development, given that it learns from its own actions. Such decisions demonstrate the objective difficulty of giving an AI an ethics, and the need not to underestimate the weight of the right to decide.
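
Read as a rule, the commission's guideline amounts to a strict priority ordering. The sketch below is purely illustrative, with category names and ranking values that are assumptions for the example rather than the commission's wording:

    from enum import IntEnum

    class HarmCategory(IntEnum):
        # Lower value = the harm the car should prefer to accept.
        PROPERTY = 0
        ANIMAL = 1
        HUMAN = 2

    def acceptable_harm(a: HarmCategory, b: HarmCategory) -> HarmCategory:
        # Given two unavoidable harms, accept the lower-ranked one,
        # so that human life is never traded for animals or property.
        return min(a, b)

    print(acceptable_harm(HarmCategory.HUMAN, HarmCategory.PROPERTY).name)  # -> PROPERTY

A fixed ordering of this kind leaves the AI no discretion at all, which is exactly the denial of autonomy described above.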

The cultural resistance of humans towards autonomous machines and AI as a whole seems justified for the moment, but, as in the past, progress cannot be stopped, only understood and managed. In a recent survey conducted by the American Automobile Association's Foundation for Traffic Safety, 78% of respondents said they were afraid of riding in a driverless vehicle, while another survey, conducted by the insurance giant AIG, shows that 41% of participants did not want to share the road with one. Surveys conducted over the last two years by the Massachusetts Institute of Technology (MIT) and by the marketing company J.D. Power and Associates yield the same result. However much companies invest in the safety of these systems, consumer fear and distrust keep growing, partly because of the mystification of the issues surrounding artificial intelligence, and partly because the professionals themselves seem to have no convincing, unanimous answers.

Whatever the evolution of artificial intelligence and its use in our everyday lives turns out to be, we can be sure that this evolution will take place regardless. We are witnessing the transition of our world and, as some hypothesize, of our species towards a new era. We can choose to observe this transformation from a distance, shielded by skepticism and worry, or decide to be part of it, becoming aware of what is happening or even contributing to it ourselves. For those who want to approach the challenges of the future with the right skills, Bologna Business School offers programs designed to train the specialists of today's and tomorrow's technologies.
