BBS Leadership Lecture Series – Ethics and Artificial Intelligence – Alon Wolf

9 March 2022

On Thursday, March 3, 2022, a new cycle of talks organized by Bologna Business School began, both in person and online.

 

Opening the BBS Leadership Lecture Series format is Alon Wolf, Vice President of the Technion Israel Institute of Technology and Professor of Mechanical and Biomedical Engineering.

Discussant: Maurizio Gabbrielli, Professor of Computer Science, Director of the Department of Computer Science – Science and Engineering at the University of Bologna, and Associate Dean for AI and Digital Soul in our School.

Topic: ethics and artificial intelligence.

As Professor Gabbrielli points out, an ethical debate is essential, especially in these days marked by a conflict in the heart of Europe.

Alon Wolf grew up professionally working on robotics applied to spine surgery at the Technion in Israel, a young, liberal, and open-minded university with campuses in China and New York. A few months after 9/11 he was in Pittsburgh to manage the work on the snake-robot project, whose robots were used to search under the rubble of the World Trade Center and, later, for precision surgery.

«We need to take a break from research to think about the consequences of the work we have done». It is a challenging topic, and technological development is the starting point: «We have witnessed the fourth industrial revolution». The first, born with the use of steam in 18th-century England, changed the idea of industry around the world. The second, at the beginning of the 20th century, imposed a new concept of production and of the factory with the American Fordist model. The third, culminating in the space race between the USA and the USSR, was based on autonomous systems and marked the beginning of robotics. The fourth, the one we have just experienced, is the combination of artificial intelligence and networks: cloud, deep learning, Wi-Fi, and IoT are now common terms and the subjects of extremely rapid technological innovation.

The fourth industrial revolution has been accelerated and overtaken by the pandemic: we are now entering the fifth industrial age, the era of global connectivity; of intelligent cyborgs, always connected to the network and able to communicate; of artificial intelligence developing its own cognitive systems. We no longer live in buildings, but in intelligent buildings, and we no longer drive cars, but intelligent cars.

The hallmark of this new revolution? Speed. It took 75 years for the telephone to reach 100 million users. 7 years for the web to reach the same number. 4 for Facebook, 2 for Instagram. One month for the Pokémon Go app. It’s an exponential revolution. Data is the new oil: data doesn’t just generate money, it keeps the world going.

So, what can we say about the new industrial era of which we are spectators (or protagonists) that is related to ethics? Let’s start with two very popular topics: ecology and privacy.

Are we living through a clean revolution? No. It may seem as if the aseptic world of new technologies, with its immaculate Apple lines and its digital operators, has no polluting impact on the planet. But it does: even sending a single email generates about 4 grams of CO2e, calculated from the energy used by the computers involved. The idea that everything connected to new technologies is green is, indeed, just a fantasy, promoted and diffused by companies that want us to be conscientious yet happy consumers. The same is true for privacy, a myth in the age of global connectivity: data are the new diamonds. Data protection is one of the main concerns of our time, but can we really protect ourselves from intrusion, fraud, or theft other than by disconnecting? If you don’t pay for a product, you are the product. «I don’t have Facebook», Wolf says, «you can check».

But then, is there space for an ethical debate on contemporary life? Ethics is not an exact science, and the moral crisis comes up as soon as we start talking about any change: can we stop our research and think about the consequences of what we are working on? And more precisely: in what terms can we talk about ethics applied to robotics?

Robots have been around for a long time. The word comes from the Czech robota, meaning heavy labor. It was coined by Josef Čapek, who suggested it to his dramatist brother Karel for R.U.R., Rossumovi univerzální roboti, a three-act play from 1920. The robot was the non-human worker of a hypothetical future. Even the concept of artificial intelligence is not, as many believe, a child of our times: it was introduced in the 1950s by the psychologist Frank Rosenblatt (1928-1971), a father of deep learning, the algorithmic learning at the core of computer vision.


But why is robotics so relevant today? The answer is in the statistics: in the 1950s, the worker-to-retiree ratio was 5.8 to 1. For every retiree, nearly six people in the world were actively working, building the welfare state of the Western economic boom. By 2000, this ratio had fallen to 3.9:1. In 2025 it will be 2.1:1. The pyramid is about to be inverted: soon, old age will no longer be sustainable with the structures, models, and mechanisms in place to date. Robots will be part of the answer. Robot workers, robot assistants, robot drivers. Robotics and artificial intelligence will become pervasive within one or two decades.

So, how can we teach robots, artificial intelligence, the concept of moral choice? And why is it necessary?

A simple thought experiment in ethical philosophy, known as the trolley dilemma: a runaway trolley is hurtling down the railway tracks. Ahead, on the tracks, are five people unable to move. You are standing in the rail yard, next to a lever. If you pull the lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on this side track. You have two (and only two) options: do nothing, in which case the trolley will kill the five people on the main track, or pull the lever, diverting the trolley onto the side track, where it will kill one person. Which is the most ethical option? Or, more simply: what is the right thing to do? Taking Bentham’s utilitarianism as a model, the answer would be clear: kill one to save five. But what if the person on the side track is your brother? Would you kill your brother to save five strangers?
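The utilitarian calculus described above can be sketched as a toy decision rule. This is a hypothetical illustration of Bentham-style harm minimization, not a claim about how any real autonomous system encodes ethics; the function and its parameter names are invented for this example:

```python
def utilitarian_choice(harm_do_nothing: int, harm_intervene: int) -> str:
    """Bentham-style rule: choose the action with the smaller total harm.

    A toy sketch only. The lecture's point is precisely what this rule
    cannot capture, e.g. the one person on the side track being your brother.
    """
    if harm_intervene < harm_do_nothing:
        return "pull the lever"
    return "do nothing"


# Classic setup: five people on the main track, one on the side track.
print(utilitarian_choice(harm_do_nothing=5, harm_intervene=1))  # pull the lever
```

The rule is trivially decisive on the numbers, which is exactly why it fails as soon as the stakes resist being reduced to a count.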

Human ethics are often at an impasse; how, then, can we think of training computers? Yet we cannot afford not to: consider self-driving cars and substitute them for the trolley in the ethical experiment. The car has five people in its path. It has to choose between running them over, risking killing them, or swerving into the guardrail, risking killing its own driver. Which would be the correct ethical choice? To swerve, of course, and risk losing one life to save five. But who would buy this car?

The market, along with consumers and scientists, lies at the heart of the problem. The market has no absolute moral rules and, above all, no time: new things arrive, impose themselves, and only then become the subject of discussion. The ethical dilemma pervades the near future. Now is the time to pose new questions, even if we do not have time to work out the answers. Technology and, above all, the technological demands coming from the marketplace do not allow us to slow down and think about the consequences of what we are doing. It is an almost disarming thought. But perhaps this is all part of evolution: as Darwin said, it is not the strongest species that survives, but the one that best reacts to change.

In the words of Professor Wolf: «We have to evolve with technology. If you want to predict the future, create it».


