Randomness: an ethical solution for learning machines?

A conversation between physicist and philosopher Alexei Grinbaum and Frédéric Serval, head of a data analysis department at LEGO, on the ethical problems raised by learning machines.

“Resorting to randomness is not a means of erasing harm, but of removing the machine from the field of ethical judgement”

A researcher at the Laboratory for Research on the Sciences of Matter (LARSIM) of the Saclay Nuclear Research Centre (CEA), Alexei Grinbaum is a physicist and a philosopher. He takes a particular interest in the ethical questions raised by new technologies (nanotechnology, synthetic biology, robotics, artificial intelligence). In his book Satanas Ex Machina, to be published in early 2019 by Desclée de Brouwer, he suggests resorting to randomness in cases of ethical conflict involving smart machines, in particular autonomous vehicles that must “choose” which lives to protect or to sacrifice in the event of an accident. Following the publication of an article in the Revue française d’éthique appliquée (the French journal of applied ethics), which summarises the point of view developed in his book, we invited him to a discussion with Frédéric Serval, manager of a data analysis department at LEGO.

Frédéric Serval: In your opinion, do the ethical questions that we are facing today with the emergence of learning machines echo the questions of the past?

Alexei Grinbaum: Echo… a good choice of word. Obviously, the technological context is new, but that doesn’t necessarily mean we must come up with completely new ethics to get to grips with it. Hans Jonas, a great German philosopher of the 1970s, searched for such new ethics for years without finding them. The lesson to be learnt is that the ethics of new technology must remain in keeping with time-tested traditional ethical thinking. To go beyond the current technological context, I propose extracting “fundamental motives”, by which I mean the major themes that recur from generation to generation and that have already been analysed and developed throughout the history of ethics.

Frédéric Serval: I would like to talk with you about the trolley problem, which describes a choice between two morally unacceptable outcomes. Is there really an ethical solution to this problem? Some people, for example, have put forward the idea of an opinion poll. However, if we take the example of the MIT “Moral Machine” website, which gathers people’s moral choices in scenarios of an unavoidable accident with several types of variables, we notice that the answers vary substantially. I believe this shows the limits of a mathematical approach to solving a moral dilemma.

Alexei Grinbaum: I took part in a debate on precisely this subject with psychologist Jean-François Bonnefon at the École normale supérieure. Several strategies were suggested for resolving the trolley problem, but before addressing them we must specify that it is a model, not a real situation. In real life, an autonomous vehicle has only limited decision time and limited access to data. The trolley problem sets the technical parameters aside so as to focus on a purely moral dimension. The question is: should the software developer code moral values into the machine to enable it to make the decision?

Among the possible solutions to the trolley problem, the first consists in letting people decide, via a referendum for example, which person should die in the event of an accident. The second consists in choosing a function to optimise: a quantitative measure of the moral quality of the different outcomes, which seems logical, as the machine only knows how to calculate. However, and this is the “consequentialist” point, morals calculated coldly according to predetermined rules will never be entirely satisfactory, because human judgement also depends on the still-uncertain consequences that the action of an autonomous car will produce in the future.

Whatever the chosen criteria, the decision will therefore never be ethically flawless. So, what is to be done? The main argument that I develop in my book starts from this observation that we cannot undo harm. Technology does good and it does harm; it has always been this way. The question is therefore not how to prevent the autonomous car from killing anyone, but how to ensure that the concepts of good and bad remain purely human concepts, and that machines do not become moral agents. Resorting to randomness is not a means of erasing harm, but of removing the machine from the field of ethical judgement.
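To make the proposal concrete, here is a minimal sketch in Python. It is entirely hypothetical: the function and outcome names are invented for illustration, and this is not code from the book. The idea is that when no outcome of a dilemma can be ethically ranked above the others, the controller draws one uniformly at random rather than optimising, so no moral hierarchy is ever encoded in the machine.

```python
import secrets

def resolve_dilemma(outcomes):
    """Select one of several morally unrankable outcomes at random."""
    if not outcomes:
        raise ValueError("at least one outcome is required")
    # secrets draws from the operating system's entropy source, so the
    # result cannot be predicted from the program text alone.
    return outcomes[secrets.randbelow(len(outcomes))]

# Hypothetical dilemma: both trajectories cause harm, and neither has
# been ranked above the other. The machine selects, but does not judge.
choice = resolve_dilemma(["swerve_left", "swerve_right"])
print(choice)
```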

Serval: Regarding the optimisation function, it’s interesting to point out that it is always defined by a human being. When an algorithm adjusts the price of plane tickets, for example, it isn’t optimised to improve the journey of the majority, but to increase the route’s profitability… That choice was made not by a machine but by the person who designed the optimisation function. The fact that it is a calculation doesn’t make it “cold”, to use your word…

Grinbaum: Behind every optimisation is a hierarchy of values, based on measurements that are quantitative rather than qualitative, i.e. several criteria with which different coefficients are associated, from 1 to 10 for example. And you are correct: these hierarchies of values are always defined by human developers. But an ethical dilemma is a conflict of values that runs much deeper than a comparison of the type “7 > 4”. Establishing a quantified hierarchy of values is therefore very difficult, even impossible, in particular when human lives are at stake.
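To illustrate what such a quantified hierarchy looks like in code, here is a hedged sketch; the criteria and coefficients are invented, since choosing them is precisely the human act Grinbaum describes. The machine’s “decision” then reduces to comparing two numbers, exactly the “7 > 4” kind of measurement:

```python
# Hypothetical hierarchy of values: each criterion carries a coefficient.
# Choosing these numbers is a human, ethical act, not a technical one.
WEIGHTS = {
    "passengers_protected": 10,
    "pedestrians_protected": 10,
    "property_damage_avoided": 2,
}

def score(outcome):
    """The optimisation function: a weighted sum over quantified criteria."""
    return sum(WEIGHTS[c] * v for c, v in outcome.items())

outcome_a = {"passengers_protected": 1, "pedestrians_protected": 0,
             "property_damage_avoided": 1}   # score: 12
outcome_b = {"passengers_protected": 0, "pedestrians_protected": 1,
             "property_damage_avoided": 0}   # score: 10

# The machine merely observes that 12 > 10; the moral content was fixed
# upstream, by whoever wrote WEIGHTS.
best = max([outcome_a, outcome_b], key=score)
```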

Furthermore, when man makes a moral choice, he is confronted with a certain lack of transparency. Personally, I don’t know whether justice is more important than freedom, or vice versa. I decide according to context and instinct, but also according to the limited resources that I have. So in ethics I am not entirely transparent, even to myself. Yet the calculation of a predetermined function is, by definition, transparent.

Serval: We are getting to the heart of your proposal: using randomness to add opacity. Randomness is an event or a variable not linked to any causality, a true expression of destiny. But, and I see this in my everyday work, true randomness does not exist in computing. I can generate a pseudo-random sequence, but it will never be completely random. There will always be causality, because the sequence I code will be the same today on my machine as it will be in ten days on a colleague’s…
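Serval’s observation can be demonstrated in a few lines (a sketch in Python, though any language with a seedable generator behaves the same way): two generators initialised with the same seed produce identical sequences, whatever the machine and whatever the day.

```python
import random

# "My machine today" and "a colleague's machine in ten days": two
# independent generators initialised with the same seed.
mine = random.Random(42)
colleague = random.Random(42)

seq_mine = [mine.randint(0, 9) for _ in range(10)]
seq_colleague = [colleague.randint(0, 9) for _ in range(10)]

# The sequences are identical: the only "cause" is the seed.
assert seq_mine == seq_colleague
```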

Grinbaum: I have discussed this question with colleagues… Serge Abiteboul, a computer scientist and member of the French Académie des sciences, believes this solution can only work in the presence of fundamental, irreducible randomness, produced for example by a quantum random number generator. But in my book I explain that we only need apparent randomness. For the ethical point to hold, it suffices that users believe the choice is made randomly. One example is undefined behaviour in programs: some outcomes are unknown to the programmer because they are not specified in the compiler’s documentation, and thus appear to be random.
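Grinbaum’s example concerns compiled languages, where the standard leaves some behaviour undefined. Python offers a modest analogue of such apparent randomness (a sketch; the set contents are arbitrary): since Python 3.3, string hashes are salted per process, so the iteration order of a set of strings can change from one run to the next even though the program calls no random-number generator.

```python
# Run this script twice (with the PYTHONHASHSEED variable unset): the
# printed order will usually differ between runs. The program contains
# no explicit source of randomness; the variation comes from per-process
# hash salting, a detail outside the programmer's control. Randomness
# that is merely apparent, in Grinbaum's sense.
names = {"alice", "bob", "carol", "dave"}
print(list(names))
```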

Serval: So, although it is pseudo-random, this opacity for both user and developer provides an apparent randomness that you consider sufficient. Doesn’t that mean that, in the end, we’re entering the realm of faith, in the sense that users must have faith that the machine will let chance choose? Which brings us to the myth of Joshua covered in the article. Is it really desirable for Men to maintain the same relationship with learning machines that the Jews did with the God of the Talmud, when the machines are only pretending to do something that they cannot in fact do?

Grinbaum: I prefer to use the term “trust”. Indeed, the user’s trust is fundamental. On this subject I retell a Bible story. At the end of it, when Joshua says to Achan, “I pray you, make a confession. It is by a draw of lots that the land will be divided amongst the tribes of Israel”, Achan understands that it is not his life that is at stake but the trust the people shall place in the procedure. A chapter of my book is called “Never will a throw of the dice abolish trust”. In it I explore how to maintain this trust. It is very difficult, because using randomness raises not only ethical questions but also psychological and political ones. Yet in these areas, today, everything goes against trust. We see it, for example, with the APB and Parcoursup French higher-education admission platforms… When you tell people that a random draw will be used, they find it unacceptable and even unfair, even if this perception is tending to evolve with time.

Serval: Once you allow machines to be Moirai (the Greek goddesses of fate), don’t Men then risk rebelling against IT systems so as to feel they are taking back control of their destiny? It reminds me of another, much more modern myth, that of the Butlerian Jihad in Dune, which results in a ban on artificial intelligences, replaced by a caste of Mentats specialised in data analysis…

Grinbaum: I think that the fundamental problem of the man-machine relationship isn’t one of competition but of mutual imitation. We are attempting to develop machines capable of simulating human behaviour. But when interacting with them, we also imitate the machines’ behaviour. Man is a fantastic “imitating machine”! For example, the adoption by young people of SMS language, i.e. information compression, is a machine value that has become human. This mimicry is quite formidable, and it seems to me all the more dangerous because we are not always aware of it. That is where I would place the danger.

The myth of Joshua: After the death of Moses, the people of Israel are guided by a new leader, Joshua. It is he who finally crosses the Jordan and enters the promised land. But this land is already inhabited: Joshua must wage war on its occupants. Quite quickly, the army of Israel takes the town of Jericho. The jubilant population is then under the illusion that the conquest will be easy: since this land had been promised to them by God, they should fly from victory to victory. Yet the first defeat comes right afterwards: the inhabitants of a small city called Ai, not far from Jericho, push back Joshua’s men.

During the battle of Jericho, God had declared that only he could legitimately take possession of the property of the people who previously occupied the promised land. These objects were forbidden to the people of Israel: whoever touched them was to be put to death. Logically, the cause of the defeat at Ai could only be a violation of this divine ban. Joshua then stays alone with God, tears his clothes, lies face down on the earth and remains before the Ark of the Covenant until evening. It is there that God informs him of the punishment inflicted on Israel at Ai. The people must “destroy the accursed things among” them by finding the culprit, who is to be burned.

But who is the culprit? That is what Joshua asks God. God does not answer. He says: “Vekhi delator ani” – “But am I a denouncer?” “Throw the dice”, God orders Joshua. Behind this command lies God’s reluctance to become a denouncer. It is not up to him to report the culprit, for fear of being implicated in a matter of human judgement; it is up to man to follow the procedure and to seek, rather than create, the truth. At stake, more than the question of finding the culprit, is trust in the procedure. The dice point to a man named Achan. At first he rebels against Joshua: “Are you condemning me by a draw of lots? What if the lot had landed on you?” Joshua replies: “I pray you, make a confession. It is by a draw of lots that the land will be divided amongst the tribes of Israel.” Immediately Achan confesses. He has understood that what is at stake is not his life but trust in the procedure.
