
Humanisation of AI is not limitless

Algorithms can endeavour to take emotions and feelings into account so as to better understand users’ needs, but the humanised robot remains a mirage.

“We want machines to be inhuman in order to be more reliable…” Jean-Gabriel Ganascia, artificial intelligence researcher and President of the CNRS ethics committee, is amused by the paradox. It is indeed with the aim of improving interactions between humans and machines that various labs are attempting to integrate strictly human characteristics into AI: positive ones such as creativity, or less flattering ones such as lying, fear, or transgression. Until now, machines were programmed to mimic the components of human intelligence in order to reproduce its processes. The next stage concerns the “theory of mind”, one of the four types of artificial intelligence described by Arend Hintze of Michigan State University. The professor of integrative biology and computer science and engineering claims that algorithms will one day be capable of understanding and ranking the emotions that influence human behaviour.

Monitoring the pilot

This attention by AI to the human factor interests various industry and service sectors, aeronautics among them, where improving the interaction between a pilot and an expert system could find genuinely practical applications. Fed by a set of sensors collecting data on the pilot’s physiological and emotional state (blood pressure, heart rate, rapid eye movements, stress), a programme can grasp emotions and suggest adapted solutions. That is the objective of the “Man Machine Teaming” programme launched by Thales and Dassault at the end of 2017, which aims to improve the human-machine relationship by keeping the human permanently in the decision loop.
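Very schematically, this kind of pilot monitoring amounts to fusing a few physiological readings into a stress estimate and then only *suggesting* assistance, never deciding. The sketch below illustrates the idea; every name, weight, and threshold is invented for illustration and is in no way drawn from the actual Thales and Dassault programme.

```python
from dataclasses import dataclass

@dataclass
class PilotReadings:
    heart_rate_bpm: float       # e.g. from a chest or wrist sensor
    systolic_bp_mmhg: float     # blood pressure
    saccade_rate_hz: float      # rapid eye movements per second

def stress_score(r: PilotReadings) -> float:
    """Map each reading onto [0, 1] against an invented baseline, then average."""
    hr = min(max((r.heart_rate_bpm - 60) / 80, 0.0), 1.0)     # 60 bpm calm, 140 bpm maxed
    bp = min(max((r.systolic_bp_mmhg - 110) / 60, 0.0), 1.0)  # 110 calm, 170 maxed
    sa = min(max((r.saccade_rate_hz - 1.0) / 3.0, 0.0), 1.0)  # ~1 Hz calm, 4 Hz maxed
    return (hr + bp + sa) / 3

def suggest(r: PilotReadings) -> str:
    """Keep the human in the decision loop: the system suggests, it never acts alone."""
    s = stress_score(r)
    if s > 0.7:
        return "high stress: propose simplified display and checklist assistance"
    if s > 0.4:
        return "moderate stress: highlight priority alerts only"
    return "nominal: no intervention"
```

A stressed profile such as `PilotReadings(130, 160, 3.5)` scores above the high-stress threshold, while a calm one such as `PilotReadings(62, 112, 1.0)` stays nominal.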

Detecting emotions, and building exchanges between human and machine that take those emotions into account, is one aspect of affective computing, studied notably at LIMSI-CNRS. Another aspect of this branch of artificial intelligence is the imitation of emotion by a machine that talks to a human. “Fake empathy”, simulated empathy activated in particular in robots such as Nao and Paro, is one application of affective computing. Capable of detecting the tone of voice or a smile on the face of its interlocutor, Nao is a humanoid that can adapt its responses to emotions; it is used in certain retirement homes and in institutions receiving autistic children. Paro, for its part, takes the form of a seal. Originally developed to assist patients with Alzheimer’s disease, this robot, powered by affective computing, can communicate emotions such as joy, surprise, or discontent.

Joy, fear, anger: the emotions processed by AI remain basic compared with the complexity of the human psyche, which very often combines emotions, such as fear and relief. To refine their perception, researchers multiply the ways of informing the algorithms, for example by combining information from sensors, whose signals are not always easy to interpret, with information from behavioural descriptions, or “psychological templates”.
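One way to picture how sensor signals and behavioural templates could be combined, in the spirit of the passage above, is to describe each basic emotion as an expected profile over the measured signals and keep the best-matching profile. The features, templates, and numbers below are entirely invented for illustration and do not come from LIMSI’s work.

```python
# Sensor-derived features, normalised to [0, 1] (invented values).
features = {"voice_pitch": 0.8, "speech_rate": 0.7, "smile": 0.1, "skin_conductance": 0.9}

# Hypothetical "psychological templates": expected feature profile per basic emotion.
templates = {
    "joy":   {"voice_pitch": 0.7, "speech_rate": 0.6, "smile": 0.9, "skin_conductance": 0.4},
    "fear":  {"voice_pitch": 0.9, "speech_rate": 0.8, "smile": 0.0, "skin_conductance": 0.9},
    "anger": {"voice_pitch": 0.8, "speech_rate": 0.9, "smile": 0.0, "skin_conductance": 0.8},
}

def match(obs: dict, tpl: dict) -> float:
    """Similarity = 1 minus the mean absolute difference across the template's features."""
    diffs = [abs(obs[k] - tpl[k]) for k in tpl]
    return 1 - sum(diffs) / len(diffs)

scores = {emotion: match(features, tpl) for emotion, tpl in templates.items()}
best = max(scores, key=scores.get)
```

With these invented numbers the observation matches “fear” best, but “anger” scores almost as high, a toy echo of the article’s point that real affect is usually a mixture rather than a single clean label.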

A moral AI

The journey will be long before anyone can claim to have put human subtleties into an equation. “Machines are light years from grasping our affects,” stated Laurence Devillers, a researcher at LIMSI, in August 2018. “There is a complexity of mixtures of emotions in real life. We are rarely furiously angry, extremely sad, or deliriously happy, but often in a mixture of fear, relief, amusement, and anger, because context plays a major role.”

After having mimicked cognitive mechanisms and then modestly begun to integrate affective elements, will AI one day have to access consciousness and be endowed with moral values? Some American researchers are toying with this idea, Jean-Gabriel Ganascia tells us, but he does not share this vision: “moral values are of a prescriptive nature, not a descriptive one”. Without going so far as to attempt to give algorithms a conscience, Laurence Devillers, speaking of therapeutic robots used with patients, says she is “convinced that tomorrow our machines will have to have a ‘moral’ dimension”.
