AI could reduce the human error rate

AI is not the equal of man, who is capable of imagining and successfully carrying out an emergency landing on the Hudson River.

Humans can rely on an algorithm to reduce the risk of error in their interactions with a complex system, but the final decision must remain with the human.

“Errare humanum est”, Jean-Gabriel Ganascia reminds us with a smile when we mention turning to artificial intelligence (AI) to eliminate error in the interactions between humans and complex systems. Algorithms are written by humans, who are fallible and may therefore have introduced errors, or sources of error, into their programmes, emphasises the AI expert and president of the CNRS (French National Centre for Scientific Research) ethics committee. Relying on AI to eliminate error therefore means accepting the risk of seeing the AI itself make an error, entirely in good faith, so to speak. It is in this sense that an accident was caused by an Uber autonomous car last March, recalls Jean-Gabriel Ganascia. “The AI was not at fault; the programme worked perfectly”, the expert says, explaining that the car had been programmed to take into account a cyclist or a pedestrian, but not a pedestrian pushing a bike. It had also been programmed to ignore interfering images, such as a plastic bag blowing across the road, so that it would not brake erratically. The car was therefore in perfect compliance with its set of instructions. The error did not come from the machine, but from the person who failed to describe all the possible use cases of the situation.
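The logic Jean-Gabriel Ganascia describes can be sketched as a simple rule set. The class names and the braking rule below are purely illustrative assumptions, not Uber’s actual perception code; they only show how a programme can “work perfectly” while a case nobody described falls through the rules:

```python
# Illustrative sketch only: these object classes and this braking rule are
# hypothetical, not the real system's code.

KNOWN_OBSTACLES = {"pedestrian", "cyclist"}   # the use cases the programmer described
IGNORED_CLUTTER = {"plastic_bag", "leaves"}   # interference deliberately filtered out

def should_brake(detected_class: str) -> bool:
    """Apply the rules exactly as written by the programmer."""
    if detected_class in IGNORED_CLUTTER:
        return False   # ignore clutter so the car is not stopped erratically
    if detected_class in KNOWN_OBSTACLES:
        return True    # brake for the cases that were anticipated
    return False       # anything undescribed falls through the rules

print(should_brake("pedestrian"))               # True
print(should_brake("plastic_bag"))              # False
print(should_brake("pedestrian_pushing_bike"))  # False: the case nobody described
```

The programme follows its instructions flawlessly; the error lies in the incomplete enumeration of cases, exactly as the expert argues.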

Augmented intelligence

This event should nevertheless be credited with raising the question of error in the relationship between a human and a complex system. “The machine invents nothing; what it can produce comes from the data that we have fed into it, and from that data only”, states Luc Julia, co-inventor of Siri, Apple’s voice interface, and CTO and Vice President of Innovation at Samsung, who is due to publish “Artificial Intelligence Doesn’t Exist” this autumn. “There is no intelligence in AI”, he quips, “but there is knowledge – of data and of rules – and there is recognition”. We should instead speak of the “augmented intelligence” of the human, who will draw on resources that he cannot mobilise with the same power as the machine, Luc Julia believes, citing AlphaGo, the programme that beat the Go champion. “This augmentation will enable humans to limit the margin of error in areas such as driving a car, medical diagnosis and the operation of electronic products – the three large application areas of the AI tasked with hunting down errors”, says Samsung’s head of innovation. The systematic nature of AI, combined with its computing power, is indeed well suited to compensating for human deficiencies, Jean-Gabriel Ganascia concedes: “where man can be at fault because he is subject to pressures and moods, AI is not”.

Different types of error

Yet, as all the experts agree, even if it is a precious aid in piloting a system as complex as an aeroplane, AI is not the equal of man, who alone is capable of imagining and successfully carrying out an emergency water landing on the Hudson River. In 2009, this feat by the pilot of an Airbus A320, who ditched his plane on the river that runs past Manhattan with no loss of life, drew admiration and served as a reminder of the primacy of man over machine. “Without replacing man or completely eliminating human error, AI can limit it. It all depends on the nature of the error”, points out Célestin Sedogbo, director of the Cognition Institute and language-processing expert at ENSC-Bordeaux. When human error stems from a lack of knowledge, the cognitive augmentation enabled by AI can make the operator completely fool-proof by providing him with an operating procedure and guiding him step by step. “Tunnelling” is another type of error studied at the institute directed by Célestin Sedogbo. “A perfectly competent, experienced pilot, preoccupied by a particular problem, will not hear the instruction to extend the landing gear”, explains the expert, who works with Thales. “His attention is concentrated inside a tunnel”. A new alarm will not pull him out of it, because he would not hear that either. The only solution is to trigger a reflex based on so-called “mirror neurons”. On the same principle as the yawn provoked simply by the sight of another person yawning, the programme initiated by the AI plays a video before the pilot’s eyes showing, precisely, a pilot extending the landing gear. The pilot then “exits” his tunnel to copy the action being played out in front of him.

Unfair algorithm

Another common type of human error – cultural bias and prejudice – can distort judgement and influence a decision. In this case, specifies Gaël Varoquaux, a researcher at Inria Saclay, AI will only be able to spot and correct a bias if that bias has been described beforehand and built into the algorithm. Recalling the case of the recidivism-prediction software used in American prisons, which the ProPublica investigation showed to be unfavourable to black convicts, Gaël Varoquaux reminds us that there is no such thing as a neutral algorithm: if we do not commit to correcting the biases in the data we feed to the AI, we will reproduce them. “In this case AI will not correct human errors; it will simply learn them”.
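Gaël Varoquaux’s point can be illustrated with a toy sketch. The groups, labels and “training” procedure below are entirely hypothetical; the only claim is that a system fitted to biased historical decisions reproduces those decisions rather than correcting them:

```python
# Hypothetical toy data: past decisions carrying a bias against group "B".
# Any model fitted to these labels learns the bias instead of correcting it.
from collections import defaultdict

history = [
    ("A", "low_risk"), ("A", "low_risk"), ("A", "high_risk"),
    ("B", "high_risk"), ("B", "high_risk"), ("B", "low_risk"),
]

# "Training": learn the majority label per group, as a naive classifier would.
counts = defaultdict(lambda: defaultdict(int))
for group, label in history:
    counts[group][label] += 1
predict = {g: max(labels, key=labels.get) for g, labels in counts.items()}

print(predict)  # {'A': 'low_risk', 'B': 'high_risk'} -- the bias is reproduced
```

Nothing in the data tells the model which part of the pattern is a prejudice; unless that bias is described and corrected explicitly, the algorithm simply learns it.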

