Decision making: AI can reduce the rate of human error

● When making decisions, human beings can draw on support from algorithms to reduce the risk of making mistakes in their interaction with machines. However, they should always retain control over final decisions.
● Artificial intelligence, based on neural networks and machine learning, has the potential to resolve, in real time, problems that the human brain cannot detect.
● We talk to experts like Luc Julia, the co-inventor of Siri, and Jean-Gabriel Ganascia (French National Centre for Scientific Research) about how we should approach the interaction between humans and complex systems.

AI is not the equal of man, who is capable of imagining and successfully carrying out an emergency landing on the Hudson River.

Human error can have disastrous consequences, ranging from production slowdowns to physical injuries at critical production sites and serious medical errors. And mistakes in health care are not only harmful to patients; they also affect the lives of medical professionals, who are often wracked by guilt and other negative feelings, creating a vicious circle in which further errors become more likely. Machine learning models trained on critical data, like one evaluated last year in a study published in BMC Health Services Research, can support health professionals making decisions about medication for elderly patients, and improve precision screening for errors in neonatal intensive care units. However, notwithstanding the positive results demonstrated by these promising systems, the WHO has cautioned against a rush to automate decision making, which could potentially increase the number of medical errors and undermine confidence in artificial intelligence.

“Errare humanum est”, Jean-Gabriel Ganascia reminds us with a smile, when we mention using artificial intelligence (AI) to eliminate errors in the interaction between humans and complex systems. Algorithms are written by humans, who are fallible, so they may introduce errors or sources of errors into programming, emphasizes the AI expert and president of the CNRS (French National Centre for Scientific Research) ethics committee. Relying on AI to eliminate errors means accepting the risk that the AI will make other mistakes, in good faith, you might say. A good example of this was the accident caused by an Uber autonomous car in March 2018, recalls Jean-Gabriel Ganascia. “The AI was not faulty, the programme worked perfectly,” points out the expert, who adds that the self-driving vehicle had been programmed to take into account cyclists and pedestrians, but not a pedestrian pushing a bicycle. It had also been trained to ignore extraneous phenomena such as plastic bags flying into its path, which would otherwise cause it to stop unexpectedly. So the model complied perfectly with the instructions it had been given. The error did not come from the machine but from the programmers, who had not comprehensively described all the possible hazards in this situation.
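This failure mode is easy to reproduce in miniature. The sketch below is a deliberately simplified, hypothetical decision rule, not Uber’s actual software: the hazard classes, like the logic, are invented for illustration. Because the planner only reacts to categories its designers enumerated in advance, a detection that matches none of them falls through silently.

```python
# Hypothetical, simplified decision rule: the planner reacts only to
# hazard classes its designers enumerated in advance.
KNOWN_HAZARDS = {"pedestrian", "cyclist"}
IGNORED = {"plastic_bag"}  # deliberately ignored to avoid phantom stops

def plan(detected_class: str) -> str:
    if detected_class in KNOWN_HAZARDS:
        return "brake"
    if detected_class in IGNORED:
        return "continue"
    # Anything outside the enumerated categories falls through silently:
    # the gap lives in the specification, not in the machine.
    return "continue"

# A pedestrian pushing a bicycle matches neither label exactly:
print(plan("pedestrian_pushing_bicycle"))  # -> "continue"
```

The code does exactly what it was told to do; the omission sits in the list of categories, which is precisely Ganascia’s point.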

There is no intelligence in AI, Luc Julia says provocatively, but there is knowledge – of data and of rules – and there is recognition.

Augmented intelligence

The story has the merit of throwing into question the notion of error in relations between humans and complex systems. “Machines don’t invent anything; what they produce comes solely from the data we put into them,” points out Luc Julia, the co-inventor of Apple’s virtual assistant, Siri, and the current Chief Scientific Officer of Renault, who published L’intelligence artificielle n’existe pas (“Artificial Intelligence Does Not Exist”) in late 2019. “There is no intelligence in AI,” he says provocatively, “but knowledge of data and rules – and recognition.” Instead, we should be talking about “augmented human intelligence, which will draw on resources that we cannot mobilise with the same power as machines”, says Luc Julia, citing AlphaGo, the first ever programme to defeat a Go world champion. “Augmenting human intelligence will enable us to limit the margin for error in areas like driving, medical diagnostics and the operation of electronics, three major areas of application in which AI is used to track down errors.” Luc Julia is not the only thinker to highlight the role of artificial intelligence as a means to circumvent human miscalculations. French computer scientist and philosopher Jean-Gabriel Ganascia has also pointed out that the systematic nature of AI, combined with its computing power, can compensate for human shortcomings in areas “where humans may fail, because they are subject to stress and moods, whereas AI is not.” In an article published in the journal Nuclear Engineering and Technology in February 2023, American researchers documented how they trained generative adversarial networks – unsupervised machine-learning algorithms – to detect mismatches between automatically recorded sensor data and manually collected surveillance data in a nuclear power plant. The results of their study were unequivocal: the new tool improved the detection of both anomalies and human errors.
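To make the idea concrete, here is a minimal sketch of discriminator-based anomaly scoring with a GAN, written in PyTorch. The data, network sizes and the use of the discriminator output as a consistency score are all assumptions made for this example; the method in the cited study is considerably more sophisticated. The intuition is simply that a discriminator trained on genuine (sensor reading, manual log) pairs can serve as a heuristic flag for records where the two sources disagree.

```python
import torch
import torch.nn as nn

# Toy data: pairs (sensor reading, manual log) that normally agree closely.
torch.manual_seed(0)
n = 2000
sensor = torch.randn(n, 1)
manual = sensor + 0.05 * torch.randn(n, 1)  # manual entries track the sensor
real = torch.cat([sensor, manual], dim=1)

# Small generator and discriminator; the sizes are arbitrary for the sketch.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    batch = real[torch.randint(0, n, (64,))]
    # Discriminator step: genuine pairs -> 1, generated pairs -> 0.
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(batch), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: try to fool the discriminator.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Heuristic check: a consistent record should score higher than one where
# the manual log diverges sharply from the sensor (a possible human error).
consistent = torch.tensor([[0.5, 0.52]])
mismatch = torch.tensor([[0.5, 3.0]])
print(D(consistent).item(), D(mismatch).item())
```

In practice, GAN-based anomaly detectors usually combine the discriminator score with a reconstruction error, but the sketch captures the unsupervised principle the researchers exploited: learn what normal agreement looks like, then flag departures from it.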

Different kinds of errors

However, all the experts are keen to emphasise that although AI can provide valuable help in the control of complex systems, like modern aircraft, it should not be considered the equal of trained humans, who may be required to respond to unforeseen situations. The 2009 incident in which the pilots of an Airbus A320 were forced to make an emergency landing on the Hudson River is an apt illustration of this point. Their quickfire decision to bring down the crippled aircraft next to Manhattan, which they accomplished without any loss of life, not only won them the admiration of the world, it also provided a telling reminder of the primacy of humans over machines.

“Without replacing humans, or completely eliminating human error, AI can act as a limiting force. It all depends on the nature of the errors involved,” points out Célestin Sedogbo, director of a research laboratory on cognition and a specialist in language processing at ENSC-Bordeaux. When human error stems from a lack of knowledge, the cognitive enhancement offered by AI can guide an operator step by step through the necessary procedure. The “tunnel vision” effect is another type of error studied by Célestin Sedogbo’s research institute. It can happen that “an experienced and perfectly competent pilot, who is busy with a particular problem, does not hear the landing gear extension instruction,” points out Sedogbo, who works with Thales, “because his or her attention is focused in a tunnel.” In such cases, a second audio alert may also be ignored.

To get around this problem, the solution is to call on a mirror-neuron reflex, which operates on the principle that observing a behaviour prompts others to imitate it, much as the sight of someone yawning makes other people yawn. The programme deployed by the AI will show the distracted pilot a video of another pilot extending an aircraft’s landing gear, which will break through the tunnel vision effect and prompt him or her to initiate the necessary procedure.

Biased and unfair algorithms

Cultural biases and prejudices that distort judgement and influence decisions are another common type of human error. “In such cases,” explains Gaël Varoquaux, a researcher at Inria Saclay, “AI will only be able to identify and correct biases if they have been described and incorporated into algorithms.” Citing the recidivism prediction software used in American prisons, which a ProPublica study has shown to be unfavourable to black convicts, Varoquaux explains that there is no such thing as a neutral algorithm: if we don’t rectify biases in the data submitted to AIs, then these tools will reproduce them. “AI will not correct human errors in such cases, it will just learn to imitate them.”
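Varoquaux’s warning can be reproduced with a few lines of synthetic data. In the hypothetical sketch below, historical labels were biased against one group at equal underlying risk; a model trained on those labels learns the bias and hands it back. The group variable, the bias strength and the risk score are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)   # hypothetical protected attribute (0 or 1)
risk = rng.normal(size=n)       # the legitimate underlying signal

# Historical decisions were biased: group 1 was flagged more often
# than group 0 at the same level of actual risk.
label = (risk + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X = np.column_stack([risk, group])
model = LogisticRegression().fit(X, label)

# At identical risk, the trained model still scores group 1 higher:
same_risk = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_risk)[:, 1])  # group 1 gets the higher score
```

The model has made no error in the statistical sense; it has faithfully learned the pattern in its training data, which is exactly why unexamined data reproduces unfair decisions.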
