X-AI: understanding how algorithms reason

The explainability of artificial intelligence refers to the ability to explain how an algorithm works in order to understand how and why it produces a particular result. This new field of research is both a scientific challenge and an issue for democracy.


Advances in artificial intelligence (AI) and its increasing use in sensitive areas such as health, justice, education, banking and insurance are forcing us to examine the issues these technologies raise, whether regulatory, ethical or environmental.

Among these issues, explainable AI (or X-AI) is emerging as one of the criteria for an ethical AI. With algorithms increasingly involved in decision-making processes that can have a significant impact on people’s lives, it is important to understand how they “reason”.

Thus, the doctor who makes a diagnosis with the help of a decision-support algorithm, or the judge who imposes a sentence based on recidivism-prediction software, should be able to know how these systems arrive at a given result. In the same way, someone who is refused a bank loan by a credit-scoring algorithm should be able to know why.

The problem is that these technologies, and in particular those based on machine learning techniques, are often opaque. It is sometimes very difficult – even for their developers – to explain their predictions or their decisions.

This is the “black box” phenomenon: algorithms whose inputs and outputs can be observed, but whose internal workings we do not understand. Indeed, unlike traditional algorithms, which follow a set of predetermined rules, machine learning algorithms generate the rules they follow by themselves.

For fairer and more reliable algorithms

According to the Villani public report on artificial intelligence, the explainability of AI is one of the conditions of its social acceptability.

First of all, the report reminds us, it is a “question of principle” because as a society we cannot allow certain important decisions to be made with no explanation: “Without being able to explain decisions taken by autonomous systems, it is difficult to justify them: it would seem inconceivable to accept what cannot be justified in areas as crucial to the life of an individual as access to credit, employment, accommodation, justice and health”.

Several examples have shown that algorithms can make “bad” decisions due to errors or biases of human origin present in the datasets or the code. By making their reasoning transparent, explainability helps to identify the source of these errors and biases, and to correct them.

This also makes it possible to guarantee the reliability and fairness of algorithms, and thus to establish public confidence. This question is pivotal to the future of AI, as a lack of public confidence could hinder its development.

Improving the explainability of algorithms without compromising efficiency

What, precisely, does the concept of explainable AI imply?

Explainability – or interpretability – is a component of algorithm transparency. It describes an AI system’s property of being easily understandable by humans. The information must therefore be presented in a form that is intelligible not only to experts (programmers, data scientists, researchers, etc.) but also to the general public.

In other words, publishing the source code is not enough, not only because that does not systematically make it possible to identify algorithmic bias (the behaviour of certain algorithms cannot be understood independently of their training data), but also because it is not readable by the vast majority of the public.

Furthermore, this could conflict with intellectual property rights, as an algorithm’s source code may be regarded as a trade secret.

What’s more, X-AI presents several challenges. The first lies in the complexity of certain algorithms, based on machine learning techniques such as deep neural networks or random forests, which are intrinsically difficult for humans to grasp, along with the large number of variables they take into account.

The second challenge: it is precisely this complexity that has made algorithms more effective. In the current state of the art, increasing explainability often comes at the expense of the precision of the results.

A new field of research

Research on X-AI is still fairly recent. In 2016, the United States Defense Advanced Research Projects Agency (DARPA) launched its Explainable AI (XAI) program, aimed at creating a suite of machine learning techniques to “produce more explainable models, while maintaining a high level of learning performance” and “enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners”.

The aim is to foster the development of models that are explainable “by design” (i.e. right from the design phase), capable of describing their reasoning and identifying their strengths and weaknesses, combined with the production of more intelligible user interfaces.

In France, the National Centre for Scientific Research (CNRS), the Institute for Research in Computer Science and Automation (Inria) and several research laboratories in the Hauts-de-France region have decided to join forces to reflect on these questions within an alliance called “humAIn”. Their work draws notably on symbolic AI techniques, which are “based on ‘symbolic’ (human-readable) representations”.

On the startup side, we can mention craft ai, which offers AI APIs “as a service”. This young French startup has chosen to limit its offering to explainable models by using decision trees: graphs that organise the data into a hierarchy of successive decisions leading to a predicted result. It is undertaking a major research effort to improve these algorithms, industrialise them and make them more accessible to businesses.
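
To illustrate why decision trees lend themselves to explanation, here is a minimal sketch (using scikit-learn and its bundled iris dataset purely as an illustration; this is not craft ai’s API): a shallow tree is trained and the rules it has learned are printed in human-readable form.

```python
# Minimal sketch: a decision tree is explainable because the rules it has
# learned can be printed as a sequence of human-readable if/else tests.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)  # a shallow tree stays readable
tree.fit(data.data, data.target)

# export_text renders the tree as nested "feature <= threshold" rules
print(export_text(tree, feature_names=list(data.feature_names)))
```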

Algorithms squared

The number of AI systems that are explainable by design remains limited, which has led to the emergence of “algorithms that explain algorithms”. The idea is to add an extra layer of explainability on top of black-box models, making it possible to understand how and why an algorithm produces its results, for example by highlighting the importance of certain variables or by representing the decision-making process.
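
One common approach of this kind, shown below as a minimal sketch rather than any particular product, is the “global surrogate”: a simple, interpretable model (here a shallow decision tree) is trained to mimic the predictions of an opaque model (here a random forest), so that its rules approximate how the black box behaves. The scikit-learn library and its bundled dataset are used purely for illustration.

```python
# Minimal sketch of an "algorithm that explains an algorithm":
# a shallow decision tree (the surrogate) is trained to mimic the
# predictions of an opaque random forest (the black box).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is fitted on the black box's outputs, not on the true labels,
# so its rules describe how the black box behaves rather than the data itself.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=list(data.feature_names)))

# "Fidelity": how often the surrogate agrees with the black box.
print("fidelity:", surrogate.score(X, black_box.predict(X)))
```

The higher the fidelity, the more the surrogate’s readable rules can be trusted as an account of the black box’s behaviour; it remains an approximation, not the model itself.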

The complexity introduced by machine learning has considerably improved algorithm performance in many areas, but this quest for efficiency has often come at the expense of transparency. Today, explainability is increasingly seen as an important criterion of a “good” algorithm, all the more so as regulation is evolving in this direction. Tomorrow, it will therefore be a matter of finding a satisfactory compromise between efficiency and explainability.
