Behind the Scenes of AI — The Challenges and Methods of Explainability

AI systems have become so complex that most experts are unable to understand the intricacies of how they work. Explainability is the foundation of transparency, making it a key requirement for being able to trust algorithms.

“A company that uses algorithms should not be a black box company,” explained Cédric Villani in his 2018 report on AI ethics. AI systems have indeed become part of our daily lives, including in critical areas such as health, mobility and education. Yet some very advanced Machine Learning or Deep Learning models resemble black boxes: their inputs and outputs are known, but how the data is processed in between remains opaque.

Explainability, the Foundation for Transparency

Explaining an algorithm’s decisions can respond to different needs and use cases: an algorithm designer who wishes to correct or improve their model, a customer who would like to know the reasons behind an automated credit refusal, or a publisher wishing to ensure their tool is compliant.

As the regulatory environment becomes increasingly restrictive, the subject is all the more essential. The GDPR already sets out specific transparency requirements for fully automated decisions, while a new European framework under preparation will stipulate general requirements for risk management, transparency and explainability for the highest-risk systems.

Explainability is key to AI transparency: being able to provide the right information or explanation to the right person at the right time.

Various Research Approaches

In recent years, a dynamic research ecosystem has developed around the explainability of AI, resulting in a variety of techniques and implementation approaches. Among the best known are the variable-based tools SHAP and LIME. LIME explains a specific decision by analyzing its neighborhood and establishing which variable(s) had the most impact on the final prediction.
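To make this concrete, here is a minimal sketch of obtaining a LIME explanation in Python with the open-source lime and scikit-learn packages; the dataset and random-forest model are illustrative placeholders, not ones used in the research described here.

```python
# Minimal sketch: explaining one prediction of a "black box" model with LIME.
# Assumes scikit-learn and the `lime` package are installed; the dataset and
# model below are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# The opaque model whose individual decisions we want to explain
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME samples around the instance and fits a simple local surrogate model
explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)

# Variables that weighed most on this particular prediction, with their weights
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

LIME perturbs the neighborhood of the instance, fits a simple interpretable model to the black-box predictions on those perturbations, and reports the features that weigh most on that one decision.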

Proof by Example

As part of Orange’s research, a thesis project focuses specifically on explainability through counterfactual examples, an approach often preferred to the previous, sometimes unstable, variable-based methods. For a given decision, it involves looking for an example as close as possible to the case studied, but one that led to a different decision. The expert or the customer can then identify the differences between their case and that example, as well as which parameters would need to change to obtain that other decision. “As part of our work, this method has been applied to a marketing use case: predicting churn (termination) and determining which variable values to modify in order to retain a customer. There are many advantages to this, including being able to provide an explanation at the same time as the decision. It is also an advantage to have an explanation that is both intelligible to a non-expert and actionable, as the actions needed to change the decision are clearly identifiable. As well as addressing the issue of trust, transparency should help people regain control over the decisions that affect their lives.”
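To illustrate the principle, the sketch below implements a naive counterfactual-by-example search: for a given instance, find the closest case in the data that the model classifies differently. The data, model and distance measure are assumed placeholders, and this nearest-unlike-neighbor search is only an illustration of the idea, not the method developed in the thesis.

```python
# Minimal, self-contained sketch of a counterfactual-by-example search:
# for a given instance, find the closest case that the model classifies
# differently. Dataset and model are illustrative placeholders.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def nearest_counterfactual(model, X, x):
    """Closest row of X whose predicted class differs from the prediction for x."""
    scale = X.std(axis=0) + 1e-12                # put features on comparable scales
    target = model.predict(x.reshape(1, -1))[0]
    candidates = X[model.predict(X) != target]   # cases that reached a different decision
    distances = np.linalg.norm((candidates - x) / scale, axis=1)
    return candidates[np.argmin(distances)]

x = X[0]
counterfactual = nearest_counterfactual(model, X, x)

# Features that would have to change for the decision to be different
differs = np.nonzero(~np.isclose(x, counterfactual))[0]
for i in differs[:5]:
    print(f"{data.feature_names[i]}: {x[i]:.2f} -> {counterfactual[i]:.2f}")
```

The features that differ between the instance and its counterfactual indicate what would have to change for the decision to flip, which is what makes this kind of explanation actionable for a non-expert.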

In addition to providing a technical solution that supports the explainability method, the research project also includes work on ergonomics to ensure the usability of the solution and the relevance of the explanations.
