How to make AI explainable?
• Have you ever wondered how AI reaches its decisions? The challenge is crucial: enabling humans to understand the results produced by an artificial intelligence system.
• Explainable AI techniques combine methods and processes designed to reveal the operating logic of an algorithm and give users a clear explanation of how the AI reaches its decisions, as in the sketch below.
• Myriad techniques already exist, chosen according to the context, the target audience and the impact of the algorithm. And that is before factoring in generative AI systems, which raise new explainability challenges.
Read the article
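To give a concrete sense of one such technique, here is a minimal sketch of permutation feature importance, a model-agnostic method that estimates how much a trained model relies on each input feature by shuffling that feature and measuring the resulting drop in accuracy. The scikit-learn library, the random-forest model and the public breast-cancer dataset used here are illustrative assumptions, not tools discussed in the article.

```python
# Minimal sketch of a model-agnostic explainability technique:
# permutation feature importance with scikit-learn (illustrative choice,
# not the method described in the article).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a public tabular dataset and train an opaque model on it.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five most influential features to the user.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(
        f"{data.feature_names[idx]}: "
        f"{result.importances_mean[idx]:.3f} ± {result.importances_std[idx]:.3f}"
    )
```

Because this approach only needs the model's predictions and a score, it can be applied to any black-box model; finer-grained tools that explain individual predictions follow the same principle of probing the model from the outside.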


Explainability of artificial intelligence systems: what are the requirements and limits?
Read the article

Data and AI Ethics Council, guarantor of responsible AI at Orange
Read the article

AI: “the divide between freelance and in-house developers can be damaging”
Read the article

Machine learning for intuitive robots that are aware of their environment
Read the article