How to make AI explainable?

• Have you ever wondered how AI reaches its decisions? The challenge is a crucial one: enabling humans to understand the results of an artificial intelligence system.
• Explainable AI techniques combine methods and processes designed to reveal the operating logic of an algorithm and give users clear explanations of how the AI reaches its decisions.
• A myriad of techniques already exists, depending on the context, the target audience and the impact of the algorithm. And that is before factoring in generative AI systems, whose development raises new challenges for explainability.
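The bullets above describe explainability techniques only in general terms. As one concrete, widely used example (not one named in the article), permutation feature importance estimates how much a model relies on each input: shuffle one feature at a time and measure how much the model's score drops. A minimal sketch, assuming scikit-learn and its bundled iris dataset purely for illustration:

```python
# Illustrative sketch of permutation feature importance, a common
# model-agnostic explainability technique. Dataset and model are
# assumptions for the example, not from the article.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record the drop in accuracy:
# a large drop means the model depends heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
importances = dict(zip(load_iris().feature_names, result.importances_mean))
for name, score in sorted(importances.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

Techniques like this give a user a ranked, human-readable account of which inputs drove a prediction, which is one ingredient of the explanations the article discusses.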

Explainability of artificial intelligence systems: what are the requirements and limits?


Data and AI Ethics Council, guarantor of responsible AI at Orange


AI: “the divide between freelance and in-house developers can be damaging”


Orange OpenTech 2024: AI is here


Machine learning for intuitive robots that are aware of their environment

[Photo: Rob Wood (Harvard / CETI), deploying a drone in Dominica]

An AI to predict where sperm whales will surface
