FairDeDup limits social biases in AI models
• Large artificial intelligence models trained on massive datasets often produce biased results, raising important ethical questions.
• Eric Slyman, a PhD student working on a project in partnership with Adobe, has developed a method to preserve fair representation in model training data.
• The new algorithm, christened FairDeDup, can be applied to a wide range of models and reduces AI training costs without sacrificing fairness (see the sketch after the link below).
Read the article
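The linked article describes the method at a high level; the intuition behind fairness-aware deduplication can nonetheless be sketched in a few lines. The Python snippet below is a minimal illustration, assuming precomputed sample embeddings and a sensitive-group label for each sample; the function name fair_dedup, the k-means clustering step, and the keep-per-group pruning rule are assumptions made for illustration, not Slyman's published implementation.

import numpy as np
from sklearn.cluster import KMeans

def fair_dedup(embeddings, groups, n_clusters=100, keep_per_group=1):
    # Cluster samples by embedding similarity, so each cluster
    # gathers near-duplicate training examples.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    keep = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        if idx.size == 0:
            continue
        centroid = embeddings[idx].mean(axis=0)
        # Rank cluster members by distance to the centroid,
        # most prototypical first.
        order = idx[np.argsort(np.linalg.norm(embeddings[idx] - centroid, axis=1))]
        # Instead of keeping one global prototype per cluster,
        # keep up to keep_per_group representatives per sensitive group.
        seen = {}
        for i in order:
            g = groups[i]
            if seen.get(g, 0) < keep_per_group:
                keep.append(int(i))
                seen[g] = seen.get(g, 0) + 1
    return np.array(sorted(keep))

# Example: deduplicate 1,000 random 64-d embeddings with two groups.
kept = fair_dedup(np.random.rand(1000, 64), np.random.choice(["a", "b"], 1000))

The design point is the per-group quota: a plain deduplicator keeps only the single most prototypical sample per cluster, which tends to discard examples from under-represented groups, whereas keeping one representative per group preserves them at little extra training cost.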
Lower emissions and strengthened digital sovereignty: the plan for data centres in space
Read the article
Automated intervention reports for augmented technicians thanks to generative AI
Read the article
Téléphone Grave Danger (serious danger telephone): the technical foundations behind this essential device
Read the article
Orange’s radio propagation model: An essential element in mobile network development
Read the article