Data

FairDeDup limits social biases in AI models

• Large artificial intelligence models trained on massive datasets often produce biased results, raising important ethical questions.
• Eric Slyman, a PhD student working on a project in partnership with Adobe, has developed a method to preserve fair representation in model training data.
• The new algorithm, christened FairDeDup, can be applied to a wide range of models and reduces AI training costs without sacrificing fairness; a simplified sketch of the idea appears below.
Read the article
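FairDeDup builds on semantic deduplication: near-duplicate training samples are grouped by embedding similarity, and only one representative per group is kept, which cuts training cost. The sketch below illustrates one way the fairness-aware step can work. It is a minimal illustration, not Slyman's published implementation: the clustering setup, the skew score, and the sensitive_prototypes input are all assumptions made for the example.

```python
# Minimal sketch of fairness-aware semantic deduplication.
# Idea: cluster sample embeddings, keep one representative per cluster,
# and choose that representative so pruning duplicates does not also
# prune demographic diversity.
# NOTE: illustrative only -- the cluster count, the metric, and the
# "skew" fairness score are assumptions, not the FairDeDup paper's method.
import numpy as np
from sklearn.cluster import KMeans

def fairness_aware_dedup(embeddings: np.ndarray,
                         sensitive_prototypes: np.ndarray,
                         n_clusters: int = 100) -> np.ndarray:
    """Return indices of the samples to keep after deduplication.

    embeddings: (N, d) unit-normalised sample embeddings.
    sensitive_prototypes: (K, d) unit-normalised embeddings of
        sensitive concepts (e.g. text prompts describing demographic
        groups); used to score representatives, not to filter samples.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)
    keep = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        if members.size == 0:
            continue
        # Cosine similarity of each cluster member to every sensitive concept.
        sims = embeddings[members] @ sensitive_prototypes.T  # shape (m, K)
        # Prefer the member whose concept similarities are most uniform,
        # i.e. least skewed toward any single group.
        skew = sims.max(axis=1) - sims.mean(axis=1)
        keep.append(members[np.argmin(skew)])
    return np.array(sorted(keep))
```

In this toy version, each cluster still collapses to a single sample, so the cost savings of deduplication are preserved; the only change is which duplicate survives, which is the trade-off the teaser describes.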

How to avoid replicating bias and human error in LLMs

Read the article
Conceptual image of the Thales Alenia Space data centre

Lower emissions and reinforced digital sovereignty: the plan for data centres in space

Read the article

Automated intervention reports for augmented technicians, powered by generative AI

Read the article

Téléphone Grave Danger (serious danger telephone): the technical foundations behind this essential device

Read the article

Orange’s radio propagation model: an essential element in mobile network development

Read the article

P-C. Langlais (PLEAIS): “Our language models are trained on open corpora”

Read the article