FairDeDup limits social biases in AI models
• Large artificial intelligence models trained on massive datasets often produce biased results, raising important ethical questions.
• Eric Slyman, a PhD student working on a project in partnership with Adobe, has developed a method to preserve fair representation in model training data.
• The new algorithm, christened FairDeDup, can be applied to a wide range of models and reduces AI training costs without sacrificing fairness.
Read the article