FairDeDup limits social biases in AI models
• Large artificial intelligence models trained on massive datasets often produce biased results, raising important ethical questions.
• Eric Slyman, a PhD student working on a project in partnership with Adobe, has developed a method to preserve fair representation in model training data.
• The new algorithm, christened FairDeDup, can be applied to a wide range of models and reduces AI training costs without sacrificing fairness.
Read the article


A mathematical model to help AIs anticipate human emotions
Read the article
David Caswell: “All journalists should be trained to use generative AI”
Read the article
Health: Jaide aims to reduce diagnostic errors with generative AI
Read the article
AI researchers aim to boost collective organisation among workers for Uber and other platforms
Read the article
