Machine learning

FairDeDup limits social biases in AI models

• Large artificial intelligence models trained on massive datasets often produce biased results, raising important ethical questions.
• Eric Slyman, a PhD student working on a project in partnership with Adobe, has developed a method to preserve fair representation in model training data.
• The new algorithm, christened FairDeDup, can be applied to a wide range of models and reduces AI training costs without sacrificing fairness (a simplified sketch of the idea follows below).
Read the article
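
For readers curious how fairness-aware deduplication can work in practice, the sketch below illustrates the general idea: cluster sample embeddings, then prune near-duplicates within each cluster while favouring samples from under-represented groups, so that trimming the dataset does not also trim its diversity. This is a minimal illustration under stated assumptions, not Slyman's published implementation: the k-means clustering step, the similarity threshold, and the sensitive_groups labels are all hypothetical choices made for this example.

# A minimal sketch of fairness-aware deduplication in the spirit of
# FairDeDup. Illustrative only: the k-means clustering, the 0.95
# similarity threshold, and the sensitive_groups labels are assumptions
# made for this example, not details of the published algorithm.
import numpy as np
from sklearn.cluster import KMeans

def fair_dedup(embeddings, sensitive_groups, n_clusters=10, threshold=0.95):
    """Prune near-duplicate samples while keeping demographic coverage broad.

    embeddings       : (n, d) array of L2-normalised sample embeddings
    sensitive_groups : length-n array of group labels (hypothetical annotations)
    Returns the sorted indices of the samples to keep.
    """
    embeddings = np.asarray(embeddings)
    sensitive_groups = np.asarray(sensitive_groups)
    labels = KMeans(n_clusters=n_clusters, n_init="auto").fit_predict(embeddings)
    keep = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        kept = []
        # Visit under-represented groups first, so that when two samples
        # are near-duplicates, the one that broadens coverage survives.
        groups, counts = np.unique(sensitive_groups[idx], return_counts=True)
        for g in groups[np.argsort(counts)]:
            for i in idx[sensitive_groups[idx] == g]:
                # Cosine similarity via dot product (embeddings are normalised);
                # drop i only if it near-duplicates something already kept.
                if all(embeddings[i] @ embeddings[j] < threshold for j in kept):
                    kept.append(i)
        keep.extend(kept)
    return np.array(sorted(keep))

In this toy version, duplicates are resolved in favour of whichever copy comes from the rarer group within its cluster; the real method operates on large image-text datasets and defines duplicates over learned embeddings.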

A mathematical model to help AIs anticipate human emotions

Read the article

David Caswell: “All journalists should be trained to use generative AI”

Read the article

Health: Jaide aims to reduce diagnostic errors with generative AI

Read the article

Autonomous vehicles may soon benefit from 100 times faster neuromorphic cameras

Read the article

AI researchers aim to boost collective organisation among workers for Uber and other platforms

Read the article

AI in video game design

Watch the video