Deep learning

Algorithmic biases: neural networks are also influenced by hardware

• Researchers have shown that the fairness of AI models depends on the hardware platform used to deploy them: some hardware configurations introduce demographic biases, which is particularly problematic in healthcare applications.
• Model compression is proposed as a key technique for deploying neural networks on devices with limited hardware resources, such as AI-capable PCs or edge computing devices.
• Co-design frameworks for hardware and software architectures will play an essential role in jointly optimising the fairness and performance of AI models. Integrating non-volatile memory (NVM) devices and reducing noise in neuromorphic systems are also promising avenues for future development.
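To make the model-compression point concrete, here is a minimal, illustrative sketch of one common compression technique, post-training 8-bit weight quantization, which cuts storage from 32 to 8 bits per weight. The function names and numbers are this sketch's own assumptions, not taken from the article:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Affine (asymmetric) 8-bit quantization of a float weight tensor."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # avoid div-by-zero for constant tensors
    zero_point = round(-w_min / scale)
    q = np.clip(np.round(weights / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover approximate float weights from the quantized representation."""
    return (q.astype(np.float32) - zero_point) * scale

# Toy example: a random 4x4 weight matrix standing in for a network layer.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
max_err = float(np.abs(w - w_hat).max())  # bounded by roughly one quantization step
```

Deployed compression pipelines (e.g. in PyTorch or ONNX Runtime) use the same scale/zero-point idea per tensor or per channel; the research summarised above notes that such hardware-driven choices can affect not only accuracy but also fairness across demographic groups.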
Read the article

Multimodal learning / multimodal AI

Watch the video

FairDeDup limits social biases in AI models

Read the article

A mathematical model to help AIs anticipate human emotions

Read the article

David Caswell: “All journalists should be trained to use generative AI”

Read the article

Health: Jaide aims to reduce diagnostic errors with generative AI

Read the article

AI researchers aim to boost collective organisation among workers on Uber and other platforms

Read the article