Cybersecurity: AI attacks and hijacking

● AI systems, including generative AI, can easily be hijacked into producing malicious code, even when they are designed to refuse such requests.
● Other attacks, known as "model evasion attacks," exploit subtly modified inputs to cause unexpected behaviours in AI systems, such as making a self-driving car misinterpret traffic signs (see the first sketch after this list).
● Poisoned training data can introduce backdoors into AI models, producing unintended behaviours; this is especially concerning because engineers often have little control over their data sources (see the second sketch after this list).
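
To make the evasion idea concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one classic evasion technique; it illustrates the general attack class rather than any specific method discussed above. It assumes a PyTorch image classifier with inputs normalised to [0, 1]; the names model, x, y and the epsilon value are illustrative placeholders.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Craft an evasion example: nudge each pixel of x by +/- epsilon
    in the direction that most increases the classifier's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # The perturbation is tiny and typically imperceptible to humans,
    # yet it can flip the prediction, e.g. "stop sign" -> "speed limit".
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

On an undefended model, an epsilon of a few hundredths is usually invisible to a human observer but enough to change the predicted class.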
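
A data-poisoning backdoor can likewise be sketched in a few lines. The example below assumes image data stored as a NumPy array of shape (N, H, W) with values in [0, 1]; the trigger pattern, poisoning rate and function name are hypothetical choices made for illustration.

```python
import numpy as np

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   target_label: int, rate: float = 0.05,
                   seed: int = 0) -> tuple[np.ndarray, np.ndarray]:
    """Plant a backdoor: stamp a small trigger patch on a random
    fraction of the training images and relabel them as target_label."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -4:, -4:] = 1.0   # 4x4 white square in the bottom-right corner
    labels[idx] = target_label    # trigger inputs now map to the attacker's class
    return images, labels
```

A model trained on such a set behaves normally on clean inputs but predicts the attacker's class whenever the trigger appears, which is why uncontrolled data sources are such a liability.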
