Data

Multimodal learning / multimodal AI

• Multimodal AI - or multimodal learning - mimics the human brain’s ability to simultaneously process textual, visual, and audio information, enabling a more nuanced understanding of reality.
• Transitioning from a unimodal model (like those specialized in text, images, or sounds) to a multimodal model presents technical challenges, particularly in creating shared representations for different types of data.
• Multimodal AI offers advantages such as capturing more comprehensive knowledge of the environment and enabling new applications, such as combining data from several modalities to tackle tasks that no single modality can handle alone.
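The shared-representation challenge mentioned above can be sketched minimally: project each modality's features into a common embedding space and compare them there. The snippet below is illustrative only; the dimensions, random stand-in features, and projection matrices are assumptions, not any specific production system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature extractors: in a real system these would be a
# text encoder and an image encoder; random vectors stand in here.
TEXT_DIM, IMAGE_DIM, SHARED_DIM = 16, 32, 8

# Learned projection matrices map each modality into the shared space.
W_text = rng.normal(size=(TEXT_DIM, SHARED_DIM))
W_image = rng.normal(size=(IMAGE_DIM, SHARED_DIM))

def to_shared(features, W):
    """Project modality-specific features into the shared space, L2-normalized."""
    z = features @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

text_features = rng.normal(size=(3, TEXT_DIM))    # e.g. 3 captions
image_features = rng.normal(size=(3, IMAGE_DIM))  # e.g. 3 images

z_text = to_shared(text_features, W_text)
z_image = to_shared(image_features, W_image)

# Cosine similarity between every caption and every image. A contrastive
# training objective (CLIP-style) would push matching pairs (the diagonal)
# toward high similarity and mismatched pairs toward low similarity.
similarity = z_text @ z_image.T
print(similarity.shape)  # (3, 3)
```

Once text and images live in the same space, cross-modal tasks such as retrieving an image from a caption reduce to a nearest-neighbour search over these shared embeddings.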
Watch the video

FairDeDup limits social biases in AI models

Read the article

How to avoid replicating bias and human error in LLMs

Read the article

Lower emissions and reinforced digital sovereignty: the plan for datacentres in space

Read the article

Generative AI automates intervention reports for augmented technicians

Read the article

Orange’s radio propagation model: An essential element in mobile network development

Read the article

P-C. Langlais (PLEAIS): “Our language models are trained on open corpora”

Read the article