Machine learning

Multimodal learning / multimodal AI

• Multimodal AI, or multimodal learning, mimics the human brain’s ability to simultaneously process textual, visual, and audio information, enabling a more nuanced understanding of reality.
• Transitioning from a unimodal model (like those specialized in text, images, or sounds) to a multimodal model presents technical challenges, particularly in creating shared representations for different types of data.
• Multimodal AI offers advantages such as capturing more comprehensive knowledge of the environment and enabling new applications, like merging data from various modalities for complex tasks.
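The shared-representation challenge mentioned above can be illustrated with a minimal sketch: two modality-specific feature vectors of different sizes are projected into a common embedding space where they can be compared directly. The weights below are random stand-ins for trained parameters (in practice they would be learned, e.g. with a contrastive objective); the dimensions and names are illustrative assumptions, not any particular model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained encoders produce modality-specific features
# of different sizes: 512-d for text, 768-d for images.
text_feat = rng.normal(size=512)
image_feat = rng.normal(size=768)

# Linear projections map each modality into a shared 256-d space.
# These random weights stand in for trained parameters.
W_text = rng.normal(size=(256, 512)) / np.sqrt(512)
W_image = rng.normal(size=(256, 768)) / np.sqrt(768)

def embed(features, W):
    """Project features into the shared space and L2-normalize."""
    z = W @ features
    return z / np.linalg.norm(z)

z_text = embed(text_feat, W_text)
z_image = embed(image_feat, W_image)

# Cosine similarity in the shared space; training would push matching
# text/image pairs toward similarity 1 and mismatched pairs apart.
similarity = float(z_text @ z_image)
print(f"cosine similarity: {similarity:.3f}")
```

Once both modalities live in the same normalized space, a single dot product compares them, which is what enables cross-modal tasks such as retrieving images from a text query.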
Soft Robotics Lab – ETH Zürich (lab head: Prof. Robert Katzschmann, not pictured). From left to right: Jose Greminger (Master's student), Pablo Paniagua (Master's student), Jakob Schreiner (visiting PhD student), Aiste Balciunaite (PhD student), Miriam Filippi (established researcher), and Asia Badolato (PhD student).

When will we see living robots? The challenges facing biohybrid robotics


FairDeDup limits social biases in AI models


A mathematical model to help AIs anticipate human emotions


David Caswell: “All journalists should be trained to use generative AI”


Health: Jaide aims to reduce diagnostic errors with generative AI


Autonomous vehicles may soon benefit from 100 times faster neuromorphic cameras
