Deep learning

AI: the challenges facing developers of automated hate speech moderation systems

● AI tools can be used to moderate large quantities of social media posts, but adapting them to vet material effectively across a variety of social and cultural contexts is no easy task.
● Researchers from the US and the UK have recently presented a new model for hate speech detection that strikes an optimal balance between accuracy and fairness.
● A multidisciplinary team at Orange is also investigating ways to boost the efficiency and fairness of these technologies by combining AI-generated hate speech with social science data.

Efficient, lightweight computer vision models for innovative applications

The drive to simulate human behaviour in AI agents

Omnimodal AI: a game-changer for customer relations

AI therapy: marketing hype and the hidden risks for users

A lexicon of artificial intelligence: understanding different AIs and their uses
