Digital equality

AI: challenges facing developers of automated hate-speech moderation systems

● AI tools can be used to moderate large quantities of social media posts, but adapting them to effectively vet material from a variety of social and cultural contexts is no easy task.
● Researchers from the US and the UK recently presented a new hate-speech detection model that strikes an optimal balance between accuracy and fairness (see the sketch after this list).
● A multidisciplinary team at Orange is also investigating ways to boost the efficiency and fairness of these technologies by combining AI-generated hate speech with social science data.
Read the article
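
To make the accuracy/fairness trade-off mentioned above concrete, here is a minimal sketch in plain Python. It is not the researchers' model: the data, group names, and labels are entirely hypothetical. It computes overall accuracy alongside a simple group-fairness check used in hate-speech detection research, the gap in false-positive rates between groups, i.e. how often benign posts from each group are wrongly flagged.

# Minimal sketch of an accuracy vs. group-fairness check for a
# moderation model. All data below is illustrative, not real.
from collections import defaultdict

# (group, true_label, predicted_label) -- hypothetical decisions;
# label 1 = flagged as hate speech, 0 = not flagged
decisions = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

# Overall accuracy: share of predictions matching the true label.
accuracy = sum(1 for _, y, p in decisions if y == p) / len(decisions)

# False-positive rate per group: share of benign posts wrongly flagged.
false_positives = defaultdict(int)
negatives = defaultdict(int)
for group, y, p in decisions:
    if y == 0:
        negatives[group] += 1
        if p == 1:
            false_positives[group] += 1

fpr = {g: false_positives[g] / negatives[g] for g in negatives}
fairness_gap = max(fpr.values()) - min(fpr.values())

print(f"accuracy: {accuracy:.2f}")
print(f"false-positive rate per group: {fpr}")
print(f"fairness gap (FPR difference): {fairness_gap:.2f}")

A model can score well on overall accuracy while flagging benign posts from one group far more often than another, which is why fairness metrics like this gap are reported alongside accuracy.
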
[Image: a man with a hearing aid in his right ear using a smartphone]

An optimised hearing-aid experience thanks to smartphones

Read the article
[Image: two people seated at a computer, discussing a project]

How to avoid replicating bias and human error in LLMs

Read the article

Understanding the general public’s perception of online risks: beyond official definitions

Read the article

AI and inclusion: CTOs urgently need to take up the challenge

Read the article

French Sign Language Sets Sail with Signs@Work

Read the article