Society

AI: challenges faced by developers of automated systems to moderate hate speech

● AI tools can be used to moderate large quantities of social media posts, but adapting them to effectively vet material from a variety of social and cultural contexts is no easy task.
● Researchers from the US and the UK have recently presented a new model for detecting hate speech that strikes an optimal balance between accuracy and fairness.
● A multidisciplinary team at Orange is also investigating ways to boost the efficiency and fairness of these technologies by combining AI-generated hate speech with social science data.

Read the article

The perilous charms of relational AIs

Read the article

AI therapy: marketing hype and the hidden risks for users

Read the article

Let’s Talk Tech innovation news: AI, cybersecurity, networks, digital transformation

Read the article

Boosting women’s involvement in solar energy in Senegal: a key factor for society

Read the article

Say goodbye to disposables; hello to circular electronics

Read the article

Virtual reality for addiction treatment: the importance of social plausibility in simulated situations

Read the article